Mar 13 12:34:54.111811 master-0 systemd[1]: Starting Kubernetes Kubelet...
Mar 13 12:34:54.654101 master-0 kubenswrapper[4143]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Mar 13 12:34:54.654101 master-0 kubenswrapper[4143]: Flag --minimum-container-ttl-duration has been deprecated, Use --eviction-hard or --eviction-soft instead. Will be removed in a future version.
Mar 13 12:34:54.654101 master-0 kubenswrapper[4143]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Mar 13 12:34:54.654101 master-0 kubenswrapper[4143]: Flag --register-with-taints has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Mar 13 12:34:54.654101 master-0 kubenswrapper[4143]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Mar 13 12:34:54.654101 master-0 kubenswrapper[4143]: Flag --system-reserved has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Mar 13 12:34:54.657506 master-0 kubenswrapper[4143]: I0313 12:34:54.654880 4143 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Mar 13 12:34:54.658277 master-0 kubenswrapper[4143]: W0313 12:34:54.658228 4143 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Mar 13 12:34:54.658277 master-0 kubenswrapper[4143]: W0313 12:34:54.658250 4143 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Mar 13 12:34:54.658277 master-0 kubenswrapper[4143]: W0313 12:34:54.658258 4143 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Mar 13 12:34:54.658277 master-0 kubenswrapper[4143]: W0313 12:34:54.658263 4143 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Mar 13 12:34:54.658277 master-0 kubenswrapper[4143]: W0313 12:34:54.658270 4143 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Mar 13 12:34:54.658277 master-0 kubenswrapper[4143]: W0313 12:34:54.658277 4143 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Mar 13 12:34:54.658277 master-0 kubenswrapper[4143]: W0313 12:34:54.658284 4143 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Mar 13 12:34:54.658277 master-0 kubenswrapper[4143]: W0313 12:34:54.658290 4143 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Mar 13 12:34:54.658277 master-0 kubenswrapper[4143]: W0313 12:34:54.658296 4143 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Mar 13 12:34:54.658277 master-0 kubenswrapper[4143]: W0313 12:34:54.658302 4143 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Mar 13 12:34:54.658277 master-0 kubenswrapper[4143]: W0313 12:34:54.658309 4143 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Mar 13 12:34:54.658277 master-0 kubenswrapper[4143]: W0313 12:34:54.658315 4143 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Mar 13 12:34:54.658277 master-0 kubenswrapper[4143]: W0313 12:34:54.658321 4143 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Mar 13 12:34:54.658956 master-0 kubenswrapper[4143]: W0313 12:34:54.658328 4143 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Mar 13 12:34:54.658956 master-0 kubenswrapper[4143]: W0313 12:34:54.658334 4143 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Mar 13 12:34:54.658956 master-0 kubenswrapper[4143]: W0313 12:34:54.658340 4143 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Mar 13 12:34:54.658956 master-0 kubenswrapper[4143]: W0313 12:34:54.658345 4143 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Mar 13 12:34:54.658956 master-0 kubenswrapper[4143]: W0313 12:34:54.658350 4143 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Mar 13 12:34:54.658956 master-0 kubenswrapper[4143]: W0313 12:34:54.658354 4143 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Mar 13 12:34:54.658956 master-0 kubenswrapper[4143]: W0313 12:34:54.658359 4143 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Mar 13 12:34:54.658956 master-0 kubenswrapper[4143]: W0313 12:34:54.658365 4143 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Mar 13 12:34:54.658956 master-0 kubenswrapper[4143]: W0313 12:34:54.658369 4143 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Mar 13 12:34:54.658956 master-0 kubenswrapper[4143]: W0313 12:34:54.658375 4143 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Mar 13 12:34:54.658956 master-0 kubenswrapper[4143]: W0313 12:34:54.658380 4143 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Mar 13 12:34:54.658956 master-0 kubenswrapper[4143]: W0313 12:34:54.658385 4143 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Mar 13 12:34:54.658956 master-0 kubenswrapper[4143]: W0313 12:34:54.658389 4143 feature_gate.go:330] unrecognized feature gate: SignatureStores
Mar 13 12:34:54.658956 master-0 kubenswrapper[4143]: W0313 12:34:54.658394 4143 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Mar 13 12:34:54.658956 master-0 kubenswrapper[4143]: W0313 12:34:54.658400 4143 feature_gate.go:330] unrecognized feature gate: Example
Mar 13 12:34:54.658956 master-0 kubenswrapper[4143]: W0313 12:34:54.658405 4143 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Mar 13 12:34:54.658956 master-0 kubenswrapper[4143]: W0313 12:34:54.658410 4143 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Mar 13 12:34:54.658956 master-0 kubenswrapper[4143]: W0313 12:34:54.658415 4143 feature_gate.go:330] unrecognized feature gate: NewOLM
Mar 13 12:34:54.658956 master-0 kubenswrapper[4143]: W0313 12:34:54.658421 4143 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Mar 13 12:34:54.658956 master-0 kubenswrapper[4143]: W0313 12:34:54.658427 4143 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Mar 13 12:34:54.659962 master-0 kubenswrapper[4143]: W0313 12:34:54.658434 4143 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Mar 13 12:34:54.659962 master-0 kubenswrapper[4143]: W0313 12:34:54.658439 4143 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Mar 13 12:34:54.659962 master-0 kubenswrapper[4143]: W0313 12:34:54.658445 4143 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Mar 13 12:34:54.659962 master-0 kubenswrapper[4143]: W0313 12:34:54.658451 4143 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Mar 13 12:34:54.659962 master-0 kubenswrapper[4143]: W0313 12:34:54.658458 4143 feature_gate.go:330] unrecognized feature gate: PinnedImages
Mar 13 12:34:54.659962 master-0 kubenswrapper[4143]: W0313 12:34:54.658463 4143 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Mar 13 12:34:54.659962 master-0 kubenswrapper[4143]: W0313 12:34:54.658468 4143 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Mar 13 12:34:54.659962 master-0 kubenswrapper[4143]: W0313 12:34:54.658472 4143 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Mar 13 12:34:54.659962 master-0 kubenswrapper[4143]: W0313 12:34:54.658477 4143 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Mar 13 12:34:54.659962 master-0 kubenswrapper[4143]: W0313 12:34:54.658482 4143 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Mar 13 12:34:54.659962 master-0 kubenswrapper[4143]: W0313 12:34:54.658487 4143 feature_gate.go:330] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings
Mar 13 12:34:54.659962 master-0 kubenswrapper[4143]: W0313 12:34:54.658492 4143 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Mar 13 12:34:54.659962 master-0 kubenswrapper[4143]: W0313 12:34:54.658497 4143 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Mar 13 12:34:54.659962 master-0 kubenswrapper[4143]: W0313 12:34:54.658502 4143 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Mar 13 12:34:54.659962 master-0 kubenswrapper[4143]: W0313 12:34:54.658506 4143 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Mar 13 12:34:54.659962 master-0 kubenswrapper[4143]: W0313 12:34:54.658510 4143 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Mar 13 12:34:54.659962 master-0 kubenswrapper[4143]: W0313 12:34:54.658515 4143 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Mar 13 12:34:54.659962 master-0 kubenswrapper[4143]: W0313 12:34:54.658519 4143 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Mar 13 12:34:54.659962 master-0 kubenswrapper[4143]: W0313 12:34:54.658524 4143 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Mar 13 12:34:54.660924 master-0 kubenswrapper[4143]: W0313 12:34:54.658530 4143 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Mar 13 12:34:54.660924 master-0 kubenswrapper[4143]: W0313 12:34:54.658536 4143 feature_gate.go:330] unrecognized feature gate: OVNObservability
Mar 13 12:34:54.660924 master-0 kubenswrapper[4143]: W0313 12:34:54.658541 4143 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Mar 13 12:34:54.660924 master-0 kubenswrapper[4143]: W0313 12:34:54.658546 4143 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Mar 13 12:34:54.660924 master-0 kubenswrapper[4143]: W0313 12:34:54.658550 4143 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Mar 13 12:34:54.660924 master-0 kubenswrapper[4143]: W0313 12:34:54.658555 4143 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Mar 13 12:34:54.660924 master-0 kubenswrapper[4143]: W0313 12:34:54.658559 4143 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Mar 13 12:34:54.660924 master-0 kubenswrapper[4143]: W0313 12:34:54.658564 4143 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Mar 13 12:34:54.660924 master-0 kubenswrapper[4143]: W0313 12:34:54.658568 4143 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Mar 13 12:34:54.660924 master-0 kubenswrapper[4143]: W0313 12:34:54.658572 4143 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Mar 13 12:34:54.660924 master-0 kubenswrapper[4143]: W0313 12:34:54.658578 4143 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Mar 13 12:34:54.660924 master-0 kubenswrapper[4143]: W0313 12:34:54.658583 4143 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Mar 13 12:34:54.660924 master-0 kubenswrapper[4143]: W0313 12:34:54.658588 4143 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Mar 13 12:34:54.660924 master-0 kubenswrapper[4143]: W0313 12:34:54.658592 4143 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Mar 13 12:34:54.660924 master-0 kubenswrapper[4143]: W0313 12:34:54.658596 4143 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Mar 13 12:34:54.660924 master-0 kubenswrapper[4143]: W0313 12:34:54.658601 4143 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Mar 13 12:34:54.660924 master-0 kubenswrapper[4143]: W0313 12:34:54.658605 4143 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Mar 13 12:34:54.660924 master-0 kubenswrapper[4143]: W0313 12:34:54.658611 4143 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Mar 13 12:34:54.660924 master-0 kubenswrapper[4143]: W0313 12:34:54.658616 4143 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Mar 13 12:34:54.660924 master-0 kubenswrapper[4143]: W0313 12:34:54.658620 4143 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Mar 13 12:34:54.661994 master-0 kubenswrapper[4143]: I0313 12:34:54.659505 4143 flags.go:64] FLAG: --address="0.0.0.0"
Mar 13 12:34:54.661994 master-0 kubenswrapper[4143]: I0313 12:34:54.659522 4143 flags.go:64] FLAG: --allowed-unsafe-sysctls="[]"
Mar 13 12:34:54.661994 master-0 kubenswrapper[4143]: I0313 12:34:54.659532 4143 flags.go:64] FLAG: --anonymous-auth="true"
Mar 13 12:34:54.661994 master-0 kubenswrapper[4143]: I0313 12:34:54.659538 4143 flags.go:64] FLAG: --application-metrics-count-limit="100"
Mar 13 12:34:54.661994 master-0 kubenswrapper[4143]: I0313 12:34:54.659544 4143 flags.go:64] FLAG: --authentication-token-webhook="false"
Mar 13 12:34:54.661994 master-0 kubenswrapper[4143]: I0313 12:34:54.659549 4143 flags.go:64] FLAG: --authentication-token-webhook-cache-ttl="2m0s"
Mar 13 12:34:54.661994 master-0 kubenswrapper[4143]: I0313 12:34:54.659555 4143 flags.go:64] FLAG: --authorization-mode="AlwaysAllow"
Mar 13 12:34:54.661994 master-0 kubenswrapper[4143]: I0313 12:34:54.659561 4143 flags.go:64] FLAG: --authorization-webhook-cache-authorized-ttl="5m0s"
Mar 13 12:34:54.661994 master-0 kubenswrapper[4143]: I0313 12:34:54.659566 4143 flags.go:64] FLAG: --authorization-webhook-cache-unauthorized-ttl="30s"
Mar 13 12:34:54.661994 master-0 kubenswrapper[4143]: I0313 12:34:54.659571 4143 flags.go:64] FLAG: --boot-id-file="/proc/sys/kernel/random/boot_id"
Mar 13 12:34:54.661994 master-0 kubenswrapper[4143]: I0313 12:34:54.659575 4143 flags.go:64] FLAG: --bootstrap-kubeconfig="/etc/kubernetes/kubeconfig"
Mar 13 12:34:54.661994 master-0 kubenswrapper[4143]: I0313 12:34:54.659580 4143 flags.go:64] FLAG: --cert-dir="/var/lib/kubelet/pki"
Mar 13 12:34:54.661994 master-0 kubenswrapper[4143]: I0313 12:34:54.659584 4143 flags.go:64] FLAG: --cgroup-driver="cgroupfs"
Mar 13 12:34:54.661994 master-0 kubenswrapper[4143]: I0313 12:34:54.659588 4143 flags.go:64] FLAG: --cgroup-root=""
Mar 13 12:34:54.661994 master-0 kubenswrapper[4143]: I0313 12:34:54.659592 4143 flags.go:64] FLAG: --cgroups-per-qos="true"
Mar 13 12:34:54.661994 master-0 kubenswrapper[4143]: I0313 12:34:54.659596 4143 flags.go:64] FLAG: --client-ca-file=""
Mar 13 12:34:54.661994 master-0 kubenswrapper[4143]: I0313 12:34:54.659601 4143 flags.go:64] FLAG: --cloud-config=""
Mar 13 12:34:54.661994 master-0 kubenswrapper[4143]: I0313 12:34:54.659605 4143 flags.go:64] FLAG: --cloud-provider=""
Mar 13 12:34:54.661994 master-0 kubenswrapper[4143]: I0313 12:34:54.659609 4143 flags.go:64] FLAG: --cluster-dns="[]"
Mar 13 12:34:54.661994 master-0 kubenswrapper[4143]: I0313 12:34:54.659613 4143 flags.go:64] FLAG: --cluster-domain=""
Mar 13 12:34:54.661994 master-0 kubenswrapper[4143]: I0313 12:34:54.659618 4143 flags.go:64] FLAG: --config="/etc/kubernetes/kubelet.conf"
Mar 13 12:34:54.661994 master-0 kubenswrapper[4143]: I0313 12:34:54.659622 4143 flags.go:64] FLAG: --config-dir=""
Mar 13 12:34:54.661994 master-0 kubenswrapper[4143]: I0313 12:34:54.659626 4143 flags.go:64] FLAG: --container-hints="/etc/cadvisor/container_hints.json"
Mar 13 12:34:54.661994 master-0 kubenswrapper[4143]: I0313 12:34:54.659630 4143 flags.go:64] FLAG: --container-log-max-files="5"
Mar 13 12:34:54.661994 master-0 kubenswrapper[4143]: I0313 12:34:54.659636 4143 flags.go:64] FLAG: --container-log-max-size="10Mi"
Mar 13 12:34:54.663394 master-0 kubenswrapper[4143]: I0313 12:34:54.659641 4143 flags.go:64] FLAG: --container-runtime-endpoint="/var/run/crio/crio.sock"
Mar 13 12:34:54.663394 master-0 kubenswrapper[4143]: I0313 12:34:54.659646 4143 flags.go:64] FLAG: --containerd="/run/containerd/containerd.sock"
Mar 13 12:34:54.663394 master-0 kubenswrapper[4143]: I0313 12:34:54.659651 4143 flags.go:64] FLAG: --containerd-namespace="k8s.io"
Mar 13 12:34:54.663394 master-0 kubenswrapper[4143]: I0313 12:34:54.659655 4143 flags.go:64] FLAG: --contention-profiling="false"
Mar 13 12:34:54.663394 master-0 kubenswrapper[4143]: I0313 12:34:54.659660 4143 flags.go:64] FLAG: --cpu-cfs-quota="true"
Mar 13 12:34:54.663394 master-0 kubenswrapper[4143]: I0313 12:34:54.659664 4143 flags.go:64] FLAG: --cpu-cfs-quota-period="100ms"
Mar 13 12:34:54.663394 master-0 kubenswrapper[4143]: I0313 12:34:54.659668 4143 flags.go:64] FLAG: --cpu-manager-policy="none"
Mar 13 12:34:54.663394 master-0 kubenswrapper[4143]: I0313 12:34:54.659674 4143 flags.go:64] FLAG: --cpu-manager-policy-options=""
Mar 13 12:34:54.663394 master-0 kubenswrapper[4143]: I0313 12:34:54.659679 4143 flags.go:64] FLAG: --cpu-manager-reconcile-period="10s"
Mar 13 12:34:54.663394 master-0 kubenswrapper[4143]: I0313 12:34:54.659684 4143 flags.go:64] FLAG: --enable-controller-attach-detach="true"
Mar 13 12:34:54.663394 master-0 kubenswrapper[4143]: I0313 12:34:54.659688 4143 flags.go:64] FLAG: --enable-debugging-handlers="true"
Mar 13 12:34:54.663394 master-0 kubenswrapper[4143]: I0313 12:34:54.659693 4143 flags.go:64] FLAG: --enable-load-reader="false"
Mar 13 12:34:54.663394 master-0 kubenswrapper[4143]: I0313 12:34:54.659697 4143 flags.go:64] FLAG: --enable-server="true"
Mar 13 12:34:54.663394 master-0 kubenswrapper[4143]: I0313 12:34:54.659701 4143 flags.go:64] FLAG: --enforce-node-allocatable="[pods]"
Mar 13 12:34:54.663394 master-0 kubenswrapper[4143]: I0313 12:34:54.659707 4143 flags.go:64] FLAG: --event-burst="100"
Mar 13 12:34:54.663394 master-0 kubenswrapper[4143]: I0313 12:34:54.659711 4143 flags.go:64] FLAG: --event-qps="50"
Mar 13 12:34:54.663394 master-0 kubenswrapper[4143]: I0313 12:34:54.659715 4143 flags.go:64] FLAG: --event-storage-age-limit="default=0"
Mar 13 12:34:54.663394 master-0 kubenswrapper[4143]: I0313 12:34:54.659720 4143 flags.go:64] FLAG: --event-storage-event-limit="default=0"
Mar 13 12:34:54.663394 master-0 kubenswrapper[4143]: I0313 12:34:54.659724 4143 flags.go:64] FLAG: --eviction-hard=""
Mar 13 12:34:54.663394 master-0 kubenswrapper[4143]: I0313 12:34:54.659729 4143 flags.go:64] FLAG: --eviction-max-pod-grace-period="0"
Mar 13 12:34:54.663394 master-0 kubenswrapper[4143]: I0313 12:34:54.659734 4143 flags.go:64] FLAG: --eviction-minimum-reclaim=""
Mar 13 12:34:54.663394 master-0 kubenswrapper[4143]: I0313 12:34:54.659738 4143 flags.go:64] FLAG: --eviction-pressure-transition-period="5m0s"
Mar 13 12:34:54.663394 master-0 kubenswrapper[4143]: I0313 12:34:54.659742 4143 flags.go:64] FLAG: --eviction-soft=""
Mar 13 12:34:54.663394 master-0 kubenswrapper[4143]: I0313 12:34:54.659746 4143 flags.go:64] FLAG: --eviction-soft-grace-period=""
Mar 13 12:34:54.663394 master-0 kubenswrapper[4143]: I0313 12:34:54.659751 4143 flags.go:64] FLAG: --exit-on-lock-contention="false"
Mar 13 12:34:54.664991 master-0 kubenswrapper[4143]: I0313 12:34:54.659755 4143 flags.go:64] FLAG: --experimental-allocatable-ignore-eviction="false"
Mar 13 12:34:54.664991 master-0 kubenswrapper[4143]: I0313 12:34:54.659760 4143 flags.go:64] FLAG: --experimental-mounter-path=""
Mar 13 12:34:54.664991 master-0 kubenswrapper[4143]: I0313 12:34:54.659764 4143 flags.go:64] FLAG: --fail-cgroupv1="false"
Mar 13 12:34:54.664991 master-0 kubenswrapper[4143]: I0313 12:34:54.659768 4143 flags.go:64] FLAG: --fail-swap-on="true"
Mar 13 12:34:54.664991 master-0 kubenswrapper[4143]: I0313 12:34:54.659772 4143 flags.go:64] FLAG: --feature-gates=""
Mar 13 12:34:54.664991 master-0 kubenswrapper[4143]: I0313 12:34:54.659777 4143 flags.go:64] FLAG: --file-check-frequency="20s"
Mar 13 12:34:54.664991 master-0 kubenswrapper[4143]: I0313 12:34:54.659781 4143 flags.go:64] FLAG: --global-housekeeping-interval="1m0s"
Mar 13 12:34:54.664991 master-0 kubenswrapper[4143]: I0313 12:34:54.659786 4143 flags.go:64] FLAG: --hairpin-mode="promiscuous-bridge"
Mar 13 12:34:54.664991 master-0 kubenswrapper[4143]: I0313 12:34:54.659790 4143 flags.go:64] FLAG: --healthz-bind-address="127.0.0.1"
Mar 13 12:34:54.664991 master-0 kubenswrapper[4143]: I0313 12:34:54.659794 4143 flags.go:64] FLAG: --healthz-port="10248"
Mar 13 12:34:54.664991 master-0 kubenswrapper[4143]: I0313 12:34:54.659798 4143 flags.go:64] FLAG: --help="false"
Mar 13 12:34:54.664991 master-0 kubenswrapper[4143]: I0313 12:34:54.659803 4143 flags.go:64] FLAG: --hostname-override=""
Mar 13 12:34:54.664991 master-0 kubenswrapper[4143]: I0313 12:34:54.659807 4143 flags.go:64] FLAG: --housekeeping-interval="10s"
Mar 13 12:34:54.664991 master-0 kubenswrapper[4143]: I0313 12:34:54.659812 4143 flags.go:64] FLAG: --http-check-frequency="20s"
Mar 13 12:34:54.664991 master-0 kubenswrapper[4143]: I0313 12:34:54.659818 4143 flags.go:64] FLAG: --image-credential-provider-bin-dir=""
Mar 13 12:34:54.664991 master-0 kubenswrapper[4143]: I0313 12:34:54.659822 4143 flags.go:64] FLAG: --image-credential-provider-config=""
Mar 13 12:34:54.664991 master-0 kubenswrapper[4143]: I0313 12:34:54.659827 4143 flags.go:64] FLAG: --image-gc-high-threshold="85"
Mar 13 12:34:54.664991 master-0 kubenswrapper[4143]: I0313 12:34:54.659831 4143 flags.go:64] FLAG: --image-gc-low-threshold="80"
Mar 13 12:34:54.664991 master-0 kubenswrapper[4143]: I0313 12:34:54.659835 4143 flags.go:64] FLAG: --image-service-endpoint=""
Mar 13 12:34:54.664991 master-0 kubenswrapper[4143]: I0313 12:34:54.659840 4143 flags.go:64] FLAG: --kernel-memcg-notification="false"
Mar 13 12:34:54.664991 master-0 kubenswrapper[4143]: I0313 12:34:54.659844 4143 flags.go:64] FLAG: --kube-api-burst="100"
Mar 13 12:34:54.664991 master-0 kubenswrapper[4143]: I0313 12:34:54.659849 4143 flags.go:64] FLAG: --kube-api-content-type="application/vnd.kubernetes.protobuf"
Mar 13 12:34:54.664991 master-0 kubenswrapper[4143]: I0313 12:34:54.659853 4143 flags.go:64] FLAG: --kube-api-qps="50"
Mar 13 12:34:54.664991 master-0 kubenswrapper[4143]: I0313 12:34:54.659858 4143 flags.go:64] FLAG: --kube-reserved=""
Mar 13 12:34:54.664991 master-0 kubenswrapper[4143]: I0313 12:34:54.659862 4143 flags.go:64] FLAG: --kube-reserved-cgroup=""
Mar 13 12:34:54.666572 master-0 kubenswrapper[4143]: I0313 12:34:54.659866 4143 flags.go:64] FLAG: --kubeconfig="/var/lib/kubelet/kubeconfig"
Mar 13 12:34:54.666572 master-0 kubenswrapper[4143]: I0313 12:34:54.659870 4143 flags.go:64] FLAG: --kubelet-cgroups=""
Mar 13 12:34:54.666572 master-0 kubenswrapper[4143]: I0313 12:34:54.659874 4143 flags.go:64] FLAG: --local-storage-capacity-isolation="true"
Mar 13 12:34:54.666572 master-0 kubenswrapper[4143]: I0313 12:34:54.659878 4143 flags.go:64] FLAG: --lock-file=""
Mar 13 12:34:54.666572 master-0 kubenswrapper[4143]: I0313 12:34:54.659883 4143 flags.go:64] FLAG: --log-cadvisor-usage="false"
Mar 13 12:34:54.666572 master-0 kubenswrapper[4143]: I0313 12:34:54.659887 4143 flags.go:64] FLAG: --log-flush-frequency="5s"
Mar 13 12:34:54.666572 master-0 kubenswrapper[4143]: I0313 12:34:54.659892 4143 flags.go:64] FLAG: --log-json-info-buffer-size="0"
Mar 13 12:34:54.666572 master-0 kubenswrapper[4143]: I0313 12:34:54.659899 4143 flags.go:64] FLAG: --log-json-split-stream="false"
Mar 13 12:34:54.666572 master-0 kubenswrapper[4143]: I0313 12:34:54.659904 4143 flags.go:64] FLAG: --log-text-info-buffer-size="0"
Mar 13 12:34:54.666572 master-0 kubenswrapper[4143]: I0313 12:34:54.659910 4143 flags.go:64] FLAG: --log-text-split-stream="false"
Mar 13 12:34:54.666572 master-0 kubenswrapper[4143]: I0313 12:34:54.659916 4143 flags.go:64] FLAG: --logging-format="text"
Mar 13 12:34:54.666572 master-0 kubenswrapper[4143]: I0313 12:34:54.659921 4143 flags.go:64] FLAG: --machine-id-file="/etc/machine-id,/var/lib/dbus/machine-id"
Mar 13 12:34:54.666572 master-0 kubenswrapper[4143]: I0313 12:34:54.659927 4143 flags.go:64] FLAG: --make-iptables-util-chains="true"
Mar 13 12:34:54.666572 master-0 kubenswrapper[4143]: I0313 12:34:54.659932 4143 flags.go:64] FLAG: --manifest-url=""
Mar 13 12:34:54.666572 master-0 kubenswrapper[4143]: I0313 12:34:54.659937 4143 flags.go:64] FLAG: --manifest-url-header=""
Mar 13 12:34:54.666572 master-0 kubenswrapper[4143]: I0313 12:34:54.659944 4143 flags.go:64] FLAG: --max-housekeeping-interval="15s"
Mar 13 12:34:54.666572 master-0 kubenswrapper[4143]: I0313 12:34:54.659949 4143 flags.go:64] FLAG: --max-open-files="1000000"
Mar 13 12:34:54.666572 master-0 kubenswrapper[4143]: I0313 12:34:54.659954 4143 flags.go:64] FLAG: --max-pods="110"
Mar 13 12:34:54.666572 master-0 kubenswrapper[4143]: I0313 12:34:54.659960 4143 flags.go:64] FLAG: --maximum-dead-containers="-1"
Mar 13 12:34:54.666572 master-0 kubenswrapper[4143]: I0313 12:34:54.659966 4143 flags.go:64] FLAG: --maximum-dead-containers-per-container="1"
Mar 13 12:34:54.666572 master-0 kubenswrapper[4143]: I0313 12:34:54.659971 4143 flags.go:64] FLAG: --memory-manager-policy="None"
Mar 13 12:34:54.666572 master-0 kubenswrapper[4143]: I0313 12:34:54.659977 4143 flags.go:64] FLAG: --minimum-container-ttl-duration="6m0s"
Mar 13 12:34:54.666572 master-0 kubenswrapper[4143]: I0313 12:34:54.659983 4143 flags.go:64] FLAG: --minimum-image-ttl-duration="2m0s"
Mar 13 12:34:54.666572 master-0 kubenswrapper[4143]: I0313 12:34:54.659990 4143 flags.go:64] FLAG: --node-ip="192.168.32.10"
Mar 13 12:34:54.667970 master-0 kubenswrapper[4143]: I0313 12:34:54.659996 4143 flags.go:64] FLAG: --node-labels="node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,node.openshift.io/os_id=rhcos"
Mar 13 12:34:54.667970 master-0 kubenswrapper[4143]: I0313 12:34:54.660008 4143 flags.go:64] FLAG: --node-status-max-images="50"
Mar 13 12:34:54.667970 master-0 kubenswrapper[4143]: I0313 12:34:54.660013 4143 flags.go:64] FLAG: --node-status-update-frequency="10s"
Mar 13 12:34:54.667970 master-0 kubenswrapper[4143]: I0313 12:34:54.660019 4143 flags.go:64] FLAG: --oom-score-adj="-999"
Mar 13 12:34:54.667970 master-0 kubenswrapper[4143]: I0313 12:34:54.660023 4143 flags.go:64] FLAG: --pod-cidr=""
Mar 13 12:34:54.667970 master-0 kubenswrapper[4143]: I0313 12:34:54.660028 4143 flags.go:64] FLAG: --pod-infra-container-image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1d605384f31a8085f78a96145c2c3dc51afe22721144196140a2699b7c07ebe3"
Mar 13 12:34:54.667970 master-0 kubenswrapper[4143]: I0313 12:34:54.660037 4143 flags.go:64] FLAG: --pod-manifest-path=""
Mar 13 12:34:54.667970 master-0 kubenswrapper[4143]: I0313 12:34:54.660041 4143 flags.go:64] FLAG: --pod-max-pids="-1"
Mar 13 12:34:54.667970 master-0 kubenswrapper[4143]: I0313 12:34:54.660046 4143 flags.go:64] FLAG: --pods-per-core="0"
Mar 13 12:34:54.667970 master-0 kubenswrapper[4143]: I0313 12:34:54.660051 4143 flags.go:64] FLAG: --port="10250"
Mar 13 12:34:54.667970 master-0 kubenswrapper[4143]: I0313 12:34:54.660056 4143 flags.go:64] FLAG: --protect-kernel-defaults="false"
Mar 13 12:34:54.667970 master-0 kubenswrapper[4143]: I0313 12:34:54.660061 4143 flags.go:64] FLAG: --provider-id=""
Mar 13 12:34:54.667970 master-0 kubenswrapper[4143]: I0313 12:34:54.660065 4143 flags.go:64] FLAG: --qos-reserved=""
Mar 13 12:34:54.667970 master-0 kubenswrapper[4143]: I0313 12:34:54.660070 4143 flags.go:64] FLAG: --read-only-port="10255"
Mar 13 12:34:54.667970 master-0 kubenswrapper[4143]: I0313 12:34:54.660075 4143 flags.go:64] FLAG: --register-node="true"
Mar 13 12:34:54.667970 master-0 kubenswrapper[4143]: I0313 12:34:54.660080 4143 flags.go:64] FLAG: --register-schedulable="true"
Mar 13 12:34:54.667970 master-0 kubenswrapper[4143]: I0313 12:34:54.660085 4143 flags.go:64] FLAG: --register-with-taints="node-role.kubernetes.io/master=:NoSchedule"
Mar 13 12:34:54.667970 master-0 kubenswrapper[4143]: I0313 12:34:54.660093 4143 flags.go:64] FLAG: --registry-burst="10"
Mar 13 12:34:54.667970 master-0 kubenswrapper[4143]: I0313 12:34:54.660097 4143 flags.go:64] FLAG: --registry-qps="5"
Mar 13 12:34:54.667970 master-0 kubenswrapper[4143]: I0313 12:34:54.660102 4143 flags.go:64] FLAG: --reserved-cpus=""
Mar 13 12:34:54.667970 master-0 kubenswrapper[4143]: I0313 12:34:54.660107 4143 flags.go:64] FLAG: --reserved-memory=""
Mar 13 12:34:54.667970 master-0 kubenswrapper[4143]: I0313 12:34:54.660113 4143 flags.go:64] FLAG: --resolv-conf="/etc/resolv.conf"
Mar 13 12:34:54.667970 master-0 kubenswrapper[4143]: I0313 12:34:54.660118 4143 flags.go:64] FLAG: --root-dir="/var/lib/kubelet"
Mar 13 12:34:54.667970 master-0 kubenswrapper[4143]: I0313 12:34:54.660123 4143 flags.go:64] FLAG: --rotate-certificates="false"
Mar 13 12:34:54.669526 master-0 kubenswrapper[4143]: I0313 12:34:54.660128 4143 flags.go:64] FLAG: --rotate-server-certificates="false"
Mar 13 12:34:54.669526 master-0 kubenswrapper[4143]: I0313 12:34:54.660132 4143 flags.go:64] FLAG: --runonce="false"
Mar 13 12:34:54.669526 master-0 kubenswrapper[4143]: I0313 12:34:54.660137 4143 flags.go:64] FLAG: --runtime-cgroups="/system.slice/crio.service"
Mar 13 12:34:54.669526 master-0 kubenswrapper[4143]: I0313 12:34:54.660146 4143 flags.go:64] FLAG: --runtime-request-timeout="2m0s"
Mar 13 12:34:54.669526 master-0 kubenswrapper[4143]: I0313 12:34:54.660151 4143 flags.go:64] FLAG: --seccomp-default="false"
Mar 13 12:34:54.669526 master-0 kubenswrapper[4143]: I0313 12:34:54.660175 4143 flags.go:64] FLAG: --serialize-image-pulls="true"
Mar 13 12:34:54.669526 master-0 kubenswrapper[4143]: I0313 12:34:54.660180 4143 flags.go:64] FLAG: --storage-driver-buffer-duration="1m0s"
Mar 13 12:34:54.669526 master-0 kubenswrapper[4143]: I0313 12:34:54.660185 4143 flags.go:64] FLAG: --storage-driver-db="cadvisor"
Mar 13 12:34:54.669526 master-0 kubenswrapper[4143]: I0313 12:34:54.660190 4143 flags.go:64] FLAG: --storage-driver-host="localhost:8086"
Mar 13 12:34:54.669526 master-0 kubenswrapper[4143]: I0313 12:34:54.660196 4143 flags.go:64] FLAG: --storage-driver-password="root"
Mar 13 12:34:54.669526 master-0 kubenswrapper[4143]: I0313 12:34:54.660201 4143 flags.go:64] FLAG: --storage-driver-secure="false"
Mar 13 12:34:54.669526 master-0 kubenswrapper[4143]: I0313 12:34:54.662707 4143 flags.go:64] FLAG: --storage-driver-table="stats"
Mar 13 12:34:54.669526 master-0 kubenswrapper[4143]: I0313 12:34:54.662722 4143 flags.go:64] FLAG: --storage-driver-user="root"
Mar 13 12:34:54.669526 master-0 kubenswrapper[4143]: I0313 12:34:54.662728 4143 flags.go:64] FLAG: --streaming-connection-idle-timeout="4h0m0s"
Mar 13 12:34:54.669526 master-0 kubenswrapper[4143]: I0313 12:34:54.662734 4143 flags.go:64] FLAG: --sync-frequency="1m0s"
Mar 13 12:34:54.669526 master-0 kubenswrapper[4143]: I0313 12:34:54.662739 4143 flags.go:64] FLAG: --system-cgroups=""
Mar 13 12:34:54.669526 master-0 kubenswrapper[4143]: I0313 12:34:54.662745 4143 flags.go:64] FLAG: --system-reserved="cpu=500m,ephemeral-storage=1Gi,memory=1Gi"
Mar 13 12:34:54.669526 master-0 kubenswrapper[4143]: I0313 12:34:54.662753 4143 flags.go:64] FLAG: --system-reserved-cgroup=""
Mar 13 12:34:54.669526 master-0 kubenswrapper[4143]: I0313 12:34:54.662759 4143 flags.go:64] FLAG: --tls-cert-file=""
Mar 13 12:34:54.669526 master-0 kubenswrapper[4143]: I0313 12:34:54.662764 4143 flags.go:64] FLAG: --tls-cipher-suites="[]"
Mar 13 12:34:54.669526 master-0 kubenswrapper[4143]: I0313 12:34:54.662770 4143 flags.go:64] FLAG: --tls-min-version=""
Mar 13 12:34:54.669526 master-0 kubenswrapper[4143]: I0313 12:34:54.662775 4143 flags.go:64] FLAG: --tls-private-key-file=""
Mar 13 12:34:54.669526 master-0 kubenswrapper[4143]: I0313 12:34:54.662780 4143 flags.go:64] FLAG: --topology-manager-policy="none"
Mar 13 12:34:54.669526 master-0 kubenswrapper[4143]: I0313 12:34:54.662784 4143 flags.go:64] FLAG: --topology-manager-policy-options=""
Mar 13 12:34:54.669526 master-0 kubenswrapper[4143]: I0313 12:34:54.662789 4143 flags.go:64] FLAG: --topology-manager-scope="container"
Mar 13 12:34:54.671069 master-0 kubenswrapper[4143]: I0313 12:34:54.662794 4143 flags.go:64] FLAG: --v="2"
Mar 13 12:34:54.671069 master-0 kubenswrapper[4143]: I0313 12:34:54.662801 4143 flags.go:64] FLAG: --version="false"
Mar 13 12:34:54.671069 master-0 kubenswrapper[4143]: I0313 12:34:54.662808 4143 flags.go:64] FLAG: --vmodule=""
Mar 13 12:34:54.671069 master-0 kubenswrapper[4143]: I0313 12:34:54.662814 4143 flags.go:64] FLAG: --volume-plugin-dir="/etc/kubernetes/kubelet-plugins/volume/exec"
Mar 13 12:34:54.671069 master-0 kubenswrapper[4143]: I0313 12:34:54.662820 4143 flags.go:64] FLAG: --volume-stats-agg-period="1m0s"
Mar 13 12:34:54.671069 master-0 kubenswrapper[4143]: W0313 12:34:54.662961 4143 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Mar 13 12:34:54.671069 master-0 kubenswrapper[4143]: W0313 12:34:54.662968 4143 feature_gate.go:330] unrecognized feature gate: OVNObservability
Mar 13 12:34:54.671069 master-0 kubenswrapper[4143]: W0313 12:34:54.662973 4143 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Mar 13 12:34:54.671069 master-0 kubenswrapper[4143]: W0313 12:34:54.662980 4143 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Mar 13 12:34:54.671069 master-0 kubenswrapper[4143]: W0313 12:34:54.662985 4143 feature_gate.go:330] unrecognized feature gate: PinnedImages
Mar 13 12:34:54.671069 master-0 kubenswrapper[4143]: W0313 12:34:54.662991 4143 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Mar 13 12:34:54.671069 master-0 kubenswrapper[4143]: W0313 12:34:54.662996 4143 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Mar 13 12:34:54.671069 master-0 kubenswrapper[4143]: W0313 12:34:54.663002 4143 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Mar 13 12:34:54.671069 master-0 kubenswrapper[4143]: W0313 12:34:54.663008 4143 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Mar 13 12:34:54.671069 master-0 kubenswrapper[4143]: W0313 12:34:54.663013 4143 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Mar 13 12:34:54.671069 master-0 kubenswrapper[4143]: W0313 12:34:54.663018 4143 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Mar 13 12:34:54.671069 master-0 kubenswrapper[4143]: W0313 12:34:54.663022 4143 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Mar 13 12:34:54.671069 master-0 kubenswrapper[4143]: W0313 12:34:54.663027 4143 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Mar 13 12:34:54.671069 master-0 kubenswrapper[4143]: W0313 12:34:54.663032 4143 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Mar 13 12:34:54.671069 master-0 kubenswrapper[4143]: W0313 12:34:54.663036 4143 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Mar 13 12:34:54.671069 master-0 kubenswrapper[4143]: W0313 12:34:54.663041 4143 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Mar 13 12:34:54.672371 master-0 kubenswrapper[4143]: W0313 12:34:54.663046 4143 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Mar 13 12:34:54.672371 master-0 kubenswrapper[4143]: W0313 12:34:54.663051 4143 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Mar 13 12:34:54.672371 master-0 kubenswrapper[4143]: W0313 12:34:54.663056 4143 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Mar 13 12:34:54.672371 master-0 kubenswrapper[4143]: W0313 12:34:54.663061 4143 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Mar 13 12:34:54.672371 master-0 kubenswrapper[4143]: W0313 12:34:54.663065 4143 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Mar 13 12:34:54.672371 master-0 kubenswrapper[4143]: W0313 12:34:54.663069 4143 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Mar 13 12:34:54.672371 master-0 kubenswrapper[4143]: W0313 12:34:54.663073 4143 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Mar 13 12:34:54.672371 master-0 kubenswrapper[4143]: W0313 12:34:54.663077 4143 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Mar 13 12:34:54.672371 master-0 kubenswrapper[4143]: W0313 12:34:54.663081 4143 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Mar 13 12:34:54.672371 master-0 kubenswrapper[4143]: W0313 12:34:54.663086 4143 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Mar 13 12:34:54.672371 master-0 kubenswrapper[4143]: W0313 12:34:54.663090 4143 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Mar 13 12:34:54.672371 master-0 kubenswrapper[4143]: W0313 12:34:54.663095 4143 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Mar 13 12:34:54.672371 master-0 kubenswrapper[4143]: W0313 12:34:54.663099 4143 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Mar 13 12:34:54.672371 master-0 kubenswrapper[4143]: W0313 12:34:54.663104 4143 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Mar 13 12:34:54.672371 master-0 kubenswrapper[4143]: W0313 12:34:54.663109 4143 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Mar 13 12:34:54.672371 master-0 kubenswrapper[4143]: W0313 12:34:54.663114 4143 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Mar 13 12:34:54.672371 master-0 kubenswrapper[4143]: W0313 12:34:54.663118 4143 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Mar 13 12:34:54.672371 master-0 kubenswrapper[4143]: W0313 12:34:54.663123 4143 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Mar 13 12:34:54.672371 master-0 kubenswrapper[4143]: W0313 12:34:54.663128 4143 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Mar 13 12:34:54.673524 master-0 kubenswrapper[4143]: W0313 12:34:54.663131 4143 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Mar 13 12:34:54.673524 master-0 kubenswrapper[4143]: W0313 12:34:54.663139 4143 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Mar 13 12:34:54.673524 master-0 kubenswrapper[4143]: W0313 12:34:54.663143 4143 feature_gate.go:330] unrecognized feature gate: GatewayAPI Mar 13 12:34:54.673524 master-0 kubenswrapper[4143]: W0313 12:34:54.663148 4143 feature_gate.go:330] unrecognized feature gate: NewOLM Mar 13 12:34:54.673524 master-0 kubenswrapper[4143]: W0313 12:34:54.663169 4143 feature_gate.go:330] unrecognized feature gate: InsightsConfig Mar 13 12:34:54.673524 master-0 kubenswrapper[4143]: W0313 12:34:54.663174 4143 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Mar 13 12:34:54.673524 master-0 kubenswrapper[4143]: W0313 12:34:54.663179 4143 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Mar 13 12:34:54.673524 master-0 kubenswrapper[4143]: W0313 12:34:54.663183 4143 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Mar 13 12:34:54.673524 master-0 kubenswrapper[4143]: W0313 
12:34:54.663189 4143 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Mar 13 12:34:54.673524 master-0 kubenswrapper[4143]: W0313 12:34:54.663195 4143 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Mar 13 12:34:54.673524 master-0 kubenswrapper[4143]: W0313 12:34:54.663200 4143 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Mar 13 12:34:54.673524 master-0 kubenswrapper[4143]: W0313 12:34:54.663205 4143 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Mar 13 12:34:54.673524 master-0 kubenswrapper[4143]: W0313 12:34:54.663210 4143 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Mar 13 12:34:54.673524 master-0 kubenswrapper[4143]: W0313 12:34:54.663215 4143 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Mar 13 12:34:54.673524 master-0 kubenswrapper[4143]: W0313 12:34:54.663225 4143 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Mar 13 12:34:54.673524 master-0 kubenswrapper[4143]: W0313 12:34:54.663229 4143 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Mar 13 12:34:54.673524 master-0 kubenswrapper[4143]: W0313 12:34:54.663236 4143 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. 
Mar 13 12:34:54.673524 master-0 kubenswrapper[4143]: W0313 12:34:54.663242 4143 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Mar 13 12:34:54.673524 master-0 kubenswrapper[4143]: W0313 12:34:54.663249 4143 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Mar 13 12:34:54.674562 master-0 kubenswrapper[4143]: W0313 12:34:54.663255 4143 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Mar 13 12:34:54.674562 master-0 kubenswrapper[4143]: W0313 12:34:54.663259 4143 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Mar 13 12:34:54.674562 master-0 kubenswrapper[4143]: W0313 12:34:54.663265 4143 feature_gate.go:330] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings Mar 13 12:34:54.674562 master-0 kubenswrapper[4143]: W0313 12:34:54.663269 4143 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Mar 13 12:34:54.674562 master-0 kubenswrapper[4143]: W0313 12:34:54.663274 4143 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Mar 13 12:34:54.674562 master-0 kubenswrapper[4143]: W0313 12:34:54.663278 4143 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Mar 13 12:34:54.674562 master-0 kubenswrapper[4143]: W0313 12:34:54.663283 4143 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Mar 13 12:34:54.674562 master-0 kubenswrapper[4143]: W0313 12:34:54.663294 4143 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Mar 13 12:34:54.674562 master-0 kubenswrapper[4143]: W0313 12:34:54.663304 4143 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Mar 13 12:34:54.674562 master-0 kubenswrapper[4143]: W0313 12:34:54.663309 4143 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Mar 13 12:34:54.674562 master-0 kubenswrapper[4143]: W0313 12:34:54.663314 4143 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Mar 13 12:34:54.674562 
master-0 kubenswrapper[4143]: W0313 12:34:54.663318 4143 feature_gate.go:330] unrecognized feature gate: Example Mar 13 12:34:54.674562 master-0 kubenswrapper[4143]: W0313 12:34:54.663323 4143 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Mar 13 12:34:54.674562 master-0 kubenswrapper[4143]: W0313 12:34:54.663327 4143 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Mar 13 12:34:54.674562 master-0 kubenswrapper[4143]: W0313 12:34:54.663331 4143 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Mar 13 12:34:54.674562 master-0 kubenswrapper[4143]: W0313 12:34:54.663336 4143 feature_gate.go:330] unrecognized feature gate: SignatureStores Mar 13 12:34:54.674562 master-0 kubenswrapper[4143]: W0313 12:34:54.663341 4143 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Mar 13 12:34:54.674562 master-0 kubenswrapper[4143]: W0313 12:34:54.663351 4143 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. 
Mar 13 12:34:54.675620 master-0 kubenswrapper[4143]: I0313 12:34:54.664615 4143 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false StreamingCollectionEncodingToJSON:true StreamingCollectionEncodingToProtobuf:true TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Mar 13 12:34:54.678021 master-0 kubenswrapper[4143]: I0313 12:34:54.677944 4143 server.go:491] "Kubelet version" kubeletVersion="v1.31.14" Mar 13 12:34:54.678021 master-0 kubenswrapper[4143]: I0313 12:34:54.678001 4143 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Mar 13 12:34:54.678575 master-0 kubenswrapper[4143]: W0313 12:34:54.678084 4143 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Mar 13 12:34:54.678575 master-0 kubenswrapper[4143]: W0313 12:34:54.678094 4143 feature_gate.go:330] unrecognized feature gate: PinnedImages Mar 13 12:34:54.678575 master-0 kubenswrapper[4143]: W0313 12:34:54.678099 4143 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Mar 13 12:34:54.678575 master-0 kubenswrapper[4143]: W0313 12:34:54.678105 4143 feature_gate.go:330] unrecognized feature gate: Example Mar 13 12:34:54.678575 master-0 kubenswrapper[4143]: W0313 12:34:54.678109 4143 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Mar 13 12:34:54.678575 master-0 kubenswrapper[4143]: W0313 12:34:54.678114 4143 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Mar 13 12:34:54.678575 master-0 kubenswrapper[4143]: W0313 12:34:54.678118 4143 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Mar 13 12:34:54.678575 master-0 
kubenswrapper[4143]: W0313 12:34:54.678122 4143 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Mar 13 12:34:54.678575 master-0 kubenswrapper[4143]: W0313 12:34:54.678125 4143 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Mar 13 12:34:54.678575 master-0 kubenswrapper[4143]: W0313 12:34:54.678130 4143 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Mar 13 12:34:54.678575 master-0 kubenswrapper[4143]: W0313 12:34:54.678134 4143 feature_gate.go:330] unrecognized feature gate: NewOLM Mar 13 12:34:54.678575 master-0 kubenswrapper[4143]: W0313 12:34:54.678141 4143 feature_gate.go:330] unrecognized feature gate: PlatformOperators Mar 13 12:34:54.678575 master-0 kubenswrapper[4143]: W0313 12:34:54.678145 4143 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Mar 13 12:34:54.678575 master-0 kubenswrapper[4143]: W0313 12:34:54.678149 4143 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Mar 13 12:34:54.678575 master-0 kubenswrapper[4143]: W0313 12:34:54.678172 4143 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. 
Mar 13 12:34:54.678575 master-0 kubenswrapper[4143]: W0313 12:34:54.678179 4143 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Mar 13 12:34:54.678575 master-0 kubenswrapper[4143]: W0313 12:34:54.678184 4143 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Mar 13 12:34:54.678575 master-0 kubenswrapper[4143]: W0313 12:34:54.678188 4143 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Mar 13 12:34:54.678575 master-0 kubenswrapper[4143]: W0313 12:34:54.678192 4143 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Mar 13 12:34:54.678575 master-0 kubenswrapper[4143]: W0313 12:34:54.678196 4143 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Mar 13 12:34:54.679647 master-0 kubenswrapper[4143]: W0313 12:34:54.678200 4143 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Mar 13 12:34:54.679647 master-0 kubenswrapper[4143]: W0313 12:34:54.678204 4143 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Mar 13 12:34:54.679647 master-0 kubenswrapper[4143]: W0313 12:34:54.678208 4143 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Mar 13 12:34:54.679647 master-0 kubenswrapper[4143]: W0313 12:34:54.678211 4143 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Mar 13 12:34:54.679647 master-0 kubenswrapper[4143]: W0313 12:34:54.678215 4143 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Mar 13 12:34:54.679647 master-0 kubenswrapper[4143]: W0313 12:34:54.678219 4143 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Mar 13 12:34:54.679647 master-0 kubenswrapper[4143]: W0313 12:34:54.678224 4143 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Mar 13 12:34:54.679647 master-0 kubenswrapper[4143]: W0313 12:34:54.678228 4143 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Mar 13 
12:34:54.679647 master-0 kubenswrapper[4143]: W0313 12:34:54.678232 4143 feature_gate.go:330] unrecognized feature gate: InsightsConfig Mar 13 12:34:54.679647 master-0 kubenswrapper[4143]: W0313 12:34:54.678236 4143 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Mar 13 12:34:54.679647 master-0 kubenswrapper[4143]: W0313 12:34:54.678240 4143 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Mar 13 12:34:54.679647 master-0 kubenswrapper[4143]: W0313 12:34:54.678243 4143 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Mar 13 12:34:54.679647 master-0 kubenswrapper[4143]: W0313 12:34:54.678248 4143 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Mar 13 12:34:54.679647 master-0 kubenswrapper[4143]: W0313 12:34:54.678254 4143 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Mar 13 12:34:54.679647 master-0 kubenswrapper[4143]: W0313 12:34:54.678258 4143 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Mar 13 12:34:54.679647 master-0 kubenswrapper[4143]: W0313 12:34:54.678262 4143 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Mar 13 12:34:54.679647 master-0 kubenswrapper[4143]: W0313 12:34:54.678265 4143 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Mar 13 12:34:54.679647 master-0 kubenswrapper[4143]: W0313 12:34:54.678269 4143 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Mar 13 12:34:54.679647 master-0 kubenswrapper[4143]: W0313 12:34:54.678273 4143 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Mar 13 12:34:54.679647 master-0 kubenswrapper[4143]: W0313 12:34:54.678287 4143 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Mar 13 12:34:54.680807 master-0 kubenswrapper[4143]: W0313 12:34:54.678291 4143 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. 
Mar 13 12:34:54.680807 master-0 kubenswrapper[4143]: W0313 12:34:54.678298 4143 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Mar 13 12:34:54.680807 master-0 kubenswrapper[4143]: W0313 12:34:54.678302 4143 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Mar 13 12:34:54.680807 master-0 kubenswrapper[4143]: W0313 12:34:54.678307 4143 feature_gate.go:330] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings Mar 13 12:34:54.680807 master-0 kubenswrapper[4143]: W0313 12:34:54.678310 4143 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Mar 13 12:34:54.680807 master-0 kubenswrapper[4143]: W0313 12:34:54.678315 4143 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Mar 13 12:34:54.680807 master-0 kubenswrapper[4143]: W0313 12:34:54.678319 4143 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Mar 13 12:34:54.680807 master-0 kubenswrapper[4143]: W0313 12:34:54.678323 4143 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Mar 13 12:34:54.680807 master-0 kubenswrapper[4143]: W0313 12:34:54.678326 4143 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Mar 13 12:34:54.680807 master-0 kubenswrapper[4143]: W0313 12:34:54.678331 4143 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
Mar 13 12:34:54.680807 master-0 kubenswrapper[4143]: W0313 12:34:54.678336 4143 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Mar 13 12:34:54.680807 master-0 kubenswrapper[4143]: W0313 12:34:54.678339 4143 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Mar 13 12:34:54.680807 master-0 kubenswrapper[4143]: W0313 12:34:54.678343 4143 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Mar 13 12:34:54.680807 master-0 kubenswrapper[4143]: W0313 12:34:54.678347 4143 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Mar 13 12:34:54.680807 master-0 kubenswrapper[4143]: W0313 12:34:54.678350 4143 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Mar 13 12:34:54.680807 master-0 kubenswrapper[4143]: W0313 12:34:54.678355 4143 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Mar 13 12:34:54.680807 master-0 kubenswrapper[4143]: W0313 12:34:54.678359 4143 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Mar 13 12:34:54.680807 master-0 kubenswrapper[4143]: W0313 12:34:54.678362 4143 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Mar 13 12:34:54.680807 master-0 kubenswrapper[4143]: W0313 12:34:54.678369 4143 feature_gate.go:330] unrecognized feature gate: SignatureStores Mar 13 12:34:54.682204 master-0 kubenswrapper[4143]: W0313 12:34:54.678403 4143 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Mar 13 12:34:54.682204 master-0 kubenswrapper[4143]: W0313 12:34:54.678407 4143 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Mar 13 12:34:54.682204 master-0 kubenswrapper[4143]: W0313 12:34:54.678411 4143 feature_gate.go:330] unrecognized feature gate: GatewayAPI Mar 13 12:34:54.682204 master-0 kubenswrapper[4143]: W0313 12:34:54.678414 4143 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Mar 13 12:34:54.682204 master-0 
kubenswrapper[4143]: W0313 12:34:54.678418 4143 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Mar 13 12:34:54.682204 master-0 kubenswrapper[4143]: W0313 12:34:54.678422 4143 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Mar 13 12:34:54.682204 master-0 kubenswrapper[4143]: W0313 12:34:54.678426 4143 feature_gate.go:330] unrecognized feature gate: OVNObservability Mar 13 12:34:54.682204 master-0 kubenswrapper[4143]: W0313 12:34:54.678430 4143 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Mar 13 12:34:54.682204 master-0 kubenswrapper[4143]: W0313 12:34:54.678435 4143 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Mar 13 12:34:54.682204 master-0 kubenswrapper[4143]: W0313 12:34:54.678440 4143 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Mar 13 12:34:54.682204 master-0 kubenswrapper[4143]: W0313 12:34:54.678444 4143 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Mar 13 12:34:54.682204 master-0 kubenswrapper[4143]: W0313 12:34:54.678448 4143 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Mar 13 12:34:54.682204 master-0 kubenswrapper[4143]: W0313 12:34:54.678452 4143 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Mar 13 12:34:54.682204 master-0 kubenswrapper[4143]: I0313 12:34:54.678460 4143 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false StreamingCollectionEncodingToJSON:true StreamingCollectionEncodingToProtobuf:true TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true 
VolumeAttributesClass:false]} Mar 13 12:34:54.682925 master-0 kubenswrapper[4143]: W0313 12:34:54.678597 4143 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Mar 13 12:34:54.682925 master-0 kubenswrapper[4143]: W0313 12:34:54.678608 4143 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Mar 13 12:34:54.682925 master-0 kubenswrapper[4143]: W0313 12:34:54.678613 4143 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Mar 13 12:34:54.682925 master-0 kubenswrapper[4143]: W0313 12:34:54.678618 4143 feature_gate.go:330] unrecognized feature gate: NewOLM Mar 13 12:34:54.682925 master-0 kubenswrapper[4143]: W0313 12:34:54.678625 4143 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Mar 13 12:34:54.682925 master-0 kubenswrapper[4143]: W0313 12:34:54.678630 4143 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Mar 13 12:34:54.682925 master-0 kubenswrapper[4143]: W0313 12:34:54.678634 4143 feature_gate.go:330] unrecognized feature gate: OVNObservability Mar 13 12:34:54.682925 master-0 kubenswrapper[4143]: W0313 12:34:54.678638 4143 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Mar 13 12:34:54.682925 master-0 kubenswrapper[4143]: W0313 12:34:54.678642 4143 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Mar 13 12:34:54.682925 master-0 kubenswrapper[4143]: W0313 12:34:54.678645 4143 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Mar 13 12:34:54.682925 master-0 kubenswrapper[4143]: W0313 12:34:54.678649 4143 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Mar 13 12:34:54.682925 master-0 kubenswrapper[4143]: W0313 12:34:54.678653 4143 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Mar 13 12:34:54.682925 master-0 kubenswrapper[4143]: W0313 12:34:54.678656 4143 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Mar 13 12:34:54.682925 
master-0 kubenswrapper[4143]: W0313 12:34:54.678660 4143 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Mar 13 12:34:54.682925 master-0 kubenswrapper[4143]: W0313 12:34:54.678663 4143 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Mar 13 12:34:54.682925 master-0 kubenswrapper[4143]: W0313 12:34:54.678667 4143 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Mar 13 12:34:54.682925 master-0 kubenswrapper[4143]: W0313 12:34:54.678672 4143 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Mar 13 12:34:54.682925 master-0 kubenswrapper[4143]: W0313 12:34:54.678677 4143 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Mar 13 12:34:54.682925 master-0 kubenswrapper[4143]: W0313 12:34:54.678680 4143 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Mar 13 12:34:54.683654 master-0 kubenswrapper[4143]: W0313 12:34:54.678684 4143 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Mar 13 12:34:54.683654 master-0 kubenswrapper[4143]: W0313 12:34:54.678689 4143 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Mar 13 12:34:54.683654 master-0 kubenswrapper[4143]: W0313 12:34:54.678693 4143 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Mar 13 12:34:54.683654 master-0 kubenswrapper[4143]: W0313 12:34:54.678697 4143 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Mar 13 12:34:54.683654 master-0 kubenswrapper[4143]: W0313 12:34:54.678702 4143 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Mar 13 12:34:54.683654 master-0 kubenswrapper[4143]: W0313 12:34:54.678706 4143 feature_gate.go:330] unrecognized feature gate: PlatformOperators Mar 13 12:34:54.683654 master-0 kubenswrapper[4143]: W0313 12:34:54.678710 4143 feature_gate.go:330] unrecognized feature gate: 
GCPClusterHostedDNS Mar 13 12:34:54.683654 master-0 kubenswrapper[4143]: W0313 12:34:54.678714 4143 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Mar 13 12:34:54.683654 master-0 kubenswrapper[4143]: W0313 12:34:54.678718 4143 feature_gate.go:330] unrecognized feature gate: GatewayAPI Mar 13 12:34:54.683654 master-0 kubenswrapper[4143]: W0313 12:34:54.678721 4143 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Mar 13 12:34:54.683654 master-0 kubenswrapper[4143]: W0313 12:34:54.678725 4143 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Mar 13 12:34:54.683654 master-0 kubenswrapper[4143]: W0313 12:34:54.678729 4143 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Mar 13 12:34:54.683654 master-0 kubenswrapper[4143]: W0313 12:34:54.678733 4143 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Mar 13 12:34:54.683654 master-0 kubenswrapper[4143]: W0313 12:34:54.678736 4143 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Mar 13 12:34:54.683654 master-0 kubenswrapper[4143]: W0313 12:34:54.678740 4143 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Mar 13 12:34:54.683654 master-0 kubenswrapper[4143]: W0313 12:34:54.678744 4143 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Mar 13 12:34:54.683654 master-0 kubenswrapper[4143]: W0313 12:34:54.678747 4143 feature_gate.go:330] unrecognized feature gate: PinnedImages Mar 13 12:34:54.683654 master-0 kubenswrapper[4143]: W0313 12:34:54.678752 4143 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Mar 13 12:34:54.683654 master-0 kubenswrapper[4143]: W0313 12:34:54.678755 4143 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Mar 13 12:34:54.683654 master-0 kubenswrapper[4143]: W0313 12:34:54.678759 4143 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Mar 13 12:34:54.683654 master-0 kubenswrapper[4143]: W0313 12:34:54.678763 4143 
feature_gate.go:330] unrecognized feature gate: OnClusterBuild Mar 13 12:34:54.684433 master-0 kubenswrapper[4143]: W0313 12:34:54.678766 4143 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Mar 13 12:34:54.684433 master-0 kubenswrapper[4143]: W0313 12:34:54.678771 4143 feature_gate.go:330] unrecognized feature gate: InsightsConfig Mar 13 12:34:54.684433 master-0 kubenswrapper[4143]: W0313 12:34:54.678776 4143 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Mar 13 12:34:54.684433 master-0 kubenswrapper[4143]: W0313 12:34:54.678780 4143 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Mar 13 12:34:54.684433 master-0 kubenswrapper[4143]: W0313 12:34:54.678784 4143 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Mar 13 12:34:54.684433 master-0 kubenswrapper[4143]: W0313 12:34:54.678789 4143 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Mar 13 12:34:54.684433 master-0 kubenswrapper[4143]: W0313 12:34:54.678793 4143 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Mar 13 12:34:54.684433 master-0 kubenswrapper[4143]: W0313 12:34:54.678797 4143 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Mar 13 12:34:54.684433 master-0 kubenswrapper[4143]: W0313 12:34:54.678800 4143 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Mar 13 12:34:54.684433 master-0 kubenswrapper[4143]: W0313 12:34:54.678804 4143 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Mar 13 12:34:54.684433 master-0 kubenswrapper[4143]: W0313 12:34:54.678808 4143 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Mar 13 12:34:54.684433 master-0 kubenswrapper[4143]: W0313 12:34:54.678812 4143 feature_gate.go:330] unrecognized feature gate: Example Mar 13 12:34:54.684433 master-0 kubenswrapper[4143]: W0313 12:34:54.678815 4143 
feature_gate.go:330] unrecognized feature gate: ManagedBootImages Mar 13 12:34:54.684433 master-0 kubenswrapper[4143]: W0313 12:34:54.678819 4143 feature_gate.go:330] unrecognized feature gate: SignatureStores Mar 13 12:34:54.684433 master-0 kubenswrapper[4143]: W0313 12:34:54.678822 4143 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Mar 13 12:34:54.684433 master-0 kubenswrapper[4143]: W0313 12:34:54.678826 4143 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Mar 13 12:34:54.684433 master-0 kubenswrapper[4143]: W0313 12:34:54.678830 4143 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Mar 13 12:34:54.684433 master-0 kubenswrapper[4143]: W0313 12:34:54.678834 4143 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Mar 13 12:34:54.684433 master-0 kubenswrapper[4143]: W0313 12:34:54.678838 4143 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Mar 13 12:34:54.685034 master-0 kubenswrapper[4143]: W0313 12:34:54.678842 4143 feature_gate.go:330] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings Mar 13 12:34:54.685034 master-0 kubenswrapper[4143]: W0313 12:34:54.678846 4143 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Mar 13 12:34:54.685034 master-0 kubenswrapper[4143]: W0313 12:34:54.678850 4143 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Mar 13 12:34:54.685034 master-0 kubenswrapper[4143]: W0313 12:34:54.678853 4143 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Mar 13 12:34:54.685034 master-0 kubenswrapper[4143]: W0313 12:34:54.678859 4143 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. 
Mar 13 12:34:54.685034 master-0 kubenswrapper[4143]: W0313 12:34:54.678864 4143 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Mar 13 12:34:54.685034 master-0 kubenswrapper[4143]: W0313 12:34:54.678869 4143 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Mar 13 12:34:54.685034 master-0 kubenswrapper[4143]: W0313 12:34:54.678874 4143 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Mar 13 12:34:54.685034 master-0 kubenswrapper[4143]: W0313 12:34:54.678878 4143 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Mar 13 12:34:54.685034 master-0 kubenswrapper[4143]: W0313 12:34:54.678882 4143 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Mar 13 12:34:54.685034 master-0 kubenswrapper[4143]: W0313 12:34:54.678886 4143 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Mar 13 12:34:54.685034 master-0 kubenswrapper[4143]: W0313 12:34:54.678890 4143 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Mar 13 12:34:54.685034 master-0 kubenswrapper[4143]: W0313 12:34:54.678893 4143 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Mar 13 12:34:54.685034 master-0 kubenswrapper[4143]: I0313 12:34:54.678900 4143 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false StreamingCollectionEncodingToJSON:true StreamingCollectionEncodingToProtobuf:true TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]}
Mar 13 12:34:54.685795 master-0 kubenswrapper[4143]: I0313 12:34:54.679173 4143 server.go:940] "Client rotation is on, will bootstrap in background"
Mar 13 12:34:54.685795 master-0 kubenswrapper[4143]: I0313 12:34:54.684483 4143 bootstrap.go:101] "Use the bootstrap credentials to request a cert, and set kubeconfig to point to the certificate dir"
Mar 13 12:34:54.685795 master-0 kubenswrapper[4143]: I0313 12:34:54.685591 4143 server.go:997] "Starting client certificate rotation"
Mar 13 12:34:54.685795 master-0 kubenswrapper[4143]: I0313 12:34:54.685629 4143 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate rotation is enabled
Mar 13 12:34:54.686781 master-0 kubenswrapper[4143]: I0313 12:34:54.686709 4143 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates
Mar 13 12:34:54.710608 master-0 kubenswrapper[4143]: I0313 12:34:54.710504 4143 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt"
Mar 13 12:34:54.713076 master-0 kubenswrapper[4143]: I0313 12:34:54.712995 4143 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt"
Mar 13 12:34:54.717425 master-0 kubenswrapper[4143]: E0313 12:34:54.717326 4143 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.sno.openstack.lab:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError"
Mar 13 12:34:54.747185 master-0 kubenswrapper[4143]: I0313 12:34:54.747036 4143 log.go:25] "Validated CRI v1 runtime API"
Mar 13 12:34:54.760703 master-0 kubenswrapper[4143]: I0313 12:34:54.760619 4143 log.go:25] "Validated CRI v1 image API"
Mar 13 12:34:54.764445 master-0 kubenswrapper[4143]: I0313 12:34:54.764400 4143 server.go:1437] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Mar 13 12:34:54.770408 master-0 kubenswrapper[4143]: I0313 12:34:54.770258 4143 fs.go:135] Filesystem UUIDs: map[7B77-95E7:/dev/vda2 910678ff-f77e-4a7d-8d53-86f2ac47a823:/dev/vda4 bee91dc0-9d5b-4e60-b908-76b0c18f6366:/dev/vda3]
Mar 13 12:34:54.770408 master-0 kubenswrapper[4143]: I0313 12:34:54.770304 4143 fs.go:136] Filesystem partitions: map[/dev/shm:{mountpoint:/dev/shm major:0 minor:22 fsType:tmpfs blockSize:0} /dev/vda3:{mountpoint:/boot major:252 minor:3 fsType:ext4 blockSize:0} /dev/vda4:{mountpoint:/var major:252 minor:4 fsType:xfs blockSize:0} /run:{mountpoint:/run major:0 minor:24 fsType:tmpfs blockSize:0} /tmp:{mountpoint:/tmp major:0 minor:30 fsType:tmpfs blockSize:0}]
Mar 13 12:34:54.795509 master-0 kubenswrapper[4143]: I0313 12:34:54.794973 4143 manager.go:217] Machine: {Timestamp:2026-03-13 12:34:54.79173674 +0000 UTC m=+0.538881154 CPUVendorID:AuthenticAMD NumCores:12 NumPhysicalCores:1 NumSockets:12 CpuFrequency:2799998 MemoryCapacity:33654128640 SwapCapacity:0 MemoryByType:map[] NVMInfo:{MemoryModeCapacity:0 AppDirectModeCapacity:0 AvgPowerBudget:0} HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] MachineID:8daa6345b1f242d1bcc5f3b6bc2ba573 SystemUUID:8daa6345-b1f2-42d1-bcc5-f3b6bc2ba573 BootID:5a21c0be-2989-406d-99e7-723bbc7963b9 Filesystems:[{Device:/dev/vda3 DeviceMajor:252 DeviceMinor:3 Capacity:366869504 Type:vfs Inodes:98304 HasInodes:true} {Device:/dev/shm DeviceMajor:0 DeviceMinor:22 Capacity:16827064320 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run DeviceMajor:0 DeviceMinor:24 Capacity:6730825728 Type:vfs Inodes:819200 HasInodes:true} {Device:/dev/vda4 DeviceMajor:252 DeviceMinor:4 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/tmp DeviceMajor:0 DeviceMinor:30 Capacity:16827064320 Type:vfs Inodes:1048576 HasInodes:true}] DiskMap:map[252:0:{Name:vda Major:252 Minor:0 Size:214748364800 Scheduler:none} 252:16:{Name:vdb Major:252 Minor:16 Size:21474836480 Scheduler:none} 252:32:{Name:vdc Major:252 Minor:32 Size:21474836480 Scheduler:none} 252:48:{Name:vdd Major:252 Minor:48 Size:21474836480 Scheduler:none} 252:64:{Name:vde Major:252 Minor:64 Size:21474836480 Scheduler:none}] NetworkDevices:[{Name:br-ex MacAddress:fa:16:9e:81:f6:10 Speed:0 Mtu:9000} {Name:eth0 MacAddress:fa:16:9e:81:f6:10 Speed:-1 Mtu:9000} {Name:eth1 MacAddress:fa:16:3e:68:13:a8 Speed:-1 Mtu:9000} {Name:eth2 MacAddress:fa:16:3e:45:8d:c9 Speed:-1 Mtu:9000} {Name:ovs-system MacAddress:0e:62:76:22:8d:d1 Speed:0 Mtu:1500}] Topology:[{Id:0 Memory:33654128640 HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] Cores:[{Id:0 Threads:[0] Caches:[{Id:0 Size:32768 Type:Data Level:1} {Id:0 Size:32768 Type:Instruction Level:1} {Id:0 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:0 Size:16777216 Type:Unified Level:3}] SocketID:0 BookID: DrawerID:} {Id:0 Threads:[1] Caches:[{Id:1 Size:32768 Type:Data Level:1} {Id:1 Size:32768 Type:Instruction Level:1} {Id:1 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:1 Size:16777216 Type:Unified Level:3}] SocketID:1 BookID: DrawerID:} {Id:0 Threads:[10] Caches:[{Id:10 Size:32768 Type:Data Level:1} {Id:10 Size:32768 Type:Instruction Level:1} {Id:10 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:10 Size:16777216 Type:Unified Level:3}] SocketID:10 BookID: DrawerID:} {Id:0 Threads:[11] Caches:[{Id:11 Size:32768 Type:Data Level:1} {Id:11 Size:32768 Type:Instruction Level:1} {Id:11 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:11 Size:16777216 Type:Unified Level:3}] SocketID:11 BookID: DrawerID:} {Id:0 Threads:[2] Caches:[{Id:2 Size:32768 Type:Data Level:1} {Id:2 Size:32768 Type:Instruction Level:1} {Id:2 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:2 Size:16777216 Type:Unified Level:3}] SocketID:2 BookID: DrawerID:} {Id:0 Threads:[3] Caches:[{Id:3 Size:32768 Type:Data Level:1} {Id:3 Size:32768 Type:Instruction Level:1} {Id:3 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:3 Size:16777216 Type:Unified Level:3}] SocketID:3 BookID: DrawerID:} {Id:0 Threads:[4] Caches:[{Id:4 Size:32768 Type:Data Level:1} {Id:4 Size:32768 Type:Instruction Level:1} {Id:4 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:4 Size:16777216 Type:Unified Level:3}] SocketID:4 BookID: DrawerID:} {Id:0 Threads:[5] Caches:[{Id:5 Size:32768 Type:Data Level:1} {Id:5 Size:32768 Type:Instruction Level:1} {Id:5 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:5 Size:16777216 Type:Unified Level:3}] SocketID:5 BookID: DrawerID:} {Id:0 Threads:[6] Caches:[{Id:6 Size:32768 Type:Data Level:1} {Id:6 Size:32768 Type:Instruction Level:1} {Id:6 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:6 Size:16777216 Type:Unified Level:3}] SocketID:6 BookID: DrawerID:} {Id:0 Threads:[7] Caches:[{Id:7 Size:32768 Type:Data Level:1} {Id:7 Size:32768 Type:Instruction Level:1} {Id:7 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:7 Size:16777216 Type:Unified Level:3}] SocketID:7 BookID: DrawerID:} {Id:0 Threads:[8] Caches:[{Id:8 Size:32768 Type:Data Level:1} {Id:8 Size:32768 Type:Instruction Level:1} {Id:8 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:8 Size:16777216 Type:Unified Level:3}] SocketID:8 BookID: DrawerID:} {Id:0 Threads:[9] Caches:[{Id:9 Size:32768 Type:Data Level:1} {Id:9 Size:32768 Type:Instruction Level:1} {Id:9 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:9 Size:16777216 Type:Unified Level:3}] SocketID:9 BookID: DrawerID:}] Caches:[] Distances:[10]}] CloudProvider:Unknown InstanceType:Unknown InstanceID:None}
Mar 13 12:34:54.795509 master-0 kubenswrapper[4143]: I0313 12:34:54.795450 4143 manager_no_libpfm.go:29] cAdvisor is build without cgo and/or libpfm support. Perf event counters are not available.
Mar 13 12:34:54.795763 master-0 kubenswrapper[4143]: I0313 12:34:54.795722 4143 manager.go:233] Version: {KernelVersion:5.14.0-427.111.1.el9_4.x86_64 ContainerOsVersion:Red Hat Enterprise Linux CoreOS 418.94.202602172219-0 DockerVersion: DockerAPIVersion: CadvisorVersion: CadvisorRevision:}
Mar 13 12:34:54.797142 master-0 kubenswrapper[4143]: I0313 12:34:54.797100 4143 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority"
Mar 13 12:34:54.797594 master-0 kubenswrapper[4143]: I0313 12:34:54.797534 4143 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Mar 13 12:34:54.798024 master-0 kubenswrapper[4143]: I0313 12:34:54.797595 4143 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"master-0","RuntimeCgroupsName":"/system.slice/crio.service","SystemCgroupsName":"/system.slice","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":true,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":{"cpu":"500m","ephemeral-storage":"1Gi","memory":"1Gi"},"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":4096,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Mar 13 12:34:54.798118 master-0 kubenswrapper[4143]: I0313 12:34:54.798087 4143 topology_manager.go:138] "Creating topology manager with none policy"
Mar 13 12:34:54.798177 master-0 kubenswrapper[4143]: I0313 12:34:54.798116 4143 container_manager_linux.go:303] "Creating device plugin manager"
Mar 13 12:34:54.798206 master-0 kubenswrapper[4143]: I0313 12:34:54.798190 4143 manager.go:142] "Creating Device Plugin manager" path="/var/lib/kubelet/device-plugins/kubelet.sock"
Mar 13 12:34:54.798281 master-0 kubenswrapper[4143]: I0313 12:34:54.798249 4143 server.go:66] "Creating device plugin registration server" version="v1beta1" socket="/var/lib/kubelet/device-plugins/kubelet.sock"
Mar 13 12:34:54.798556 master-0 kubenswrapper[4143]: I0313 12:34:54.798517 4143 state_mem.go:36] "Initialized new in-memory state store"
Mar 13 12:34:54.798740 master-0 kubenswrapper[4143]: I0313 12:34:54.798702 4143 server.go:1245] "Using root directory" path="/var/lib/kubelet"
Mar 13 12:34:54.806000 master-0 kubenswrapper[4143]: I0313 12:34:54.805952 4143 kubelet.go:418] "Attempting to sync node with API server"
Mar 13 12:34:54.806061 master-0 kubenswrapper[4143]: I0313 12:34:54.806017 4143 kubelet.go:313] "Adding static pod path" path="/etc/kubernetes/manifests"
Mar 13 12:34:54.806183 master-0 kubenswrapper[4143]: I0313 12:34:54.806133 4143 file.go:69] "Watching path" path="/etc/kubernetes/manifests"
Mar 13 12:34:54.806214 master-0 kubenswrapper[4143]: I0313 12:34:54.806201 4143 kubelet.go:324] "Adding apiserver pod source"
Mar 13 12:34:54.806289 master-0 kubenswrapper[4143]: I0313 12:34:54.806260 4143 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Mar 13 12:34:54.812663 master-0 kubenswrapper[4143]: W0313 12:34:54.812571 4143 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.sno.openstack.lab:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused
Mar 13 12:34:54.812727 master-0 kubenswrapper[4143]: E0313 12:34:54.812697 4143 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.sno.openstack.lab:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError"
Mar 13 12:34:54.812789 master-0 kubenswrapper[4143]: W0313 12:34:54.812750 4143 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.sno.openstack.lab:6443/api/v1/nodes?fieldSelector=metadata.name%3Dmaster-0&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused
Mar 13 12:34:54.812830 master-0 kubenswrapper[4143]: E0313 12:34:54.812788 4143 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes?fieldSelector=metadata.name%3Dmaster-0&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError"
Mar 13 12:34:54.827079 master-0 kubenswrapper[4143]: I0313 12:34:54.827016 4143 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="cri-o" version="1.31.13-8.rhaos4.18.gitd78977c.el9" apiVersion="v1"
Mar 13 12:34:54.831665 master-0 kubenswrapper[4143]: I0313 12:34:54.831614 4143 kubelet.go:854] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Mar 13 12:34:54.832187 master-0 kubenswrapper[4143]: I0313 12:34:54.832123 4143 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/portworx-volume"
Mar 13 12:34:54.832266 master-0 kubenswrapper[4143]: I0313 12:34:54.832231 4143 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/empty-dir"
Mar 13 12:34:54.832311 master-0 kubenswrapper[4143]: I0313 12:34:54.832268 4143 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/git-repo"
Mar 13 12:34:54.832311 master-0 kubenswrapper[4143]: I0313 12:34:54.832291 4143 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/host-path"
Mar 13 12:34:54.832311 master-0 kubenswrapper[4143]: I0313 12:34:54.832305 4143 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/nfs"
Mar 13 12:34:54.832410 master-0 kubenswrapper[4143]: I0313 12:34:54.832320 4143 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/secret"
Mar 13 12:34:54.832410 master-0 kubenswrapper[4143]: I0313 12:34:54.832341 4143 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/iscsi"
Mar 13 12:34:54.832410 master-0 kubenswrapper[4143]: I0313 12:34:54.832361 4143 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/downward-api"
Mar 13 12:34:54.832410 master-0 kubenswrapper[4143]: I0313 12:34:54.832384 4143 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/fc"
Mar 13 12:34:54.832410 master-0 kubenswrapper[4143]: I0313 12:34:54.832398 4143 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/configmap"
Mar 13 12:34:54.832559 master-0 kubenswrapper[4143]: I0313 12:34:54.832420 4143 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/projected"
Mar 13 12:34:54.832559 master-0 kubenswrapper[4143]: I0313 12:34:54.832447 4143 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/local-volume"
Mar 13 12:34:54.835299 master-0 kubenswrapper[4143]: I0313 12:34:54.835276 4143 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/csi"
Mar 13 12:34:54.836320 master-0 kubenswrapper[4143]: I0313 12:34:54.836284 4143 server.go:1280] "Started kubelet"
Mar 13 12:34:54.838180 master-0 kubenswrapper[4143]: I0313 12:34:54.838020 4143 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Mar 13 12:34:54.838324 master-0 kubenswrapper[4143]: I0313 12:34:54.838278 4143 server_v1.go:47] "podresources" method="list" useActivePods=true
Mar 13 12:34:54.838768 master-0 kubenswrapper[4143]: I0313 12:34:54.838430 4143 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Mar 13 12:34:54.838711 master-0 systemd[1]: Started Kubernetes Kubelet.
Mar 13 12:34:54.838967 master-0 kubenswrapper[4143]: I0313 12:34:54.838917 4143 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Mar 13 12:34:54.840859 master-0 kubenswrapper[4143]: I0313 12:34:54.840791 4143 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csinodes/master-0?resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused
Mar 13 12:34:54.844102 master-0 kubenswrapper[4143]: I0313 12:34:54.844043 4143 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate rotation is enabled
Mar 13 12:34:54.844173 master-0 kubenswrapper[4143]: I0313 12:34:54.844146 4143 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Mar 13 12:34:54.845079 master-0 kubenswrapper[4143]: I0313 12:34:54.844771 4143 volume_manager.go:287] "The desired_state_of_world populator starts"
Mar 13 12:34:54.845079 master-0 kubenswrapper[4143]: I0313 12:34:54.844816 4143 volume_manager.go:289] "Starting Kubelet Volume Manager"
Mar 13 12:34:54.845079 master-0 kubenswrapper[4143]: E0313 12:34:54.844792 4143 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Mar 13 12:34:54.845079 master-0 kubenswrapper[4143]: I0313 12:34:54.844922 4143 desired_state_of_world_populator.go:147] "Desired state populator starts to run"
Mar 13 12:34:54.845079 master-0 kubenswrapper[4143]: I0313 12:34:54.845037 4143 reconstruct.go:97] "Volume reconstruction finished"
Mar 13 12:34:54.845079 master-0 kubenswrapper[4143]: I0313 12:34:54.845049 4143 reconciler.go:26] "Reconciler: start to sync state"
Mar 13 12:34:54.846368 master-0 kubenswrapper[4143]: W0313 12:34:54.846271 4143 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused
Mar 13 12:34:54.846460 master-0 kubenswrapper[4143]: E0313 12:34:54.846395 4143 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError"
Mar 13 12:34:54.846967 master-0 kubenswrapper[4143]: I0313 12:34:54.846925 4143 factory.go:153] Registering CRI-O factory
Mar 13 12:34:54.847048 master-0 kubenswrapper[4143]: I0313 12:34:54.846979 4143 factory.go:221] Registration of the crio container factory successfully
Mar 13 12:34:54.847141 master-0 kubenswrapper[4143]: I0313 12:34:54.847104 4143 factory.go:219] Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot unix dial containerd api service: dial unix /run/containerd/containerd.sock: connect: no such file or directory
Mar 13 12:34:54.847242 master-0 kubenswrapper[4143]: I0313 12:34:54.847129 4143 factory.go:55] Registering systemd factory
Mar 13 12:34:54.847242 master-0 kubenswrapper[4143]: I0313 12:34:54.847174 4143 factory.go:221] Registration of the systemd container factory successfully
Mar 13 12:34:54.847242 master-0 kubenswrapper[4143]: I0313 12:34:54.847208 4143 factory.go:103] Registering Raw factory
Mar 13 12:34:54.847242 master-0 kubenswrapper[4143]: I0313 12:34:54.847235 4143 manager.go:1196] Started watching for new ooms in manager
Mar 13 12:34:54.848635 master-0 kubenswrapper[4143]: I0313 12:34:54.848583 4143 manager.go:319] Starting recovery of all containers
Mar 13 12:34:54.849286 master-0 kubenswrapper[4143]: E0313 12:34:54.849118 4143 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" interval="200ms"
Mar 13 12:34:54.870073 master-0 kubenswrapper[4143]: E0313 12:34:54.861551 4143 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/default/events\": dial tcp 192.168.32.10:6443: connect: connection refused" event="&Event{ObjectMeta:{master-0.189c66b8418273c0 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-13 12:34:54.836216768 +0000 UTC m=+0.583361132,LastTimestamp:2026-03-13 12:34:54.836216768 +0000 UTC m=+0.583361132,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Mar 13 12:34:54.871861 master-0 kubenswrapper[4143]: I0313 12:34:54.871821 4143 server.go:449] "Adding debug handlers to kubelet server"
Mar 13 12:34:54.873411 master-0 kubenswrapper[4143]: E0313 12:34:54.873374 4143 kubelet.go:1495] "Image garbage collection failed once. Stats initialization may not have completed yet" err="failed to get imageFs info: unable to find data in memory cache"
Mar 13 12:34:54.893862 master-0 kubenswrapper[4143]: I0313 12:34:54.893768 4143 manager.go:324] Recovery completed
Mar 13 12:34:54.908311 master-0 kubenswrapper[4143]: I0313 12:34:54.908271 4143 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 13 12:34:54.910177 master-0 kubenswrapper[4143]: I0313 12:34:54.910132 4143 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Mar 13 12:34:54.910252 master-0 kubenswrapper[4143]: I0313 12:34:54.910186 4143 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Mar 13 12:34:54.910252 master-0 kubenswrapper[4143]: I0313 12:34:54.910199 4143 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Mar 13 12:34:54.910894 master-0 kubenswrapper[4143]: I0313 12:34:54.910842 4143 cpu_manager.go:225] "Starting CPU manager" policy="none"
Mar 13 12:34:54.910894 master-0 kubenswrapper[4143]: I0313 12:34:54.910888 4143 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s"
Mar 13 12:34:54.910965 master-0 kubenswrapper[4143]: I0313 12:34:54.910925 4143 state_mem.go:36] "Initialized new in-memory state store"
Mar 13 12:34:54.944582 master-0 kubenswrapper[4143]: I0313 12:34:54.944497 4143 policy_none.go:49] "None policy: Start"
Mar 13 12:34:54.945048 master-0 kubenswrapper[4143]: E0313 12:34:54.945002 4143 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Mar 13 12:34:54.946042 master-0 kubenswrapper[4143]: I0313 12:34:54.946006 4143 memory_manager.go:170] "Starting memorymanager" policy="None"
Mar 13 12:34:54.946088 master-0 kubenswrapper[4143]: I0313 12:34:54.946053 4143 state_mem.go:35] "Initializing new in-memory state store"
Mar 13 12:34:55.107291 master-0 kubenswrapper[4143]: E0313 12:34:55.045649 4143 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Mar 13 12:34:55.107291 master-0 kubenswrapper[4143]: I0313 12:34:55.046505 4143 manager.go:334] "Starting Device Plugin manager"
Mar 13 12:34:55.107291 master-0 kubenswrapper[4143]: I0313 12:34:55.046563 4143 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Mar 13 12:34:55.107291 master-0 kubenswrapper[4143]: I0313 12:34:55.046586 4143 server.go:79] "Starting device plugin registration server"
Mar 13 12:34:55.107291 master-0 kubenswrapper[4143]: I0313 12:34:55.047153 4143 eviction_manager.go:189] "Eviction manager: starting control loop"
Mar 13 12:34:55.107291 master-0 kubenswrapper[4143]: I0313 12:34:55.047443 4143 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Mar 13 12:34:55.107291 master-0 kubenswrapper[4143]: I0313 12:34:55.047623 4143 plugin_watcher.go:51] "Plugin Watcher Start" path="/var/lib/kubelet/plugins_registry"
Mar 13 12:34:55.107291 master-0 kubenswrapper[4143]: I0313 12:34:55.047756 4143 plugin_manager.go:116] "The desired_state_of_world populator (plugin watcher) starts"
Mar 13 12:34:55.107291 master-0 kubenswrapper[4143]: I0313 12:34:55.047767 4143 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Mar 13 12:34:55.107291 master-0 kubenswrapper[4143]: E0313 12:34:55.049366 4143 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"master-0\" not found"
Mar 13 12:34:55.107291 master-0 kubenswrapper[4143]: E0313 12:34:55.050066 4143 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" interval="400ms"
Mar 13 12:34:55.107291 master-0 kubenswrapper[4143]: I0313 12:34:55.078328 4143 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Mar 13 12:34:55.107291 master-0 kubenswrapper[4143]: I0313 12:34:55.080900 4143 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Mar 13 12:34:55.107291 master-0 kubenswrapper[4143]: I0313 12:34:55.081030 4143 status_manager.go:217] "Starting to sync pod status with apiserver"
Mar 13 12:34:55.107291 master-0 kubenswrapper[4143]: I0313 12:34:55.081071 4143 kubelet.go:2335] "Starting kubelet main sync loop"
Mar 13 12:34:55.107291 master-0 kubenswrapper[4143]: E0313 12:34:55.081359 4143 kubelet.go:2359] "Skipping pod synchronization" err="PLEG is not healthy: pleg has yet to be successful"
Mar 13 12:34:55.107291 master-0 kubenswrapper[4143]: W0313 12:34:55.082198 4143 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.sno.openstack.lab:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused
Mar 13 12:34:55.107291 master-0 kubenswrapper[4143]: E0313 12:34:55.082282 4143 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.sno.openstack.lab:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError"
Mar 13 12:34:55.148598 master-0 kubenswrapper[4143]: I0313 12:34:55.148503 4143 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 13 12:34:55.149687 master-0 kubenswrapper[4143]: I0313 12:34:55.149615 4143 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Mar 13 12:34:55.149687 master-0 kubenswrapper[4143]: I0313 12:34:55.149656 4143 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Mar 13 12:34:55.149687 master-0 kubenswrapper[4143]: I0313 12:34:55.149666 4143 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Mar 13 12:34:55.149687 master-0 kubenswrapper[4143]: I0313 12:34:55.149692 4143 kubelet_node_status.go:76] "Attempting to register node" node="master-0"
Mar 13 12:34:55.150603 master-0 kubenswrapper[4143]: E0313 12:34:55.150563 4143 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.sno.openstack.lab:6443/api/v1/nodes\": dial tcp 192.168.32.10:6443: connect: connection refused" node="master-0"
Mar 13 12:34:55.182278 master-0 kubenswrapper[4143]: I0313 12:34:55.182093 4143 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["kube-system/bootstrap-kube-scheduler-master-0","openshift-machine-config-operator/kube-rbac-proxy-crio-master-0","openshift-etcd/etcd-master-0-master-0","openshift-kube-apiserver/bootstrap-kube-apiserver-master-0","kube-system/bootstrap-kube-controller-manager-master-0"]
Mar 13 12:34:55.182278 master-0 kubenswrapper[4143]: I0313 12:34:55.182277 4143 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 13 12:34:55.183567 master-0 kubenswrapper[4143]: I0313 12:34:55.183498 4143 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Mar 13 12:34:55.183567 master-0 kubenswrapper[4143]: I0313 12:34:55.183576 4143 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Mar 13 12:34:55.183567 master-0 kubenswrapper[4143]: I0313 12:34:55.183588 4143 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Mar 13 12:34:55.183905 master-0 kubenswrapper[4143]: I0313 12:34:55.183741 4143 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 13 12:34:55.184600 master-0 kubenswrapper[4143]: I0313 12:34:55.184524 4143 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kube-system/bootstrap-kube-scheduler-master-0"
Mar 13 12:34:55.184600 master-0 kubenswrapper[4143]: I0313 12:34:55.184601 4143 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 13 12:34:55.185529 master-0 kubenswrapper[4143]: I0313 12:34:55.185245 4143 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Mar 13 12:34:55.185529 master-0 kubenswrapper[4143]: I0313 12:34:55.185270 4143 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Mar 13 12:34:55.185529 master-0 kubenswrapper[4143]: I0313 12:34:55.185534 4143 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Mar 13 12:34:55.185917 master-0 kubenswrapper[4143]: I0313 12:34:55.185690 4143 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 13 12:34:55.185917 master-0 kubenswrapper[4143]: I0313 12:34:55.185864 4143 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0"
Mar 13 12:34:55.186363 master-0 kubenswrapper[4143]: I0313 12:34:55.186315 4143 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 13 12:34:55.192548 master-0 kubenswrapper[4143]: I0313 12:34:55.192470 4143 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Mar 13 12:34:55.192548 master-0 kubenswrapper[4143]: I0313 12:34:55.192524 4143 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Mar 13 12:34:55.192548 master-0 kubenswrapper[4143]: I0313 12:34:55.192536 4143 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Mar 13 12:34:55.192888 master-0 kubenswrapper[4143]: I0313 12:34:55.192747 4143 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Mar 13 12:34:55.192888 master-0 kubenswrapper[4143]: I0313 12:34:55.192810 4143 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Mar 13 12:34:55.192888 master-0 kubenswrapper[4143]: I0313 12:34:55.192834 4143 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Mar 13 12:34:55.193762 master-0 kubenswrapper[4143]: I0313 12:34:55.193097 4143 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Mar 13 12:34:55.193762 master-0 kubenswrapper[4143]: I0313 12:34:55.193149 4143 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Mar 13 12:34:55.193762 master-0 kubenswrapper[4143]: I0313 12:34:55.193160 4143 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Mar 13 12:34:55.193762 master-0 kubenswrapper[4143]: I0313 12:34:55.193348 4143 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 13 12:34:55.193762 master-0 kubenswrapper[4143]: I0313 12:34:55.193598 4143 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-master-0-master-0"
Mar 13 12:34:55.193928 master-0 kubenswrapper[4143]: I0313 12:34:55.193779 4143 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 13 12:34:55.195104 master-0 kubenswrapper[4143]: I0313 12:34:55.195069 4143 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Mar 13 12:34:55.195104 master-0 kubenswrapper[4143]: I0313 12:34:55.195098 4143 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Mar 13 12:34:55.195104 master-0 kubenswrapper[4143]: I0313 12:34:55.195106 4143 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Mar 13 12:34:55.195266 master-0 kubenswrapper[4143]: I0313 12:34:55.195241 4143 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 13 12:34:55.195677 master-0 kubenswrapper[4143]: I0313 12:34:55.195625 4143 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Mar 13 12:34:55.195734 master-0 kubenswrapper[4143]: I0313 12:34:55.195693 4143 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Mar 13 12:34:55.195734 master-0 kubenswrapper[4143]: I0313 12:34:55.195708 4143 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Mar 13 12:34:55.195734 master-0 kubenswrapper[4143]: I0313 12:34:55.195649 4143 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Mar 13 12:34:55.195818 master-0 kubenswrapper[4143]: I0313 12:34:55.195756 4143 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 13 12:34:55.196252 master-0 kubenswrapper[4143]: I0313 12:34:55.196037 4143 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Mar 13 12:34:55.196252 master-0 kubenswrapper[4143]: I0313 12:34:55.196074 4143 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Mar 13 12:34:55.196252 master-0 kubenswrapper[4143]: I0313 12:34:55.196087 4143 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Mar 13 12:34:55.196608 master-0 kubenswrapper[4143]: I0313 12:34:55.196585 4143 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 13 12:34:55.196668 master-0 kubenswrapper[4143]: I0313 12:34:55.196645 4143 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 13 12:34:55.196956 master-0 kubenswrapper[4143]: I0313 12:34:55.196909 4143 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Mar 13 12:34:55.196993 master-0 kubenswrapper[4143]: I0313 12:34:55.196968 4143 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Mar 13 12:34:55.197021 master-0 kubenswrapper[4143]: I0313 12:34:55.196989 4143 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Mar 13 12:34:55.197674 master-0 kubenswrapper[4143]: I0313 12:34:55.197628 4143 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Mar 13 
12:34:55.197730 master-0 kubenswrapper[4143]: I0313 12:34:55.197677 4143 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Mar 13 12:34:55.197730 master-0 kubenswrapper[4143]: I0313 12:34:55.197700 4143 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Mar 13 12:34:55.246620 master-0 kubenswrapper[4143]: I0313 12:34:55.246501 4143 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/host-path/354f29997baa583b6238f7de9108ee10-certs\") pod \"etcd-master-0-master-0\" (UID: \"354f29997baa583b6238f7de9108ee10\") " pod="openshift-etcd/etcd-master-0-master-0" Mar 13 12:34:55.246620 master-0 kubenswrapper[4143]: I0313 12:34:55.246568 4143 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/host-path/f78c05e1499b533b83f091333d61f045-config\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"f78c05e1499b533b83f091333d61f045\") " pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 13 12:34:55.246620 master-0 kubenswrapper[4143]: I0313 12:34:55.246601 4143 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssl-certs-host\" (UniqueName: \"kubernetes.io/host-path/f78c05e1499b533b83f091333d61f045-ssl-certs-host\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"f78c05e1499b533b83f091333d61f045\") " pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 13 12:34:55.246620 master-0 kubenswrapper[4143]: I0313 12:34:55.246629 4143 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/a1a56802af72ce1aac6b5077f1695ac0-logs\") pod \"bootstrap-kube-scheduler-master-0\" (UID: \"a1a56802af72ce1aac6b5077f1695ac0\") " 
pod="kube-system/bootstrap-kube-scheduler-master-0" Mar 13 12:34:55.246909 master-0 kubenswrapper[4143]: I0313 12:34:55.246652 4143 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes-cloud\" (UniqueName: \"kubernetes.io/host-path/5f77c8e18b751d90bc0dfe2d4e304050-etc-kubernetes-cloud\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"5f77c8e18b751d90bc0dfe2d4e304050\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Mar 13 12:34:55.246909 master-0 kubenswrapper[4143]: I0313 12:34:55.246677 4143 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes-cloud\" (UniqueName: \"kubernetes.io/host-path/f78c05e1499b533b83f091333d61f045-etc-kubernetes-cloud\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"f78c05e1499b533b83f091333d61f045\") " pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 13 12:34:55.246909 master-0 kubenswrapper[4143]: I0313 12:34:55.246700 4143 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/e9add8df47182fc2eaf8cd78016ebe72-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-master-0\" (UID: \"e9add8df47182fc2eaf8cd78016ebe72\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" Mar 13 12:34:55.246909 master-0 kubenswrapper[4143]: I0313 12:34:55.246729 4143 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/5f77c8e18b751d90bc0dfe2d4e304050-secrets\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"5f77c8e18b751d90bc0dfe2d4e304050\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Mar 13 12:34:55.246909 master-0 kubenswrapper[4143]: I0313 12:34:55.246750 4143 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"config\" (UniqueName: \"kubernetes.io/host-path/5f77c8e18b751d90bc0dfe2d4e304050-config\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"5f77c8e18b751d90bc0dfe2d4e304050\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Mar 13 12:34:55.246909 master-0 kubenswrapper[4143]: I0313 12:34:55.246843 4143 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssl-certs-host\" (UniqueName: \"kubernetes.io/host-path/5f77c8e18b751d90bc0dfe2d4e304050-ssl-certs-host\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"5f77c8e18b751d90bc0dfe2d4e304050\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Mar 13 12:34:55.246909 master-0 kubenswrapper[4143]: I0313 12:34:55.246873 4143 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/5f77c8e18b751d90bc0dfe2d4e304050-logs\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"5f77c8e18b751d90bc0dfe2d4e304050\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Mar 13 12:34:55.246909 master-0 kubenswrapper[4143]: I0313 12:34:55.246894 4143 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/5f77c8e18b751d90bc0dfe2d4e304050-audit-dir\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"5f77c8e18b751d90bc0dfe2d4e304050\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Mar 13 12:34:55.247122 master-0 kubenswrapper[4143]: I0313 12:34:55.246928 4143 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/f78c05e1499b533b83f091333d61f045-logs\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"f78c05e1499b533b83f091333d61f045\") " pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 13 12:34:55.247122 master-0 
kubenswrapper[4143]: I0313 12:34:55.246950 4143 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/e9add8df47182fc2eaf8cd78016ebe72-etc-kube\") pod \"kube-rbac-proxy-crio-master-0\" (UID: \"e9add8df47182fc2eaf8cd78016ebe72\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" Mar 13 12:34:55.247122 master-0 kubenswrapper[4143]: I0313 12:34:55.246968 4143 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/354f29997baa583b6238f7de9108ee10-data-dir\") pod \"etcd-master-0-master-0\" (UID: \"354f29997baa583b6238f7de9108ee10\") " pod="openshift-etcd/etcd-master-0-master-0" Mar 13 12:34:55.247122 master-0 kubenswrapper[4143]: I0313 12:34:55.247003 4143 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/f78c05e1499b533b83f091333d61f045-secrets\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"f78c05e1499b533b83f091333d61f045\") " pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 13 12:34:55.247122 master-0 kubenswrapper[4143]: I0313 12:34:55.247026 4143 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/a1a56802af72ce1aac6b5077f1695ac0-secrets\") pod \"bootstrap-kube-scheduler-master-0\" (UID: \"a1a56802af72ce1aac6b5077f1695ac0\") " pod="kube-system/bootstrap-kube-scheduler-master-0" Mar 13 12:34:55.348123 master-0 kubenswrapper[4143]: I0313 12:34:55.347978 4143 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/a1a56802af72ce1aac6b5077f1695ac0-logs\") pod \"bootstrap-kube-scheduler-master-0\" (UID: \"a1a56802af72ce1aac6b5077f1695ac0\") " 
pod="kube-system/bootstrap-kube-scheduler-master-0" Mar 13 12:34:55.348559 master-0 kubenswrapper[4143]: I0313 12:34:55.348296 4143 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/host-path/354f29997baa583b6238f7de9108ee10-certs\") pod \"etcd-master-0-master-0\" (UID: \"354f29997baa583b6238f7de9108ee10\") " pod="openshift-etcd/etcd-master-0-master-0" Mar 13 12:34:55.348559 master-0 kubenswrapper[4143]: I0313 12:34:55.348399 4143 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/host-path/f78c05e1499b533b83f091333d61f045-config\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"f78c05e1499b533b83f091333d61f045\") " pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 13 12:34:55.348559 master-0 kubenswrapper[4143]: I0313 12:34:55.348400 4143 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/host-path/354f29997baa583b6238f7de9108ee10-certs\") pod \"etcd-master-0-master-0\" (UID: \"354f29997baa583b6238f7de9108ee10\") " pod="openshift-etcd/etcd-master-0-master-0" Mar 13 12:34:55.348559 master-0 kubenswrapper[4143]: I0313 12:34:55.348459 4143 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssl-certs-host\" (UniqueName: \"kubernetes.io/host-path/f78c05e1499b533b83f091333d61f045-ssl-certs-host\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"f78c05e1499b533b83f091333d61f045\") " pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 13 12:34:55.348559 master-0 kubenswrapper[4143]: I0313 12:34:55.348426 4143 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssl-certs-host\" (UniqueName: \"kubernetes.io/host-path/f78c05e1499b533b83f091333d61f045-ssl-certs-host\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"f78c05e1499b533b83f091333d61f045\") " 
pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 13 12:34:55.348559 master-0 kubenswrapper[4143]: I0313 12:34:55.348506 4143 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/host-path/f78c05e1499b533b83f091333d61f045-config\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"f78c05e1499b533b83f091333d61f045\") " pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 13 12:34:55.348940 master-0 kubenswrapper[4143]: I0313 12:34:55.348506 4143 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/5f77c8e18b751d90bc0dfe2d4e304050-secrets\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"5f77c8e18b751d90bc0dfe2d4e304050\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Mar 13 12:34:55.348940 master-0 kubenswrapper[4143]: I0313 12:34:55.348411 4143 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/a1a56802af72ce1aac6b5077f1695ac0-logs\") pod \"bootstrap-kube-scheduler-master-0\" (UID: \"a1a56802af72ce1aac6b5077f1695ac0\") " pod="kube-system/bootstrap-kube-scheduler-master-0" Mar 13 12:34:55.348940 master-0 kubenswrapper[4143]: I0313 12:34:55.348678 4143 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes-cloud\" (UniqueName: \"kubernetes.io/host-path/5f77c8e18b751d90bc0dfe2d4e304050-etc-kubernetes-cloud\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"5f77c8e18b751d90bc0dfe2d4e304050\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Mar 13 12:34:55.348940 master-0 kubenswrapper[4143]: I0313 12:34:55.348704 4143 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes-cloud\" (UniqueName: \"kubernetes.io/host-path/5f77c8e18b751d90bc0dfe2d4e304050-etc-kubernetes-cloud\") pod \"bootstrap-kube-apiserver-master-0\" (UID: 
\"5f77c8e18b751d90bc0dfe2d4e304050\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Mar 13 12:34:55.348940 master-0 kubenswrapper[4143]: I0313 12:34:55.348752 4143 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes-cloud\" (UniqueName: \"kubernetes.io/host-path/f78c05e1499b533b83f091333d61f045-etc-kubernetes-cloud\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"f78c05e1499b533b83f091333d61f045\") " pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 13 12:34:55.348940 master-0 kubenswrapper[4143]: I0313 12:34:55.348547 4143 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/5f77c8e18b751d90bc0dfe2d4e304050-secrets\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"5f77c8e18b751d90bc0dfe2d4e304050\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Mar 13 12:34:55.348940 master-0 kubenswrapper[4143]: I0313 12:34:55.348828 4143 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/e9add8df47182fc2eaf8cd78016ebe72-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-master-0\" (UID: \"e9add8df47182fc2eaf8cd78016ebe72\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" Mar 13 12:34:55.348940 master-0 kubenswrapper[4143]: I0313 12:34:55.348892 4143 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/354f29997baa583b6238f7de9108ee10-data-dir\") pod \"etcd-master-0-master-0\" (UID: \"354f29997baa583b6238f7de9108ee10\") " pod="openshift-etcd/etcd-master-0-master-0" Mar 13 12:34:55.348940 master-0 kubenswrapper[4143]: I0313 12:34:55.348948 4143 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes-cloud\" (UniqueName: \"kubernetes.io/host-path/f78c05e1499b533b83f091333d61f045-etc-kubernetes-cloud\") 
pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"f78c05e1499b533b83f091333d61f045\") " pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 13 12:34:55.349520 master-0 kubenswrapper[4143]: I0313 12:34:55.348978 4143 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/e9add8df47182fc2eaf8cd78016ebe72-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-master-0\" (UID: \"e9add8df47182fc2eaf8cd78016ebe72\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" Mar 13 12:34:55.349520 master-0 kubenswrapper[4143]: I0313 12:34:55.348996 4143 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/354f29997baa583b6238f7de9108ee10-data-dir\") pod \"etcd-master-0-master-0\" (UID: \"354f29997baa583b6238f7de9108ee10\") " pod="openshift-etcd/etcd-master-0-master-0" Mar 13 12:34:55.349520 master-0 kubenswrapper[4143]: I0313 12:34:55.349065 4143 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/host-path/5f77c8e18b751d90bc0dfe2d4e304050-config\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"5f77c8e18b751d90bc0dfe2d4e304050\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Mar 13 12:34:55.349520 master-0 kubenswrapper[4143]: I0313 12:34:55.349071 4143 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/host-path/5f77c8e18b751d90bc0dfe2d4e304050-config\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"5f77c8e18b751d90bc0dfe2d4e304050\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Mar 13 12:34:55.349520 master-0 kubenswrapper[4143]: I0313 12:34:55.349125 4143 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssl-certs-host\" (UniqueName: \"kubernetes.io/host-path/5f77c8e18b751d90bc0dfe2d4e304050-ssl-certs-host\") 
pod \"bootstrap-kube-apiserver-master-0\" (UID: \"5f77c8e18b751d90bc0dfe2d4e304050\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Mar 13 12:34:55.349520 master-0 kubenswrapper[4143]: I0313 12:34:55.349218 4143 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/5f77c8e18b751d90bc0dfe2d4e304050-logs\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"5f77c8e18b751d90bc0dfe2d4e304050\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Mar 13 12:34:55.349520 master-0 kubenswrapper[4143]: I0313 12:34:55.349219 4143 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssl-certs-host\" (UniqueName: \"kubernetes.io/host-path/5f77c8e18b751d90bc0dfe2d4e304050-ssl-certs-host\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"5f77c8e18b751d90bc0dfe2d4e304050\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Mar 13 12:34:55.349520 master-0 kubenswrapper[4143]: I0313 12:34:55.349254 4143 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/5f77c8e18b751d90bc0dfe2d4e304050-audit-dir\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"5f77c8e18b751d90bc0dfe2d4e304050\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Mar 13 12:34:55.349520 master-0 kubenswrapper[4143]: I0313 12:34:55.349311 4143 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/5f77c8e18b751d90bc0dfe2d4e304050-audit-dir\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"5f77c8e18b751d90bc0dfe2d4e304050\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Mar 13 12:34:55.349520 master-0 kubenswrapper[4143]: I0313 12:34:55.349281 4143 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: 
\"kubernetes.io/host-path/5f77c8e18b751d90bc0dfe2d4e304050-logs\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"5f77c8e18b751d90bc0dfe2d4e304050\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Mar 13 12:34:55.349520 master-0 kubenswrapper[4143]: I0313 12:34:55.349325 4143 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/f78c05e1499b533b83f091333d61f045-logs\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"f78c05e1499b533b83f091333d61f045\") " pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 13 12:34:55.349520 master-0 kubenswrapper[4143]: I0313 12:34:55.349401 4143 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/e9add8df47182fc2eaf8cd78016ebe72-etc-kube\") pod \"kube-rbac-proxy-crio-master-0\" (UID: \"e9add8df47182fc2eaf8cd78016ebe72\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" Mar 13 12:34:55.349520 master-0 kubenswrapper[4143]: I0313 12:34:55.349452 4143 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/f78c05e1499b533b83f091333d61f045-logs\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"f78c05e1499b533b83f091333d61f045\") " pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 13 12:34:55.349520 master-0 kubenswrapper[4143]: I0313 12:34:55.349475 4143 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/f78c05e1499b533b83f091333d61f045-secrets\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"f78c05e1499b533b83f091333d61f045\") " pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 13 12:34:55.349520 master-0 kubenswrapper[4143]: I0313 12:34:55.349501 4143 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kube\" 
(UniqueName: \"kubernetes.io/host-path/e9add8df47182fc2eaf8cd78016ebe72-etc-kube\") pod \"kube-rbac-proxy-crio-master-0\" (UID: \"e9add8df47182fc2eaf8cd78016ebe72\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" Mar 13 12:34:55.350609 master-0 kubenswrapper[4143]: I0313 12:34:55.349580 4143 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/f78c05e1499b533b83f091333d61f045-secrets\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"f78c05e1499b533b83f091333d61f045\") " pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 13 12:34:55.350609 master-0 kubenswrapper[4143]: I0313 12:34:55.349651 4143 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/a1a56802af72ce1aac6b5077f1695ac0-secrets\") pod \"bootstrap-kube-scheduler-master-0\" (UID: \"a1a56802af72ce1aac6b5077f1695ac0\") " pod="kube-system/bootstrap-kube-scheduler-master-0" Mar 13 12:34:55.350609 master-0 kubenswrapper[4143]: I0313 12:34:55.349656 4143 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/a1a56802af72ce1aac6b5077f1695ac0-secrets\") pod \"bootstrap-kube-scheduler-master-0\" (UID: \"a1a56802af72ce1aac6b5077f1695ac0\") " pod="kube-system/bootstrap-kube-scheduler-master-0" Mar 13 12:34:55.351215 master-0 kubenswrapper[4143]: I0313 12:34:55.351156 4143 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 13 12:34:55.352867 master-0 kubenswrapper[4143]: I0313 12:34:55.352798 4143 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Mar 13 12:34:55.353006 master-0 kubenswrapper[4143]: I0313 12:34:55.352890 4143 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Mar 13 12:34:55.353006 
master-0 kubenswrapper[4143]: I0313 12:34:55.352913 4143 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Mar 13 12:34:55.353129 master-0 kubenswrapper[4143]: I0313 12:34:55.353063 4143 kubelet_node_status.go:76] "Attempting to register node" node="master-0" Mar 13 12:34:55.354553 master-0 kubenswrapper[4143]: E0313 12:34:55.354465 4143 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.sno.openstack.lab:6443/api/v1/nodes\": dial tcp 192.168.32.10:6443: connect: connection refused" node="master-0" Mar 13 12:34:55.452528 master-0 kubenswrapper[4143]: E0313 12:34:55.452280 4143 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" interval="800ms" Mar 13 12:34:55.525043 master-0 kubenswrapper[4143]: I0313 12:34:55.524940 4143 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kube-system/bootstrap-kube-scheduler-master-0" Mar 13 12:34:55.533767 master-0 kubenswrapper[4143]: I0313 12:34:55.533706 4143 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" Mar 13 12:34:55.555257 master-0 kubenswrapper[4143]: I0313 12:34:55.555168 4143 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-master-0-master-0" Mar 13 12:34:55.586416 master-0 kubenswrapper[4143]: I0313 12:34:55.586336 4143 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Mar 13 12:34:55.593359 master-0 kubenswrapper[4143]: I0313 12:34:55.593315 4143 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="kube-system/bootstrap-kube-controller-manager-master-0"
Mar 13 12:34:55.659935 master-0 kubenswrapper[4143]: W0313 12:34:55.659830 4143 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.sno.openstack.lab:6443/api/v1/nodes?fieldSelector=metadata.name%3Dmaster-0&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused
Mar 13 12:34:55.659935 master-0 kubenswrapper[4143]: E0313 12:34:55.659915 4143 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes?fieldSelector=metadata.name%3Dmaster-0&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError"
Mar 13 12:34:55.754819 master-0 kubenswrapper[4143]: I0313 12:34:55.754634 4143 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 13 12:34:55.756263 master-0 kubenswrapper[4143]: I0313 12:34:55.756213 4143 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Mar 13 12:34:55.756339 master-0 kubenswrapper[4143]: I0313 12:34:55.756268 4143 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Mar 13 12:34:55.756339 master-0 kubenswrapper[4143]: I0313 12:34:55.756286 4143 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Mar 13 12:34:55.756445 master-0 kubenswrapper[4143]: I0313 12:34:55.756345 4143 kubelet_node_status.go:76] "Attempting to register node" node="master-0"
Mar 13 12:34:55.757232 master-0 kubenswrapper[4143]: E0313 12:34:55.757130 4143 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.sno.openstack.lab:6443/api/v1/nodes\": dial tcp 192.168.32.10:6443: connect: connection refused" node="master-0"
Mar 13 12:34:55.842345 master-0 kubenswrapper[4143]: I0313 12:34:55.842254 4143 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csinodes/master-0?resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused
Mar 13 12:34:56.025528 master-0 kubenswrapper[4143]: W0313 12:34:56.025430 4143 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused
Mar 13 12:34:56.025528 master-0 kubenswrapper[4143]: E0313 12:34:56.025499 4143 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError"
Mar 13 12:34:56.228810 master-0 kubenswrapper[4143]: W0313 12:34:56.228645 4143 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.sno.openstack.lab:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused
Mar 13 12:34:56.228810 master-0 kubenswrapper[4143]: E0313 12:34:56.228754 4143 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.sno.openstack.lab:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError"
Mar 13 12:34:56.253771 master-0 kubenswrapper[4143]: E0313 12:34:56.253696 4143 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" interval="1.6s"
Mar 13 12:34:56.558298 master-0 kubenswrapper[4143]: I0313 12:34:56.558173 4143 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 13 12:34:56.559712 master-0 kubenswrapper[4143]: I0313 12:34:56.559663 4143 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Mar 13 12:34:56.559841 master-0 kubenswrapper[4143]: I0313 12:34:56.559738 4143 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Mar 13 12:34:56.559841 master-0 kubenswrapper[4143]: I0313 12:34:56.559751 4143 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Mar 13 12:34:56.559962 master-0 kubenswrapper[4143]: I0313 12:34:56.559897 4143 kubelet_node_status.go:76] "Attempting to register node" node="master-0"
Mar 13 12:34:56.561091 master-0 kubenswrapper[4143]: E0313 12:34:56.561005 4143 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.sno.openstack.lab:6443/api/v1/nodes\": dial tcp 192.168.32.10:6443: connect: connection refused" node="master-0"
Mar 13 12:34:56.651179 master-0 kubenswrapper[4143]: W0313 12:34:56.650991 4143 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.sno.openstack.lab:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused
Mar 13 12:34:56.651179 master-0 kubenswrapper[4143]: E0313 12:34:56.651140 4143 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.sno.openstack.lab:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError"
Mar 13 12:34:56.814711 master-0 kubenswrapper[4143]: I0313 12:34:56.814524 4143 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates
Mar 13 12:34:56.816669 master-0 kubenswrapper[4143]: E0313 12:34:56.816570 4143 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.sno.openstack.lab:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError"
Mar 13 12:34:56.842932 master-0 kubenswrapper[4143]: I0313 12:34:56.842820 4143 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csinodes/master-0?resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused
Mar 13 12:34:57.701949 master-0 kubenswrapper[4143]: W0313 12:34:57.701832 4143 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod354f29997baa583b6238f7de9108ee10.slice/crio-716ce6662fa89fc5efc984950f9c70517944c523cdede22247748de4ca23948d WatchSource:0}: Error finding container 716ce6662fa89fc5efc984950f9c70517944c523cdede22247748de4ca23948d: Status 404 returned error can't find the container with id 716ce6662fa89fc5efc984950f9c70517944c523cdede22247748de4ca23948d
Mar 13 12:34:57.710783 master-0 kubenswrapper[4143]: I0313 12:34:57.710734 4143 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Mar 13 12:34:57.716881 master-0 kubenswrapper[4143]: W0313 12:34:57.716816 4143 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda1a56802af72ce1aac6b5077f1695ac0.slice/crio-d54f9c86fd46be5581997805399dc61e82749fea5be883d188b4c6364d1d55b9 WatchSource:0}: Error finding container d54f9c86fd46be5581997805399dc61e82749fea5be883d188b4c6364d1d55b9: Status 404 returned error can't find the container with id d54f9c86fd46be5581997805399dc61e82749fea5be883d188b4c6364d1d55b9
Mar 13 12:34:57.742183 master-0 kubenswrapper[4143]: W0313 12:34:57.742098 4143 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5f77c8e18b751d90bc0dfe2d4e304050.slice/crio-4a6cc550d523ce1bfed748c19240f1c4e3a9202060aead91cc14af91ea48f5ce WatchSource:0}: Error finding container 4a6cc550d523ce1bfed748c19240f1c4e3a9202060aead91cc14af91ea48f5ce: Status 404 returned error can't find the container with id 4a6cc550d523ce1bfed748c19240f1c4e3a9202060aead91cc14af91ea48f5ce
Mar 13 12:34:57.782159 master-0 kubenswrapper[4143]: W0313 12:34:57.782075 4143 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf78c05e1499b533b83f091333d61f045.slice/crio-9b912cc2fb7f1246b6e0fb7957cb5c167f818087772406214ca1bd3f180298fb WatchSource:0}: Error finding container 9b912cc2fb7f1246b6e0fb7957cb5c167f818087772406214ca1bd3f180298fb: Status 404 returned error can't find the container with id 9b912cc2fb7f1246b6e0fb7957cb5c167f818087772406214ca1bd3f180298fb
Mar 13 12:34:57.842357 master-0 kubenswrapper[4143]: I0313 12:34:57.842255 4143 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csinodes/master-0?resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused
Mar 13 12:34:57.855380 master-0 kubenswrapper[4143]: E0313 12:34:57.855282 4143 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" interval="3.2s"
Mar 13 12:34:57.903775 master-0 kubenswrapper[4143]: W0313 12:34:57.903677 4143 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode9add8df47182fc2eaf8cd78016ebe72.slice/crio-a13f1b34007cf32fe962f7d50d2988f0f66eb3022aee3b3a767d84bde6caed30 WatchSource:0}: Error finding container a13f1b34007cf32fe962f7d50d2988f0f66eb3022aee3b3a767d84bde6caed30: Status 404 returned error can't find the container with id a13f1b34007cf32fe962f7d50d2988f0f66eb3022aee3b3a767d84bde6caed30
Mar 13 12:34:58.090651 master-0 kubenswrapper[4143]: I0313 12:34:58.090452 4143 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-controller-manager-master-0" event={"ID":"f78c05e1499b533b83f091333d61f045","Type":"ContainerStarted","Data":"9b912cc2fb7f1246b6e0fb7957cb5c167f818087772406214ca1bd3f180298fb"}
Mar 13 12:34:58.091731 master-0 kubenswrapper[4143]: I0313 12:34:58.091662 4143 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" event={"ID":"5f77c8e18b751d90bc0dfe2d4e304050","Type":"ContainerStarted","Data":"4a6cc550d523ce1bfed748c19240f1c4e3a9202060aead91cc14af91ea48f5ce"}
Mar 13 12:34:58.093172 master-0 kubenswrapper[4143]: I0313 12:34:58.093072 4143 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-scheduler-master-0" event={"ID":"a1a56802af72ce1aac6b5077f1695ac0","Type":"ContainerStarted","Data":"d54f9c86fd46be5581997805399dc61e82749fea5be883d188b4c6364d1d55b9"}
Mar 13 12:34:58.094668 master-0 kubenswrapper[4143]: I0313 12:34:58.094592 4143 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0-master-0" event={"ID":"354f29997baa583b6238f7de9108ee10","Type":"ContainerStarted","Data":"716ce6662fa89fc5efc984950f9c70517944c523cdede22247748de4ca23948d"}
Mar 13 12:34:58.095953 master-0 kubenswrapper[4143]: I0313 12:34:58.095883 4143 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" event={"ID":"e9add8df47182fc2eaf8cd78016ebe72","Type":"ContainerStarted","Data":"a13f1b34007cf32fe962f7d50d2988f0f66eb3022aee3b3a767d84bde6caed30"}
Mar 13 12:34:58.096212 master-0 kubenswrapper[4143]: W0313 12:34:58.096111 4143 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.sno.openstack.lab:6443/api/v1/nodes?fieldSelector=metadata.name%3Dmaster-0&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused
Mar 13 12:34:58.096290 master-0 kubenswrapper[4143]: E0313 12:34:58.096246 4143 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes?fieldSelector=metadata.name%3Dmaster-0&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError"
Mar 13 12:34:58.162223 master-0 kubenswrapper[4143]: I0313 12:34:58.162104 4143 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 13 12:34:58.163637 master-0 kubenswrapper[4143]: I0313 12:34:58.163574 4143 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Mar 13 12:34:58.163721 master-0 kubenswrapper[4143]: I0313 12:34:58.163653 4143 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Mar 13 12:34:58.163721 master-0 kubenswrapper[4143]: I0313 12:34:58.163684 4143 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Mar 13 12:34:58.163825 master-0 kubenswrapper[4143]: I0313 12:34:58.163780 4143 kubelet_node_status.go:76] "Attempting to register node" node="master-0"
Mar 13 12:34:58.165032 master-0 kubenswrapper[4143]: E0313 12:34:58.164956 4143 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.sno.openstack.lab:6443/api/v1/nodes\": dial tcp 192.168.32.10:6443: connect: connection refused" node="master-0"
Mar 13 12:34:58.295943 master-0 kubenswrapper[4143]: W0313 12:34:58.295866 4143 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused
Mar 13 12:34:58.295943 master-0 kubenswrapper[4143]: E0313 12:34:58.295942 4143 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError"
Mar 13 12:34:58.607909 master-0 kubenswrapper[4143]: W0313 12:34:58.607840 4143 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.sno.openstack.lab:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused
Mar 13 12:34:58.607909 master-0 kubenswrapper[4143]: E0313 12:34:58.607904 4143 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.sno.openstack.lab:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError"
Mar 13 12:34:58.841887 master-0 kubenswrapper[4143]: I0313 12:34:58.841840 4143 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csinodes/master-0?resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused
Mar 13 12:34:58.955322 master-0 kubenswrapper[4143]: W0313 12:34:58.955201 4143 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.sno.openstack.lab:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused
Mar 13 12:34:58.955322 master-0 kubenswrapper[4143]: E0313 12:34:58.955258 4143 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.sno.openstack.lab:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError"
Mar 13 12:34:59.842619 master-0 kubenswrapper[4143]: I0313 12:34:59.842556 4143 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csinodes/master-0?resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused
Mar 13 12:35:00.687601 master-0 kubenswrapper[4143]: E0313 12:35:00.687210 4143 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/default/events\": dial tcp 192.168.32.10:6443: connect: connection refused" event="&Event{ObjectMeta:{master-0.189c66b8418273c0 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-13 12:34:54.836216768 +0000 UTC m=+0.583361132,LastTimestamp:2026-03-13 12:34:54.836216768 +0000 UTC m=+0.583361132,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Mar 13 12:35:00.842673 master-0 kubenswrapper[4143]: I0313 12:35:00.842559 4143 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csinodes/master-0?resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused
Mar 13 12:35:01.004631 master-0 kubenswrapper[4143]: I0313 12:35:01.004592 4143 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates
Mar 13 12:35:01.005783 master-0 kubenswrapper[4143]: E0313 12:35:01.005757 4143 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.sno.openstack.lab:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError"
Mar 13 12:35:01.056455 master-0 kubenswrapper[4143]: E0313 12:35:01.056382 4143 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" interval="6.4s"
Mar 13 12:35:01.104211 master-0 kubenswrapper[4143]: I0313 12:35:01.103910 4143 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0-master-0" event={"ID":"354f29997baa583b6238f7de9108ee10","Type":"ContainerStarted","Data":"c8e034500e686ef70dacdb42d92b730454c21d98abd545c3173a8492bf764cbb"}
Mar 13 12:35:01.104211 master-0 kubenswrapper[4143]: I0313 12:35:01.103961 4143 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0-master-0" event={"ID":"354f29997baa583b6238f7de9108ee10","Type":"ContainerStarted","Data":"e408fc0e8cb4ee12255385245e6376d6aaefa9c98b225370a726fb0b9f89662c"}
Mar 13 12:35:01.104211 master-0 kubenswrapper[4143]: I0313 12:35:01.103973 4143 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 13 12:35:01.105125 master-0 kubenswrapper[4143]: I0313 12:35:01.105095 4143 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Mar 13 12:35:01.105214 master-0 kubenswrapper[4143]: I0313 12:35:01.105128 4143 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Mar 13 12:35:01.105214 master-0 kubenswrapper[4143]: I0313 12:35:01.105155 4143 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Mar 13 12:35:01.105283 master-0 kubenswrapper[4143]: I0313 12:35:01.105195 4143 generic.go:334] "Generic (PLEG): container finished" podID="e9add8df47182fc2eaf8cd78016ebe72" containerID="d97124951202d97d2b090945a6d5c9c5add42850ba499052ed07d95631932324" exitCode=0
Mar 13 12:35:01.105283 master-0 kubenswrapper[4143]: I0313 12:35:01.105241 4143 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" event={"ID":"e9add8df47182fc2eaf8cd78016ebe72","Type":"ContainerDied","Data":"d97124951202d97d2b090945a6d5c9c5add42850ba499052ed07d95631932324"}
Mar 13 12:35:01.105435 master-0 kubenswrapper[4143]: I0313 12:35:01.105413 4143 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 13 12:35:01.106107 master-0 kubenswrapper[4143]: I0313 12:35:01.106088 4143 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Mar 13 12:35:01.106176 master-0 kubenswrapper[4143]: I0313 12:35:01.106116 4143 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Mar 13 12:35:01.106176 master-0 kubenswrapper[4143]: I0313 12:35:01.106127 4143 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Mar 13 12:35:01.365826 master-0 kubenswrapper[4143]: I0313 12:35:01.365178 4143 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 13 12:35:01.366803 master-0 kubenswrapper[4143]: I0313 12:35:01.366766 4143 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Mar 13 12:35:01.366876 master-0 kubenswrapper[4143]: I0313 12:35:01.366810 4143 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Mar 13 12:35:01.366876 master-0 kubenswrapper[4143]: I0313 12:35:01.366824 4143 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Mar 13 12:35:01.366961 master-0 kubenswrapper[4143]: I0313 12:35:01.366895 4143 kubelet_node_status.go:76] "Attempting to register node" node="master-0"
Mar 13 12:35:01.367828 master-0 kubenswrapper[4143]: E0313 12:35:01.367792 4143 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.sno.openstack.lab:6443/api/v1/nodes\": dial tcp 192.168.32.10:6443: connect: connection refused" node="master-0"
Mar 13 12:35:01.843597 master-0 kubenswrapper[4143]: I0313 12:35:01.843463 4143 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csinodes/master-0?resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused
Mar 13 12:35:02.108617 master-0 kubenswrapper[4143]: I0313 12:35:02.108517 4143 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-config-operator_kube-rbac-proxy-crio-master-0_e9add8df47182fc2eaf8cd78016ebe72/kube-rbac-proxy-crio/0.log"
Mar 13 12:35:02.109103 master-0 kubenswrapper[4143]: I0313 12:35:02.109065 4143 generic.go:334] "Generic (PLEG): container finished" podID="e9add8df47182fc2eaf8cd78016ebe72" containerID="3cf1468ffdf8c8aee8d7c402643e22776ed27b795103f124d42c7153d31fee8b" exitCode=1
Mar 13 12:35:02.109103 master-0 kubenswrapper[4143]: I0313 12:35:02.109091 4143 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" event={"ID":"e9add8df47182fc2eaf8cd78016ebe72","Type":"ContainerDied","Data":"3cf1468ffdf8c8aee8d7c402643e22776ed27b795103f124d42c7153d31fee8b"}
Mar 13 12:35:02.109213 master-0 kubenswrapper[4143]: I0313 12:35:02.109123 4143 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 13 12:35:02.109213 master-0 kubenswrapper[4143]: I0313 12:35:02.109181 4143 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 13 12:35:02.110030 master-0 kubenswrapper[4143]: I0313 12:35:02.110000 4143 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Mar 13 12:35:02.110096 master-0 kubenswrapper[4143]: I0313 12:35:02.110032 4143 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Mar 13 12:35:02.110096 master-0 kubenswrapper[4143]: I0313 12:35:02.110044 4143 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Mar 13 12:35:02.110262 master-0 kubenswrapper[4143]: I0313 12:35:02.110239 4143 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Mar 13 12:35:02.110301 master-0 kubenswrapper[4143]: I0313 12:35:02.110267 4143 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Mar 13 12:35:02.110301 master-0 kubenswrapper[4143]: I0313 12:35:02.110276 4143 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Mar 13 12:35:02.110399 master-0 kubenswrapper[4143]: I0313 12:35:02.110350 4143 scope.go:117] "RemoveContainer" containerID="3cf1468ffdf8c8aee8d7c402643e22776ed27b795103f124d42c7153d31fee8b"
Mar 13 12:35:02.571020 master-0 kubenswrapper[4143]: W0313 12:35:02.570929 4143 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.sno.openstack.lab:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused
Mar 13 12:35:02.571020 master-0 kubenswrapper[4143]: E0313 12:35:02.571012 4143 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.sno.openstack.lab:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError"
Mar 13 12:35:02.629037 master-0 kubenswrapper[4143]: W0313 12:35:02.628932 4143 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.sno.openstack.lab:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused
Mar 13 12:35:02.629037 master-0 kubenswrapper[4143]: E0313 12:35:02.629021 4143 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.sno.openstack.lab:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError"
Mar 13 12:35:02.842872 master-0 kubenswrapper[4143]: I0313 12:35:02.842734 4143 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csinodes/master-0?resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused
Mar 13 12:35:03.011815 master-0 kubenswrapper[4143]: W0313 12:35:03.011714 4143 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.sno.openstack.lab:6443/api/v1/nodes?fieldSelector=metadata.name%3Dmaster-0&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused
Mar 13 12:35:03.011815 master-0 kubenswrapper[4143]: E0313 12:35:03.011796 4143 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes?fieldSelector=metadata.name%3Dmaster-0&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError"
Mar 13 12:35:03.327589 master-0 kubenswrapper[4143]: W0313 12:35:03.327482 4143 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused
Mar 13 12:35:03.327858 master-0 kubenswrapper[4143]: E0313 12:35:03.327587 4143 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError"
Mar 13 12:35:03.841898 master-0 kubenswrapper[4143]: I0313 12:35:03.841835 4143 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csinodes/master-0?resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused
Mar 13 12:35:04.842783 master-0 kubenswrapper[4143]: I0313 12:35:04.842715 4143 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csinodes/master-0?resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused
Mar 13 12:35:05.049491 master-0 kubenswrapper[4143]: E0313 12:35:05.049452 4143 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"master-0\" not found"
Mar 13 12:35:05.119930 master-0 kubenswrapper[4143]: I0313 12:35:05.119899 4143 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-config-operator_kube-rbac-proxy-crio-master-0_e9add8df47182fc2eaf8cd78016ebe72/kube-rbac-proxy-crio/0.log"
Mar 13 12:35:05.120317 master-0 kubenswrapper[4143]: I0313 12:35:05.120286 4143 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" event={"ID":"e9add8df47182fc2eaf8cd78016ebe72","Type":"ContainerStarted","Data":"ae4874222b05b1b8dbd82518131214bc2e05907a9a188ed0c3e21953b82f48b2"}
Mar 13 12:35:05.120394 master-0 kubenswrapper[4143]: I0313 12:35:05.120372 4143 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 13 12:35:05.122301 master-0 kubenswrapper[4143]: I0313 12:35:05.122275 4143 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Mar 13 12:35:05.122352 master-0 kubenswrapper[4143]: I0313 12:35:05.122310 4143 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Mar 13 12:35:05.122352 master-0 kubenswrapper[4143]: I0313 12:35:05.122320 4143 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Mar 13 12:35:05.841993 master-0 kubenswrapper[4143]: I0313 12:35:05.841892 4143 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csinodes/master-0?resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused
Mar 13 12:35:06.128975 master-0 kubenswrapper[4143]: I0313 12:35:06.128798 4143 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-scheduler-master-0" event={"ID":"a1a56802af72ce1aac6b5077f1695ac0","Type":"ContainerStarted","Data":"23aef1d459d801451207b22b103d82e16b0fb29eac9febd8e8918cd59b44679c"}
Mar 13 12:35:06.128975 master-0 kubenswrapper[4143]: I0313 12:35:06.128887 4143 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 13 12:35:06.130200 master-0 kubenswrapper[4143]: I0313 12:35:06.130124 4143 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Mar 13 12:35:06.130200 master-0 kubenswrapper[4143]: I0313 12:35:06.130190 4143 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Mar 13 12:35:06.130200 master-0 kubenswrapper[4143]: I0313 12:35:06.130202 4143 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Mar 13 12:35:06.130939 master-0 kubenswrapper[4143]: I0313 12:35:06.130894 4143 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-config-operator_kube-rbac-proxy-crio-master-0_e9add8df47182fc2eaf8cd78016ebe72/kube-rbac-proxy-crio/1.log"
Mar 13 12:35:06.131624 master-0 kubenswrapper[4143]: I0313 12:35:06.131450 4143 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-config-operator_kube-rbac-proxy-crio-master-0_e9add8df47182fc2eaf8cd78016ebe72/kube-rbac-proxy-crio/0.log"
Mar 13 12:35:06.132413 master-0 kubenswrapper[4143]: I0313 12:35:06.132223 4143 generic.go:334] "Generic (PLEG): container finished" podID="e9add8df47182fc2eaf8cd78016ebe72" containerID="ae4874222b05b1b8dbd82518131214bc2e05907a9a188ed0c3e21953b82f48b2" exitCode=1
Mar 13 12:35:06.132413 master-0 kubenswrapper[4143]: I0313 12:35:06.132294 4143 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" event={"ID":"e9add8df47182fc2eaf8cd78016ebe72","Type":"ContainerDied","Data":"ae4874222b05b1b8dbd82518131214bc2e05907a9a188ed0c3e21953b82f48b2"}
Mar 13 12:35:06.132413 master-0 kubenswrapper[4143]: I0313 12:35:06.132368 4143 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 13 12:35:06.132413 master-0 kubenswrapper[4143]: I0313 12:35:06.132397 4143 scope.go:117] "RemoveContainer" containerID="3cf1468ffdf8c8aee8d7c402643e22776ed27b795103f124d42c7153d31fee8b"
Mar 13 12:35:06.133994 master-0 kubenswrapper[4143]: I0313 12:35:06.133882 4143 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Mar 13 12:35:06.133994 master-0 kubenswrapper[4143]: I0313 12:35:06.133913 4143 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Mar 13 12:35:06.133994 master-0 kubenswrapper[4143]: I0313 12:35:06.133921 4143 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Mar 13 12:35:06.134380 master-0 kubenswrapper[4143]: I0313 12:35:06.134243 4143 scope.go:117] "RemoveContainer" containerID="ae4874222b05b1b8dbd82518131214bc2e05907a9a188ed0c3e21953b82f48b2"
Mar 13 12:35:06.134380 master-0 kubenswrapper[4143]: E0313 12:35:06.134367 4143 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-rbac-proxy-crio\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-rbac-proxy-crio pod=kube-rbac-proxy-crio-master-0_openshift-machine-config-operator(e9add8df47182fc2eaf8cd78016ebe72)\"" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" podUID="e9add8df47182fc2eaf8cd78016ebe72"
Mar 13 12:35:06.134925 master-0 kubenswrapper[4143]: I0313 12:35:06.134733 4143 generic.go:334] "Generic (PLEG): container finished" podID="f78c05e1499b533b83f091333d61f045" containerID="9976faf535c3de998191b8eb2224b47994a3c8d30cd6f57ea4e1d4aff13da677" exitCode=1
Mar 13 12:35:06.134925 master-0 kubenswrapper[4143]: I0313 12:35:06.134830 4143 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-controller-manager-master-0" event={"ID":"f78c05e1499b533b83f091333d61f045","Type":"ContainerDied","Data":"9976faf535c3de998191b8eb2224b47994a3c8d30cd6f57ea4e1d4aff13da677"}
Mar 13 12:35:06.136848 master-0 kubenswrapper[4143]: I0313 12:35:06.136054 4143 generic.go:334] "Generic (PLEG): container finished" podID="5f77c8e18b751d90bc0dfe2d4e304050" containerID="a3279720d4c802c349d222cf1b96260384211d9adc25c84b50972505c95ca211" exitCode=0
Mar 13 12:35:06.136848 master-0 kubenswrapper[4143]: I0313 12:35:06.136087 4143 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" event={"ID":"5f77c8e18b751d90bc0dfe2d4e304050","Type":"ContainerDied","Data":"a3279720d4c802c349d222cf1b96260384211d9adc25c84b50972505c95ca211"}
Mar 13 12:35:06.136848 master-0 kubenswrapper[4143]: I0313 12:35:06.136169 4143 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 13 12:35:06.136848 master-0 kubenswrapper[4143]: I0313 12:35:06.136640 4143 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Mar 13 12:35:06.136848 master-0 kubenswrapper[4143]: I0313 12:35:06.136653 4143 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Mar 13 12:35:06.136848 master-0 kubenswrapper[4143]: I0313 12:35:06.136660 4143 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Mar 13 12:35:06.139367 master-0 kubenswrapper[4143]: I0313 12:35:06.139307 4143 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 13 12:35:06.140285 master-0 kubenswrapper[4143]: I0313 12:35:06.140215 4143 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Mar 13 12:35:06.140285 master-0 kubenswrapper[4143]: I0313 12:35:06.140265 4143 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Mar 13 12:35:06.140285 master-0 kubenswrapper[4143]: I0313 12:35:06.140282 4143 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Mar 13 12:35:07.140890 master-0 kubenswrapper[4143]: I0313 12:35:07.140836 4143 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" event={"ID":"5f77c8e18b751d90bc0dfe2d4e304050","Type":"ContainerStarted","Data":"f3be2171b1690f9bafcc889e55d83ff1a441baaed77d90117edebfc3db8ff2b9"}
Mar 13 12:35:07.142346 master-0 kubenswrapper[4143]: I0313 12:35:07.142319 4143 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-config-operator_kube-rbac-proxy-crio-master-0_e9add8df47182fc2eaf8cd78016ebe72/kube-rbac-proxy-crio/1.log"
Mar 13 12:35:07.143453 master-0 kubenswrapper[4143]: I0313 12:35:07.143372 4143 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 13 12:35:07.143453 master-0 kubenswrapper[4143]: I0313 12:35:07.143390 4143 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 13 12:35:07.145568 master-0 kubenswrapper[4143]: I0313 12:35:07.144905 4143 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Mar 13 12:35:07.145568 master-0 kubenswrapper[4143]: I0313 12:35:07.144934 4143 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Mar 13 12:35:07.145568 master-0 kubenswrapper[4143]: I0313 12:35:07.144946 4143 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Mar 13 12:35:07.147293 master-0 kubenswrapper[4143]: I0313 12:35:07.147252 4143 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Mar 13 12:35:07.147365 master-0 kubenswrapper[4143]: I0313 12:35:07.147302 4143 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Mar 13 12:35:07.147365 master-0 kubenswrapper[4143]: I0313 12:35:07.147315 4143 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Mar 13 12:35:07.147716 master-0 kubenswrapper[4143]: I0313 12:35:07.147692 4143 scope.go:117] "RemoveContainer" containerID="ae4874222b05b1b8dbd82518131214bc2e05907a9a188ed0c3e21953b82f48b2"
Mar 13 12:35:07.147890 master-0 kubenswrapper[4143]: E0313 12:35:07.147851 4143 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-rbac-proxy-crio\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-rbac-proxy-crio
pod=kube-rbac-proxy-crio-master-0_openshift-machine-config-operator(e9add8df47182fc2eaf8cd78016ebe72)\"" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" podUID="e9add8df47182fc2eaf8cd78016ebe72" Mar 13 12:35:07.769571 master-0 kubenswrapper[4143]: I0313 12:35:07.769518 4143 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 13 12:35:07.770786 master-0 kubenswrapper[4143]: I0313 12:35:07.770750 4143 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Mar 13 12:35:07.770866 master-0 kubenswrapper[4143]: I0313 12:35:07.770798 4143 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Mar 13 12:35:07.770866 master-0 kubenswrapper[4143]: I0313 12:35:07.770812 4143 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Mar 13 12:35:07.770919 master-0 kubenswrapper[4143]: I0313 12:35:07.770874 4143 kubelet_node_status.go:76] "Attempting to register node" node="master-0" Mar 13 12:35:07.814237 master-0 kubenswrapper[4143]: I0313 12:35:07.814170 4143 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "master-0" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 13 12:35:07.814237 master-0 kubenswrapper[4143]: E0313 12:35:07.814171 4143 kubelet_node_status.go:99] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="master-0" Mar 13 12:35:07.814237 master-0 kubenswrapper[4143]: E0313 12:35:07.814169 4143 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"master-0\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group 
\"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s" Mar 13 12:35:07.846370 master-0 kubenswrapper[4143]: I0313 12:35:07.846319 4143 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "master-0" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 13 12:35:08.147589 master-0 kubenswrapper[4143]: I0313 12:35:08.147457 4143 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-controller-manager-master-0" event={"ID":"f78c05e1499b533b83f091333d61f045","Type":"ContainerStarted","Data":"982c1c225b535e0fa3c9e5b01c4c3960b52c601ea135812c4af51bc13c9b4e1a"} Mar 13 12:35:08.147589 master-0 kubenswrapper[4143]: I0313 12:35:08.147523 4143 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 13 12:35:08.148304 master-0 kubenswrapper[4143]: I0313 12:35:08.148267 4143 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Mar 13 12:35:08.148372 master-0 kubenswrapper[4143]: I0313 12:35:08.148321 4143 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Mar 13 12:35:08.148372 master-0 kubenswrapper[4143]: I0313 12:35:08.148337 4143 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Mar 13 12:35:08.148697 master-0 kubenswrapper[4143]: I0313 12:35:08.148668 4143 scope.go:117] "RemoveContainer" containerID="9976faf535c3de998191b8eb2224b47994a3c8d30cd6f57ea4e1d4aff13da677" Mar 13 12:35:08.883099 master-0 kubenswrapper[4143]: I0313 12:35:08.882441 4143 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "master-0" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 13 
12:35:09.151513 master-0 kubenswrapper[4143]: I0313 12:35:09.151407 4143 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-controller-manager-master-0" event={"ID":"f78c05e1499b533b83f091333d61f045","Type":"ContainerStarted","Data":"5f035fb00c2f1c52dbc78fa55ac7bc8d27c14c42f3da11b968e1fb6e88e80856"} Mar 13 12:35:09.151917 master-0 kubenswrapper[4143]: I0313 12:35:09.151526 4143 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 13 12:35:09.152093 master-0 kubenswrapper[4143]: I0313 12:35:09.152066 4143 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Mar 13 12:35:09.152128 master-0 kubenswrapper[4143]: I0313 12:35:09.152093 4143 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Mar 13 12:35:09.152128 master-0 kubenswrapper[4143]: I0313 12:35:09.152102 4143 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Mar 13 12:35:09.299810 master-0 kubenswrapper[4143]: W0313 12:35:09.299729 4143 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope Mar 13 12:35:09.300010 master-0 kubenswrapper[4143]: E0313 12:35:09.299820 4143 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"runtimeclasses\" in API group \"node.k8s.io\" at the cluster scope" logger="UnhandledError" Mar 13 12:35:09.634486 master-0 kubenswrapper[4143]: I0313 12:35:09.634379 4143 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Mar 13 
12:35:09.678192 master-0 kubenswrapper[4143]: I0313 12:35:09.678124 4143 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146 Mar 13 12:35:09.846963 master-0 kubenswrapper[4143]: I0313 12:35:09.846880 4143 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "master-0" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 13 12:35:10.158369 master-0 kubenswrapper[4143]: I0313 12:35:10.158212 4143 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" event={"ID":"5f77c8e18b751d90bc0dfe2d4e304050","Type":"ContainerStarted","Data":"838f1203bfc2909f5be268d039e5903c4aada457bcd573b0395f4215bfc0c446"} Mar 13 12:35:10.158369 master-0 kubenswrapper[4143]: I0313 12:35:10.158309 4143 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 13 12:35:10.158868 master-0 kubenswrapper[4143]: I0313 12:35:10.158405 4143 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 13 12:35:10.159881 master-0 kubenswrapper[4143]: I0313 12:35:10.159835 4143 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Mar 13 12:35:10.159881 master-0 kubenswrapper[4143]: I0313 12:35:10.159869 4143 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Mar 13 12:35:10.159966 master-0 kubenswrapper[4143]: I0313 12:35:10.159888 4143 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Mar 13 12:35:10.159966 master-0 kubenswrapper[4143]: I0313 12:35:10.159904 4143 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Mar 13 12:35:10.159966 
master-0 kubenswrapper[4143]: I0313 12:35:10.159904 4143 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Mar 13 12:35:10.159966 master-0 kubenswrapper[4143]: I0313 12:35:10.159922 4143 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Mar 13 12:35:10.483551 master-0 kubenswrapper[4143]: I0313 12:35:10.483421 4143 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 13 12:35:10.487602 master-0 kubenswrapper[4143]: I0313 12:35:10.487560 4143 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 13 12:35:10.694066 master-0 kubenswrapper[4143]: E0313 12:35:10.693939 4143 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.189c66b8418273c0 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-13 12:34:54.836216768 +0000 UTC m=+0.583361132,LastTimestamp:2026-03-13 12:34:54.836216768 +0000 UTC m=+0.583361132,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 13 12:35:10.698692 master-0 kubenswrapper[4143]: E0313 12:35:10.698536 4143 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.189c66b845eae4ed default 0 
0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node master-0 status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-13 12:34:54.910170349 +0000 UTC m=+0.657314673,LastTimestamp:2026-03-13 12:34:54.910170349 +0000 UTC m=+0.657314673,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 13 12:35:10.703616 master-0 kubenswrapper[4143]: E0313 12:35:10.703517 4143 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.189c66b845eb459b default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node master-0 status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-13 12:34:54.910195099 +0000 UTC m=+0.657339423,LastTimestamp:2026-03-13 12:34:54.910195099 +0000 UTC m=+0.657339423,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 13 12:35:10.708448 master-0 kubenswrapper[4143]: E0313 12:35:10.708341 4143 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.189c66b845eb7011 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node master-0 status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-13 12:34:54.910205969 +0000 UTC m=+0.657350293,LastTimestamp:2026-03-13 12:34:54.910205969 +0000 UTC m=+0.657350293,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 13 12:35:10.713339 master-0 kubenswrapper[4143]: E0313 12:35:10.713222 4143 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.189c66b84e8948ff default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeAllocatableEnforced,Message:Updated Node Allocatable limit across pods,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-13 12:34:55.054768383 +0000 UTC m=+0.801912717,LastTimestamp:2026-03-13 12:34:55.054768383 +0000 UTC m=+0.801912717,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 13 12:35:10.718614 master-0 kubenswrapper[4143]: E0313 12:35:10.718480 4143 event.go:359] "Server rejected event (will not retry!)" err="events \"master-0.189c66b845eae4ed\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.189c66b845eae4ed default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node master-0 status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-13 12:34:54.910170349 +0000 UTC m=+0.657314673,LastTimestamp:2026-03-13 12:34:55.149646778 +0000 UTC m=+0.896791102,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 13 12:35:10.723053 master-0 kubenswrapper[4143]: E0313 12:35:10.722968 4143 event.go:359] "Server rejected event (will not retry!)" err="events \"master-0.189c66b845eb459b\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.189c66b845eb459b default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node master-0 status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-13 12:34:54.910195099 +0000 UTC m=+0.657339423,LastTimestamp:2026-03-13 12:34:55.149663648 +0000 UTC m=+0.896807972,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 13 12:35:10.728334 master-0 kubenswrapper[4143]: E0313 12:35:10.728117 4143 event.go:359] "Server rejected event (will not retry!)" err="events \"master-0.189c66b845eb7011\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.189c66b845eb7011 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node master-0 status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-13 12:34:54.910205969 +0000 UTC m=+0.657350293,LastTimestamp:2026-03-13 12:34:55.149671288 +0000 UTC m=+0.896815612,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 13 12:35:10.733838 master-0 kubenswrapper[4143]: E0313 12:35:10.733628 4143 event.go:359] "Server rejected event (will not retry!)" err="events \"master-0.189c66b845eae4ed\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.189c66b845eae4ed default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node master-0 status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-13 12:34:54.910170349 +0000 UTC m=+0.657314673,LastTimestamp:2026-03-13 12:34:55.183559515 +0000 UTC m=+0.930703839,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 13 12:35:10.738962 master-0 kubenswrapper[4143]: E0313 12:35:10.738800 4143 event.go:359] "Server rejected event (will not retry!)" err="events \"master-0.189c66b845eb459b\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.189c66b845eb459b default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node master-0 status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-13 12:34:54.910195099 +0000 UTC m=+0.657339423,LastTimestamp:2026-03-13 12:34:55.183583325 +0000 UTC m=+0.930727649,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 13 12:35:10.744557 master-0 kubenswrapper[4143]: E0313 12:35:10.744412 4143 event.go:359] "Server rejected event (will not retry!)" err="events \"master-0.189c66b845eb7011\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.189c66b845eb7011 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node master-0 status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-13 12:34:54.910205969 +0000 UTC m=+0.657350293,LastTimestamp:2026-03-13 12:34:55.183594185 +0000 UTC m=+0.930738509,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 13 12:35:10.749940 master-0 kubenswrapper[4143]: E0313 12:35:10.749805 4143 event.go:359] "Server rejected event (will not retry!)" err="events \"master-0.189c66b845eae4ed\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.189c66b845eae4ed default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node master-0 status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-13 12:34:54.910170349 +0000 UTC m=+0.657314673,LastTimestamp:2026-03-13 12:34:55.185260699 +0000 UTC m=+0.932405053,Count:4,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 13 12:35:10.756360 master-0 kubenswrapper[4143]: E0313 12:35:10.756179 4143 event.go:359] "Server rejected event (will not retry!)" err="events \"master-0.189c66b845eb459b\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.189c66b845eb459b default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node master-0 status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-13 12:34:54.910195099 +0000 UTC m=+0.657339423,LastTimestamp:2026-03-13 12:34:55.185520507 +0000 UTC m=+0.932664831,Count:4,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 13 12:35:10.762707 master-0 kubenswrapper[4143]: E0313 12:35:10.762516 4143 event.go:359] "Server rejected event (will not retry!)" err="events \"master-0.189c66b845eb7011\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.189c66b845eb7011 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node master-0 status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-13 12:34:54.910205969 +0000 UTC m=+0.657350293,LastTimestamp:2026-03-13 12:34:55.185548897 +0000 UTC m=+0.932693221,Count:4,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 13 12:35:10.767975 master-0 kubenswrapper[4143]: E0313 12:35:10.767818 4143 event.go:359] "Server rejected event (will not retry!)" err="events \"master-0.189c66b845eae4ed\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.189c66b845eae4ed default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node master-0 status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-13 12:34:54.910170349 +0000 UTC m=+0.657314673,LastTimestamp:2026-03-13 12:34:55.192505402 +0000 UTC m=+0.939649726,Count:5,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 13 12:35:10.777106 master-0 kubenswrapper[4143]: E0313 12:35:10.776971 4143 event.go:359] "Server rejected event (will not retry!)" err="events \"master-0.189c66b845eb459b\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.189c66b845eb459b default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node master-0 status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-13 12:34:54.910195099 +0000 UTC m=+0.657339423,LastTimestamp:2026-03-13 12:34:55.192531432 +0000 UTC m=+0.939675756,Count:5,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 13 12:35:10.781771 master-0 kubenswrapper[4143]: E0313 12:35:10.781611 4143 event.go:359] "Server rejected event (will not retry!)" err="events \"master-0.189c66b845eb7011\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.189c66b845eb7011 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node master-0 status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-13 12:34:54.910205969 +0000 UTC m=+0.657350293,LastTimestamp:2026-03-13 12:34:55.192542232 +0000 UTC m=+0.939686556,Count:5,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 13 12:35:10.785862 master-0 kubenswrapper[4143]: E0313 12:35:10.785731 4143 event.go:359] "Server rejected event (will not retry!)" err="events \"master-0.189c66b845eae4ed\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.189c66b845eae4ed default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node master-0 status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-13 12:34:54.910170349 +0000 UTC m=+0.657314673,LastTimestamp:2026-03-13 12:34:55.192796592 +0000 UTC m=+0.939940916,Count:6,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 13 12:35:10.789836 master-0 kubenswrapper[4143]: E0313 12:35:10.789713 4143 event.go:359] "Server rejected event (will not retry!)" err="events \"master-0.189c66b845eb459b\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.189c66b845eb459b default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node master-0 status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-13 12:34:54.910195099 +0000 UTC m=+0.657339423,LastTimestamp:2026-03-13 12:34:55.192827962 +0000 UTC m=+0.939972286,Count:6,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 13 12:35:10.795274 master-0 kubenswrapper[4143]: E0313 12:35:10.795091 4143 event.go:359] "Server rejected event (will not retry!)" err="events \"master-0.189c66b845eb7011\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.189c66b845eb7011 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node master-0 status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-13 12:34:54.910205969 +0000 UTC m=+0.657350293,LastTimestamp:2026-03-13 12:34:55.192840272 +0000 UTC m=+0.939984596,Count:6,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 13 12:35:10.801193 master-0 kubenswrapper[4143]: E0313 12:35:10.801019 4143 event.go:359] "Server rejected event (will not retry!)" err="events \"master-0.189c66b845eae4ed\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.189c66b845eae4ed default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node master-0 status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-13 12:34:54.910170349 +0000 UTC m=+0.657314673,LastTimestamp:2026-03-13 12:34:55.193119631 +0000 UTC m=+0.940263955,Count:7,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 13 12:35:10.806499 master-0 kubenswrapper[4143]: E0313 12:35:10.806350 4143 event.go:359] "Server rejected event (will not retry!)" err="events \"master-0.189c66b845eb459b\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.189c66b845eb459b default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node master-0 status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-13 12:34:54.910195099 +0000 UTC m=+0.657339423,LastTimestamp:2026-03-13 12:34:55.19315589 +0000 UTC m=+0.940300214,Count:7,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 13 12:35:10.810908 master-0 kubenswrapper[4143]: E0313 12:35:10.810732 4143 event.go:359] "Server rejected event (will not retry!)" err="events \"master-0.189c66b845eb7011\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.189c66b845eb7011 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node master-0 status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-13 12:34:54.910205969 +0000 UTC m=+0.657350293,LastTimestamp:2026-03-13 12:34:55.19316638 +0000 UTC m=+0.940310714,Count:7,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 13 12:35:10.816183 master-0 kubenswrapper[4143]: E0313 12:35:10.816055 4143 event.go:359] "Server rejected event (will not retry!)" err="events \"master-0.189c66b845eae4ed\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.189c66b845eae4ed default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node master-0 status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-13 12:34:54.910170349 +0000 UTC m=+0.657314673,LastTimestamp:2026-03-13 12:34:55.195089283 +0000 UTC m=+0.942233607,Count:8,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 13 12:35:10.821285 master-0 kubenswrapper[4143]: E0313 12:35:10.821220 4143 event.go:359] "Server rejected event (will not retry!)" err="events \"master-0.189c66b845eb459b\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.189c66b845eb459b default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node master-0 status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-13 12:34:54.910195099 +0000 UTC m=+0.657339423,LastTimestamp:2026-03-13 12:34:55.195103543 +0000 UTC m=+0.942247867,Count:8,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 13 12:35:10.826778 master-0 kubenswrapper[4143]: E0313 12:35:10.826622 4143 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-master-0-master-0.189c66b8ecd70839 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-master-0-master-0,UID:354f29997baa583b6238f7de9108ee10,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcdctl},},Reason:Pulling,Message:Pulling image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cc20748723f55f960cfb6328d1591880bbd1b3452155633996d4f41fc7c5f46b\",Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-13 12:34:57.710663737 +0000 UTC m=+3.457808061,LastTimestamp:2026-03-13 12:34:57.710663737 +0000 UTC m=+3.457808061,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 13 12:35:10.832153 master-0 kubenswrapper[4143]: E0313 12:35:10.831911 4143 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"kube-system\"" event="&Event{ObjectMeta:{bootstrap-kube-scheduler-master-0.189c66b8ed8d9b62 kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:bootstrap-kube-scheduler-master-0,UID:a1a56802af72ce1aac6b5077f1695ac0,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler},},Reason:Pulling,Message:Pulling image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fca00eb71b1f03e5b5180a66f3871f5626d337b56196622f5842cfc165523b4\",Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-13 12:34:57.722628962 +0000 UTC m=+3.469773296,LastTimestamp:2026-03-13 12:34:57.722628962 +0000 UTC m=+3.469773296,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 13 12:35:10.837795 master-0 kubenswrapper[4143]: E0313 12:35:10.837600 4143 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: 
User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{bootstrap-kube-apiserver-master-0.189c66b8eed4fbed openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:bootstrap-kube-apiserver-master-0,UID:5f77c8e18b751d90bc0dfe2d4e304050,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Pulling,Message:Pulling image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fca00eb71b1f03e5b5180a66f3871f5626d337b56196622f5842cfc165523b4\",Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-13 12:34:57.744083949 +0000 UTC m=+3.491228273,LastTimestamp:2026-03-13 12:34:57.744083949 +0000 UTC m=+3.491228273,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 13 12:35:10.841777 master-0 kubenswrapper[4143]: I0313 12:35:10.841753 4143 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "master-0" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 13 12:35:10.841908 master-0 kubenswrapper[4143]: E0313 12:35:10.841798 4143 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"kube-system\"" event="&Event{ObjectMeta:{bootstrap-kube-controller-manager-master-0.189c66b8f13e050b kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:bootstrap-kube-controller-manager-master-0,UID:f78c05e1499b533b83f091333d61f045,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager},},Reason:Pulling,Message:Pulling image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fca00eb71b1f03e5b5180a66f3871f5626d337b56196622f5842cfc165523b4\",Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-13 12:34:57.784521995 +0000 UTC m=+3.531666319,LastTimestamp:2026-03-13 12:34:57.784521995 +0000 UTC m=+3.531666319,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 13 12:35:10.848017 master-0 kubenswrapper[4143]: E0313 12:35:10.847896 4143 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-master-0.189c66b8f8802a6b openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-master-0,UID:e9add8df47182fc2eaf8cd78016ebe72,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Pulling,Message:Pulling image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8677f7a973553c25d282bc249fc8bc0f5aa42fb144ea0956d1f04c5a6cd80501\",Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-13 12:34:57.906297451 +0000 UTC m=+3.653441775,LastTimestamp:2026-03-13 12:34:57.906297451 +0000 UTC m=+3.653441775,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 13 12:35:10.853037 master-0 kubenswrapper[4143]: E0313 12:35:10.852873 4143 
event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-master-0.189c66b982ccecd1 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-master-0,UID:e9add8df47182fc2eaf8cd78016ebe72,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Pulled,Message:Successfully pulled image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8677f7a973553c25d282bc249fc8bc0f5aa42fb144ea0956d1f04c5a6cd80501\" in 2.32s (2.32s including waiting). Image size: 465086330 bytes.,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-13 12:35:00.226583761 +0000 UTC m=+5.973728095,LastTimestamp:2026-03-13 12:35:00.226583761 +0000 UTC m=+5.973728095,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 13 12:35:10.859867 master-0 kubenswrapper[4143]: E0313 12:35:10.859702 4143 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-master-0-master-0.189c66b983bbdb8c openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-master-0-master-0,UID:354f29997baa583b6238f7de9108ee10,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcdctl},},Reason:Pulled,Message:Successfully pulled image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cc20748723f55f960cfb6328d1591880bbd1b3452155633996d4f41fc7c5f46b\" in 2.531s (2.531s including waiting). 
Image size: 529324693 bytes.,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-13 12:35:00.242242444 +0000 UTC m=+5.989386768,LastTimestamp:2026-03-13 12:35:00.242242444 +0000 UTC m=+5.989386768,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 13 12:35:10.864963 master-0 kubenswrapper[4143]: E0313 12:35:10.864849 4143 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-master-0-master-0.189c66b98f99f8e9 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-master-0-master-0,UID:354f29997baa583b6238f7de9108ee10,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcdctl},},Reason:Created,Message:Created container: etcdctl,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-13 12:35:00.441348329 +0000 UTC m=+6.188492663,LastTimestamp:2026-03-13 12:35:00.441348329 +0000 UTC m=+6.188492663,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 13 12:35:10.869507 master-0 kubenswrapper[4143]: E0313 12:35:10.869370 4143 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-master-0.189c66b98fc5bd84 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-master-0,UID:e9add8df47182fc2eaf8cd78016ebe72,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Created,Message:Created container: setup,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-13 12:35:00.444216708 +0000 UTC m=+6.191361032,LastTimestamp:2026-03-13 12:35:00.444216708 +0000 UTC m=+6.191361032,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 13 12:35:10.874206 master-0 kubenswrapper[4143]: E0313 12:35:10.874072 4143 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-master-0-master-0.189c66b9909bc608 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-master-0-master-0,UID:354f29997baa583b6238f7de9108ee10,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcdctl},},Reason:Started,Message:Started container etcdctl,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-13 12:35:00.458243592 +0000 UTC m=+6.205387916,LastTimestamp:2026-03-13 12:35:00.458243592 +0000 UTC m=+6.205387916,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 13 12:35:10.878771 master-0 kubenswrapper[4143]: E0313 12:35:10.878668 4143 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-master-0.189c66b9909d482c 
openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-master-0,UID:e9add8df47182fc2eaf8cd78016ebe72,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Started,Message:Started container setup,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-13 12:35:00.458342444 +0000 UTC m=+6.205486778,LastTimestamp:2026-03-13 12:35:00.458342444 +0000 UTC m=+6.205486778,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 13 12:35:10.884394 master-0 kubenswrapper[4143]: E0313 12:35:10.884274 4143 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-master-0-master-0.189c66b990f89b99 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-master-0-master-0,UID:354f29997baa583b6238f7de9108ee10,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cc20748723f55f960cfb6328d1591880bbd1b3452155633996d4f41fc7c5f46b\" already present on machine,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-13 12:35:00.464327577 +0000 UTC m=+6.211471911,LastTimestamp:2026-03-13 12:35:00.464327577 +0000 UTC m=+6.211471911,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 13 12:35:10.888741 master-0 kubenswrapper[4143]: E0313 12:35:10.888581 4143 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: 
User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-master-0-master-0.189c66b99a6d2632 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-master-0-master-0,UID:354f29997baa583b6238f7de9108ee10,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd},},Reason:Created,Message:Created container: etcd,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-13 12:35:00.622960178 +0000 UTC m=+6.370104502,LastTimestamp:2026-03-13 12:35:00.622960178 +0000 UTC m=+6.370104502,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 13 12:35:10.893059 master-0 kubenswrapper[4143]: E0313 12:35:10.892957 4143 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-master-0-master-0.189c66b99b103955 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-master-0-master-0,UID:354f29997baa583b6238f7de9108ee10,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd},},Reason:Started,Message:Started container etcd,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-13 12:35:00.633647445 +0000 UTC m=+6.380791769,LastTimestamp:2026-03-13 12:35:00.633647445 +0000 UTC m=+6.380791769,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 13 12:35:10.899232 master-0 kubenswrapper[4143]: E0313 12:35:10.899032 4143 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User 
\"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-master-0.189c66b9b764b693 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-master-0,UID:e9add8df47182fc2eaf8cd78016ebe72,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8677f7a973553c25d282bc249fc8bc0f5aa42fb144ea0956d1f04c5a6cd80501\" already present on machine,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-13 12:35:01.108946579 +0000 UTC m=+6.856090903,LastTimestamp:2026-03-13 12:35:01.108946579 +0000 UTC m=+6.856090903,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 13 12:35:10.903871 master-0 kubenswrapper[4143]: E0313 12:35:10.903767 4143 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-master-0.189c66b9c127c1a7 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-master-0,UID:e9add8df47182fc2eaf8cd78016ebe72,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:Created,Message:Created container: kube-rbac-proxy-crio,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-13 12:35:01.272723879 +0000 UTC m=+7.019868203,LastTimestamp:2026-03-13 12:35:01.272723879 +0000 UTC 
m=+7.019868203,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 13 12:35:10.907971 master-0 kubenswrapper[4143]: E0313 12:35:10.907856 4143 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-master-0.189c66b9c1d97d3b openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-master-0,UID:e9add8df47182fc2eaf8cd78016ebe72,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:Started,Message:Started container kube-rbac-proxy-crio,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-13 12:35:01.284371771 +0000 UTC m=+7.031516095,LastTimestamp:2026-03-13 12:35:01.284371771 +0000 UTC m=+7.031516095,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 13 12:35:10.913088 master-0 kubenswrapper[4143]: E0313 12:35:10.912946 4143 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-rbac-proxy-crio-master-0.189c66b9b764b693\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-master-0.189c66b9b764b693 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-master-0,UID:e9add8df47182fc2eaf8cd78016ebe72,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8677f7a973553c25d282bc249fc8bc0f5aa42fb144ea0956d1f04c5a6cd80501\" already present on machine,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-13 12:35:01.108946579 +0000 UTC m=+6.856090903,LastTimestamp:2026-03-13 12:35:04.846908988 +0000 UTC m=+10.594053362,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 13 12:35:10.916627 master-0 kubenswrapper[4143]: E0313 12:35:10.916502 4143 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"kube-system\"" event="&Event{ObjectMeta:{bootstrap-kube-scheduler-master-0.189c66ba9cc36f8f kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:bootstrap-kube-scheduler-master-0,UID:a1a56802af72ce1aac6b5077f1695ac0,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler},},Reason:Pulled,Message:Successfully pulled image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fca00eb71b1f03e5b5180a66f3871f5626d337b56196622f5842cfc165523b4\" in 7.234s (7.234s including waiting). 
Image size: 943837171 bytes.,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-13 12:35:04.957136783 +0000 UTC m=+10.704281107,LastTimestamp:2026-03-13 12:35:04.957136783 +0000 UTC m=+10.704281107,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 13 12:35:10.920273 master-0 kubenswrapper[4143]: E0313 12:35:10.920172 4143 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{bootstrap-kube-apiserver-master-0.189c66baa0144cd7 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:bootstrap-kube-apiserver-master-0,UID:5f77c8e18b751d90bc0dfe2d4e304050,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Pulled,Message:Successfully pulled image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fca00eb71b1f03e5b5180a66f3871f5626d337b56196622f5842cfc165523b4\" in 7.268s (7.268s including waiting). 
Image size: 943837171 bytes.,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-13 12:35:05.012767959 +0000 UTC m=+10.759912283,LastTimestamp:2026-03-13 12:35:05.012767959 +0000 UTC m=+10.759912283,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 13 12:35:10.924333 master-0 kubenswrapper[4143]: E0313 12:35:10.924240 4143 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"kube-system\"" event="&Event{ObjectMeta:{bootstrap-kube-controller-manager-master-0.189c66baa2c10f10 kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:bootstrap-kube-controller-manager-master-0,UID:f78c05e1499b533b83f091333d61f045,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager},},Reason:Pulled,Message:Successfully pulled image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fca00eb71b1f03e5b5180a66f3871f5626d337b56196622f5842cfc165523b4\" in 7.273s (7.273s including waiting). 
Image size: 943837171 bytes.,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-13 12:35:05.057644304 +0000 UTC m=+10.804788628,LastTimestamp:2026-03-13 12:35:05.057644304 +0000 UTC m=+10.804788628,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 13 12:35:10.929094 master-0 kubenswrapper[4143]: E0313 12:35:10.929000 4143 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-rbac-proxy-crio-master-0.189c66b9c127c1a7\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-master-0.189c66b9c127c1a7 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-master-0,UID:e9add8df47182fc2eaf8cd78016ebe72,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:Created,Message:Created container: kube-rbac-proxy-crio,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-13 12:35:01.272723879 +0000 UTC m=+7.019868203,LastTimestamp:2026-03-13 12:35:05.071810536 +0000 UTC m=+10.818954860,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 13 12:35:10.933249 master-0 kubenswrapper[4143]: E0313 12:35:10.933153 4143 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-rbac-proxy-crio-master-0.189c66b9c1d97d3b\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-master-0.189c66b9c1d97d3b openshift-machine-config-operator 0 0001-01-01 00:00:00 
+0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-master-0,UID:e9add8df47182fc2eaf8cd78016ebe72,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:Started,Message:Started container kube-rbac-proxy-crio,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-13 12:35:01.284371771 +0000 UTC m=+7.031516095,LastTimestamp:2026-03-13 12:35:05.086258976 +0000 UTC m=+10.833403300,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Mar 13 12:35:10.936783 master-0 kubenswrapper[4143]: E0313 12:35:10.936706 4143 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"kube-system\"" event="&Event{ObjectMeta:{bootstrap-kube-scheduler-master-0.189c66baa71cb6c9 kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:bootstrap-kube-scheduler-master-0,UID:a1a56802af72ce1aac6b5077f1695ac0,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler},},Reason:Created,Message:Created container: kube-scheduler,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-13 12:35:05.130759881 +0000 UTC m=+10.877904205,LastTimestamp:2026-03-13 12:35:05.130759881 +0000 UTC m=+10.877904205,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Mar 13 12:35:10.940884 master-0 kubenswrapper[4143]: E0313 12:35:10.940773 4143 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"kube-system\"" event="&Event{ObjectMeta:{bootstrap-kube-scheduler-master-0.189c66baa7d5865d kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:bootstrap-kube-scheduler-master-0,UID:a1a56802af72ce1aac6b5077f1695ac0,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler},},Reason:Started,Message:Started container kube-scheduler,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-13 12:35:05.142871645 +0000 UTC m=+10.890015959,LastTimestamp:2026-03-13 12:35:05.142871645 +0000 UTC m=+10.890015959,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Mar 13 12:35:10.945317 master-0 kubenswrapper[4143]: E0313 12:35:10.945207 4143 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"kube-system\"" event="&Event{ObjectMeta:{bootstrap-kube-controller-manager-master-0.189c66baaeb8373e kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:bootstrap-kube-controller-manager-master-0,UID:f78c05e1499b533b83f091333d61f045,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager},},Reason:Created,Message:Created container: kube-controller-manager,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-13 12:35:05.258391358 +0000 UTC m=+11.005535692,LastTimestamp:2026-03-13 12:35:05.258391358 +0000 UTC m=+11.005535692,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Mar 13 12:35:10.949791 master-0 kubenswrapper[4143]: E0313 12:35:10.949671 4143 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{bootstrap-kube-apiserver-master-0.189c66bab06cbaac openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:bootstrap-kube-apiserver-master-0,UID:5f77c8e18b751d90bc0dfe2d4e304050,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Created,Message:Created container: setup,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-13 12:35:05.2869987 +0000 UTC m=+11.034143044,LastTimestamp:2026-03-13 12:35:05.2869987 +0000 UTC m=+11.034143044,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Mar 13 12:35:10.954882 master-0 kubenswrapper[4143]: E0313 12:35:10.954758 4143 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"kube-system\"" event="&Event{ObjectMeta:{bootstrap-kube-controller-manager-master-0.189c66bab09bc076 kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:bootstrap-kube-controller-manager-master-0,UID:f78c05e1499b533b83f091333d61f045,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager},},Reason:Started,Message:Started container kube-controller-manager,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-13 12:35:05.290080374 +0000 UTC m=+11.037224738,LastTimestamp:2026-03-13 12:35:05.290080374 +0000 UTC m=+11.037224738,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Mar 13 12:35:10.959586 master-0 kubenswrapper[4143]: E0313 12:35:10.959478 4143 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"kube-system\"" event="&Event{ObjectMeta:{bootstrap-kube-controller-manager-master-0.189c66bab0c434ba kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:bootstrap-kube-controller-manager-master-0,UID:f78c05e1499b533b83f091333d61f045,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Pulling,Message:Pulling image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a324f47cf789c0480fa4bcb0812152abc3cd844318bab193108fe4349eed609\",Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-13 12:35:05.292731578 +0000 UTC m=+11.039875952,LastTimestamp:2026-03-13 12:35:05.292731578 +0000 UTC m=+11.039875952,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Mar 13 12:35:10.964291 master-0 kubenswrapper[4143]: E0313 12:35:10.964160 4143 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{bootstrap-kube-apiserver-master-0.189c66bab1df6c83 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:bootstrap-kube-apiserver-master-0,UID:5f77c8e18b751d90bc0dfe2d4e304050,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Started,Message:Started container setup,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-13 12:35:05.311292547 +0000 UTC m=+11.058436881,LastTimestamp:2026-03-13 12:35:05.311292547 +0000 UTC m=+11.058436881,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Mar 13 12:35:10.968772 master-0 kubenswrapper[4143]: E0313 12:35:10.968652 4143 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-master-0.189c66bae2ee1df4 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-master-0,UID:e9add8df47182fc2eaf8cd78016ebe72,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:BackOff,Message:Back-off restarting failed container kube-rbac-proxy-crio in pod kube-rbac-proxy-crio-master-0_openshift-machine-config-operator(e9add8df47182fc2eaf8cd78016ebe72),Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-13 12:35:06.13433906 +0000 UTC m=+11.881483384,LastTimestamp:2026-03-13 12:35:06.13433906 +0000 UTC m=+11.881483384,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Mar 13 12:35:10.973342 master-0 kubenswrapper[4143]: E0313 12:35:10.973214 4143 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{bootstrap-kube-apiserver-master-0.189c66bae337d333 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:bootstrap-kube-apiserver-master-0,UID:5f77c8e18b751d90bc0dfe2d4e304050,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fca00eb71b1f03e5b5180a66f3871f5626d337b56196622f5842cfc165523b4\" already present on machine,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-13 12:35:06.139169587 +0000 UTC m=+11.886313911,LastTimestamp:2026-03-13 12:35:06.139169587 +0000 UTC m=+11.886313911,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Mar 13 12:35:10.977961 master-0 kubenswrapper[4143]: E0313 12:35:10.977827 4143 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{bootstrap-kube-apiserver-master-0.189c66baecb90a44 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:bootstrap-kube-apiserver-master-0,UID:5f77c8e18b751d90bc0dfe2d4e304050,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Created,Message:Created container: kube-apiserver,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-13 12:35:06.298632772 +0000 UTC m=+12.045777096,LastTimestamp:2026-03-13 12:35:06.298632772 +0000 UTC m=+12.045777096,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Mar 13 12:35:10.982990 master-0 kubenswrapper[4143]: E0313 12:35:10.982834 4143 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{bootstrap-kube-apiserver-master-0.189c66baed9dc529 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:bootstrap-kube-apiserver-master-0,UID:5f77c8e18b751d90bc0dfe2d4e304050,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Started,Message:Started container kube-apiserver,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-13 12:35:06.313622825 +0000 UTC m=+12.060767149,LastTimestamp:2026-03-13 12:35:06.313622825 +0000 UTC m=+12.060767149,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Mar 13 12:35:10.987891 master-0 kubenswrapper[4143]: E0313 12:35:10.987707 4143 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{bootstrap-kube-apiserver-master-0.189c66baedade4ff openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:bootstrap-kube-apiserver-master-0,UID:5f77c8e18b751d90bc0dfe2d4e304050,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-insecure-readyz},},Reason:Pulling,Message:Pulling image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5500329ab50804678fb8a90b96bf2a469bca16b620fb6dd2f5f5a17106e94898\",Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-13 12:35:06.314679551 +0000 UTC m=+12.061823875,LastTimestamp:2026-03-13 12:35:06.314679551 +0000 UTC m=+12.061823875,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Mar 13 12:35:10.991968 master-0 kubenswrapper[4143]: E0313 12:35:10.991852 4143 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"kube-system\"" event="&Event{ObjectMeta:{bootstrap-kube-controller-manager-master-0.189c66bb15fed785 kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:bootstrap-kube-controller-manager-master-0,UID:f78c05e1499b533b83f091333d61f045,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Pulled,Message:Successfully pulled image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a324f47cf789c0480fa4bcb0812152abc3cd844318bab193108fe4349eed609\" in 1.698s (1.698s including waiting). Image size: 505242594 bytes.,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-13 12:35:06.991073157 +0000 UTC m=+12.738217501,LastTimestamp:2026-03-13 12:35:06.991073157 +0000 UTC m=+12.738217501,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Mar 13 12:35:10.995789 master-0 kubenswrapper[4143]: E0313 12:35:10.995697 4143 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-rbac-proxy-crio-master-0.189c66bae2ee1df4\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-master-0.189c66bae2ee1df4 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-master-0,UID:e9add8df47182fc2eaf8cd78016ebe72,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:BackOff,Message:Back-off restarting failed container kube-rbac-proxy-crio in pod kube-rbac-proxy-crio-master-0_openshift-machine-config-operator(e9add8df47182fc2eaf8cd78016ebe72),Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-13 12:35:06.13433906 +0000 UTC m=+11.881483384,LastTimestamp:2026-03-13 12:35:07.147821007 +0000 UTC m=+12.894965331,Count:2,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Mar 13 12:35:11.000530 master-0 kubenswrapper[4143]: E0313 12:35:11.000387 4143 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"kube-system\"" event="&Event{ObjectMeta:{bootstrap-kube-controller-manager-master-0.189c66bb206f5ff0 kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:bootstrap-kube-controller-manager-master-0,UID:f78c05e1499b533b83f091333d61f045,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Created,Message:Created container: cluster-policy-controller,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-13 12:35:07.166220272 +0000 UTC m=+12.913364586,LastTimestamp:2026-03-13 12:35:07.166220272 +0000 UTC m=+12.913364586,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Mar 13 12:35:11.004507 master-0 kubenswrapper[4143]: E0313 12:35:11.004447 4143 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"kube-system\"" event="&Event{ObjectMeta:{bootstrap-kube-controller-manager-master-0.189c66bb2160357f kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:bootstrap-kube-controller-manager-master-0,UID:f78c05e1499b533b83f091333d61f045,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Started,Message:Started container cluster-policy-controller,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-13 12:35:07.182003583 +0000 UTC m=+12.929147907,LastTimestamp:2026-03-13 12:35:07.182003583 +0000 UTC m=+12.929147907,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Mar 13 12:35:11.009251 master-0 kubenswrapper[4143]: E0313 12:35:11.009129 4143 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"kube-system\"" event="&Event{ObjectMeta:{bootstrap-kube-controller-manager-master-0.189c66bb5b27ffb5 kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:bootstrap-kube-controller-manager-master-0,UID:f78c05e1499b533b83f091333d61f045,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fca00eb71b1f03e5b5180a66f3871f5626d337b56196622f5842cfc165523b4\" already present on machine,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-13 12:35:08.151398325 +0000 UTC m=+13.898542649,LastTimestamp:2026-03-13 12:35:08.151398325 +0000 UTC m=+13.898542649,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Mar 13 12:35:11.013113 master-0 kubenswrapper[4143]: E0313 12:35:11.013022 4143 event.go:359] "Server rejected event (will not retry!)" err="events \"bootstrap-kube-controller-manager-master-0.189c66baaeb8373e\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"kube-system\"" event="&Event{ObjectMeta:{bootstrap-kube-controller-manager-master-0.189c66baaeb8373e kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:bootstrap-kube-controller-manager-master-0,UID:f78c05e1499b533b83f091333d61f045,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager},},Reason:Created,Message:Created container: kube-controller-manager,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-13 12:35:05.258391358 +0000 UTC m=+11.005535692,LastTimestamp:2026-03-13 12:35:08.396171554 +0000 UTC m=+14.143315878,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Mar 13 12:35:11.017434 master-0 kubenswrapper[4143]: E0313 12:35:11.017311 4143 event.go:359] "Server rejected event (will not retry!)" err="events \"bootstrap-kube-controller-manager-master-0.189c66bab09bc076\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"kube-system\"" event="&Event{ObjectMeta:{bootstrap-kube-controller-manager-master-0.189c66bab09bc076 kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:bootstrap-kube-controller-manager-master-0,UID:f78c05e1499b533b83f091333d61f045,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager},},Reason:Started,Message:Started container kube-controller-manager,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-13 12:35:05.290080374 +0000 UTC m=+11.037224738,LastTimestamp:2026-03-13 12:35:08.409205839 +0000 UTC m=+14.156350163,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Mar 13 12:35:11.021578 master-0 kubenswrapper[4143]: E0313 12:35:11.021443 4143 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{bootstrap-kube-apiserver-master-0.189c66bba3ea8ab4 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:bootstrap-kube-apiserver-master-0,UID:5f77c8e18b751d90bc0dfe2d4e304050,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-insecure-readyz},},Reason:Pulled,Message:Successfully pulled image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5500329ab50804678fb8a90b96bf2a469bca16b620fb6dd2f5f5a17106e94898\" in 3.057s (3.057s including waiting). Image size: 514980169 bytes.,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-13 12:35:09.372107444 +0000 UTC m=+15.119251778,LastTimestamp:2026-03-13 12:35:09.372107444 +0000 UTC m=+15.119251778,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Mar 13 12:35:11.025702 master-0 kubenswrapper[4143]: E0313 12:35:11.025596 4143 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{bootstrap-kube-apiserver-master-0.189c66bbae3f7ceb openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:bootstrap-kube-apiserver-master-0,UID:5f77c8e18b751d90bc0dfe2d4e304050,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-insecure-readyz},},Reason:Created,Message:Created container: kube-apiserver-insecure-readyz,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-13 12:35:09.545446635 +0000 UTC m=+15.292590959,LastTimestamp:2026-03-13 12:35:09.545446635 +0000 UTC m=+15.292590959,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Mar 13 12:35:11.029569 master-0 kubenswrapper[4143]: E0313 12:35:11.029463 4143 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{bootstrap-kube-apiserver-master-0.189c66bbb0c3ba29 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:bootstrap-kube-apiserver-master-0,UID:5f77c8e18b751d90bc0dfe2d4e304050,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-insecure-readyz},},Reason:Started,Message:Started container kube-apiserver-insecure-readyz,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-13 12:35:09.587667497 +0000 UTC m=+15.334811821,LastTimestamp:2026-03-13 12:35:09.587667497 +0000 UTC m=+15.334811821,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Mar 13 12:35:11.160357 master-0 kubenswrapper[4143]: I0313 12:35:11.160317 4143 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 13 12:35:11.160815 master-0 kubenswrapper[4143]: I0313 12:35:11.160733 4143 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="kube-system/bootstrap-kube-controller-manager-master-0"
Mar 13 12:35:11.160990 master-0 kubenswrapper[4143]: I0313 12:35:11.160333 4143 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 13 12:35:11.161068 master-0 kubenswrapper[4143]: I0313 12:35:11.161045 4143 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Mar 13 12:35:11.161171 master-0 kubenswrapper[4143]: I0313 12:35:11.161153 4143 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Mar 13 12:35:11.161217 master-0 kubenswrapper[4143]: I0313 12:35:11.161173 4143 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Mar 13 12:35:11.161702 master-0 kubenswrapper[4143]: I0313 12:35:11.161652 4143 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Mar 13 12:35:11.161702 master-0 kubenswrapper[4143]: I0313 12:35:11.161700 4143 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Mar 13 12:35:11.161765 master-0 kubenswrapper[4143]: I0313 12:35:11.161710 4143 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Mar 13 12:35:11.410855 master-0 kubenswrapper[4143]: W0313 12:35:11.410785 4143 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
Mar 13 12:35:11.411047 master-0 kubenswrapper[4143]: E0313 12:35:11.410890 4143 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
Mar 13 12:35:11.506819 master-0 kubenswrapper[4143]: W0313 12:35:11.506772 4143 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope
Mar 13 12:35:11.506819 master-0 kubenswrapper[4143]: E0313 12:35:11.506817 4143 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:anonymous\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
Mar 13 12:35:11.848186 master-0 kubenswrapper[4143]: I0313 12:35:11.848080 4143 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "master-0" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Mar 13 12:35:12.162159 master-0 kubenswrapper[4143]: I0313 12:35:12.162118 4143 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 13 12:35:12.163117 master-0 kubenswrapper[4143]: I0313 12:35:12.163062 4143 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Mar 13 12:35:12.163215 master-0 kubenswrapper[4143]: I0313 12:35:12.163170 4143 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Mar 13 12:35:12.163215 master-0 kubenswrapper[4143]: I0313 12:35:12.163191 4143 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Mar 13 12:35:12.621045 master-0 kubenswrapper[4143]: I0313 12:35:12.620916 4143 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"
Mar 13 12:35:12.621431 master-0 kubenswrapper[4143]: I0313 12:35:12.621225 4143 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 13 12:35:12.622484 master-0 kubenswrapper[4143]: I0313 12:35:12.622457 4143 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Mar 13 12:35:12.622484 master-0 kubenswrapper[4143]: I0313 12:35:12.622491 4143 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Mar 13 12:35:12.622674 master-0 kubenswrapper[4143]: I0313 12:35:12.622501 4143 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Mar 13 12:35:12.627661 master-0 kubenswrapper[4143]: I0313 12:35:12.627604 4143 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"
Mar 13 12:35:12.667448 master-0 kubenswrapper[4143]: I0313 12:35:12.667363 4143 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"
Mar 13 12:35:12.672463 master-0 kubenswrapper[4143]: I0313 12:35:12.672404 4143 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"
Mar 13 12:35:12.762716 master-0 kubenswrapper[4143]: W0313 12:35:12.762645 4143 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes "master-0" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope
Mar 13 12:35:12.762716 master-0 kubenswrapper[4143]: E0313 12:35:12.762719 4143 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes \"master-0\" is forbidden: User \"system:anonymous\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
Mar 13 12:35:12.847556 master-0 kubenswrapper[4143]: I0313 12:35:12.847396 4143 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "master-0" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Mar 13 12:35:13.163828 master-0 kubenswrapper[4143]: I0313 12:35:13.163772 4143 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 13 12:35:13.164413 master-0 kubenswrapper[4143]: I0313 12:35:13.164390 4143 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Mar 13 12:35:13.164458 master-0 kubenswrapper[4143]: I0313 12:35:13.164422 4143 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Mar 13 12:35:13.164458 master-0 kubenswrapper[4143]: I0313 12:35:13.164434 4143 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Mar 13 12:35:13.849250 master-0 kubenswrapper[4143]: I0313 12:35:13.848849 4143 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "master-0" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Mar 13 12:35:13.913957 master-0 kubenswrapper[4143]: I0313 12:35:13.913855 4143 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="kube-system/bootstrap-kube-controller-manager-master-0"
Mar 13 12:35:13.914260 master-0 kubenswrapper[4143]: I0313 12:35:13.914065 4143 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 13 12:35:13.915513 master-0 kubenswrapper[4143]: I0313 12:35:13.915463 4143 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Mar 13 12:35:13.915513 master-0 kubenswrapper[4143]: I0313 12:35:13.915512 4143 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Mar 13 12:35:13.915665 master-0 kubenswrapper[4143]: I0313 12:35:13.915526 4143 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Mar 13 12:35:14.167391 master-0 kubenswrapper[4143]: I0313 12:35:14.167177 4143 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 13 12:35:14.168412 master-0 kubenswrapper[4143]: I0313 12:35:14.168370 4143 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Mar 13 12:35:14.168412 master-0 kubenswrapper[4143]: I0313 12:35:14.168407 4143 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Mar 13 12:35:14.168549 master-0 kubenswrapper[4143]: I0313 12:35:14.168424 4143 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Mar 13 12:35:14.815434 master-0 kubenswrapper[4143]: I0313 12:35:14.815343 4143 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 13 12:35:14.816690 master-0 kubenswrapper[4143]: I0313 12:35:14.816608 4143 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Mar 13 12:35:14.816814 master-0 kubenswrapper[4143]: I0313 12:35:14.816698 4143 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Mar 13 12:35:14.816814 master-0 kubenswrapper[4143]: I0313 12:35:14.816726 4143 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Mar 13 12:35:14.816814 master-0 kubenswrapper[4143]: I0313 12:35:14.816815 4143 kubelet_node_status.go:76] "Attempting to register node" node="master-0"
Mar 13 12:35:14.822175 master-0 kubenswrapper[4143]: E0313 12:35:14.822073 4143 kubelet_node_status.go:99] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="master-0"
Mar 13 12:35:14.822770 master-0 kubenswrapper[4143]: E0313 12:35:14.822728 4143 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"master-0\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s"
Mar 13 12:35:14.850325 master-0 kubenswrapper[4143]: I0313 12:35:14.850277 4143 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "master-0" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Mar 13 12:35:15.050472 master-0 kubenswrapper[4143]: E0313 12:35:15.050411 4143 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"master-0\" not found"
Mar 13 12:35:15.846907 master-0 kubenswrapper[4143]: I0313 12:35:15.846863 4143 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "master-0" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Mar 13 12:35:15.895155 master-0 kubenswrapper[4143]: I0313 12:35:15.895057 4143 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="kube-system/bootstrap-kube-controller-manager-master-0"
Mar 13 12:35:15.895414 master-0 kubenswrapper[4143]: I0313 12:35:15.895374 4143 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 13 12:35:15.896672 master-0 kubenswrapper[4143]: I0313 12:35:15.896600 4143 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Mar 13 12:35:15.896672 master-0 kubenswrapper[4143]: I0313 12:35:15.896640 4143 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Mar 13 12:35:15.896672 master-0 kubenswrapper[4143]: I0313 12:35:15.896653 4143 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Mar 13 12:35:15.898869 master-0 kubenswrapper[4143]: I0313 12:35:15.898851 4143 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="kube-system/bootstrap-kube-controller-manager-master-0"
Mar 13 12:35:16.170947 master-0 kubenswrapper[4143]: I0313 12:35:16.170813 4143 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 13 12:35:16.172010 master-0 kubenswrapper[4143]: I0313 12:35:16.171968 4143 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Mar 13 12:35:16.172010 master-0 kubenswrapper[4143]: I0313 12:35:16.172011 4143 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Mar 13 12:35:16.172175 master-0 kubenswrapper[4143]: I0313 12:35:16.172025 4143 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Mar 13 12:35:16.847568 master-0 kubenswrapper[4143]: I0313 12:35:16.847509 4143 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "master-0" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Mar 13 12:35:17.845563 master-0 kubenswrapper[4143]: I0313 12:35:17.845487 4143 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "master-0" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Mar 13 12:35:18.611587 master-0 kubenswrapper[4143]: I0313 12:35:18.611470 4143 csr.go:261] certificate signing request csr-kbwpn is approved, waiting to be issued
Mar 13 12:35:18.846380 master-0 kubenswrapper[4143]: I0313 12:35:18.846328 4143 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "master-0" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Mar 13 12:35:19.846392 master-0 kubenswrapper[4143]: I0313 12:35:19.846342 4143 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "master-0" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Mar 13 12:35:20.847023 master-0 kubenswrapper[4143]: I0313
12:35:20.846961 4143 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "master-0" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 13 12:35:21.082307 master-0 kubenswrapper[4143]: I0313 12:35:21.082197 4143 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 13 12:35:21.083503 master-0 kubenswrapper[4143]: I0313 12:35:21.083463 4143 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Mar 13 12:35:21.083556 master-0 kubenswrapper[4143]: I0313 12:35:21.083518 4143 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Mar 13 12:35:21.083556 master-0 kubenswrapper[4143]: I0313 12:35:21.083545 4143 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Mar 13 12:35:21.083941 master-0 kubenswrapper[4143]: I0313 12:35:21.083907 4143 scope.go:117] "RemoveContainer" containerID="ae4874222b05b1b8dbd82518131214bc2e05907a9a188ed0c3e21953b82f48b2" Mar 13 12:35:21.092448 master-0 kubenswrapper[4143]: E0313 12:35:21.092342 4143 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-rbac-proxy-crio-master-0.189c66b9b764b693\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-master-0.189c66b9b764b693 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-master-0,UID:e9add8df47182fc2eaf8cd78016ebe72,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:Pulled,Message:Container image 
\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8677f7a973553c25d282bc249fc8bc0f5aa42fb144ea0956d1f04c5a6cd80501\" already present on machine,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-13 12:35:01.108946579 +0000 UTC m=+6.856090903,LastTimestamp:2026-03-13 12:35:21.086872962 +0000 UTC m=+26.834017286,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 13 12:35:21.250640 master-0 kubenswrapper[4143]: E0313 12:35:21.250516 4143 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-rbac-proxy-crio-master-0.189c66b9c127c1a7\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-master-0.189c66b9c127c1a7 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-master-0,UID:e9add8df47182fc2eaf8cd78016ebe72,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:Created,Message:Created container: kube-rbac-proxy-crio,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-13 12:35:01.272723879 +0000 UTC m=+7.019868203,LastTimestamp:2026-03-13 12:35:21.243244854 +0000 UTC m=+26.990389198,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 13 12:35:21.259254 master-0 kubenswrapper[4143]: E0313 12:35:21.259114 4143 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-rbac-proxy-crio-master-0.189c66b9c1d97d3b\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" 
event="&Event{ObjectMeta:{kube-rbac-proxy-crio-master-0.189c66b9c1d97d3b openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-master-0,UID:e9add8df47182fc2eaf8cd78016ebe72,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:Started,Message:Started container kube-rbac-proxy-crio,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-13 12:35:01.284371771 +0000 UTC m=+7.031516095,LastTimestamp:2026-03-13 12:35:21.254704831 +0000 UTC m=+27.001849155,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 13 12:35:21.822328 master-0 kubenswrapper[4143]: I0313 12:35:21.822209 4143 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 13 12:35:21.823594 master-0 kubenswrapper[4143]: I0313 12:35:21.823532 4143 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Mar 13 12:35:21.823594 master-0 kubenswrapper[4143]: I0313 12:35:21.823580 4143 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Mar 13 12:35:21.823594 master-0 kubenswrapper[4143]: I0313 12:35:21.823591 4143 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Mar 13 12:35:21.823795 master-0 kubenswrapper[4143]: I0313 12:35:21.823655 4143 kubelet_node_status.go:76] "Attempting to register node" node="master-0" Mar 13 12:35:21.827873 master-0 kubenswrapper[4143]: E0313 12:35:21.827819 4143 kubelet_node_status.go:99] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="master-0" Mar 13 
12:35:21.827964 master-0 kubenswrapper[4143]: E0313 12:35:21.827941 4143 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"master-0\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s" Mar 13 12:35:21.842746 master-0 kubenswrapper[4143]: I0313 12:35:21.842677 4143 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "master-0" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 13 12:35:22.063386 master-0 kubenswrapper[4143]: I0313 12:35:22.063320 4143 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 13 12:35:22.063850 master-0 kubenswrapper[4143]: I0313 12:35:22.063550 4143 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 13 12:35:22.064896 master-0 kubenswrapper[4143]: I0313 12:35:22.064863 4143 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Mar 13 12:35:22.064896 master-0 kubenswrapper[4143]: I0313 12:35:22.064896 4143 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Mar 13 12:35:22.064982 master-0 kubenswrapper[4143]: I0313 12:35:22.064907 4143 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Mar 13 12:35:22.067707 master-0 kubenswrapper[4143]: I0313 12:35:22.067678 4143 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 13 12:35:22.185604 master-0 kubenswrapper[4143]: I0313 12:35:22.185502 4143 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-machine-config-operator_kube-rbac-proxy-crio-master-0_e9add8df47182fc2eaf8cd78016ebe72/kube-rbac-proxy-crio/2.log" Mar 13 12:35:22.185953 master-0 kubenswrapper[4143]: I0313 12:35:22.185928 4143 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-config-operator_kube-rbac-proxy-crio-master-0_e9add8df47182fc2eaf8cd78016ebe72/kube-rbac-proxy-crio/1.log" Mar 13 12:35:22.186293 master-0 kubenswrapper[4143]: I0313 12:35:22.186264 4143 generic.go:334] "Generic (PLEG): container finished" podID="e9add8df47182fc2eaf8cd78016ebe72" containerID="9c887f2b6cfcfcc1f3ea186daee81cbe3bce3c155cfd4e9bbac88f712c489339" exitCode=1 Mar 13 12:35:22.186348 master-0 kubenswrapper[4143]: I0313 12:35:22.186344 4143 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 13 12:35:22.186399 master-0 kubenswrapper[4143]: I0313 12:35:22.186350 4143 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" event={"ID":"e9add8df47182fc2eaf8cd78016ebe72","Type":"ContainerDied","Data":"9c887f2b6cfcfcc1f3ea186daee81cbe3bce3c155cfd4e9bbac88f712c489339"} Mar 13 12:35:22.186444 master-0 kubenswrapper[4143]: I0313 12:35:22.186412 4143 scope.go:117] "RemoveContainer" containerID="ae4874222b05b1b8dbd82518131214bc2e05907a9a188ed0c3e21953b82f48b2" Mar 13 12:35:22.186486 master-0 kubenswrapper[4143]: I0313 12:35:22.186466 4143 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 13 12:35:22.187111 master-0 kubenswrapper[4143]: I0313 12:35:22.187082 4143 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Mar 13 12:35:22.187111 master-0 kubenswrapper[4143]: I0313 12:35:22.187107 4143 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Mar 13 12:35:22.187232 master-0 kubenswrapper[4143]: I0313 
12:35:22.187119 4143 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Mar 13 12:35:22.187232 master-0 kubenswrapper[4143]: I0313 12:35:22.187086 4143 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Mar 13 12:35:22.187232 master-0 kubenswrapper[4143]: I0313 12:35:22.187172 4143 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Mar 13 12:35:22.187232 master-0 kubenswrapper[4143]: I0313 12:35:22.187185 4143 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Mar 13 12:35:22.187582 master-0 kubenswrapper[4143]: I0313 12:35:22.187554 4143 scope.go:117] "RemoveContainer" containerID="9c887f2b6cfcfcc1f3ea186daee81cbe3bce3c155cfd4e9bbac88f712c489339" Mar 13 12:35:22.187731 master-0 kubenswrapper[4143]: E0313 12:35:22.187688 4143 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-rbac-proxy-crio\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-rbac-proxy-crio pod=kube-rbac-proxy-crio-master-0_openshift-machine-config-operator(e9add8df47182fc2eaf8cd78016ebe72)\"" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" podUID="e9add8df47182fc2eaf8cd78016ebe72" Mar 13 12:35:22.193242 master-0 kubenswrapper[4143]: E0313 12:35:22.193093 4143 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-rbac-proxy-crio-master-0.189c66bae2ee1df4\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-master-0.189c66bae2ee1df4 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-master-0,UID:e9add8df47182fc2eaf8cd78016ebe72,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:BackOff,Message:Back-off restarting failed container kube-rbac-proxy-crio in pod kube-rbac-proxy-crio-master-0_openshift-machine-config-operator(e9add8df47182fc2eaf8cd78016ebe72),Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-13 12:35:06.13433906 +0000 UTC m=+11.881483384,LastTimestamp:2026-03-13 12:35:22.187659081 +0000 UTC m=+27.934803405,Count:3,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 13 12:35:22.849592 master-0 kubenswrapper[4143]: I0313 12:35:22.849497 4143 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "master-0" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 13 12:35:23.191771 master-0 kubenswrapper[4143]: I0313 12:35:23.191549 4143 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-config-operator_kube-rbac-proxy-crio-master-0_e9add8df47182fc2eaf8cd78016ebe72/kube-rbac-proxy-crio/2.log" Mar 13 12:35:23.849731 master-0 kubenswrapper[4143]: I0313 12:35:23.849620 4143 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "master-0" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 13 12:35:24.848598 master-0 kubenswrapper[4143]: I0313 12:35:24.848528 4143 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "master-0" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 
13 12:35:25.051494 master-0 kubenswrapper[4143]: E0313 12:35:25.051391 4143 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"master-0\" not found" Mar 13 12:35:25.848287 master-0 kubenswrapper[4143]: I0313 12:35:25.848243 4143 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "master-0" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 13 12:35:26.176985 master-0 kubenswrapper[4143]: W0313 12:35:26.176780 4143 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope Mar 13 12:35:26.176985 master-0 kubenswrapper[4143]: E0313 12:35:26.176916 4143 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"runtimeclasses\" in API group \"node.k8s.io\" at the cluster scope" logger="UnhandledError" Mar 13 12:35:26.911586 master-0 kubenswrapper[4143]: I0313 12:35:26.911529 4143 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "master-0" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 13 12:35:27.095045 master-0 kubenswrapper[4143]: W0313 12:35:27.095001 4143 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes "master-0" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Mar 13 12:35:27.095266 master-0 kubenswrapper[4143]: E0313 12:35:27.095055 4143 reflector.go:158] "Unhandled Error" 
err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes \"master-0\" is forbidden: User \"system:anonymous\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" Mar 13 12:35:27.188160 master-0 kubenswrapper[4143]: I0313 12:35:27.188060 4143 csr.go:257] certificate signing request csr-kbwpn is issued Mar 13 12:35:27.686859 master-0 kubenswrapper[4143]: I0313 12:35:27.686756 4143 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" Mar 13 12:35:27.849600 master-0 kubenswrapper[4143]: I0313 12:35:27.849553 4143 nodeinfomanager.go:401] Failed to publish CSINode: nodes "master-0" not found Mar 13 12:35:27.864046 master-0 kubenswrapper[4143]: I0313 12:35:27.863986 4143 nodeinfomanager.go:401] Failed to publish CSINode: nodes "master-0" not found Mar 13 12:35:27.921747 master-0 kubenswrapper[4143]: I0313 12:35:27.921697 4143 nodeinfomanager.go:401] Failed to publish CSINode: nodes "master-0" not found Mar 13 12:35:28.184724 master-0 kubenswrapper[4143]: I0313 12:35:28.184678 4143 nodeinfomanager.go:401] Failed to publish CSINode: nodes "master-0" not found Mar 13 12:35:28.184724 master-0 kubenswrapper[4143]: E0313 12:35:28.184717 4143 csi_plugin.go:305] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "master-0" not found Mar 13 12:35:28.189858 master-0 kubenswrapper[4143]: I0313 12:35:28.189810 4143 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2026-03-14 12:26:40 +0000 UTC, rotation deadline is 2026-03-14 05:46:31.188054289 +0000 UTC Mar 13 12:35:28.189858 master-0 kubenswrapper[4143]: I0313 12:35:28.189851 4143 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Waiting 17h11m2.998206786s for next certificate rotation Mar 13 12:35:28.395799 master-0 kubenswrapper[4143]: I0313 
12:35:28.395733 4143 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Mar 13 12:35:28.402573 master-0 kubenswrapper[4143]: I0313 12:35:28.402533 4143 nodeinfomanager.go:401] Failed to publish CSINode: nodes "master-0" not found Mar 13 12:35:28.420350 master-0 kubenswrapper[4143]: I0313 12:35:28.420300 4143 nodeinfomanager.go:401] Failed to publish CSINode: nodes "master-0" not found Mar 13 12:35:28.479527 master-0 kubenswrapper[4143]: I0313 12:35:28.479399 4143 nodeinfomanager.go:401] Failed to publish CSINode: nodes "master-0" not found Mar 13 12:35:28.754560 master-0 kubenswrapper[4143]: I0313 12:35:28.754451 4143 nodeinfomanager.go:401] Failed to publish CSINode: nodes "master-0" not found Mar 13 12:35:28.754560 master-0 kubenswrapper[4143]: E0313 12:35:28.754499 4143 csi_plugin.go:305] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "master-0" not found Mar 13 12:35:28.828989 master-0 kubenswrapper[4143]: I0313 12:35:28.828905 4143 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 13 12:35:28.830407 master-0 kubenswrapper[4143]: I0313 12:35:28.830350 4143 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Mar 13 12:35:28.830407 master-0 kubenswrapper[4143]: I0313 12:35:28.830395 4143 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Mar 13 12:35:28.830407 master-0 kubenswrapper[4143]: I0313 12:35:28.830403 4143 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Mar 13 12:35:28.830544 master-0 kubenswrapper[4143]: I0313 12:35:28.830479 4143 kubelet_node_status.go:76] "Attempting to register node" node="master-0" Mar 13 12:35:28.832978 master-0 kubenswrapper[4143]: E0313 12:35:28.832935 4143 nodelease.go:49] "Failed to 
get node when trying to set owner ref to the node lease" err="nodes \"master-0\" not found" node="master-0" Mar 13 12:35:28.838903 master-0 kubenswrapper[4143]: I0313 12:35:28.838866 4143 kubelet_node_status.go:79] "Successfully registered node" node="master-0" Mar 13 12:35:28.838903 master-0 kubenswrapper[4143]: E0313 12:35:28.838888 4143 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": node \"master-0\" not found" Mar 13 12:35:28.914313 master-0 kubenswrapper[4143]: E0313 12:35:28.914273 4143 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 13 12:35:29.015329 master-0 kubenswrapper[4143]: E0313 12:35:29.015155 4143 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 13 12:35:29.116311 master-0 kubenswrapper[4143]: E0313 12:35:29.116255 4143 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 13 12:35:29.217105 master-0 kubenswrapper[4143]: E0313 12:35:29.217051 4143 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 13 12:35:29.318027 master-0 kubenswrapper[4143]: E0313 12:35:29.317958 4143 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 13 12:35:29.419090 master-0 kubenswrapper[4143]: E0313 12:35:29.419029 4143 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 13 12:35:29.519685 master-0 kubenswrapper[4143]: E0313 12:35:29.519625 4143 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 13 12:35:29.631313 master-0 kubenswrapper[4143]: E0313 12:35:29.631186 4143 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 13 12:35:29.731954 
master-0 kubenswrapper[4143]: E0313 12:35:29.731899 4143 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 13 12:35:29.832263 master-0 kubenswrapper[4143]: E0313 12:35:29.832200 4143 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 13 12:35:29.906907 master-0 kubenswrapper[4143]: I0313 12:35:29.906782 4143 certificate_manager.go:356] kubernetes.io/kubelet-serving: Rotating certificates Mar 13 12:35:29.914662 master-0 kubenswrapper[4143]: I0313 12:35:29.914617 4143 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146 Mar 13 12:35:29.932635 master-0 kubenswrapper[4143]: E0313 12:35:29.932539 4143 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 13 12:35:30.033190 master-0 kubenswrapper[4143]: E0313 12:35:30.033155 4143 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 13 12:35:30.133937 master-0 kubenswrapper[4143]: E0313 12:35:30.133875 4143 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 13 12:35:30.234917 master-0 kubenswrapper[4143]: E0313 12:35:30.234783 4143 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 13 12:35:30.335741 master-0 kubenswrapper[4143]: E0313 12:35:30.335668 4143 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 13 12:35:30.435882 master-0 kubenswrapper[4143]: E0313 12:35:30.435833 4143 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 13 12:35:30.536925 master-0 kubenswrapper[4143]: E0313 12:35:30.536850 4143 kubelet_node_status.go:503] "Error getting the current node from lister" 
err="node \"master-0\" not found" Mar 13 12:35:30.637319 master-0 kubenswrapper[4143]: E0313 12:35:30.637270 4143 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 13 12:35:30.738446 master-0 kubenswrapper[4143]: E0313 12:35:30.738386 4143 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 13 12:35:30.839288 master-0 kubenswrapper[4143]: E0313 12:35:30.839163 4143 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 13 12:35:30.939896 master-0 kubenswrapper[4143]: E0313 12:35:30.939777 4143 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 13 12:35:31.040398 master-0 kubenswrapper[4143]: E0313 12:35:31.040332 4143 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 13 12:35:31.141235 master-0 kubenswrapper[4143]: E0313 12:35:31.141095 4143 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 13 12:35:31.241932 master-0 kubenswrapper[4143]: E0313 12:35:31.241839 4143 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 13 12:35:31.342811 master-0 kubenswrapper[4143]: E0313 12:35:31.342697 4143 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 13 12:35:31.443018 master-0 kubenswrapper[4143]: E0313 12:35:31.442897 4143 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 13 12:35:31.543686 master-0 kubenswrapper[4143]: E0313 12:35:31.543573 4143 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 13 12:35:31.644210 master-0 kubenswrapper[4143]: E0313 12:35:31.644111 4143 
kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Mar 13 12:35:31.744901 master-0 kubenswrapper[4143]: E0313 12:35:31.744721 4143 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Mar 13 12:35:31.845685 master-0 kubenswrapper[4143]: E0313 12:35:31.845562 4143 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Mar 13 12:35:31.946249 master-0 kubenswrapper[4143]: E0313 12:35:31.946155 4143 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Mar 13 12:35:32.042987 master-0 kubenswrapper[4143]: I0313 12:35:32.042918 4143 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160
Mar 13 12:35:32.046310 master-0 kubenswrapper[4143]: E0313 12:35:32.046270 4143 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Mar 13 12:35:32.147211 master-0 kubenswrapper[4143]: E0313 12:35:32.147132 4143 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Mar 13 12:35:32.248009 master-0 kubenswrapper[4143]: E0313 12:35:32.247952 4143 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Mar 13 12:35:32.349111 master-0 kubenswrapper[4143]: E0313 12:35:32.348958 4143 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Mar 13 12:35:32.450128 master-0 kubenswrapper[4143]: E0313 12:35:32.450050 4143 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Mar 13 12:35:32.550539 master-0 kubenswrapper[4143]: E0313 12:35:32.550477 4143 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Mar 13 12:35:32.651489 master-0 kubenswrapper[4143]: E0313 12:35:32.651338 4143 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Mar 13 12:35:32.751907 master-0 kubenswrapper[4143]: E0313 12:35:32.751621 4143 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Mar 13 12:35:32.852468 master-0 kubenswrapper[4143]: E0313 12:35:32.852388 4143 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Mar 13 12:35:32.953123 master-0 kubenswrapper[4143]: E0313 12:35:32.952991 4143 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Mar 13 12:35:33.053770 master-0 kubenswrapper[4143]: E0313 12:35:33.053660 4143 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Mar 13 12:35:33.082535 master-0 kubenswrapper[4143]: I0313 12:35:33.082428 4143 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 13 12:35:33.083882 master-0 kubenswrapper[4143]: I0313 12:35:33.083811 4143 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Mar 13 12:35:33.083882 master-0 kubenswrapper[4143]: I0313 12:35:33.083846 4143 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Mar 13 12:35:33.083882 master-0 kubenswrapper[4143]: I0313 12:35:33.083855 4143 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Mar 13 12:35:33.084247 master-0 kubenswrapper[4143]: I0313 12:35:33.084216 4143 scope.go:117] "RemoveContainer" containerID="9c887f2b6cfcfcc1f3ea186daee81cbe3bce3c155cfd4e9bbac88f712c489339"
Mar 13 12:35:33.084448 master-0 kubenswrapper[4143]: E0313 12:35:33.084408 4143 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-rbac-proxy-crio\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-rbac-proxy-crio pod=kube-rbac-proxy-crio-master-0_openshift-machine-config-operator(e9add8df47182fc2eaf8cd78016ebe72)\"" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" podUID="e9add8df47182fc2eaf8cd78016ebe72"
Mar 13 12:35:33.154114 master-0 kubenswrapper[4143]: E0313 12:35:33.154016 4143 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Mar 13 12:35:33.255325 master-0 kubenswrapper[4143]: E0313 12:35:33.255126 4143 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Mar 13 12:35:33.356183 master-0 kubenswrapper[4143]: E0313 12:35:33.356099 4143 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Mar 13 12:35:33.456681 master-0 kubenswrapper[4143]: E0313 12:35:33.456601 4143 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Mar 13 12:35:33.557429 master-0 kubenswrapper[4143]: E0313 12:35:33.557358 4143 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Mar 13 12:35:33.658398 master-0 kubenswrapper[4143]: E0313 12:35:33.658316 4143 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Mar 13 12:35:33.759343 master-0 kubenswrapper[4143]: E0313 12:35:33.759233 4143 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Mar 13 12:35:33.860319 master-0 kubenswrapper[4143]: E0313 12:35:33.860130 4143 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Mar 13 12:35:33.960448 master-0 kubenswrapper[4143]: E0313 12:35:33.960356 4143 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Mar 13 12:35:34.061338 master-0 kubenswrapper[4143]: E0313 12:35:34.061282 4143 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Mar 13 12:35:34.162347 master-0 kubenswrapper[4143]: E0313 12:35:34.162184 4143 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Mar 13 12:35:34.263186 master-0 kubenswrapper[4143]: E0313 12:35:34.263087 4143 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Mar 13 12:35:34.363364 master-0 kubenswrapper[4143]: E0313 12:35:34.363271 4143 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Mar 13 12:35:34.464110 master-0 kubenswrapper[4143]: E0313 12:35:34.463921 4143 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Mar 13 12:35:34.564969 master-0 kubenswrapper[4143]: E0313 12:35:34.564708 4143 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Mar 13 12:35:34.665660 master-0 kubenswrapper[4143]: E0313 12:35:34.665582 4143 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Mar 13 12:35:34.766261 master-0 kubenswrapper[4143]: E0313 12:35:34.766084 4143 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Mar 13 12:35:34.867278 master-0 kubenswrapper[4143]: E0313 12:35:34.867182 4143 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Mar 13 12:35:34.968219 master-0 kubenswrapper[4143]: E0313 12:35:34.968110 4143 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Mar 13 12:35:35.051678 master-0 kubenswrapper[4143]: E0313 12:35:35.051577 4143 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"master-0\" not found"
Mar 13 12:35:35.068994 master-0 kubenswrapper[4143]: E0313 12:35:35.068928 4143 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Mar 13 12:35:35.169547 master-0 kubenswrapper[4143]: E0313 12:35:35.169495 4143 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Mar 13 12:35:35.270722 master-0 kubenswrapper[4143]: E0313 12:35:35.270647 4143 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Mar 13 12:35:35.371266 master-0 kubenswrapper[4143]: E0313 12:35:35.371024 4143 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Mar 13 12:35:35.471879 master-0 kubenswrapper[4143]: E0313 12:35:35.471796 4143 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Mar 13 12:35:35.572237 master-0 kubenswrapper[4143]: E0313 12:35:35.572165 4143 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Mar 13 12:35:35.672942 master-0 kubenswrapper[4143]: E0313 12:35:35.672786 4143 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Mar 13 12:35:35.773923 master-0 kubenswrapper[4143]: E0313 12:35:35.773825 4143 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Mar 13 12:35:35.874835 master-0 kubenswrapper[4143]: E0313 12:35:35.874752 4143 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Mar 13 12:35:35.975956 master-0 kubenswrapper[4143]: E0313 12:35:35.975844 4143 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Mar 13 12:35:36.076956 master-0 kubenswrapper[4143]: E0313 12:35:36.076873 4143 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Mar 13 12:35:36.177272 master-0 kubenswrapper[4143]: E0313 12:35:36.177213 4143 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Mar 13 12:35:36.278279 master-0 kubenswrapper[4143]: E0313 12:35:36.278208 4143 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Mar 13 12:35:36.379455 master-0 kubenswrapper[4143]: E0313 12:35:36.379329 4143 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Mar 13 12:35:36.479672 master-0 kubenswrapper[4143]: E0313 12:35:36.479561 4143 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Mar 13 12:35:36.580522 master-0 kubenswrapper[4143]: E0313 12:35:36.580321 4143 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Mar 13 12:35:36.681457 master-0 kubenswrapper[4143]: E0313 12:35:36.681372 4143 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Mar 13 12:35:36.782636 master-0 kubenswrapper[4143]: E0313 12:35:36.782532 4143 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Mar 13 12:35:36.883602 master-0 kubenswrapper[4143]: E0313 12:35:36.883474 4143 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Mar 13 12:35:36.984634 master-0 kubenswrapper[4143]: E0313 12:35:36.984554 4143 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Mar 13 12:35:37.085434 master-0 kubenswrapper[4143]: E0313 12:35:37.085365 4143 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Mar 13 12:35:37.186384 master-0 kubenswrapper[4143]: E0313 12:35:37.186244 4143 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Mar 13 12:35:37.287285 master-0 kubenswrapper[4143]: E0313 12:35:37.287199 4143 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Mar 13 12:35:37.388304 master-0 kubenswrapper[4143]: E0313 12:35:37.388221 4143 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Mar 13 12:35:37.488915 master-0 kubenswrapper[4143]: E0313 12:35:37.488761 4143 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Mar 13 12:35:37.588932 master-0 kubenswrapper[4143]: E0313 12:35:37.588867 4143 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Mar 13 12:35:37.690084 master-0 kubenswrapper[4143]: E0313 12:35:37.689975 4143 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Mar 13 12:35:37.790646 master-0 kubenswrapper[4143]: E0313 12:35:37.790581 4143 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Mar 13 12:35:37.891677 master-0 kubenswrapper[4143]: E0313 12:35:37.891617 4143 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Mar 13 12:35:37.991909 master-0 kubenswrapper[4143]: E0313 12:35:37.991803 4143 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Mar 13 12:35:38.093164 master-0 kubenswrapper[4143]: E0313 12:35:38.092981 4143 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Mar 13 12:35:38.193855 master-0 kubenswrapper[4143]: E0313 12:35:38.193779 4143 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Mar 13 12:35:38.294716 master-0 kubenswrapper[4143]: E0313 12:35:38.294638 4143 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Mar 13 12:35:38.395857 master-0 kubenswrapper[4143]: E0313 12:35:38.395656 4143 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Mar 13 12:35:38.496222 master-0 kubenswrapper[4143]: E0313 12:35:38.496119 4143 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Mar 13 12:35:38.596986 master-0 kubenswrapper[4143]: E0313 12:35:38.596906 4143 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Mar 13 12:35:38.697720 master-0 kubenswrapper[4143]: E0313 12:35:38.697597 4143 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Mar 13 12:35:38.798563 master-0 kubenswrapper[4143]: E0313 12:35:38.798503 4143 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Mar 13 12:35:38.898911 master-0 kubenswrapper[4143]: E0313 12:35:38.898849 4143 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Mar 13 12:35:38.999656 master-0 kubenswrapper[4143]: E0313 12:35:38.999535 4143 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Mar 13 12:35:39.100613 master-0 kubenswrapper[4143]: E0313 12:35:39.100540 4143 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Mar 13 12:35:39.110972 master-0 kubenswrapper[4143]: E0313 12:35:39.110897 4143 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": node \"master-0\" not found"
Mar 13 12:35:39.201654 master-0 kubenswrapper[4143]: E0313 12:35:39.201591 4143 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Mar 13 12:35:39.301921 master-0 kubenswrapper[4143]: E0313 12:35:39.301861 4143 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Mar 13 12:35:39.402775 master-0 kubenswrapper[4143]: E0313 12:35:39.402712 4143 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Mar 13 12:35:39.446709 master-0 kubenswrapper[4143]: I0313 12:35:39.446661 4143 csr.go:261] certificate signing request csr-zzgcb is approved, waiting to be issued
Mar 13 12:35:39.455263 master-0 kubenswrapper[4143]: I0313 12:35:39.455222 4143 csr.go:257] certificate signing request csr-zzgcb is issued
Mar 13 12:35:39.503099 master-0 kubenswrapper[4143]: E0313 12:35:39.503042 4143 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Mar 13 12:35:39.603298 master-0 kubenswrapper[4143]: E0313 12:35:39.603163 4143 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Mar 13 12:35:39.704380 master-0 kubenswrapper[4143]: E0313 12:35:39.704322 4143 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Mar 13 12:35:39.804837 master-0 kubenswrapper[4143]: E0313 12:35:39.804784 4143 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Mar 13 12:35:39.905379 master-0 kubenswrapper[4143]: E0313 12:35:39.905206 4143 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Mar 13 12:35:40.006331 master-0 kubenswrapper[4143]: E0313 12:35:40.006274 4143 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Mar 13 12:35:40.106581 master-0 kubenswrapper[4143]: E0313 12:35:40.106541 4143 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Mar 13 12:35:40.207415 master-0 kubenswrapper[4143]: E0313 12:35:40.207249 4143 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Mar 13 12:35:40.308472 master-0 kubenswrapper[4143]: E0313 12:35:40.308390 4143 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Mar 13 12:35:40.409715 master-0 kubenswrapper[4143]: E0313 12:35:40.409419 4143 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Mar 13 12:35:40.456823 master-0 kubenswrapper[4143]: I0313 12:35:40.456696 4143 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-03-14 12:26:40 +0000 UTC, rotation deadline is 2026-03-14 06:38:03.3365402 +0000 UTC
Mar 13 12:35:40.456823 master-0 kubenswrapper[4143]: I0313 12:35:40.456777 4143 certificate_manager.go:356] kubernetes.io/kubelet-serving: Waiting 18h2m22.879773971s for next certificate rotation
Mar 13 12:35:40.510658 master-0 kubenswrapper[4143]: E0313 12:35:40.510439 4143 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Mar 13 12:35:40.611367 master-0 kubenswrapper[4143]: E0313 12:35:40.611266 4143 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Mar 13 12:35:40.712290 master-0 kubenswrapper[4143]: E0313 12:35:40.712208 4143 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Mar 13 12:35:40.812768 master-0 kubenswrapper[4143]: E0313 12:35:40.812684 4143 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Mar 13 12:35:40.913667 master-0 kubenswrapper[4143]: E0313 12:35:40.913553 4143 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Mar 13 12:35:41.014579 master-0 kubenswrapper[4143]: E0313 12:35:41.014462 4143 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Mar 13 12:35:41.114844 master-0 kubenswrapper[4143]: E0313 12:35:41.114663 4143 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Mar 13 12:35:41.215487 master-0 kubenswrapper[4143]: E0313 12:35:41.215397 4143 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Mar 13 12:35:41.316630 master-0 kubenswrapper[4143]: E0313 12:35:41.316528 4143 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Mar 13 12:35:41.417903 master-0 kubenswrapper[4143]: E0313 12:35:41.417686 4143 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Mar 13 12:35:41.457581 master-0 kubenswrapper[4143]: I0313 12:35:41.457474 4143 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-03-14 12:26:40 +0000 UTC, rotation deadline is 2026-03-14 07:29:51.102183078 +0000 UTC
Mar 13 12:35:41.457581 master-0 kubenswrapper[4143]: I0313 12:35:41.457530 4143 certificate_manager.go:356] kubernetes.io/kubelet-serving: Waiting 18h54m9.644658071s for next certificate rotation
Mar 13 12:35:41.518202 master-0 kubenswrapper[4143]: E0313 12:35:41.518099 4143 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Mar 13 12:35:41.619035 master-0 kubenswrapper[4143]: E0313 12:35:41.618931 4143 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Mar 13 12:35:41.719409 master-0 kubenswrapper[4143]: E0313 12:35:41.719234 4143 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Mar 13 12:35:41.820073 master-0 kubenswrapper[4143]: E0313 12:35:41.819989 4143 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Mar 13 12:35:41.920695 master-0 kubenswrapper[4143]: E0313 12:35:41.920595 4143 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Mar 13 12:35:42.021629 master-0 kubenswrapper[4143]: E0313 12:35:42.021572 4143 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Mar 13 12:35:42.122107 master-0 kubenswrapper[4143]: E0313 12:35:42.122025 4143 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Mar 13 12:35:42.223334 master-0 kubenswrapper[4143]: E0313 12:35:42.223193 4143 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Mar 13 12:35:42.324256 master-0 kubenswrapper[4143]: E0313 12:35:42.324107 4143 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Mar 13 12:35:42.424409 master-0 kubenswrapper[4143]: E0313 12:35:42.424368 4143 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Mar 13 12:35:42.524598 master-0 kubenswrapper[4143]: E0313 12:35:42.524496 4143 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Mar 13 12:35:42.625816 master-0 kubenswrapper[4143]: E0313 12:35:42.625565 4143 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Mar 13 12:35:42.726647 master-0 kubenswrapper[4143]: E0313 12:35:42.726559 4143 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Mar 13 12:35:42.827188 master-0 kubenswrapper[4143]: E0313 12:35:42.827038 4143 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Mar 13 12:35:42.927815 master-0 kubenswrapper[4143]: E0313 12:35:42.927650 4143 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Mar 13 12:35:43.028919 master-0 kubenswrapper[4143]: E0313 12:35:43.028797 4143 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Mar 13 12:35:43.129841 master-0 kubenswrapper[4143]: E0313 12:35:43.129745 4143 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Mar 13 12:35:43.230080 master-0 kubenswrapper[4143]: E0313 12:35:43.229905 4143 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Mar 13 12:35:43.330865 master-0 kubenswrapper[4143]: E0313 12:35:43.330786 4143 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Mar 13 12:35:43.432002 master-0 kubenswrapper[4143]: E0313 12:35:43.431938 4143 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Mar 13 12:35:43.532091 master-0 kubenswrapper[4143]: E0313 12:35:43.532036 4143 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Mar 13 12:35:43.632891 master-0 kubenswrapper[4143]: E0313 12:35:43.632812 4143 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Mar 13 12:35:43.733287 master-0 kubenswrapper[4143]: E0313 12:35:43.733116 4143 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Mar 13 12:35:43.834415 master-0 kubenswrapper[4143]: E0313 12:35:43.834157 4143 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Mar 13 12:35:43.934879 master-0 kubenswrapper[4143]: E0313 12:35:43.934798 4143 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Mar 13 12:35:44.035872 master-0 kubenswrapper[4143]: E0313 12:35:44.035763 4143 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Mar 13 12:35:44.136358 master-0 kubenswrapper[4143]: E0313 12:35:44.136186 4143 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Mar 13 12:35:44.237331 master-0 kubenswrapper[4143]: E0313 12:35:44.237235 4143 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Mar 13 12:35:44.338498 master-0 kubenswrapper[4143]: E0313 12:35:44.338382 4143 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Mar 13 12:35:44.439112 master-0 kubenswrapper[4143]: E0313 12:35:44.438978 4143 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Mar 13 12:35:44.539394 master-0 kubenswrapper[4143]: E0313 12:35:44.539297 4143 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Mar 13 12:35:44.640568 master-0 kubenswrapper[4143]: E0313 12:35:44.640495 4143 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Mar 13 12:35:44.741669 master-0 kubenswrapper[4143]: E0313 12:35:44.741491 4143 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Mar 13 12:35:44.841762 master-0 kubenswrapper[4143]: E0313 12:35:44.841662 4143 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Mar 13 12:35:44.942284 master-0 kubenswrapper[4143]: E0313 12:35:44.942182 4143 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Mar 13 12:35:45.043117 master-0 kubenswrapper[4143]: E0313 12:35:45.043021 4143 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Mar 13 12:35:45.052727 master-0 kubenswrapper[4143]: E0313 12:35:45.052633 4143 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"master-0\" not found"
Mar 13 12:35:45.144303 master-0 kubenswrapper[4143]: E0313 12:35:45.144201 4143 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Mar 13 12:35:45.245359 master-0 kubenswrapper[4143]: E0313 12:35:45.245299 4143 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Mar 13 12:35:45.346452 master-0 kubenswrapper[4143]: E0313 12:35:45.346288 4143 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Mar 13 12:35:45.446692 master-0 kubenswrapper[4143]: E0313 12:35:45.446606 4143 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Mar 13 12:35:45.547710 master-0 kubenswrapper[4143]: E0313 12:35:45.547623 4143 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Mar 13 12:35:45.648834 master-0 kubenswrapper[4143]: E0313 12:35:45.648656 4143 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Mar 13 12:35:45.749788 master-0 kubenswrapper[4143]: E0313 12:35:45.749721 4143 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Mar 13 12:35:45.850168 master-0 kubenswrapper[4143]: E0313 12:35:45.850040 4143 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Mar 13 12:35:45.950715 master-0 kubenswrapper[4143]: E0313 12:35:45.950536 4143 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Mar 13 12:35:46.051236 master-0 kubenswrapper[4143]: E0313 12:35:46.051183 4143 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Mar 13 12:35:46.081740 master-0 kubenswrapper[4143]: I0313 12:35:46.081666 4143 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 13 12:35:46.084292 master-0 kubenswrapper[4143]: I0313 12:35:46.084260 4143 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Mar 13 12:35:46.084423 master-0 kubenswrapper[4143]: I0313 12:35:46.084309 4143 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Mar 13 12:35:46.084423 master-0 kubenswrapper[4143]: I0313 12:35:46.084327 4143 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Mar 13 12:35:46.085316 master-0 kubenswrapper[4143]: I0313 12:35:46.085287 4143 scope.go:117] "RemoveContainer" containerID="9c887f2b6cfcfcc1f3ea186daee81cbe3bce3c155cfd4e9bbac88f712c489339"
Mar 13 12:35:46.152158 master-0 kubenswrapper[4143]: E0313 12:35:46.152014 4143 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Mar 13 12:35:46.253077 master-0 kubenswrapper[4143]: E0313 12:35:46.252921 4143 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Mar 13 12:35:46.353530 master-0 kubenswrapper[4143]: E0313 12:35:46.353438 4143 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Mar 13 12:35:46.437571 master-0 kubenswrapper[4143]: I0313 12:35:46.437481 4143 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-config-operator_kube-rbac-proxy-crio-master-0_e9add8df47182fc2eaf8cd78016ebe72/kube-rbac-proxy-crio/2.log"
Mar 13 12:35:46.438246 master-0 kubenswrapper[4143]: I0313 12:35:46.438204 4143 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" event={"ID":"e9add8df47182fc2eaf8cd78016ebe72","Type":"ContainerStarted","Data":"958b1ab7ab943f0d9820d78ce8605298936c74cbbe3326599eac945aeec4ecce"}
Mar 13 12:35:46.438402 master-0 kubenswrapper[4143]: I0313 12:35:46.438372 4143 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 13 12:35:46.439452 master-0 kubenswrapper[4143]: I0313 12:35:46.439405 4143 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Mar 13 12:35:46.439516 master-0 kubenswrapper[4143]: I0313 12:35:46.439458 4143 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Mar 13 12:35:46.439516 master-0 kubenswrapper[4143]: I0313 12:35:46.439473 4143 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Mar 13 12:35:46.453983 master-0 kubenswrapper[4143]: E0313 12:35:46.453934 4143 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Mar 13 12:35:46.555079 master-0 kubenswrapper[4143]: E0313 12:35:46.554995 4143 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Mar 13 12:35:46.655864 master-0 kubenswrapper[4143]: E0313 12:35:46.655788 4143 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Mar 13 12:35:46.756605 master-0 kubenswrapper[4143]: E0313 12:35:46.756520 4143 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Mar 13 12:35:46.857916 master-0 kubenswrapper[4143]: E0313 12:35:46.857711 4143 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Mar 13 12:35:46.957962 master-0 kubenswrapper[4143]: E0313 12:35:46.957885 4143 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Mar 13 12:35:47.058623 master-0 kubenswrapper[4143]: E0313 12:35:47.058523 4143 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Mar 13 12:35:47.159010 master-0 kubenswrapper[4143]: E0313 12:35:47.158884 4143 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Mar 13 12:35:47.259183 master-0 kubenswrapper[4143]: E0313 12:35:47.259055 4143 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Mar 13 12:35:47.360231 master-0 kubenswrapper[4143]: E0313 12:35:47.360127 4143 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Mar 13 12:35:47.461220 master-0 kubenswrapper[4143]: E0313 12:35:47.460962 4143 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Mar 13 12:35:47.561751 master-0 kubenswrapper[4143]: E0313 12:35:47.561654 4143 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Mar 13 12:35:47.662751 master-0 kubenswrapper[4143]: E0313 12:35:47.662676 4143 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Mar 13 12:35:47.763503 master-0 kubenswrapper[4143]: E0313 12:35:47.763344 4143 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Mar 13 12:35:47.863801 master-0 kubenswrapper[4143]: E0313 12:35:47.863711 4143 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Mar 13 12:35:47.964637 master-0 kubenswrapper[4143]: E0313 12:35:47.964528 4143 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Mar 13 12:35:48.065823 master-0 kubenswrapper[4143]: E0313 12:35:48.065711 4143 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Mar 13 12:35:48.166720 master-0 kubenswrapper[4143]: E0313 12:35:48.166635 4143 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Mar 13 12:35:48.267894 master-0 kubenswrapper[4143]: E0313 12:35:48.267819 4143 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Mar 13 12:35:48.368966 master-0 kubenswrapper[4143]: E0313 12:35:48.368797 4143 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Mar 13 12:35:48.469042 master-0 kubenswrapper[4143]: E0313 12:35:48.468959 4143 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Mar 13 12:35:48.569745 master-0 kubenswrapper[4143]: E0313 12:35:48.569681 4143 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Mar 13 12:35:48.670471 master-0 kubenswrapper[4143]: E0313 12:35:48.670372 4143 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Mar 13 12:35:48.771581 master-0 kubenswrapper[4143]: E0313 12:35:48.771525 4143 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Mar 13 12:35:48.872096 master-0 kubenswrapper[4143]: E0313 12:35:48.872047 4143 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Mar 13 12:35:48.973192 master-0 kubenswrapper[4143]: E0313 12:35:48.973060 4143 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Mar 13 12:35:49.073980 master-0 kubenswrapper[4143]: E0313 12:35:49.073893 4143 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Mar 13 12:35:49.174481 master-0 kubenswrapper[4143]: E0313 12:35:49.174419 4143 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Mar 13 12:35:49.180847 master-0 kubenswrapper[4143]: E0313 12:35:49.180730 4143 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": node \"master-0\" not found"
Mar 13 12:35:49.275571 master-0 kubenswrapper[4143]: E0313 12:35:49.275510 4143 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Mar 13 12:35:49.376334 master-0 kubenswrapper[4143]: E0313 12:35:49.376229 4143 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Mar 13 12:35:49.477408 master-0 kubenswrapper[4143]: E0313 12:35:49.477304 4143 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Mar 13 12:35:49.578086 master-0 kubenswrapper[4143]: E0313 12:35:49.577898 4143 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Mar 13 12:35:49.678693 master-0 kubenswrapper[4143]: E0313 12:35:49.678625 4143 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Mar 13 12:35:49.779556 master-0 kubenswrapper[4143]: E0313 12:35:49.779492 4143 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Mar 13 12:35:49.879856 master-0 kubenswrapper[4143]: E0313 12:35:49.879656 4143 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Mar 13 12:35:49.980052 master-0 kubenswrapper[4143]: E0313 12:35:49.979963 4143 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Mar 13 12:35:50.081077 master-0 kubenswrapper[4143]: E0313 12:35:50.080986 4143 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Mar 13 12:35:50.181931 master-0 kubenswrapper[4143]: E0313 12:35:50.181736 4143 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Mar 13 12:35:50.282958 master-0 kubenswrapper[4143]: E0313 12:35:50.282874 4143 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Mar 13 12:35:50.383798 master-0 kubenswrapper[4143]: E0313 12:35:50.383684 4143 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Mar 13 12:35:50.484370 master-0 kubenswrapper[4143]: E0313 12:35:50.484222 4143 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Mar 13 12:35:50.585076 master-0 kubenswrapper[4143]: E0313 12:35:50.585018 4143 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Mar 13 12:35:50.686125 master-0 kubenswrapper[4143]: E0313 12:35:50.686032 4143 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Mar 13 12:35:50.786496 master-0 kubenswrapper[4143]: E0313 12:35:50.786425 4143 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Mar 13 12:35:50.886648 master-0 kubenswrapper[4143]: E0313 12:35:50.886560 4143 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Mar 13 12:35:50.987019 master-0 kubenswrapper[4143]: E0313 12:35:50.986942 4143 
kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 13 12:35:51.087917 master-0 kubenswrapper[4143]: E0313 12:35:51.087760 4143 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 13 12:35:51.188167 master-0 kubenswrapper[4143]: E0313 12:35:51.188046 4143 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 13 12:35:51.288690 master-0 kubenswrapper[4143]: E0313 12:35:51.288615 4143 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 13 12:35:51.389937 master-0 kubenswrapper[4143]: E0313 12:35:51.389700 4143 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 13 12:35:51.490117 master-0 kubenswrapper[4143]: E0313 12:35:51.490037 4143 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 13 12:35:51.590895 master-0 kubenswrapper[4143]: E0313 12:35:51.590785 4143 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 13 12:35:51.691858 master-0 kubenswrapper[4143]: E0313 12:35:51.691708 4143 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 13 12:35:51.792823 master-0 kubenswrapper[4143]: E0313 12:35:51.792720 4143 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 13 12:35:51.893424 master-0 kubenswrapper[4143]: E0313 12:35:51.893347 4143 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 13 12:35:51.994739 master-0 kubenswrapper[4143]: E0313 12:35:51.994560 4143 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 13 12:35:52.095340 
master-0 kubenswrapper[4143]: E0313 12:35:52.095251 4143 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 13 12:35:52.196319 master-0 kubenswrapper[4143]: E0313 12:35:52.196241 4143 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 13 12:35:52.297441 master-0 kubenswrapper[4143]: E0313 12:35:52.297336 4143 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 13 12:35:52.398609 master-0 kubenswrapper[4143]: E0313 12:35:52.398500 4143 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 13 12:35:52.499758 master-0 kubenswrapper[4143]: E0313 12:35:52.499668 4143 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 13 12:35:52.600705 master-0 kubenswrapper[4143]: E0313 12:35:52.600500 4143 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 13 12:35:52.701280 master-0 kubenswrapper[4143]: E0313 12:35:52.701212 4143 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 13 12:35:52.802309 master-0 kubenswrapper[4143]: E0313 12:35:52.802194 4143 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 13 12:35:52.902849 master-0 kubenswrapper[4143]: E0313 12:35:52.902663 4143 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 13 12:35:53.003638 master-0 kubenswrapper[4143]: E0313 12:35:53.003565 4143 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 13 12:35:53.104289 master-0 kubenswrapper[4143]: E0313 12:35:53.104185 4143 kubelet_node_status.go:503] "Error getting the current node from lister" 
err="node \"master-0\" not found" Mar 13 12:35:53.205480 master-0 kubenswrapper[4143]: E0313 12:35:53.205308 4143 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 13 12:35:53.305512 master-0 kubenswrapper[4143]: E0313 12:35:53.305425 4143 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 13 12:35:53.405690 master-0 kubenswrapper[4143]: E0313 12:35:53.405608 4143 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 13 12:35:53.506907 master-0 kubenswrapper[4143]: E0313 12:35:53.506728 4143 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 13 12:35:53.607598 master-0 kubenswrapper[4143]: E0313 12:35:53.607506 4143 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 13 12:35:53.708693 master-0 kubenswrapper[4143]: E0313 12:35:53.708592 4143 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 13 12:35:53.809275 master-0 kubenswrapper[4143]: E0313 12:35:53.809120 4143 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 13 12:35:53.914075 master-0 kubenswrapper[4143]: E0313 12:35:53.913642 4143 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 13 12:35:54.014422 master-0 kubenswrapper[4143]: E0313 12:35:54.014364 4143 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 13 12:35:54.114997 master-0 kubenswrapper[4143]: E0313 12:35:54.114866 4143 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 13 12:35:54.215032 master-0 kubenswrapper[4143]: E0313 12:35:54.214937 4143 
kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 13 12:35:54.315446 master-0 kubenswrapper[4143]: E0313 12:35:54.315366 4143 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 13 12:35:54.416482 master-0 kubenswrapper[4143]: E0313 12:35:54.416352 4143 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 13 12:35:54.517631 master-0 kubenswrapper[4143]: E0313 12:35:54.517526 4143 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 13 12:35:54.618667 master-0 kubenswrapper[4143]: E0313 12:35:54.618594 4143 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 13 12:35:54.719520 master-0 kubenswrapper[4143]: E0313 12:35:54.719315 4143 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 13 12:35:54.820479 master-0 kubenswrapper[4143]: E0313 12:35:54.820392 4143 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 13 12:35:54.921062 master-0 kubenswrapper[4143]: E0313 12:35:54.920994 4143 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 13 12:35:55.021835 master-0 kubenswrapper[4143]: E0313 12:35:55.021771 4143 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 13 12:35:55.053027 master-0 kubenswrapper[4143]: E0313 12:35:55.052928 4143 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"master-0\" not found" Mar 13 12:35:55.122497 master-0 kubenswrapper[4143]: E0313 12:35:55.122455 4143 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not 
found" Mar 13 12:35:55.223506 master-0 kubenswrapper[4143]: E0313 12:35:55.223465 4143 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 13 12:35:55.324715 master-0 kubenswrapper[4143]: E0313 12:35:55.324577 4143 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 13 12:35:55.425464 master-0 kubenswrapper[4143]: E0313 12:35:55.425392 4143 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 13 12:35:55.526654 master-0 kubenswrapper[4143]: E0313 12:35:55.526585 4143 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 13 12:35:55.627476 master-0 kubenswrapper[4143]: E0313 12:35:55.627326 4143 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 13 12:35:55.727517 master-0 kubenswrapper[4143]: E0313 12:35:55.727435 4143 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 13 12:35:55.828294 master-0 kubenswrapper[4143]: E0313 12:35:55.828200 4143 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 13 12:35:55.929379 master-0 kubenswrapper[4143]: E0313 12:35:55.929207 4143 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 13 12:35:56.029891 master-0 kubenswrapper[4143]: E0313 12:35:56.029825 4143 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 13 12:35:56.130108 master-0 kubenswrapper[4143]: E0313 12:35:56.130027 4143 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 13 12:35:56.231060 master-0 kubenswrapper[4143]: E0313 12:35:56.230916 4143 kubelet_node_status.go:503] "Error getting 
the current node from lister" err="node \"master-0\" not found" Mar 13 12:35:56.331797 master-0 kubenswrapper[4143]: E0313 12:35:56.331725 4143 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 13 12:35:56.432625 master-0 kubenswrapper[4143]: E0313 12:35:56.432560 4143 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 13 12:35:56.533499 master-0 kubenswrapper[4143]: E0313 12:35:56.533411 4143 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 13 12:35:56.634282 master-0 kubenswrapper[4143]: E0313 12:35:56.634206 4143 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 13 12:35:56.735461 master-0 kubenswrapper[4143]: E0313 12:35:56.735376 4143 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 13 12:35:56.836498 master-0 kubenswrapper[4143]: E0313 12:35:56.836358 4143 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 13 12:35:56.936897 master-0 kubenswrapper[4143]: E0313 12:35:56.936824 4143 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 13 12:35:57.037511 master-0 kubenswrapper[4143]: E0313 12:35:57.037409 4143 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 13 12:35:57.138199 master-0 kubenswrapper[4143]: E0313 12:35:57.137983 4143 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 13 12:35:57.238397 master-0 kubenswrapper[4143]: E0313 12:35:57.238309 4143 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 13 12:35:57.339407 master-0 kubenswrapper[4143]: E0313 
12:35:57.339306 4143 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 13 12:35:57.440423 master-0 kubenswrapper[4143]: E0313 12:35:57.440231 4143 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 13 12:35:57.540921 master-0 kubenswrapper[4143]: E0313 12:35:57.540798 4143 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 13 12:35:57.641898 master-0 kubenswrapper[4143]: E0313 12:35:57.641769 4143 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 13 12:35:57.742228 master-0 kubenswrapper[4143]: E0313 12:35:57.742026 4143 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 13 12:35:57.843130 master-0 kubenswrapper[4143]: E0313 12:35:57.843023 4143 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 13 12:35:57.944183 master-0 kubenswrapper[4143]: E0313 12:35:57.944087 4143 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 13 12:35:58.045034 master-0 kubenswrapper[4143]: E0313 12:35:58.044935 4143 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 13 12:35:58.145151 master-0 kubenswrapper[4143]: E0313 12:35:58.145094 4143 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 13 12:35:58.246015 master-0 kubenswrapper[4143]: E0313 12:35:58.245945 4143 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 13 12:35:58.347188 master-0 kubenswrapper[4143]: E0313 12:35:58.346975 4143 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" 
Mar 13 12:35:58.447696 master-0 kubenswrapper[4143]: E0313 12:35:58.447622 4143 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 13 12:35:58.548384 master-0 kubenswrapper[4143]: E0313 12:35:58.548312 4143 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 13 12:35:58.649219 master-0 kubenswrapper[4143]: E0313 12:35:58.649066 4143 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 13 12:35:58.750067 master-0 kubenswrapper[4143]: E0313 12:35:58.750006 4143 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 13 12:35:58.851043 master-0 kubenswrapper[4143]: E0313 12:35:58.850996 4143 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 13 12:35:58.951798 master-0 kubenswrapper[4143]: E0313 12:35:58.951667 4143 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 13 12:35:59.052243 master-0 kubenswrapper[4143]: E0313 12:35:59.052197 4143 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 13 12:35:59.153349 master-0 kubenswrapper[4143]: E0313 12:35:59.153290 4143 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 13 12:35:59.200692 master-0 kubenswrapper[4143]: E0313 12:35:59.200604 4143 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": node \"master-0\" not found" Mar 13 12:35:59.254488 master-0 kubenswrapper[4143]: E0313 12:35:59.254287 4143 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 13 12:35:59.334592 master-0 kubenswrapper[4143]: I0313 12:35:59.334527 4143 reflector.go:368] 
Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160
Mar 13 12:35:59.924042 master-0 kubenswrapper[4143]: I0313 12:35:59.923928 4143 apiserver.go:52] "Watching apiserver"
Mar 13 12:35:59.928879 master-0 kubenswrapper[4143]: I0313 12:35:59.928270 4143 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66
Mar 13 12:35:59.930868 master-0 kubenswrapper[4143]: I0313 12:35:59.928556 4143 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["assisted-installer/assisted-installer-controller-bqsgz","openshift-cluster-version/cluster-version-operator-745944c6b7-mbjxt","openshift-network-operator/network-operator-7c649bf6d4-kh6n9"]
Mar 13 12:35:59.930868 master-0 kubenswrapper[4143]: I0313 12:35:59.929470 4143 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-7c649bf6d4-kh6n9"
Mar 13 12:35:59.930868 master-0 kubenswrapper[4143]: I0313 12:35:59.929726 4143 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="assisted-installer/assisted-installer-controller-bqsgz"
Mar 13 12:35:59.930868 master-0 kubenswrapper[4143]: I0313 12:35:59.929775 4143 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-745944c6b7-mbjxt"
Mar 13 12:35:59.935605 master-0 kubenswrapper[4143]: I0313 12:35:59.934081 4143 reflector.go:368] Caches populated for *v1.ConfigMap from object-"assisted-installer"/"assisted-installer-controller-config"
Mar 13 12:35:59.935605 master-0 kubenswrapper[4143]: I0313 12:35:59.934422 4143 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert"
Mar 13 12:35:59.935605 master-0 kubenswrapper[4143]: I0313 12:35:59.934643 4143 reflector.go:368] Caches populated for *v1.ConfigMap from object-"assisted-installer"/"openshift-service-ca.crt"
Mar 13 12:35:59.935605 master-0 kubenswrapper[4143]: I0313 12:35:59.934712 4143 reflector.go:368] Caches populated for *v1.ConfigMap from object-"assisted-installer"/"kube-root-ca.crt"
Mar 13 12:35:59.935605 master-0 kubenswrapper[4143]: I0313 12:35:59.934735 4143 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt"
Mar 13 12:35:59.935605 master-0 kubenswrapper[4143]: I0313 12:35:59.934802 4143 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt"
Mar 13 12:35:59.935605 master-0 kubenswrapper[4143]: I0313 12:35:59.935081 4143 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt"
Mar 13 12:35:59.935605 master-0 kubenswrapper[4143]: I0313 12:35:59.935424 4143 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt"
Mar 13 12:35:59.936448 master-0 kubenswrapper[4143]: I0313 12:35:59.934767 4143 reflector.go:368] Caches populated for *v1.Secret from object-"assisted-installer"/"assisted-installer-controller-secret"
Mar 13 12:35:59.937272 master-0 kubenswrapper[4143]: I0313 12:35:59.934656 4143 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls"
Mar 13 12:35:59.946071 master-0 kubenswrapper[4143]: I0313 12:35:59.945987 4143 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world"
Mar 13 12:36:00.037756 master-0 kubenswrapper[4143]: I0313 12:36:00.037681 4143 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-522bl\" (UniqueName: \"kubernetes.io/projected/72ba330e-35ca-4d05-8641-a880bf30c0e7-kube-api-access-522bl\") pod \"assisted-installer-controller-bqsgz\" (UID: \"72ba330e-35ca-4d05-8641-a880bf30c0e7\") " pod="assisted-installer/assisted-installer-controller-bqsgz"
Mar 13 12:36:00.037756 master-0 kubenswrapper[4143]: I0313 12:36:00.037740 4143 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f39d7f76-0075-44c3-9101-eb2607cb176a-serving-cert\") pod \"cluster-version-operator-745944c6b7-mbjxt\" (UID: \"f39d7f76-0075-44c3-9101-eb2607cb176a\") " pod="openshift-cluster-version/cluster-version-operator-745944c6b7-mbjxt"
Mar 13 12:36:00.037756 master-0 kubenswrapper[4143]: I0313 12:36:00.037767 4143 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/f39d7f76-0075-44c3-9101-eb2607cb176a-service-ca\") pod \"cluster-version-operator-745944c6b7-mbjxt\" (UID: \"f39d7f76-0075-44c3-9101-eb2607cb176a\") " pod="openshift-cluster-version/cluster-version-operator-745944c6b7-mbjxt"
Mar 13 12:36:00.038041 master-0 kubenswrapper[4143]: I0313 12:36:00.037789 4143 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fnw9d\" (UniqueName: \"kubernetes.io/projected/4dd0fc2f-f2ee-4447-a747-04a178288cf0-kube-api-access-fnw9d\") pod \"network-operator-7c649bf6d4-kh6n9\" (UID: \"4dd0fc2f-f2ee-4447-a747-04a178288cf0\") " pod="openshift-network-operator/network-operator-7c649bf6d4-kh6n9"
Mar 13 12:36:00.038041 master-0 kubenswrapper[4143]: I0313 12:36:00.037813 4143 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-ca-bundle\" (UniqueName: \"kubernetes.io/host-path/72ba330e-35ca-4d05-8641-a880bf30c0e7-host-ca-bundle\") pod \"assisted-installer-controller-bqsgz\" (UID: \"72ba330e-35ca-4d05-8641-a880bf30c0e7\") " pod="assisted-installer/assisted-installer-controller-bqsgz"
Mar 13 12:36:00.038041 master-0 kubenswrapper[4143]: I0313 12:36:00.037836 4143 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/4dd0fc2f-f2ee-4447-a747-04a178288cf0-host-etc-kube\") pod \"network-operator-7c649bf6d4-kh6n9\" (UID: \"4dd0fc2f-f2ee-4447-a747-04a178288cf0\") " pod="openshift-network-operator/network-operator-7c649bf6d4-kh6n9"
Mar 13 12:36:00.038041 master-0 kubenswrapper[4143]: I0313 12:36:00.037919 4143 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-run-resolv-conf\" (UniqueName: \"kubernetes.io/host-path/72ba330e-35ca-4d05-8641-a880bf30c0e7-host-var-run-resolv-conf\") pod \"assisted-installer-controller-bqsgz\" (UID: \"72ba330e-35ca-4d05-8641-a880bf30c0e7\") " pod="assisted-installer/assisted-installer-controller-bqsgz"
Mar 13 12:36:00.038041 master-0 kubenswrapper[4143]: I0313 12:36:00.037964 4143 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-resolv-conf\" (UniqueName: \"kubernetes.io/host-path/72ba330e-35ca-4d05-8641-a880bf30c0e7-host-resolv-conf\") pod \"assisted-installer-controller-bqsgz\" (UID: \"72ba330e-35ca-4d05-8641-a880bf30c0e7\") " pod="assisted-installer/assisted-installer-controller-bqsgz"
Mar 13 12:36:00.038041 master-0 kubenswrapper[4143]: I0313 12:36:00.037982 4143 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/f39d7f76-0075-44c3-9101-eb2607cb176a-etc-ssl-certs\") pod \"cluster-version-operator-745944c6b7-mbjxt\" (UID: \"f39d7f76-0075-44c3-9101-eb2607cb176a\") " pod="openshift-cluster-version/cluster-version-operator-745944c6b7-mbjxt"
Mar 13 12:36:00.038041 master-0 kubenswrapper[4143]: I0313 12:36:00.038003 4143 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/f39d7f76-0075-44c3-9101-eb2607cb176a-etc-cvo-updatepayloads\") pod \"cluster-version-operator-745944c6b7-mbjxt\" (UID: \"f39d7f76-0075-44c3-9101-eb2607cb176a\") " pod="openshift-cluster-version/cluster-version-operator-745944c6b7-mbjxt"
Mar 13 12:36:00.038041 master-0 kubenswrapper[4143]: I0313 12:36:00.038020 4143 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f39d7f76-0075-44c3-9101-eb2607cb176a-kube-api-access\") pod \"cluster-version-operator-745944c6b7-mbjxt\" (UID: \"f39d7f76-0075-44c3-9101-eb2607cb176a\") " pod="openshift-cluster-version/cluster-version-operator-745944c6b7-mbjxt"
Mar 13 12:36:00.038041 master-0 kubenswrapper[4143]: I0313 12:36:00.038035 4143 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/4dd0fc2f-f2ee-4447-a747-04a178288cf0-metrics-tls\") pod \"network-operator-7c649bf6d4-kh6n9\" (UID: \"4dd0fc2f-f2ee-4447-a747-04a178288cf0\") " pod="openshift-network-operator/network-operator-7c649bf6d4-kh6n9"
Mar 13 12:36:00.038335 master-0 kubenswrapper[4143]: I0313 12:36:00.038086 4143 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sno-bootstrap-files\" (UniqueName: \"kubernetes.io/host-path/72ba330e-35ca-4d05-8641-a880bf30c0e7-sno-bootstrap-files\") pod \"assisted-installer-controller-bqsgz\" (UID: \"72ba330e-35ca-4d05-8641-a880bf30c0e7\") " pod="assisted-installer/assisted-installer-controller-bqsgz"
Mar 13 12:36:00.139158 master-0 kubenswrapper[4143]: I0313 12:36:00.139086 4143 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fnw9d\" (UniqueName: \"kubernetes.io/projected/4dd0fc2f-f2ee-4447-a747-04a178288cf0-kube-api-access-fnw9d\") pod \"network-operator-7c649bf6d4-kh6n9\" (UID: \"4dd0fc2f-f2ee-4447-a747-04a178288cf0\") " pod="openshift-network-operator/network-operator-7c649bf6d4-kh6n9"
Mar 13 12:36:00.139158 master-0 kubenswrapper[4143]: I0313 12:36:00.139130 4143 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-ca-bundle\" (UniqueName: \"kubernetes.io/host-path/72ba330e-35ca-4d05-8641-a880bf30c0e7-host-ca-bundle\") pod \"assisted-installer-controller-bqsgz\" (UID: \"72ba330e-35ca-4d05-8641-a880bf30c0e7\") " pod="assisted-installer/assisted-installer-controller-bqsgz"
Mar 13 12:36:00.139434 master-0 kubenswrapper[4143]: I0313 12:36:00.139391 4143 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/4dd0fc2f-f2ee-4447-a747-04a178288cf0-host-etc-kube\") pod \"network-operator-7c649bf6d4-kh6n9\" (UID: \"4dd0fc2f-f2ee-4447-a747-04a178288cf0\") " pod="openshift-network-operator/network-operator-7c649bf6d4-kh6n9"
Mar 13 12:36:00.139434 master-0 kubenswrapper[4143]: I0313 12:36:00.139430 4143 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-run-resolv-conf\" (UniqueName: \"kubernetes.io/host-path/72ba330e-35ca-4d05-8641-a880bf30c0e7-host-var-run-resolv-conf\") pod \"assisted-installer-controller-bqsgz\" (UID: \"72ba330e-35ca-4d05-8641-a880bf30c0e7\") " pod="assisted-installer/assisted-installer-controller-bqsgz"
Mar 13 12:36:00.139545 master-0 kubenswrapper[4143]: I0313 12:36:00.139438 4143 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-ca-bundle\" (UniqueName: \"kubernetes.io/host-path/72ba330e-35ca-4d05-8641-a880bf30c0e7-host-ca-bundle\") pod \"assisted-installer-controller-bqsgz\" (UID: \"72ba330e-35ca-4d05-8641-a880bf30c0e7\") " pod="assisted-installer/assisted-installer-controller-bqsgz"
Mar 13 12:36:00.139545 master-0 kubenswrapper[4143]: I0313 12:36:00.139487 4143 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-resolv-conf\" (UniqueName: \"kubernetes.io/host-path/72ba330e-35ca-4d05-8641-a880bf30c0e7-host-resolv-conf\") pod \"assisted-installer-controller-bqsgz\" (UID: \"72ba330e-35ca-4d05-8641-a880bf30c0e7\") " pod="assisted-installer/assisted-installer-controller-bqsgz"
Mar 13 12:36:00.139545 master-0 kubenswrapper[4143]: I0313 12:36:00.139500 4143 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/4dd0fc2f-f2ee-4447-a747-04a178288cf0-host-etc-kube\") pod \"network-operator-7c649bf6d4-kh6n9\" (UID: \"4dd0fc2f-f2ee-4447-a747-04a178288cf0\") " pod="openshift-network-operator/network-operator-7c649bf6d4-kh6n9"
Mar 13 12:36:00.139659 master-0 kubenswrapper[4143]: I0313 12:36:00.139545 4143 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-run-resolv-conf\" (UniqueName: \"kubernetes.io/host-path/72ba330e-35ca-4d05-8641-a880bf30c0e7-host-var-run-resolv-conf\") pod \"assisted-installer-controller-bqsgz\" (UID: \"72ba330e-35ca-4d05-8641-a880bf30c0e7\") " pod="assisted-installer/assisted-installer-controller-bqsgz"
Mar 13 12:36:00.139659 master-0 kubenswrapper[4143]: I0313 12:36:00.139605 4143 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/f39d7f76-0075-44c3-9101-eb2607cb176a-etc-ssl-certs\") pod \"cluster-version-operator-745944c6b7-mbjxt\" (UID: \"f39d7f76-0075-44c3-9101-eb2607cb176a\") " pod="openshift-cluster-version/cluster-version-operator-745944c6b7-mbjxt"
Mar 13 12:36:00.139659 master-0 kubenswrapper[4143]: I0313 12:36:00.139626 4143 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/f39d7f76-0075-44c3-9101-eb2607cb176a-etc-cvo-updatepayloads\") pod \"cluster-version-operator-745944c6b7-mbjxt\" (UID: \"f39d7f76-0075-44c3-9101-eb2607cb176a\") " pod="openshift-cluster-version/cluster-version-operator-745944c6b7-mbjxt"
Mar 13 12:36:00.140039 master-0 kubenswrapper[4143]: I0313 12:36:00.140009 4143 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/f39d7f76-0075-44c3-9101-eb2607cb176a-etc-cvo-updatepayloads\") pod \"cluster-version-operator-745944c6b7-mbjxt\" (UID: \"f39d7f76-0075-44c3-9101-eb2607cb176a\") " pod="openshift-cluster-version/cluster-version-operator-745944c6b7-mbjxt"
Mar 13 12:36:00.140087 master-0 kubenswrapper[4143]: I0313 12:36:00.139632 4143 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-resolv-conf\" (UniqueName: \"kubernetes.io/host-path/72ba330e-35ca-4d05-8641-a880bf30c0e7-host-resolv-conf\") pod \"assisted-installer-controller-bqsgz\" (UID: \"72ba330e-35ca-4d05-8641-a880bf30c0e7\") " pod="assisted-installer/assisted-installer-controller-bqsgz"
Mar 13 12:36:00.140087 master-0 kubenswrapper[4143]: I0313 12:36:00.140068 4143 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/f39d7f76-0075-44c3-9101-eb2607cb176a-etc-ssl-certs\") pod \"cluster-version-operator-745944c6b7-mbjxt\" (UID: \"f39d7f76-0075-44c3-9101-eb2607cb176a\") " pod="openshift-cluster-version/cluster-version-operator-745944c6b7-mbjxt"
Mar 13 12:36:00.140189 master-0 kubenswrapper[4143]: I0313 12:36:00.140078 4143 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f39d7f76-0075-44c3-9101-eb2607cb176a-kube-api-access\") pod \"cluster-version-operator-745944c6b7-mbjxt\" (UID: \"f39d7f76-0075-44c3-9101-eb2607cb176a\") " pod="openshift-cluster-version/cluster-version-operator-745944c6b7-mbjxt"
Mar 13 12:36:00.140189 master-0 kubenswrapper[4143]: I0313 12:36:00.140165 4143 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/4dd0fc2f-f2ee-4447-a747-04a178288cf0-metrics-tls\") pod \"network-operator-7c649bf6d4-kh6n9\" (UID: \"4dd0fc2f-f2ee-4447-a747-04a178288cf0\") " pod="openshift-network-operator/network-operator-7c649bf6d4-kh6n9"
Mar 13 12:36:00.140262 master-0 kubenswrapper[4143]: I0313 12:36:00.140213 4143 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sno-bootstrap-files\" (UniqueName: \"kubernetes.io/host-path/72ba330e-35ca-4d05-8641-a880bf30c0e7-sno-bootstrap-files\") pod \"assisted-installer-controller-bqsgz\" (UID: \"72ba330e-35ca-4d05-8641-a880bf30c0e7\") " pod="assisted-installer/assisted-installer-controller-bqsgz"
Mar 13 12:36:00.140262 master-0 kubenswrapper[4143]: I0313 12:36:00.140256 4143 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-522bl\" (UniqueName: \"kubernetes.io/projected/72ba330e-35ca-4d05-8641-a880bf30c0e7-kube-api-access-522bl\") pod \"assisted-installer-controller-bqsgz\" (UID: \"72ba330e-35ca-4d05-8641-a880bf30c0e7\") " pod="assisted-installer/assisted-installer-controller-bqsgz"
Mar 13 12:36:00.140505 master-0 kubenswrapper[4143]: I0313 12:36:00.140478 4143 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sno-bootstrap-files\" (UniqueName: \"kubernetes.io/host-path/72ba330e-35ca-4d05-8641-a880bf30c0e7-sno-bootstrap-files\") pod \"assisted-installer-controller-bqsgz\" (UID: \"72ba330e-35ca-4d05-8641-a880bf30c0e7\") " 
pod="assisted-installer/assisted-installer-controller-bqsgz" Mar 13 12:36:00.142753 master-0 kubenswrapper[4143]: I0313 12:36:00.140574 4143 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f39d7f76-0075-44c3-9101-eb2607cb176a-serving-cert\") pod \"cluster-version-operator-745944c6b7-mbjxt\" (UID: \"f39d7f76-0075-44c3-9101-eb2607cb176a\") " pod="openshift-cluster-version/cluster-version-operator-745944c6b7-mbjxt" Mar 13 12:36:00.142753 master-0 kubenswrapper[4143]: I0313 12:36:00.140622 4143 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/f39d7f76-0075-44c3-9101-eb2607cb176a-service-ca\") pod \"cluster-version-operator-745944c6b7-mbjxt\" (UID: \"f39d7f76-0075-44c3-9101-eb2607cb176a\") " pod="openshift-cluster-version/cluster-version-operator-745944c6b7-mbjxt" Mar 13 12:36:00.142753 master-0 kubenswrapper[4143]: E0313 12:36:00.140827 4143 secret.go:189] Couldn't get secret openshift-cluster-version/cluster-version-operator-serving-cert: secret "cluster-version-operator-serving-cert" not found Mar 13 12:36:00.142753 master-0 kubenswrapper[4143]: I0313 12:36:00.141569 4143 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/f39d7f76-0075-44c3-9101-eb2607cb176a-service-ca\") pod \"cluster-version-operator-745944c6b7-mbjxt\" (UID: \"f39d7f76-0075-44c3-9101-eb2607cb176a\") " pod="openshift-cluster-version/cluster-version-operator-745944c6b7-mbjxt" Mar 13 12:36:00.142753 master-0 kubenswrapper[4143]: I0313 12:36:00.141562 4143 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. 
Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Mar 13 12:36:00.142753 master-0 kubenswrapper[4143]: E0313 12:36:00.141790 4143 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f39d7f76-0075-44c3-9101-eb2607cb176a-serving-cert podName:f39d7f76-0075-44c3-9101-eb2607cb176a nodeName:}" failed. No retries permitted until 2026-03-13 12:36:00.64112222 +0000 UTC m=+66.388266544 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/f39d7f76-0075-44c3-9101-eb2607cb176a-serving-cert") pod "cluster-version-operator-745944c6b7-mbjxt" (UID: "f39d7f76-0075-44c3-9101-eb2607cb176a") : secret "cluster-version-operator-serving-cert" not found Mar 13 12:36:00.148043 master-0 kubenswrapper[4143]: I0313 12:36:00.147685 4143 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/4dd0fc2f-f2ee-4447-a747-04a178288cf0-metrics-tls\") pod \"network-operator-7c649bf6d4-kh6n9\" (UID: \"4dd0fc2f-f2ee-4447-a747-04a178288cf0\") " pod="openshift-network-operator/network-operator-7c649bf6d4-kh6n9" Mar 13 12:36:00.155885 master-0 kubenswrapper[4143]: I0313 12:36:00.155838 4143 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f39d7f76-0075-44c3-9101-eb2607cb176a-kube-api-access\") pod \"cluster-version-operator-745944c6b7-mbjxt\" (UID: \"f39d7f76-0075-44c3-9101-eb2607cb176a\") " pod="openshift-cluster-version/cluster-version-operator-745944c6b7-mbjxt" Mar 13 12:36:00.161157 master-0 kubenswrapper[4143]: I0313 12:36:00.161115 4143 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-522bl\" (UniqueName: \"kubernetes.io/projected/72ba330e-35ca-4d05-8641-a880bf30c0e7-kube-api-access-522bl\") pod \"assisted-installer-controller-bqsgz\" (UID: 
\"72ba330e-35ca-4d05-8641-a880bf30c0e7\") " pod="assisted-installer/assisted-installer-controller-bqsgz" Mar 13 12:36:00.161261 master-0 kubenswrapper[4143]: I0313 12:36:00.161241 4143 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fnw9d\" (UniqueName: \"kubernetes.io/projected/4dd0fc2f-f2ee-4447-a747-04a178288cf0-kube-api-access-fnw9d\") pod \"network-operator-7c649bf6d4-kh6n9\" (UID: \"4dd0fc2f-f2ee-4447-a747-04a178288cf0\") " pod="openshift-network-operator/network-operator-7c649bf6d4-kh6n9" Mar 13 12:36:00.253432 master-0 kubenswrapper[4143]: I0313 12:36:00.253269 4143 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-7c649bf6d4-kh6n9" Mar 13 12:36:00.281232 master-0 kubenswrapper[4143]: I0313 12:36:00.281003 4143 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="assisted-installer/assisted-installer-controller-bqsgz" Mar 13 12:36:00.298723 master-0 kubenswrapper[4143]: W0313 12:36:00.298661 4143 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod72ba330e_35ca_4d05_8641_a880bf30c0e7.slice/crio-aab59fe84d74f1f2dbe3af4167877250fbae9e62f4ef0e21a64f79bf2216fbcc WatchSource:0}: Error finding container aab59fe84d74f1f2dbe3af4167877250fbae9e62f4ef0e21a64f79bf2216fbcc: Status 404 returned error can't find the container with id aab59fe84d74f1f2dbe3af4167877250fbae9e62f4ef0e21a64f79bf2216fbcc Mar 13 12:36:00.471453 master-0 kubenswrapper[4143]: I0313 12:36:00.471130 4143 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="assisted-installer/assisted-installer-controller-bqsgz" event={"ID":"72ba330e-35ca-4d05-8641-a880bf30c0e7","Type":"ContainerStarted","Data":"aab59fe84d74f1f2dbe3af4167877250fbae9e62f4ef0e21a64f79bf2216fbcc"} Mar 13 12:36:00.472824 master-0 kubenswrapper[4143]: I0313 12:36:00.472752 4143 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-network-operator/network-operator-7c649bf6d4-kh6n9" event={"ID":"4dd0fc2f-f2ee-4447-a747-04a178288cf0","Type":"ContainerStarted","Data":"534692e5957aae2c3d6d9152a87bd37d178574b231da74f33889bcb3869aae82"} Mar 13 12:36:00.644348 master-0 kubenswrapper[4143]: I0313 12:36:00.644294 4143 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f39d7f76-0075-44c3-9101-eb2607cb176a-serving-cert\") pod \"cluster-version-operator-745944c6b7-mbjxt\" (UID: \"f39d7f76-0075-44c3-9101-eb2607cb176a\") " pod="openshift-cluster-version/cluster-version-operator-745944c6b7-mbjxt" Mar 13 12:36:00.644574 master-0 kubenswrapper[4143]: E0313 12:36:00.644421 4143 secret.go:189] Couldn't get secret openshift-cluster-version/cluster-version-operator-serving-cert: secret "cluster-version-operator-serving-cert" not found Mar 13 12:36:00.644574 master-0 kubenswrapper[4143]: E0313 12:36:00.644499 4143 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f39d7f76-0075-44c3-9101-eb2607cb176a-serving-cert podName:f39d7f76-0075-44c3-9101-eb2607cb176a nodeName:}" failed. No retries permitted until 2026-03-13 12:36:01.644479814 +0000 UTC m=+67.391624138 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/f39d7f76-0075-44c3-9101-eb2607cb176a-serving-cert") pod "cluster-version-operator-745944c6b7-mbjxt" (UID: "f39d7f76-0075-44c3-9101-eb2607cb176a") : secret "cluster-version-operator-serving-cert" not found Mar 13 12:36:01.653253 master-0 kubenswrapper[4143]: I0313 12:36:01.653024 4143 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f39d7f76-0075-44c3-9101-eb2607cb176a-serving-cert\") pod \"cluster-version-operator-745944c6b7-mbjxt\" (UID: \"f39d7f76-0075-44c3-9101-eb2607cb176a\") " pod="openshift-cluster-version/cluster-version-operator-745944c6b7-mbjxt" Mar 13 12:36:01.653253 master-0 kubenswrapper[4143]: E0313 12:36:01.653208 4143 secret.go:189] Couldn't get secret openshift-cluster-version/cluster-version-operator-serving-cert: secret "cluster-version-operator-serving-cert" not found Mar 13 12:36:01.653253 master-0 kubenswrapper[4143]: E0313 12:36:01.653271 4143 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f39d7f76-0075-44c3-9101-eb2607cb176a-serving-cert podName:f39d7f76-0075-44c3-9101-eb2607cb176a nodeName:}" failed. No retries permitted until 2026-03-13 12:36:03.653252996 +0000 UTC m=+69.400397330 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/f39d7f76-0075-44c3-9101-eb2607cb176a-serving-cert") pod "cluster-version-operator-745944c6b7-mbjxt" (UID: "f39d7f76-0075-44c3-9101-eb2607cb176a") : secret "cluster-version-operator-serving-cert" not found Mar 13 12:36:03.668299 master-0 kubenswrapper[4143]: I0313 12:36:03.668246 4143 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f39d7f76-0075-44c3-9101-eb2607cb176a-serving-cert\") pod \"cluster-version-operator-745944c6b7-mbjxt\" (UID: \"f39d7f76-0075-44c3-9101-eb2607cb176a\") " pod="openshift-cluster-version/cluster-version-operator-745944c6b7-mbjxt" Mar 13 12:36:03.668891 master-0 kubenswrapper[4143]: E0313 12:36:03.668419 4143 secret.go:189] Couldn't get secret openshift-cluster-version/cluster-version-operator-serving-cert: secret "cluster-version-operator-serving-cert" not found Mar 13 12:36:03.668891 master-0 kubenswrapper[4143]: E0313 12:36:03.668473 4143 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f39d7f76-0075-44c3-9101-eb2607cb176a-serving-cert podName:f39d7f76-0075-44c3-9101-eb2607cb176a nodeName:}" failed. No retries permitted until 2026-03-13 12:36:07.668452728 +0000 UTC m=+73.415597052 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/f39d7f76-0075-44c3-9101-eb2607cb176a-serving-cert") pod "cluster-version-operator-745944c6b7-mbjxt" (UID: "f39d7f76-0075-44c3-9101-eb2607cb176a") : secret "cluster-version-operator-serving-cert" not found Mar 13 12:36:04.484022 master-0 kubenswrapper[4143]: I0313 12:36:04.483920 4143 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-7c649bf6d4-kh6n9" event={"ID":"4dd0fc2f-f2ee-4447-a747-04a178288cf0","Type":"ContainerStarted","Data":"638f7edbf4d5a7bd9c1277ff74b0deabee140db71794ce849e8ed2fe8e2bdb95"} Mar 13 12:36:04.509350 master-0 kubenswrapper[4143]: I0313 12:36:04.509268 4143 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-network-operator/network-operator-7c649bf6d4-kh6n9" podStartSLOduration=30.655512137 podStartE2EDuration="34.509238273s" podCreationTimestamp="2026-03-13 12:35:30 +0000 UTC" firstStartedPulling="2026-03-13 12:36:00.266907735 +0000 UTC m=+66.014052059" lastFinishedPulling="2026-03-13 12:36:04.120633871 +0000 UTC m=+69.867778195" observedRunningTime="2026-03-13 12:36:04.509174421 +0000 UTC m=+70.256318765" watchObservedRunningTime="2026-03-13 12:36:04.509238273 +0000 UTC m=+70.256382597" Mar 13 12:36:06.491157 master-0 kubenswrapper[4143]: I0313 12:36:06.491093 4143 generic.go:334] "Generic (PLEG): container finished" podID="72ba330e-35ca-4d05-8641-a880bf30c0e7" containerID="1af7a53388bbd243cf9640d283230185be1782a2bdb43e5850dd6d341044a303" exitCode=0 Mar 13 12:36:06.491806 master-0 kubenswrapper[4143]: I0313 12:36:06.491174 4143 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="assisted-installer/assisted-installer-controller-bqsgz" event={"ID":"72ba330e-35ca-4d05-8641-a880bf30c0e7","Type":"ContainerDied","Data":"1af7a53388bbd243cf9640d283230185be1782a2bdb43e5850dd6d341044a303"} Mar 13 12:36:06.730988 master-0 kubenswrapper[4143]: I0313 12:36:06.730864 4143 
kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-network-operator/mtu-prober-5tvzj"] Mar 13 12:36:06.731296 master-0 kubenswrapper[4143]: I0313 12:36:06.731256 4143 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/mtu-prober-5tvzj" Mar 13 12:36:06.799354 master-0 kubenswrapper[4143]: I0313 12:36:06.799289 4143 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5lqnv\" (UniqueName: \"kubernetes.io/projected/3dca6a91-7c31-44d2-89eb-c2c5f941e983-kube-api-access-5lqnv\") pod \"mtu-prober-5tvzj\" (UID: \"3dca6a91-7c31-44d2-89eb-c2c5f941e983\") " pod="openshift-network-operator/mtu-prober-5tvzj" Mar 13 12:36:06.899754 master-0 kubenswrapper[4143]: I0313 12:36:06.899668 4143 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5lqnv\" (UniqueName: \"kubernetes.io/projected/3dca6a91-7c31-44d2-89eb-c2c5f941e983-kube-api-access-5lqnv\") pod \"mtu-prober-5tvzj\" (UID: \"3dca6a91-7c31-44d2-89eb-c2c5f941e983\") " pod="openshift-network-operator/mtu-prober-5tvzj" Mar 13 12:36:06.916568 master-0 kubenswrapper[4143]: I0313 12:36:06.916503 4143 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5lqnv\" (UniqueName: \"kubernetes.io/projected/3dca6a91-7c31-44d2-89eb-c2c5f941e983-kube-api-access-5lqnv\") pod \"mtu-prober-5tvzj\" (UID: \"3dca6a91-7c31-44d2-89eb-c2c5f941e983\") " pod="openshift-network-operator/mtu-prober-5tvzj" Mar 13 12:36:07.047461 master-0 kubenswrapper[4143]: I0313 12:36:07.047393 4143 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-operator/mtu-prober-5tvzj" Mar 13 12:36:07.059450 master-0 kubenswrapper[4143]: W0313 12:36:07.059403 4143 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3dca6a91_7c31_44d2_89eb_c2c5f941e983.slice/crio-03d9961b19ee86b8273aed8f3384d8fd0d0f86ad2c87207be32ea4592b6ddf9b WatchSource:0}: Error finding container 03d9961b19ee86b8273aed8f3384d8fd0d0f86ad2c87207be32ea4592b6ddf9b: Status 404 returned error can't find the container with id 03d9961b19ee86b8273aed8f3384d8fd0d0f86ad2c87207be32ea4592b6ddf9b Mar 13 12:36:07.463001 master-0 kubenswrapper[4143]: I0313 12:36:07.462956 4143 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Mar 13 12:36:07.495972 master-0 kubenswrapper[4143]: I0313 12:36:07.495902 4143 generic.go:334] "Generic (PLEG): container finished" podID="3dca6a91-7c31-44d2-89eb-c2c5f941e983" containerID="3c4695e1552ba9205d33b8d7524c5a76469234a9b454c27b01c396a95436c2b9" exitCode=0 Mar 13 12:36:07.496628 master-0 kubenswrapper[4143]: I0313 12:36:07.496043 4143 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/mtu-prober-5tvzj" event={"ID":"3dca6a91-7c31-44d2-89eb-c2c5f941e983","Type":"ContainerDied","Data":"3c4695e1552ba9205d33b8d7524c5a76469234a9b454c27b01c396a95436c2b9"} Mar 13 12:36:07.496628 master-0 kubenswrapper[4143]: I0313 12:36:07.496112 4143 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/mtu-prober-5tvzj" event={"ID":"3dca6a91-7c31-44d2-89eb-c2c5f941e983","Type":"ContainerStarted","Data":"03d9961b19ee86b8273aed8f3384d8fd0d0f86ad2c87207be32ea4592b6ddf9b"} Mar 13 12:36:07.510518 master-0 kubenswrapper[4143]: I0313 12:36:07.510475 4143 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="assisted-installer/assisted-installer-controller-bqsgz" Mar 13 12:36:07.605405 master-0 kubenswrapper[4143]: I0313 12:36:07.605256 4143 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-var-run-resolv-conf\" (UniqueName: \"kubernetes.io/host-path/72ba330e-35ca-4d05-8641-a880bf30c0e7-host-var-run-resolv-conf\") pod \"72ba330e-35ca-4d05-8641-a880bf30c0e7\" (UID: \"72ba330e-35ca-4d05-8641-a880bf30c0e7\") " Mar 13 12:36:07.605405 master-0 kubenswrapper[4143]: I0313 12:36:07.605329 4143 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-resolv-conf\" (UniqueName: \"kubernetes.io/host-path/72ba330e-35ca-4d05-8641-a880bf30c0e7-host-resolv-conf\") pod \"72ba330e-35ca-4d05-8641-a880bf30c0e7\" (UID: \"72ba330e-35ca-4d05-8641-a880bf30c0e7\") " Mar 13 12:36:07.605405 master-0 kubenswrapper[4143]: I0313 12:36:07.605361 4143 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-ca-bundle\" (UniqueName: \"kubernetes.io/host-path/72ba330e-35ca-4d05-8641-a880bf30c0e7-host-ca-bundle\") pod \"72ba330e-35ca-4d05-8641-a880bf30c0e7\" (UID: \"72ba330e-35ca-4d05-8641-a880bf30c0e7\") " Mar 13 12:36:07.605850 master-0 kubenswrapper[4143]: I0313 12:36:07.605442 4143 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/72ba330e-35ca-4d05-8641-a880bf30c0e7-host-var-run-resolv-conf" (OuterVolumeSpecName: "host-var-run-resolv-conf") pod "72ba330e-35ca-4d05-8641-a880bf30c0e7" (UID: "72ba330e-35ca-4d05-8641-a880bf30c0e7"). InnerVolumeSpecName "host-var-run-resolv-conf". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 13 12:36:07.605850 master-0 kubenswrapper[4143]: I0313 12:36:07.605459 4143 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/72ba330e-35ca-4d05-8641-a880bf30c0e7-host-ca-bundle" (OuterVolumeSpecName: "host-ca-bundle") pod "72ba330e-35ca-4d05-8641-a880bf30c0e7" (UID: "72ba330e-35ca-4d05-8641-a880bf30c0e7"). InnerVolumeSpecName "host-ca-bundle". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 13 12:36:07.605850 master-0 kubenswrapper[4143]: I0313 12:36:07.605453 4143 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/72ba330e-35ca-4d05-8641-a880bf30c0e7-host-resolv-conf" (OuterVolumeSpecName: "host-resolv-conf") pod "72ba330e-35ca-4d05-8641-a880bf30c0e7" (UID: "72ba330e-35ca-4d05-8641-a880bf30c0e7"). InnerVolumeSpecName "host-resolv-conf". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 13 12:36:07.605850 master-0 kubenswrapper[4143]: I0313 12:36:07.605566 4143 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/72ba330e-35ca-4d05-8641-a880bf30c0e7-sno-bootstrap-files" (OuterVolumeSpecName: "sno-bootstrap-files") pod "72ba330e-35ca-4d05-8641-a880bf30c0e7" (UID: "72ba330e-35ca-4d05-8641-a880bf30c0e7"). InnerVolumeSpecName "sno-bootstrap-files". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 13 12:36:07.605850 master-0 kubenswrapper[4143]: I0313 12:36:07.605580 4143 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sno-bootstrap-files\" (UniqueName: \"kubernetes.io/host-path/72ba330e-35ca-4d05-8641-a880bf30c0e7-sno-bootstrap-files\") pod \"72ba330e-35ca-4d05-8641-a880bf30c0e7\" (UID: \"72ba330e-35ca-4d05-8641-a880bf30c0e7\") " Mar 13 12:36:07.605850 master-0 kubenswrapper[4143]: I0313 12:36:07.605658 4143 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-522bl\" (UniqueName: \"kubernetes.io/projected/72ba330e-35ca-4d05-8641-a880bf30c0e7-kube-api-access-522bl\") pod \"72ba330e-35ca-4d05-8641-a880bf30c0e7\" (UID: \"72ba330e-35ca-4d05-8641-a880bf30c0e7\") " Mar 13 12:36:07.605850 master-0 kubenswrapper[4143]: I0313 12:36:07.605726 4143 reconciler_common.go:293] "Volume detached for volume \"host-ca-bundle\" (UniqueName: \"kubernetes.io/host-path/72ba330e-35ca-4d05-8641-a880bf30c0e7-host-ca-bundle\") on node \"master-0\" DevicePath \"\"" Mar 13 12:36:07.605850 master-0 kubenswrapper[4143]: I0313 12:36:07.605739 4143 reconciler_common.go:293] "Volume detached for volume \"sno-bootstrap-files\" (UniqueName: \"kubernetes.io/host-path/72ba330e-35ca-4d05-8641-a880bf30c0e7-sno-bootstrap-files\") on node \"master-0\" DevicePath \"\"" Mar 13 12:36:07.605850 master-0 kubenswrapper[4143]: I0313 12:36:07.605749 4143 reconciler_common.go:293] "Volume detached for volume \"host-var-run-resolv-conf\" (UniqueName: \"kubernetes.io/host-path/72ba330e-35ca-4d05-8641-a880bf30c0e7-host-var-run-resolv-conf\") on node \"master-0\" DevicePath \"\"" Mar 13 12:36:07.605850 master-0 kubenswrapper[4143]: I0313 12:36:07.605761 4143 reconciler_common.go:293] "Volume detached for volume \"host-resolv-conf\" (UniqueName: \"kubernetes.io/host-path/72ba330e-35ca-4d05-8641-a880bf30c0e7-host-resolv-conf\") on node \"master-0\" DevicePath \"\"" Mar 13 
12:36:07.608505 master-0 kubenswrapper[4143]: I0313 12:36:07.608406 4143 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/72ba330e-35ca-4d05-8641-a880bf30c0e7-kube-api-access-522bl" (OuterVolumeSpecName: "kube-api-access-522bl") pod "72ba330e-35ca-4d05-8641-a880bf30c0e7" (UID: "72ba330e-35ca-4d05-8641-a880bf30c0e7"). InnerVolumeSpecName "kube-api-access-522bl". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 12:36:07.706322 master-0 kubenswrapper[4143]: I0313 12:36:07.706247 4143 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f39d7f76-0075-44c3-9101-eb2607cb176a-serving-cert\") pod \"cluster-version-operator-745944c6b7-mbjxt\" (UID: \"f39d7f76-0075-44c3-9101-eb2607cb176a\") " pod="openshift-cluster-version/cluster-version-operator-745944c6b7-mbjxt" Mar 13 12:36:07.706322 master-0 kubenswrapper[4143]: I0313 12:36:07.706305 4143 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-522bl\" (UniqueName: \"kubernetes.io/projected/72ba330e-35ca-4d05-8641-a880bf30c0e7-kube-api-access-522bl\") on node \"master-0\" DevicePath \"\"" Mar 13 12:36:07.706590 master-0 kubenswrapper[4143]: E0313 12:36:07.706411 4143 secret.go:189] Couldn't get secret openshift-cluster-version/cluster-version-operator-serving-cert: secret "cluster-version-operator-serving-cert" not found Mar 13 12:36:07.706590 master-0 kubenswrapper[4143]: E0313 12:36:07.706468 4143 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f39d7f76-0075-44c3-9101-eb2607cb176a-serving-cert podName:f39d7f76-0075-44c3-9101-eb2607cb176a nodeName:}" failed. No retries permitted until 2026-03-13 12:36:15.706451851 +0000 UTC m=+81.453596175 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/f39d7f76-0075-44c3-9101-eb2607cb176a-serving-cert") pod "cluster-version-operator-745944c6b7-mbjxt" (UID: "f39d7f76-0075-44c3-9101-eb2607cb176a") : secret "cluster-version-operator-serving-cert" not found Mar 13 12:36:08.501690 master-0 kubenswrapper[4143]: I0313 12:36:08.501579 4143 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="assisted-installer/assisted-installer-controller-bqsgz" event={"ID":"72ba330e-35ca-4d05-8641-a880bf30c0e7","Type":"ContainerDied","Data":"aab59fe84d74f1f2dbe3af4167877250fbae9e62f4ef0e21a64f79bf2216fbcc"} Mar 13 12:36:08.501690 master-0 kubenswrapper[4143]: I0313 12:36:08.501666 4143 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="aab59fe84d74f1f2dbe3af4167877250fbae9e62f4ef0e21a64f79bf2216fbcc" Mar 13 12:36:08.501690 master-0 kubenswrapper[4143]: I0313 12:36:08.501664 4143 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="assisted-installer/assisted-installer-controller-bqsgz" Mar 13 12:36:08.517521 master-0 kubenswrapper[4143]: I0313 12:36:08.517473 4143 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-operator/mtu-prober-5tvzj" Mar 13 12:36:08.613175 master-0 kubenswrapper[4143]: I0313 12:36:08.613085 4143 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5lqnv\" (UniqueName: \"kubernetes.io/projected/3dca6a91-7c31-44d2-89eb-c2c5f941e983-kube-api-access-5lqnv\") pod \"3dca6a91-7c31-44d2-89eb-c2c5f941e983\" (UID: \"3dca6a91-7c31-44d2-89eb-c2c5f941e983\") " Mar 13 12:36:08.616881 master-0 kubenswrapper[4143]: I0313 12:36:08.616810 4143 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3dca6a91-7c31-44d2-89eb-c2c5f941e983-kube-api-access-5lqnv" (OuterVolumeSpecName: "kube-api-access-5lqnv") pod "3dca6a91-7c31-44d2-89eb-c2c5f941e983" (UID: "3dca6a91-7c31-44d2-89eb-c2c5f941e983"). InnerVolumeSpecName "kube-api-access-5lqnv". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 12:36:08.714538 master-0 kubenswrapper[4143]: I0313 12:36:08.714113 4143 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5lqnv\" (UniqueName: \"kubernetes.io/projected/3dca6a91-7c31-44d2-89eb-c2c5f941e983-kube-api-access-5lqnv\") on node \"master-0\" DevicePath \"\"" Mar 13 12:36:09.507704 master-0 kubenswrapper[4143]: I0313 12:36:09.507537 4143 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/mtu-prober-5tvzj" event={"ID":"3dca6a91-7c31-44d2-89eb-c2c5f941e983","Type":"ContainerDied","Data":"03d9961b19ee86b8273aed8f3384d8fd0d0f86ad2c87207be32ea4592b6ddf9b"} Mar 13 12:36:09.507704 master-0 kubenswrapper[4143]: I0313 12:36:09.507619 4143 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="03d9961b19ee86b8273aed8f3384d8fd0d0f86ad2c87207be32ea4592b6ddf9b" Mar 13 12:36:09.507704 master-0 kubenswrapper[4143]: I0313 12:36:09.507666 4143 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-operator/mtu-prober-5tvzj"
Mar 13 12:36:11.730952 master-0 kubenswrapper[4143]: I0313 12:36:11.730892 4143 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-network-operator/mtu-prober-5tvzj"]
Mar 13 12:36:11.734354 master-0 kubenswrapper[4143]: I0313 12:36:11.734314 4143 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-network-operator/mtu-prober-5tvzj"]
Mar 13 12:36:13.089498 master-0 kubenswrapper[4143]: I0313 12:36:13.089443 4143 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3dca6a91-7c31-44d2-89eb-c2c5f941e983" path="/var/lib/kubelet/pods/3dca6a91-7c31-44d2-89eb-c2c5f941e983/volumes"
Mar 13 12:36:15.768007 master-0 kubenswrapper[4143]: I0313 12:36:15.767904 4143 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f39d7f76-0075-44c3-9101-eb2607cb176a-serving-cert\") pod \"cluster-version-operator-745944c6b7-mbjxt\" (UID: \"f39d7f76-0075-44c3-9101-eb2607cb176a\") " pod="openshift-cluster-version/cluster-version-operator-745944c6b7-mbjxt"
Mar 13 12:36:15.768817 master-0 kubenswrapper[4143]: E0313 12:36:15.768044 4143 secret.go:189] Couldn't get secret openshift-cluster-version/cluster-version-operator-serving-cert: secret "cluster-version-operator-serving-cert" not found
Mar 13 12:36:15.768817 master-0 kubenswrapper[4143]: E0313 12:36:15.768117 4143 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f39d7f76-0075-44c3-9101-eb2607cb176a-serving-cert podName:f39d7f76-0075-44c3-9101-eb2607cb176a nodeName:}" failed. No retries permitted until 2026-03-13 12:36:31.768097526 +0000 UTC m=+97.515241850 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/f39d7f76-0075-44c3-9101-eb2607cb176a-serving-cert") pod "cluster-version-operator-745944c6b7-mbjxt" (UID: "f39d7f76-0075-44c3-9101-eb2607cb176a") : secret "cluster-version-operator-serving-cert" not found
Mar 13 12:36:16.629703 master-0 kubenswrapper[4143]: I0313 12:36:16.629631 4143 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-bnn7n"]
Mar 13 12:36:16.629703 master-0 kubenswrapper[4143]: E0313 12:36:16.629714 4143 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3dca6a91-7c31-44d2-89eb-c2c5f941e983" containerName="prober"
Mar 13 12:36:16.630023 master-0 kubenswrapper[4143]: I0313 12:36:16.629746 4143 state_mem.go:107] "Deleted CPUSet assignment" podUID="3dca6a91-7c31-44d2-89eb-c2c5f941e983" containerName="prober"
Mar 13 12:36:16.630023 master-0 kubenswrapper[4143]: E0313 12:36:16.629756 4143 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="72ba330e-35ca-4d05-8641-a880bf30c0e7" containerName="assisted-installer-controller"
Mar 13 12:36:16.630023 master-0 kubenswrapper[4143]: I0313 12:36:16.629766 4143 state_mem.go:107] "Deleted CPUSet assignment" podUID="72ba330e-35ca-4d05-8641-a880bf30c0e7" containerName="assisted-installer-controller"
Mar 13 12:36:16.630023 master-0 kubenswrapper[4143]: I0313 12:36:16.629798 4143 memory_manager.go:354] "RemoveStaleState removing state" podUID="3dca6a91-7c31-44d2-89eb-c2c5f941e983" containerName="prober"
Mar 13 12:36:16.630023 master-0 kubenswrapper[4143]: I0313 12:36:16.629808 4143 memory_manager.go:354] "RemoveStaleState removing state" podUID="72ba330e-35ca-4d05-8641-a880bf30c0e7" containerName="assisted-installer-controller"
Mar 13 12:36:16.630023 master-0 kubenswrapper[4143]: I0313 12:36:16.630008 4143 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-bnn7n"
Mar 13 12:36:16.633103 master-0 kubenswrapper[4143]: I0313 12:36:16.633038 4143 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources"
Mar 13 12:36:16.636330 master-0 kubenswrapper[4143]: I0313 12:36:16.636287 4143 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config"
Mar 13 12:36:16.637018 master-0 kubenswrapper[4143]: I0313 12:36:16.636838 4143 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt"
Mar 13 12:36:16.637018 master-0 kubenswrapper[4143]: I0313 12:36:16.637031 4143 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt"
Mar 13 12:36:16.674379 master-0 kubenswrapper[4143]: I0313 12:36:16.674307 4143 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/ce3a655a-0684-4bc5-ac36-5878507537c7-multus-cni-dir\") pod \"multus-bnn7n\" (UID: \"ce3a655a-0684-4bc5-ac36-5878507537c7\") " pod="openshift-multus/multus-bnn7n"
Mar 13 12:36:16.674379 master-0 kubenswrapper[4143]: I0313 12:36:16.674357 4143 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/ce3a655a-0684-4bc5-ac36-5878507537c7-cnibin\") pod \"multus-bnn7n\" (UID: \"ce3a655a-0684-4bc5-ac36-5878507537c7\") " pod="openshift-multus/multus-bnn7n"
Mar 13 12:36:16.674379 master-0 kubenswrapper[4143]: I0313 12:36:16.674379 4143 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/ce3a655a-0684-4bc5-ac36-5878507537c7-host-run-netns\") pod \"multus-bnn7n\" (UID: \"ce3a655a-0684-4bc5-ac36-5878507537c7\") " pod="openshift-multus/multus-bnn7n"
Mar 13 12:36:16.674687 master-0 kubenswrapper[4143]: I0313 12:36:16.674462 4143 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/ce3a655a-0684-4bc5-ac36-5878507537c7-host-var-lib-kubelet\") pod \"multus-bnn7n\" (UID: \"ce3a655a-0684-4bc5-ac36-5878507537c7\") " pod="openshift-multus/multus-bnn7n"
Mar 13 12:36:16.674687 master-0 kubenswrapper[4143]: I0313 12:36:16.674524 4143 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/ce3a655a-0684-4bc5-ac36-5878507537c7-multus-socket-dir-parent\") pod \"multus-bnn7n\" (UID: \"ce3a655a-0684-4bc5-ac36-5878507537c7\") " pod="openshift-multus/multus-bnn7n"
Mar 13 12:36:16.674687 master-0 kubenswrapper[4143]: I0313 12:36:16.674553 4143 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/ce3a655a-0684-4bc5-ac36-5878507537c7-host-var-lib-cni-bin\") pod \"multus-bnn7n\" (UID: \"ce3a655a-0684-4bc5-ac36-5878507537c7\") " pod="openshift-multus/multus-bnn7n"
Mar 13 12:36:16.674687 master-0 kubenswrapper[4143]: I0313 12:36:16.674575 4143 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/ce3a655a-0684-4bc5-ac36-5878507537c7-host-run-multus-certs\") pod \"multus-bnn7n\" (UID: \"ce3a655a-0684-4bc5-ac36-5878507537c7\") " pod="openshift-multus/multus-bnn7n"
Mar 13 12:36:16.674687 master-0 kubenswrapper[4143]: I0313 12:36:16.674631 4143 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/ce3a655a-0684-4bc5-ac36-5878507537c7-system-cni-dir\") pod \"multus-bnn7n\" (UID: \"ce3a655a-0684-4bc5-ac36-5878507537c7\") " pod="openshift-multus/multus-bnn7n"
Mar 13 12:36:16.674687 master-0 kubenswrapper[4143]: I0313 12:36:16.674654 4143 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/ce3a655a-0684-4bc5-ac36-5878507537c7-hostroot\") pod \"multus-bnn7n\" (UID: \"ce3a655a-0684-4bc5-ac36-5878507537c7\") " pod="openshift-multus/multus-bnn7n"
Mar 13 12:36:16.674905 master-0 kubenswrapper[4143]: I0313 12:36:16.674719 4143 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/ce3a655a-0684-4bc5-ac36-5878507537c7-multus-conf-dir\") pod \"multus-bnn7n\" (UID: \"ce3a655a-0684-4bc5-ac36-5878507537c7\") " pod="openshift-multus/multus-bnn7n"
Mar 13 12:36:16.674905 master-0 kubenswrapper[4143]: I0313 12:36:16.674783 4143 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vgbvr\" (UniqueName: \"kubernetes.io/projected/ce3a655a-0684-4bc5-ac36-5878507537c7-kube-api-access-vgbvr\") pod \"multus-bnn7n\" (UID: \"ce3a655a-0684-4bc5-ac36-5878507537c7\") " pod="openshift-multus/multus-bnn7n"
Mar 13 12:36:16.674905 master-0 kubenswrapper[4143]: I0313 12:36:16.674828 4143 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/ce3a655a-0684-4bc5-ac36-5878507537c7-host-run-k8s-cni-cncf-io\") pod \"multus-bnn7n\" (UID: \"ce3a655a-0684-4bc5-ac36-5878507537c7\") " pod="openshift-multus/multus-bnn7n"
Mar 13 12:36:16.674905 master-0 kubenswrapper[4143]: I0313 12:36:16.674858 4143 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/ce3a655a-0684-4bc5-ac36-5878507537c7-cni-binary-copy\") pod \"multus-bnn7n\" (UID: \"ce3a655a-0684-4bc5-ac36-5878507537c7\") " pod="openshift-multus/multus-bnn7n"
Mar 13 12:36:16.675099 master-0 kubenswrapper[4143]: I0313 12:36:16.674930 4143 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/ce3a655a-0684-4bc5-ac36-5878507537c7-host-var-lib-cni-multus\") pod \"multus-bnn7n\" (UID: \"ce3a655a-0684-4bc5-ac36-5878507537c7\") " pod="openshift-multus/multus-bnn7n"
Mar 13 12:36:16.675099 master-0 kubenswrapper[4143]: I0313 12:36:16.675020 4143 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/ce3a655a-0684-4bc5-ac36-5878507537c7-etc-kubernetes\") pod \"multus-bnn7n\" (UID: \"ce3a655a-0684-4bc5-ac36-5878507537c7\") " pod="openshift-multus/multus-bnn7n"
Mar 13 12:36:16.675099 master-0 kubenswrapper[4143]: I0313 12:36:16.675057 4143 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/ce3a655a-0684-4bc5-ac36-5878507537c7-multus-daemon-config\") pod \"multus-bnn7n\" (UID: \"ce3a655a-0684-4bc5-ac36-5878507537c7\") " pod="openshift-multus/multus-bnn7n"
Mar 13 12:36:16.675099 master-0 kubenswrapper[4143]: I0313 12:36:16.675093 4143 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/ce3a655a-0684-4bc5-ac36-5878507537c7-os-release\") pod \"multus-bnn7n\" (UID: \"ce3a655a-0684-4bc5-ac36-5878507537c7\") " pod="openshift-multus/multus-bnn7n"
Mar 13 12:36:16.776378 master-0 kubenswrapper[4143]: I0313 12:36:16.776268 4143 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/ce3a655a-0684-4bc5-ac36-5878507537c7-etc-kubernetes\") pod \"multus-bnn7n\" (UID: \"ce3a655a-0684-4bc5-ac36-5878507537c7\") " pod="openshift-multus/multus-bnn7n"
Mar 13 12:36:16.776378 master-0 kubenswrapper[4143]: I0313 12:36:16.776328 4143 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/ce3a655a-0684-4bc5-ac36-5878507537c7-multus-daemon-config\") pod \"multus-bnn7n\" (UID: \"ce3a655a-0684-4bc5-ac36-5878507537c7\") " pod="openshift-multus/multus-bnn7n"
Mar 13 12:36:16.776378 master-0 kubenswrapper[4143]: I0313 12:36:16.776356 4143 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/ce3a655a-0684-4bc5-ac36-5878507537c7-os-release\") pod \"multus-bnn7n\" (UID: \"ce3a655a-0684-4bc5-ac36-5878507537c7\") " pod="openshift-multus/multus-bnn7n"
Mar 13 12:36:16.776378 master-0 kubenswrapper[4143]: I0313 12:36:16.776379 4143 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/ce3a655a-0684-4bc5-ac36-5878507537c7-multus-cni-dir\") pod \"multus-bnn7n\" (UID: \"ce3a655a-0684-4bc5-ac36-5878507537c7\") " pod="openshift-multus/multus-bnn7n"
Mar 13 12:36:16.776378 master-0 kubenswrapper[4143]: I0313 12:36:16.776399 4143 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/ce3a655a-0684-4bc5-ac36-5878507537c7-cnibin\") pod \"multus-bnn7n\" (UID: \"ce3a655a-0684-4bc5-ac36-5878507537c7\") " pod="openshift-multus/multus-bnn7n"
Mar 13 12:36:16.776378 master-0 kubenswrapper[4143]: I0313 12:36:16.776418 4143 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/ce3a655a-0684-4bc5-ac36-5878507537c7-host-run-netns\") pod \"multus-bnn7n\" (UID: \"ce3a655a-0684-4bc5-ac36-5878507537c7\") " pod="openshift-multus/multus-bnn7n"
Mar 13 12:36:16.777802 master-0 kubenswrapper[4143]: I0313 12:36:16.776441 4143 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/ce3a655a-0684-4bc5-ac36-5878507537c7-host-var-lib-kubelet\") pod \"multus-bnn7n\" (UID: \"ce3a655a-0684-4bc5-ac36-5878507537c7\") " pod="openshift-multus/multus-bnn7n"
Mar 13 12:36:16.777802 master-0 kubenswrapper[4143]: I0313 12:36:16.776465 4143 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/ce3a655a-0684-4bc5-ac36-5878507537c7-multus-socket-dir-parent\") pod \"multus-bnn7n\" (UID: \"ce3a655a-0684-4bc5-ac36-5878507537c7\") " pod="openshift-multus/multus-bnn7n"
Mar 13 12:36:16.777802 master-0 kubenswrapper[4143]: I0313 12:36:16.776696 4143 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/ce3a655a-0684-4bc5-ac36-5878507537c7-multus-cni-dir\") pod \"multus-bnn7n\" (UID: \"ce3a655a-0684-4bc5-ac36-5878507537c7\") " pod="openshift-multus/multus-bnn7n"
Mar 13 12:36:16.777802 master-0 kubenswrapper[4143]: I0313 12:36:16.776896 4143 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/ce3a655a-0684-4bc5-ac36-5878507537c7-os-release\") pod \"multus-bnn7n\" (UID: \"ce3a655a-0684-4bc5-ac36-5878507537c7\") " pod="openshift-multus/multus-bnn7n"
Mar 13 12:36:16.777802 master-0 kubenswrapper[4143]: I0313 12:36:16.776949 4143 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/ce3a655a-0684-4bc5-ac36-5878507537c7-multus-socket-dir-parent\") pod \"multus-bnn7n\" (UID: \"ce3a655a-0684-4bc5-ac36-5878507537c7\") " pod="openshift-multus/multus-bnn7n"
Mar 13 12:36:16.777802 master-0 kubenswrapper[4143]: I0313 12:36:16.777027 4143 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/ce3a655a-0684-4bc5-ac36-5878507537c7-host-run-netns\") pod \"multus-bnn7n\" (UID: \"ce3a655a-0684-4bc5-ac36-5878507537c7\") " pod="openshift-multus/multus-bnn7n"
Mar 13 12:36:16.777802 master-0 kubenswrapper[4143]: I0313 12:36:16.777087 4143 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/ce3a655a-0684-4bc5-ac36-5878507537c7-host-var-lib-kubelet\") pod \"multus-bnn7n\" (UID: \"ce3a655a-0684-4bc5-ac36-5878507537c7\") " pod="openshift-multus/multus-bnn7n"
Mar 13 12:36:16.777802 master-0 kubenswrapper[4143]: I0313 12:36:16.777188 4143 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/ce3a655a-0684-4bc5-ac36-5878507537c7-cnibin\") pod \"multus-bnn7n\" (UID: \"ce3a655a-0684-4bc5-ac36-5878507537c7\") " pod="openshift-multus/multus-bnn7n"
Mar 13 12:36:16.777802 master-0 kubenswrapper[4143]: I0313 12:36:16.777233 4143 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/ce3a655a-0684-4bc5-ac36-5878507537c7-etc-kubernetes\") pod \"multus-bnn7n\" (UID: \"ce3a655a-0684-4bc5-ac36-5878507537c7\") " pod="openshift-multus/multus-bnn7n"
Mar 13 12:36:16.777802 master-0 kubenswrapper[4143]: I0313 12:36:16.777275 4143 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/ce3a655a-0684-4bc5-ac36-5878507537c7-host-var-lib-cni-bin\") pod \"multus-bnn7n\" (UID: \"ce3a655a-0684-4bc5-ac36-5878507537c7\") " pod="openshift-multus/multus-bnn7n"
Mar 13 12:36:16.777802 master-0 kubenswrapper[4143]: I0313 12:36:16.777357 4143 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/ce3a655a-0684-4bc5-ac36-5878507537c7-host-var-lib-cni-bin\") pod \"multus-bnn7n\" (UID: \"ce3a655a-0684-4bc5-ac36-5878507537c7\") " pod="openshift-multus/multus-bnn7n"
Mar 13 12:36:16.777802 master-0 kubenswrapper[4143]: I0313 12:36:16.777410 4143 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/ce3a655a-0684-4bc5-ac36-5878507537c7-host-run-multus-certs\") pod \"multus-bnn7n\" (UID: \"ce3a655a-0684-4bc5-ac36-5878507537c7\") " pod="openshift-multus/multus-bnn7n"
Mar 13 12:36:16.777802 master-0 kubenswrapper[4143]: I0313 12:36:16.777445 4143 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/ce3a655a-0684-4bc5-ac36-5878507537c7-system-cni-dir\") pod \"multus-bnn7n\" (UID: \"ce3a655a-0684-4bc5-ac36-5878507537c7\") " pod="openshift-multus/multus-bnn7n"
Mar 13 12:36:16.777802 master-0 kubenswrapper[4143]: I0313 12:36:16.777474 4143 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/ce3a655a-0684-4bc5-ac36-5878507537c7-multus-conf-dir\") pod \"multus-bnn7n\" (UID: \"ce3a655a-0684-4bc5-ac36-5878507537c7\") " pod="openshift-multus/multus-bnn7n"
Mar 13 12:36:16.777802 master-0 kubenswrapper[4143]: I0313 12:36:16.777497 4143 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vgbvr\" (UniqueName: \"kubernetes.io/projected/ce3a655a-0684-4bc5-ac36-5878507537c7-kube-api-access-vgbvr\") pod \"multus-bnn7n\" (UID: \"ce3a655a-0684-4bc5-ac36-5878507537c7\") " pod="openshift-multus/multus-bnn7n"
Mar 13 12:36:16.777802 master-0 kubenswrapper[4143]: I0313 12:36:16.777564 4143 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/ce3a655a-0684-4bc5-ac36-5878507537c7-hostroot\") pod \"multus-bnn7n\" (UID: \"ce3a655a-0684-4bc5-ac36-5878507537c7\") " pod="openshift-multus/multus-bnn7n"
Mar 13 12:36:16.777802 master-0 kubenswrapper[4143]: I0313 12:36:16.777592 4143 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/ce3a655a-0684-4bc5-ac36-5878507537c7-host-run-multus-certs\") pod \"multus-bnn7n\" (UID: \"ce3a655a-0684-4bc5-ac36-5878507537c7\") " pod="openshift-multus/multus-bnn7n"
Mar 13 12:36:16.777802 master-0 kubenswrapper[4143]: I0313 12:36:16.777639 4143 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/ce3a655a-0684-4bc5-ac36-5878507537c7-host-run-k8s-cni-cncf-io\") pod \"multus-bnn7n\" (UID: \"ce3a655a-0684-4bc5-ac36-5878507537c7\") " pod="openshift-multus/multus-bnn7n"
Mar 13 12:36:16.779388 master-0 kubenswrapper[4143]: I0313 12:36:16.777682 4143 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/ce3a655a-0684-4bc5-ac36-5878507537c7-host-var-lib-cni-multus\") pod \"multus-bnn7n\" (UID: \"ce3a655a-0684-4bc5-ac36-5878507537c7\") " pod="openshift-multus/multus-bnn7n"
Mar 13 12:36:16.779388 master-0 kubenswrapper[4143]: I0313 12:36:16.777605 4143 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/ce3a655a-0684-4bc5-ac36-5878507537c7-hostroot\") pod \"multus-bnn7n\" (UID: \"ce3a655a-0684-4bc5-ac36-5878507537c7\") " pod="openshift-multus/multus-bnn7n"
Mar 13 12:36:16.779388 master-0 kubenswrapper[4143]: I0313 12:36:16.777764 4143 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/ce3a655a-0684-4bc5-ac36-5878507537c7-host-var-lib-cni-multus\") pod \"multus-bnn7n\" (UID: \"ce3a655a-0684-4bc5-ac36-5878507537c7\") " pod="openshift-multus/multus-bnn7n"
Mar 13 12:36:16.779388 master-0 kubenswrapper[4143]: I0313 12:36:16.777602 4143 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/ce3a655a-0684-4bc5-ac36-5878507537c7-multus-conf-dir\") pod \"multus-bnn7n\" (UID: \"ce3a655a-0684-4bc5-ac36-5878507537c7\") " pod="openshift-multus/multus-bnn7n"
Mar 13 12:36:16.779388 master-0 kubenswrapper[4143]: I0313 12:36:16.777718 4143 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/ce3a655a-0684-4bc5-ac36-5878507537c7-cni-binary-copy\") pod \"multus-bnn7n\" (UID: \"ce3a655a-0684-4bc5-ac36-5878507537c7\") " pod="openshift-multus/multus-bnn7n"
Mar 13 12:36:16.779388 master-0 kubenswrapper[4143]: I0313 12:36:16.777707 4143 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/ce3a655a-0684-4bc5-ac36-5878507537c7-host-run-k8s-cni-cncf-io\") pod \"multus-bnn7n\" (UID: \"ce3a655a-0684-4bc5-ac36-5878507537c7\") " pod="openshift-multus/multus-bnn7n"
Mar 13 12:36:16.779388 master-0 kubenswrapper[4143]: I0313 12:36:16.778263 4143 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/ce3a655a-0684-4bc5-ac36-5878507537c7-system-cni-dir\") pod \"multus-bnn7n\" (UID: \"ce3a655a-0684-4bc5-ac36-5878507537c7\") " pod="openshift-multus/multus-bnn7n"
Mar 13 12:36:16.779388 master-0 kubenswrapper[4143]: I0313 12:36:16.778878 4143 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/ce3a655a-0684-4bc5-ac36-5878507537c7-multus-daemon-config\") pod \"multus-bnn7n\" (UID: \"ce3a655a-0684-4bc5-ac36-5878507537c7\") " pod="openshift-multus/multus-bnn7n"
Mar 13 12:36:16.779876 master-0 kubenswrapper[4143]: I0313 12:36:16.779431 4143 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/ce3a655a-0684-4bc5-ac36-5878507537c7-cni-binary-copy\") pod \"multus-bnn7n\" (UID: \"ce3a655a-0684-4bc5-ac36-5878507537c7\") " pod="openshift-multus/multus-bnn7n"
Mar 13 12:36:16.800242 master-0 kubenswrapper[4143]: I0313 12:36:16.800168 4143 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vgbvr\" (UniqueName: \"kubernetes.io/projected/ce3a655a-0684-4bc5-ac36-5878507537c7-kube-api-access-vgbvr\") pod \"multus-bnn7n\" (UID: \"ce3a655a-0684-4bc5-ac36-5878507537c7\") " pod="openshift-multus/multus-bnn7n"
Mar 13 12:36:16.818558 master-0 kubenswrapper[4143]: I0313 12:36:16.818466 4143 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-additional-cni-plugins-78p2k"]
Mar 13 12:36:16.819174 master-0 kubenswrapper[4143]: I0313 12:36:16.819105 4143 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-78p2k"
Mar 13 12:36:16.822099 master-0 kubenswrapper[4143]: I0313 12:36:16.822018 4143 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist"
Mar 13 12:36:16.822314 master-0 kubenswrapper[4143]: I0313 12:36:16.822183 4143 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"whereabouts-config"
Mar 13 12:36:16.878379 master-0 kubenswrapper[4143]: I0313 12:36:16.878284 4143 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whereabouts-configmap\" (UniqueName: \"kubernetes.io/configmap/152689b1-5875-4a9a-bb25-bee858523168-whereabouts-configmap\") pod \"multus-additional-cni-plugins-78p2k\" (UID: \"152689b1-5875-4a9a-bb25-bee858523168\") " pod="openshift-multus/multus-additional-cni-plugins-78p2k"
Mar 13 12:36:16.878379 master-0 kubenswrapper[4143]: I0313 12:36:16.878384 4143 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-km69t\" (UniqueName: \"kubernetes.io/projected/152689b1-5875-4a9a-bb25-bee858523168-kube-api-access-km69t\") pod \"multus-additional-cni-plugins-78p2k\" (UID: \"152689b1-5875-4a9a-bb25-bee858523168\") " pod="openshift-multus/multus-additional-cni-plugins-78p2k"
Mar 13 12:36:16.878705 master-0 kubenswrapper[4143]: I0313 12:36:16.878421 4143 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/152689b1-5875-4a9a-bb25-bee858523168-cni-binary-copy\") pod \"multus-additional-cni-plugins-78p2k\" (UID: \"152689b1-5875-4a9a-bb25-bee858523168\") " pod="openshift-multus/multus-additional-cni-plugins-78p2k"
Mar 13 12:36:16.878705 master-0 kubenswrapper[4143]: I0313 12:36:16.878439 4143 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/152689b1-5875-4a9a-bb25-bee858523168-tuning-conf-dir\") pod \"multus-additional-cni-plugins-78p2k\" (UID: \"152689b1-5875-4a9a-bb25-bee858523168\") " pod="openshift-multus/multus-additional-cni-plugins-78p2k"
Mar 13 12:36:16.878705 master-0 kubenswrapper[4143]: I0313 12:36:16.878455 4143 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/152689b1-5875-4a9a-bb25-bee858523168-system-cni-dir\") pod \"multus-additional-cni-plugins-78p2k\" (UID: \"152689b1-5875-4a9a-bb25-bee858523168\") " pod="openshift-multus/multus-additional-cni-plugins-78p2k"
Mar 13 12:36:16.878705 master-0 kubenswrapper[4143]: I0313 12:36:16.878496 4143 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/152689b1-5875-4a9a-bb25-bee858523168-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-78p2k\" (UID: \"152689b1-5875-4a9a-bb25-bee858523168\") " pod="openshift-multus/multus-additional-cni-plugins-78p2k"
Mar 13 12:36:16.878705 master-0 kubenswrapper[4143]: I0313 12:36:16.878517 4143 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/152689b1-5875-4a9a-bb25-bee858523168-cnibin\") pod \"multus-additional-cni-plugins-78p2k\" (UID: \"152689b1-5875-4a9a-bb25-bee858523168\") " pod="openshift-multus/multus-additional-cni-plugins-78p2k"
Mar 13 12:36:16.878705 master-0 kubenswrapper[4143]: I0313 12:36:16.878562 4143 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/152689b1-5875-4a9a-bb25-bee858523168-os-release\") pod \"multus-additional-cni-plugins-78p2k\" (UID: \"152689b1-5875-4a9a-bb25-bee858523168\") " pod="openshift-multus/multus-additional-cni-plugins-78p2k"
Mar 13 12:36:16.948339 master-0 kubenswrapper[4143]: I0313 12:36:16.948231 4143 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-bnn7n"
Mar 13 12:36:16.966762 master-0 kubenswrapper[4143]: W0313 12:36:16.966483 4143 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podce3a655a_0684_4bc5_ac36_5878507537c7.slice/crio-8f7395682c642b2e4f7ba2a9b79331d0b9afd8c7d7923a7bbdfc90aaeb45a6c2 WatchSource:0}: Error finding container 8f7395682c642b2e4f7ba2a9b79331d0b9afd8c7d7923a7bbdfc90aaeb45a6c2: Status 404 returned error can't find the container with id 8f7395682c642b2e4f7ba2a9b79331d0b9afd8c7d7923a7bbdfc90aaeb45a6c2
Mar 13 12:36:16.979386 master-0 kubenswrapper[4143]: I0313 12:36:16.979337 4143 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-km69t\" (UniqueName: \"kubernetes.io/projected/152689b1-5875-4a9a-bb25-bee858523168-kube-api-access-km69t\") pod \"multus-additional-cni-plugins-78p2k\" (UID: \"152689b1-5875-4a9a-bb25-bee858523168\") " pod="openshift-multus/multus-additional-cni-plugins-78p2k"
Mar 13 12:36:16.979612 master-0 kubenswrapper[4143]: I0313 12:36:16.979398 4143 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/152689b1-5875-4a9a-bb25-bee858523168-cni-binary-copy\") pod \"multus-additional-cni-plugins-78p2k\" (UID: \"152689b1-5875-4a9a-bb25-bee858523168\") " pod="openshift-multus/multus-additional-cni-plugins-78p2k"
Mar 13 12:36:16.979676 master-0 kubenswrapper[4143]: I0313 12:36:16.979643 4143 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/152689b1-5875-4a9a-bb25-bee858523168-tuning-conf-dir\") pod \"multus-additional-cni-plugins-78p2k\" (UID: \"152689b1-5875-4a9a-bb25-bee858523168\") " pod="openshift-multus/multus-additional-cni-plugins-78p2k"
Mar 13 12:36:16.979890 master-0 kubenswrapper[4143]: I0313 12:36:16.979855 4143 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/152689b1-5875-4a9a-bb25-bee858523168-system-cni-dir\") pod \"multus-additional-cni-plugins-78p2k\" (UID: \"152689b1-5875-4a9a-bb25-bee858523168\") " pod="openshift-multus/multus-additional-cni-plugins-78p2k"
Mar 13 12:36:16.979986 master-0 kubenswrapper[4143]: I0313 12:36:16.979908 4143 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/152689b1-5875-4a9a-bb25-bee858523168-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-78p2k\" (UID: \"152689b1-5875-4a9a-bb25-bee858523168\") " pod="openshift-multus/multus-additional-cni-plugins-78p2k"
Mar 13 12:36:16.979986 master-0 kubenswrapper[4143]: I0313 12:36:16.979940 4143 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/152689b1-5875-4a9a-bb25-bee858523168-cnibin\") pod \"multus-additional-cni-plugins-78p2k\" (UID: \"152689b1-5875-4a9a-bb25-bee858523168\") " pod="openshift-multus/multus-additional-cni-plugins-78p2k"
Mar 13 12:36:16.979986 master-0 kubenswrapper[4143]: I0313 12:36:16.979961 4143 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/152689b1-5875-4a9a-bb25-bee858523168-os-release\") pod \"multus-additional-cni-plugins-78p2k\" (UID: \"152689b1-5875-4a9a-bb25-bee858523168\") " pod="openshift-multus/multus-additional-cni-plugins-78p2k"
Mar 13 12:36:16.980174 master-0 kubenswrapper[4143]: I0313 12:36:16.980001 4143 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/152689b1-5875-4a9a-bb25-bee858523168-tuning-conf-dir\") pod \"multus-additional-cni-plugins-78p2k\" (UID: \"152689b1-5875-4a9a-bb25-bee858523168\") " pod="openshift-multus/multus-additional-cni-plugins-78p2k"
Mar 13 12:36:16.980174 master-0 kubenswrapper[4143]: I0313 12:36:16.980014 4143 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"whereabouts-configmap\" (UniqueName: \"kubernetes.io/configmap/152689b1-5875-4a9a-bb25-bee858523168-whereabouts-configmap\") pod \"multus-additional-cni-plugins-78p2k\" (UID: \"152689b1-5875-4a9a-bb25-bee858523168\") " pod="openshift-multus/multus-additional-cni-plugins-78p2k"
Mar 13 12:36:16.980582 master-0 kubenswrapper[4143]: I0313 12:36:16.980435 4143 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/152689b1-5875-4a9a-bb25-bee858523168-cnibin\") pod \"multus-additional-cni-plugins-78p2k\" (UID: \"152689b1-5875-4a9a-bb25-bee858523168\") " pod="openshift-multus/multus-additional-cni-plugins-78p2k"
Mar 13 12:36:16.980582 master-0 kubenswrapper[4143]: I0313 12:36:16.980444 4143 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/152689b1-5875-4a9a-bb25-bee858523168-system-cni-dir\") pod \"multus-additional-cni-plugins-78p2k\" (UID: \"152689b1-5875-4a9a-bb25-bee858523168\") " pod="openshift-multus/multus-additional-cni-plugins-78p2k"
Mar 13 12:36:16.980582 master-0 kubenswrapper[4143]: I0313 12:36:16.980525 4143 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/152689b1-5875-4a9a-bb25-bee858523168-os-release\") pod \"multus-additional-cni-plugins-78p2k\" (UID: \"152689b1-5875-4a9a-bb25-bee858523168\") " pod="openshift-multus/multus-additional-cni-plugins-78p2k"
Mar 13 12:36:16.980826 master-0 kubenswrapper[4143]: I0313 12:36:16.980795 4143 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/152689b1-5875-4a9a-bb25-bee858523168-cni-binary-copy\") pod \"multus-additional-cni-plugins-78p2k\" (UID: \"152689b1-5875-4a9a-bb25-bee858523168\") " pod="openshift-multus/multus-additional-cni-plugins-78p2k"
Mar 13 12:36:16.980936 master-0 kubenswrapper[4143]: I0313 12:36:16.980826 4143 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"whereabouts-configmap\" (UniqueName: \"kubernetes.io/configmap/152689b1-5875-4a9a-bb25-bee858523168-whereabouts-configmap\") pod \"multus-additional-cni-plugins-78p2k\" (UID: \"152689b1-5875-4a9a-bb25-bee858523168\") " pod="openshift-multus/multus-additional-cni-plugins-78p2k"
Mar 13 12:36:16.980936 master-0 kubenswrapper[4143]: I0313 12:36:16.980830 4143 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/152689b1-5875-4a9a-bb25-bee858523168-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-78p2k\" (UID: \"152689b1-5875-4a9a-bb25-bee858523168\") " pod="openshift-multus/multus-additional-cni-plugins-78p2k"
Mar 13 12:36:17.002547 master-0 kubenswrapper[4143]: I0313 12:36:17.002467 4143 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-km69t\" (UniqueName: \"kubernetes.io/projected/152689b1-5875-4a9a-bb25-bee858523168-kube-api-access-km69t\") pod \"multus-additional-cni-plugins-78p2k\" (UID: \"152689b1-5875-4a9a-bb25-bee858523168\") " pod="openshift-multus/multus-additional-cni-plugins-78p2k"
Mar 13 12:36:17.132621 master-0 kubenswrapper[4143]: I0313 12:36:17.132573 4143 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-78p2k"
Mar 13 12:36:17.146745 master-0 kubenswrapper[4143]: W0313 12:36:17.146713 4143 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod152689b1_5875_4a9a_bb25_bee858523168.slice/crio-2bd86a5a786b8cd9854f1e649c41cebb309a3c1ac190ae67ed40c19b3eec0d04 WatchSource:0}: Error finding container 2bd86a5a786b8cd9854f1e649c41cebb309a3c1ac190ae67ed40c19b3eec0d04: Status 404 returned error can't find the container with id 2bd86a5a786b8cd9854f1e649c41cebb309a3c1ac190ae67ed40c19b3eec0d04
Mar 13 12:36:17.530237 master-0 kubenswrapper[4143]: I0313 12:36:17.530120 4143 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-bnn7n" event={"ID":"ce3a655a-0684-4bc5-ac36-5878507537c7","Type":"ContainerStarted","Data":"8f7395682c642b2e4f7ba2a9b79331d0b9afd8c7d7923a7bbdfc90aaeb45a6c2"}
Mar 13 12:36:17.531932 master-0 kubenswrapper[4143]: I0313 12:36:17.531876 4143 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-78p2k" event={"ID":"152689b1-5875-4a9a-bb25-bee858523168","Type":"ContainerStarted","Data":"2bd86a5a786b8cd9854f1e649c41cebb309a3c1ac190ae67ed40c19b3eec0d04"}
Mar 13 12:36:17.608080 master-0 kubenswrapper[4143]: I0313 12:36:17.607968 4143 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/network-metrics-daemon-r9lmb"]
Mar 13 12:36:17.608741 master-0 kubenswrapper[4143]: I0313 12:36:17.608703 4143 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-r9lmb"
Mar 13 12:36:17.608940 master-0 kubenswrapper[4143]: E0313 12:36:17.608811 4143 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-r9lmb" podUID="29b6aa89-0416-4595-9deb-10b290521d86"
Mar 13 12:36:17.687756 master-0 kubenswrapper[4143]: I0313 12:36:17.687661 4143 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cbtjs\" (UniqueName: \"kubernetes.io/projected/29b6aa89-0416-4595-9deb-10b290521d86-kube-api-access-cbtjs\") pod \"network-metrics-daemon-r9lmb\" (UID: \"29b6aa89-0416-4595-9deb-10b290521d86\") " pod="openshift-multus/network-metrics-daemon-r9lmb"
Mar 13 12:36:17.687756 master-0 kubenswrapper[4143]: I0313 12:36:17.687718 4143 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/29b6aa89-0416-4595-9deb-10b290521d86-metrics-certs\") pod \"network-metrics-daemon-r9lmb\" (UID: \"29b6aa89-0416-4595-9deb-10b290521d86\") " pod="openshift-multus/network-metrics-daemon-r9lmb"
Mar 13 12:36:17.788858 master-0 kubenswrapper[4143]: I0313 12:36:17.788745 4143 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/29b6aa89-0416-4595-9deb-10b290521d86-metrics-certs\") pod \"network-metrics-daemon-r9lmb\" (UID: \"29b6aa89-0416-4595-9deb-10b290521d86\") " pod="openshift-multus/network-metrics-daemon-r9lmb"
Mar 13 12:36:17.788858 master-0 kubenswrapper[4143]: I0313 12:36:17.788804 4143 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cbtjs\" 
(UniqueName: \"kubernetes.io/projected/29b6aa89-0416-4595-9deb-10b290521d86-kube-api-access-cbtjs\") pod \"network-metrics-daemon-r9lmb\" (UID: \"29b6aa89-0416-4595-9deb-10b290521d86\") " pod="openshift-multus/network-metrics-daemon-r9lmb" Mar 13 12:36:17.789497 master-0 kubenswrapper[4143]: E0313 12:36:17.789117 4143 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Mar 13 12:36:17.789497 master-0 kubenswrapper[4143]: E0313 12:36:17.789227 4143 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/29b6aa89-0416-4595-9deb-10b290521d86-metrics-certs podName:29b6aa89-0416-4595-9deb-10b290521d86 nodeName:}" failed. No retries permitted until 2026-03-13 12:36:18.289207562 +0000 UTC m=+84.036351886 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/29b6aa89-0416-4595-9deb-10b290521d86-metrics-certs") pod "network-metrics-daemon-r9lmb" (UID: "29b6aa89-0416-4595-9deb-10b290521d86") : object "openshift-multus"/"metrics-daemon-secret" not registered Mar 13 12:36:17.840806 master-0 kubenswrapper[4143]: I0313 12:36:17.840759 4143 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cbtjs\" (UniqueName: \"kubernetes.io/projected/29b6aa89-0416-4595-9deb-10b290521d86-kube-api-access-cbtjs\") pod \"network-metrics-daemon-r9lmb\" (UID: \"29b6aa89-0416-4595-9deb-10b290521d86\") " pod="openshift-multus/network-metrics-daemon-r9lmb" Mar 13 12:36:18.293300 master-0 kubenswrapper[4143]: I0313 12:36:18.293240 4143 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/29b6aa89-0416-4595-9deb-10b290521d86-metrics-certs\") pod \"network-metrics-daemon-r9lmb\" (UID: \"29b6aa89-0416-4595-9deb-10b290521d86\") " pod="openshift-multus/network-metrics-daemon-r9lmb" Mar 13 12:36:18.293483 master-0 
kubenswrapper[4143]: E0313 12:36:18.293447 4143 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Mar 13 12:36:18.293562 master-0 kubenswrapper[4143]: E0313 12:36:18.293519 4143 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/29b6aa89-0416-4595-9deb-10b290521d86-metrics-certs podName:29b6aa89-0416-4595-9deb-10b290521d86 nodeName:}" failed. No retries permitted until 2026-03-13 12:36:19.293500254 +0000 UTC m=+85.040644578 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/29b6aa89-0416-4595-9deb-10b290521d86-metrics-certs") pod "network-metrics-daemon-r9lmb" (UID: "29b6aa89-0416-4595-9deb-10b290521d86") : object "openshift-multus"/"metrics-daemon-secret" not registered Mar 13 12:36:19.302546 master-0 kubenswrapper[4143]: I0313 12:36:19.302492 4143 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/29b6aa89-0416-4595-9deb-10b290521d86-metrics-certs\") pod \"network-metrics-daemon-r9lmb\" (UID: \"29b6aa89-0416-4595-9deb-10b290521d86\") " pod="openshift-multus/network-metrics-daemon-r9lmb" Mar 13 12:36:19.303011 master-0 kubenswrapper[4143]: E0313 12:36:19.302866 4143 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Mar 13 12:36:19.303081 master-0 kubenswrapper[4143]: E0313 12:36:19.303050 4143 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/29b6aa89-0416-4595-9deb-10b290521d86-metrics-certs podName:29b6aa89-0416-4595-9deb-10b290521d86 nodeName:}" failed. No retries permitted until 2026-03-13 12:36:21.303021864 +0000 UTC m=+87.050166188 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/29b6aa89-0416-4595-9deb-10b290521d86-metrics-certs") pod "network-metrics-daemon-r9lmb" (UID: "29b6aa89-0416-4595-9deb-10b290521d86") : object "openshift-multus"/"metrics-daemon-secret" not registered Mar 13 12:36:20.082867 master-0 kubenswrapper[4143]: I0313 12:36:20.082122 4143 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-r9lmb" Mar 13 12:36:20.082867 master-0 kubenswrapper[4143]: E0313 12:36:20.082329 4143 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-r9lmb" podUID="29b6aa89-0416-4595-9deb-10b290521d86" Mar 13 12:36:20.540078 master-0 kubenswrapper[4143]: I0313 12:36:20.540002 4143 generic.go:334] "Generic (PLEG): container finished" podID="152689b1-5875-4a9a-bb25-bee858523168" containerID="ae6f8708327259b51cf004983ebe879d244aef1bf9515e029c5674f436c5c187" exitCode=0 Mar 13 12:36:20.540078 master-0 kubenswrapper[4143]: I0313 12:36:20.540058 4143 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-78p2k" event={"ID":"152689b1-5875-4a9a-bb25-bee858523168","Type":"ContainerDied","Data":"ae6f8708327259b51cf004983ebe879d244aef1bf9515e029c5674f436c5c187"} Mar 13 12:36:21.097325 master-0 kubenswrapper[4143]: W0313 12:36:21.097227 4143 warnings.go:70] would violate PodSecurity "restricted:latest": host namespaces (hostNetwork=true), hostPort (container "etcd" uses hostPorts 2379, 2380), privileged (containers "etcdctl", "etcd" must not set securityContext.privileged=true), allowPrivilegeEscalation != false (containers "etcdctl", "etcd" must set 
securityContext.allowPrivilegeEscalation=false), unrestricted capabilities (containers "etcdctl", "etcd" must set securityContext.capabilities.drop=["ALL"]), restricted volume types (volumes "certs", "data-dir" use restricted volume type "hostPath"), runAsNonRoot != true (pod or containers "etcdctl", "etcd" must set securityContext.runAsNonRoot=true), seccompProfile (pod or containers "etcdctl", "etcd" must set securityContext.seccompProfile.type to "RuntimeDefault" or "Localhost") Mar 13 12:36:21.097657 master-0 kubenswrapper[4143]: I0313 12:36:21.097626 4143 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd/etcd-master-0-master-0"] Mar 13 12:36:21.317350 master-0 kubenswrapper[4143]: I0313 12:36:21.317297 4143 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/29b6aa89-0416-4595-9deb-10b290521d86-metrics-certs\") pod \"network-metrics-daemon-r9lmb\" (UID: \"29b6aa89-0416-4595-9deb-10b290521d86\") " pod="openshift-multus/network-metrics-daemon-r9lmb" Mar 13 12:36:21.317639 master-0 kubenswrapper[4143]: E0313 12:36:21.317467 4143 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Mar 13 12:36:21.317639 master-0 kubenswrapper[4143]: E0313 12:36:21.317527 4143 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/29b6aa89-0416-4595-9deb-10b290521d86-metrics-certs podName:29b6aa89-0416-4595-9deb-10b290521d86 nodeName:}" failed. No retries permitted until 2026-03-13 12:36:25.317508827 +0000 UTC m=+91.064653151 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/29b6aa89-0416-4595-9deb-10b290521d86-metrics-certs") pod "network-metrics-daemon-r9lmb" (UID: "29b6aa89-0416-4595-9deb-10b290521d86") : object "openshift-multus"/"metrics-daemon-secret" not registered Mar 13 12:36:22.086160 master-0 kubenswrapper[4143]: I0313 12:36:22.084418 4143 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-r9lmb" Mar 13 12:36:22.086160 master-0 kubenswrapper[4143]: E0313 12:36:22.084564 4143 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-r9lmb" podUID="29b6aa89-0416-4595-9deb-10b290521d86" Mar 13 12:36:24.081795 master-0 kubenswrapper[4143]: I0313 12:36:24.081732 4143 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-r9lmb" Mar 13 12:36:24.082319 master-0 kubenswrapper[4143]: E0313 12:36:24.081909 4143 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-r9lmb" podUID="29b6aa89-0416-4595-9deb-10b290521d86" Mar 13 12:36:25.198683 master-0 kubenswrapper[4143]: I0313 12:36:25.198616 4143 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/etcd-master-0-master-0" podStartSLOduration=4.198583725 podStartE2EDuration="4.198583725s" podCreationTimestamp="2026-03-13 12:36:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 12:36:25.198541434 +0000 UTC m=+90.945685778" watchObservedRunningTime="2026-03-13 12:36:25.198583725 +0000 UTC m=+90.945728049" Mar 13 12:36:25.392966 master-0 kubenswrapper[4143]: I0313 12:36:25.392901 4143 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/29b6aa89-0416-4595-9deb-10b290521d86-metrics-certs\") pod \"network-metrics-daemon-r9lmb\" (UID: \"29b6aa89-0416-4595-9deb-10b290521d86\") " pod="openshift-multus/network-metrics-daemon-r9lmb" Mar 13 12:36:25.393311 master-0 kubenswrapper[4143]: E0313 12:36:25.393179 4143 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Mar 13 12:36:25.393311 master-0 kubenswrapper[4143]: E0313 12:36:25.393304 4143 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/29b6aa89-0416-4595-9deb-10b290521d86-metrics-certs podName:29b6aa89-0416-4595-9deb-10b290521d86 nodeName:}" failed. No retries permitted until 2026-03-13 12:36:33.393254793 +0000 UTC m=+99.140399117 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/29b6aa89-0416-4595-9deb-10b290521d86-metrics-certs") pod "network-metrics-daemon-r9lmb" (UID: "29b6aa89-0416-4595-9deb-10b290521d86") : object "openshift-multus"/"metrics-daemon-secret" not registered Mar 13 12:36:26.081780 master-0 kubenswrapper[4143]: I0313 12:36:26.081724 4143 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-r9lmb" Mar 13 12:36:26.082087 master-0 kubenswrapper[4143]: E0313 12:36:26.081881 4143 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-r9lmb" podUID="29b6aa89-0416-4595-9deb-10b290521d86" Mar 13 12:36:28.081912 master-0 kubenswrapper[4143]: I0313 12:36:28.081871 4143 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-r9lmb" Mar 13 12:36:28.082574 master-0 kubenswrapper[4143]: E0313 12:36:28.081991 4143 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-r9lmb" podUID="29b6aa89-0416-4595-9deb-10b290521d86" Mar 13 12:36:29.031558 master-0 kubenswrapper[4143]: I0313 12:36:29.031515 4143 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-control-plane-66b55d57d-5cww5"] Mar 13 12:36:29.031989 master-0 kubenswrapper[4143]: I0313 12:36:29.031945 4143 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-66b55d57d-5cww5" Mar 13 12:36:29.034102 master-0 kubenswrapper[4143]: I0313 12:36:29.033900 4143 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Mar 13 12:36:29.034102 master-0 kubenswrapper[4143]: I0313 12:36:29.034115 4143 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Mar 13 12:36:29.037949 master-0 kubenswrapper[4143]: I0313 12:36:29.034415 4143 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Mar 13 12:36:29.037949 master-0 kubenswrapper[4143]: I0313 12:36:29.034580 4143 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Mar 13 12:36:29.037949 master-0 kubenswrapper[4143]: I0313 12:36:29.035436 4143 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Mar 13 12:36:29.110062 master-0 kubenswrapper[4143]: I0313 12:36:29.110019 4143 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["kube-system/bootstrap-kube-scheduler-master-0"] Mar 13 12:36:29.200171 master-0 kubenswrapper[4143]: I0313 12:36:29.200100 4143 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/5ae41cff-0949-47f8-aae9-ae133191476d-env-overrides\") pod \"ovnkube-control-plane-66b55d57d-5cww5\" (UID: \"5ae41cff-0949-47f8-aae9-ae133191476d\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-66b55d57d-5cww5" Mar 13 12:36:29.200171 master-0 kubenswrapper[4143]: I0313 12:36:29.200171 4143 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mlvjp\" (UniqueName: \"kubernetes.io/projected/5ae41cff-0949-47f8-aae9-ae133191476d-kube-api-access-mlvjp\") pod 
\"ovnkube-control-plane-66b55d57d-5cww5\" (UID: \"5ae41cff-0949-47f8-aae9-ae133191476d\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-66b55d57d-5cww5" Mar 13 12:36:29.200402 master-0 kubenswrapper[4143]: I0313 12:36:29.200231 4143 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/5ae41cff-0949-47f8-aae9-ae133191476d-ovnkube-config\") pod \"ovnkube-control-plane-66b55d57d-5cww5\" (UID: \"5ae41cff-0949-47f8-aae9-ae133191476d\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-66b55d57d-5cww5" Mar 13 12:36:29.200402 master-0 kubenswrapper[4143]: I0313 12:36:29.200284 4143 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/5ae41cff-0949-47f8-aae9-ae133191476d-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-66b55d57d-5cww5\" (UID: \"5ae41cff-0949-47f8-aae9-ae133191476d\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-66b55d57d-5cww5" Mar 13 12:36:29.228856 master-0 kubenswrapper[4143]: I0313 12:36:29.228818 4143 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-fn8qb"] Mar 13 12:36:29.229565 master-0 kubenswrapper[4143]: I0313 12:36:29.229547 4143 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-fn8qb" Mar 13 12:36:29.231587 master-0 kubenswrapper[4143]: I0313 12:36:29.231559 4143 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Mar 13 12:36:29.232982 master-0 kubenswrapper[4143]: I0313 12:36:29.232217 4143 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Mar 13 12:36:29.241870 master-0 kubenswrapper[4143]: I0313 12:36:29.241809 4143 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/bootstrap-kube-scheduler-master-0" podStartSLOduration=0.241793293 podStartE2EDuration="241.793293ms" podCreationTimestamp="2026-03-13 12:36:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 12:36:29.241694832 +0000 UTC m=+94.988839166" watchObservedRunningTime="2026-03-13 12:36:29.241793293 +0000 UTC m=+94.988937617" Mar 13 12:36:29.301739 master-0 kubenswrapper[4143]: I0313 12:36:29.301603 4143 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/5ae41cff-0949-47f8-aae9-ae133191476d-env-overrides\") pod \"ovnkube-control-plane-66b55d57d-5cww5\" (UID: \"5ae41cff-0949-47f8-aae9-ae133191476d\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-66b55d57d-5cww5" Mar 13 12:36:29.301739 master-0 kubenswrapper[4143]: I0313 12:36:29.301664 4143 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mlvjp\" (UniqueName: \"kubernetes.io/projected/5ae41cff-0949-47f8-aae9-ae133191476d-kube-api-access-mlvjp\") pod \"ovnkube-control-plane-66b55d57d-5cww5\" (UID: \"5ae41cff-0949-47f8-aae9-ae133191476d\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-66b55d57d-5cww5" Mar 13 12:36:29.302132 master-0 kubenswrapper[4143]: I0313 
12:36:29.302008 4143 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/5ae41cff-0949-47f8-aae9-ae133191476d-ovnkube-config\") pod \"ovnkube-control-plane-66b55d57d-5cww5\" (UID: \"5ae41cff-0949-47f8-aae9-ae133191476d\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-66b55d57d-5cww5" Mar 13 12:36:29.302132 master-0 kubenswrapper[4143]: I0313 12:36:29.302076 4143 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/5ae41cff-0949-47f8-aae9-ae133191476d-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-66b55d57d-5cww5\" (UID: \"5ae41cff-0949-47f8-aae9-ae133191476d\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-66b55d57d-5cww5" Mar 13 12:36:29.302759 master-0 kubenswrapper[4143]: I0313 12:36:29.302349 4143 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/5ae41cff-0949-47f8-aae9-ae133191476d-env-overrides\") pod \"ovnkube-control-plane-66b55d57d-5cww5\" (UID: \"5ae41cff-0949-47f8-aae9-ae133191476d\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-66b55d57d-5cww5" Mar 13 12:36:29.302759 master-0 kubenswrapper[4143]: I0313 12:36:29.302726 4143 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/5ae41cff-0949-47f8-aae9-ae133191476d-ovnkube-config\") pod \"ovnkube-control-plane-66b55d57d-5cww5\" (UID: \"5ae41cff-0949-47f8-aae9-ae133191476d\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-66b55d57d-5cww5" Mar 13 12:36:29.305814 master-0 kubenswrapper[4143]: I0313 12:36:29.305790 4143 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/5ae41cff-0949-47f8-aae9-ae133191476d-ovn-control-plane-metrics-cert\") pod 
\"ovnkube-control-plane-66b55d57d-5cww5\" (UID: \"5ae41cff-0949-47f8-aae9-ae133191476d\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-66b55d57d-5cww5" Mar 13 12:36:29.321796 master-0 kubenswrapper[4143]: I0313 12:36:29.321763 4143 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mlvjp\" (UniqueName: \"kubernetes.io/projected/5ae41cff-0949-47f8-aae9-ae133191476d-kube-api-access-mlvjp\") pod \"ovnkube-control-plane-66b55d57d-5cww5\" (UID: \"5ae41cff-0949-47f8-aae9-ae133191476d\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-66b55d57d-5cww5" Mar 13 12:36:29.345591 master-0 kubenswrapper[4143]: I0313 12:36:29.345539 4143 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-66b55d57d-5cww5" Mar 13 12:36:29.417894 master-0 kubenswrapper[4143]: I0313 12:36:29.417839 4143 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/7f5a7196-2bfe-4fc7-820f-c6d17bb24f29-ovn-node-metrics-cert\") pod \"ovnkube-node-fn8qb\" (UID: \"7f5a7196-2bfe-4fc7-820f-c6d17bb24f29\") " pod="openshift-ovn-kubernetes/ovnkube-node-fn8qb" Mar 13 12:36:29.417894 master-0 kubenswrapper[4143]: I0313 12:36:29.417890 4143 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/7f5a7196-2bfe-4fc7-820f-c6d17bb24f29-host-slash\") pod \"ovnkube-node-fn8qb\" (UID: \"7f5a7196-2bfe-4fc7-820f-c6d17bb24f29\") " pod="openshift-ovn-kubernetes/ovnkube-node-fn8qb" Mar 13 12:36:29.418139 master-0 kubenswrapper[4143]: I0313 12:36:29.417916 4143 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/7f5a7196-2bfe-4fc7-820f-c6d17bb24f29-var-lib-openvswitch\") pod \"ovnkube-node-fn8qb\" (UID: 
\"7f5a7196-2bfe-4fc7-820f-c6d17bb24f29\") " pod="openshift-ovn-kubernetes/ovnkube-node-fn8qb" Mar 13 12:36:29.418139 master-0 kubenswrapper[4143]: I0313 12:36:29.417939 4143 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/7f5a7196-2bfe-4fc7-820f-c6d17bb24f29-run-systemd\") pod \"ovnkube-node-fn8qb\" (UID: \"7f5a7196-2bfe-4fc7-820f-c6d17bb24f29\") " pod="openshift-ovn-kubernetes/ovnkube-node-fn8qb" Mar 13 12:36:29.418139 master-0 kubenswrapper[4143]: I0313 12:36:29.417959 4143 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/7f5a7196-2bfe-4fc7-820f-c6d17bb24f29-node-log\") pod \"ovnkube-node-fn8qb\" (UID: \"7f5a7196-2bfe-4fc7-820f-c6d17bb24f29\") " pod="openshift-ovn-kubernetes/ovnkube-node-fn8qb" Mar 13 12:36:29.418139 master-0 kubenswrapper[4143]: I0313 12:36:29.417983 4143 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/7f5a7196-2bfe-4fc7-820f-c6d17bb24f29-ovnkube-script-lib\") pod \"ovnkube-node-fn8qb\" (UID: \"7f5a7196-2bfe-4fc7-820f-c6d17bb24f29\") " pod="openshift-ovn-kubernetes/ovnkube-node-fn8qb" Mar 13 12:36:29.418139 master-0 kubenswrapper[4143]: I0313 12:36:29.418003 4143 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/7f5a7196-2bfe-4fc7-820f-c6d17bb24f29-host-run-netns\") pod \"ovnkube-node-fn8qb\" (UID: \"7f5a7196-2bfe-4fc7-820f-c6d17bb24f29\") " pod="openshift-ovn-kubernetes/ovnkube-node-fn8qb" Mar 13 12:36:29.418139 master-0 kubenswrapper[4143]: I0313 12:36:29.418036 4143 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: 
\"kubernetes.io/host-path/7f5a7196-2bfe-4fc7-820f-c6d17bb24f29-log-socket\") pod \"ovnkube-node-fn8qb\" (UID: \"7f5a7196-2bfe-4fc7-820f-c6d17bb24f29\") " pod="openshift-ovn-kubernetes/ovnkube-node-fn8qb" Mar 13 12:36:29.418139 master-0 kubenswrapper[4143]: I0313 12:36:29.418059 4143 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/7f5a7196-2bfe-4fc7-820f-c6d17bb24f29-host-cni-bin\") pod \"ovnkube-node-fn8qb\" (UID: \"7f5a7196-2bfe-4fc7-820f-c6d17bb24f29\") " pod="openshift-ovn-kubernetes/ovnkube-node-fn8qb" Mar 13 12:36:29.418629 master-0 kubenswrapper[4143]: I0313 12:36:29.418596 4143 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/7f5a7196-2bfe-4fc7-820f-c6d17bb24f29-host-run-ovn-kubernetes\") pod \"ovnkube-node-fn8qb\" (UID: \"7f5a7196-2bfe-4fc7-820f-c6d17bb24f29\") " pod="openshift-ovn-kubernetes/ovnkube-node-fn8qb" Mar 13 12:36:29.418768 master-0 kubenswrapper[4143]: I0313 12:36:29.418742 4143 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/7f5a7196-2bfe-4fc7-820f-c6d17bb24f29-run-openvswitch\") pod \"ovnkube-node-fn8qb\" (UID: \"7f5a7196-2bfe-4fc7-820f-c6d17bb24f29\") " pod="openshift-ovn-kubernetes/ovnkube-node-fn8qb" Mar 13 12:36:29.418817 master-0 kubenswrapper[4143]: I0313 12:36:29.418781 4143 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/7f5a7196-2bfe-4fc7-820f-c6d17bb24f29-systemd-units\") pod \"ovnkube-node-fn8qb\" (UID: \"7f5a7196-2bfe-4fc7-820f-c6d17bb24f29\") " pod="openshift-ovn-kubernetes/ovnkube-node-fn8qb" Mar 13 12:36:29.418817 master-0 kubenswrapper[4143]: I0313 12:36:29.418799 4143 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/7f5a7196-2bfe-4fc7-820f-c6d17bb24f29-etc-openvswitch\") pod \"ovnkube-node-fn8qb\" (UID: \"7f5a7196-2bfe-4fc7-820f-c6d17bb24f29\") " pod="openshift-ovn-kubernetes/ovnkube-node-fn8qb" Mar 13 12:36:29.418867 master-0 kubenswrapper[4143]: I0313 12:36:29.418819 4143 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/7f5a7196-2bfe-4fc7-820f-c6d17bb24f29-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-fn8qb\" (UID: \"7f5a7196-2bfe-4fc7-820f-c6d17bb24f29\") " pod="openshift-ovn-kubernetes/ovnkube-node-fn8qb" Mar 13 12:36:29.418867 master-0 kubenswrapper[4143]: I0313 12:36:29.418852 4143 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2bz6x\" (UniqueName: \"kubernetes.io/projected/7f5a7196-2bfe-4fc7-820f-c6d17bb24f29-kube-api-access-2bz6x\") pod \"ovnkube-node-fn8qb\" (UID: \"7f5a7196-2bfe-4fc7-820f-c6d17bb24f29\") " pod="openshift-ovn-kubernetes/ovnkube-node-fn8qb" Mar 13 12:36:29.418928 master-0 kubenswrapper[4143]: I0313 12:36:29.418881 4143 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/7f5a7196-2bfe-4fc7-820f-c6d17bb24f29-host-kubelet\") pod \"ovnkube-node-fn8qb\" (UID: \"7f5a7196-2bfe-4fc7-820f-c6d17bb24f29\") " pod="openshift-ovn-kubernetes/ovnkube-node-fn8qb" Mar 13 12:36:29.418928 master-0 kubenswrapper[4143]: I0313 12:36:29.418919 4143 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/7f5a7196-2bfe-4fc7-820f-c6d17bb24f29-ovnkube-config\") pod \"ovnkube-node-fn8qb\" (UID: \"7f5a7196-2bfe-4fc7-820f-c6d17bb24f29\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-fn8qb" Mar 13 12:36:29.418981 master-0 kubenswrapper[4143]: I0313 12:36:29.418936 4143 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/7f5a7196-2bfe-4fc7-820f-c6d17bb24f29-env-overrides\") pod \"ovnkube-node-fn8qb\" (UID: \"7f5a7196-2bfe-4fc7-820f-c6d17bb24f29\") " pod="openshift-ovn-kubernetes/ovnkube-node-fn8qb" Mar 13 12:36:29.419008 master-0 kubenswrapper[4143]: I0313 12:36:29.418990 4143 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/7f5a7196-2bfe-4fc7-820f-c6d17bb24f29-host-cni-netd\") pod \"ovnkube-node-fn8qb\" (UID: \"7f5a7196-2bfe-4fc7-820f-c6d17bb24f29\") " pod="openshift-ovn-kubernetes/ovnkube-node-fn8qb" Mar 13 12:36:29.419037 master-0 kubenswrapper[4143]: I0313 12:36:29.419014 4143 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/7f5a7196-2bfe-4fc7-820f-c6d17bb24f29-run-ovn\") pod \"ovnkube-node-fn8qb\" (UID: \"7f5a7196-2bfe-4fc7-820f-c6d17bb24f29\") " pod="openshift-ovn-kubernetes/ovnkube-node-fn8qb" Mar 13 12:36:29.521730 master-0 kubenswrapper[4143]: I0313 12:36:29.521692 4143 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/7f5a7196-2bfe-4fc7-820f-c6d17bb24f29-ovnkube-config\") pod \"ovnkube-node-fn8qb\" (UID: \"7f5a7196-2bfe-4fc7-820f-c6d17bb24f29\") " pod="openshift-ovn-kubernetes/ovnkube-node-fn8qb" Mar 13 12:36:29.521730 master-0 kubenswrapper[4143]: I0313 12:36:29.521732 4143 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/7f5a7196-2bfe-4fc7-820f-c6d17bb24f29-env-overrides\") pod \"ovnkube-node-fn8qb\" (UID: 
\"7f5a7196-2bfe-4fc7-820f-c6d17bb24f29\") " pod="openshift-ovn-kubernetes/ovnkube-node-fn8qb" Mar 13 12:36:29.521990 master-0 kubenswrapper[4143]: I0313 12:36:29.521768 4143 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/7f5a7196-2bfe-4fc7-820f-c6d17bb24f29-host-cni-netd\") pod \"ovnkube-node-fn8qb\" (UID: \"7f5a7196-2bfe-4fc7-820f-c6d17bb24f29\") " pod="openshift-ovn-kubernetes/ovnkube-node-fn8qb" Mar 13 12:36:29.521990 master-0 kubenswrapper[4143]: I0313 12:36:29.521792 4143 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/7f5a7196-2bfe-4fc7-820f-c6d17bb24f29-run-ovn\") pod \"ovnkube-node-fn8qb\" (UID: \"7f5a7196-2bfe-4fc7-820f-c6d17bb24f29\") " pod="openshift-ovn-kubernetes/ovnkube-node-fn8qb" Mar 13 12:36:29.521990 master-0 kubenswrapper[4143]: I0313 12:36:29.521821 4143 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/7f5a7196-2bfe-4fc7-820f-c6d17bb24f29-ovn-node-metrics-cert\") pod \"ovnkube-node-fn8qb\" (UID: \"7f5a7196-2bfe-4fc7-820f-c6d17bb24f29\") " pod="openshift-ovn-kubernetes/ovnkube-node-fn8qb" Mar 13 12:36:29.521990 master-0 kubenswrapper[4143]: I0313 12:36:29.521880 4143 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/7f5a7196-2bfe-4fc7-820f-c6d17bb24f29-host-slash\") pod \"ovnkube-node-fn8qb\" (UID: \"7f5a7196-2bfe-4fc7-820f-c6d17bb24f29\") " pod="openshift-ovn-kubernetes/ovnkube-node-fn8qb" Mar 13 12:36:29.521990 master-0 kubenswrapper[4143]: I0313 12:36:29.521908 4143 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/7f5a7196-2bfe-4fc7-820f-c6d17bb24f29-var-lib-openvswitch\") pod \"ovnkube-node-fn8qb\" (UID: 
\"7f5a7196-2bfe-4fc7-820f-c6d17bb24f29\") " pod="openshift-ovn-kubernetes/ovnkube-node-fn8qb" Mar 13 12:36:29.521990 master-0 kubenswrapper[4143]: I0313 12:36:29.521942 4143 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/7f5a7196-2bfe-4fc7-820f-c6d17bb24f29-run-systemd\") pod \"ovnkube-node-fn8qb\" (UID: \"7f5a7196-2bfe-4fc7-820f-c6d17bb24f29\") " pod="openshift-ovn-kubernetes/ovnkube-node-fn8qb" Mar 13 12:36:29.521990 master-0 kubenswrapper[4143]: I0313 12:36:29.521968 4143 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/7f5a7196-2bfe-4fc7-820f-c6d17bb24f29-node-log\") pod \"ovnkube-node-fn8qb\" (UID: \"7f5a7196-2bfe-4fc7-820f-c6d17bb24f29\") " pod="openshift-ovn-kubernetes/ovnkube-node-fn8qb" Mar 13 12:36:29.521990 master-0 kubenswrapper[4143]: I0313 12:36:29.521991 4143 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/7f5a7196-2bfe-4fc7-820f-c6d17bb24f29-ovnkube-script-lib\") pod \"ovnkube-node-fn8qb\" (UID: \"7f5a7196-2bfe-4fc7-820f-c6d17bb24f29\") " pod="openshift-ovn-kubernetes/ovnkube-node-fn8qb" Mar 13 12:36:29.522334 master-0 kubenswrapper[4143]: I0313 12:36:29.522012 4143 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/7f5a7196-2bfe-4fc7-820f-c6d17bb24f29-host-run-netns\") pod \"ovnkube-node-fn8qb\" (UID: \"7f5a7196-2bfe-4fc7-820f-c6d17bb24f29\") " pod="openshift-ovn-kubernetes/ovnkube-node-fn8qb" Mar 13 12:36:29.522334 master-0 kubenswrapper[4143]: I0313 12:36:29.522034 4143 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/7f5a7196-2bfe-4fc7-820f-c6d17bb24f29-log-socket\") pod \"ovnkube-node-fn8qb\" (UID: 
\"7f5a7196-2bfe-4fc7-820f-c6d17bb24f29\") " pod="openshift-ovn-kubernetes/ovnkube-node-fn8qb" Mar 13 12:36:29.522334 master-0 kubenswrapper[4143]: I0313 12:36:29.522065 4143 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/7f5a7196-2bfe-4fc7-820f-c6d17bb24f29-host-cni-bin\") pod \"ovnkube-node-fn8qb\" (UID: \"7f5a7196-2bfe-4fc7-820f-c6d17bb24f29\") " pod="openshift-ovn-kubernetes/ovnkube-node-fn8qb" Mar 13 12:36:29.522334 master-0 kubenswrapper[4143]: I0313 12:36:29.522097 4143 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/7f5a7196-2bfe-4fc7-820f-c6d17bb24f29-host-run-ovn-kubernetes\") pod \"ovnkube-node-fn8qb\" (UID: \"7f5a7196-2bfe-4fc7-820f-c6d17bb24f29\") " pod="openshift-ovn-kubernetes/ovnkube-node-fn8qb" Mar 13 12:36:29.522334 master-0 kubenswrapper[4143]: I0313 12:36:29.522121 4143 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/7f5a7196-2bfe-4fc7-820f-c6d17bb24f29-run-openvswitch\") pod \"ovnkube-node-fn8qb\" (UID: \"7f5a7196-2bfe-4fc7-820f-c6d17bb24f29\") " pod="openshift-ovn-kubernetes/ovnkube-node-fn8qb" Mar 13 12:36:29.522334 master-0 kubenswrapper[4143]: I0313 12:36:29.522181 4143 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/7f5a7196-2bfe-4fc7-820f-c6d17bb24f29-systemd-units\") pod \"ovnkube-node-fn8qb\" (UID: \"7f5a7196-2bfe-4fc7-820f-c6d17bb24f29\") " pod="openshift-ovn-kubernetes/ovnkube-node-fn8qb" Mar 13 12:36:29.522334 master-0 kubenswrapper[4143]: I0313 12:36:29.522206 4143 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/7f5a7196-2bfe-4fc7-820f-c6d17bb24f29-etc-openvswitch\") pod \"ovnkube-node-fn8qb\" (UID: 
\"7f5a7196-2bfe-4fc7-820f-c6d17bb24f29\") " pod="openshift-ovn-kubernetes/ovnkube-node-fn8qb" Mar 13 12:36:29.522334 master-0 kubenswrapper[4143]: I0313 12:36:29.522238 4143 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/7f5a7196-2bfe-4fc7-820f-c6d17bb24f29-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-fn8qb\" (UID: \"7f5a7196-2bfe-4fc7-820f-c6d17bb24f29\") " pod="openshift-ovn-kubernetes/ovnkube-node-fn8qb" Mar 13 12:36:29.522334 master-0 kubenswrapper[4143]: I0313 12:36:29.522265 4143 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2bz6x\" (UniqueName: \"kubernetes.io/projected/7f5a7196-2bfe-4fc7-820f-c6d17bb24f29-kube-api-access-2bz6x\") pod \"ovnkube-node-fn8qb\" (UID: \"7f5a7196-2bfe-4fc7-820f-c6d17bb24f29\") " pod="openshift-ovn-kubernetes/ovnkube-node-fn8qb" Mar 13 12:36:29.522334 master-0 kubenswrapper[4143]: I0313 12:36:29.522287 4143 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/7f5a7196-2bfe-4fc7-820f-c6d17bb24f29-host-kubelet\") pod \"ovnkube-node-fn8qb\" (UID: \"7f5a7196-2bfe-4fc7-820f-c6d17bb24f29\") " pod="openshift-ovn-kubernetes/ovnkube-node-fn8qb" Mar 13 12:36:29.522688 master-0 kubenswrapper[4143]: I0313 12:36:29.522378 4143 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/7f5a7196-2bfe-4fc7-820f-c6d17bb24f29-host-kubelet\") pod \"ovnkube-node-fn8qb\" (UID: \"7f5a7196-2bfe-4fc7-820f-c6d17bb24f29\") " pod="openshift-ovn-kubernetes/ovnkube-node-fn8qb" Mar 13 12:36:29.523719 master-0 kubenswrapper[4143]: I0313 12:36:29.523078 4143 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/7f5a7196-2bfe-4fc7-820f-c6d17bb24f29-ovnkube-config\") pod 
\"ovnkube-node-fn8qb\" (UID: \"7f5a7196-2bfe-4fc7-820f-c6d17bb24f29\") " pod="openshift-ovn-kubernetes/ovnkube-node-fn8qb" Mar 13 12:36:29.523719 master-0 kubenswrapper[4143]: I0313 12:36:29.523133 4143 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/7f5a7196-2bfe-4fc7-820f-c6d17bb24f29-host-run-netns\") pod \"ovnkube-node-fn8qb\" (UID: \"7f5a7196-2bfe-4fc7-820f-c6d17bb24f29\") " pod="openshift-ovn-kubernetes/ovnkube-node-fn8qb" Mar 13 12:36:29.523719 master-0 kubenswrapper[4143]: I0313 12:36:29.523198 4143 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/7f5a7196-2bfe-4fc7-820f-c6d17bb24f29-log-socket\") pod \"ovnkube-node-fn8qb\" (UID: \"7f5a7196-2bfe-4fc7-820f-c6d17bb24f29\") " pod="openshift-ovn-kubernetes/ovnkube-node-fn8qb" Mar 13 12:36:29.523719 master-0 kubenswrapper[4143]: I0313 12:36:29.523231 4143 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/7f5a7196-2bfe-4fc7-820f-c6d17bb24f29-host-cni-bin\") pod \"ovnkube-node-fn8qb\" (UID: \"7f5a7196-2bfe-4fc7-820f-c6d17bb24f29\") " pod="openshift-ovn-kubernetes/ovnkube-node-fn8qb" Mar 13 12:36:29.523719 master-0 kubenswrapper[4143]: I0313 12:36:29.523262 4143 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/7f5a7196-2bfe-4fc7-820f-c6d17bb24f29-host-run-ovn-kubernetes\") pod \"ovnkube-node-fn8qb\" (UID: \"7f5a7196-2bfe-4fc7-820f-c6d17bb24f29\") " pod="openshift-ovn-kubernetes/ovnkube-node-fn8qb" Mar 13 12:36:29.523719 master-0 kubenswrapper[4143]: I0313 12:36:29.523296 4143 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/7f5a7196-2bfe-4fc7-820f-c6d17bb24f29-run-openvswitch\") pod \"ovnkube-node-fn8qb\" (UID: 
\"7f5a7196-2bfe-4fc7-820f-c6d17bb24f29\") " pod="openshift-ovn-kubernetes/ovnkube-node-fn8qb" Mar 13 12:36:29.523719 master-0 kubenswrapper[4143]: I0313 12:36:29.523326 4143 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/7f5a7196-2bfe-4fc7-820f-c6d17bb24f29-systemd-units\") pod \"ovnkube-node-fn8qb\" (UID: \"7f5a7196-2bfe-4fc7-820f-c6d17bb24f29\") " pod="openshift-ovn-kubernetes/ovnkube-node-fn8qb" Mar 13 12:36:29.523719 master-0 kubenswrapper[4143]: I0313 12:36:29.523356 4143 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/7f5a7196-2bfe-4fc7-820f-c6d17bb24f29-etc-openvswitch\") pod \"ovnkube-node-fn8qb\" (UID: \"7f5a7196-2bfe-4fc7-820f-c6d17bb24f29\") " pod="openshift-ovn-kubernetes/ovnkube-node-fn8qb" Mar 13 12:36:29.523719 master-0 kubenswrapper[4143]: I0313 12:36:29.523390 4143 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/7f5a7196-2bfe-4fc7-820f-c6d17bb24f29-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-fn8qb\" (UID: \"7f5a7196-2bfe-4fc7-820f-c6d17bb24f29\") " pod="openshift-ovn-kubernetes/ovnkube-node-fn8qb" Mar 13 12:36:29.523719 master-0 kubenswrapper[4143]: I0313 12:36:29.523616 4143 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/7f5a7196-2bfe-4fc7-820f-c6d17bb24f29-ovnkube-script-lib\") pod \"ovnkube-node-fn8qb\" (UID: \"7f5a7196-2bfe-4fc7-820f-c6d17bb24f29\") " pod="openshift-ovn-kubernetes/ovnkube-node-fn8qb" Mar 13 12:36:29.523719 master-0 kubenswrapper[4143]: I0313 12:36:29.523687 4143 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/7f5a7196-2bfe-4fc7-820f-c6d17bb24f29-var-lib-openvswitch\") pod 
\"ovnkube-node-fn8qb\" (UID: \"7f5a7196-2bfe-4fc7-820f-c6d17bb24f29\") " pod="openshift-ovn-kubernetes/ovnkube-node-fn8qb" Mar 13 12:36:29.524131 master-0 kubenswrapper[4143]: I0313 12:36:29.523739 4143 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/7f5a7196-2bfe-4fc7-820f-c6d17bb24f29-run-ovn\") pod \"ovnkube-node-fn8qb\" (UID: \"7f5a7196-2bfe-4fc7-820f-c6d17bb24f29\") " pod="openshift-ovn-kubernetes/ovnkube-node-fn8qb" Mar 13 12:36:29.524131 master-0 kubenswrapper[4143]: I0313 12:36:29.523816 4143 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/7f5a7196-2bfe-4fc7-820f-c6d17bb24f29-host-slash\") pod \"ovnkube-node-fn8qb\" (UID: \"7f5a7196-2bfe-4fc7-820f-c6d17bb24f29\") " pod="openshift-ovn-kubernetes/ovnkube-node-fn8qb" Mar 13 12:36:29.524131 master-0 kubenswrapper[4143]: I0313 12:36:29.523830 4143 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/7f5a7196-2bfe-4fc7-820f-c6d17bb24f29-run-systemd\") pod \"ovnkube-node-fn8qb\" (UID: \"7f5a7196-2bfe-4fc7-820f-c6d17bb24f29\") " pod="openshift-ovn-kubernetes/ovnkube-node-fn8qb" Mar 13 12:36:29.524131 master-0 kubenswrapper[4143]: I0313 12:36:29.524066 4143 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/7f5a7196-2bfe-4fc7-820f-c6d17bb24f29-host-cni-netd\") pod \"ovnkube-node-fn8qb\" (UID: \"7f5a7196-2bfe-4fc7-820f-c6d17bb24f29\") " pod="openshift-ovn-kubernetes/ovnkube-node-fn8qb" Mar 13 12:36:29.524414 master-0 kubenswrapper[4143]: I0313 12:36:29.524332 4143 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/7f5a7196-2bfe-4fc7-820f-c6d17bb24f29-env-overrides\") pod \"ovnkube-node-fn8qb\" (UID: \"7f5a7196-2bfe-4fc7-820f-c6d17bb24f29\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-fn8qb" Mar 13 12:36:29.524414 master-0 kubenswrapper[4143]: I0313 12:36:29.524375 4143 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/7f5a7196-2bfe-4fc7-820f-c6d17bb24f29-node-log\") pod \"ovnkube-node-fn8qb\" (UID: \"7f5a7196-2bfe-4fc7-820f-c6d17bb24f29\") " pod="openshift-ovn-kubernetes/ovnkube-node-fn8qb" Mar 13 12:36:29.530686 master-0 kubenswrapper[4143]: I0313 12:36:29.530633 4143 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/7f5a7196-2bfe-4fc7-820f-c6d17bb24f29-ovn-node-metrics-cert\") pod \"ovnkube-node-fn8qb\" (UID: \"7f5a7196-2bfe-4fc7-820f-c6d17bb24f29\") " pod="openshift-ovn-kubernetes/ovnkube-node-fn8qb" Mar 13 12:36:29.541814 master-0 kubenswrapper[4143]: I0313 12:36:29.541759 4143 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2bz6x\" (UniqueName: \"kubernetes.io/projected/7f5a7196-2bfe-4fc7-820f-c6d17bb24f29-kube-api-access-2bz6x\") pod \"ovnkube-node-fn8qb\" (UID: \"7f5a7196-2bfe-4fc7-820f-c6d17bb24f29\") " pod="openshift-ovn-kubernetes/ovnkube-node-fn8qb" Mar 13 12:36:29.841071 master-0 kubenswrapper[4143]: I0313 12:36:29.841022 4143 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-fn8qb" Mar 13 12:36:30.083375 master-0 kubenswrapper[4143]: I0313 12:36:30.083330 4143 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-r9lmb" Mar 13 12:36:30.083683 master-0 kubenswrapper[4143]: E0313 12:36:30.083631 4143 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-r9lmb" podUID="29b6aa89-0416-4595-9deb-10b290521d86" Mar 13 12:36:30.094905 master-0 kubenswrapper[4143]: I0313 12:36:30.094798 4143 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["kube-system/bootstrap-kube-controller-manager-master-0"] Mar 13 12:36:30.899775 master-0 kubenswrapper[4143]: W0313 12:36:30.899729 4143 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5ae41cff_0949_47f8_aae9_ae133191476d.slice/crio-f4bdadfb01202ddc6464892800ff63c99a7021c118d9d6dada777648c97106ba WatchSource:0}: Error finding container f4bdadfb01202ddc6464892800ff63c99a7021c118d9d6dada777648c97106ba: Status 404 returned error can't find the container with id f4bdadfb01202ddc6464892800ff63c99a7021c118d9d6dada777648c97106ba Mar 13 12:36:31.570522 master-0 kubenswrapper[4143]: I0313 12:36:31.570467 4143 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-66b55d57d-5cww5" event={"ID":"5ae41cff-0949-47f8-aae9-ae133191476d","Type":"ContainerStarted","Data":"83c9fffcd603bad026f3d48e0bef33373a522d17a699cbdc527902684424676e"} Mar 13 12:36:31.570522 master-0 kubenswrapper[4143]: I0313 12:36:31.570508 4143 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-66b55d57d-5cww5" event={"ID":"5ae41cff-0949-47f8-aae9-ae133191476d","Type":"ContainerStarted","Data":"f4bdadfb01202ddc6464892800ff63c99a7021c118d9d6dada777648c97106ba"} Mar 13 12:36:31.571230 master-0 kubenswrapper[4143]: I0313 12:36:31.571203 4143 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fn8qb" event={"ID":"7f5a7196-2bfe-4fc7-820f-c6d17bb24f29","Type":"ContainerStarted","Data":"7bb07b8ac3a9143900e44c8646ee6fb8d832847a79c050ce5b93154ab39c7aad"} Mar 13 12:36:31.572288 master-0 kubenswrapper[4143]: I0313 12:36:31.572258 4143 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-bnn7n" event={"ID":"ce3a655a-0684-4bc5-ac36-5878507537c7","Type":"ContainerStarted","Data":"45b310d7cacbe967fc9fde12bb79f74c7adc5ee322eb24c50e127fe93ab37cc3"} Mar 13 12:36:31.583193 master-0 kubenswrapper[4143]: I0313 12:36:31.583136 4143 generic.go:334] "Generic (PLEG): container finished" podID="152689b1-5875-4a9a-bb25-bee858523168" containerID="134471a7b38bb354ac04a0f22e311d7bea5264435a237eafabc1ded333b762d2" exitCode=0 Mar 13 12:36:31.583376 master-0 kubenswrapper[4143]: I0313 12:36:31.583201 4143 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-78p2k" event={"ID":"152689b1-5875-4a9a-bb25-bee858523168","Type":"ContainerDied","Data":"134471a7b38bb354ac04a0f22e311d7bea5264435a237eafabc1ded333b762d2"} Mar 13 12:36:31.604229 master-0 kubenswrapper[4143]: I0313 12:36:31.604121 4143 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/bootstrap-kube-controller-manager-master-0" podStartSLOduration=1.604100242 podStartE2EDuration="1.604100242s" podCreationTimestamp="2026-03-13 12:36:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 12:36:31.588805971 +0000 UTC m=+97.335950295" watchObservedRunningTime="2026-03-13 12:36:31.604100242 +0000 UTC m=+97.351244566" Mar 13 12:36:31.628240 master-0 kubenswrapper[4143]: I0313 12:36:31.626736 4143 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-bnn7n" podStartSLOduration=1.624198411 podStartE2EDuration="15.626719049s" podCreationTimestamp="2026-03-13 12:36:16 +0000 UTC" firstStartedPulling="2026-03-13 12:36:16.969167071 +0000 UTC m=+82.716311405" lastFinishedPulling="2026-03-13 12:36:30.971687719 +0000 UTC m=+96.718832043" observedRunningTime="2026-03-13 12:36:31.603794187 +0000 UTC m=+97.350938521" 
watchObservedRunningTime="2026-03-13 12:36:31.626719049 +0000 UTC m=+97.373863373" Mar 13 12:36:31.845704 master-0 kubenswrapper[4143]: I0313 12:36:31.845490 4143 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f39d7f76-0075-44c3-9101-eb2607cb176a-serving-cert\") pod \"cluster-version-operator-745944c6b7-mbjxt\" (UID: \"f39d7f76-0075-44c3-9101-eb2607cb176a\") " pod="openshift-cluster-version/cluster-version-operator-745944c6b7-mbjxt" Mar 13 12:36:31.845704 master-0 kubenswrapper[4143]: E0313 12:36:31.845675 4143 secret.go:189] Couldn't get secret openshift-cluster-version/cluster-version-operator-serving-cert: secret "cluster-version-operator-serving-cert" not found Mar 13 12:36:31.845928 master-0 kubenswrapper[4143]: E0313 12:36:31.845758 4143 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f39d7f76-0075-44c3-9101-eb2607cb176a-serving-cert podName:f39d7f76-0075-44c3-9101-eb2607cb176a nodeName:}" failed. No retries permitted until 2026-03-13 12:37:03.845738743 +0000 UTC m=+129.592883067 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/f39d7f76-0075-44c3-9101-eb2607cb176a-serving-cert") pod "cluster-version-operator-745944c6b7-mbjxt" (UID: "f39d7f76-0075-44c3-9101-eb2607cb176a") : secret "cluster-version-operator-serving-cert" not found Mar 13 12:36:32.086881 master-0 kubenswrapper[4143]: I0313 12:36:32.086819 4143 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-r9lmb" Mar 13 12:36:32.087716 master-0 kubenswrapper[4143]: E0313 12:36:32.087291 4143 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-r9lmb" podUID="29b6aa89-0416-4595-9deb-10b290521d86" Mar 13 12:36:32.200437 master-0 kubenswrapper[4143]: I0313 12:36:32.200281 4143 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-network-diagnostics/network-check-target-pnwsc"] Mar 13 12:36:32.200617 master-0 kubenswrapper[4143]: I0313 12:36:32.200582 4143 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-pnwsc" Mar 13 12:36:32.200675 master-0 kubenswrapper[4143]: E0313 12:36:32.200645 4143 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-pnwsc" podUID="269aedfd-4274-4998-bd0d-603b67257666" Mar 13 12:36:32.254645 master-0 kubenswrapper[4143]: I0313 12:36:32.254578 4143 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-btf8q\" (UniqueName: \"kubernetes.io/projected/269aedfd-4274-4998-bd0d-603b67257666-kube-api-access-btf8q\") pod \"network-check-target-pnwsc\" (UID: \"269aedfd-4274-4998-bd0d-603b67257666\") " pod="openshift-network-diagnostics/network-check-target-pnwsc" Mar 13 12:36:32.355727 master-0 kubenswrapper[4143]: I0313 12:36:32.355674 4143 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-btf8q\" (UniqueName: \"kubernetes.io/projected/269aedfd-4274-4998-bd0d-603b67257666-kube-api-access-btf8q\") pod \"network-check-target-pnwsc\" (UID: \"269aedfd-4274-4998-bd0d-603b67257666\") " pod="openshift-network-diagnostics/network-check-target-pnwsc" Mar 13 12:36:32.370922 master-0 kubenswrapper[4143]: E0313 12:36:32.370885 4143 
projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Mar 13 12:36:32.370922 master-0 kubenswrapper[4143]: E0313 12:36:32.370914 4143 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Mar 13 12:36:32.370922 master-0 kubenswrapper[4143]: E0313 12:36:32.370928 4143 projected.go:194] Error preparing data for projected volume kube-api-access-btf8q for pod openshift-network-diagnostics/network-check-target-pnwsc: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 13 12:36:32.371171 master-0 kubenswrapper[4143]: E0313 12:36:32.370999 4143 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/269aedfd-4274-4998-bd0d-603b67257666-kube-api-access-btf8q podName:269aedfd-4274-4998-bd0d-603b67257666 nodeName:}" failed. No retries permitted until 2026-03-13 12:36:32.870981364 +0000 UTC m=+98.618125688 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-btf8q" (UniqueName: "kubernetes.io/projected/269aedfd-4274-4998-bd0d-603b67257666-kube-api-access-btf8q") pod "network-check-target-pnwsc" (UID: "269aedfd-4274-4998-bd0d-603b67257666") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 13 12:36:32.960729 master-0 kubenswrapper[4143]: I0313 12:36:32.960681 4143 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-btf8q\" (UniqueName: \"kubernetes.io/projected/269aedfd-4274-4998-bd0d-603b67257666-kube-api-access-btf8q\") pod \"network-check-target-pnwsc\" (UID: \"269aedfd-4274-4998-bd0d-603b67257666\") " pod="openshift-network-diagnostics/network-check-target-pnwsc" Mar 13 12:36:32.960911 master-0 kubenswrapper[4143]: E0313 12:36:32.960821 4143 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Mar 13 12:36:32.960911 master-0 kubenswrapper[4143]: E0313 12:36:32.960844 4143 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Mar 13 12:36:32.960911 master-0 kubenswrapper[4143]: E0313 12:36:32.960854 4143 projected.go:194] Error preparing data for projected volume kube-api-access-btf8q for pod openshift-network-diagnostics/network-check-target-pnwsc: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 13 12:36:32.960911 master-0 kubenswrapper[4143]: E0313 12:36:32.960909 4143 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/269aedfd-4274-4998-bd0d-603b67257666-kube-api-access-btf8q podName:269aedfd-4274-4998-bd0d-603b67257666 nodeName:}" 
failed. No retries permitted until 2026-03-13 12:36:33.96088956 +0000 UTC m=+99.708033884 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-btf8q" (UniqueName: "kubernetes.io/projected/269aedfd-4274-4998-bd0d-603b67257666-kube-api-access-btf8q") pod "network-check-target-pnwsc" (UID: "269aedfd-4274-4998-bd0d-603b67257666") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 13 12:36:33.464164 master-0 kubenswrapper[4143]: I0313 12:36:33.464089 4143 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/29b6aa89-0416-4595-9deb-10b290521d86-metrics-certs\") pod \"network-metrics-daemon-r9lmb\" (UID: \"29b6aa89-0416-4595-9deb-10b290521d86\") " pod="openshift-multus/network-metrics-daemon-r9lmb" Mar 13 12:36:33.464731 master-0 kubenswrapper[4143]: E0313 12:36:33.464325 4143 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Mar 13 12:36:33.464731 master-0 kubenswrapper[4143]: E0313 12:36:33.464426 4143 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/29b6aa89-0416-4595-9deb-10b290521d86-metrics-certs podName:29b6aa89-0416-4595-9deb-10b290521d86 nodeName:}" failed. No retries permitted until 2026-03-13 12:36:49.464402899 +0000 UTC m=+115.211547293 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/29b6aa89-0416-4595-9deb-10b290521d86-metrics-certs") pod "network-metrics-daemon-r9lmb" (UID: "29b6aa89-0416-4595-9deb-10b290521d86") : object "openshift-multus"/"metrics-daemon-secret" not registered
Mar 13 12:36:33.968081 master-0 kubenswrapper[4143]: I0313 12:36:33.967940 4143 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-btf8q\" (UniqueName: \"kubernetes.io/projected/269aedfd-4274-4998-bd0d-603b67257666-kube-api-access-btf8q\") pod \"network-check-target-pnwsc\" (UID: \"269aedfd-4274-4998-bd0d-603b67257666\") " pod="openshift-network-diagnostics/network-check-target-pnwsc"
Mar 13 12:36:33.968335 master-0 kubenswrapper[4143]: E0313 12:36:33.968140 4143 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Mar 13 12:36:33.968335 master-0 kubenswrapper[4143]: E0313 12:36:33.968174 4143 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Mar 13 12:36:33.968335 master-0 kubenswrapper[4143]: E0313 12:36:33.968185 4143 projected.go:194] Error preparing data for projected volume kube-api-access-btf8q for pod openshift-network-diagnostics/network-check-target-pnwsc: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Mar 13 12:36:33.968335 master-0 kubenswrapper[4143]: E0313 12:36:33.968270 4143 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/269aedfd-4274-4998-bd0d-603b67257666-kube-api-access-btf8q podName:269aedfd-4274-4998-bd0d-603b67257666 nodeName:}" failed. No retries permitted until 2026-03-13 12:36:35.968224742 +0000 UTC m=+101.715369066 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-btf8q" (UniqueName: "kubernetes.io/projected/269aedfd-4274-4998-bd0d-603b67257666-kube-api-access-btf8q") pod "network-check-target-pnwsc" (UID: "269aedfd-4274-4998-bd0d-603b67257666") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Mar 13 12:36:34.081856 master-0 kubenswrapper[4143]: I0313 12:36:34.081762 4143 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-r9lmb"
Mar 13 12:36:34.082071 master-0 kubenswrapper[4143]: E0313 12:36:34.081906 4143 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-r9lmb" podUID="29b6aa89-0416-4595-9deb-10b290521d86"
Mar 13 12:36:34.082071 master-0 kubenswrapper[4143]: I0313 12:36:34.082005 4143 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-pnwsc"
Mar 13 12:36:34.082259 master-0 kubenswrapper[4143]: E0313 12:36:34.082231 4143 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-pnwsc" podUID="269aedfd-4274-4998-bd0d-603b67257666"
Mar 13 12:36:34.594851 master-0 kubenswrapper[4143]: I0313 12:36:34.594714 4143 generic.go:334] "Generic (PLEG): container finished" podID="152689b1-5875-4a9a-bb25-bee858523168" containerID="ec83ba0b787947b6a285aac754b05fb294210ab326a2dc10a91b47f74ad8a542" exitCode=0
Mar 13 12:36:34.594851 master-0 kubenswrapper[4143]: I0313 12:36:34.594759 4143 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-78p2k" event={"ID":"152689b1-5875-4a9a-bb25-bee858523168","Type":"ContainerDied","Data":"ec83ba0b787947b6a285aac754b05fb294210ab326a2dc10a91b47f74ad8a542"}
Mar 13 12:36:34.822213 master-0 kubenswrapper[4143]: I0313 12:36:34.819126 4143 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-network-node-identity/network-node-identity-qg8q5"]
Mar 13 12:36:34.822213 master-0 kubenswrapper[4143]: I0313 12:36:34.820700 4143 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-qg8q5"
Mar 13 12:36:34.824190 master-0 kubenswrapper[4143]: I0313 12:36:34.824099 4143 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides"
Mar 13 12:36:34.824190 master-0 kubenswrapper[4143]: I0313 12:36:34.824140 4143 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert"
Mar 13 12:36:34.824339 master-0 kubenswrapper[4143]: I0313 12:36:34.824259 4143 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt"
Mar 13 12:36:34.824339 master-0 kubenswrapper[4143]: I0313 12:36:34.824108 4143 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm"
Mar 13 12:36:34.825687 master-0 kubenswrapper[4143]: I0313 12:36:34.825653 4143 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt"
Mar 13 12:36:34.878712 master-0 kubenswrapper[4143]: I0313 12:36:34.878599 4143 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/1f43b4e7-5cd1-46d2-a02e-0d846b2e5182-webhook-cert\") pod \"network-node-identity-qg8q5\" (UID: \"1f43b4e7-5cd1-46d2-a02e-0d846b2e5182\") " pod="openshift-network-node-identity/network-node-identity-qg8q5"
Mar 13 12:36:34.878712 master-0 kubenswrapper[4143]: I0313 12:36:34.878653 4143 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-brzd4\" (UniqueName: \"kubernetes.io/projected/1f43b4e7-5cd1-46d2-a02e-0d846b2e5182-kube-api-access-brzd4\") pod \"network-node-identity-qg8q5\" (UID: \"1f43b4e7-5cd1-46d2-a02e-0d846b2e5182\") " pod="openshift-network-node-identity/network-node-identity-qg8q5"
Mar 13 12:36:34.878712 master-0 kubenswrapper[4143]: I0313 12:36:34.878713 4143 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/1f43b4e7-5cd1-46d2-a02e-0d846b2e5182-env-overrides\") pod \"network-node-identity-qg8q5\" (UID: \"1f43b4e7-5cd1-46d2-a02e-0d846b2e5182\") " pod="openshift-network-node-identity/network-node-identity-qg8q5"
Mar 13 12:36:34.878939 master-0 kubenswrapper[4143]: I0313 12:36:34.878745 4143 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/1f43b4e7-5cd1-46d2-a02e-0d846b2e5182-ovnkube-identity-cm\") pod \"network-node-identity-qg8q5\" (UID: \"1f43b4e7-5cd1-46d2-a02e-0d846b2e5182\") " pod="openshift-network-node-identity/network-node-identity-qg8q5"
Mar 13 12:36:34.979476 master-0 kubenswrapper[4143]: I0313 12:36:34.979426 4143 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/1f43b4e7-5cd1-46d2-a02e-0d846b2e5182-env-overrides\") pod \"network-node-identity-qg8q5\" (UID: \"1f43b4e7-5cd1-46d2-a02e-0d846b2e5182\") " pod="openshift-network-node-identity/network-node-identity-qg8q5"
Mar 13 12:36:34.979476 master-0 kubenswrapper[4143]: I0313 12:36:34.979482 4143 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/1f43b4e7-5cd1-46d2-a02e-0d846b2e5182-ovnkube-identity-cm\") pod \"network-node-identity-qg8q5\" (UID: \"1f43b4e7-5cd1-46d2-a02e-0d846b2e5182\") " pod="openshift-network-node-identity/network-node-identity-qg8q5"
Mar 13 12:36:34.979703 master-0 kubenswrapper[4143]: I0313 12:36:34.979515 4143 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/1f43b4e7-5cd1-46d2-a02e-0d846b2e5182-webhook-cert\") pod \"network-node-identity-qg8q5\" (UID: \"1f43b4e7-5cd1-46d2-a02e-0d846b2e5182\") " pod="openshift-network-node-identity/network-node-identity-qg8q5"
Mar 13 12:36:34.979703 master-0 kubenswrapper[4143]: I0313 12:36:34.979538 4143 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-brzd4\" (UniqueName: \"kubernetes.io/projected/1f43b4e7-5cd1-46d2-a02e-0d846b2e5182-kube-api-access-brzd4\") pod \"network-node-identity-qg8q5\" (UID: \"1f43b4e7-5cd1-46d2-a02e-0d846b2e5182\") " pod="openshift-network-node-identity/network-node-identity-qg8q5"
Mar 13 12:36:34.980432 master-0 kubenswrapper[4143]: I0313 12:36:34.980412 4143 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/1f43b4e7-5cd1-46d2-a02e-0d846b2e5182-env-overrides\") pod \"network-node-identity-qg8q5\" (UID: \"1f43b4e7-5cd1-46d2-a02e-0d846b2e5182\") " pod="openshift-network-node-identity/network-node-identity-qg8q5"
Mar 13 12:36:34.981217 master-0 kubenswrapper[4143]: I0313 12:36:34.981105 4143 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/1f43b4e7-5cd1-46d2-a02e-0d846b2e5182-ovnkube-identity-cm\") pod \"network-node-identity-qg8q5\" (UID: \"1f43b4e7-5cd1-46d2-a02e-0d846b2e5182\") " pod="openshift-network-node-identity/network-node-identity-qg8q5"
Mar 13 12:36:34.981329 master-0 kubenswrapper[4143]: E0313 12:36:34.981229 4143 secret.go:189] Couldn't get secret openshift-network-node-identity/network-node-identity-cert: secret "network-node-identity-cert" not found
Mar 13 12:36:34.981329 master-0 kubenswrapper[4143]: E0313 12:36:34.981274 4143 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1f43b4e7-5cd1-46d2-a02e-0d846b2e5182-webhook-cert podName:1f43b4e7-5cd1-46d2-a02e-0d846b2e5182 nodeName:}" failed. No retries permitted until 2026-03-13 12:36:35.481258633 +0000 UTC m=+101.228402957 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "webhook-cert" (UniqueName: "kubernetes.io/secret/1f43b4e7-5cd1-46d2-a02e-0d846b2e5182-webhook-cert") pod "network-node-identity-qg8q5" (UID: "1f43b4e7-5cd1-46d2-a02e-0d846b2e5182") : secret "network-node-identity-cert" not found
Mar 13 12:36:35.269086 master-0 kubenswrapper[4143]: I0313 12:36:35.268795 4143 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-brzd4\" (UniqueName: \"kubernetes.io/projected/1f43b4e7-5cd1-46d2-a02e-0d846b2e5182-kube-api-access-brzd4\") pod \"network-node-identity-qg8q5\" (UID: \"1f43b4e7-5cd1-46d2-a02e-0d846b2e5182\") " pod="openshift-network-node-identity/network-node-identity-qg8q5"
Mar 13 12:36:35.484248 master-0 kubenswrapper[4143]: I0313 12:36:35.484183 4143 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/1f43b4e7-5cd1-46d2-a02e-0d846b2e5182-webhook-cert\") pod \"network-node-identity-qg8q5\" (UID: \"1f43b4e7-5cd1-46d2-a02e-0d846b2e5182\") " pod="openshift-network-node-identity/network-node-identity-qg8q5"
Mar 13 12:36:35.488267 master-0 kubenswrapper[4143]: I0313 12:36:35.488224 4143 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/1f43b4e7-5cd1-46d2-a02e-0d846b2e5182-webhook-cert\") pod \"network-node-identity-qg8q5\" (UID: \"1f43b4e7-5cd1-46d2-a02e-0d846b2e5182\") " pod="openshift-network-node-identity/network-node-identity-qg8q5"
Mar 13 12:36:35.732771 master-0 kubenswrapper[4143]: I0313 12:36:35.732736 4143 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-qg8q5"
Mar 13 12:36:35.744823 master-0 kubenswrapper[4143]: W0313 12:36:35.744787 4143 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1f43b4e7_5cd1_46d2_a02e_0d846b2e5182.slice/crio-e2d9f98170b9be57120af2a3d4ad3e87888e64c3d58e7180a2211b7ab3fd61c6 WatchSource:0}: Error finding container e2d9f98170b9be57120af2a3d4ad3e87888e64c3d58e7180a2211b7ab3fd61c6: Status 404 returned error can't find the container with id e2d9f98170b9be57120af2a3d4ad3e87888e64c3d58e7180a2211b7ab3fd61c6
Mar 13 12:36:35.989743 master-0 kubenswrapper[4143]: I0313 12:36:35.989623 4143 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-btf8q\" (UniqueName: \"kubernetes.io/projected/269aedfd-4274-4998-bd0d-603b67257666-kube-api-access-btf8q\") pod \"network-check-target-pnwsc\" (UID: \"269aedfd-4274-4998-bd0d-603b67257666\") " pod="openshift-network-diagnostics/network-check-target-pnwsc"
Mar 13 12:36:35.989925 master-0 kubenswrapper[4143]: E0313 12:36:35.989792 4143 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Mar 13 12:36:35.989925 master-0 kubenswrapper[4143]: E0313 12:36:35.989812 4143 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Mar 13 12:36:35.989925 master-0 kubenswrapper[4143]: E0313 12:36:35.989825 4143 projected.go:194] Error preparing data for projected volume kube-api-access-btf8q for pod openshift-network-diagnostics/network-check-target-pnwsc: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Mar 13 12:36:35.989925 master-0 kubenswrapper[4143]: E0313 12:36:35.989877 4143 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/269aedfd-4274-4998-bd0d-603b67257666-kube-api-access-btf8q podName:269aedfd-4274-4998-bd0d-603b67257666 nodeName:}" failed. No retries permitted until 2026-03-13 12:36:39.989858847 +0000 UTC m=+105.737003171 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-btf8q" (UniqueName: "kubernetes.io/projected/269aedfd-4274-4998-bd0d-603b67257666-kube-api-access-btf8q") pod "network-check-target-pnwsc" (UID: "269aedfd-4274-4998-bd0d-603b67257666") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Mar 13 12:36:36.082295 master-0 kubenswrapper[4143]: I0313 12:36:36.082246 4143 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-r9lmb"
Mar 13 12:36:36.082473 master-0 kubenswrapper[4143]: E0313 12:36:36.082336 4143 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-r9lmb" podUID="29b6aa89-0416-4595-9deb-10b290521d86"
Mar 13 12:36:36.082473 master-0 kubenswrapper[4143]: I0313 12:36:36.082254 4143 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-pnwsc"
Mar 13 12:36:36.082473 master-0 kubenswrapper[4143]: E0313 12:36:36.082393 4143 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-pnwsc" podUID="269aedfd-4274-4998-bd0d-603b67257666"
Mar 13 12:36:36.602258 master-0 kubenswrapper[4143]: I0313 12:36:36.602125 4143 generic.go:334] "Generic (PLEG): container finished" podID="152689b1-5875-4a9a-bb25-bee858523168" containerID="7c57d841a99a5e2cd1a42f48f3248a346104a0d155b92d640bd1a07ffd81b262" exitCode=0
Mar 13 12:36:36.602258 master-0 kubenswrapper[4143]: I0313 12:36:36.602199 4143 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-78p2k" event={"ID":"152689b1-5875-4a9a-bb25-bee858523168","Type":"ContainerDied","Data":"7c57d841a99a5e2cd1a42f48f3248a346104a0d155b92d640bd1a07ffd81b262"}
Mar 13 12:36:36.604944 master-0 kubenswrapper[4143]: I0313 12:36:36.604903 4143 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-qg8q5" event={"ID":"1f43b4e7-5cd1-46d2-a02e-0d846b2e5182","Type":"ContainerStarted","Data":"e2d9f98170b9be57120af2a3d4ad3e87888e64c3d58e7180a2211b7ab3fd61c6"}
Mar 13 12:36:38.082024 master-0 kubenswrapper[4143]: I0313 12:36:38.081908 4143 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-pnwsc"
Mar 13 12:36:38.082024 master-0 kubenswrapper[4143]: I0313 12:36:38.081937 4143 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-r9lmb"
Mar 13 12:36:38.082731 master-0 kubenswrapper[4143]: E0313 12:36:38.082052 4143 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-pnwsc" podUID="269aedfd-4274-4998-bd0d-603b67257666"
Mar 13 12:36:38.082731 master-0 kubenswrapper[4143]: E0313 12:36:38.082199 4143 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-r9lmb" podUID="29b6aa89-0416-4595-9deb-10b290521d86"
Mar 13 12:36:40.022179 master-0 kubenswrapper[4143]: I0313 12:36:40.021506 4143 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-btf8q\" (UniqueName: \"kubernetes.io/projected/269aedfd-4274-4998-bd0d-603b67257666-kube-api-access-btf8q\") pod \"network-check-target-pnwsc\" (UID: \"269aedfd-4274-4998-bd0d-603b67257666\") " pod="openshift-network-diagnostics/network-check-target-pnwsc"
Mar 13 12:36:40.022179 master-0 kubenswrapper[4143]: E0313 12:36:40.021693 4143 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Mar 13 12:36:40.022179 master-0 kubenswrapper[4143]: E0313 12:36:40.021722 4143 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Mar 13 12:36:40.022179 master-0 kubenswrapper[4143]: E0313 12:36:40.021741 4143 projected.go:194] Error preparing data for projected volume kube-api-access-btf8q for pod openshift-network-diagnostics/network-check-target-pnwsc: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Mar 13 12:36:40.022179 master-0 kubenswrapper[4143]: E0313 12:36:40.021795 4143 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/269aedfd-4274-4998-bd0d-603b67257666-kube-api-access-btf8q podName:269aedfd-4274-4998-bd0d-603b67257666 nodeName:}" failed. No retries permitted until 2026-03-13 12:36:48.021777453 +0000 UTC m=+113.768921787 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-btf8q" (UniqueName: "kubernetes.io/projected/269aedfd-4274-4998-bd0d-603b67257666-kube-api-access-btf8q") pod "network-check-target-pnwsc" (UID: "269aedfd-4274-4998-bd0d-603b67257666") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Mar 13 12:36:40.082227 master-0 kubenswrapper[4143]: I0313 12:36:40.081636 4143 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-r9lmb"
Mar 13 12:36:40.082227 master-0 kubenswrapper[4143]: E0313 12:36:40.081757 4143 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-r9lmb" podUID="29b6aa89-0416-4595-9deb-10b290521d86"
Mar 13 12:36:40.082227 master-0 kubenswrapper[4143]: I0313 12:36:40.082115 4143 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-pnwsc"
Mar 13 12:36:40.082227 master-0 kubenswrapper[4143]: E0313 12:36:40.082194 4143 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-pnwsc" podUID="269aedfd-4274-4998-bd0d-603b67257666"
Mar 13 12:36:42.081444 master-0 kubenswrapper[4143]: I0313 12:36:42.081372 4143 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-r9lmb"
Mar 13 12:36:42.081444 master-0 kubenswrapper[4143]: I0313 12:36:42.081418 4143 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-pnwsc"
Mar 13 12:36:42.081899 master-0 kubenswrapper[4143]: E0313 12:36:42.081513 4143 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-r9lmb" podUID="29b6aa89-0416-4595-9deb-10b290521d86"
Mar 13 12:36:42.081899 master-0 kubenswrapper[4143]: E0313 12:36:42.081641 4143 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-pnwsc" podUID="269aedfd-4274-4998-bd0d-603b67257666"
Mar 13 12:36:44.082221 master-0 kubenswrapper[4143]: I0313 12:36:44.082183 4143 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-r9lmb"
Mar 13 12:36:44.082221 master-0 kubenswrapper[4143]: I0313 12:36:44.082183 4143 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-pnwsc"
Mar 13 12:36:44.082742 master-0 kubenswrapper[4143]: E0313 12:36:44.082363 4143 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-r9lmb" podUID="29b6aa89-0416-4595-9deb-10b290521d86"
Mar 13 12:36:44.082742 master-0 kubenswrapper[4143]: E0313 12:36:44.082487 4143 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-pnwsc" podUID="269aedfd-4274-4998-bd0d-603b67257666"
Mar 13 12:36:45.839737 master-0 kubenswrapper[4143]: I0313 12:36:45.839692 4143 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"]
Mar 13 12:36:46.081641 master-0 kubenswrapper[4143]: I0313 12:36:46.081594 4143 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-r9lmb"
Mar 13 12:36:46.081856 master-0 kubenswrapper[4143]: I0313 12:36:46.081657 4143 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-pnwsc"
Mar 13 12:36:46.081856 master-0 kubenswrapper[4143]: E0313 12:36:46.081718 4143 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-r9lmb" podUID="29b6aa89-0416-4595-9deb-10b290521d86"
Mar 13 12:36:46.081856 master-0 kubenswrapper[4143]: E0313 12:36:46.081765 4143 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-pnwsc" podUID="269aedfd-4274-4998-bd0d-603b67257666"
Mar 13 12:36:48.081998 master-0 kubenswrapper[4143]: I0313 12:36:48.081797 4143 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-pnwsc"
Mar 13 12:36:48.081998 master-0 kubenswrapper[4143]: I0313 12:36:48.081810 4143 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-r9lmb"
Mar 13 12:36:48.081998 master-0 kubenswrapper[4143]: E0313 12:36:48.081917 4143 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-pnwsc" podUID="269aedfd-4274-4998-bd0d-603b67257666"
Mar 13 12:36:48.082564 master-0 kubenswrapper[4143]: E0313 12:36:48.082074 4143 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-r9lmb" podUID="29b6aa89-0416-4595-9deb-10b290521d86"
Mar 13 12:36:48.090075 master-0 kubenswrapper[4143]: I0313 12:36:48.090044 4143 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-btf8q\" (UniqueName: \"kubernetes.io/projected/269aedfd-4274-4998-bd0d-603b67257666-kube-api-access-btf8q\") pod \"network-check-target-pnwsc\" (UID: \"269aedfd-4274-4998-bd0d-603b67257666\") " pod="openshift-network-diagnostics/network-check-target-pnwsc"
Mar 13 12:36:48.090271 master-0 kubenswrapper[4143]: E0313 12:36:48.090246 4143 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Mar 13 12:36:48.090271 master-0 kubenswrapper[4143]: E0313 12:36:48.090271 4143 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Mar 13 12:36:48.090358 master-0 kubenswrapper[4143]: E0313 12:36:48.090282 4143 projected.go:194] Error preparing data for projected volume kube-api-access-btf8q for pod openshift-network-diagnostics/network-check-target-pnwsc: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Mar 13 12:36:48.090358 master-0 kubenswrapper[4143]: E0313 12:36:48.090350 4143 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/269aedfd-4274-4998-bd0d-603b67257666-kube-api-access-btf8q podName:269aedfd-4274-4998-bd0d-603b67257666 nodeName:}" failed. No retries permitted until 2026-03-13 12:37:04.090320695 +0000 UTC m=+129.837465019 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-btf8q" (UniqueName: "kubernetes.io/projected/269aedfd-4274-4998-bd0d-603b67257666-kube-api-access-btf8q") pod "network-check-target-pnwsc" (UID: "269aedfd-4274-4998-bd0d-603b67257666") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Mar 13 12:36:49.503796 master-0 kubenswrapper[4143]: I0313 12:36:49.503747 4143 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/29b6aa89-0416-4595-9deb-10b290521d86-metrics-certs\") pod \"network-metrics-daemon-r9lmb\" (UID: \"29b6aa89-0416-4595-9deb-10b290521d86\") " pod="openshift-multus/network-metrics-daemon-r9lmb"
Mar 13 12:36:49.504297 master-0 kubenswrapper[4143]: E0313 12:36:49.503939 4143 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered
Mar 13 12:36:49.504297 master-0 kubenswrapper[4143]: E0313 12:36:49.504008 4143 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/29b6aa89-0416-4595-9deb-10b290521d86-metrics-certs podName:29b6aa89-0416-4595-9deb-10b290521d86 nodeName:}" failed. No retries permitted until 2026-03-13 12:37:21.503987304 +0000 UTC m=+147.251131628 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/29b6aa89-0416-4595-9deb-10b290521d86-metrics-certs") pod "network-metrics-daemon-r9lmb" (UID: "29b6aa89-0416-4595-9deb-10b290521d86") : object "openshift-multus"/"metrics-daemon-secret" not registered
Mar 13 12:36:50.082038 master-0 kubenswrapper[4143]: I0313 12:36:50.081992 4143 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-r9lmb"
Mar 13 12:36:50.082252 master-0 kubenswrapper[4143]: I0313 12:36:50.081992 4143 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-pnwsc"
Mar 13 12:36:50.082252 master-0 kubenswrapper[4143]: E0313 12:36:50.082116 4143 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-r9lmb" podUID="29b6aa89-0416-4595-9deb-10b290521d86"
Mar 13 12:36:50.082252 master-0 kubenswrapper[4143]: E0313 12:36:50.082192 4143 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-pnwsc" podUID="269aedfd-4274-4998-bd0d-603b67257666"
Mar 13 12:36:51.681975 master-0 kubenswrapper[4143]: I0313 12:36:51.681821 4143 generic.go:334] "Generic (PLEG): container finished" podID="7f5a7196-2bfe-4fc7-820f-c6d17bb24f29" containerID="5d2c8326278b34b8ff59a9f9976ab41ef419174f1a3fbfc2a6b45ed48ed205d9" exitCode=0
Mar 13 12:36:51.681975 master-0 kubenswrapper[4143]: I0313 12:36:51.681882 4143 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fn8qb" event={"ID":"7f5a7196-2bfe-4fc7-820f-c6d17bb24f29","Type":"ContainerDied","Data":"5d2c8326278b34b8ff59a9f9976ab41ef419174f1a3fbfc2a6b45ed48ed205d9"}
Mar 13 12:36:51.684607 master-0 kubenswrapper[4143]: I0313 12:36:51.684566 4143 generic.go:334] "Generic (PLEG): container finished" podID="152689b1-5875-4a9a-bb25-bee858523168" containerID="e1467141e26d577aa41ff200895deb27986a626bccdf77e649db90ad9f882528" exitCode=0
Mar 13 12:36:51.684607 master-0 kubenswrapper[4143]: I0313 12:36:51.684599 4143 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-78p2k" event={"ID":"152689b1-5875-4a9a-bb25-bee858523168","Type":"ContainerDied","Data":"e1467141e26d577aa41ff200895deb27986a626bccdf77e649db90ad9f882528"}
Mar 13 12:36:51.686049 master-0 kubenswrapper[4143]: I0313 12:36:51.686001 4143 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-qg8q5" event={"ID":"1f43b4e7-5cd1-46d2-a02e-0d846b2e5182","Type":"ContainerStarted","Data":"8c3d9fdbcfd0987b6eb3f7869d1d1d034470ad27e956a473bf9fb468daecb5e8"}
Mar 13 12:36:51.686049 master-0 kubenswrapper[4143]: I0313 12:36:51.686029 4143 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-qg8q5" event={"ID":"1f43b4e7-5cd1-46d2-a02e-0d846b2e5182","Type":"ContainerStarted","Data":"3517c3af357130acb419ebab94b4810d07459b4c08d3eb4cac75ac8012cf32fb"}
Mar 13 12:36:51.687942 master-0 kubenswrapper[4143]: I0313 12:36:51.687814 4143 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-66b55d57d-5cww5" event={"ID":"5ae41cff-0949-47f8-aae9-ae133191476d","Type":"ContainerStarted","Data":"2a4481a18e7aed734ae4a2d67eeeb008d6aeba24bc7223a49b0d6a3791cd0e5c"}
Mar 13 12:36:51.704721 master-0 kubenswrapper[4143]: I0313 12:36:51.704617 4143 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" podStartSLOduration=6.7045574089999995 podStartE2EDuration="6.704557409s" podCreationTimestamp="2026-03-13 12:36:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 12:36:51.703864847 +0000 UTC m=+117.451009191" watchObservedRunningTime="2026-03-13 12:36:51.704557409 +0000 UTC m=+117.451701753"
Mar 13 12:36:51.754131 master-0 kubenswrapper[4143]: I0313 12:36:51.754066 4143 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-control-plane-66b55d57d-5cww5" podStartSLOduration=2.975520082 podStartE2EDuration="22.754047404s" podCreationTimestamp="2026-03-13 12:36:29 +0000 UTC" firstStartedPulling="2026-03-13 12:36:31.065597305 +0000 UTC m=+96.812741619" lastFinishedPulling="2026-03-13 12:36:50.844124617 +0000 UTC m=+116.591268941" observedRunningTime="2026-03-13 12:36:51.753747279 +0000 UTC m=+117.500891623" watchObservedRunningTime="2026-03-13 12:36:51.754047404 +0000 UTC m=+117.501191738"
Mar 13 12:36:51.864531 master-0 kubenswrapper[4143]: I0313 12:36:51.864038 4143 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-network-node-identity/network-node-identity-qg8q5" podStartSLOduration=2.734351747 podStartE2EDuration="17.864017555s" podCreationTimestamp="2026-03-13 12:36:34 +0000 UTC" firstStartedPulling="2026-03-13 12:36:35.747206269 +0000 UTC m=+101.494350593" lastFinishedPulling="2026-03-13 12:36:50.876872077 +0000 UTC m=+116.624016401" observedRunningTime="2026-03-13 12:36:51.811185871 +0000 UTC m=+117.558330205" watchObservedRunningTime="2026-03-13 12:36:51.864017555 +0000 UTC m=+117.611161879"
Mar 13 12:36:52.081869 master-0 kubenswrapper[4143]: I0313 12:36:52.081837 4143 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-r9lmb"
Mar 13 12:36:52.081972 master-0 kubenswrapper[4143]: I0313 12:36:52.081837 4143 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-pnwsc"
Mar 13 12:36:52.082013 master-0 kubenswrapper[4143]: E0313 12:36:52.081962 4143 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-r9lmb" podUID="29b6aa89-0416-4595-9deb-10b290521d86"
Mar 13 12:36:52.082044 master-0 kubenswrapper[4143]: E0313 12:36:52.082029 4143 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-pnwsc" podUID="269aedfd-4274-4998-bd0d-603b67257666"
Mar 13 12:36:52.696407 master-0 kubenswrapper[4143]: I0313 12:36:52.696025 4143 generic.go:334] "Generic (PLEG): container finished" podID="152689b1-5875-4a9a-bb25-bee858523168" containerID="1e34a2d26492b3df232459c166da8fc0ebb8dbb2c47bdf38857a1fe49a541e66" exitCode=0
Mar 13 12:36:52.697481 master-0 kubenswrapper[4143]: I0313 12:36:52.696120 4143 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-78p2k" event={"ID":"152689b1-5875-4a9a-bb25-bee858523168","Type":"ContainerDied","Data":"1e34a2d26492b3df232459c166da8fc0ebb8dbb2c47bdf38857a1fe49a541e66"}
Mar 13 12:36:52.702508 master-0 kubenswrapper[4143]: I0313 12:36:52.702442 4143 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fn8qb" event={"ID":"7f5a7196-2bfe-4fc7-820f-c6d17bb24f29","Type":"ContainerStarted","Data":"dfcc7ba8e6f38cdbcccc13ac0bed6edf4235599731530fdc8f0e49180eb57500"}
Mar 13 12:36:52.702842 master-0 kubenswrapper[4143]: I0313 12:36:52.702514 4143 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fn8qb" event={"ID":"7f5a7196-2bfe-4fc7-820f-c6d17bb24f29","Type":"ContainerStarted","Data":"187719fca94ffeedf3940d581714ca4d60af4cef7371e95398b16a016f337793"}
Mar 13 12:36:52.702842 master-0 kubenswrapper[4143]: I0313 12:36:52.702535 4143 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fn8qb" event={"ID":"7f5a7196-2bfe-4fc7-820f-c6d17bb24f29","Type":"ContainerStarted","Data":"40f13b76f53fd96d83bd5edac5930c74f120050da3f2295ff338497ec409d0bb"}
Mar 13 12:36:52.702842 master-0 kubenswrapper[4143]: I0313 12:36:52.702554 4143 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fn8qb"
event={"ID":"7f5a7196-2bfe-4fc7-820f-c6d17bb24f29","Type":"ContainerStarted","Data":"37f543b502c1eb3f430a0cbf3af3840c0270b596e9345eda56bfccb93e79cb5f"} Mar 13 12:36:52.702842 master-0 kubenswrapper[4143]: I0313 12:36:52.702570 4143 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fn8qb" event={"ID":"7f5a7196-2bfe-4fc7-820f-c6d17bb24f29","Type":"ContainerStarted","Data":"08ccdb90ac0437a98e0d0456f9b7d09f2d676c921b6aaf18c899a0864c1217d9"} Mar 13 12:36:52.702842 master-0 kubenswrapper[4143]: I0313 12:36:52.702588 4143 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fn8qb" event={"ID":"7f5a7196-2bfe-4fc7-820f-c6d17bb24f29","Type":"ContainerStarted","Data":"57dc4660cbf48746e762f15c513637ec0fc1b52c9466dd3c1d6abd72a80e4071"} Mar 13 12:36:53.715294 master-0 kubenswrapper[4143]: I0313 12:36:53.715211 4143 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-78p2k" event={"ID":"152689b1-5875-4a9a-bb25-bee858523168","Type":"ContainerStarted","Data":"b1126d18e847ed01cb89e67529ecaa5779874235edf97f8886762aa2bb31fdcd"} Mar 13 12:36:53.739828 master-0 kubenswrapper[4143]: I0313 12:36:53.739690 4143 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-additional-cni-plugins-78p2k" podStartSLOduration=4.1251324799999995 podStartE2EDuration="37.739664434s" podCreationTimestamp="2026-03-13 12:36:16 +0000 UTC" firstStartedPulling="2026-03-13 12:36:17.148285373 +0000 UTC m=+82.895429697" lastFinishedPulling="2026-03-13 12:36:50.762817307 +0000 UTC m=+116.509961651" observedRunningTime="2026-03-13 12:36:53.739320047 +0000 UTC m=+119.486464441" watchObservedRunningTime="2026-03-13 12:36:53.739664434 +0000 UTC m=+119.486808798" Mar 13 12:36:54.081894 master-0 kubenswrapper[4143]: I0313 12:36:54.081808 4143 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-pnwsc" Mar 13 12:36:54.082243 master-0 kubenswrapper[4143]: I0313 12:36:54.081926 4143 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-r9lmb" Mar 13 12:36:54.082243 master-0 kubenswrapper[4143]: E0313 12:36:54.082005 4143 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-pnwsc" podUID="269aedfd-4274-4998-bd0d-603b67257666" Mar 13 12:36:54.082472 master-0 kubenswrapper[4143]: E0313 12:36:54.082245 4143 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-r9lmb" podUID="29b6aa89-0416-4595-9deb-10b290521d86" Mar 13 12:36:54.680443 master-0 kubenswrapper[4143]: I0313 12:36:54.680271 4143 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-fn8qb"] Mar 13 12:36:54.723525 master-0 kubenswrapper[4143]: I0313 12:36:54.723471 4143 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fn8qb" event={"ID":"7f5a7196-2bfe-4fc7-820f-c6d17bb24f29","Type":"ContainerStarted","Data":"26d60ce545e04baaff4e16e1c413ed5f6769b41cdf16aa60858627ecc65e60dc"} Mar 13 12:36:54.886543 master-0 kubenswrapper[4143]: E0313 12:36:54.886457 4143 kubelet_node_status.go:497] "Node not becoming ready in time after startup" Mar 13 12:36:55.072326 master-0 kubenswrapper[4143]: E0313 12:36:55.072231 4143 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Mar 13 12:36:56.081813 master-0 kubenswrapper[4143]: I0313 12:36:56.081775 4143 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-r9lmb" Mar 13 12:36:56.082610 master-0 kubenswrapper[4143]: I0313 12:36:56.081829 4143 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-pnwsc" Mar 13 12:36:56.082791 master-0 kubenswrapper[4143]: E0313 12:36:56.082736 4143 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-pnwsc" podUID="269aedfd-4274-4998-bd0d-603b67257666" Mar 13 12:36:56.082863 master-0 kubenswrapper[4143]: E0313 12:36:56.082567 4143 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-r9lmb" podUID="29b6aa89-0416-4595-9deb-10b290521d86" Mar 13 12:36:57.740713 master-0 kubenswrapper[4143]: I0313 12:36:57.740398 4143 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fn8qb" event={"ID":"7f5a7196-2bfe-4fc7-820f-c6d17bb24f29","Type":"ContainerStarted","Data":"f66c3d50d03d2de5dddb4f7e19f6c7cb669052375e408c2fed698a0935972559"} Mar 13 12:36:57.741466 master-0 kubenswrapper[4143]: I0313 12:36:57.740883 4143 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-fn8qb" podUID="7f5a7196-2bfe-4fc7-820f-c6d17bb24f29" containerName="ovn-controller" containerID="cri-o://57dc4660cbf48746e762f15c513637ec0fc1b52c9466dd3c1d6abd72a80e4071" gracePeriod=30 Mar 13 12:36:57.741466 master-0 kubenswrapper[4143]: I0313 12:36:57.740922 4143 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-fn8qb" podUID="7f5a7196-2bfe-4fc7-820f-c6d17bb24f29" containerName="sbdb" containerID="cri-o://26d60ce545e04baaff4e16e1c413ed5f6769b41cdf16aa60858627ecc65e60dc" gracePeriod=30 Mar 13 12:36:57.741466 master-0 kubenswrapper[4143]: I0313 12:36:57.740971 4143 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-fn8qb" podUID="7f5a7196-2bfe-4fc7-820f-c6d17bb24f29" containerName="kube-rbac-proxy-ovn-metrics" 
containerID="cri-o://40f13b76f53fd96d83bd5edac5930c74f120050da3f2295ff338497ec409d0bb" gracePeriod=30 Mar 13 12:36:57.741466 master-0 kubenswrapper[4143]: I0313 12:36:57.740896 4143 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-fn8qb" podUID="7f5a7196-2bfe-4fc7-820f-c6d17bb24f29" containerName="northd" containerID="cri-o://187719fca94ffeedf3940d581714ca4d60af4cef7371e95398b16a016f337793" gracePeriod=30 Mar 13 12:36:57.741466 master-0 kubenswrapper[4143]: I0313 12:36:57.741043 4143 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-fn8qb" podUID="7f5a7196-2bfe-4fc7-820f-c6d17bb24f29" containerName="ovn-acl-logging" containerID="cri-o://08ccdb90ac0437a98e0d0456f9b7d09f2d676c921b6aaf18c899a0864c1217d9" gracePeriod=30 Mar 13 12:36:57.741466 master-0 kubenswrapper[4143]: I0313 12:36:57.741088 4143 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-fn8qb" Mar 13 12:36:57.741466 master-0 kubenswrapper[4143]: I0313 12:36:57.741021 4143 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-fn8qb" podUID="7f5a7196-2bfe-4fc7-820f-c6d17bb24f29" containerName="kube-rbac-proxy-node" containerID="cri-o://37f543b502c1eb3f430a0cbf3af3840c0270b596e9345eda56bfccb93e79cb5f" gracePeriod=30 Mar 13 12:36:57.741466 master-0 kubenswrapper[4143]: I0313 12:36:57.740995 4143 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-fn8qb" podUID="7f5a7196-2bfe-4fc7-820f-c6d17bb24f29" containerName="nbdb" containerID="cri-o://dfcc7ba8e6f38cdbcccc13ac0bed6edf4235599731530fdc8f0e49180eb57500" gracePeriod=30 Mar 13 12:36:57.741466 master-0 kubenswrapper[4143]: I0313 12:36:57.741322 4143 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-fn8qb" Mar 
13 12:36:57.741466 master-0 kubenswrapper[4143]: I0313 12:36:57.741429 4143 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-fn8qb" Mar 13 12:36:57.745327 master-0 kubenswrapper[4143]: E0313 12:36:57.745270 4143 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="26d60ce545e04baaff4e16e1c413ed5f6769b41cdf16aa60858627ecc65e60dc" cmd=["/bin/bash","-c","set -xeo pipefail\n. /ovnkube-lib/ovnkube-lib.sh || exit 1\novndb-readiness-probe \"sb\"\n"] Mar 13 12:36:57.745462 master-0 kubenswrapper[4143]: E0313 12:36:57.745261 4143 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="dfcc7ba8e6f38cdbcccc13ac0bed6edf4235599731530fdc8f0e49180eb57500" cmd=["/bin/bash","-c","set -xeo pipefail\n. /ovnkube-lib/ovnkube-lib.sh || exit 1\novndb-readiness-probe \"nb\"\n"] Mar 13 12:36:57.747120 master-0 kubenswrapper[4143]: E0313 12:36:57.747027 4143 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="dfcc7ba8e6f38cdbcccc13ac0bed6edf4235599731530fdc8f0e49180eb57500" cmd=["/bin/bash","-c","set -xeo pipefail\n. /ovnkube-lib/ovnkube-lib.sh || exit 1\novndb-readiness-probe \"nb\"\n"] Mar 13 12:36:57.751512 master-0 kubenswrapper[4143]: E0313 12:36:57.749846 4143 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="dfcc7ba8e6f38cdbcccc13ac0bed6edf4235599731530fdc8f0e49180eb57500" cmd=["/bin/bash","-c","set -xeo pipefail\n. 
/ovnkube-lib/ovnkube-lib.sh || exit 1\novndb-readiness-probe \"nb\"\n"] Mar 13 12:36:57.751512 master-0 kubenswrapper[4143]: E0313 12:36:57.749939 4143 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openshift-ovn-kubernetes/ovnkube-node-fn8qb" podUID="7f5a7196-2bfe-4fc7-820f-c6d17bb24f29" containerName="nbdb" Mar 13 12:36:57.754869 master-0 kubenswrapper[4143]: E0313 12:36:57.753195 4143 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="26d60ce545e04baaff4e16e1c413ed5f6769b41cdf16aa60858627ecc65e60dc" cmd=["/bin/bash","-c","set -xeo pipefail\n. /ovnkube-lib/ovnkube-lib.sh || exit 1\novndb-readiness-probe \"sb\"\n"] Mar 13 12:36:57.758018 master-0 kubenswrapper[4143]: E0313 12:36:57.757937 4143 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="26d60ce545e04baaff4e16e1c413ed5f6769b41cdf16aa60858627ecc65e60dc" cmd=["/bin/bash","-c","set -xeo pipefail\n. 
/ovnkube-lib/ovnkube-lib.sh || exit 1\novndb-readiness-probe \"sb\"\n"] Mar 13 12:36:57.758273 master-0 kubenswrapper[4143]: E0313 12:36:57.758032 4143 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openshift-ovn-kubernetes/ovnkube-node-fn8qb" podUID="7f5a7196-2bfe-4fc7-820f-c6d17bb24f29" containerName="sbdb" Mar 13 12:36:57.761610 master-0 kubenswrapper[4143]: I0313 12:36:57.761503 4143 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-fn8qb" podUID="7f5a7196-2bfe-4fc7-820f-c6d17bb24f29" containerName="ovnkube-controller" containerID="cri-o://f66c3d50d03d2de5dddb4f7e19f6c7cb669052375e408c2fed698a0935972559" gracePeriod=30 Mar 13 12:36:57.780815 master-0 kubenswrapper[4143]: I0313 12:36:57.780753 4143 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-fn8qb" podStartSLOduration=8.848403335 podStartE2EDuration="28.780737496s" podCreationTimestamp="2026-03-13 12:36:29 +0000 UTC" firstStartedPulling="2026-03-13 12:36:30.900000224 +0000 UTC m=+96.647144548" lastFinishedPulling="2026-03-13 12:36:50.832334385 +0000 UTC m=+116.579478709" observedRunningTime="2026-03-13 12:36:57.779542016 +0000 UTC m=+123.526686350" watchObservedRunningTime="2026-03-13 12:36:57.780737496 +0000 UTC m=+123.527881820" Mar 13 12:36:58.081633 master-0 kubenswrapper[4143]: I0313 12:36:58.081564 4143 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-r9lmb" Mar 13 12:36:58.081903 master-0 kubenswrapper[4143]: E0313 12:36:58.081727 4143 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-r9lmb" podUID="29b6aa89-0416-4595-9deb-10b290521d86" Mar 13 12:36:58.081903 master-0 kubenswrapper[4143]: I0313 12:36:58.081831 4143 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-pnwsc" Mar 13 12:36:58.082039 master-0 kubenswrapper[4143]: E0313 12:36:58.081982 4143 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-pnwsc" podUID="269aedfd-4274-4998-bd0d-603b67257666" Mar 13 12:36:58.304418 master-0 kubenswrapper[4143]: I0313 12:36:58.304273 4143 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-fn8qb_7f5a7196-2bfe-4fc7-820f-c6d17bb24f29/ovnkube-controller/0.log" Mar 13 12:36:58.306645 master-0 kubenswrapper[4143]: I0313 12:36:58.306595 4143 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-fn8qb_7f5a7196-2bfe-4fc7-820f-c6d17bb24f29/kube-rbac-proxy-ovn-metrics/0.log" Mar 13 12:36:58.307576 master-0 kubenswrapper[4143]: I0313 12:36:58.307520 4143 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-fn8qb_7f5a7196-2bfe-4fc7-820f-c6d17bb24f29/kube-rbac-proxy-node/0.log" Mar 13 12:36:58.308235 master-0 kubenswrapper[4143]: I0313 12:36:58.308188 4143 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-fn8qb_7f5a7196-2bfe-4fc7-820f-c6d17bb24f29/ovn-acl-logging/0.log" Mar 13 12:36:58.312493 master-0 kubenswrapper[4143]: I0313 12:36:58.309649 4143 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-fn8qb_7f5a7196-2bfe-4fc7-820f-c6d17bb24f29/ovn-controller/0.log" Mar 13 12:36:58.313229 master-0 kubenswrapper[4143]: I0313 12:36:58.313174 4143 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-fn8qb" Mar 13 12:36:58.392468 master-0 kubenswrapper[4143]: I0313 12:36:58.392023 4143 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-h8fwp"] Mar 13 12:36:58.392468 master-0 kubenswrapper[4143]: E0313 12:36:58.392194 4143 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7f5a7196-2bfe-4fc7-820f-c6d17bb24f29" containerName="nbdb" Mar 13 12:36:58.392468 master-0 kubenswrapper[4143]: I0313 12:36:58.392222 4143 state_mem.go:107] "Deleted CPUSet assignment" podUID="7f5a7196-2bfe-4fc7-820f-c6d17bb24f29" containerName="nbdb" Mar 13 12:36:58.392468 master-0 kubenswrapper[4143]: E0313 12:36:58.392236 4143 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7f5a7196-2bfe-4fc7-820f-c6d17bb24f29" containerName="ovn-acl-logging" Mar 13 12:36:58.392468 master-0 kubenswrapper[4143]: I0313 12:36:58.392260 4143 state_mem.go:107] "Deleted CPUSet assignment" podUID="7f5a7196-2bfe-4fc7-820f-c6d17bb24f29" containerName="ovn-acl-logging" Mar 13 12:36:58.392468 master-0 kubenswrapper[4143]: E0313 12:36:58.392273 4143 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7f5a7196-2bfe-4fc7-820f-c6d17bb24f29" containerName="northd" Mar 13 12:36:58.392468 master-0 kubenswrapper[4143]: I0313 12:36:58.392284 4143 state_mem.go:107] "Deleted CPUSet assignment" podUID="7f5a7196-2bfe-4fc7-820f-c6d17bb24f29" containerName="northd" Mar 13 12:36:58.392468 master-0 kubenswrapper[4143]: E0313 12:36:58.392294 4143 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7f5a7196-2bfe-4fc7-820f-c6d17bb24f29" containerName="kubecfg-setup" Mar 13 12:36:58.392468 master-0 kubenswrapper[4143]: I0313 12:36:58.392305 4143 state_mem.go:107] "Deleted CPUSet assignment" podUID="7f5a7196-2bfe-4fc7-820f-c6d17bb24f29" containerName="kubecfg-setup" Mar 13 12:36:58.392468 master-0 kubenswrapper[4143]: E0313 12:36:58.392315 4143 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="7f5a7196-2bfe-4fc7-820f-c6d17bb24f29" containerName="ovn-controller" Mar 13 12:36:58.392468 master-0 kubenswrapper[4143]: I0313 12:36:58.392325 4143 state_mem.go:107] "Deleted CPUSet assignment" podUID="7f5a7196-2bfe-4fc7-820f-c6d17bb24f29" containerName="ovn-controller" Mar 13 12:36:58.392468 master-0 kubenswrapper[4143]: E0313 12:36:58.392336 4143 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7f5a7196-2bfe-4fc7-820f-c6d17bb24f29" containerName="kube-rbac-proxy-node" Mar 13 12:36:58.392468 master-0 kubenswrapper[4143]: I0313 12:36:58.392346 4143 state_mem.go:107] "Deleted CPUSet assignment" podUID="7f5a7196-2bfe-4fc7-820f-c6d17bb24f29" containerName="kube-rbac-proxy-node" Mar 13 12:36:58.392468 master-0 kubenswrapper[4143]: E0313 12:36:58.392356 4143 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7f5a7196-2bfe-4fc7-820f-c6d17bb24f29" containerName="ovnkube-controller" Mar 13 12:36:58.392468 master-0 kubenswrapper[4143]: I0313 12:36:58.392366 4143 state_mem.go:107] "Deleted CPUSet assignment" podUID="7f5a7196-2bfe-4fc7-820f-c6d17bb24f29" containerName="ovnkube-controller" Mar 13 12:36:58.392468 master-0 kubenswrapper[4143]: E0313 12:36:58.392375 4143 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7f5a7196-2bfe-4fc7-820f-c6d17bb24f29" containerName="sbdb" Mar 13 12:36:58.392468 master-0 kubenswrapper[4143]: I0313 12:36:58.392386 4143 state_mem.go:107] "Deleted CPUSet assignment" podUID="7f5a7196-2bfe-4fc7-820f-c6d17bb24f29" containerName="sbdb" Mar 13 12:36:58.392468 master-0 kubenswrapper[4143]: E0313 12:36:58.392398 4143 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7f5a7196-2bfe-4fc7-820f-c6d17bb24f29" containerName="kube-rbac-proxy-ovn-metrics" Mar 13 12:36:58.392468 master-0 kubenswrapper[4143]: I0313 12:36:58.392409 4143 state_mem.go:107] "Deleted CPUSet assignment" podUID="7f5a7196-2bfe-4fc7-820f-c6d17bb24f29" 
containerName="kube-rbac-proxy-ovn-metrics" Mar 13 12:36:58.392468 master-0 kubenswrapper[4143]: I0313 12:36:58.392475 4143 memory_manager.go:354] "RemoveStaleState removing state" podUID="7f5a7196-2bfe-4fc7-820f-c6d17bb24f29" containerName="ovn-acl-logging" Mar 13 12:36:58.392468 master-0 kubenswrapper[4143]: I0313 12:36:58.392494 4143 memory_manager.go:354] "RemoveStaleState removing state" podUID="7f5a7196-2bfe-4fc7-820f-c6d17bb24f29" containerName="ovnkube-controller" Mar 13 12:36:58.393587 master-0 kubenswrapper[4143]: I0313 12:36:58.392505 4143 memory_manager.go:354] "RemoveStaleState removing state" podUID="7f5a7196-2bfe-4fc7-820f-c6d17bb24f29" containerName="nbdb" Mar 13 12:36:58.393587 master-0 kubenswrapper[4143]: I0313 12:36:58.392517 4143 memory_manager.go:354] "RemoveStaleState removing state" podUID="7f5a7196-2bfe-4fc7-820f-c6d17bb24f29" containerName="ovn-controller" Mar 13 12:36:58.393587 master-0 kubenswrapper[4143]: I0313 12:36:58.392526 4143 memory_manager.go:354] "RemoveStaleState removing state" podUID="7f5a7196-2bfe-4fc7-820f-c6d17bb24f29" containerName="kube-rbac-proxy-ovn-metrics" Mar 13 12:36:58.393587 master-0 kubenswrapper[4143]: I0313 12:36:58.392548 4143 memory_manager.go:354] "RemoveStaleState removing state" podUID="7f5a7196-2bfe-4fc7-820f-c6d17bb24f29" containerName="sbdb" Mar 13 12:36:58.393587 master-0 kubenswrapper[4143]: I0313 12:36:58.392558 4143 memory_manager.go:354] "RemoveStaleState removing state" podUID="7f5a7196-2bfe-4fc7-820f-c6d17bb24f29" containerName="northd" Mar 13 12:36:58.393587 master-0 kubenswrapper[4143]: I0313 12:36:58.392568 4143 memory_manager.go:354] "RemoveStaleState removing state" podUID="7f5a7196-2bfe-4fc7-820f-c6d17bb24f29" containerName="kube-rbac-proxy-node" Mar 13 12:36:58.393587 master-0 kubenswrapper[4143]: I0313 12:36:58.393586 4143 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-h8fwp" Mar 13 12:36:58.496329 master-0 kubenswrapper[4143]: I0313 12:36:58.496253 4143 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/7f5a7196-2bfe-4fc7-820f-c6d17bb24f29-systemd-units\") pod \"7f5a7196-2bfe-4fc7-820f-c6d17bb24f29\" (UID: \"7f5a7196-2bfe-4fc7-820f-c6d17bb24f29\") " Mar 13 12:36:58.496329 master-0 kubenswrapper[4143]: I0313 12:36:58.496334 4143 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/7f5a7196-2bfe-4fc7-820f-c6d17bb24f29-run-systemd\") pod \"7f5a7196-2bfe-4fc7-820f-c6d17bb24f29\" (UID: \"7f5a7196-2bfe-4fc7-820f-c6d17bb24f29\") " Mar 13 12:36:58.497541 master-0 kubenswrapper[4143]: I0313 12:36:58.496379 4143 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/7f5a7196-2bfe-4fc7-820f-c6d17bb24f29-ovn-node-metrics-cert\") pod \"7f5a7196-2bfe-4fc7-820f-c6d17bb24f29\" (UID: \"7f5a7196-2bfe-4fc7-820f-c6d17bb24f29\") " Mar 13 12:36:58.497541 master-0 kubenswrapper[4143]: I0313 12:36:58.496393 4143 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7f5a7196-2bfe-4fc7-820f-c6d17bb24f29-systemd-units" (OuterVolumeSpecName: "systemd-units") pod "7f5a7196-2bfe-4fc7-820f-c6d17bb24f29" (UID: "7f5a7196-2bfe-4fc7-820f-c6d17bb24f29"). InnerVolumeSpecName "systemd-units". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 13 12:36:58.497541 master-0 kubenswrapper[4143]: I0313 12:36:58.496414 4143 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/7f5a7196-2bfe-4fc7-820f-c6d17bb24f29-host-cni-bin\") pod \"7f5a7196-2bfe-4fc7-820f-c6d17bb24f29\" (UID: \"7f5a7196-2bfe-4fc7-820f-c6d17bb24f29\") " Mar 13 12:36:58.497541 master-0 kubenswrapper[4143]: I0313 12:36:58.496446 4143 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2bz6x\" (UniqueName: \"kubernetes.io/projected/7f5a7196-2bfe-4fc7-820f-c6d17bb24f29-kube-api-access-2bz6x\") pod \"7f5a7196-2bfe-4fc7-820f-c6d17bb24f29\" (UID: \"7f5a7196-2bfe-4fc7-820f-c6d17bb24f29\") " Mar 13 12:36:58.497541 master-0 kubenswrapper[4143]: I0313 12:36:58.496487 4143 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/7f5a7196-2bfe-4fc7-820f-c6d17bb24f29-node-log\") pod \"7f5a7196-2bfe-4fc7-820f-c6d17bb24f29\" (UID: \"7f5a7196-2bfe-4fc7-820f-c6d17bb24f29\") " Mar 13 12:36:58.497541 master-0 kubenswrapper[4143]: I0313 12:36:58.496533 4143 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7f5a7196-2bfe-4fc7-820f-c6d17bb24f29-host-cni-bin" (OuterVolumeSpecName: "host-cni-bin") pod "7f5a7196-2bfe-4fc7-820f-c6d17bb24f29" (UID: "7f5a7196-2bfe-4fc7-820f-c6d17bb24f29"). InnerVolumeSpecName "host-cni-bin". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 13 12:36:58.497541 master-0 kubenswrapper[4143]: I0313 12:36:58.496549 4143 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7f5a7196-2bfe-4fc7-820f-c6d17bb24f29-node-log" (OuterVolumeSpecName: "node-log") pod "7f5a7196-2bfe-4fc7-820f-c6d17bb24f29" (UID: "7f5a7196-2bfe-4fc7-820f-c6d17bb24f29"). InnerVolumeSpecName "node-log". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 13 12:36:58.497541 master-0 kubenswrapper[4143]: I0313 12:36:58.496628 4143 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/7f5a7196-2bfe-4fc7-820f-c6d17bb24f29-var-lib-openvswitch\") pod \"7f5a7196-2bfe-4fc7-820f-c6d17bb24f29\" (UID: \"7f5a7196-2bfe-4fc7-820f-c6d17bb24f29\") " Mar 13 12:36:58.497541 master-0 kubenswrapper[4143]: I0313 12:36:58.496657 4143 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/7f5a7196-2bfe-4fc7-820f-c6d17bb24f29-host-run-ovn-kubernetes\") pod \"7f5a7196-2bfe-4fc7-820f-c6d17bb24f29\" (UID: \"7f5a7196-2bfe-4fc7-820f-c6d17bb24f29\") " Mar 13 12:36:58.497541 master-0 kubenswrapper[4143]: I0313 12:36:58.496678 4143 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/7f5a7196-2bfe-4fc7-820f-c6d17bb24f29-run-openvswitch\") pod \"7f5a7196-2bfe-4fc7-820f-c6d17bb24f29\" (UID: \"7f5a7196-2bfe-4fc7-820f-c6d17bb24f29\") " Mar 13 12:36:58.497541 master-0 kubenswrapper[4143]: I0313 12:36:58.496706 4143 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/7f5a7196-2bfe-4fc7-820f-c6d17bb24f29-ovnkube-config\") pod \"7f5a7196-2bfe-4fc7-820f-c6d17bb24f29\" (UID: \"7f5a7196-2bfe-4fc7-820f-c6d17bb24f29\") " Mar 13 12:36:58.497541 master-0 kubenswrapper[4143]: I0313 12:36:58.496730 4143 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/7f5a7196-2bfe-4fc7-820f-c6d17bb24f29-host-slash\") pod \"7f5a7196-2bfe-4fc7-820f-c6d17bb24f29\" (UID: \"7f5a7196-2bfe-4fc7-820f-c6d17bb24f29\") " Mar 13 12:36:58.497541 master-0 kubenswrapper[4143]: I0313 12:36:58.496754 4143 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/7f5a7196-2bfe-4fc7-820f-c6d17bb24f29-host-run-netns\") pod \"7f5a7196-2bfe-4fc7-820f-c6d17bb24f29\" (UID: \"7f5a7196-2bfe-4fc7-820f-c6d17bb24f29\") " Mar 13 12:36:58.497541 master-0 kubenswrapper[4143]: I0313 12:36:58.496772 4143 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/7f5a7196-2bfe-4fc7-820f-c6d17bb24f29-etc-openvswitch\") pod \"7f5a7196-2bfe-4fc7-820f-c6d17bb24f29\" (UID: \"7f5a7196-2bfe-4fc7-820f-c6d17bb24f29\") " Mar 13 12:36:58.497541 master-0 kubenswrapper[4143]: I0313 12:36:58.496792 4143 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/7f5a7196-2bfe-4fc7-820f-c6d17bb24f29-host-var-lib-cni-networks-ovn-kubernetes\") pod \"7f5a7196-2bfe-4fc7-820f-c6d17bb24f29\" (UID: \"7f5a7196-2bfe-4fc7-820f-c6d17bb24f29\") " Mar 13 12:36:58.497541 master-0 kubenswrapper[4143]: I0313 12:36:58.496813 4143 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/7f5a7196-2bfe-4fc7-820f-c6d17bb24f29-host-cni-netd\") pod \"7f5a7196-2bfe-4fc7-820f-c6d17bb24f29\" (UID: \"7f5a7196-2bfe-4fc7-820f-c6d17bb24f29\") " Mar 13 12:36:58.497541 master-0 kubenswrapper[4143]: I0313 12:36:58.496837 4143 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/7f5a7196-2bfe-4fc7-820f-c6d17bb24f29-run-ovn\") pod \"7f5a7196-2bfe-4fc7-820f-c6d17bb24f29\" (UID: \"7f5a7196-2bfe-4fc7-820f-c6d17bb24f29\") " Mar 13 12:36:58.497541 master-0 kubenswrapper[4143]: I0313 12:36:58.496867 4143 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-socket\" (UniqueName: 
\"kubernetes.io/host-path/7f5a7196-2bfe-4fc7-820f-c6d17bb24f29-log-socket\") pod \"7f5a7196-2bfe-4fc7-820f-c6d17bb24f29\" (UID: \"7f5a7196-2bfe-4fc7-820f-c6d17bb24f29\") " Mar 13 12:36:58.498438 master-0 kubenswrapper[4143]: I0313 12:36:58.496888 4143 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/7f5a7196-2bfe-4fc7-820f-c6d17bb24f29-ovnkube-script-lib\") pod \"7f5a7196-2bfe-4fc7-820f-c6d17bb24f29\" (UID: \"7f5a7196-2bfe-4fc7-820f-c6d17bb24f29\") " Mar 13 12:36:58.498438 master-0 kubenswrapper[4143]: I0313 12:36:58.496909 4143 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/7f5a7196-2bfe-4fc7-820f-c6d17bb24f29-host-kubelet\") pod \"7f5a7196-2bfe-4fc7-820f-c6d17bb24f29\" (UID: \"7f5a7196-2bfe-4fc7-820f-c6d17bb24f29\") " Mar 13 12:36:58.498438 master-0 kubenswrapper[4143]: I0313 12:36:58.496931 4143 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/7f5a7196-2bfe-4fc7-820f-c6d17bb24f29-env-overrides\") pod \"7f5a7196-2bfe-4fc7-820f-c6d17bb24f29\" (UID: \"7f5a7196-2bfe-4fc7-820f-c6d17bb24f29\") " Mar 13 12:36:58.498438 master-0 kubenswrapper[4143]: I0313 12:36:58.496927 4143 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7f5a7196-2bfe-4fc7-820f-c6d17bb24f29-etc-openvswitch" (OuterVolumeSpecName: "etc-openvswitch") pod "7f5a7196-2bfe-4fc7-820f-c6d17bb24f29" (UID: "7f5a7196-2bfe-4fc7-820f-c6d17bb24f29"). InnerVolumeSpecName "etc-openvswitch". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 13 12:36:58.498438 master-0 kubenswrapper[4143]: I0313 12:36:58.497004 4143 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7f5a7196-2bfe-4fc7-820f-c6d17bb24f29-host-slash" (OuterVolumeSpecName: "host-slash") pod "7f5a7196-2bfe-4fc7-820f-c6d17bb24f29" (UID: "7f5a7196-2bfe-4fc7-820f-c6d17bb24f29"). InnerVolumeSpecName "host-slash". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 13 12:36:58.498438 master-0 kubenswrapper[4143]: I0313 12:36:58.496999 4143 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7f5a7196-2bfe-4fc7-820f-c6d17bb24f29-host-run-ovn-kubernetes" (OuterVolumeSpecName: "host-run-ovn-kubernetes") pod "7f5a7196-2bfe-4fc7-820f-c6d17bb24f29" (UID: "7f5a7196-2bfe-4fc7-820f-c6d17bb24f29"). InnerVolumeSpecName "host-run-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 13 12:36:58.498438 master-0 kubenswrapper[4143]: I0313 12:36:58.497040 4143 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7f5a7196-2bfe-4fc7-820f-c6d17bb24f29-host-run-netns" (OuterVolumeSpecName: "host-run-netns") pod "7f5a7196-2bfe-4fc7-820f-c6d17bb24f29" (UID: "7f5a7196-2bfe-4fc7-820f-c6d17bb24f29"). InnerVolumeSpecName "host-run-netns". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 13 12:36:58.498438 master-0 kubenswrapper[4143]: I0313 12:36:58.497027 4143 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/d6226325-c4d9-497e-8d19-a71adc66c5ac-host-cni-bin\") pod \"ovnkube-node-h8fwp\" (UID: \"d6226325-c4d9-497e-8d19-a71adc66c5ac\") " pod="openshift-ovn-kubernetes/ovnkube-node-h8fwp" Mar 13 12:36:58.498438 master-0 kubenswrapper[4143]: I0313 12:36:58.497074 4143 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7f5a7196-2bfe-4fc7-820f-c6d17bb24f29-var-lib-openvswitch" (OuterVolumeSpecName: "var-lib-openvswitch") pod "7f5a7196-2bfe-4fc7-820f-c6d17bb24f29" (UID: "7f5a7196-2bfe-4fc7-820f-c6d17bb24f29"). InnerVolumeSpecName "var-lib-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 13 12:36:58.498438 master-0 kubenswrapper[4143]: I0313 12:36:58.497100 4143 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/d6226325-c4d9-497e-8d19-a71adc66c5ac-var-lib-openvswitch\") pod \"ovnkube-node-h8fwp\" (UID: \"d6226325-c4d9-497e-8d19-a71adc66c5ac\") " pod="openshift-ovn-kubernetes/ovnkube-node-h8fwp" Mar 13 12:36:58.498438 master-0 kubenswrapper[4143]: I0313 12:36:58.497100 4143 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7f5a7196-2bfe-4fc7-820f-c6d17bb24f29-run-openvswitch" (OuterVolumeSpecName: "run-openvswitch") pod "7f5a7196-2bfe-4fc7-820f-c6d17bb24f29" (UID: "7f5a7196-2bfe-4fc7-820f-c6d17bb24f29"). InnerVolumeSpecName "run-openvswitch". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 13 12:36:58.498438 master-0 kubenswrapper[4143]: I0313 12:36:58.497109 4143 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7f5a7196-2bfe-4fc7-820f-c6d17bb24f29-log-socket" (OuterVolumeSpecName: "log-socket") pod "7f5a7196-2bfe-4fc7-820f-c6d17bb24f29" (UID: "7f5a7196-2bfe-4fc7-820f-c6d17bb24f29"). InnerVolumeSpecName "log-socket". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 13 12:36:58.498438 master-0 kubenswrapper[4143]: I0313 12:36:58.497125 4143 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/d6226325-c4d9-497e-8d19-a71adc66c5ac-node-log\") pod \"ovnkube-node-h8fwp\" (UID: \"d6226325-c4d9-497e-8d19-a71adc66c5ac\") " pod="openshift-ovn-kubernetes/ovnkube-node-h8fwp" Mar 13 12:36:58.498438 master-0 kubenswrapper[4143]: I0313 12:36:58.497158 4143 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7f5a7196-2bfe-4fc7-820f-c6d17bb24f29-host-var-lib-cni-networks-ovn-kubernetes" (OuterVolumeSpecName: "host-var-lib-cni-networks-ovn-kubernetes") pod "7f5a7196-2bfe-4fc7-820f-c6d17bb24f29" (UID: "7f5a7196-2bfe-4fc7-820f-c6d17bb24f29"). InnerVolumeSpecName "host-var-lib-cni-networks-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 13 12:36:58.498438 master-0 kubenswrapper[4143]: I0313 12:36:58.497160 4143 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7f5a7196-2bfe-4fc7-820f-c6d17bb24f29-host-kubelet" (OuterVolumeSpecName: "host-kubelet") pod "7f5a7196-2bfe-4fc7-820f-c6d17bb24f29" (UID: "7f5a7196-2bfe-4fc7-820f-c6d17bb24f29"). InnerVolumeSpecName "host-kubelet". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 13 12:36:58.498957 master-0 kubenswrapper[4143]: I0313 12:36:58.497179 4143 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/d6226325-c4d9-497e-8d19-a71adc66c5ac-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-h8fwp\" (UID: \"d6226325-c4d9-497e-8d19-a71adc66c5ac\") " pod="openshift-ovn-kubernetes/ovnkube-node-h8fwp" Mar 13 12:36:58.498957 master-0 kubenswrapper[4143]: I0313 12:36:58.497178 4143 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7f5a7196-2bfe-4fc7-820f-c6d17bb24f29-host-cni-netd" (OuterVolumeSpecName: "host-cni-netd") pod "7f5a7196-2bfe-4fc7-820f-c6d17bb24f29" (UID: "7f5a7196-2bfe-4fc7-820f-c6d17bb24f29"). InnerVolumeSpecName "host-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 13 12:36:58.498957 master-0 kubenswrapper[4143]: I0313 12:36:58.497192 4143 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7f5a7196-2bfe-4fc7-820f-c6d17bb24f29-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "7f5a7196-2bfe-4fc7-820f-c6d17bb24f29" (UID: "7f5a7196-2bfe-4fc7-820f-c6d17bb24f29"). InnerVolumeSpecName "ovnkube-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 13 12:36:58.498957 master-0 kubenswrapper[4143]: I0313 12:36:58.497210 4143 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/d6226325-c4d9-497e-8d19-a71adc66c5ac-log-socket\") pod \"ovnkube-node-h8fwp\" (UID: \"d6226325-c4d9-497e-8d19-a71adc66c5ac\") " pod="openshift-ovn-kubernetes/ovnkube-node-h8fwp" Mar 13 12:36:58.498957 master-0 kubenswrapper[4143]: I0313 12:36:58.497199 4143 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7f5a7196-2bfe-4fc7-820f-c6d17bb24f29-run-ovn" (OuterVolumeSpecName: "run-ovn") pod "7f5a7196-2bfe-4fc7-820f-c6d17bb24f29" (UID: "7f5a7196-2bfe-4fc7-820f-c6d17bb24f29"). InnerVolumeSpecName "run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 13 12:36:58.498957 master-0 kubenswrapper[4143]: I0313 12:36:58.497269 4143 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/d6226325-c4d9-497e-8d19-a71adc66c5ac-ovnkube-config\") pod \"ovnkube-node-h8fwp\" (UID: \"d6226325-c4d9-497e-8d19-a71adc66c5ac\") " pod="openshift-ovn-kubernetes/ovnkube-node-h8fwp" Mar 13 12:36:58.498957 master-0 kubenswrapper[4143]: I0313 12:36:58.497359 4143 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/d6226325-c4d9-497e-8d19-a71adc66c5ac-ovnkube-script-lib\") pod \"ovnkube-node-h8fwp\" (UID: \"d6226325-c4d9-497e-8d19-a71adc66c5ac\") " pod="openshift-ovn-kubernetes/ovnkube-node-h8fwp" Mar 13 12:36:58.498957 master-0 kubenswrapper[4143]: I0313 12:36:58.497444 4143 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4j5fc\" (UniqueName: 
\"kubernetes.io/projected/d6226325-c4d9-497e-8d19-a71adc66c5ac-kube-api-access-4j5fc\") pod \"ovnkube-node-h8fwp\" (UID: \"d6226325-c4d9-497e-8d19-a71adc66c5ac\") " pod="openshift-ovn-kubernetes/ovnkube-node-h8fwp" Mar 13 12:36:58.498957 master-0 kubenswrapper[4143]: I0313 12:36:58.497496 4143 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/d6226325-c4d9-497e-8d19-a71adc66c5ac-systemd-units\") pod \"ovnkube-node-h8fwp\" (UID: \"d6226325-c4d9-497e-8d19-a71adc66c5ac\") " pod="openshift-ovn-kubernetes/ovnkube-node-h8fwp" Mar 13 12:36:58.498957 master-0 kubenswrapper[4143]: I0313 12:36:58.497558 4143 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7f5a7196-2bfe-4fc7-820f-c6d17bb24f29-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "7f5a7196-2bfe-4fc7-820f-c6d17bb24f29" (UID: "7f5a7196-2bfe-4fc7-820f-c6d17bb24f29"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 13 12:36:58.498957 master-0 kubenswrapper[4143]: I0313 12:36:58.497659 4143 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7f5a7196-2bfe-4fc7-820f-c6d17bb24f29-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "7f5a7196-2bfe-4fc7-820f-c6d17bb24f29" (UID: "7f5a7196-2bfe-4fc7-820f-c6d17bb24f29"). InnerVolumeSpecName "env-overrides". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 13 12:36:58.498957 master-0 kubenswrapper[4143]: I0313 12:36:58.497659 4143 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/d6226325-c4d9-497e-8d19-a71adc66c5ac-host-kubelet\") pod \"ovnkube-node-h8fwp\" (UID: \"d6226325-c4d9-497e-8d19-a71adc66c5ac\") " pod="openshift-ovn-kubernetes/ovnkube-node-h8fwp" Mar 13 12:36:58.498957 master-0 kubenswrapper[4143]: I0313 12:36:58.497743 4143 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d6226325-c4d9-497e-8d19-a71adc66c5ac-host-slash\") pod \"ovnkube-node-h8fwp\" (UID: \"d6226325-c4d9-497e-8d19-a71adc66c5ac\") " pod="openshift-ovn-kubernetes/ovnkube-node-h8fwp" Mar 13 12:36:58.498957 master-0 kubenswrapper[4143]: I0313 12:36:58.497850 4143 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/d6226325-c4d9-497e-8d19-a71adc66c5ac-host-run-netns\") pod \"ovnkube-node-h8fwp\" (UID: \"d6226325-c4d9-497e-8d19-a71adc66c5ac\") " pod="openshift-ovn-kubernetes/ovnkube-node-h8fwp" Mar 13 12:36:58.498957 master-0 kubenswrapper[4143]: I0313 12:36:58.497884 4143 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/d6226325-c4d9-497e-8d19-a71adc66c5ac-run-ovn\") pod \"ovnkube-node-h8fwp\" (UID: \"d6226325-c4d9-497e-8d19-a71adc66c5ac\") " pod="openshift-ovn-kubernetes/ovnkube-node-h8fwp" Mar 13 12:36:58.498957 master-0 kubenswrapper[4143]: I0313 12:36:58.497917 4143 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/d6226325-c4d9-497e-8d19-a71adc66c5ac-env-overrides\") pod 
\"ovnkube-node-h8fwp\" (UID: \"d6226325-c4d9-497e-8d19-a71adc66c5ac\") " pod="openshift-ovn-kubernetes/ovnkube-node-h8fwp" Mar 13 12:36:58.499552 master-0 kubenswrapper[4143]: I0313 12:36:58.497939 4143 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/d6226325-c4d9-497e-8d19-a71adc66c5ac-ovn-node-metrics-cert\") pod \"ovnkube-node-h8fwp\" (UID: \"d6226325-c4d9-497e-8d19-a71adc66c5ac\") " pod="openshift-ovn-kubernetes/ovnkube-node-h8fwp" Mar 13 12:36:58.499552 master-0 kubenswrapper[4143]: I0313 12:36:58.497989 4143 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/d6226325-c4d9-497e-8d19-a71adc66c5ac-etc-openvswitch\") pod \"ovnkube-node-h8fwp\" (UID: \"d6226325-c4d9-497e-8d19-a71adc66c5ac\") " pod="openshift-ovn-kubernetes/ovnkube-node-h8fwp" Mar 13 12:36:58.499552 master-0 kubenswrapper[4143]: I0313 12:36:58.498033 4143 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/d6226325-c4d9-497e-8d19-a71adc66c5ac-host-run-ovn-kubernetes\") pod \"ovnkube-node-h8fwp\" (UID: \"d6226325-c4d9-497e-8d19-a71adc66c5ac\") " pod="openshift-ovn-kubernetes/ovnkube-node-h8fwp" Mar 13 12:36:58.499552 master-0 kubenswrapper[4143]: I0313 12:36:58.498056 4143 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/d6226325-c4d9-497e-8d19-a71adc66c5ac-run-openvswitch\") pod \"ovnkube-node-h8fwp\" (UID: \"d6226325-c4d9-497e-8d19-a71adc66c5ac\") " pod="openshift-ovn-kubernetes/ovnkube-node-h8fwp" Mar 13 12:36:58.499552 master-0 kubenswrapper[4143]: I0313 12:36:58.498078 4143 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/d6226325-c4d9-497e-8d19-a71adc66c5ac-host-cni-netd\") pod \"ovnkube-node-h8fwp\" (UID: \"d6226325-c4d9-497e-8d19-a71adc66c5ac\") " pod="openshift-ovn-kubernetes/ovnkube-node-h8fwp" Mar 13 12:36:58.499552 master-0 kubenswrapper[4143]: I0313 12:36:58.498104 4143 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/d6226325-c4d9-497e-8d19-a71adc66c5ac-run-systemd\") pod \"ovnkube-node-h8fwp\" (UID: \"d6226325-c4d9-497e-8d19-a71adc66c5ac\") " pod="openshift-ovn-kubernetes/ovnkube-node-h8fwp" Mar 13 12:36:58.499552 master-0 kubenswrapper[4143]: I0313 12:36:58.498193 4143 reconciler_common.go:293] "Volume detached for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/7f5a7196-2bfe-4fc7-820f-c6d17bb24f29-host-run-netns\") on node \"master-0\" DevicePath \"\"" Mar 13 12:36:58.499552 master-0 kubenswrapper[4143]: I0313 12:36:58.498208 4143 reconciler_common.go:293] "Volume detached for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/7f5a7196-2bfe-4fc7-820f-c6d17bb24f29-etc-openvswitch\") on node \"master-0\" DevicePath \"\"" Mar 13 12:36:58.499552 master-0 kubenswrapper[4143]: I0313 12:36:58.498221 4143 reconciler_common.go:293] "Volume detached for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/7f5a7196-2bfe-4fc7-820f-c6d17bb24f29-host-var-lib-cni-networks-ovn-kubernetes\") on node \"master-0\" DevicePath \"\"" Mar 13 12:36:58.499552 master-0 kubenswrapper[4143]: I0313 12:36:58.498236 4143 reconciler_common.go:293] "Volume detached for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/7f5a7196-2bfe-4fc7-820f-c6d17bb24f29-host-cni-netd\") on node \"master-0\" DevicePath \"\"" Mar 13 12:36:58.499552 master-0 kubenswrapper[4143]: I0313 12:36:58.498247 4143 reconciler_common.go:293] "Volume detached for volume \"run-ovn\" (UniqueName: 
\"kubernetes.io/host-path/7f5a7196-2bfe-4fc7-820f-c6d17bb24f29-run-ovn\") on node \"master-0\" DevicePath \"\"" Mar 13 12:36:58.499552 master-0 kubenswrapper[4143]: I0313 12:36:58.498264 4143 reconciler_common.go:293] "Volume detached for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/7f5a7196-2bfe-4fc7-820f-c6d17bb24f29-log-socket\") on node \"master-0\" DevicePath \"\"" Mar 13 12:36:58.499552 master-0 kubenswrapper[4143]: I0313 12:36:58.498274 4143 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/7f5a7196-2bfe-4fc7-820f-c6d17bb24f29-ovnkube-script-lib\") on node \"master-0\" DevicePath \"\"" Mar 13 12:36:58.499552 master-0 kubenswrapper[4143]: I0313 12:36:58.498285 4143 reconciler_common.go:293] "Volume detached for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/7f5a7196-2bfe-4fc7-820f-c6d17bb24f29-host-kubelet\") on node \"master-0\" DevicePath \"\"" Mar 13 12:36:58.499552 master-0 kubenswrapper[4143]: I0313 12:36:58.498296 4143 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/7f5a7196-2bfe-4fc7-820f-c6d17bb24f29-env-overrides\") on node \"master-0\" DevicePath \"\"" Mar 13 12:36:58.499552 master-0 kubenswrapper[4143]: I0313 12:36:58.498307 4143 reconciler_common.go:293] "Volume detached for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/7f5a7196-2bfe-4fc7-820f-c6d17bb24f29-systemd-units\") on node \"master-0\" DevicePath \"\"" Mar 13 12:36:58.499552 master-0 kubenswrapper[4143]: I0313 12:36:58.498552 4143 reconciler_common.go:293] "Volume detached for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/7f5a7196-2bfe-4fc7-820f-c6d17bb24f29-host-cni-bin\") on node \"master-0\" DevicePath \"\"" Mar 13 12:36:58.499552 master-0 kubenswrapper[4143]: I0313 12:36:58.498664 4143 reconciler_common.go:293] "Volume detached for volume \"node-log\" (UniqueName: 
\"kubernetes.io/host-path/7f5a7196-2bfe-4fc7-820f-c6d17bb24f29-node-log\") on node \"master-0\" DevicePath \"\"" Mar 13 12:36:58.499552 master-0 kubenswrapper[4143]: I0313 12:36:58.498682 4143 reconciler_common.go:293] "Volume detached for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/7f5a7196-2bfe-4fc7-820f-c6d17bb24f29-var-lib-openvswitch\") on node \"master-0\" DevicePath \"\"" Mar 13 12:36:58.499552 master-0 kubenswrapper[4143]: I0313 12:36:58.498693 4143 reconciler_common.go:293] "Volume detached for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/7f5a7196-2bfe-4fc7-820f-c6d17bb24f29-host-run-ovn-kubernetes\") on node \"master-0\" DevicePath \"\"" Mar 13 12:36:58.499552 master-0 kubenswrapper[4143]: I0313 12:36:58.498704 4143 reconciler_common.go:293] "Volume detached for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/7f5a7196-2bfe-4fc7-820f-c6d17bb24f29-run-openvswitch\") on node \"master-0\" DevicePath \"\"" Mar 13 12:36:58.499552 master-0 kubenswrapper[4143]: I0313 12:36:58.498715 4143 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/7f5a7196-2bfe-4fc7-820f-c6d17bb24f29-ovnkube-config\") on node \"master-0\" DevicePath \"\"" Mar 13 12:36:58.500291 master-0 kubenswrapper[4143]: I0313 12:36:58.498725 4143 reconciler_common.go:293] "Volume detached for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/7f5a7196-2bfe-4fc7-820f-c6d17bb24f29-host-slash\") on node \"master-0\" DevicePath \"\"" Mar 13 12:36:58.500291 master-0 kubenswrapper[4143]: I0313 12:36:58.499933 4143 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7f5a7196-2bfe-4fc7-820f-c6d17bb24f29-kube-api-access-2bz6x" (OuterVolumeSpecName: "kube-api-access-2bz6x") pod "7f5a7196-2bfe-4fc7-820f-c6d17bb24f29" (UID: "7f5a7196-2bfe-4fc7-820f-c6d17bb24f29"). InnerVolumeSpecName "kube-api-access-2bz6x". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 12:36:58.501478 master-0 kubenswrapper[4143]: I0313 12:36:58.501425 4143 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7f5a7196-2bfe-4fc7-820f-c6d17bb24f29-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "7f5a7196-2bfe-4fc7-820f-c6d17bb24f29" (UID: "7f5a7196-2bfe-4fc7-820f-c6d17bb24f29"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 12:36:58.501962 master-0 kubenswrapper[4143]: I0313 12:36:58.501921 4143 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7f5a7196-2bfe-4fc7-820f-c6d17bb24f29-run-systemd" (OuterVolumeSpecName: "run-systemd") pod "7f5a7196-2bfe-4fc7-820f-c6d17bb24f29" (UID: "7f5a7196-2bfe-4fc7-820f-c6d17bb24f29"). InnerVolumeSpecName "run-systemd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 13 12:36:58.599866 master-0 kubenswrapper[4143]: I0313 12:36:58.599603 4143 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/d6226325-c4d9-497e-8d19-a71adc66c5ac-systemd-units\") pod \"ovnkube-node-h8fwp\" (UID: \"d6226325-c4d9-497e-8d19-a71adc66c5ac\") " pod="openshift-ovn-kubernetes/ovnkube-node-h8fwp" Mar 13 12:36:58.599866 master-0 kubenswrapper[4143]: I0313 12:36:58.599712 4143 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/d6226325-c4d9-497e-8d19-a71adc66c5ac-host-kubelet\") pod \"ovnkube-node-h8fwp\" (UID: \"d6226325-c4d9-497e-8d19-a71adc66c5ac\") " pod="openshift-ovn-kubernetes/ovnkube-node-h8fwp" Mar 13 12:36:58.599866 master-0 kubenswrapper[4143]: I0313 12:36:58.599732 4143 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: 
\"kubernetes.io/host-path/d6226325-c4d9-497e-8d19-a71adc66c5ac-systemd-units\") pod \"ovnkube-node-h8fwp\" (UID: \"d6226325-c4d9-497e-8d19-a71adc66c5ac\") " pod="openshift-ovn-kubernetes/ovnkube-node-h8fwp" Mar 13 12:36:58.599866 master-0 kubenswrapper[4143]: I0313 12:36:58.599761 4143 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d6226325-c4d9-497e-8d19-a71adc66c5ac-host-slash\") pod \"ovnkube-node-h8fwp\" (UID: \"d6226325-c4d9-497e-8d19-a71adc66c5ac\") " pod="openshift-ovn-kubernetes/ovnkube-node-h8fwp" Mar 13 12:36:58.599866 master-0 kubenswrapper[4143]: I0313 12:36:58.599818 4143 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/d6226325-c4d9-497e-8d19-a71adc66c5ac-host-run-netns\") pod \"ovnkube-node-h8fwp\" (UID: \"d6226325-c4d9-497e-8d19-a71adc66c5ac\") " pod="openshift-ovn-kubernetes/ovnkube-node-h8fwp" Mar 13 12:36:58.599866 master-0 kubenswrapper[4143]: I0313 12:36:58.599833 4143 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d6226325-c4d9-497e-8d19-a71adc66c5ac-host-slash\") pod \"ovnkube-node-h8fwp\" (UID: \"d6226325-c4d9-497e-8d19-a71adc66c5ac\") " pod="openshift-ovn-kubernetes/ovnkube-node-h8fwp" Mar 13 12:36:58.599866 master-0 kubenswrapper[4143]: I0313 12:36:58.599853 4143 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/d6226325-c4d9-497e-8d19-a71adc66c5ac-run-ovn\") pod \"ovnkube-node-h8fwp\" (UID: \"d6226325-c4d9-497e-8d19-a71adc66c5ac\") " pod="openshift-ovn-kubernetes/ovnkube-node-h8fwp" Mar 13 12:36:58.600539 master-0 kubenswrapper[4143]: I0313 12:36:58.599917 4143 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: 
\"kubernetes.io/host-path/d6226325-c4d9-497e-8d19-a71adc66c5ac-host-run-netns\") pod \"ovnkube-node-h8fwp\" (UID: \"d6226325-c4d9-497e-8d19-a71adc66c5ac\") " pod="openshift-ovn-kubernetes/ovnkube-node-h8fwp" Mar 13 12:36:58.600539 master-0 kubenswrapper[4143]: I0313 12:36:58.599918 4143 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/d6226325-c4d9-497e-8d19-a71adc66c5ac-host-kubelet\") pod \"ovnkube-node-h8fwp\" (UID: \"d6226325-c4d9-497e-8d19-a71adc66c5ac\") " pod="openshift-ovn-kubernetes/ovnkube-node-h8fwp" Mar 13 12:36:58.600539 master-0 kubenswrapper[4143]: I0313 12:36:58.599954 4143 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/d6226325-c4d9-497e-8d19-a71adc66c5ac-env-overrides\") pod \"ovnkube-node-h8fwp\" (UID: \"d6226325-c4d9-497e-8d19-a71adc66c5ac\") " pod="openshift-ovn-kubernetes/ovnkube-node-h8fwp" Mar 13 12:36:58.600539 master-0 kubenswrapper[4143]: I0313 12:36:58.599974 4143 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/d6226325-c4d9-497e-8d19-a71adc66c5ac-run-ovn\") pod \"ovnkube-node-h8fwp\" (UID: \"d6226325-c4d9-497e-8d19-a71adc66c5ac\") " pod="openshift-ovn-kubernetes/ovnkube-node-h8fwp" Mar 13 12:36:58.600539 master-0 kubenswrapper[4143]: I0313 12:36:58.600000 4143 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/d6226325-c4d9-497e-8d19-a71adc66c5ac-ovn-node-metrics-cert\") pod \"ovnkube-node-h8fwp\" (UID: \"d6226325-c4d9-497e-8d19-a71adc66c5ac\") " pod="openshift-ovn-kubernetes/ovnkube-node-h8fwp" Mar 13 12:36:58.600539 master-0 kubenswrapper[4143]: I0313 12:36:58.600047 4143 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: 
\"kubernetes.io/host-path/d6226325-c4d9-497e-8d19-a71adc66c5ac-etc-openvswitch\") pod \"ovnkube-node-h8fwp\" (UID: \"d6226325-c4d9-497e-8d19-a71adc66c5ac\") " pod="openshift-ovn-kubernetes/ovnkube-node-h8fwp" Mar 13 12:36:58.600539 master-0 kubenswrapper[4143]: I0313 12:36:58.600087 4143 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/d6226325-c4d9-497e-8d19-a71adc66c5ac-host-run-ovn-kubernetes\") pod \"ovnkube-node-h8fwp\" (UID: \"d6226325-c4d9-497e-8d19-a71adc66c5ac\") " pod="openshift-ovn-kubernetes/ovnkube-node-h8fwp" Mar 13 12:36:58.600539 master-0 kubenswrapper[4143]: I0313 12:36:58.600121 4143 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/d6226325-c4d9-497e-8d19-a71adc66c5ac-run-systemd\") pod \"ovnkube-node-h8fwp\" (UID: \"d6226325-c4d9-497e-8d19-a71adc66c5ac\") " pod="openshift-ovn-kubernetes/ovnkube-node-h8fwp" Mar 13 12:36:58.600539 master-0 kubenswrapper[4143]: I0313 12:36:58.600196 4143 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/d6226325-c4d9-497e-8d19-a71adc66c5ac-run-openvswitch\") pod \"ovnkube-node-h8fwp\" (UID: \"d6226325-c4d9-497e-8d19-a71adc66c5ac\") " pod="openshift-ovn-kubernetes/ovnkube-node-h8fwp" Mar 13 12:36:58.600539 master-0 kubenswrapper[4143]: I0313 12:36:58.600247 4143 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/d6226325-c4d9-497e-8d19-a71adc66c5ac-host-cni-netd\") pod \"ovnkube-node-h8fwp\" (UID: \"d6226325-c4d9-497e-8d19-a71adc66c5ac\") " pod="openshift-ovn-kubernetes/ovnkube-node-h8fwp" Mar 13 12:36:58.600539 master-0 kubenswrapper[4143]: I0313 12:36:58.600281 4143 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: 
\"kubernetes.io/host-path/d6226325-c4d9-497e-8d19-a71adc66c5ac-host-cni-bin\") pod \"ovnkube-node-h8fwp\" (UID: \"d6226325-c4d9-497e-8d19-a71adc66c5ac\") " pod="openshift-ovn-kubernetes/ovnkube-node-h8fwp"
Mar 13 12:36:58.600539 master-0 kubenswrapper[4143]: I0313 12:36:58.600316 4143 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/d6226325-c4d9-497e-8d19-a71adc66c5ac-var-lib-openvswitch\") pod \"ovnkube-node-h8fwp\" (UID: \"d6226325-c4d9-497e-8d19-a71adc66c5ac\") " pod="openshift-ovn-kubernetes/ovnkube-node-h8fwp"
Mar 13 12:36:58.600539 master-0 kubenswrapper[4143]: I0313 12:36:58.600350 4143 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/d6226325-c4d9-497e-8d19-a71adc66c5ac-node-log\") pod \"ovnkube-node-h8fwp\" (UID: \"d6226325-c4d9-497e-8d19-a71adc66c5ac\") " pod="openshift-ovn-kubernetes/ovnkube-node-h8fwp"
Mar 13 12:36:58.600539 master-0 kubenswrapper[4143]: I0313 12:36:58.600385 4143 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/d6226325-c4d9-497e-8d19-a71adc66c5ac-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-h8fwp\" (UID: \"d6226325-c4d9-497e-8d19-a71adc66c5ac\") " pod="openshift-ovn-kubernetes/ovnkube-node-h8fwp"
Mar 13 12:36:58.600539 master-0 kubenswrapper[4143]: I0313 12:36:58.600422 4143 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/d6226325-c4d9-497e-8d19-a71adc66c5ac-log-socket\") pod \"ovnkube-node-h8fwp\" (UID: \"d6226325-c4d9-497e-8d19-a71adc66c5ac\") " pod="openshift-ovn-kubernetes/ovnkube-node-h8fwp"
Mar 13 12:36:58.600539 master-0 kubenswrapper[4143]: I0313 12:36:58.600450 4143 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/d6226325-c4d9-497e-8d19-a71adc66c5ac-ovnkube-config\") pod \"ovnkube-node-h8fwp\" (UID: \"d6226325-c4d9-497e-8d19-a71adc66c5ac\") " pod="openshift-ovn-kubernetes/ovnkube-node-h8fwp"
Mar 13 12:36:58.600539 master-0 kubenswrapper[4143]: I0313 12:36:58.600482 4143 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/d6226325-c4d9-497e-8d19-a71adc66c5ac-ovnkube-script-lib\") pod \"ovnkube-node-h8fwp\" (UID: \"d6226325-c4d9-497e-8d19-a71adc66c5ac\") " pod="openshift-ovn-kubernetes/ovnkube-node-h8fwp"
Mar 13 12:36:58.601984 master-0 kubenswrapper[4143]: I0313 12:36:58.600568 4143 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4j5fc\" (UniqueName: \"kubernetes.io/projected/d6226325-c4d9-497e-8d19-a71adc66c5ac-kube-api-access-4j5fc\") pod \"ovnkube-node-h8fwp\" (UID: \"d6226325-c4d9-497e-8d19-a71adc66c5ac\") " pod="openshift-ovn-kubernetes/ovnkube-node-h8fwp"
Mar 13 12:36:58.601984 master-0 kubenswrapper[4143]: I0313 12:36:58.600630 4143 reconciler_common.go:293] "Volume detached for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/7f5a7196-2bfe-4fc7-820f-c6d17bb24f29-run-systemd\") on node \"master-0\" DevicePath \"\""
Mar 13 12:36:58.601984 master-0 kubenswrapper[4143]: I0313 12:36:58.600660 4143 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/7f5a7196-2bfe-4fc7-820f-c6d17bb24f29-ovn-node-metrics-cert\") on node \"master-0\" DevicePath \"\""
Mar 13 12:36:58.601984 master-0 kubenswrapper[4143]: I0313 12:36:58.600682 4143 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2bz6x\" (UniqueName: \"kubernetes.io/projected/7f5a7196-2bfe-4fc7-820f-c6d17bb24f29-kube-api-access-2bz6x\") on node \"master-0\" DevicePath \"\""
Mar 13 12:36:58.601984 master-0 kubenswrapper[4143]: I0313 12:36:58.600882 4143 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/d6226325-c4d9-497e-8d19-a71adc66c5ac-env-overrides\") pod \"ovnkube-node-h8fwp\" (UID: \"d6226325-c4d9-497e-8d19-a71adc66c5ac\") " pod="openshift-ovn-kubernetes/ovnkube-node-h8fwp"
Mar 13 12:36:58.601984 master-0 kubenswrapper[4143]: I0313 12:36:58.600966 4143 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/d6226325-c4d9-497e-8d19-a71adc66c5ac-host-cni-bin\") pod \"ovnkube-node-h8fwp\" (UID: \"d6226325-c4d9-497e-8d19-a71adc66c5ac\") " pod="openshift-ovn-kubernetes/ovnkube-node-h8fwp"
Mar 13 12:36:58.601984 master-0 kubenswrapper[4143]: I0313 12:36:58.601242 4143 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/d6226325-c4d9-497e-8d19-a71adc66c5ac-var-lib-openvswitch\") pod \"ovnkube-node-h8fwp\" (UID: \"d6226325-c4d9-497e-8d19-a71adc66c5ac\") " pod="openshift-ovn-kubernetes/ovnkube-node-h8fwp"
Mar 13 12:36:58.601984 master-0 kubenswrapper[4143]: I0313 12:36:58.601300 4143 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/d6226325-c4d9-497e-8d19-a71adc66c5ac-node-log\") pod \"ovnkube-node-h8fwp\" (UID: \"d6226325-c4d9-497e-8d19-a71adc66c5ac\") " pod="openshift-ovn-kubernetes/ovnkube-node-h8fwp"
Mar 13 12:36:58.601984 master-0 kubenswrapper[4143]: I0313 12:36:58.601344 4143 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/d6226325-c4d9-497e-8d19-a71adc66c5ac-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-h8fwp\" (UID: \"d6226325-c4d9-497e-8d19-a71adc66c5ac\") " pod="openshift-ovn-kubernetes/ovnkube-node-h8fwp"
Mar 13 12:36:58.601984 master-0 kubenswrapper[4143]: I0313 12:36:58.601391 4143 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/d6226325-c4d9-497e-8d19-a71adc66c5ac-log-socket\") pod \"ovnkube-node-h8fwp\" (UID: \"d6226325-c4d9-497e-8d19-a71adc66c5ac\") " pod="openshift-ovn-kubernetes/ovnkube-node-h8fwp"
Mar 13 12:36:58.601984 master-0 kubenswrapper[4143]: I0313 12:36:58.601644 4143 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/d6226325-c4d9-497e-8d19-a71adc66c5ac-run-systemd\") pod \"ovnkube-node-h8fwp\" (UID: \"d6226325-c4d9-497e-8d19-a71adc66c5ac\") " pod="openshift-ovn-kubernetes/ovnkube-node-h8fwp"
Mar 13 12:36:58.601984 master-0 kubenswrapper[4143]: I0313 12:36:58.601720 4143 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/d6226325-c4d9-497e-8d19-a71adc66c5ac-run-openvswitch\") pod \"ovnkube-node-h8fwp\" (UID: \"d6226325-c4d9-497e-8d19-a71adc66c5ac\") " pod="openshift-ovn-kubernetes/ovnkube-node-h8fwp"
Mar 13 12:36:58.601984 master-0 kubenswrapper[4143]: I0313 12:36:58.601758 4143 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/d6226325-c4d9-497e-8d19-a71adc66c5ac-host-run-ovn-kubernetes\") pod \"ovnkube-node-h8fwp\" (UID: \"d6226325-c4d9-497e-8d19-a71adc66c5ac\") " pod="openshift-ovn-kubernetes/ovnkube-node-h8fwp"
Mar 13 12:36:58.601984 master-0 kubenswrapper[4143]: I0313 12:36:58.601808 4143 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/d6226325-c4d9-497e-8d19-a71adc66c5ac-etc-openvswitch\") pod \"ovnkube-node-h8fwp\" (UID: \"d6226325-c4d9-497e-8d19-a71adc66c5ac\") " pod="openshift-ovn-kubernetes/ovnkube-node-h8fwp"
Mar 13 12:36:58.601984 master-0 kubenswrapper[4143]: I0313 12:36:58.601769 4143 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/d6226325-c4d9-497e-8d19-a71adc66c5ac-host-cni-netd\") pod \"ovnkube-node-h8fwp\" (UID: \"d6226325-c4d9-497e-8d19-a71adc66c5ac\") " pod="openshift-ovn-kubernetes/ovnkube-node-h8fwp"
Mar 13 12:36:58.603174 master-0 kubenswrapper[4143]: I0313 12:36:58.603083 4143 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/d6226325-c4d9-497e-8d19-a71adc66c5ac-ovnkube-script-lib\") pod \"ovnkube-node-h8fwp\" (UID: \"d6226325-c4d9-497e-8d19-a71adc66c5ac\") " pod="openshift-ovn-kubernetes/ovnkube-node-h8fwp"
Mar 13 12:36:58.603402 master-0 kubenswrapper[4143]: I0313 12:36:58.603341 4143 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/d6226325-c4d9-497e-8d19-a71adc66c5ac-ovnkube-config\") pod \"ovnkube-node-h8fwp\" (UID: \"d6226325-c4d9-497e-8d19-a71adc66c5ac\") " pod="openshift-ovn-kubernetes/ovnkube-node-h8fwp"
Mar 13 12:36:58.606513 master-0 kubenswrapper[4143]: I0313 12:36:58.606414 4143 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/d6226325-c4d9-497e-8d19-a71adc66c5ac-ovn-node-metrics-cert\") pod \"ovnkube-node-h8fwp\" (UID: \"d6226325-c4d9-497e-8d19-a71adc66c5ac\") " pod="openshift-ovn-kubernetes/ovnkube-node-h8fwp"
Mar 13 12:36:58.630943 master-0 kubenswrapper[4143]: I0313 12:36:58.630855 4143 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4j5fc\" (UniqueName: \"kubernetes.io/projected/d6226325-c4d9-497e-8d19-a71adc66c5ac-kube-api-access-4j5fc\") pod \"ovnkube-node-h8fwp\" (UID: \"d6226325-c4d9-497e-8d19-a71adc66c5ac\") " pod="openshift-ovn-kubernetes/ovnkube-node-h8fwp"
Mar 13 12:36:58.707946 master-0 kubenswrapper[4143]: I0313 12:36:58.707866 4143 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-h8fwp"
Mar 13 12:36:58.729415 master-0 kubenswrapper[4143]: W0313 12:36:58.729330 4143 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd6226325_c4d9_497e_8d19_a71adc66c5ac.slice/crio-7066c2bb7f28cfd07ac1eb011cdc9849969ed5f37788da395910309c70481aa9 WatchSource:0}: Error finding container 7066c2bb7f28cfd07ac1eb011cdc9849969ed5f37788da395910309c70481aa9: Status 404 returned error can't find the container with id 7066c2bb7f28cfd07ac1eb011cdc9849969ed5f37788da395910309c70481aa9
Mar 13 12:36:58.746328 master-0 kubenswrapper[4143]: I0313 12:36:58.746276 4143 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-fn8qb_7f5a7196-2bfe-4fc7-820f-c6d17bb24f29/ovnkube-controller/0.log"
Mar 13 12:36:58.748892 master-0 kubenswrapper[4143]: I0313 12:36:58.748774 4143 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-fn8qb_7f5a7196-2bfe-4fc7-820f-c6d17bb24f29/kube-rbac-proxy-ovn-metrics/0.log"
Mar 13 12:36:58.750147 master-0 kubenswrapper[4143]: I0313 12:36:58.750051 4143 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-fn8qb_7f5a7196-2bfe-4fc7-820f-c6d17bb24f29/kube-rbac-proxy-node/0.log"
Mar 13 12:36:58.751006 master-0 kubenswrapper[4143]: I0313 12:36:58.750968 4143 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-fn8qb_7f5a7196-2bfe-4fc7-820f-c6d17bb24f29/ovn-acl-logging/0.log"
Mar 13 12:36:58.751535 master-0 kubenswrapper[4143]: I0313 12:36:58.751515 4143 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-fn8qb_7f5a7196-2bfe-4fc7-820f-c6d17bb24f29/ovn-controller/0.log"
Mar 13 12:36:58.751950 master-0 kubenswrapper[4143]: I0313 12:36:58.751906 4143 generic.go:334] "Generic (PLEG): container finished" podID="7f5a7196-2bfe-4fc7-820f-c6d17bb24f29" containerID="f66c3d50d03d2de5dddb4f7e19f6c7cb669052375e408c2fed698a0935972559" exitCode=1
Mar 13 12:36:58.752018 master-0 kubenswrapper[4143]: I0313 12:36:58.751952 4143 generic.go:334] "Generic (PLEG): container finished" podID="7f5a7196-2bfe-4fc7-820f-c6d17bb24f29" containerID="26d60ce545e04baaff4e16e1c413ed5f6769b41cdf16aa60858627ecc65e60dc" exitCode=0
Mar 13 12:36:58.752018 master-0 kubenswrapper[4143]: I0313 12:36:58.751969 4143 generic.go:334] "Generic (PLEG): container finished" podID="7f5a7196-2bfe-4fc7-820f-c6d17bb24f29" containerID="dfcc7ba8e6f38cdbcccc13ac0bed6edf4235599731530fdc8f0e49180eb57500" exitCode=0
Mar 13 12:36:58.752018 master-0 kubenswrapper[4143]: I0313 12:36:58.751982 4143 generic.go:334] "Generic (PLEG): container finished" podID="7f5a7196-2bfe-4fc7-820f-c6d17bb24f29" containerID="187719fca94ffeedf3940d581714ca4d60af4cef7371e95398b16a016f337793" exitCode=0
Mar 13 12:36:58.752018 master-0 kubenswrapper[4143]: I0313 12:36:58.751991 4143 generic.go:334] "Generic (PLEG): container finished" podID="7f5a7196-2bfe-4fc7-820f-c6d17bb24f29" containerID="40f13b76f53fd96d83bd5edac5930c74f120050da3f2295ff338497ec409d0bb" exitCode=143
Mar 13 12:36:58.752018 master-0 kubenswrapper[4143]: I0313 12:36:58.752007 4143 generic.go:334] "Generic (PLEG): container finished" podID="7f5a7196-2bfe-4fc7-820f-c6d17bb24f29" containerID="37f543b502c1eb3f430a0cbf3af3840c0270b596e9345eda56bfccb93e79cb5f" exitCode=143
Mar 13 12:36:58.752229 master-0 kubenswrapper[4143]: I0313 12:36:58.752022 4143 generic.go:334] "Generic (PLEG): container finished" podID="7f5a7196-2bfe-4fc7-820f-c6d17bb24f29" containerID="08ccdb90ac0437a98e0d0456f9b7d09f2d676c921b6aaf18c899a0864c1217d9" exitCode=143
Mar 13 12:36:58.752229 master-0 kubenswrapper[4143]: I0313 12:36:58.752035 4143 generic.go:334] "Generic (PLEG): container finished" podID="7f5a7196-2bfe-4fc7-820f-c6d17bb24f29" containerID="57dc4660cbf48746e762f15c513637ec0fc1b52c9466dd3c1d6abd72a80e4071" exitCode=143
Mar 13 12:36:58.752229 master-0 kubenswrapper[4143]: I0313 12:36:58.751979 4143 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fn8qb" event={"ID":"7f5a7196-2bfe-4fc7-820f-c6d17bb24f29","Type":"ContainerDied","Data":"f66c3d50d03d2de5dddb4f7e19f6c7cb669052375e408c2fed698a0935972559"}
Mar 13 12:36:58.752229 master-0 kubenswrapper[4143]: I0313 12:36:58.752099 4143 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fn8qb" event={"ID":"7f5a7196-2bfe-4fc7-820f-c6d17bb24f29","Type":"ContainerDied","Data":"26d60ce545e04baaff4e16e1c413ed5f6769b41cdf16aa60858627ecc65e60dc"}
Mar 13 12:36:58.752229 master-0 kubenswrapper[4143]: I0313 12:36:58.752064 4143 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-fn8qb"
Mar 13 12:36:58.752229 master-0 kubenswrapper[4143]: I0313 12:36:58.752175 4143 scope.go:117] "RemoveContainer" containerID="f66c3d50d03d2de5dddb4f7e19f6c7cb669052375e408c2fed698a0935972559"
Mar 13 12:36:58.752427 master-0 kubenswrapper[4143]: I0313 12:36:58.752115 4143 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fn8qb" event={"ID":"7f5a7196-2bfe-4fc7-820f-c6d17bb24f29","Type":"ContainerDied","Data":"dfcc7ba8e6f38cdbcccc13ac0bed6edf4235599731530fdc8f0e49180eb57500"}
Mar 13 12:36:58.752427 master-0 kubenswrapper[4143]: I0313 12:36:58.752304 4143 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fn8qb" event={"ID":"7f5a7196-2bfe-4fc7-820f-c6d17bb24f29","Type":"ContainerDied","Data":"187719fca94ffeedf3940d581714ca4d60af4cef7371e95398b16a016f337793"}
Mar 13 12:36:58.752427 master-0 kubenswrapper[4143]: I0313 12:36:58.752349 4143 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fn8qb" event={"ID":"7f5a7196-2bfe-4fc7-820f-c6d17bb24f29","Type":"ContainerDied","Data":"40f13b76f53fd96d83bd5edac5930c74f120050da3f2295ff338497ec409d0bb"}
Mar 13 12:36:58.752427 master-0 kubenswrapper[4143]: I0313 12:36:58.752379 4143 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fn8qb" event={"ID":"7f5a7196-2bfe-4fc7-820f-c6d17bb24f29","Type":"ContainerDied","Data":"37f543b502c1eb3f430a0cbf3af3840c0270b596e9345eda56bfccb93e79cb5f"}
Mar 13 12:36:58.752798 master-0 kubenswrapper[4143]: I0313 12:36:58.752408 4143 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"08ccdb90ac0437a98e0d0456f9b7d09f2d676c921b6aaf18c899a0864c1217d9"}
Mar 13 12:36:58.752798 master-0 kubenswrapper[4143]: I0313 12:36:58.752783 4143 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"57dc4660cbf48746e762f15c513637ec0fc1b52c9466dd3c1d6abd72a80e4071"}
Mar 13 12:36:58.752883 master-0 kubenswrapper[4143]: I0313 12:36:58.752806 4143 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"5d2c8326278b34b8ff59a9f9976ab41ef419174f1a3fbfc2a6b45ed48ed205d9"}
Mar 13 12:36:58.752883 master-0 kubenswrapper[4143]: I0313 12:36:58.752867 4143 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fn8qb" event={"ID":"7f5a7196-2bfe-4fc7-820f-c6d17bb24f29","Type":"ContainerDied","Data":"08ccdb90ac0437a98e0d0456f9b7d09f2d676c921b6aaf18c899a0864c1217d9"}
Mar 13 12:36:58.752957 master-0 kubenswrapper[4143]: I0313 12:36:58.752902 4143 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"f66c3d50d03d2de5dddb4f7e19f6c7cb669052375e408c2fed698a0935972559"}
Mar 13 12:36:58.752957 master-0 kubenswrapper[4143]: I0313 12:36:58.752923 4143 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"26d60ce545e04baaff4e16e1c413ed5f6769b41cdf16aa60858627ecc65e60dc"}
Mar 13 12:36:58.753024 master-0 kubenswrapper[4143]: I0313 12:36:58.752940 4143 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"dfcc7ba8e6f38cdbcccc13ac0bed6edf4235599731530fdc8f0e49180eb57500"}
Mar 13 12:36:58.753024 master-0 kubenswrapper[4143]: I0313 12:36:58.753008 4143 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"187719fca94ffeedf3940d581714ca4d60af4cef7371e95398b16a016f337793"}
Mar 13 12:36:58.753103 master-0 kubenswrapper[4143]: I0313 12:36:58.753055 4143 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"40f13b76f53fd96d83bd5edac5930c74f120050da3f2295ff338497ec409d0bb"}
Mar 13 12:36:58.753103 master-0 kubenswrapper[4143]: I0313 12:36:58.753074 4143 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"37f543b502c1eb3f430a0cbf3af3840c0270b596e9345eda56bfccb93e79cb5f"}
Mar 13 12:36:58.753103 master-0 kubenswrapper[4143]: I0313 12:36:58.753091 4143 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"08ccdb90ac0437a98e0d0456f9b7d09f2d676c921b6aaf18c899a0864c1217d9"}
Mar 13 12:36:58.753233 master-0 kubenswrapper[4143]: I0313 12:36:58.753107 4143 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"57dc4660cbf48746e762f15c513637ec0fc1b52c9466dd3c1d6abd72a80e4071"}
Mar 13 12:36:58.753233 master-0 kubenswrapper[4143]: I0313 12:36:58.753124 4143 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"5d2c8326278b34b8ff59a9f9976ab41ef419174f1a3fbfc2a6b45ed48ed205d9"}
Mar 13 12:36:58.753306 master-0 kubenswrapper[4143]: I0313 12:36:58.753226 4143 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fn8qb" event={"ID":"7f5a7196-2bfe-4fc7-820f-c6d17bb24f29","Type":"ContainerDied","Data":"57dc4660cbf48746e762f15c513637ec0fc1b52c9466dd3c1d6abd72a80e4071"}
Mar 13 12:36:58.753348 master-0 kubenswrapper[4143]: I0313 12:36:58.753317 4143 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"f66c3d50d03d2de5dddb4f7e19f6c7cb669052375e408c2fed698a0935972559"}
Mar 13 12:36:58.753385 master-0 kubenswrapper[4143]: I0313 12:36:58.753343 4143 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"26d60ce545e04baaff4e16e1c413ed5f6769b41cdf16aa60858627ecc65e60dc"}
Mar 13 12:36:58.753385 master-0 kubenswrapper[4143]: I0313 12:36:58.753361 4143 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"dfcc7ba8e6f38cdbcccc13ac0bed6edf4235599731530fdc8f0e49180eb57500"}
Mar 13 12:36:58.753385 master-0 kubenswrapper[4143]: I0313 12:36:58.753377 4143 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"187719fca94ffeedf3940d581714ca4d60af4cef7371e95398b16a016f337793"}
Mar 13 12:36:58.753491 master-0 kubenswrapper[4143]: I0313 12:36:58.753393 4143 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"40f13b76f53fd96d83bd5edac5930c74f120050da3f2295ff338497ec409d0bb"}
Mar 13 12:36:58.753491 master-0 kubenswrapper[4143]: I0313 12:36:58.753408 4143 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"37f543b502c1eb3f430a0cbf3af3840c0270b596e9345eda56bfccb93e79cb5f"}
Mar 13 12:36:58.753491 master-0 kubenswrapper[4143]: I0313 12:36:58.753424 4143 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"08ccdb90ac0437a98e0d0456f9b7d09f2d676c921b6aaf18c899a0864c1217d9"}
Mar 13 12:36:58.753491 master-0 kubenswrapper[4143]: I0313 12:36:58.753444 4143 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"57dc4660cbf48746e762f15c513637ec0fc1b52c9466dd3c1d6abd72a80e4071"}
Mar 13 12:36:58.753491 master-0 kubenswrapper[4143]: I0313 12:36:58.753460 4143 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"5d2c8326278b34b8ff59a9f9976ab41ef419174f1a3fbfc2a6b45ed48ed205d9"}
Mar 13 12:36:58.753491 master-0 kubenswrapper[4143]: I0313 12:36:58.753486 4143 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fn8qb" event={"ID":"7f5a7196-2bfe-4fc7-820f-c6d17bb24f29","Type":"ContainerDied","Data":"7bb07b8ac3a9143900e44c8646ee6fb8d832847a79c050ce5b93154ab39c7aad"}
Mar 13 12:36:58.753701 master-0 kubenswrapper[4143]: I0313 12:36:58.753517 4143 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"f66c3d50d03d2de5dddb4f7e19f6c7cb669052375e408c2fed698a0935972559"}
Mar 13 12:36:58.753701 master-0 kubenswrapper[4143]: I0313 12:36:58.753536 4143 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"26d60ce545e04baaff4e16e1c413ed5f6769b41cdf16aa60858627ecc65e60dc"}
Mar 13 12:36:58.753701 master-0 kubenswrapper[4143]: I0313 12:36:58.753552 4143 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"dfcc7ba8e6f38cdbcccc13ac0bed6edf4235599731530fdc8f0e49180eb57500"}
Mar 13 12:36:58.753701 master-0 kubenswrapper[4143]: I0313 12:36:58.753568 4143 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"187719fca94ffeedf3940d581714ca4d60af4cef7371e95398b16a016f337793"}
Mar 13 12:36:58.753701 master-0 kubenswrapper[4143]: I0313 12:36:58.753584 4143 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"40f13b76f53fd96d83bd5edac5930c74f120050da3f2295ff338497ec409d0bb"}
Mar 13 12:36:58.753701 master-0 kubenswrapper[4143]: I0313 12:36:58.753599 4143 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"37f543b502c1eb3f430a0cbf3af3840c0270b596e9345eda56bfccb93e79cb5f"}
Mar 13 12:36:58.753701 master-0 kubenswrapper[4143]: I0313 12:36:58.753616 4143 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"08ccdb90ac0437a98e0d0456f9b7d09f2d676c921b6aaf18c899a0864c1217d9"}
Mar 13 12:36:58.753701 master-0 kubenswrapper[4143]: I0313 12:36:58.753631 4143 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"57dc4660cbf48746e762f15c513637ec0fc1b52c9466dd3c1d6abd72a80e4071"}
Mar 13 12:36:58.753701 master-0 kubenswrapper[4143]: I0313 12:36:58.753647 4143 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"5d2c8326278b34b8ff59a9f9976ab41ef419174f1a3fbfc2a6b45ed48ed205d9"}
Mar 13 12:36:58.754117 master-0 kubenswrapper[4143]: I0313 12:36:58.753767 4143 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-h8fwp" event={"ID":"d6226325-c4d9-497e-8d19-a71adc66c5ac","Type":"ContainerStarted","Data":"7066c2bb7f28cfd07ac1eb011cdc9849969ed5f37788da395910309c70481aa9"}
Mar 13 12:36:58.810328 master-0 kubenswrapper[4143]: I0313 12:36:58.810248 4143 scope.go:117] "RemoveContainer" containerID="26d60ce545e04baaff4e16e1c413ed5f6769b41cdf16aa60858627ecc65e60dc"
Mar 13 12:36:58.831096 master-0 kubenswrapper[4143]: I0313 12:36:58.830918 4143 scope.go:117] "RemoveContainer" containerID="dfcc7ba8e6f38cdbcccc13ac0bed6edf4235599731530fdc8f0e49180eb57500"
Mar 13 12:36:58.844121 master-0 kubenswrapper[4143]: I0313 12:36:58.844092 4143 scope.go:117] "RemoveContainer" containerID="187719fca94ffeedf3940d581714ca4d60af4cef7371e95398b16a016f337793"
Mar 13 12:36:58.849227 master-0 kubenswrapper[4143]: I0313 12:36:58.849116 4143 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-fn8qb"]
Mar 13 12:36:58.851832 master-0 kubenswrapper[4143]: I0313 12:36:58.851756 4143 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-fn8qb"]
Mar 13 12:36:58.859724 master-0 kubenswrapper[4143]: I0313 12:36:58.859612 4143 scope.go:117] "RemoveContainer" containerID="40f13b76f53fd96d83bd5edac5930c74f120050da3f2295ff338497ec409d0bb"
Mar 13 12:36:58.872933 master-0 kubenswrapper[4143]: I0313 12:36:58.872767 4143 scope.go:117] "RemoveContainer" containerID="37f543b502c1eb3f430a0cbf3af3840c0270b596e9345eda56bfccb93e79cb5f"
Mar 13 12:36:58.885564 master-0 kubenswrapper[4143]: I0313 12:36:58.885333 4143 scope.go:117] "RemoveContainer" containerID="08ccdb90ac0437a98e0d0456f9b7d09f2d676c921b6aaf18c899a0864c1217d9"
Mar 13 12:36:58.938347 master-0 kubenswrapper[4143]: I0313 12:36:58.938301 4143 scope.go:117] "RemoveContainer" containerID="57dc4660cbf48746e762f15c513637ec0fc1b52c9466dd3c1d6abd72a80e4071"
Mar 13 12:36:58.951970 master-0 kubenswrapper[4143]: I0313 12:36:58.951919 4143 scope.go:117] "RemoveContainer" containerID="5d2c8326278b34b8ff59a9f9976ab41ef419174f1a3fbfc2a6b45ed48ed205d9"
Mar 13 12:36:58.961876 master-0 kubenswrapper[4143]: I0313 12:36:58.961833 4143 scope.go:117] "RemoveContainer" containerID="f66c3d50d03d2de5dddb4f7e19f6c7cb669052375e408c2fed698a0935972559"
Mar 13 12:36:58.962613 master-0 kubenswrapper[4143]: E0313 12:36:58.962579 4143 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f66c3d50d03d2de5dddb4f7e19f6c7cb669052375e408c2fed698a0935972559\": container with ID starting with f66c3d50d03d2de5dddb4f7e19f6c7cb669052375e408c2fed698a0935972559 not found: ID does not exist" containerID="f66c3d50d03d2de5dddb4f7e19f6c7cb669052375e408c2fed698a0935972559"
Mar 13 12:36:58.962720 master-0 kubenswrapper[4143]: I0313 12:36:58.962613 4143 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f66c3d50d03d2de5dddb4f7e19f6c7cb669052375e408c2fed698a0935972559"} err="failed to get container status \"f66c3d50d03d2de5dddb4f7e19f6c7cb669052375e408c2fed698a0935972559\": rpc error: code = NotFound desc = could not find container \"f66c3d50d03d2de5dddb4f7e19f6c7cb669052375e408c2fed698a0935972559\": container with ID starting with f66c3d50d03d2de5dddb4f7e19f6c7cb669052375e408c2fed698a0935972559 not found: ID does not exist"
Mar 13 12:36:58.962720 master-0 kubenswrapper[4143]: I0313 12:36:58.962641 4143 scope.go:117] "RemoveContainer" containerID="26d60ce545e04baaff4e16e1c413ed5f6769b41cdf16aa60858627ecc65e60dc"
Mar 13 12:36:58.963058 master-0 kubenswrapper[4143]: E0313 12:36:58.963036 4143 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"26d60ce545e04baaff4e16e1c413ed5f6769b41cdf16aa60858627ecc65e60dc\": container with ID starting with 26d60ce545e04baaff4e16e1c413ed5f6769b41cdf16aa60858627ecc65e60dc not found: ID does not exist" containerID="26d60ce545e04baaff4e16e1c413ed5f6769b41cdf16aa60858627ecc65e60dc"
Mar 13 12:36:58.963121 master-0 kubenswrapper[4143]: I0313 12:36:58.963061 4143 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"26d60ce545e04baaff4e16e1c413ed5f6769b41cdf16aa60858627ecc65e60dc"} err="failed to get container status \"26d60ce545e04baaff4e16e1c413ed5f6769b41cdf16aa60858627ecc65e60dc\": rpc error: code = NotFound desc = could not find container \"26d60ce545e04baaff4e16e1c413ed5f6769b41cdf16aa60858627ecc65e60dc\": container with ID starting with 26d60ce545e04baaff4e16e1c413ed5f6769b41cdf16aa60858627ecc65e60dc not found: ID does not exist"
Mar 13 12:36:58.963121 master-0 kubenswrapper[4143]: I0313 12:36:58.963077 4143 scope.go:117] "RemoveContainer" containerID="dfcc7ba8e6f38cdbcccc13ac0bed6edf4235599731530fdc8f0e49180eb57500"
Mar 13 12:36:58.964513 master-0 kubenswrapper[4143]: E0313 12:36:58.964452 4143 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dfcc7ba8e6f38cdbcccc13ac0bed6edf4235599731530fdc8f0e49180eb57500\": container with ID starting with dfcc7ba8e6f38cdbcccc13ac0bed6edf4235599731530fdc8f0e49180eb57500 not found: ID does not exist" containerID="dfcc7ba8e6f38cdbcccc13ac0bed6edf4235599731530fdc8f0e49180eb57500"
Mar 13 12:36:58.964563 master-0 kubenswrapper[4143]: I0313 12:36:58.964523 4143 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dfcc7ba8e6f38cdbcccc13ac0bed6edf4235599731530fdc8f0e49180eb57500"} err="failed to get container status \"dfcc7ba8e6f38cdbcccc13ac0bed6edf4235599731530fdc8f0e49180eb57500\": rpc error: code = NotFound desc = could not find container \"dfcc7ba8e6f38cdbcccc13ac0bed6edf4235599731530fdc8f0e49180eb57500\": container with ID starting with dfcc7ba8e6f38cdbcccc13ac0bed6edf4235599731530fdc8f0e49180eb57500 not found: ID does not exist"
Mar 13 12:36:58.964612 master-0 kubenswrapper[4143]: I0313 12:36:58.964574 4143 scope.go:117] "RemoveContainer" containerID="187719fca94ffeedf3940d581714ca4d60af4cef7371e95398b16a016f337793"
Mar 13 12:36:58.964989 master-0 kubenswrapper[4143]: E0313 12:36:58.964951 4143 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"187719fca94ffeedf3940d581714ca4d60af4cef7371e95398b16a016f337793\": container with ID starting with 187719fca94ffeedf3940d581714ca4d60af4cef7371e95398b16a016f337793 not found: ID does not exist" containerID="187719fca94ffeedf3940d581714ca4d60af4cef7371e95398b16a016f337793"
Mar 13 12:36:58.965034 master-0 kubenswrapper[4143]: I0313 12:36:58.964987 4143 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"187719fca94ffeedf3940d581714ca4d60af4cef7371e95398b16a016f337793"} err="failed to get container status \"187719fca94ffeedf3940d581714ca4d60af4cef7371e95398b16a016f337793\": rpc error: code = NotFound desc = could not find container \"187719fca94ffeedf3940d581714ca4d60af4cef7371e95398b16a016f337793\": container with ID starting with 187719fca94ffeedf3940d581714ca4d60af4cef7371e95398b16a016f337793 not found: ID does not exist"
Mar 13 12:36:58.965034 master-0 kubenswrapper[4143]: I0313 12:36:58.965009 4143 scope.go:117] "RemoveContainer" containerID="40f13b76f53fd96d83bd5edac5930c74f120050da3f2295ff338497ec409d0bb"
Mar 13 12:36:58.966331 master-0 kubenswrapper[4143]: E0313 12:36:58.966135 4143 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"40f13b76f53fd96d83bd5edac5930c74f120050da3f2295ff338497ec409d0bb\": container with ID starting with 40f13b76f53fd96d83bd5edac5930c74f120050da3f2295ff338497ec409d0bb not found: ID does not exist" containerID="40f13b76f53fd96d83bd5edac5930c74f120050da3f2295ff338497ec409d0bb"
Mar 13 12:36:58.966385 master-0 kubenswrapper[4143]: I0313 12:36:58.966327 4143 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"40f13b76f53fd96d83bd5edac5930c74f120050da3f2295ff338497ec409d0bb"} err="failed to get container status \"40f13b76f53fd96d83bd5edac5930c74f120050da3f2295ff338497ec409d0bb\": rpc error: code = NotFound desc = could not find container \"40f13b76f53fd96d83bd5edac5930c74f120050da3f2295ff338497ec409d0bb\": container with ID starting with 40f13b76f53fd96d83bd5edac5930c74f120050da3f2295ff338497ec409d0bb not found: ID does not exist"
Mar 13 12:36:58.966385 master-0 kubenswrapper[4143]: I0313 12:36:58.966345 4143 scope.go:117] "RemoveContainer" containerID="37f543b502c1eb3f430a0cbf3af3840c0270b596e9345eda56bfccb93e79cb5f"
Mar 13 12:36:58.966653 master-0 kubenswrapper[4143]: E0313 12:36:58.966625 4143 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"37f543b502c1eb3f430a0cbf3af3840c0270b596e9345eda56bfccb93e79cb5f\": container with ID starting with 37f543b502c1eb3f430a0cbf3af3840c0270b596e9345eda56bfccb93e79cb5f not found: ID does not exist" containerID="37f543b502c1eb3f430a0cbf3af3840c0270b596e9345eda56bfccb93e79cb5f"
Mar 13 12:36:58.966723 master-0 kubenswrapper[4143]: I0313 12:36:58.966648 4143 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"37f543b502c1eb3f430a0cbf3af3840c0270b596e9345eda56bfccb93e79cb5f"} err="failed to get container status \"37f543b502c1eb3f430a0cbf3af3840c0270b596e9345eda56bfccb93e79cb5f\": rpc error: code = NotFound desc = could not find container \"37f543b502c1eb3f430a0cbf3af3840c0270b596e9345eda56bfccb93e79cb5f\": container with ID starting with 37f543b502c1eb3f430a0cbf3af3840c0270b596e9345eda56bfccb93e79cb5f not found: ID does not exist"
Mar 13 12:36:58.966723 master-0 kubenswrapper[4143]: I0313 12:36:58.966717 4143 scope.go:117] "RemoveContainer" containerID="08ccdb90ac0437a98e0d0456f9b7d09f2d676c921b6aaf18c899a0864c1217d9"
Mar 13 12:36:58.966960 master-0 kubenswrapper[4143]: E0313 12:36:58.966930 4143 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"08ccdb90ac0437a98e0d0456f9b7d09f2d676c921b6aaf18c899a0864c1217d9\": container with ID starting with 08ccdb90ac0437a98e0d0456f9b7d09f2d676c921b6aaf18c899a0864c1217d9 not found: ID does not exist" containerID="08ccdb90ac0437a98e0d0456f9b7d09f2d676c921b6aaf18c899a0864c1217d9"
Mar 13 12:36:58.967007 master-0 kubenswrapper[4143]: I0313 12:36:58.966956 4143 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"08ccdb90ac0437a98e0d0456f9b7d09f2d676c921b6aaf18c899a0864c1217d9"} err="failed to get container status \"08ccdb90ac0437a98e0d0456f9b7d09f2d676c921b6aaf18c899a0864c1217d9\": rpc error: code = NotFound desc = could not find container \"08ccdb90ac0437a98e0d0456f9b7d09f2d676c921b6aaf18c899a0864c1217d9\": container with ID starting with 08ccdb90ac0437a98e0d0456f9b7d09f2d676c921b6aaf18c899a0864c1217d9 not found: ID does not exist"
Mar 13 12:36:58.967007 master-0 kubenswrapper[4143]: I0313 12:36:58.966972 4143 scope.go:117] "RemoveContainer" containerID="57dc4660cbf48746e762f15c513637ec0fc1b52c9466dd3c1d6abd72a80e4071"
Mar 13 12:36:58.967436 master-0 kubenswrapper[4143]: E0313 12:36:58.967312 4143 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"57dc4660cbf48746e762f15c513637ec0fc1b52c9466dd3c1d6abd72a80e4071\": container with ID starting with 57dc4660cbf48746e762f15c513637ec0fc1b52c9466dd3c1d6abd72a80e4071 not found: ID does not exist" containerID="57dc4660cbf48746e762f15c513637ec0fc1b52c9466dd3c1d6abd72a80e4071"
Mar 13 12:36:58.967436 master-0 kubenswrapper[4143]: I0313 12:36:58.967357 4143 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"57dc4660cbf48746e762f15c513637ec0fc1b52c9466dd3c1d6abd72a80e4071"} err="failed to get container status \"57dc4660cbf48746e762f15c513637ec0fc1b52c9466dd3c1d6abd72a80e4071\": rpc error: code = NotFound desc = could not find container \"57dc4660cbf48746e762f15c513637ec0fc1b52c9466dd3c1d6abd72a80e4071\": container with ID starting with 57dc4660cbf48746e762f15c513637ec0fc1b52c9466dd3c1d6abd72a80e4071 not found: ID does not exist"
Mar 13 12:36:58.967436 master-0 kubenswrapper[4143]: I0313 12:36:58.967387 4143 scope.go:117] "RemoveContainer" containerID="5d2c8326278b34b8ff59a9f9976ab41ef419174f1a3fbfc2a6b45ed48ed205d9"
Mar 13 12:36:58.967992 master-0 kubenswrapper[4143]: E0313 12:36:58.967956 4143 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5d2c8326278b34b8ff59a9f9976ab41ef419174f1a3fbfc2a6b45ed48ed205d9\": container with ID starting with 5d2c8326278b34b8ff59a9f9976ab41ef419174f1a3fbfc2a6b45ed48ed205d9 not found: ID does not exist" containerID="5d2c8326278b34b8ff59a9f9976ab41ef419174f1a3fbfc2a6b45ed48ed205d9"
Mar 13 12:36:58.967992 master-0 kubenswrapper[4143]: I0313 12:36:58.967982 4143 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5d2c8326278b34b8ff59a9f9976ab41ef419174f1a3fbfc2a6b45ed48ed205d9"} err="failed to get container status \"5d2c8326278b34b8ff59a9f9976ab41ef419174f1a3fbfc2a6b45ed48ed205d9\": rpc error: code = NotFound desc = could not find container \"5d2c8326278b34b8ff59a9f9976ab41ef419174f1a3fbfc2a6b45ed48ed205d9\": container with ID starting with 5d2c8326278b34b8ff59a9f9976ab41ef419174f1a3fbfc2a6b45ed48ed205d9 not found: ID does not exist"
Mar 13 12:36:58.968087 master-0 kubenswrapper[4143]: I0313 12:36:58.967999 4143 scope.go:117] "RemoveContainer" containerID="f66c3d50d03d2de5dddb4f7e19f6c7cb669052375e408c2fed698a0935972559"
Mar 13 12:36:58.968331 master-0 kubenswrapper[4143]: I0313 12:36:58.968294 4143 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f66c3d50d03d2de5dddb4f7e19f6c7cb669052375e408c2fed698a0935972559"} err="failed to get container status \"f66c3d50d03d2de5dddb4f7e19f6c7cb669052375e408c2fed698a0935972559\": rpc error: code = NotFound desc = could not find container \"f66c3d50d03d2de5dddb4f7e19f6c7cb669052375e408c2fed698a0935972559\": container with ID starting with f66c3d50d03d2de5dddb4f7e19f6c7cb669052375e408c2fed698a0935972559 not found: ID does not exist"
Mar 13 12:36:58.968331 master-0 kubenswrapper[4143]: I0313 12:36:58.968322 4143 scope.go:117] "RemoveContainer" containerID="26d60ce545e04baaff4e16e1c413ed5f6769b41cdf16aa60858627ecc65e60dc"
Mar 13 12:36:58.968629 master-0 kubenswrapper[4143]: I0313 12:36:58.968595 4143 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"26d60ce545e04baaff4e16e1c413ed5f6769b41cdf16aa60858627ecc65e60dc"} err="failed to get container status \"26d60ce545e04baaff4e16e1c413ed5f6769b41cdf16aa60858627ecc65e60dc\": rpc error: code = NotFound desc = could not find container \"26d60ce545e04baaff4e16e1c413ed5f6769b41cdf16aa60858627ecc65e60dc\": container with ID starting with 26d60ce545e04baaff4e16e1c413ed5f6769b41cdf16aa60858627ecc65e60dc not found: ID does not exist"
Mar 13 12:36:58.968629 master-0 kubenswrapper[4143]: I0313 12:36:58.968617 4143 scope.go:117] "RemoveContainer" containerID="dfcc7ba8e6f38cdbcccc13ac0bed6edf4235599731530fdc8f0e49180eb57500"
Mar 13 12:36:58.968991 master-0 kubenswrapper[4143]: I0313 12:36:58.968939 4143 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dfcc7ba8e6f38cdbcccc13ac0bed6edf4235599731530fdc8f0e49180eb57500"} err="failed to get container status \"dfcc7ba8e6f38cdbcccc13ac0bed6edf4235599731530fdc8f0e49180eb57500\": rpc error: code = NotFound desc = could not find container \"dfcc7ba8e6f38cdbcccc13ac0bed6edf4235599731530fdc8f0e49180eb57500\": container with ID starting with dfcc7ba8e6f38cdbcccc13ac0bed6edf4235599731530fdc8f0e49180eb57500 not found: ID does not exist"
Mar 13 12:36:58.968991 master-0 kubenswrapper[4143]: I0313 12:36:58.968985 4143 scope.go:117] "RemoveContainer" containerID="187719fca94ffeedf3940d581714ca4d60af4cef7371e95398b16a016f337793"
Mar 13 12:36:58.969356 master-0 kubenswrapper[4143]: I0313 12:36:58.969316 4143 pod_container_deletor.go:53]
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"187719fca94ffeedf3940d581714ca4d60af4cef7371e95398b16a016f337793"} err="failed to get container status \"187719fca94ffeedf3940d581714ca4d60af4cef7371e95398b16a016f337793\": rpc error: code = NotFound desc = could not find container \"187719fca94ffeedf3940d581714ca4d60af4cef7371e95398b16a016f337793\": container with ID starting with 187719fca94ffeedf3940d581714ca4d60af4cef7371e95398b16a016f337793 not found: ID does not exist" Mar 13 12:36:58.969356 master-0 kubenswrapper[4143]: I0313 12:36:58.969347 4143 scope.go:117] "RemoveContainer" containerID="40f13b76f53fd96d83bd5edac5930c74f120050da3f2295ff338497ec409d0bb" Mar 13 12:36:58.969637 master-0 kubenswrapper[4143]: I0313 12:36:58.969601 4143 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"40f13b76f53fd96d83bd5edac5930c74f120050da3f2295ff338497ec409d0bb"} err="failed to get container status \"40f13b76f53fd96d83bd5edac5930c74f120050da3f2295ff338497ec409d0bb\": rpc error: code = NotFound desc = could not find container \"40f13b76f53fd96d83bd5edac5930c74f120050da3f2295ff338497ec409d0bb\": container with ID starting with 40f13b76f53fd96d83bd5edac5930c74f120050da3f2295ff338497ec409d0bb not found: ID does not exist" Mar 13 12:36:58.969637 master-0 kubenswrapper[4143]: I0313 12:36:58.969631 4143 scope.go:117] "RemoveContainer" containerID="37f543b502c1eb3f430a0cbf3af3840c0270b596e9345eda56bfccb93e79cb5f" Mar 13 12:36:58.970021 master-0 kubenswrapper[4143]: I0313 12:36:58.969983 4143 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"37f543b502c1eb3f430a0cbf3af3840c0270b596e9345eda56bfccb93e79cb5f"} err="failed to get container status \"37f543b502c1eb3f430a0cbf3af3840c0270b596e9345eda56bfccb93e79cb5f\": rpc error: code = NotFound desc = could not find container \"37f543b502c1eb3f430a0cbf3af3840c0270b596e9345eda56bfccb93e79cb5f\": container with ID starting with 
37f543b502c1eb3f430a0cbf3af3840c0270b596e9345eda56bfccb93e79cb5f not found: ID does not exist"
Mar 13 12:36:58.970021 master-0 kubenswrapper[4143]: I0313 12:36:58.970007 4143 scope.go:117] "RemoveContainer" containerID="08ccdb90ac0437a98e0d0456f9b7d09f2d676c921b6aaf18c899a0864c1217d9"
Mar 13 12:36:58.970462 master-0 kubenswrapper[4143]: I0313 12:36:58.970424 4143 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"08ccdb90ac0437a98e0d0456f9b7d09f2d676c921b6aaf18c899a0864c1217d9"} err="failed to get container status \"08ccdb90ac0437a98e0d0456f9b7d09f2d676c921b6aaf18c899a0864c1217d9\": rpc error: code = NotFound desc = could not find container \"08ccdb90ac0437a98e0d0456f9b7d09f2d676c921b6aaf18c899a0864c1217d9\": container with ID starting with 08ccdb90ac0437a98e0d0456f9b7d09f2d676c921b6aaf18c899a0864c1217d9 not found: ID does not exist"
Mar 13 12:36:58.970462 master-0 kubenswrapper[4143]: I0313 12:36:58.970455 4143 scope.go:117] "RemoveContainer" containerID="57dc4660cbf48746e762f15c513637ec0fc1b52c9466dd3c1d6abd72a80e4071"
Mar 13 12:36:58.970756 master-0 kubenswrapper[4143]: I0313 12:36:58.970725 4143 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"57dc4660cbf48746e762f15c513637ec0fc1b52c9466dd3c1d6abd72a80e4071"} err="failed to get container status \"57dc4660cbf48746e762f15c513637ec0fc1b52c9466dd3c1d6abd72a80e4071\": rpc error: code = NotFound desc = could not find container \"57dc4660cbf48746e762f15c513637ec0fc1b52c9466dd3c1d6abd72a80e4071\": container with ID starting with 57dc4660cbf48746e762f15c513637ec0fc1b52c9466dd3c1d6abd72a80e4071 not found: ID does not exist"
Mar 13 12:36:58.970756 master-0 kubenswrapper[4143]: I0313 12:36:58.970747 4143 scope.go:117] "RemoveContainer" containerID="5d2c8326278b34b8ff59a9f9976ab41ef419174f1a3fbfc2a6b45ed48ed205d9"
Mar 13 12:36:58.971000 master-0 kubenswrapper[4143]: I0313 12:36:58.970971 4143 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5d2c8326278b34b8ff59a9f9976ab41ef419174f1a3fbfc2a6b45ed48ed205d9"} err="failed to get container status \"5d2c8326278b34b8ff59a9f9976ab41ef419174f1a3fbfc2a6b45ed48ed205d9\": rpc error: code = NotFound desc = could not find container \"5d2c8326278b34b8ff59a9f9976ab41ef419174f1a3fbfc2a6b45ed48ed205d9\": container with ID starting with 5d2c8326278b34b8ff59a9f9976ab41ef419174f1a3fbfc2a6b45ed48ed205d9 not found: ID does not exist"
Mar 13 12:36:58.971000 master-0 kubenswrapper[4143]: I0313 12:36:58.970992 4143 scope.go:117] "RemoveContainer" containerID="f66c3d50d03d2de5dddb4f7e19f6c7cb669052375e408c2fed698a0935972559"
Mar 13 12:36:58.971257 master-0 kubenswrapper[4143]: I0313 12:36:58.971227 4143 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f66c3d50d03d2de5dddb4f7e19f6c7cb669052375e408c2fed698a0935972559"} err="failed to get container status \"f66c3d50d03d2de5dddb4f7e19f6c7cb669052375e408c2fed698a0935972559\": rpc error: code = NotFound desc = could not find container \"f66c3d50d03d2de5dddb4f7e19f6c7cb669052375e408c2fed698a0935972559\": container with ID starting with f66c3d50d03d2de5dddb4f7e19f6c7cb669052375e408c2fed698a0935972559 not found: ID does not exist"
Mar 13 12:36:58.971257 master-0 kubenswrapper[4143]: I0313 12:36:58.971249 4143 scope.go:117] "RemoveContainer" containerID="26d60ce545e04baaff4e16e1c413ed5f6769b41cdf16aa60858627ecc65e60dc"
Mar 13 12:36:58.971533 master-0 kubenswrapper[4143]: I0313 12:36:58.971497 4143 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"26d60ce545e04baaff4e16e1c413ed5f6769b41cdf16aa60858627ecc65e60dc"} err="failed to get container status \"26d60ce545e04baaff4e16e1c413ed5f6769b41cdf16aa60858627ecc65e60dc\": rpc error: code = NotFound desc = could not find container \"26d60ce545e04baaff4e16e1c413ed5f6769b41cdf16aa60858627ecc65e60dc\": container with ID starting with 26d60ce545e04baaff4e16e1c413ed5f6769b41cdf16aa60858627ecc65e60dc not found: ID does not exist"
Mar 13 12:36:58.971533 master-0 kubenswrapper[4143]: I0313 12:36:58.971523 4143 scope.go:117] "RemoveContainer" containerID="dfcc7ba8e6f38cdbcccc13ac0bed6edf4235599731530fdc8f0e49180eb57500"
Mar 13 12:36:58.971788 master-0 kubenswrapper[4143]: I0313 12:36:58.971759 4143 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dfcc7ba8e6f38cdbcccc13ac0bed6edf4235599731530fdc8f0e49180eb57500"} err="failed to get container status \"dfcc7ba8e6f38cdbcccc13ac0bed6edf4235599731530fdc8f0e49180eb57500\": rpc error: code = NotFound desc = could not find container \"dfcc7ba8e6f38cdbcccc13ac0bed6edf4235599731530fdc8f0e49180eb57500\": container with ID starting with dfcc7ba8e6f38cdbcccc13ac0bed6edf4235599731530fdc8f0e49180eb57500 not found: ID does not exist"
Mar 13 12:36:58.971837 master-0 kubenswrapper[4143]: I0313 12:36:58.971786 4143 scope.go:117] "RemoveContainer" containerID="187719fca94ffeedf3940d581714ca4d60af4cef7371e95398b16a016f337793"
Mar 13 12:36:58.972106 master-0 kubenswrapper[4143]: I0313 12:36:58.972068 4143 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"187719fca94ffeedf3940d581714ca4d60af4cef7371e95398b16a016f337793"} err="failed to get container status \"187719fca94ffeedf3940d581714ca4d60af4cef7371e95398b16a016f337793\": rpc error: code = NotFound desc = could not find container \"187719fca94ffeedf3940d581714ca4d60af4cef7371e95398b16a016f337793\": container with ID starting with 187719fca94ffeedf3940d581714ca4d60af4cef7371e95398b16a016f337793 not found: ID does not exist"
Mar 13 12:36:58.972106 master-0 kubenswrapper[4143]: I0313 12:36:58.972098 4143 scope.go:117] "RemoveContainer" containerID="40f13b76f53fd96d83bd5edac5930c74f120050da3f2295ff338497ec409d0bb"
Mar 13 12:36:58.972397 master-0 kubenswrapper[4143]: I0313 12:36:58.972362 4143 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"40f13b76f53fd96d83bd5edac5930c74f120050da3f2295ff338497ec409d0bb"} err="failed to get container status \"40f13b76f53fd96d83bd5edac5930c74f120050da3f2295ff338497ec409d0bb\": rpc error: code = NotFound desc = could not find container \"40f13b76f53fd96d83bd5edac5930c74f120050da3f2295ff338497ec409d0bb\": container with ID starting with 40f13b76f53fd96d83bd5edac5930c74f120050da3f2295ff338497ec409d0bb not found: ID does not exist"
Mar 13 12:36:58.972397 master-0 kubenswrapper[4143]: I0313 12:36:58.972388 4143 scope.go:117] "RemoveContainer" containerID="37f543b502c1eb3f430a0cbf3af3840c0270b596e9345eda56bfccb93e79cb5f"
Mar 13 12:36:58.972667 master-0 kubenswrapper[4143]: I0313 12:36:58.972638 4143 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"37f543b502c1eb3f430a0cbf3af3840c0270b596e9345eda56bfccb93e79cb5f"} err="failed to get container status \"37f543b502c1eb3f430a0cbf3af3840c0270b596e9345eda56bfccb93e79cb5f\": rpc error: code = NotFound desc = could not find container \"37f543b502c1eb3f430a0cbf3af3840c0270b596e9345eda56bfccb93e79cb5f\": container with ID starting with 37f543b502c1eb3f430a0cbf3af3840c0270b596e9345eda56bfccb93e79cb5f not found: ID does not exist"
Mar 13 12:36:58.972667 master-0 kubenswrapper[4143]: I0313 12:36:58.972659 4143 scope.go:117] "RemoveContainer" containerID="08ccdb90ac0437a98e0d0456f9b7d09f2d676c921b6aaf18c899a0864c1217d9"
Mar 13 12:36:58.972935 master-0 kubenswrapper[4143]: I0313 12:36:58.972902 4143 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"08ccdb90ac0437a98e0d0456f9b7d09f2d676c921b6aaf18c899a0864c1217d9"} err="failed to get container status \"08ccdb90ac0437a98e0d0456f9b7d09f2d676c921b6aaf18c899a0864c1217d9\": rpc error: code = NotFound desc = could not find container \"08ccdb90ac0437a98e0d0456f9b7d09f2d676c921b6aaf18c899a0864c1217d9\": container with ID starting with 08ccdb90ac0437a98e0d0456f9b7d09f2d676c921b6aaf18c899a0864c1217d9 not found: ID does not exist"
Mar 13 12:36:58.972935 master-0 kubenswrapper[4143]: I0313 12:36:58.972928 4143 scope.go:117] "RemoveContainer" containerID="57dc4660cbf48746e762f15c513637ec0fc1b52c9466dd3c1d6abd72a80e4071"
Mar 13 12:36:58.973224 master-0 kubenswrapper[4143]: I0313 12:36:58.973201 4143 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"57dc4660cbf48746e762f15c513637ec0fc1b52c9466dd3c1d6abd72a80e4071"} err="failed to get container status \"57dc4660cbf48746e762f15c513637ec0fc1b52c9466dd3c1d6abd72a80e4071\": rpc error: code = NotFound desc = could not find container \"57dc4660cbf48746e762f15c513637ec0fc1b52c9466dd3c1d6abd72a80e4071\": container with ID starting with 57dc4660cbf48746e762f15c513637ec0fc1b52c9466dd3c1d6abd72a80e4071 not found: ID does not exist"
Mar 13 12:36:58.973271 master-0 kubenswrapper[4143]: I0313 12:36:58.973223 4143 scope.go:117] "RemoveContainer" containerID="5d2c8326278b34b8ff59a9f9976ab41ef419174f1a3fbfc2a6b45ed48ed205d9"
Mar 13 12:36:58.973485 master-0 kubenswrapper[4143]: I0313 12:36:58.973455 4143 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5d2c8326278b34b8ff59a9f9976ab41ef419174f1a3fbfc2a6b45ed48ed205d9"} err="failed to get container status \"5d2c8326278b34b8ff59a9f9976ab41ef419174f1a3fbfc2a6b45ed48ed205d9\": rpc error: code = NotFound desc = could not find container \"5d2c8326278b34b8ff59a9f9976ab41ef419174f1a3fbfc2a6b45ed48ed205d9\": container with ID starting with 5d2c8326278b34b8ff59a9f9976ab41ef419174f1a3fbfc2a6b45ed48ed205d9 not found: ID does not exist"
Mar 13 12:36:58.973527 master-0 kubenswrapper[4143]: I0313 12:36:58.973476 4143 scope.go:117] "RemoveContainer" containerID="f66c3d50d03d2de5dddb4f7e19f6c7cb669052375e408c2fed698a0935972559"
Mar 13 12:36:58.973818 master-0 kubenswrapper[4143]: I0313 12:36:58.973752 4143 pod_container_deletor.go:53]
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f66c3d50d03d2de5dddb4f7e19f6c7cb669052375e408c2fed698a0935972559"} err="failed to get container status \"f66c3d50d03d2de5dddb4f7e19f6c7cb669052375e408c2fed698a0935972559\": rpc error: code = NotFound desc = could not find container \"f66c3d50d03d2de5dddb4f7e19f6c7cb669052375e408c2fed698a0935972559\": container with ID starting with f66c3d50d03d2de5dddb4f7e19f6c7cb669052375e408c2fed698a0935972559 not found: ID does not exist"
Mar 13 12:36:58.973818 master-0 kubenswrapper[4143]: I0313 12:36:58.973809 4143 scope.go:117] "RemoveContainer" containerID="26d60ce545e04baaff4e16e1c413ed5f6769b41cdf16aa60858627ecc65e60dc"
Mar 13 12:36:58.974053 master-0 kubenswrapper[4143]: I0313 12:36:58.974022 4143 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"26d60ce545e04baaff4e16e1c413ed5f6769b41cdf16aa60858627ecc65e60dc"} err="failed to get container status \"26d60ce545e04baaff4e16e1c413ed5f6769b41cdf16aa60858627ecc65e60dc\": rpc error: code = NotFound desc = could not find container \"26d60ce545e04baaff4e16e1c413ed5f6769b41cdf16aa60858627ecc65e60dc\": container with ID starting with 26d60ce545e04baaff4e16e1c413ed5f6769b41cdf16aa60858627ecc65e60dc not found: ID does not exist"
Mar 13 12:36:58.974053 master-0 kubenswrapper[4143]: I0313 12:36:58.974044 4143 scope.go:117] "RemoveContainer" containerID="dfcc7ba8e6f38cdbcccc13ac0bed6edf4235599731530fdc8f0e49180eb57500"
Mar 13 12:36:58.974379 master-0 kubenswrapper[4143]: I0313 12:36:58.974345 4143 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dfcc7ba8e6f38cdbcccc13ac0bed6edf4235599731530fdc8f0e49180eb57500"} err="failed to get container status \"dfcc7ba8e6f38cdbcccc13ac0bed6edf4235599731530fdc8f0e49180eb57500\": rpc error: code = NotFound desc = could not find container \"dfcc7ba8e6f38cdbcccc13ac0bed6edf4235599731530fdc8f0e49180eb57500\": container with ID starting with dfcc7ba8e6f38cdbcccc13ac0bed6edf4235599731530fdc8f0e49180eb57500 not found: ID does not exist"
Mar 13 12:36:58.974379 master-0 kubenswrapper[4143]: I0313 12:36:58.974368 4143 scope.go:117] "RemoveContainer" containerID="187719fca94ffeedf3940d581714ca4d60af4cef7371e95398b16a016f337793"
Mar 13 12:36:58.974656 master-0 kubenswrapper[4143]: I0313 12:36:58.974621 4143 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"187719fca94ffeedf3940d581714ca4d60af4cef7371e95398b16a016f337793"} err="failed to get container status \"187719fca94ffeedf3940d581714ca4d60af4cef7371e95398b16a016f337793\": rpc error: code = NotFound desc = could not find container \"187719fca94ffeedf3940d581714ca4d60af4cef7371e95398b16a016f337793\": container with ID starting with 187719fca94ffeedf3940d581714ca4d60af4cef7371e95398b16a016f337793 not found: ID does not exist"
Mar 13 12:36:58.974656 master-0 kubenswrapper[4143]: I0313 12:36:58.974649 4143 scope.go:117] "RemoveContainer" containerID="40f13b76f53fd96d83bd5edac5930c74f120050da3f2295ff338497ec409d0bb"
Mar 13 12:36:58.974974 master-0 kubenswrapper[4143]: I0313 12:36:58.974941 4143 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"40f13b76f53fd96d83bd5edac5930c74f120050da3f2295ff338497ec409d0bb"} err="failed to get container status \"40f13b76f53fd96d83bd5edac5930c74f120050da3f2295ff338497ec409d0bb\": rpc error: code = NotFound desc = could not find container \"40f13b76f53fd96d83bd5edac5930c74f120050da3f2295ff338497ec409d0bb\": container with ID starting with 40f13b76f53fd96d83bd5edac5930c74f120050da3f2295ff338497ec409d0bb not found: ID does not exist"
Mar 13 12:36:58.974974 master-0 kubenswrapper[4143]: I0313 12:36:58.974961 4143 scope.go:117] "RemoveContainer" containerID="37f543b502c1eb3f430a0cbf3af3840c0270b596e9345eda56bfccb93e79cb5f"
Mar 13 12:36:58.975196 master-0 kubenswrapper[4143]: I0313 12:36:58.975169 4143 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"37f543b502c1eb3f430a0cbf3af3840c0270b596e9345eda56bfccb93e79cb5f"} err="failed to get container status \"37f543b502c1eb3f430a0cbf3af3840c0270b596e9345eda56bfccb93e79cb5f\": rpc error: code = NotFound desc = could not find container \"37f543b502c1eb3f430a0cbf3af3840c0270b596e9345eda56bfccb93e79cb5f\": container with ID starting with 37f543b502c1eb3f430a0cbf3af3840c0270b596e9345eda56bfccb93e79cb5f not found: ID does not exist"
Mar 13 12:36:58.975242 master-0 kubenswrapper[4143]: I0313 12:36:58.975196 4143 scope.go:117] "RemoveContainer" containerID="08ccdb90ac0437a98e0d0456f9b7d09f2d676c921b6aaf18c899a0864c1217d9"
Mar 13 12:36:58.975445 master-0 kubenswrapper[4143]: I0313 12:36:58.975415 4143 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"08ccdb90ac0437a98e0d0456f9b7d09f2d676c921b6aaf18c899a0864c1217d9"} err="failed to get container status \"08ccdb90ac0437a98e0d0456f9b7d09f2d676c921b6aaf18c899a0864c1217d9\": rpc error: code = NotFound desc = could not find container \"08ccdb90ac0437a98e0d0456f9b7d09f2d676c921b6aaf18c899a0864c1217d9\": container with ID starting with 08ccdb90ac0437a98e0d0456f9b7d09f2d676c921b6aaf18c899a0864c1217d9 not found: ID does not exist"
Mar 13 12:36:58.975445 master-0 kubenswrapper[4143]: I0313 12:36:58.975438 4143 scope.go:117] "RemoveContainer" containerID="57dc4660cbf48746e762f15c513637ec0fc1b52c9466dd3c1d6abd72a80e4071"
Mar 13 12:36:58.975684 master-0 kubenswrapper[4143]: I0313 12:36:58.975655 4143 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"57dc4660cbf48746e762f15c513637ec0fc1b52c9466dd3c1d6abd72a80e4071"} err="failed to get container status \"57dc4660cbf48746e762f15c513637ec0fc1b52c9466dd3c1d6abd72a80e4071\": rpc error: code = NotFound desc = could not find container \"57dc4660cbf48746e762f15c513637ec0fc1b52c9466dd3c1d6abd72a80e4071\": container with ID starting with 57dc4660cbf48746e762f15c513637ec0fc1b52c9466dd3c1d6abd72a80e4071 not found: ID does not exist"
Mar 13 12:36:58.975684 master-0 kubenswrapper[4143]: I0313 12:36:58.975678 4143 scope.go:117] "RemoveContainer" containerID="5d2c8326278b34b8ff59a9f9976ab41ef419174f1a3fbfc2a6b45ed48ed205d9"
Mar 13 12:36:58.975931 master-0 kubenswrapper[4143]: I0313 12:36:58.975909 4143 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5d2c8326278b34b8ff59a9f9976ab41ef419174f1a3fbfc2a6b45ed48ed205d9"} err="failed to get container status \"5d2c8326278b34b8ff59a9f9976ab41ef419174f1a3fbfc2a6b45ed48ed205d9\": rpc error: code = NotFound desc = could not find container \"5d2c8326278b34b8ff59a9f9976ab41ef419174f1a3fbfc2a6b45ed48ed205d9\": container with ID starting with 5d2c8326278b34b8ff59a9f9976ab41ef419174f1a3fbfc2a6b45ed48ed205d9 not found: ID does not exist"
Mar 13 12:36:58.975931 master-0 kubenswrapper[4143]: I0313 12:36:58.975930 4143 scope.go:117] "RemoveContainer" containerID="f66c3d50d03d2de5dddb4f7e19f6c7cb669052375e408c2fed698a0935972559"
Mar 13 12:36:58.976227 master-0 kubenswrapper[4143]: I0313 12:36:58.976201 4143 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f66c3d50d03d2de5dddb4f7e19f6c7cb669052375e408c2fed698a0935972559"} err="failed to get container status \"f66c3d50d03d2de5dddb4f7e19f6c7cb669052375e408c2fed698a0935972559\": rpc error: code = NotFound desc = could not find container \"f66c3d50d03d2de5dddb4f7e19f6c7cb669052375e408c2fed698a0935972559\": container with ID starting with f66c3d50d03d2de5dddb4f7e19f6c7cb669052375e408c2fed698a0935972559 not found: ID does not exist"
Mar 13 12:36:58.976227 master-0 kubenswrapper[4143]: I0313 12:36:58.976224 4143 scope.go:117] "RemoveContainer" containerID="26d60ce545e04baaff4e16e1c413ed5f6769b41cdf16aa60858627ecc65e60dc"
Mar 13 12:36:58.976464 master-0 kubenswrapper[4143]: I0313 12:36:58.976441 4143 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"26d60ce545e04baaff4e16e1c413ed5f6769b41cdf16aa60858627ecc65e60dc"} err="failed to get container status \"26d60ce545e04baaff4e16e1c413ed5f6769b41cdf16aa60858627ecc65e60dc\": rpc error: code = NotFound desc = could not find container \"26d60ce545e04baaff4e16e1c413ed5f6769b41cdf16aa60858627ecc65e60dc\": container with ID starting with 26d60ce545e04baaff4e16e1c413ed5f6769b41cdf16aa60858627ecc65e60dc not found: ID does not exist"
Mar 13 12:36:58.976514 master-0 kubenswrapper[4143]: I0313 12:36:58.976464 4143 scope.go:117] "RemoveContainer" containerID="dfcc7ba8e6f38cdbcccc13ac0bed6edf4235599731530fdc8f0e49180eb57500"
Mar 13 12:36:58.976697 master-0 kubenswrapper[4143]: I0313 12:36:58.976677 4143 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dfcc7ba8e6f38cdbcccc13ac0bed6edf4235599731530fdc8f0e49180eb57500"} err="failed to get container status \"dfcc7ba8e6f38cdbcccc13ac0bed6edf4235599731530fdc8f0e49180eb57500\": rpc error: code = NotFound desc = could not find container \"dfcc7ba8e6f38cdbcccc13ac0bed6edf4235599731530fdc8f0e49180eb57500\": container with ID starting with dfcc7ba8e6f38cdbcccc13ac0bed6edf4235599731530fdc8f0e49180eb57500 not found: ID does not exist"
Mar 13 12:36:58.976743 master-0 kubenswrapper[4143]: I0313 12:36:58.976695 4143 scope.go:117] "RemoveContainer" containerID="187719fca94ffeedf3940d581714ca4d60af4cef7371e95398b16a016f337793"
Mar 13 12:36:58.976894 master-0 kubenswrapper[4143]: I0313 12:36:58.976873 4143 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"187719fca94ffeedf3940d581714ca4d60af4cef7371e95398b16a016f337793"} err="failed to get container status \"187719fca94ffeedf3940d581714ca4d60af4cef7371e95398b16a016f337793\": rpc error: code = NotFound desc = could not find container \"187719fca94ffeedf3940d581714ca4d60af4cef7371e95398b16a016f337793\": container with ID starting with 187719fca94ffeedf3940d581714ca4d60af4cef7371e95398b16a016f337793 not found: ID does not exist"
Mar 13 12:36:58.976938 master-0 kubenswrapper[4143]: I0313 12:36:58.976896 4143 scope.go:117] "RemoveContainer" containerID="40f13b76f53fd96d83bd5edac5930c74f120050da3f2295ff338497ec409d0bb"
Mar 13 12:36:58.977152 master-0 kubenswrapper[4143]: I0313 12:36:58.977112 4143 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"40f13b76f53fd96d83bd5edac5930c74f120050da3f2295ff338497ec409d0bb"} err="failed to get container status \"40f13b76f53fd96d83bd5edac5930c74f120050da3f2295ff338497ec409d0bb\": rpc error: code = NotFound desc = could not find container \"40f13b76f53fd96d83bd5edac5930c74f120050da3f2295ff338497ec409d0bb\": container with ID starting with 40f13b76f53fd96d83bd5edac5930c74f120050da3f2295ff338497ec409d0bb not found: ID does not exist"
Mar 13 12:36:58.977195 master-0 kubenswrapper[4143]: I0313 12:36:58.977157 4143 scope.go:117] "RemoveContainer" containerID="37f543b502c1eb3f430a0cbf3af3840c0270b596e9345eda56bfccb93e79cb5f"
Mar 13 12:36:58.977439 master-0 kubenswrapper[4143]: I0313 12:36:58.977405 4143 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"37f543b502c1eb3f430a0cbf3af3840c0270b596e9345eda56bfccb93e79cb5f"} err="failed to get container status \"37f543b502c1eb3f430a0cbf3af3840c0270b596e9345eda56bfccb93e79cb5f\": rpc error: code = NotFound desc = could not find container \"37f543b502c1eb3f430a0cbf3af3840c0270b596e9345eda56bfccb93e79cb5f\": container with ID starting with 37f543b502c1eb3f430a0cbf3af3840c0270b596e9345eda56bfccb93e79cb5f not found: ID does not exist"
Mar 13 12:36:59.090454 master-0 kubenswrapper[4143]: I0313 12:36:59.089389 4143 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7f5a7196-2bfe-4fc7-820f-c6d17bb24f29" path="/var/lib/kubelet/pods/7f5a7196-2bfe-4fc7-820f-c6d17bb24f29/volumes"
Mar 13 12:36:59.096659 master-0
kubenswrapper[4143]: I0313 12:36:59.096615 4143 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-master-0"]
Mar 13 12:36:59.760814 master-0 kubenswrapper[4143]: I0313 12:36:59.760713 4143 generic.go:334] "Generic (PLEG): container finished" podID="d6226325-c4d9-497e-8d19-a71adc66c5ac" containerID="cf1959de89eea014cb32ef2948333cb70b4954efbb9bc7376a990fcbbdb918ce" exitCode=0
Mar 13 12:36:59.761955 master-0 kubenswrapper[4143]: I0313 12:36:59.761587 4143 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-h8fwp" event={"ID":"d6226325-c4d9-497e-8d19-a71adc66c5ac","Type":"ContainerDied","Data":"cf1959de89eea014cb32ef2948333cb70b4954efbb9bc7376a990fcbbdb918ce"}
Mar 13 12:36:59.784902 master-0 kubenswrapper[4143]: I0313 12:36:59.784742 4143 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" podStartSLOduration=0.784708329 podStartE2EDuration="784.708329ms" podCreationTimestamp="2026-03-13 12:36:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 12:36:59.783661801 +0000 UTC m=+125.530806125" watchObservedRunningTime="2026-03-13 12:36:59.784708329 +0000 UTC m=+125.531852693"
Mar 13 12:37:00.073005 master-0 kubenswrapper[4143]: E0313 12:37:00.072757 4143 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
Mar 13 12:37:00.081758 master-0 kubenswrapper[4143]: I0313 12:37:00.081718 4143 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-pnwsc"
Mar 13 12:37:00.081854 master-0 kubenswrapper[4143]: I0313 12:37:00.081754 4143 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-r9lmb"
Mar 13 12:37:00.081854 master-0 kubenswrapper[4143]: E0313 12:37:00.081836 4143 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-pnwsc" podUID="269aedfd-4274-4998-bd0d-603b67257666"
Mar 13 12:37:00.081997 master-0 kubenswrapper[4143]: E0313 12:37:00.081951 4143 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-r9lmb" podUID="29b6aa89-0416-4595-9deb-10b290521d86"
Mar 13 12:37:00.769443 master-0 kubenswrapper[4143]: I0313 12:37:00.769335 4143 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-h8fwp" event={"ID":"d6226325-c4d9-497e-8d19-a71adc66c5ac","Type":"ContainerStarted","Data":"42d42c5157a58422e54eb24d4a55af4195cee9e73e05f504b6f3f6105d6df4b1"}
Mar 13 12:37:00.769443 master-0 kubenswrapper[4143]: I0313 12:37:00.769411 4143 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-h8fwp" event={"ID":"d6226325-c4d9-497e-8d19-a71adc66c5ac","Type":"ContainerStarted","Data":"9c6faf119250a0fc6667c032a4273148e1623e49de00fa33f537f71fecdcc121"}
Mar 13 12:37:00.769443 master-0 kubenswrapper[4143]: I0313 12:37:00.769426 4143 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-h8fwp" event={"ID":"d6226325-c4d9-497e-8d19-a71adc66c5ac","Type":"ContainerStarted","Data":"2867b637d5588b47ebe5cdedd060a822be9ded438afe6375fd744d51442200b2"}
Mar 13 12:37:00.769443 master-0 kubenswrapper[4143]: I0313 12:37:00.769441 4143 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-h8fwp" event={"ID":"d6226325-c4d9-497e-8d19-a71adc66c5ac","Type":"ContainerStarted","Data":"c66a511307b3a222f839ab221b16b497a0ac5afef3b88b66914f749688185c5e"}
Mar 13 12:37:00.769443 master-0 kubenswrapper[4143]: I0313 12:37:00.769454 4143 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-h8fwp" event={"ID":"d6226325-c4d9-497e-8d19-a71adc66c5ac","Type":"ContainerStarted","Data":"d9b4443af2ab62dcb5b12563ebee43c3724f06b35b902ddee2c47a5ab6fadc45"}
Mar 13 12:37:00.769443 master-0 kubenswrapper[4143]: I0313 12:37:00.769468 4143 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-h8fwp" event={"ID":"d6226325-c4d9-497e-8d19-a71adc66c5ac","Type":"ContainerStarted","Data":"d5cd72bf8a83fc1e65f1bae10d0d6ae3518240a330d0441e7e142b2e7369ecd2"}
Mar 13 12:37:02.081425 master-0 kubenswrapper[4143]: I0313 12:37:02.081326 4143 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-r9lmb"
Mar 13 12:37:02.082312 master-0 kubenswrapper[4143]: I0313 12:37:02.081339 4143 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-pnwsc"
Mar 13 12:37:02.082312 master-0 kubenswrapper[4143]: E0313 12:37:02.081571 4143 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-r9lmb" podUID="29b6aa89-0416-4595-9deb-10b290521d86"
Mar 13 12:37:02.082312 master-0 kubenswrapper[4143]: E0313 12:37:02.081643 4143 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-pnwsc" podUID="269aedfd-4274-4998-bd0d-603b67257666"
Mar 13 12:37:02.784265 master-0 kubenswrapper[4143]: I0313 12:37:02.784130 4143 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-h8fwp" event={"ID":"d6226325-c4d9-497e-8d19-a71adc66c5ac","Type":"ContainerStarted","Data":"ad9260e847aeb777f2bb4870b58a70c7b6812fa820278c72e2b625161943ef45"}
Mar 13 12:37:03.851217 master-0 kubenswrapper[4143]: I0313 12:37:03.851104 4143 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f39d7f76-0075-44c3-9101-eb2607cb176a-serving-cert\") pod \"cluster-version-operator-745944c6b7-mbjxt\" (UID: \"f39d7f76-0075-44c3-9101-eb2607cb176a\") " pod="openshift-cluster-version/cluster-version-operator-745944c6b7-mbjxt"
Mar 13 12:37:03.852367 master-0 kubenswrapper[4143]: E0313 12:37:03.851893 4143 secret.go:189] Couldn't get secret openshift-cluster-version/cluster-version-operator-serving-cert: secret "cluster-version-operator-serving-cert" not found
Mar 13 12:37:03.852367 master-0 kubenswrapper[4143]: E0313 12:37:03.852003 4143 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f39d7f76-0075-44c3-9101-eb2607cb176a-serving-cert podName:f39d7f76-0075-44c3-9101-eb2607cb176a nodeName:}" failed. No retries permitted until 2026-03-13 12:38:07.851974498 +0000 UTC m=+193.599118832 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/f39d7f76-0075-44c3-9101-eb2607cb176a-serving-cert") pod "cluster-version-operator-745944c6b7-mbjxt" (UID: "f39d7f76-0075-44c3-9101-eb2607cb176a") : secret "cluster-version-operator-serving-cert" not found
Mar 13 12:37:04.081729 master-0 kubenswrapper[4143]: I0313 12:37:04.081657 4143 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openshift-multus/network-metrics-daemon-r9lmb" Mar 13 12:37:04.081957 master-0 kubenswrapper[4143]: E0313 12:37:04.081855 4143 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-r9lmb" podUID="29b6aa89-0416-4595-9deb-10b290521d86" Mar 13 12:37:04.081957 master-0 kubenswrapper[4143]: I0313 12:37:04.081658 4143 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-pnwsc" Mar 13 12:37:04.082081 master-0 kubenswrapper[4143]: E0313 12:37:04.082038 4143 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-pnwsc" podUID="269aedfd-4274-4998-bd0d-603b67257666" Mar 13 12:37:04.154066 master-0 kubenswrapper[4143]: I0313 12:37:04.153929 4143 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-btf8q\" (UniqueName: \"kubernetes.io/projected/269aedfd-4274-4998-bd0d-603b67257666-kube-api-access-btf8q\") pod \"network-check-target-pnwsc\" (UID: \"269aedfd-4274-4998-bd0d-603b67257666\") " pod="openshift-network-diagnostics/network-check-target-pnwsc" Mar 13 12:37:04.154243 master-0 kubenswrapper[4143]: E0313 12:37:04.154187 4143 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Mar 13 12:37:04.154243 master-0 kubenswrapper[4143]: E0313 12:37:04.154222 4143 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Mar 13 12:37:04.154306 master-0 kubenswrapper[4143]: E0313 12:37:04.154241 4143 projected.go:194] Error preparing data for projected volume kube-api-access-btf8q for pod openshift-network-diagnostics/network-check-target-pnwsc: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 13 12:37:04.154360 master-0 kubenswrapper[4143]: E0313 12:37:04.154333 4143 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/269aedfd-4274-4998-bd0d-603b67257666-kube-api-access-btf8q podName:269aedfd-4274-4998-bd0d-603b67257666 nodeName:}" failed. No retries permitted until 2026-03-13 12:37:36.154300907 +0000 UTC m=+161.901445271 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-btf8q" (UniqueName: "kubernetes.io/projected/269aedfd-4274-4998-bd0d-603b67257666-kube-api-access-btf8q") pod "network-check-target-pnwsc" (UID: "269aedfd-4274-4998-bd0d-603b67257666") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 13 12:37:04.801240 master-0 kubenswrapper[4143]: I0313 12:37:04.797092 4143 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-h8fwp" event={"ID":"d6226325-c4d9-497e-8d19-a71adc66c5ac","Type":"ContainerStarted","Data":"9ae8f2354c6812a6552c8685081d1420b4b9a6d3369dd1efc2f87dad463f2ee0"} Mar 13 12:37:04.801240 master-0 kubenswrapper[4143]: I0313 12:37:04.797528 4143 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-h8fwp" Mar 13 12:37:04.801240 master-0 kubenswrapper[4143]: I0313 12:37:04.797551 4143 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-h8fwp" Mar 13 12:37:04.801240 master-0 kubenswrapper[4143]: I0313 12:37:04.797559 4143 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-h8fwp" Mar 13 12:37:04.877557 master-0 kubenswrapper[4143]: I0313 12:37:04.877468 4143 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-h8fwp" Mar 13 12:37:04.878362 master-0 kubenswrapper[4143]: I0313 12:37:04.878322 4143 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-h8fwp" Mar 13 12:37:04.930953 master-0 kubenswrapper[4143]: I0313 12:37:04.930863 4143 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-h8fwp" podStartSLOduration=6.930846244 podStartE2EDuration="6.930846244s" 
podCreationTimestamp="2026-03-13 12:36:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 12:37:04.930050891 +0000 UTC m=+130.677195235" watchObservedRunningTime="2026-03-13 12:37:04.930846244 +0000 UTC m=+130.677990568" Mar 13 12:37:05.073517 master-0 kubenswrapper[4143]: E0313 12:37:05.073364 4143 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Mar 13 12:37:06.082213 master-0 kubenswrapper[4143]: I0313 12:37:06.082129 4143 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-r9lmb" Mar 13 12:37:06.083173 master-0 kubenswrapper[4143]: I0313 12:37:06.082288 4143 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-pnwsc" Mar 13 12:37:06.083173 master-0 kubenswrapper[4143]: E0313 12:37:06.082315 4143 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-r9lmb" podUID="29b6aa89-0416-4595-9deb-10b290521d86" Mar 13 12:37:06.083173 master-0 kubenswrapper[4143]: E0313 12:37:06.082507 4143 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-pnwsc" podUID="269aedfd-4274-4998-bd0d-603b67257666" Mar 13 12:37:06.233174 master-0 kubenswrapper[4143]: I0313 12:37:06.233053 4143 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-r9lmb"] Mar 13 12:37:06.234053 master-0 kubenswrapper[4143]: I0313 12:37:06.233984 4143 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-network-diagnostics/network-check-target-pnwsc"] Mar 13 12:37:06.802419 master-0 kubenswrapper[4143]: I0313 12:37:06.802338 4143 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-pnwsc" Mar 13 12:37:06.802419 master-0 kubenswrapper[4143]: I0313 12:37:06.802386 4143 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-r9lmb" Mar 13 12:37:06.802929 master-0 kubenswrapper[4143]: E0313 12:37:06.802483 4143 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-pnwsc" podUID="269aedfd-4274-4998-bd0d-603b67257666" Mar 13 12:37:06.803067 master-0 kubenswrapper[4143]: E0313 12:37:06.803019 4143 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-r9lmb" podUID="29b6aa89-0416-4595-9deb-10b290521d86" Mar 13 12:37:08.081830 master-0 kubenswrapper[4143]: I0313 12:37:08.081761 4143 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-r9lmb" Mar 13 12:37:08.082531 master-0 kubenswrapper[4143]: I0313 12:37:08.081777 4143 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-pnwsc" Mar 13 12:37:08.082531 master-0 kubenswrapper[4143]: E0313 12:37:08.081913 4143 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-r9lmb" podUID="29b6aa89-0416-4595-9deb-10b290521d86" Mar 13 12:37:08.082531 master-0 kubenswrapper[4143]: E0313 12:37:08.081983 4143 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-pnwsc" podUID="269aedfd-4274-4998-bd0d-603b67257666" Mar 13 12:37:10.074994 master-0 kubenswrapper[4143]: E0313 12:37:10.074640 4143 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Mar 13 12:37:10.082045 master-0 kubenswrapper[4143]: I0313 12:37:10.081994 4143 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-r9lmb" Mar 13 12:37:10.082225 master-0 kubenswrapper[4143]: I0313 12:37:10.082054 4143 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-pnwsc" Mar 13 12:37:10.082225 master-0 kubenswrapper[4143]: E0313 12:37:10.082168 4143 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-r9lmb" podUID="29b6aa89-0416-4595-9deb-10b290521d86" Mar 13 12:37:10.082315 master-0 kubenswrapper[4143]: E0313 12:37:10.082255 4143 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-pnwsc" podUID="269aedfd-4274-4998-bd0d-603b67257666" Mar 13 12:37:12.081600 master-0 kubenswrapper[4143]: I0313 12:37:12.081490 4143 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-r9lmb" Mar 13 12:37:12.081600 master-0 kubenswrapper[4143]: I0313 12:37:12.081545 4143 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-pnwsc" Mar 13 12:37:12.082668 master-0 kubenswrapper[4143]: E0313 12:37:12.081643 4143 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-r9lmb" podUID="29b6aa89-0416-4595-9deb-10b290521d86" Mar 13 12:37:12.082668 master-0 kubenswrapper[4143]: E0313 12:37:12.081781 4143 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-pnwsc" podUID="269aedfd-4274-4998-bd0d-603b67257666" Mar 13 12:37:14.082131 master-0 kubenswrapper[4143]: I0313 12:37:14.082008 4143 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-r9lmb" Mar 13 12:37:14.084117 master-0 kubenswrapper[4143]: I0313 12:37:14.082173 4143 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-pnwsc" Mar 13 12:37:14.084117 master-0 kubenswrapper[4143]: E0313 12:37:14.082188 4143 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-r9lmb" podUID="29b6aa89-0416-4595-9deb-10b290521d86" Mar 13 12:37:14.084117 master-0 kubenswrapper[4143]: E0313 12:37:14.082376 4143 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-pnwsc" podUID="269aedfd-4274-4998-bd0d-603b67257666" Mar 13 12:37:16.082151 master-0 kubenswrapper[4143]: I0313 12:37:16.082073 4143 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-r9lmb" Mar 13 12:37:16.082675 master-0 kubenswrapper[4143]: I0313 12:37:16.082245 4143 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-pnwsc" Mar 13 12:37:16.084645 master-0 kubenswrapper[4143]: I0313 12:37:16.084622 4143 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Mar 13 12:37:16.084730 master-0 kubenswrapper[4143]: I0313 12:37:16.084623 4143 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Mar 13 12:37:16.084730 master-0 kubenswrapper[4143]: I0313 12:37:16.084622 4143 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Mar 13 12:37:20.946306 master-0 kubenswrapper[4143]: I0313 12:37:20.946239 4143 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeReady" Mar 13 12:37:20.989917 master-0 kubenswrapper[4143]: I0313 12:37:20.989825 4143 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd-operator/etcd-operator-5884b9cd56-hjzms"] Mar 13 12:37:20.990584 master-0 kubenswrapper[4143]: I0313 12:37:20.990541 4143 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd-operator/etcd-operator-5884b9cd56-hjzms" Mar 13 12:37:20.991559 master-0 kubenswrapper[4143]: I0313 12:37:20.991505 4143 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-config-operator/openshift-config-operator-64488f9d78-t8fb4"] Mar 13 12:37:20.994755 master-0 kubenswrapper[4143]: I0313 12:37:20.992106 4143 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-64488f9d78-t8fb4" Mar 13 12:37:20.994755 master-0 kubenswrapper[4143]: I0313 12:37:20.994373 4143 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Mar 13 12:37:20.994755 master-0 kubenswrapper[4143]: I0313 12:37:20.994753 4143 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Mar 13 12:37:20.995447 master-0 kubenswrapper[4143]: I0313 12:37:20.995042 4143 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Mar 13 12:37:20.995447 master-0 kubenswrapper[4143]: I0313 12:37:20.995370 4143 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Mar 13 12:37:20.995630 master-0 kubenswrapper[4143]: I0313 12:37:20.995521 4143 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Mar 13 12:37:20.999789 master-0 kubenswrapper[4143]: I0313 12:37:20.996598 4143 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Mar 13 12:37:20.999789 master-0 kubenswrapper[4143]: W0313 12:37:20.996817 4143 reflector.go:561] object-"openshift-config-operator"/"openshift-service-ca.crt": failed to list *v1.ConfigMap: configmaps "openshift-service-ca.crt" is forbidden: User "system:node:master-0" cannot list resource "configmaps" in API group "" in the 
namespace "openshift-config-operator": no relationship found between node 'master-0' and this object Mar 13 12:37:20.999789 master-0 kubenswrapper[4143]: E0313 12:37:20.996868 4143 reflector.go:158] "Unhandled Error" err="object-\"openshift-config-operator\"/\"openshift-service-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"openshift-service-ca.crt\" is forbidden: User \"system:node:master-0\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-config-operator\": no relationship found between node 'master-0' and this object" logger="UnhandledError" Mar 13 12:37:20.999789 master-0 kubenswrapper[4143]: I0313 12:37:20.997082 4143 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Mar 13 12:37:20.999789 master-0 kubenswrapper[4143]: I0313 12:37:20.997293 4143 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Mar 13 12:37:20.999789 master-0 kubenswrapper[4143]: I0313 12:37:20.997422 4143 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Mar 13 12:37:20.999789 master-0 kubenswrapper[4143]: I0313 12:37:20.999714 4143 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-storage-operator/csi-snapshot-controller-operator-5685fbc7d-97wkd"] Mar 13 12:37:21.000546 master-0 kubenswrapper[4143]: I0313 12:37:21.000000 4143 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5c74bfc494-m8mqj"] Mar 13 12:37:21.000546 master-0 kubenswrapper[4143]: I0313 12:37:21.000253 4143 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5c74bfc494-m8mqj" Mar 13 12:37:21.000725 master-0 kubenswrapper[4143]: I0313 12:37:21.000626 4143 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-storage-operator/csi-snapshot-controller-operator-5685fbc7d-97wkd" Mar 13 12:37:21.003421 master-0 kubenswrapper[4143]: I0313 12:37:21.003298 4143 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns-operator/dns-operator-589895fbb7-mmwk7"] Mar 13 12:37:21.003906 master-0 kubenswrapper[4143]: I0313 12:37:21.003860 4143 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-86d7cdfdfb-br96g"] Mar 13 12:37:21.003906 master-0 kubenswrapper[4143]: I0313 12:37:21.003884 4143 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Mar 13 12:37:21.004345 master-0 kubenswrapper[4143]: I0313 12:37:21.004300 4143 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-86d7cdfdfb-br96g" Mar 13 12:37:21.004503 master-0 kubenswrapper[4143]: I0313 12:37:21.004418 4143 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Mar 13 12:37:21.004503 master-0 kubenswrapper[4143]: I0313 12:37:21.004471 4143 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns-operator/dns-operator-589895fbb7-mmwk7" Mar 13 12:37:21.004953 master-0 kubenswrapper[4143]: I0313 12:37:21.004866 4143 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-8565d84698-hj2wk"] Mar 13 12:37:21.005577 master-0 kubenswrapper[4143]: I0313 12:37:21.005526 4143 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-8565d84698-hj2wk" Mar 13 12:37:21.007415 master-0 kubenswrapper[4143]: I0313 12:37:21.006526 4143 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Mar 13 12:37:21.007415 master-0 kubenswrapper[4143]: I0313 12:37:21.006809 4143 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-storage-operator"/"openshift-service-ca.crt" Mar 13 12:37:21.007415 master-0 kubenswrapper[4143]: I0313 12:37:21.007026 4143 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-68bd585b-qxmnf"] Mar 13 12:37:21.007741 master-0 kubenswrapper[4143]: I0313 12:37:21.007538 4143 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-7d9c49f57b-tlnkd"] Mar 13 12:37:21.007741 master-0 kubenswrapper[4143]: I0313 12:37:21.007604 4143 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-68bd585b-qxmnf" Mar 13 12:37:21.008204 master-0 kubenswrapper[4143]: I0313 12:37:21.008126 4143 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-7d9c49f57b-tlnkd" Mar 13 12:37:21.008349 master-0 kubenswrapper[4143]: I0313 12:37:21.008159 4143 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-storage-operator"/"kube-root-ca.crt" Mar 13 12:37:21.008543 master-0 kubenswrapper[4143]: I0313 12:37:21.008497 4143 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-64bf9778cb-7qhr4"] Mar 13 12:37:21.009063 master-0 kubenswrapper[4143]: I0313 12:37:21.009015 4143 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-64bf9778cb-7qhr4" Mar 13 12:37:21.009246 master-0 kubenswrapper[4143]: I0313 12:37:21.009209 4143 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-operator/ingress-operator-677db989d6-ckl2j"] Mar 13 12:37:21.013190 master-0 kubenswrapper[4143]: I0313 12:37:21.009637 4143 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-677db989d6-ckl2j" Mar 13 12:37:21.013190 master-0 kubenswrapper[4143]: I0313 12:37:21.010791 4143 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-86d6d77c7c-q287n"] Mar 13 12:37:21.013190 master-0 kubenswrapper[4143]: I0313 12:37:21.011226 4143 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-86d6d77c7c-q287n" Mar 13 12:37:21.013190 master-0 kubenswrapper[4143]: I0313 12:37:21.011766 4143 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca-operator/service-ca-operator-69b6fc6b88-vmscz"] Mar 13 12:37:21.013190 master-0 kubenswrapper[4143]: I0313 12:37:21.012067 4143 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-69b6fc6b88-vmscz"
Mar 13 12:37:21.014227 master-0 kubenswrapper[4143]: I0313 12:37:21.013746 4143 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-799b6db4d7-xchrj"]
Mar 13 12:37:21.014357 master-0 kubenswrapper[4143]: I0313 12:37:21.014294 4143 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-799b6db4d7-xchrj"
Mar 13 12:37:21.018423 master-0 kubenswrapper[4143]: W0313 12:37:21.018355 4143 reflector.go:561] object-"openshift-image-registry"/"image-registry-operator-tls": failed to list *v1.Secret: secrets "image-registry-operator-tls" is forbidden: User "system:node:master-0" cannot list resource "secrets" in API group "" in the namespace "openshift-image-registry": no relationship found between node 'master-0' and this object
Mar 13 12:37:21.018423 master-0 kubenswrapper[4143]: E0313 12:37:21.018419 4143 reflector.go:158] "Unhandled Error" err="object-\"openshift-image-registry\"/\"image-registry-operator-tls\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"image-registry-operator-tls\" is forbidden: User \"system:node:master-0\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-image-registry\": no relationship found between node 'master-0' and this object" logger="UnhandledError"
Mar 13 12:37:21.018912 master-0 kubenswrapper[4143]: I0313 12:37:21.018876 4143 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt"
Mar 13 12:37:21.019058 master-0 kubenswrapper[4143]: W0313 12:37:21.019039 4143 reflector.go:561] object-"openshift-image-registry"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:master-0" cannot list resource "configmaps" in API group "" in the namespace "openshift-image-registry": no relationship found between node 'master-0' and this object
Mar 13 12:37:21.019207 master-0 kubenswrapper[4143]: E0313 12:37:21.019058 4143 reflector.go:158] "Unhandled Error" err="object-\"openshift-image-registry\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:master-0\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-image-registry\": no relationship found between node 'master-0' and this object" logger="UnhandledError"
Mar 13 12:37:21.019207 master-0 kubenswrapper[4143]: I0313 12:37:21.019089 4143 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config"
Mar 13 12:37:21.019207 master-0 kubenswrapper[4143]: I0313 12:37:21.019150 4143 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt"
Mar 13 12:37:21.019400 master-0 kubenswrapper[4143]: I0313 12:37:21.019268 4143 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert"
Mar 13 12:37:21.019400 master-0 kubenswrapper[4143]: I0313 12:37:21.019352 4143 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt"
Mar 13 12:37:21.019524 master-0 kubenswrapper[4143]: W0313 12:37:21.019426 4143 reflector.go:561] object-"openshift-ingress-operator"/"metrics-tls": failed to list *v1.Secret: secrets "metrics-tls" is forbidden: User "system:node:master-0" cannot list resource "secrets" in API group "" in the namespace "openshift-ingress-operator": no relationship found between node 'master-0' and this object
Mar 13 12:37:21.019524 master-0 kubenswrapper[4143]: E0313 12:37:21.019457 4143 reflector.go:158] "Unhandled Error" err="object-\"openshift-ingress-operator\"/\"metrics-tls\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"metrics-tls\" is forbidden: User \"system:node:master-0\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-ingress-operator\": no relationship found between node 'master-0' and this object" logger="UnhandledError"
Mar 13 12:37:21.019524 master-0 kubenswrapper[4143]: W0313 12:37:21.019472 4143 reflector.go:561] object-"openshift-service-ca-operator"/"openshift-service-ca.crt": failed to list *v1.ConfigMap: configmaps "openshift-service-ca.crt" is forbidden: User "system:node:master-0" cannot list resource "configmaps" in API group "" in the namespace "openshift-service-ca-operator": no relationship found between node 'master-0' and this object
Mar 13 12:37:21.019524 master-0 kubenswrapper[4143]: E0313 12:37:21.019487 4143 reflector.go:158] "Unhandled Error" err="object-\"openshift-service-ca-operator\"/\"openshift-service-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"openshift-service-ca.crt\" is forbidden: User \"system:node:master-0\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-service-ca-operator\": no relationship found between node 'master-0' and this object" logger="UnhandledError"
Mar 13 12:37:21.019524 master-0 kubenswrapper[4143]: W0313 12:37:21.019525 4143 reflector.go:561] object-"openshift-image-registry"/"trusted-ca": failed to list *v1.ConfigMap: configmaps "trusted-ca" is forbidden: User "system:node:master-0" cannot list resource "configmaps" in API group "" in the namespace "openshift-image-registry": no relationship found between node 'master-0' and this object
Mar 13 12:37:21.019825 master-0 kubenswrapper[4143]: E0313 12:37:21.019535 4143 reflector.go:158] "Unhandled Error" err="object-\"openshift-image-registry\"/\"trusted-ca\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"trusted-ca\" is forbidden: User \"system:node:master-0\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-image-registry\": no relationship found between node 'master-0' and this object" logger="UnhandledError"
Mar 13 12:37:21.019825 master-0 kubenswrapper[4143]: W0313 12:37:21.019585 4143 reflector.go:561] object-"openshift-marketplace"/"openshift-service-ca.crt": failed to list *v1.ConfigMap: configmaps "openshift-service-ca.crt" is forbidden: User "system:node:master-0" cannot list resource "configmaps" in API group "" in the namespace "openshift-marketplace": no relationship found between node 'master-0' and this object
Mar 13 12:37:21.019825 master-0 kubenswrapper[4143]: E0313 12:37:21.019597 4143 reflector.go:158] "Unhandled Error" err="object-\"openshift-marketplace\"/\"openshift-service-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"openshift-service-ca.crt\" is forbidden: User \"system:node:master-0\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-marketplace\": no relationship found between node 'master-0' and this object" logger="UnhandledError"
Mar 13 12:37:21.019825 master-0 kubenswrapper[4143]: W0313 12:37:21.019586 4143 reflector.go:561] object-"openshift-marketplace"/"marketplace-trusted-ca": failed to list *v1.ConfigMap: configmaps "marketplace-trusted-ca" is forbidden: User "system:node:master-0" cannot list resource "configmaps" in API group "" in the namespace "openshift-marketplace": no relationship found between node 'master-0' and this object
Mar 13 12:37:21.019825 master-0 kubenswrapper[4143]: W0313 12:37:21.019620 4143 reflector.go:561] object-"openshift-service-ca-operator"/"serving-cert": failed to list *v1.Secret: secrets "serving-cert" is forbidden: User "system:node:master-0" cannot list resource "secrets" in API group "" in the namespace "openshift-service-ca-operator": no relationship found between node 'master-0' and this object
Mar 13 12:37:21.019825 master-0 kubenswrapper[4143]: W0313 12:37:21.019647 4143 reflector.go:561] object-"openshift-image-registry"/"openshift-service-ca.crt": failed to list *v1.ConfigMap: configmaps "openshift-service-ca.crt" is forbidden: User "system:node:master-0" cannot list resource "configmaps" in API group "" in the namespace "openshift-image-registry": no relationship found between node 'master-0' and this object
Mar 13 12:37:21.019825 master-0 kubenswrapper[4143]: E0313 12:37:21.019646 4143 reflector.go:158] "Unhandled Error" err="object-\"openshift-service-ca-operator\"/\"serving-cert\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"serving-cert\" is forbidden: User \"system:node:master-0\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-service-ca-operator\": no relationship found between node 'master-0' and this object" logger="UnhandledError"
Mar 13 12:37:21.019825 master-0 kubenswrapper[4143]: E0313 12:37:21.019657 4143 reflector.go:158] "Unhandled Error" err="object-\"openshift-image-registry\"/\"openshift-service-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"openshift-service-ca.crt\" is forbidden: User \"system:node:master-0\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-image-registry\": no relationship found between node 'master-0' and this object" logger="UnhandledError"
Mar 13 12:37:21.019825 master-0 kubenswrapper[4143]: W0313 12:37:21.019683 4143 reflector.go:561] object-"openshift-marketplace"/"marketplace-operator-metrics": failed to list *v1.Secret: secrets "marketplace-operator-metrics" is forbidden: User "system:node:master-0" cannot list resource "secrets" in API group "" in the namespace "openshift-marketplace": no relationship found between node 'master-0' and this object
Mar 13 12:37:21.019825 master-0 kubenswrapper[4143]: E0313 12:37:21.019693 4143 reflector.go:158] "Unhandled Error" err="object-\"openshift-marketplace\"/\"marketplace-operator-metrics\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"marketplace-operator-metrics\" is forbidden: User \"system:node:master-0\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-marketplace\": no relationship found between node 'master-0' and this object" logger="UnhandledError"
Mar 13 12:37:21.019825 master-0 kubenswrapper[4143]: I0313 12:37:21.019732 4143 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert"
Mar 13 12:37:21.019825 master-0 kubenswrapper[4143]: I0313 12:37:21.019760 4143 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt"
Mar 13 12:37:21.020535 master-0 kubenswrapper[4143]: I0313 12:37:21.019983 4143 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt"
Mar 13 12:37:21.020535 master-0 kubenswrapper[4143]: I0313 12:37:21.020114 4143 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config"
Mar 13 12:37:21.020535 master-0 kubenswrapper[4143]: W0313 12:37:21.020171 4143 reflector.go:561] object-"openshift-ingress-operator"/"trusted-ca": failed to list *v1.ConfigMap: configmaps "trusted-ca" is forbidden: User "system:node:master-0" cannot list resource "configmaps" in API group "" in the namespace "openshift-ingress-operator": no relationship found between node 'master-0' and this object
Mar 13 12:37:21.020535 master-0 kubenswrapper[4143]: E0313 12:37:21.020186 4143 reflector.go:158] "Unhandled Error" err="object-\"openshift-ingress-operator\"/\"trusted-ca\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"trusted-ca\" is forbidden: User \"system:node:master-0\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-ingress-operator\": no relationship found between node 'master-0' and this object" logger="UnhandledError"
Mar 13 12:37:21.020535 master-0 kubenswrapper[4143]: I0313 12:37:21.020218 4143 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert"
Mar 13 12:37:21.020535 master-0 kubenswrapper[4143]: W0313 12:37:21.020280 4143 reflector.go:561] object-"openshift-service-ca-operator"/"service-ca-operator-config": failed to list *v1.ConfigMap: configmaps "service-ca-operator-config" is forbidden: User "system:node:master-0" cannot list resource "configmaps" in API group "" in the namespace "openshift-service-ca-operator": no relationship found between node 'master-0' and this object
Mar 13 12:37:21.020535 master-0 kubenswrapper[4143]: I0313 12:37:21.020316 4143 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt"
Mar 13 12:37:21.020535 master-0 kubenswrapper[4143]: E0313 12:37:21.020327 4143 reflector.go:158] "Unhandled Error" err="object-\"openshift-service-ca-operator\"/\"service-ca-operator-config\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"service-ca-operator-config\" is forbidden: User \"system:node:master-0\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-service-ca-operator\": no relationship found between node 'master-0' and this object" logger="UnhandledError"
Mar 13 12:37:21.020535 master-0 kubenswrapper[4143]: E0313 12:37:21.019614 4143 reflector.go:158] "Unhandled Error" err="object-\"openshift-marketplace\"/\"marketplace-trusted-ca\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"marketplace-trusted-ca\" is forbidden: User \"system:node:master-0\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-marketplace\": no relationship found between node 'master-0' and this object" logger="UnhandledError"
Mar 13 12:37:21.020535 master-0 kubenswrapper[4143]: I0313 12:37:21.020340 4143 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt"
Mar 13 12:37:21.020535 master-0 kubenswrapper[4143]: I0313 12:37:21.019095 4143 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls"
Mar 13 12:37:21.020535 master-0 kubenswrapper[4143]: W0313 12:37:21.020399 4143 reflector.go:561] object-"openshift-ingress-operator"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:master-0" cannot list resource "configmaps" in API group "" in the namespace "openshift-ingress-operator": no relationship found between node 'master-0' and this object
Mar 13 12:37:21.020535 master-0 kubenswrapper[4143]: E0313 12:37:21.020417 4143 reflector.go:158] "Unhandled Error" err="object-\"openshift-ingress-operator\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:master-0\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-ingress-operator\": no relationship found between node 'master-0' and this object" logger="UnhandledError"
Mar 13 12:37:21.020535 master-0 kubenswrapper[4143]: I0313 12:37:21.020426 4143 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt"
Mar 13 12:37:21.020535 master-0 kubenswrapper[4143]: I0313 12:37:21.020457 4143 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config"
Mar 13 12:37:21.020535 master-0 kubenswrapper[4143]: I0313 12:37:21.020123 4143 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt"
Mar 13 12:37:21.020535 master-0 kubenswrapper[4143]: I0313 12:37:21.020526 4143 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt"
Mar 13 12:37:21.021501 master-0 kubenswrapper[4143]: I0313 12:37:21.020581 4143 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-d64cfc9db-rfqb9"]
Mar 13 12:37:21.021501 master-0 kubenswrapper[4143]: I0313 12:37:21.020652 4143 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt"
Mar 13 12:37:21.021501 master-0 kubenswrapper[4143]: I0313 12:37:21.020752 4143 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt"
Mar 13 12:37:21.021501 master-0 kubenswrapper[4143]: W0313 12:37:21.020766 4143 reflector.go:561] object-"openshift-service-ca-operator"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:master-0" cannot list resource "configmaps" in API group "" in the namespace "openshift-service-ca-operator": no relationship found between node 'master-0' and this object
Mar 13 12:37:21.021501 master-0 kubenswrapper[4143]: E0313 12:37:21.020784 4143 reflector.go:158] "Unhandled Error" err="object-\"openshift-service-ca-operator\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:master-0\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-service-ca-operator\": no relationship found between node 'master-0' and this object" logger="UnhandledError"
Mar 13 12:37:21.021501 master-0 kubenswrapper[4143]: I0313 12:37:21.020403 4143 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert"
Mar 13 12:37:21.021501 master-0 kubenswrapper[4143]: I0313 12:37:21.020435 4143 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert"
Mar 13 12:37:21.021501 master-0 kubenswrapper[4143]: I0313 12:37:21.020873 4143 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config"
Mar 13 12:37:21.021501 master-0 kubenswrapper[4143]: I0313 12:37:21.020972 4143 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-d64cfc9db-rfqb9"
Mar 13 12:37:21.022796 master-0 kubenswrapper[4143]: I0313 12:37:21.022274 4143 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication-operator/authentication-operator-7c6989d6c4-tc4ht"]
Mar 13 12:37:21.022796 master-0 kubenswrapper[4143]: I0313 12:37:21.022666 4143 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7c6989d6c4-tc4ht"
Mar 13 12:37:21.022997 master-0 kubenswrapper[4143]: I0313 12:37:21.022969 4143 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-admission-controller-8d675b596-96gds"]
Mar 13 12:37:21.023769 master-0 kubenswrapper[4143]: I0313 12:37:21.023741 4143 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-8d675b596-96gds"
Mar 13 12:37:21.024286 master-0 kubenswrapper[4143]: I0313 12:37:21.024264 4143 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/cluster-monitoring-operator-674cbfbd9d-zwtdz"]
Mar 13 12:37:21.024796 master-0 kubenswrapper[4143]: I0313 12:37:21.024770 4143 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/cluster-monitoring-operator-674cbfbd9d-zwtdz"
Mar 13 12:37:21.029916 master-0 kubenswrapper[4143]: I0313 12:37:21.026043 4143 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-66c7586884-cz8pc"]
Mar 13 12:37:21.029916 master-0 kubenswrapper[4143]: I0313 12:37:21.026574 4143 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-66c7586884-cz8pc"
Mar 13 12:37:21.029916 master-0 kubenswrapper[4143]: I0313 12:37:21.028372 4143 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-7f65c457f5-hrm82"]
Mar 13 12:37:21.030338 master-0 kubenswrapper[4143]: I0313 12:37:21.029987 4143 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-7f65c457f5-hrm82"
Mar 13 12:37:21.035170 master-0 kubenswrapper[4143]: I0313 12:37:21.031498 4143 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-854648ff6d-669qk"]
Mar 13 12:37:21.035170 master-0 kubenswrapper[4143]: I0313 12:37:21.032441 4143 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-olm-operator/cluster-olm-operator-77899cf6d-7nvbn"]
Mar 13 12:37:21.035170 master-0 kubenswrapper[4143]: I0313 12:37:21.033066 4143 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-olm-operator/cluster-olm-operator-77899cf6d-7nvbn"
Mar 13 12:37:21.035170 master-0 kubenswrapper[4143]: I0313 12:37:21.032449 4143 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-854648ff6d-669qk"
Mar 13 12:37:21.046530 master-0 kubenswrapper[4143]: I0313 12:37:21.046489 4143 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-5884b9cd56-hjzms"]
Mar 13 12:37:21.046530 master-0 kubenswrapper[4143]: I0313 12:37:21.046529 4143 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-64488f9d78-t8fb4"]
Mar 13 12:37:21.074170 master-0 kubenswrapper[4143]: I0313 12:37:21.074052 4143 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config"
Mar 13 12:37:21.074415 master-0 kubenswrapper[4143]: I0313 12:37:21.074395 4143 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-node-tuning-operator"/"openshift-service-ca.crt"
Mar 13 12:37:21.074614 master-0 kubenswrapper[4143]: I0313 12:37:21.074576 4143 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt"
Mar 13 12:37:21.074761 master-0 kubenswrapper[4143]: I0313 12:37:21.074285 4143 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert"
Mar 13 12:37:21.075056 master-0 kubenswrapper[4143]: I0313 12:37:21.075041 4143 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle"
Mar 13 12:37:21.082186 master-0 kubenswrapper[4143]: I0313 12:37:21.074333 4143 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"openshift-service-ca.crt"
Mar 13 12:37:21.082186 master-0 kubenswrapper[4143]: I0313 12:37:21.074361 4143 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"cluster-monitoring-operator-tls"
Mar 13 12:37:21.082186 master-0 kubenswrapper[4143]: I0313 12:37:21.077061 4143 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-86d7cdfdfb-br96g"]
Mar 13 12:37:21.083819 master-0 kubenswrapper[4143]: I0313 12:37:21.083773 4143 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-node-tuning-operator"/"performance-addon-operator-webhook-cert"
Mar 13 12:37:21.083944 master-0 kubenswrapper[4143]: I0313 12:37:21.083923 4143 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-node-tuning-operator"/"node-tuning-operator-tls"
Mar 13 12:37:21.098160 master-0 kubenswrapper[4143]: I0313 12:37:21.097720 4143 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert"
Mar 13 12:37:21.098517 master-0 kubenswrapper[4143]: I0313 12:37:21.098484 4143 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"telemetry-config"
Mar 13 12:37:21.104831 master-0 kubenswrapper[4143]: I0313 12:37:21.104778 4143 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert"
Mar 13 12:37:21.105015 master-0 kubenswrapper[4143]: I0313 12:37:21.104991 4143 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-olm-operator"/"openshift-service-ca.crt"
Mar 13 12:37:21.105316 master-0 kubenswrapper[4143]: I0313 12:37:21.105279 4143 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret"
Mar 13 12:37:21.105384 master-0 kubenswrapper[4143]: I0313 12:37:21.105344 4143 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"kube-root-ca.crt"
Mar 13 12:37:21.105502 master-0 kubenswrapper[4143]: I0313 12:37:21.105051 4143 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt"
Mar 13 12:37:21.105810 master-0 kubenswrapper[4143]: I0313 12:37:21.105764 4143 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-node-tuning-operator"/"kube-root-ca.crt"
Mar 13 12:37:21.105894 master-0 kubenswrapper[4143]: I0313 12:37:21.105875 4143 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-olm-operator"/"cluster-olm-operator-serving-cert"
Mar 13 12:37:21.105971 master-0 kubenswrapper[4143]: I0313 12:37:21.104803 4143 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert"
Mar 13 12:37:21.106165 master-0 kubenswrapper[4143]: I0313 12:37:21.106087 4143 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt"
Mar 13 12:37:21.106220 master-0 kubenswrapper[4143]: I0313 12:37:21.106208 4143 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config"
Mar 13 12:37:21.106272 master-0 kubenswrapper[4143]: I0313 12:37:21.106088 4143 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-olm-operator"/"kube-root-ca.crt"
Mar 13 12:37:21.106378 master-0 kubenswrapper[4143]: I0313 12:37:21.106361 4143 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt"
Mar 13 12:37:21.113210 master-0 kubenswrapper[4143]: I0313 12:37:21.109838 4143 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-node-tuning-operator"/"trusted-ca"
Mar 13 12:37:21.114149 master-0 kubenswrapper[4143]: I0313 12:37:21.114083 4143 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-589895fbb7-mmwk7"]
Mar 13 12:37:21.114264 master-0 kubenswrapper[4143]: I0313 12:37:21.114251 4143 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-storage-operator/csi-snapshot-controller-operator-5685fbc7d-97wkd"]
Mar 13 12:37:21.114338 master-0 kubenswrapper[4143]: I0313 12:37:21.114326 4143 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-olm-operator/cluster-olm-operator-77899cf6d-7nvbn"]
Mar 13 12:37:21.114415 master-0 kubenswrapper[4143]: I0313 12:37:21.114404 4143 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-799b6db4d7-xchrj"]
Mar 13 12:37:21.114489 master-0 kubenswrapper[4143]: I0313 12:37:21.114477 4143 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-69b6fc6b88-vmscz"]
Mar 13 12:37:21.115859 master-0 kubenswrapper[4143]: I0313 12:37:21.115826 4143 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-854648ff6d-669qk"]
Mar 13 12:37:21.115949 master-0 kubenswrapper[4143]: I0313 12:37:21.115867 4143 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-7f65c457f5-hrm82"]
Mar 13 12:37:21.117864 master-0 kubenswrapper[4143]: I0313 12:37:21.116729 4143 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle"
Mar 13 12:37:21.123044 master-0 kubenswrapper[4143]: I0313 12:37:21.122952 4143 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-8d675b596-96gds"]
Mar 13 12:37:21.127510 master-0 kubenswrapper[4143]: I0313 12:37:21.127465 4143 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-677db989d6-ckl2j"]
Mar 13 12:37:21.128524 master-0 kubenswrapper[4143]: I0313 12:37:21.128484 4143 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-network-operator/iptables-alerter-qz6pg"]
Mar 13 12:37:21.130745 master-0 kubenswrapper[4143]: I0313 12:37:21.130641 4143 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-68bd585b-qxmnf"]
Mar 13 12:37:21.130745 master-0 kubenswrapper[4143]: I0313 12:37:21.130682 4143 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-8565d84698-hj2wk"]
Mar 13 12:37:21.130856 master-0 kubenswrapper[4143]: I0313 12:37:21.130772 4143 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-qz6pg"
Mar 13 12:37:21.131030 master-0 kubenswrapper[4143]: I0313 12:37:21.131005 4143 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-clrz7\" (UniqueName: \"kubernetes.io/projected/15b592d6-3c48-45d4-9172-d28632ae8995-kube-api-access-clrz7\") pod \"etcd-operator-5884b9cd56-hjzms\" (UID: \"15b592d6-3c48-45d4-9172-d28632ae8995\") " pod="openshift-etcd-operator/etcd-operator-5884b9cd56-hjzms"
Mar 13 12:37:21.131088 master-0 kubenswrapper[4143]: I0313 12:37:21.131031 4143 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3d653e1a-5903-4a02-9357-df145f028c0d-package-server-manager-serving-cert\") pod \"package-server-manager-854648ff6d-669qk\" (UID: \"3d653e1a-5903-4a02-9357-df145f028c0d\") " pod="openshift-operator-lifecycle-manager/package-server-manager-854648ff6d-669qk"
Mar 13 12:37:21.131088 master-0 kubenswrapper[4143]: I0313 12:37:21.131050 4143 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cluster-olm-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/887d261f-d07f-4ef0-a230-6568f47acf4d-cluster-olm-operator-serving-cert\") pod \"cluster-olm-operator-77899cf6d-7nvbn\" (UID: \"887d261f-d07f-4ef0-a230-6568f47acf4d\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-77899cf6d-7nvbn"
Mar 13 12:37:21.131088 master-0 kubenswrapper[4143]: I0313 12:37:21.131066 4143 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/15b592d6-3c48-45d4-9172-d28632ae8995-etcd-client\") pod \"etcd-operator-5884b9cd56-hjzms\" (UID: \"15b592d6-3c48-45d4-9172-d28632ae8995\") " pod="openshift-etcd-operator/etcd-operator-5884b9cd56-hjzms"
Mar 13 12:37:21.131088 master-0 kubenswrapper[4143]: I0313 12:37:21.131082 4143 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m2p67\" (UniqueName: \"kubernetes.io/projected/13f32761-b386-4f93-b3c0-b16ea53d338a-kube-api-access-m2p67\") pod \"dns-operator-589895fbb7-mmwk7\" (UID: \"13f32761-b386-4f93-b3c0-b16ea53d338a\") " pod="openshift-dns-operator/dns-operator-589895fbb7-mmwk7"
Mar 13 12:37:21.131355 master-0 kubenswrapper[4143]: I0313 12:37:21.131100 4143 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/034aaf8e-95df-4171-bae4-e7abe58d15f7-config\") pod \"service-ca-operator-69b6fc6b88-vmscz\" (UID: \"034aaf8e-95df-4171-bae4-e7abe58d15f7\") " pod="openshift-service-ca-operator/service-ca-operator-69b6fc6b88-vmscz"
Mar 13 12:37:21.131355 master-0 kubenswrapper[4143]: I0313 12:37:21.131158 4143 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m4tnq\" (UniqueName: \"kubernetes.io/projected/d11f8baa-6e8e-4ac0-9b23-1c44efd0ab2a-kube-api-access-m4tnq\") pod \"authentication-operator-7c6989d6c4-tc4ht\" (UID: \"d11f8baa-6e8e-4ac0-9b23-1c44efd0ab2a\") " pod="openshift-authentication-operator/authentication-operator-7c6989d6c4-tc4ht"
Mar 13 12:37:21.131355 master-0 kubenswrapper[4143]: I0313 12:37:21.131193 4143 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/15b592d6-3c48-45d4-9172-d28632ae8995-serving-cert\") pod \"etcd-operator-5884b9cd56-hjzms\" (UID: \"15b592d6-3c48-45d4-9172-d28632ae8995\") " pod="openshift-etcd-operator/etcd-operator-5884b9cd56-hjzms"
Mar 13 12:37:21.131355 master-0 kubenswrapper[4143]: I0313 12:37:21.131215 4143 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/d5a19b80-d488-46d3-a4a8-0b80361077e1-srv-cert\") pod \"olm-operator-d64cfc9db-rfqb9\" (UID: \"d5a19b80-d488-46d3-a4a8-0b80361077e1\") " pod="openshift-operator-lifecycle-manager/olm-operator-d64cfc9db-rfqb9"
Mar 13 12:37:21.131355 master-0 kubenswrapper[4143]: I0313 12:37:21.131251 4143 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/034aaf8e-95df-4171-bae4-e7abe58d15f7-serving-cert\") pod \"service-ca-operator-69b6fc6b88-vmscz\" (UID: \"034aaf8e-95df-4171-bae4-e7abe58d15f7\") " pod="openshift-service-ca-operator/service-ca-operator-69b6fc6b88-vmscz"
Mar 13 12:37:21.131355 master-0 kubenswrapper[4143]: I0313 12:37:21.131278 4143 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/15b592d6-3c48-45d4-9172-d28632ae8995-etcd-service-ca\") pod \"etcd-operator-5884b9cd56-hjzms\" (UID: \"15b592d6-3c48-45d4-9172-d28632ae8995\") " pod="openshift-etcd-operator/etcd-operator-5884b9cd56-hjzms"
Mar 13 12:37:21.131355 master-0 kubenswrapper[4143]: I0313 12:37:21.131299 4143 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/3020d236-03e0-4916-97dd-f1085632ca43-trusted-ca\") pod \"cluster-node-tuning-operator-66c7586884-cz8pc\" (UID: \"3020d236-03e0-4916-97dd-f1085632ca43\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-66c7586884-cz8pc"
Mar 13 12:37:21.131355 master-0 kubenswrapper[4143]: I0313 12:37:21.131309 4143 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5c74bfc494-m8mqj"]
Mar 13 12:37:21.131355 master-0 kubenswrapper[4143]: I0313 12:37:21.131322 4143 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8c62b15f-001a-4b64-b85f-348aefde5d1b-serving-cert\") pod \"openshift-controller-manager-operator-8565d84698-hj2wk\" (UID: \"8c62b15f-001a-4b64-b85f-348aefde5d1b\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-8565d84698-hj2wk"
Mar 13 12:37:21.131691 master-0 kubenswrapper[4143]: I0313 12:37:21.131368 4143 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ec5ec2e2-f7b3-43a1-87da-fbbe0ee5b118-serving-cert\") pod \"kube-apiserver-operator-68bd585b-qxmnf\" (UID: \"ec5ec2e2-f7b3-43a1-87da-fbbe0ee5b118\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-68bd585b-qxmnf"
Mar 13 12:37:21.131691 master-0 kubenswrapper[4143]: I0313 12:37:21.131406 4143 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/15b592d6-3c48-45d4-9172-d28632ae8995-etcd-ca\") pod \"etcd-operator-5884b9cd56-hjzms\" (UID: \"15b592d6-3c48-45d4-9172-d28632ae8995\") " pod="openshift-etcd-operator/etcd-operator-5884b9cd56-hjzms"
Mar 13 12:37:21.131691 master-0 kubenswrapper[4143]: I0313 12:37:21.131430 4143 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f0803181-4e37-43fa-8ddc-9c76d3f61817-serving-cert\") pod \"openshift-config-operator-64488f9d78-t8fb4\" (UID: \"f0803181-4e37-43fa-8ddc-9c76d3f61817\") " pod="openshift-config-operator/openshift-config-operator-64488f9d78-t8fb4"
Mar 13 12:37:21.131691 master-0 kubenswrapper[4143]: I0313 12:37:21.131453 4143 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/77ef7e49-eb85-4f5e-94d3-a6a8619a6243-config\") pod \"kube-controller-manager-operator-86d7cdfdfb-br96g\" (UID: \"77ef7e49-eb85-4f5e-94d3-a6a8619a6243\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-86d7cdfdfb-br96g"
Mar 13 12:37:21.131691 master-0 kubenswrapper[4143]: I0313 12:37:21.131472 4143 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bcf05594-4c10-4b54-a47c-d55e323f1f87-bound-sa-token\") pod \"cluster-image-registry-operator-86d6d77c7c-q287n\" (UID: \"bcf05594-4c10-4b54-a47c-d55e323f1f87\") " pod="openshift-image-registry/cluster-image-registry-operator-86d6d77c7c-q287n"
Mar 13 12:37:21.131691 master-0 kubenswrapper[4143]: I0313 12:37:21.131489 4143 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/4c0b18db-06ad-4d58-a353-f6fd96309dea-webhook-certs\") pod \"multus-admission-controller-8d675b596-96gds\" (UID: \"4c0b18db-06ad-4d58-a353-f6fd96309dea\") " pod="openshift-multus/multus-admission-controller-8d675b596-96gds"
Mar 13 12:37:21.131691 master-0 kubenswrapper[4143]: I0313 12:37:21.131505 4143 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f5775266-5e58-44ed-81cb-dfe3faf38add-serving-cert\") pod \"kube-storage-version-migrator-operator-7f65c457f5-hrm82\" (UID: \"f5775266-5e58-44ed-81cb-dfe3faf38add\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-7f65c457f5-hrm82"
Mar 13 12:37:21.131691 master-0 kubenswrapper[4143]: I0313 12:37:21.131521 4143 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d11f8baa-6e8e-4ac0-9b23-1c44efd0ab2a-trusted-ca-bundle\") pod \"authentication-operator-7c6989d6c4-tc4ht\" (UID: \"d11f8baa-6e8e-4ac0-9b23-1c44efd0ab2a\") " pod="openshift-authentication-operator/authentication-operator-7c6989d6c4-tc4ht"
Mar 13 12:37:21.131691 master-0 kubenswrapper[4143]: I0313 12:37:21.131540 4143 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9psfn\" (UniqueName: \"kubernetes.io/projected/4c0b18db-06ad-4d58-a353-f6fd96309dea-kube-api-access-9psfn\") pod \"multus-admission-controller-8d675b596-96gds\" (UID: \"4c0b18db-06ad-4d58-a353-f6fd96309dea\") " pod="openshift-multus/multus-admission-controller-8d675b596-96gds"
Mar 13 12:37:21.131691 master-0 kubenswrapper[4143]: I0313 12:37:21.131559 4143 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/d3d998ee-b26f-4e30-83bc-f94f8c68060a-marketplace-operator-metrics\") pod \"marketplace-operator-64bf9778cb-7qhr4\" (UID: \"d3d998ee-b26f-4e30-83bc-f94f8c68060a\") " pod="openshift-marketplace/marketplace-operator-64bf9778cb-7qhr4"
Mar 13 12:37:21.131691 master-0 kubenswrapper[4143]: I0313 12:37:21.131610 4143 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bcf05594-4c10-4b54-a47c-d55e323f1f87-trusted-ca\") pod \"cluster-image-registry-operator-86d6d77c7c-q287n\" (UID: 
\"bcf05594-4c10-4b54-a47c-d55e323f1f87\") " pod="openshift-image-registry/cluster-image-registry-operator-86d6d77c7c-q287n" Mar 13 12:37:21.131691 master-0 kubenswrapper[4143]: I0313 12:37:21.131689 4143 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5w5r2\" (UniqueName: \"kubernetes.io/projected/034aaf8e-95df-4171-bae4-e7abe58d15f7-kube-api-access-5w5r2\") pod \"service-ca-operator-69b6fc6b88-vmscz\" (UID: \"034aaf8e-95df-4171-bae4-e7abe58d15f7\") " pod="openshift-service-ca-operator/service-ca-operator-69b6fc6b88-vmscz" Mar 13 12:37:21.132166 master-0 kubenswrapper[4143]: I0313 12:37:21.131739 4143 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d11f8baa-6e8e-4ac0-9b23-1c44efd0ab2a-config\") pod \"authentication-operator-7c6989d6c4-tc4ht\" (UID: \"d11f8baa-6e8e-4ac0-9b23-1c44efd0ab2a\") " pod="openshift-authentication-operator/authentication-operator-7c6989d6c4-tc4ht" Mar 13 12:37:21.132166 master-0 kubenswrapper[4143]: I0313 12:37:21.131765 4143 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0da84bb7-e936-49a0-96b5-614a1305d6a4-config\") pod \"openshift-kube-scheduler-operator-5c74bfc494-m8mqj\" (UID: \"0da84bb7-e936-49a0-96b5-614a1305d6a4\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5c74bfc494-m8mqj" Mar 13 12:37:21.132166 master-0 kubenswrapper[4143]: I0313 12:37:21.131786 4143 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/13f32761-b386-4f93-b3c0-b16ea53d338a-metrics-tls\") pod \"dns-operator-589895fbb7-mmwk7\" (UID: \"13f32761-b386-4f93-b3c0-b16ea53d338a\") " pod="openshift-dns-operator/dns-operator-589895fbb7-mmwk7" Mar 13 12:37:21.132166 master-0 kubenswrapper[4143]: 
I0313 12:37:21.131815 4143 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c24hd\" (UniqueName: \"kubernetes.io/projected/3020d236-03e0-4916-97dd-f1085632ca43-kube-api-access-c24hd\") pod \"cluster-node-tuning-operator-66c7586884-cz8pc\" (UID: \"3020d236-03e0-4916-97dd-f1085632ca43\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-66c7586884-cz8pc" Mar 13 12:37:21.132166 master-0 kubenswrapper[4143]: I0313 12:37:21.131842 4143 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zbk4f\" (UniqueName: \"kubernetes.io/projected/10944f9c-8ce9-44e6-9c36-a0ea19d8cae3-kube-api-access-zbk4f\") pod \"catalog-operator-7d9c49f57b-tlnkd\" (UID: \"10944f9c-8ce9-44e6-9c36-a0ea19d8cae3\") " pod="openshift-operator-lifecycle-manager/catalog-operator-7d9c49f57b-tlnkd" Mar 13 12:37:21.132166 master-0 kubenswrapper[4143]: I0313 12:37:21.131863 4143 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operand-assets\" (UniqueName: \"kubernetes.io/empty-dir/887d261f-d07f-4ef0-a230-6568f47acf4d-operand-assets\") pod \"cluster-olm-operator-77899cf6d-7nvbn\" (UID: \"887d261f-d07f-4ef0-a230-6568f47acf4d\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-77899cf6d-7nvbn" Mar 13 12:37:21.132166 master-0 kubenswrapper[4143]: I0313 12:37:21.131884 4143 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/f0803181-4e37-43fa-8ddc-9c76d3f61817-available-featuregates\") pod \"openshift-config-operator-64488f9d78-t8fb4\" (UID: \"f0803181-4e37-43fa-8ddc-9c76d3f61817\") " pod="openshift-config-operator/openshift-config-operator-64488f9d78-t8fb4" Mar 13 12:37:21.132166 master-0 kubenswrapper[4143]: I0313 12:37:21.131905 4143 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-config\" (UniqueName: \"kubernetes.io/configmap/604456a0-4997-43bc-87ef-283a002111fe-telemetry-config\") pod \"cluster-monitoring-operator-674cbfbd9d-zwtdz\" (UID: \"604456a0-4997-43bc-87ef-283a002111fe\") " pod="openshift-monitoring/cluster-monitoring-operator-674cbfbd9d-zwtdz" Mar 13 12:37:21.132166 master-0 kubenswrapper[4143]: I0313 12:37:21.131936 4143 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/77ef7e49-eb85-4f5e-94d3-a6a8619a6243-serving-cert\") pod \"kube-controller-manager-operator-86d7cdfdfb-br96g\" (UID: \"77ef7e49-eb85-4f5e-94d3-a6a8619a6243\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-86d7cdfdfb-br96g" Mar 13 12:37:21.132166 master-0 kubenswrapper[4143]: I0313 12:37:21.131962 4143 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/3020d236-03e0-4916-97dd-f1085632ca43-apiservice-cert\") pod \"cluster-node-tuning-operator-66c7586884-cz8pc\" (UID: \"3020d236-03e0-4916-97dd-f1085632ca43\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-66c7586884-cz8pc" Mar 13 12:37:21.132166 master-0 kubenswrapper[4143]: I0313 12:37:21.131984 4143 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bwjz5\" (UniqueName: \"kubernetes.io/projected/4e279dcc-35e2-4503-babc-978ac208c150-kube-api-access-bwjz5\") pod \"csi-snapshot-controller-operator-5685fbc7d-97wkd\" (UID: \"4e279dcc-35e2-4503-babc-978ac208c150\") " pod="openshift-cluster-storage-operator/csi-snapshot-controller-operator-5685fbc7d-97wkd" Mar 13 12:37:21.132166 master-0 kubenswrapper[4143]: I0313 12:37:21.132010 4143 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d11f8baa-6e8e-4ac0-9b23-1c44efd0ab2a-service-ca-bundle\") pod \"authentication-operator-7c6989d6c4-tc4ht\" (UID: \"d11f8baa-6e8e-4ac0-9b23-1c44efd0ab2a\") " pod="openshift-authentication-operator/authentication-operator-7c6989d6c4-tc4ht" Mar 13 12:37:21.132166 master-0 kubenswrapper[4143]: I0313 12:37:21.132056 4143 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/089cfabc-9d3d-4260-bb16-8b5eaf73b3fa-config\") pod \"openshift-apiserver-operator-799b6db4d7-xchrj\" (UID: \"089cfabc-9d3d-4260-bb16-8b5eaf73b3fa\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-799b6db4d7-xchrj" Mar 13 12:37:21.132166 master-0 kubenswrapper[4143]: I0313 12:37:21.132081 4143 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8sk7j\" (UniqueName: \"kubernetes.io/projected/604456a0-4997-43bc-87ef-283a002111fe-kube-api-access-8sk7j\") pod \"cluster-monitoring-operator-674cbfbd9d-zwtdz\" (UID: \"604456a0-4997-43bc-87ef-283a002111fe\") " pod="openshift-monitoring/cluster-monitoring-operator-674cbfbd9d-zwtdz" Mar 13 12:37:21.132641 master-0 kubenswrapper[4143]: I0313 12:37:21.132106 4143 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8c62b15f-001a-4b64-b85f-348aefde5d1b-config\") pod \"openshift-controller-manager-operator-8565d84698-hj2wk\" (UID: \"8c62b15f-001a-4b64-b85f-348aefde5d1b\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-8565d84698-hj2wk" Mar 13 12:37:21.132641 master-0 kubenswrapper[4143]: I0313 12:37:21.132130 4143 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f5775266-5e58-44ed-81cb-dfe3faf38add-config\") pod 
\"kube-storage-version-migrator-operator-7f65c457f5-hrm82\" (UID: \"f5775266-5e58-44ed-81cb-dfe3faf38add\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-7f65c457f5-hrm82" Mar 13 12:37:21.132641 master-0 kubenswrapper[4143]: I0313 12:37:21.132176 4143 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6x8kz\" (UniqueName: \"kubernetes.io/projected/3d653e1a-5903-4a02-9357-df145f028c0d-kube-api-access-6x8kz\") pod \"package-server-manager-854648ff6d-669qk\" (UID: \"3d653e1a-5903-4a02-9357-df145f028c0d\") " pod="openshift-operator-lifecycle-manager/package-server-manager-854648ff6d-669qk" Mar 13 12:37:21.132641 master-0 kubenswrapper[4143]: I0313 12:37:21.132241 4143 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/089cfabc-9d3d-4260-bb16-8b5eaf73b3fa-serving-cert\") pod \"openshift-apiserver-operator-799b6db4d7-xchrj\" (UID: \"089cfabc-9d3d-4260-bb16-8b5eaf73b3fa\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-799b6db4d7-xchrj" Mar 13 12:37:21.132641 master-0 kubenswrapper[4143]: I0313 12:37:21.132282 4143 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-66c7586884-cz8pc"] Mar 13 12:37:21.132641 master-0 kubenswrapper[4143]: I0313 12:37:21.132343 4143 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vg8tz\" (UniqueName: \"kubernetes.io/projected/089cfabc-9d3d-4260-bb16-8b5eaf73b3fa-kube-api-access-vg8tz\") pod \"openshift-apiserver-operator-799b6db4d7-xchrj\" (UID: \"089cfabc-9d3d-4260-bb16-8b5eaf73b3fa\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-799b6db4d7-xchrj" Mar 13 12:37:21.132641 master-0 kubenswrapper[4143]: I0313 12:37:21.132407 4143 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/604456a0-4997-43bc-87ef-283a002111fe-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-674cbfbd9d-zwtdz\" (UID: \"604456a0-4997-43bc-87ef-283a002111fe\") " pod="openshift-monitoring/cluster-monitoring-operator-674cbfbd9d-zwtdz" Mar 13 12:37:21.132641 master-0 kubenswrapper[4143]: I0313 12:37:21.132467 4143 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0da84bb7-e936-49a0-96b5-614a1305d6a4-serving-cert\") pod \"openshift-kube-scheduler-operator-5c74bfc494-m8mqj\" (UID: \"0da84bb7-e936-49a0-96b5-614a1305d6a4\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5c74bfc494-m8mqj" Mar 13 12:37:21.132641 master-0 kubenswrapper[4143]: I0313 12:37:21.132512 4143 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/2f79578c-bbfb-4968-893a-730deb4c01f9-metrics-tls\") pod \"ingress-operator-677db989d6-ckl2j\" (UID: \"2f79578c-bbfb-4968-893a-730deb4c01f9\") " pod="openshift-ingress-operator/ingress-operator-677db989d6-ckl2j" Mar 13 12:37:21.132641 master-0 kubenswrapper[4143]: I0313 12:37:21.132559 4143 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ec5ec2e2-f7b3-43a1-87da-fbbe0ee5b118-config\") pod \"kube-apiserver-operator-68bd585b-qxmnf\" (UID: \"ec5ec2e2-f7b3-43a1-87da-fbbe0ee5b118\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-68bd585b-qxmnf" Mar 13 12:37:21.132641 master-0 kubenswrapper[4143]: I0313 12:37:21.132602 4143 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: 
\"kubernetes.io/projected/77ef7e49-eb85-4f5e-94d3-a6a8619a6243-kube-api-access\") pod \"kube-controller-manager-operator-86d7cdfdfb-br96g\" (UID: \"77ef7e49-eb85-4f5e-94d3-a6a8619a6243\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-86d7cdfdfb-br96g" Mar 13 12:37:21.132641 master-0 kubenswrapper[4143]: I0313 12:37:21.132623 4143 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9q2qc\" (UniqueName: \"kubernetes.io/projected/f5775266-5e58-44ed-81cb-dfe3faf38add-kube-api-access-9q2qc\") pod \"kube-storage-version-migrator-operator-7f65c457f5-hrm82\" (UID: \"f5775266-5e58-44ed-81cb-dfe3faf38add\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-7f65c457f5-hrm82" Mar 13 12:37:21.133264 master-0 kubenswrapper[4143]: I0313 12:37:21.132654 4143 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x5nb7\" (UniqueName: \"kubernetes.io/projected/d3d998ee-b26f-4e30-83bc-f94f8c68060a-kube-api-access-x5nb7\") pod \"marketplace-operator-64bf9778cb-7qhr4\" (UID: \"d3d998ee-b26f-4e30-83bc-f94f8c68060a\") " pod="openshift-marketplace/marketplace-operator-64bf9778cb-7qhr4" Mar 13 12:37:21.133264 master-0 kubenswrapper[4143]: I0313 12:37:21.132692 4143 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pmfxj\" (UniqueName: \"kubernetes.io/projected/887d261f-d07f-4ef0-a230-6568f47acf4d-kube-api-access-pmfxj\") pod \"cluster-olm-operator-77899cf6d-7nvbn\" (UID: \"887d261f-d07f-4ef0-a230-6568f47acf4d\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-77899cf6d-7nvbn" Mar 13 12:37:21.133264 master-0 kubenswrapper[4143]: I0313 12:37:21.132730 4143 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: 
\"kubernetes.io/projected/2f79578c-bbfb-4968-893a-730deb4c01f9-bound-sa-token\") pod \"ingress-operator-677db989d6-ckl2j\" (UID: \"2f79578c-bbfb-4968-893a-730deb4c01f9\") " pod="openshift-ingress-operator/ingress-operator-677db989d6-ckl2j" Mar 13 12:37:21.133264 master-0 kubenswrapper[4143]: I0313 12:37:21.132773 4143 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/10944f9c-8ce9-44e6-9c36-a0ea19d8cae3-srv-cert\") pod \"catalog-operator-7d9c49f57b-tlnkd\" (UID: \"10944f9c-8ce9-44e6-9c36-a0ea19d8cae3\") " pod="openshift-operator-lifecycle-manager/catalog-operator-7d9c49f57b-tlnkd" Mar 13 12:37:21.133264 master-0 kubenswrapper[4143]: I0313 12:37:21.132827 4143 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j4hd6\" (UniqueName: \"kubernetes.io/projected/bcf05594-4c10-4b54-a47c-d55e323f1f87-kube-api-access-j4hd6\") pod \"cluster-image-registry-operator-86d6d77c7c-q287n\" (UID: \"bcf05594-4c10-4b54-a47c-d55e323f1f87\") " pod="openshift-image-registry/cluster-image-registry-operator-86d6d77c7c-q287n" Mar 13 12:37:21.133264 master-0 kubenswrapper[4143]: I0313 12:37:21.132855 4143 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p8hcd\" (UniqueName: \"kubernetes.io/projected/d5a19b80-d488-46d3-a4a8-0b80361077e1-kube-api-access-p8hcd\") pod \"olm-operator-d64cfc9db-rfqb9\" (UID: \"d5a19b80-d488-46d3-a4a8-0b80361077e1\") " pod="openshift-operator-lifecycle-manager/olm-operator-d64cfc9db-rfqb9" Mar 13 12:37:21.133264 master-0 kubenswrapper[4143]: I0313 12:37:21.132888 4143 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f9hks\" (UniqueName: \"kubernetes.io/projected/2f79578c-bbfb-4968-893a-730deb4c01f9-kube-api-access-f9hks\") pod \"ingress-operator-677db989d6-ckl2j\" (UID: 
\"2f79578c-bbfb-4968-893a-730deb4c01f9\") " pod="openshift-ingress-operator/ingress-operator-677db989d6-ckl2j" Mar 13 12:37:21.133264 master-0 kubenswrapper[4143]: I0313 12:37:21.132903 4143 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-7d9c49f57b-tlnkd"] Mar 13 12:37:21.133264 master-0 kubenswrapper[4143]: I0313 12:37:21.132913 4143 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lwkdj\" (UniqueName: \"kubernetes.io/projected/f0803181-4e37-43fa-8ddc-9c76d3f61817-kube-api-access-lwkdj\") pod \"openshift-config-operator-64488f9d78-t8fb4\" (UID: \"f0803181-4e37-43fa-8ddc-9c76d3f61817\") " pod="openshift-config-operator/openshift-config-operator-64488f9d78-t8fb4" Mar 13 12:37:21.133264 master-0 kubenswrapper[4143]: I0313 12:37:21.132936 4143 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bcf05594-4c10-4b54-a47c-d55e323f1f87-image-registry-operator-tls\") pod \"cluster-image-registry-operator-86d6d77c7c-q287n\" (UID: \"bcf05594-4c10-4b54-a47c-d55e323f1f87\") " pod="openshift-image-registry/cluster-image-registry-operator-86d6d77c7c-q287n" Mar 13 12:37:21.133264 master-0 kubenswrapper[4143]: I0313 12:37:21.132975 4143 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ec5ec2e2-f7b3-43a1-87da-fbbe0ee5b118-kube-api-access\") pod \"kube-apiserver-operator-68bd585b-qxmnf\" (UID: \"ec5ec2e2-f7b3-43a1-87da-fbbe0ee5b118\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-68bd585b-qxmnf" Mar 13 12:37:21.133264 master-0 kubenswrapper[4143]: I0313 12:37:21.133231 4143 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: 
\"kubernetes.io/configmap/d3d998ee-b26f-4e30-83bc-f94f8c68060a-marketplace-trusted-ca\") pod \"marketplace-operator-64bf9778cb-7qhr4\" (UID: \"d3d998ee-b26f-4e30-83bc-f94f8c68060a\") " pod="openshift-marketplace/marketplace-operator-64bf9778cb-7qhr4" Mar 13 12:37:21.133264 master-0 kubenswrapper[4143]: I0313 12:37:21.133254 4143 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-tuning-operator-tls\" (UniqueName: \"kubernetes.io/secret/3020d236-03e0-4916-97dd-f1085632ca43-node-tuning-operator-tls\") pod \"cluster-node-tuning-operator-66c7586884-cz8pc\" (UID: \"3020d236-03e0-4916-97dd-f1085632ca43\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-66c7586884-cz8pc" Mar 13 12:37:21.133264 master-0 kubenswrapper[4143]: I0313 12:37:21.133273 4143 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/15b592d6-3c48-45d4-9172-d28632ae8995-config\") pod \"etcd-operator-5884b9cd56-hjzms\" (UID: \"15b592d6-3c48-45d4-9172-d28632ae8995\") " pod="openshift-etcd-operator/etcd-operator-5884b9cd56-hjzms" Mar 13 12:37:21.133787 master-0 kubenswrapper[4143]: I0313 12:37:21.133321 4143 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d11f8baa-6e8e-4ac0-9b23-1c44efd0ab2a-serving-cert\") pod \"authentication-operator-7c6989d6c4-tc4ht\" (UID: \"d11f8baa-6e8e-4ac0-9b23-1c44efd0ab2a\") " pod="openshift-authentication-operator/authentication-operator-7c6989d6c4-tc4ht" Mar 13 12:37:21.133787 master-0 kubenswrapper[4143]: I0313 12:37:21.133352 4143 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0da84bb7-e936-49a0-96b5-614a1305d6a4-kube-api-access\") pod \"openshift-kube-scheduler-operator-5c74bfc494-m8mqj\" (UID: 
\"0da84bb7-e936-49a0-96b5-614a1305d6a4\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5c74bfc494-m8mqj" Mar 13 12:37:21.133787 master-0 kubenswrapper[4143]: I0313 12:37:21.133374 4143 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/2f79578c-bbfb-4968-893a-730deb4c01f9-trusted-ca\") pod \"ingress-operator-677db989d6-ckl2j\" (UID: \"2f79578c-bbfb-4968-893a-730deb4c01f9\") " pod="openshift-ingress-operator/ingress-operator-677db989d6-ckl2j" Mar 13 12:37:21.133787 master-0 kubenswrapper[4143]: I0313 12:37:21.133409 4143 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8cf2v\" (UniqueName: \"kubernetes.io/projected/8c62b15f-001a-4b64-b85f-348aefde5d1b-kube-api-access-8cf2v\") pod \"openshift-controller-manager-operator-8565d84698-hj2wk\" (UID: \"8c62b15f-001a-4b64-b85f-348aefde5d1b\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-8565d84698-hj2wk" Mar 13 12:37:21.134021 master-0 kubenswrapper[4143]: I0313 12:37:21.133819 4143 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-7c6989d6c4-tc4ht"] Mar 13 12:37:21.137733 master-0 kubenswrapper[4143]: I0313 12:37:21.137005 4143 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Mar 13 12:37:21.149453 master-0 kubenswrapper[4143]: I0313 12:37:21.149403 4143 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-64bf9778cb-7qhr4"] Mar 13 12:37:21.151400 master-0 kubenswrapper[4143]: I0313 12:37:21.151040 4143 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/cluster-monitoring-operator-674cbfbd9d-zwtdz"] Mar 13 12:37:21.151700 master-0 kubenswrapper[4143]: I0313 12:37:21.151676 4143 kubelet.go:2428] 
"SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-d64cfc9db-rfqb9"] Mar 13 12:37:21.168322 master-0 kubenswrapper[4143]: I0313 12:37:21.168294 4143 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-86d6d77c7c-q287n"] Mar 13 12:37:21.233971 master-0 kubenswrapper[4143]: I0313 12:37:21.233833 4143 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f5775266-5e58-44ed-81cb-dfe3faf38add-config\") pod \"kube-storage-version-migrator-operator-7f65c457f5-hrm82\" (UID: \"f5775266-5e58-44ed-81cb-dfe3faf38add\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-7f65c457f5-hrm82" Mar 13 12:37:21.233971 master-0 kubenswrapper[4143]: I0313 12:37:21.233871 4143 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6x8kz\" (UniqueName: \"kubernetes.io/projected/3d653e1a-5903-4a02-9357-df145f028c0d-kube-api-access-6x8kz\") pod \"package-server-manager-854648ff6d-669qk\" (UID: \"3d653e1a-5903-4a02-9357-df145f028c0d\") " pod="openshift-operator-lifecycle-manager/package-server-manager-854648ff6d-669qk" Mar 13 12:37:21.233971 master-0 kubenswrapper[4143]: I0313 12:37:21.233896 4143 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/089cfabc-9d3d-4260-bb16-8b5eaf73b3fa-serving-cert\") pod \"openshift-apiserver-operator-799b6db4d7-xchrj\" (UID: \"089cfabc-9d3d-4260-bb16-8b5eaf73b3fa\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-799b6db4d7-xchrj" Mar 13 12:37:21.236808 master-0 kubenswrapper[4143]: I0313 12:37:21.234197 4143 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vg8tz\" (UniqueName: \"kubernetes.io/projected/089cfabc-9d3d-4260-bb16-8b5eaf73b3fa-kube-api-access-vg8tz\") pod 
\"openshift-apiserver-operator-799b6db4d7-xchrj\" (UID: \"089cfabc-9d3d-4260-bb16-8b5eaf73b3fa\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-799b6db4d7-xchrj" Mar 13 12:37:21.236808 master-0 kubenswrapper[4143]: I0313 12:37:21.234290 4143 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/604456a0-4997-43bc-87ef-283a002111fe-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-674cbfbd9d-zwtdz\" (UID: \"604456a0-4997-43bc-87ef-283a002111fe\") " pod="openshift-monitoring/cluster-monitoring-operator-674cbfbd9d-zwtdz" Mar 13 12:37:21.236808 master-0 kubenswrapper[4143]: I0313 12:37:21.234314 4143 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0da84bb7-e936-49a0-96b5-614a1305d6a4-serving-cert\") pod \"openshift-kube-scheduler-operator-5c74bfc494-m8mqj\" (UID: \"0da84bb7-e936-49a0-96b5-614a1305d6a4\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5c74bfc494-m8mqj" Mar 13 12:37:21.236808 master-0 kubenswrapper[4143]: I0313 12:37:21.234362 4143 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/2f79578c-bbfb-4968-893a-730deb4c01f9-metrics-tls\") pod \"ingress-operator-677db989d6-ckl2j\" (UID: \"2f79578c-bbfb-4968-893a-730deb4c01f9\") " pod="openshift-ingress-operator/ingress-operator-677db989d6-ckl2j" Mar 13 12:37:21.236808 master-0 kubenswrapper[4143]: I0313 12:37:21.234381 4143 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ec5ec2e2-f7b3-43a1-87da-fbbe0ee5b118-config\") pod \"kube-apiserver-operator-68bd585b-qxmnf\" (UID: \"ec5ec2e2-f7b3-43a1-87da-fbbe0ee5b118\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-68bd585b-qxmnf" Mar 13 12:37:21.236808 master-0 
kubenswrapper[4143]: I0313 12:37:21.234918 4143 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f5775266-5e58-44ed-81cb-dfe3faf38add-config\") pod \"kube-storage-version-migrator-operator-7f65c457f5-hrm82\" (UID: \"f5775266-5e58-44ed-81cb-dfe3faf38add\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-7f65c457f5-hrm82"
Mar 13 12:37:21.236808 master-0 kubenswrapper[4143]: E0313 12:37:21.235443 4143 secret.go:189] Couldn't get secret openshift-monitoring/cluster-monitoring-operator-tls: secret "cluster-monitoring-operator-tls" not found
Mar 13 12:37:21.236808 master-0 kubenswrapper[4143]: I0313 12:37:21.235477 4143 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/77ef7e49-eb85-4f5e-94d3-a6a8619a6243-kube-api-access\") pod \"kube-controller-manager-operator-86d7cdfdfb-br96g\" (UID: \"77ef7e49-eb85-4f5e-94d3-a6a8619a6243\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-86d7cdfdfb-br96g"
Mar 13 12:37:21.236808 master-0 kubenswrapper[4143]: E0313 12:37:21.235518 4143 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/604456a0-4997-43bc-87ef-283a002111fe-cluster-monitoring-operator-tls podName:604456a0-4997-43bc-87ef-283a002111fe nodeName:}" failed. No retries permitted until 2026-03-13 12:37:21.735493308 +0000 UTC m=+147.482637642 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cluster-monitoring-operator-tls" (UniqueName: "kubernetes.io/secret/604456a0-4997-43bc-87ef-283a002111fe-cluster-monitoring-operator-tls") pod "cluster-monitoring-operator-674cbfbd9d-zwtdz" (UID: "604456a0-4997-43bc-87ef-283a002111fe") : secret "cluster-monitoring-operator-tls" not found
Mar 13 12:37:21.236808 master-0 kubenswrapper[4143]: I0313 12:37:21.235573 4143 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9q2qc\" (UniqueName: \"kubernetes.io/projected/f5775266-5e58-44ed-81cb-dfe3faf38add-kube-api-access-9q2qc\") pod \"kube-storage-version-migrator-operator-7f65c457f5-hrm82\" (UID: \"f5775266-5e58-44ed-81cb-dfe3faf38add\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-7f65c457f5-hrm82"
Mar 13 12:37:21.236808 master-0 kubenswrapper[4143]: I0313 12:37:21.235604 4143 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x5nb7\" (UniqueName: \"kubernetes.io/projected/d3d998ee-b26f-4e30-83bc-f94f8c68060a-kube-api-access-x5nb7\") pod \"marketplace-operator-64bf9778cb-7qhr4\" (UID: \"d3d998ee-b26f-4e30-83bc-f94f8c68060a\") " pod="openshift-marketplace/marketplace-operator-64bf9778cb-7qhr4"
Mar 13 12:37:21.236808 master-0 kubenswrapper[4143]: I0313 12:37:21.235655 4143 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pmfxj\" (UniqueName: \"kubernetes.io/projected/887d261f-d07f-4ef0-a230-6568f47acf4d-kube-api-access-pmfxj\") pod \"cluster-olm-operator-77899cf6d-7nvbn\" (UID: \"887d261f-d07f-4ef0-a230-6568f47acf4d\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-77899cf6d-7nvbn"
Mar 13 12:37:21.236808 master-0 kubenswrapper[4143]: I0313 12:37:21.235686 4143 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/10944f9c-8ce9-44e6-9c36-a0ea19d8cae3-srv-cert\") pod \"catalog-operator-7d9c49f57b-tlnkd\" (UID: \"10944f9c-8ce9-44e6-9c36-a0ea19d8cae3\") " pod="openshift-operator-lifecycle-manager/catalog-operator-7d9c49f57b-tlnkd"
Mar 13 12:37:21.236808 master-0 kubenswrapper[4143]: I0313 12:37:21.235715 4143 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/2f79578c-bbfb-4968-893a-730deb4c01f9-bound-sa-token\") pod \"ingress-operator-677db989d6-ckl2j\" (UID: \"2f79578c-bbfb-4968-893a-730deb4c01f9\") " pod="openshift-ingress-operator/ingress-operator-677db989d6-ckl2j"
Mar 13 12:37:21.236808 master-0 kubenswrapper[4143]: I0313 12:37:21.235747 4143 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j4hd6\" (UniqueName: \"kubernetes.io/projected/bcf05594-4c10-4b54-a47c-d55e323f1f87-kube-api-access-j4hd6\") pod \"cluster-image-registry-operator-86d6d77c7c-q287n\" (UID: \"bcf05594-4c10-4b54-a47c-d55e323f1f87\") " pod="openshift-image-registry/cluster-image-registry-operator-86d6d77c7c-q287n"
Mar 13 12:37:21.240560 master-0 kubenswrapper[4143]: E0313 12:37:21.237462 4143 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/catalog-operator-serving-cert: secret "catalog-operator-serving-cert" not found
Mar 13 12:37:21.240560 master-0 kubenswrapper[4143]: E0313 12:37:21.237501 4143 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/10944f9c-8ce9-44e6-9c36-a0ea19d8cae3-srv-cert podName:10944f9c-8ce9-44e6-9c36-a0ea19d8cae3 nodeName:}" failed. No retries permitted until 2026-03-13 12:37:21.737487906 +0000 UTC m=+147.484632230 (durationBeforeRetry 500ms).
Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/10944f9c-8ce9-44e6-9c36-a0ea19d8cae3-srv-cert") pod "catalog-operator-7d9c49f57b-tlnkd" (UID: "10944f9c-8ce9-44e6-9c36-a0ea19d8cae3") : secret "catalog-operator-serving-cert" not found
Mar 13 12:37:21.240560 master-0 kubenswrapper[4143]: I0313 12:37:21.239131 4143 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p8hcd\" (UniqueName: \"kubernetes.io/projected/d5a19b80-d488-46d3-a4a8-0b80361077e1-kube-api-access-p8hcd\") pod \"olm-operator-d64cfc9db-rfqb9\" (UID: \"d5a19b80-d488-46d3-a4a8-0b80361077e1\") " pod="openshift-operator-lifecycle-manager/olm-operator-d64cfc9db-rfqb9"
Mar 13 12:37:21.240560 master-0 kubenswrapper[4143]: I0313 12:37:21.240284 4143 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f9hks\" (UniqueName: \"kubernetes.io/projected/2f79578c-bbfb-4968-893a-730deb4c01f9-kube-api-access-f9hks\") pod \"ingress-operator-677db989d6-ckl2j\" (UID: \"2f79578c-bbfb-4968-893a-730deb4c01f9\") " pod="openshift-ingress-operator/ingress-operator-677db989d6-ckl2j"
Mar 13 12:37:21.240560 master-0 kubenswrapper[4143]: I0313 12:37:21.240320 4143 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lwkdj\" (UniqueName: \"kubernetes.io/projected/f0803181-4e37-43fa-8ddc-9c76d3f61817-kube-api-access-lwkdj\") pod \"openshift-config-operator-64488f9d78-t8fb4\" (UID: \"f0803181-4e37-43fa-8ddc-9c76d3f61817\") " pod="openshift-config-operator/openshift-config-operator-64488f9d78-t8fb4"
Mar 13 12:37:21.240560 master-0 kubenswrapper[4143]: I0313 12:37:21.240353 4143 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bcf05594-4c10-4b54-a47c-d55e323f1f87-image-registry-operator-tls\") pod \"cluster-image-registry-operator-86d6d77c7c-q287n\" (UID: \"bcf05594-4c10-4b54-a47c-d55e323f1f87\") " pod="openshift-image-registry/cluster-image-registry-operator-86d6d77c7c-q287n"
Mar 13 12:37:21.240560 master-0 kubenswrapper[4143]: I0313 12:37:21.240401 4143 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xstz5\" (UniqueName: \"kubernetes.io/projected/08e2bc8e-ca80-454c-81dc-211d122e32e0-kube-api-access-xstz5\") pod \"iptables-alerter-qz6pg\" (UID: \"08e2bc8e-ca80-454c-81dc-211d122e32e0\") " pod="openshift-network-operator/iptables-alerter-qz6pg"
Mar 13 12:37:21.240560 master-0 kubenswrapper[4143]: I0313 12:37:21.240439 4143 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ec5ec2e2-f7b3-43a1-87da-fbbe0ee5b118-kube-api-access\") pod \"kube-apiserver-operator-68bd585b-qxmnf\" (UID: \"ec5ec2e2-f7b3-43a1-87da-fbbe0ee5b118\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-68bd585b-qxmnf"
Mar 13 12:37:21.240560 master-0 kubenswrapper[4143]: I0313 12:37:21.240475 4143 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-tuning-operator-tls\" (UniqueName: \"kubernetes.io/secret/3020d236-03e0-4916-97dd-f1085632ca43-node-tuning-operator-tls\") pod \"cluster-node-tuning-operator-66c7586884-cz8pc\" (UID: \"3020d236-03e0-4916-97dd-f1085632ca43\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-66c7586884-cz8pc"
Mar 13 12:37:21.240560 master-0 kubenswrapper[4143]: I0313 12:37:21.240512 4143 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/15b592d6-3c48-45d4-9172-d28632ae8995-config\") pod \"etcd-operator-5884b9cd56-hjzms\" (UID: \"15b592d6-3c48-45d4-9172-d28632ae8995\") " pod="openshift-etcd-operator/etcd-operator-5884b9cd56-hjzms"
Mar 13 12:37:21.240560 master-0 kubenswrapper[4143]: I0313 12:37:21.240544 4143 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/d3d998ee-b26f-4e30-83bc-f94f8c68060a-marketplace-trusted-ca\") pod \"marketplace-operator-64bf9778cb-7qhr4\" (UID: \"d3d998ee-b26f-4e30-83bc-f94f8c68060a\") " pod="openshift-marketplace/marketplace-operator-64bf9778cb-7qhr4"
Mar 13 12:37:21.240841 master-0 kubenswrapper[4143]: I0313 12:37:21.240585 4143 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8cf2v\" (UniqueName: \"kubernetes.io/projected/8c62b15f-001a-4b64-b85f-348aefde5d1b-kube-api-access-8cf2v\") pod \"openshift-controller-manager-operator-8565d84698-hj2wk\" (UID: \"8c62b15f-001a-4b64-b85f-348aefde5d1b\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-8565d84698-hj2wk"
Mar 13 12:37:21.240841 master-0 kubenswrapper[4143]: I0313 12:37:21.240623 4143 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d11f8baa-6e8e-4ac0-9b23-1c44efd0ab2a-serving-cert\") pod \"authentication-operator-7c6989d6c4-tc4ht\" (UID: \"d11f8baa-6e8e-4ac0-9b23-1c44efd0ab2a\") " pod="openshift-authentication-operator/authentication-operator-7c6989d6c4-tc4ht"
Mar 13 12:37:21.240841 master-0 kubenswrapper[4143]: I0313 12:37:21.240654 4143 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0da84bb7-e936-49a0-96b5-614a1305d6a4-kube-api-access\") pod \"openshift-kube-scheduler-operator-5c74bfc494-m8mqj\" (UID: \"0da84bb7-e936-49a0-96b5-614a1305d6a4\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5c74bfc494-m8mqj"
Mar 13 12:37:21.240841 master-0 kubenswrapper[4143]: I0313 12:37:21.240686 4143 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/2f79578c-bbfb-4968-893a-730deb4c01f9-trusted-ca\") pod \"ingress-operator-677db989d6-ckl2j\" (UID: \"2f79578c-bbfb-4968-893a-730deb4c01f9\") " pod="openshift-ingress-operator/ingress-operator-677db989d6-ckl2j"
Mar 13 12:37:21.240948 master-0 kubenswrapper[4143]: E0313 12:37:21.240907 4143 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/node-tuning-operator-tls: secret "node-tuning-operator-tls" not found
Mar 13 12:37:21.241203 master-0 kubenswrapper[4143]: E0313 12:37:21.241180 4143 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3020d236-03e0-4916-97dd-f1085632ca43-node-tuning-operator-tls podName:3020d236-03e0-4916-97dd-f1085632ca43 nodeName:}" failed. No retries permitted until 2026-03-13 12:37:21.740998157 +0000 UTC m=+147.488142491 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "node-tuning-operator-tls" (UniqueName: "kubernetes.io/secret/3020d236-03e0-4916-97dd-f1085632ca43-node-tuning-operator-tls") pod "cluster-node-tuning-operator-66c7586884-cz8pc" (UID: "3020d236-03e0-4916-97dd-f1085632ca43") : secret "node-tuning-operator-tls" not found
Mar 13 12:37:21.241448 master-0 kubenswrapper[4143]: E0313 12:37:21.241412 4143 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/package-server-manager-serving-cert: secret "package-server-manager-serving-cert" not found
Mar 13 12:37:21.241516 master-0 kubenswrapper[4143]: E0313 12:37:21.241491 4143 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3d653e1a-5903-4a02-9357-df145f028c0d-package-server-manager-serving-cert podName:3d653e1a-5903-4a02-9357-df145f028c0d nodeName:}" failed. No retries permitted until 2026-03-13 12:37:21.741470833 +0000 UTC m=+147.488615157 (durationBeforeRetry 500ms).
Error: MountVolume.SetUp failed for volume "package-server-manager-serving-cert" (UniqueName: "kubernetes.io/secret/3d653e1a-5903-4a02-9357-df145f028c0d-package-server-manager-serving-cert") pod "package-server-manager-854648ff6d-669qk" (UID: "3d653e1a-5903-4a02-9357-df145f028c0d") : secret "package-server-manager-serving-cert" not found
Mar 13 12:37:21.241729 master-0 kubenswrapper[4143]: I0313 12:37:21.241688 4143 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/15b592d6-3c48-45d4-9172-d28632ae8995-config\") pod \"etcd-operator-5884b9cd56-hjzms\" (UID: \"15b592d6-3c48-45d4-9172-d28632ae8995\") " pod="openshift-etcd-operator/etcd-operator-5884b9cd56-hjzms"
Mar 13 12:37:21.241780 master-0 kubenswrapper[4143]: I0313 12:37:21.240718 4143 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3d653e1a-5903-4a02-9357-df145f028c0d-package-server-manager-serving-cert\") pod \"package-server-manager-854648ff6d-669qk\" (UID: \"3d653e1a-5903-4a02-9357-df145f028c0d\") " pod="openshift-operator-lifecycle-manager/package-server-manager-854648ff6d-669qk"
Mar 13 12:37:21.241844 master-0 kubenswrapper[4143]: I0313 12:37:21.241824 4143 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-olm-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/887d261f-d07f-4ef0-a230-6568f47acf4d-cluster-olm-operator-serving-cert\") pod \"cluster-olm-operator-77899cf6d-7nvbn\" (UID: \"887d261f-d07f-4ef0-a230-6568f47acf4d\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-77899cf6d-7nvbn"
Mar 13 12:37:21.241888 master-0 kubenswrapper[4143]: I0313 12:37:21.241863 4143 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-clrz7\" (UniqueName: \"kubernetes.io/projected/15b592d6-3c48-45d4-9172-d28632ae8995-kube-api-access-clrz7\") pod \"etcd-operator-5884b9cd56-hjzms\" (UID: \"15b592d6-3c48-45d4-9172-d28632ae8995\") " pod="openshift-etcd-operator/etcd-operator-5884b9cd56-hjzms"
Mar 13 12:37:21.241928 master-0 kubenswrapper[4143]: I0313 12:37:21.241898 4143 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m2p67\" (UniqueName: \"kubernetes.io/projected/13f32761-b386-4f93-b3c0-b16ea53d338a-kube-api-access-m2p67\") pod \"dns-operator-589895fbb7-mmwk7\" (UID: \"13f32761-b386-4f93-b3c0-b16ea53d338a\") " pod="openshift-dns-operator/dns-operator-589895fbb7-mmwk7"
Mar 13 12:37:21.241957 master-0 kubenswrapper[4143]: I0313 12:37:21.241933 4143 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/15b592d6-3c48-45d4-9172-d28632ae8995-etcd-client\") pod \"etcd-operator-5884b9cd56-hjzms\" (UID: \"15b592d6-3c48-45d4-9172-d28632ae8995\") " pod="openshift-etcd-operator/etcd-operator-5884b9cd56-hjzms"
Mar 13 12:37:21.243429 master-0 kubenswrapper[4143]: I0313 12:37:21.241964 4143 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/d5a19b80-d488-46d3-a4a8-0b80361077e1-srv-cert\") pod \"olm-operator-d64cfc9db-rfqb9\" (UID: \"d5a19b80-d488-46d3-a4a8-0b80361077e1\") " pod="openshift-operator-lifecycle-manager/olm-operator-d64cfc9db-rfqb9"
Mar 13 12:37:21.243519 master-0 kubenswrapper[4143]: I0313 12:37:21.243491 4143 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/034aaf8e-95df-4171-bae4-e7abe58d15f7-config\") pod \"service-ca-operator-69b6fc6b88-vmscz\" (UID: \"034aaf8e-95df-4171-bae4-e7abe58d15f7\") " pod="openshift-service-ca-operator/service-ca-operator-69b6fc6b88-vmscz"
Mar 13 12:37:21.244533 master-0 kubenswrapper[4143]: I0313 12:37:21.243540 4143 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m4tnq\" (UniqueName: \"kubernetes.io/projected/d11f8baa-6e8e-4ac0-9b23-1c44efd0ab2a-kube-api-access-m4tnq\") pod \"authentication-operator-7c6989d6c4-tc4ht\" (UID: \"d11f8baa-6e8e-4ac0-9b23-1c44efd0ab2a\") " pod="openshift-authentication-operator/authentication-operator-7c6989d6c4-tc4ht"
Mar 13 12:37:21.244533 master-0 kubenswrapper[4143]: I0313 12:37:21.243573 4143 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/15b592d6-3c48-45d4-9172-d28632ae8995-serving-cert\") pod \"etcd-operator-5884b9cd56-hjzms\" (UID: \"15b592d6-3c48-45d4-9172-d28632ae8995\") " pod="openshift-etcd-operator/etcd-operator-5884b9cd56-hjzms"
Mar 13 12:37:21.244533 master-0 kubenswrapper[4143]: I0313 12:37:21.243619 4143 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/034aaf8e-95df-4171-bae4-e7abe58d15f7-serving-cert\") pod \"service-ca-operator-69b6fc6b88-vmscz\" (UID: \"034aaf8e-95df-4171-bae4-e7abe58d15f7\") " pod="openshift-service-ca-operator/service-ca-operator-69b6fc6b88-vmscz"
Mar 13 12:37:21.244533 master-0 kubenswrapper[4143]: I0313 12:37:21.243656 4143 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/08e2bc8e-ca80-454c-81dc-211d122e32e0-iptables-alerter-script\") pod \"iptables-alerter-qz6pg\" (UID: \"08e2bc8e-ca80-454c-81dc-211d122e32e0\") " pod="openshift-network-operator/iptables-alerter-qz6pg"
Mar 13 12:37:21.244533 master-0 kubenswrapper[4143]: I0313 12:37:21.243273 4143 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ec5ec2e2-f7b3-43a1-87da-fbbe0ee5b118-config\") pod \"kube-apiserver-operator-68bd585b-qxmnf\" (UID: \"ec5ec2e2-f7b3-43a1-87da-fbbe0ee5b118\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-68bd585b-qxmnf"
Mar 13 12:37:21.244533 master-0 kubenswrapper[4143]: I0313 12:37:21.243692 4143 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/15b592d6-3c48-45d4-9172-d28632ae8995-etcd-service-ca\") pod \"etcd-operator-5884b9cd56-hjzms\" (UID: \"15b592d6-3c48-45d4-9172-d28632ae8995\") " pod="openshift-etcd-operator/etcd-operator-5884b9cd56-hjzms"
Mar 13 12:37:21.244533 master-0 kubenswrapper[4143]: I0313 12:37:21.243805 4143 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/15b592d6-3c48-45d4-9172-d28632ae8995-etcd-ca\") pod \"etcd-operator-5884b9cd56-hjzms\" (UID: \"15b592d6-3c48-45d4-9172-d28632ae8995\") " pod="openshift-etcd-operator/etcd-operator-5884b9cd56-hjzms"
Mar 13 12:37:21.244533 master-0 kubenswrapper[4143]: I0313 12:37:21.243902 4143 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f0803181-4e37-43fa-8ddc-9c76d3f61817-serving-cert\") pod \"openshift-config-operator-64488f9d78-t8fb4\" (UID: \"f0803181-4e37-43fa-8ddc-9c76d3f61817\") " pod="openshift-config-operator/openshift-config-operator-64488f9d78-t8fb4"
Mar 13 12:37:21.244533 master-0 kubenswrapper[4143]: I0313 12:37:21.243936 4143 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/08e2bc8e-ca80-454c-81dc-211d122e32e0-host-slash\") pod \"iptables-alerter-qz6pg\" (UID: \"08e2bc8e-ca80-454c-81dc-211d122e32e0\") " pod="openshift-network-operator/iptables-alerter-qz6pg"
Mar 13 12:37:21.244533 master-0 kubenswrapper[4143]: I0313 12:37:21.243997 4143 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/77ef7e49-eb85-4f5e-94d3-a6a8619a6243-config\") pod \"kube-controller-manager-operator-86d7cdfdfb-br96g\" (UID: \"77ef7e49-eb85-4f5e-94d3-a6a8619a6243\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-86d7cdfdfb-br96g"
Mar 13 12:37:21.244533 master-0 kubenswrapper[4143]: I0313 12:37:21.244048 4143 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/3020d236-03e0-4916-97dd-f1085632ca43-trusted-ca\") pod \"cluster-node-tuning-operator-66c7586884-cz8pc\" (UID: \"3020d236-03e0-4916-97dd-f1085632ca43\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-66c7586884-cz8pc"
Mar 13 12:37:21.244533 master-0 kubenswrapper[4143]: I0313 12:37:21.244105 4143 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/089cfabc-9d3d-4260-bb16-8b5eaf73b3fa-serving-cert\") pod \"openshift-apiserver-operator-799b6db4d7-xchrj\" (UID: \"089cfabc-9d3d-4260-bb16-8b5eaf73b3fa\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-799b6db4d7-xchrj"
Mar 13 12:37:21.244533 master-0 kubenswrapper[4143]: I0313 12:37:21.244128 4143 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8c62b15f-001a-4b64-b85f-348aefde5d1b-serving-cert\") pod \"openshift-controller-manager-operator-8565d84698-hj2wk\" (UID: \"8c62b15f-001a-4b64-b85f-348aefde5d1b\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-8565d84698-hj2wk"
Mar 13 12:37:21.244533 master-0 kubenswrapper[4143]: I0313 12:37:21.244178 4143 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ec5ec2e2-f7b3-43a1-87da-fbbe0ee5b118-serving-cert\") pod \"kube-apiserver-operator-68bd585b-qxmnf\" (UID: \"ec5ec2e2-f7b3-43a1-87da-fbbe0ee5b118\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-68bd585b-qxmnf"
Mar 13 12:37:21.244533 master-0 kubenswrapper[4143]: I0313 12:37:21.244318 4143 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bcf05594-4c10-4b54-a47c-d55e323f1f87-bound-sa-token\") pod \"cluster-image-registry-operator-86d6d77c7c-q287n\" (UID: \"bcf05594-4c10-4b54-a47c-d55e323f1f87\") " pod="openshift-image-registry/cluster-image-registry-operator-86d6d77c7c-q287n"
Mar 13 12:37:21.245045 master-0 kubenswrapper[4143]: I0313 12:37:21.244335 4143 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/15b592d6-3c48-45d4-9172-d28632ae8995-etcd-service-ca\") pod \"etcd-operator-5884b9cd56-hjzms\" (UID: \"15b592d6-3c48-45d4-9172-d28632ae8995\") " pod="openshift-etcd-operator/etcd-operator-5884b9cd56-hjzms"
Mar 13 12:37:21.245045 master-0 kubenswrapper[4143]: I0313 12:37:21.244350 4143 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d11f8baa-6e8e-4ac0-9b23-1c44efd0ab2a-trusted-ca-bundle\") pod \"authentication-operator-7c6989d6c4-tc4ht\" (UID: \"d11f8baa-6e8e-4ac0-9b23-1c44efd0ab2a\") " pod="openshift-authentication-operator/authentication-operator-7c6989d6c4-tc4ht"
Mar 13 12:37:21.245286 master-0 kubenswrapper[4143]: I0313 12:37:21.245254 4143 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/77ef7e49-eb85-4f5e-94d3-a6a8619a6243-config\") pod \"kube-controller-manager-operator-86d7cdfdfb-br96g\" (UID: \"77ef7e49-eb85-4f5e-94d3-a6a8619a6243\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-86d7cdfdfb-br96g"
Mar 13 12:37:21.245528 master-0 kubenswrapper[4143]: I0313 12:37:21.245508 4143 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/15b592d6-3c48-45d4-9172-d28632ae8995-etcd-ca\") pod \"etcd-operator-5884b9cd56-hjzms\" (UID: \"15b592d6-3c48-45d4-9172-d28632ae8995\") " pod="openshift-etcd-operator/etcd-operator-5884b9cd56-hjzms"
Mar 13 12:37:21.245578 master-0 kubenswrapper[4143]: E0313 12:37:21.243252 4143 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/olm-operator-serving-cert: secret "olm-operator-serving-cert" not found
Mar 13 12:37:21.247257 master-0 kubenswrapper[4143]: E0313 12:37:21.246017 4143 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d5a19b80-d488-46d3-a4a8-0b80361077e1-srv-cert podName:d5a19b80-d488-46d3-a4a8-0b80361077e1 nodeName:}" failed. No retries permitted until 2026-03-13 12:37:21.745999688 +0000 UTC m=+147.493144012 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/d5a19b80-d488-46d3-a4a8-0b80361077e1-srv-cert") pod "olm-operator-d64cfc9db-rfqb9" (UID: "d5a19b80-d488-46d3-a4a8-0b80361077e1") : secret "olm-operator-serving-cert" not found
Mar 13 12:37:21.247257 master-0 kubenswrapper[4143]: I0313 12:37:21.246818 4143 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d11f8baa-6e8e-4ac0-9b23-1c44efd0ab2a-trusted-ca-bundle\") pod \"authentication-operator-7c6989d6c4-tc4ht\" (UID: \"d11f8baa-6e8e-4ac0-9b23-1c44efd0ab2a\") " pod="openshift-authentication-operator/authentication-operator-7c6989d6c4-tc4ht"
Mar 13 12:37:21.247257 master-0 kubenswrapper[4143]: I0313 12:37:21.246891 4143 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/4c0b18db-06ad-4d58-a353-f6fd96309dea-webhook-certs\") pod \"multus-admission-controller-8d675b596-96gds\" (UID: \"4c0b18db-06ad-4d58-a353-f6fd96309dea\") " pod="openshift-multus/multus-admission-controller-8d675b596-96gds"
Mar 13 12:37:21.247257 master-0 kubenswrapper[4143]: I0313 12:37:21.246913 4143 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f5775266-5e58-44ed-81cb-dfe3faf38add-serving-cert\") pod \"kube-storage-version-migrator-operator-7f65c457f5-hrm82\" (UID: \"f5775266-5e58-44ed-81cb-dfe3faf38add\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-7f65c457f5-hrm82"
Mar 13 12:37:21.247257 master-0 kubenswrapper[4143]: E0313 12:37:21.246961 4143 secret.go:189] Couldn't get secret openshift-multus/multus-admission-controller-secret: secret "multus-admission-controller-secret" not found
Mar 13 12:37:21.247257 master-0 kubenswrapper[4143]: E0313 12:37:21.246986 4143 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4c0b18db-06ad-4d58-a353-f6fd96309dea-webhook-certs podName:4c0b18db-06ad-4d58-a353-f6fd96309dea nodeName:}" failed. No retries permitted until 2026-03-13 12:37:21.746976492 +0000 UTC m=+147.494120816 (durationBeforeRetry 500ms).
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/4c0b18db-06ad-4d58-a353-f6fd96309dea-webhook-certs") pod "multus-admission-controller-8d675b596-96gds" (UID: "4c0b18db-06ad-4d58-a353-f6fd96309dea") : secret "multus-admission-controller-secret" not found
Mar 13 12:37:21.247447 master-0 kubenswrapper[4143]: I0313 12:37:21.247324 4143 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9psfn\" (UniqueName: \"kubernetes.io/projected/4c0b18db-06ad-4d58-a353-f6fd96309dea-kube-api-access-9psfn\") pod \"multus-admission-controller-8d675b596-96gds\" (UID: \"4c0b18db-06ad-4d58-a353-f6fd96309dea\") " pod="openshift-multus/multus-admission-controller-8d675b596-96gds"
Mar 13 12:37:21.247447 master-0 kubenswrapper[4143]: I0313 12:37:21.247369 4143 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/d3d998ee-b26f-4e30-83bc-f94f8c68060a-marketplace-operator-metrics\") pod \"marketplace-operator-64bf9778cb-7qhr4\" (UID: \"d3d998ee-b26f-4e30-83bc-f94f8c68060a\") " pod="openshift-marketplace/marketplace-operator-64bf9778cb-7qhr4"
Mar 13 12:37:21.248000 master-0 kubenswrapper[4143]: I0313 12:37:21.247973 4143 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5w5r2\" (UniqueName: \"kubernetes.io/projected/034aaf8e-95df-4171-bae4-e7abe58d15f7-kube-api-access-5w5r2\") pod \"service-ca-operator-69b6fc6b88-vmscz\" (UID: \"034aaf8e-95df-4171-bae4-e7abe58d15f7\") " pod="openshift-service-ca-operator/service-ca-operator-69b6fc6b88-vmscz"
Mar 13 12:37:21.248049 master-0 kubenswrapper[4143]: I0313 12:37:21.248012 4143 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bcf05594-4c10-4b54-a47c-d55e323f1f87-trusted-ca\") pod \"cluster-image-registry-operator-86d6d77c7c-q287n\" (UID: \"bcf05594-4c10-4b54-a47c-d55e323f1f87\") " pod="openshift-image-registry/cluster-image-registry-operator-86d6d77c7c-q287n"
Mar 13 12:37:21.248049 master-0 kubenswrapper[4143]: I0313 12:37:21.248038 4143 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d11f8baa-6e8e-4ac0-9b23-1c44efd0ab2a-config\") pod \"authentication-operator-7c6989d6c4-tc4ht\" (UID: \"d11f8baa-6e8e-4ac0-9b23-1c44efd0ab2a\") " pod="openshift-authentication-operator/authentication-operator-7c6989d6c4-tc4ht"
Mar 13 12:37:21.248102 master-0 kubenswrapper[4143]: I0313 12:37:21.248061 4143 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0da84bb7-e936-49a0-96b5-614a1305d6a4-config\") pod \"openshift-kube-scheduler-operator-5c74bfc494-m8mqj\" (UID: \"0da84bb7-e936-49a0-96b5-614a1305d6a4\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5c74bfc494-m8mqj"
Mar 13 12:37:21.248102 master-0 kubenswrapper[4143]: I0313 12:37:21.248085 4143 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/13f32761-b386-4f93-b3c0-b16ea53d338a-metrics-tls\") pod \"dns-operator-589895fbb7-mmwk7\" (UID: \"13f32761-b386-4f93-b3c0-b16ea53d338a\") " pod="openshift-dns-operator/dns-operator-589895fbb7-mmwk7"
Mar 13 12:37:21.248217 master-0 kubenswrapper[4143]: I0313 12:37:21.248116 4143 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c24hd\" (UniqueName: \"kubernetes.io/projected/3020d236-03e0-4916-97dd-f1085632ca43-kube-api-access-c24hd\") pod \"cluster-node-tuning-operator-66c7586884-cz8pc\" (UID: \"3020d236-03e0-4916-97dd-f1085632ca43\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-66c7586884-cz8pc"
Mar 13 12:37:21.248217 master-0 kubenswrapper[4143]: I0313 12:37:21.248155 4143 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/f0803181-4e37-43fa-8ddc-9c76d3f61817-available-featuregates\") pod \"openshift-config-operator-64488f9d78-t8fb4\" (UID: \"f0803181-4e37-43fa-8ddc-9c76d3f61817\") " pod="openshift-config-operator/openshift-config-operator-64488f9d78-t8fb4"
Mar 13 12:37:21.248217 master-0 kubenswrapper[4143]: I0313 12:37:21.248180 4143 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-config\" (UniqueName: \"kubernetes.io/configmap/604456a0-4997-43bc-87ef-283a002111fe-telemetry-config\") pod \"cluster-monitoring-operator-674cbfbd9d-zwtdz\" (UID: \"604456a0-4997-43bc-87ef-283a002111fe\") " pod="openshift-monitoring/cluster-monitoring-operator-674cbfbd9d-zwtdz"
Mar 13 12:37:21.248217 master-0 kubenswrapper[4143]: I0313 12:37:21.248208 4143 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zbk4f\" (UniqueName: \"kubernetes.io/projected/10944f9c-8ce9-44e6-9c36-a0ea19d8cae3-kube-api-access-zbk4f\") pod \"catalog-operator-7d9c49f57b-tlnkd\" (UID: \"10944f9c-8ce9-44e6-9c36-a0ea19d8cae3\") " pod="openshift-operator-lifecycle-manager/catalog-operator-7d9c49f57b-tlnkd"
Mar 13 12:37:21.248315 master-0 kubenswrapper[4143]: I0313 12:37:21.248232 4143 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operand-assets\" (UniqueName: \"kubernetes.io/empty-dir/887d261f-d07f-4ef0-a230-6568f47acf4d-operand-assets\") pod \"cluster-olm-operator-77899cf6d-7nvbn\" (UID: \"887d261f-d07f-4ef0-a230-6568f47acf4d\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-77899cf6d-7nvbn"
Mar 13 12:37:21.248315 master-0 kubenswrapper[4143]: I0313 12:37:21.248255 4143 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/3020d236-03e0-4916-97dd-f1085632ca43-apiservice-cert\") pod \"cluster-node-tuning-operator-66c7586884-cz8pc\" (UID: \"3020d236-03e0-4916-97dd-f1085632ca43\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-66c7586884-cz8pc"
Mar 13 12:37:21.248315 master-0 kubenswrapper[4143]: I0313 12:37:21.248281 4143 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bwjz5\" (UniqueName: \"kubernetes.io/projected/4e279dcc-35e2-4503-babc-978ac208c150-kube-api-access-bwjz5\") pod \"csi-snapshot-controller-operator-5685fbc7d-97wkd\" (UID: \"4e279dcc-35e2-4503-babc-978ac208c150\") " pod="openshift-cluster-storage-operator/csi-snapshot-controller-operator-5685fbc7d-97wkd"
Mar 13 12:37:21.248315 master-0 kubenswrapper[4143]: I0313 12:37:21.248306 4143 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d11f8baa-6e8e-4ac0-9b23-1c44efd0ab2a-service-ca-bundle\") pod \"authentication-operator-7c6989d6c4-tc4ht\" (UID: \"d11f8baa-6e8e-4ac0-9b23-1c44efd0ab2a\") " pod="openshift-authentication-operator/authentication-operator-7c6989d6c4-tc4ht"
Mar 13 12:37:21.248425 master-0 kubenswrapper[4143]: I0313 12:37:21.248332 4143 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/77ef7e49-eb85-4f5e-94d3-a6a8619a6243-serving-cert\") pod \"kube-controller-manager-operator-86d7cdfdfb-br96g\" (UID: \"77ef7e49-eb85-4f5e-94d3-a6a8619a6243\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-86d7cdfdfb-br96g"
Mar 13 12:37:21.248425 master-0 kubenswrapper[4143]: I0313 12:37:21.248357 4143 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/089cfabc-9d3d-4260-bb16-8b5eaf73b3fa-config\") pod \"openshift-apiserver-operator-799b6db4d7-xchrj\" (UID: \"089cfabc-9d3d-4260-bb16-8b5eaf73b3fa\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-799b6db4d7-xchrj"
Mar 13 12:37:21.248425 master-0 kubenswrapper[4143]: I0313 12:37:21.248379 4143 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8sk7j\" (UniqueName: \"kubernetes.io/projected/604456a0-4997-43bc-87ef-283a002111fe-kube-api-access-8sk7j\") pod \"cluster-monitoring-operator-674cbfbd9d-zwtdz\" (UID: \"604456a0-4997-43bc-87ef-283a002111fe\") " pod="openshift-monitoring/cluster-monitoring-operator-674cbfbd9d-zwtdz"
Mar 13 12:37:21.248425 master-0 kubenswrapper[4143]: I0313 12:37:21.248404 4143 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8c62b15f-001a-4b64-b85f-348aefde5d1b-config\") pod \"openshift-controller-manager-operator-8565d84698-hj2wk\" (UID: \"8c62b15f-001a-4b64-b85f-348aefde5d1b\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-8565d84698-hj2wk"
Mar 13 12:37:21.248736 master-0 kubenswrapper[4143]: I0313 12:37:21.248671 4143 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0da84bb7-e936-49a0-96b5-614a1305d6a4-config\") pod \"openshift-kube-scheduler-operator-5c74bfc494-m8mqj\" (UID: \"0da84bb7-e936-49a0-96b5-614a1305d6a4\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5c74bfc494-m8mqj"
Mar 13 12:37:21.249068 master-0 kubenswrapper[4143]: I0313 12:37:21.249044 4143 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8c62b15f-001a-4b64-b85f-348aefde5d1b-config\") pod \"openshift-controller-manager-operator-8565d84698-hj2wk\" (UID: \"8c62b15f-001a-4b64-b85f-348aefde5d1b\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-8565d84698-hj2wk"
Mar 13 12:37:21.249150 master-0 kubenswrapper[4143]: E0313 12:37:21.249121 4143 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/performance-addon-operator-webhook-cert: secret "performance-addon-operator-webhook-cert" not found
Mar 13 12:37:21.249190 master-0 kubenswrapper[4143]: E0313 12:37:21.249173 4143 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3020d236-03e0-4916-97dd-f1085632ca43-apiservice-cert podName:3020d236-03e0-4916-97dd-f1085632ca43 nodeName:}" failed. No retries permitted until 2026-03-13 12:37:21.749161373 +0000 UTC m=+147.496305807 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "apiservice-cert" (UniqueName: "kubernetes.io/secret/3020d236-03e0-4916-97dd-f1085632ca43-apiservice-cert") pod "cluster-node-tuning-operator-66c7586884-cz8pc" (UID: "3020d236-03e0-4916-97dd-f1085632ca43") : secret "performance-addon-operator-webhook-cert" not found
Mar 13 12:37:21.250909 master-0 kubenswrapper[4143]: I0313 12:37:21.249448 4143 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0da84bb7-e936-49a0-96b5-614a1305d6a4-serving-cert\") pod \"openshift-kube-scheduler-operator-5c74bfc494-m8mqj\" (UID: \"0da84bb7-e936-49a0-96b5-614a1305d6a4\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5c74bfc494-m8mqj"
Mar 13 12:37:21.250909 master-0 kubenswrapper[4143]: I0313 12:37:21.249494 4143 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d11f8baa-6e8e-4ac0-9b23-1c44efd0ab2a-config\") pod \"authentication-operator-7c6989d6c4-tc4ht\" (UID: \"d11f8baa-6e8e-4ac0-9b23-1c44efd0ab2a\") " pod="openshift-authentication-operator/authentication-operator-7c6989d6c4-tc4ht"
Mar 13 12:37:21.250909 master-0 kubenswrapper[4143]: I0313 12:37:21.249586 4143 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operand-assets\" (UniqueName:
\"kubernetes.io/empty-dir/887d261f-d07f-4ef0-a230-6568f47acf4d-operand-assets\") pod \"cluster-olm-operator-77899cf6d-7nvbn\" (UID: \"887d261f-d07f-4ef0-a230-6568f47acf4d\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-77899cf6d-7nvbn" Mar 13 12:37:21.250909 master-0 kubenswrapper[4143]: E0313 12:37:21.249617 4143 secret.go:189] Couldn't get secret openshift-dns-operator/metrics-tls: secret "metrics-tls" not found Mar 13 12:37:21.250909 master-0 kubenswrapper[4143]: I0313 12:37:21.249651 4143 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d11f8baa-6e8e-4ac0-9b23-1c44efd0ab2a-service-ca-bundle\") pod \"authentication-operator-7c6989d6c4-tc4ht\" (UID: \"d11f8baa-6e8e-4ac0-9b23-1c44efd0ab2a\") " pod="openshift-authentication-operator/authentication-operator-7c6989d6c4-tc4ht" Mar 13 12:37:21.250909 master-0 kubenswrapper[4143]: E0313 12:37:21.249673 4143 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/13f32761-b386-4f93-b3c0-b16ea53d338a-metrics-tls podName:13f32761-b386-4f93-b3c0-b16ea53d338a nodeName:}" failed. No retries permitted until 2026-03-13 12:37:21.7496531 +0000 UTC m=+147.496797514 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/13f32761-b386-4f93-b3c0-b16ea53d338a-metrics-tls") pod "dns-operator-589895fbb7-mmwk7" (UID: "13f32761-b386-4f93-b3c0-b16ea53d338a") : secret "metrics-tls" not found Mar 13 12:37:21.250909 master-0 kubenswrapper[4143]: I0313 12:37:21.249786 4143 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/f0803181-4e37-43fa-8ddc-9c76d3f61817-available-featuregates\") pod \"openshift-config-operator-64488f9d78-t8fb4\" (UID: \"f0803181-4e37-43fa-8ddc-9c76d3f61817\") " pod="openshift-config-operator/openshift-config-operator-64488f9d78-t8fb4" Mar 13 12:37:21.250909 master-0 kubenswrapper[4143]: I0313 12:37:21.250369 4143 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/089cfabc-9d3d-4260-bb16-8b5eaf73b3fa-config\") pod \"openshift-apiserver-operator-799b6db4d7-xchrj\" (UID: \"089cfabc-9d3d-4260-bb16-8b5eaf73b3fa\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-799b6db4d7-xchrj" Mar 13 12:37:21.251645 master-0 kubenswrapper[4143]: I0313 12:37:21.251050 4143 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-config\" (UniqueName: \"kubernetes.io/configmap/604456a0-4997-43bc-87ef-283a002111fe-telemetry-config\") pod \"cluster-monitoring-operator-674cbfbd9d-zwtdz\" (UID: \"604456a0-4997-43bc-87ef-283a002111fe\") " pod="openshift-monitoring/cluster-monitoring-operator-674cbfbd9d-zwtdz" Mar 13 12:37:21.253562 master-0 kubenswrapper[4143]: I0313 12:37:21.253529 4143 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/3020d236-03e0-4916-97dd-f1085632ca43-trusted-ca\") pod \"cluster-node-tuning-operator-66c7586884-cz8pc\" (UID: \"3020d236-03e0-4916-97dd-f1085632ca43\") " 
pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-66c7586884-cz8pc" Mar 13 12:37:21.256858 master-0 kubenswrapper[4143]: I0313 12:37:21.256503 4143 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f5775266-5e58-44ed-81cb-dfe3faf38add-serving-cert\") pod \"kube-storage-version-migrator-operator-7f65c457f5-hrm82\" (UID: \"f5775266-5e58-44ed-81cb-dfe3faf38add\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-7f65c457f5-hrm82" Mar 13 12:37:21.259479 master-0 kubenswrapper[4143]: I0313 12:37:21.259449 4143 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8c62b15f-001a-4b64-b85f-348aefde5d1b-serving-cert\") pod \"openshift-controller-manager-operator-8565d84698-hj2wk\" (UID: \"8c62b15f-001a-4b64-b85f-348aefde5d1b\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-8565d84698-hj2wk" Mar 13 12:37:21.265325 master-0 kubenswrapper[4143]: I0313 12:37:21.264701 4143 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/15b592d6-3c48-45d4-9172-d28632ae8995-etcd-client\") pod \"etcd-operator-5884b9cd56-hjzms\" (UID: \"15b592d6-3c48-45d4-9172-d28632ae8995\") " pod="openshift-etcd-operator/etcd-operator-5884b9cd56-hjzms" Mar 13 12:37:21.265824 master-0 kubenswrapper[4143]: I0313 12:37:21.265675 4143 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/15b592d6-3c48-45d4-9172-d28632ae8995-serving-cert\") pod \"etcd-operator-5884b9cd56-hjzms\" (UID: \"15b592d6-3c48-45d4-9172-d28632ae8995\") " pod="openshift-etcd-operator/etcd-operator-5884b9cd56-hjzms" Mar 13 12:37:21.266324 master-0 kubenswrapper[4143]: I0313 12:37:21.266310 4143 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"serving-cert\" (UniqueName: \"kubernetes.io/secret/f0803181-4e37-43fa-8ddc-9c76d3f61817-serving-cert\") pod \"openshift-config-operator-64488f9d78-t8fb4\" (UID: \"f0803181-4e37-43fa-8ddc-9c76d3f61817\") " pod="openshift-config-operator/openshift-config-operator-64488f9d78-t8fb4" Mar 13 12:37:21.270233 master-0 kubenswrapper[4143]: I0313 12:37:21.269559 4143 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d11f8baa-6e8e-4ac0-9b23-1c44efd0ab2a-serving-cert\") pod \"authentication-operator-7c6989d6c4-tc4ht\" (UID: \"d11f8baa-6e8e-4ac0-9b23-1c44efd0ab2a\") " pod="openshift-authentication-operator/authentication-operator-7c6989d6c4-tc4ht" Mar 13 12:37:21.272931 master-0 kubenswrapper[4143]: I0313 12:37:21.271413 4143 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/77ef7e49-eb85-4f5e-94d3-a6a8619a6243-serving-cert\") pod \"kube-controller-manager-operator-86d7cdfdfb-br96g\" (UID: \"77ef7e49-eb85-4f5e-94d3-a6a8619a6243\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-86d7cdfdfb-br96g" Mar 13 12:37:21.272931 master-0 kubenswrapper[4143]: I0313 12:37:21.271750 4143 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6x8kz\" (UniqueName: \"kubernetes.io/projected/3d653e1a-5903-4a02-9357-df145f028c0d-kube-api-access-6x8kz\") pod \"package-server-manager-854648ff6d-669qk\" (UID: \"3d653e1a-5903-4a02-9357-df145f028c0d\") " pod="openshift-operator-lifecycle-manager/package-server-manager-854648ff6d-669qk" Mar 13 12:37:21.273756 master-0 kubenswrapper[4143]: I0313 12:37:21.273722 4143 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cluster-olm-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/887d261f-d07f-4ef0-a230-6568f47acf4d-cluster-olm-operator-serving-cert\") pod \"cluster-olm-operator-77899cf6d-7nvbn\" (UID: 
\"887d261f-d07f-4ef0-a230-6568f47acf4d\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-77899cf6d-7nvbn" Mar 13 12:37:21.274075 master-0 kubenswrapper[4143]: I0313 12:37:21.274050 4143 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ec5ec2e2-f7b3-43a1-87da-fbbe0ee5b118-serving-cert\") pod \"kube-apiserver-operator-68bd585b-qxmnf\" (UID: \"ec5ec2e2-f7b3-43a1-87da-fbbe0ee5b118\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-68bd585b-qxmnf" Mar 13 12:37:21.274167 master-0 kubenswrapper[4143]: I0313 12:37:21.274108 4143 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9q2qc\" (UniqueName: \"kubernetes.io/projected/f5775266-5e58-44ed-81cb-dfe3faf38add-kube-api-access-9q2qc\") pod \"kube-storage-version-migrator-operator-7f65c457f5-hrm82\" (UID: \"f5775266-5e58-44ed-81cb-dfe3faf38add\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-7f65c457f5-hrm82" Mar 13 12:37:21.274722 master-0 kubenswrapper[4143]: I0313 12:37:21.274690 4143 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m2p67\" (UniqueName: \"kubernetes.io/projected/13f32761-b386-4f93-b3c0-b16ea53d338a-kube-api-access-m2p67\") pod \"dns-operator-589895fbb7-mmwk7\" (UID: \"13f32761-b386-4f93-b3c0-b16ea53d338a\") " pod="openshift-dns-operator/dns-operator-589895fbb7-mmwk7" Mar 13 12:37:21.275402 master-0 kubenswrapper[4143]: I0313 12:37:21.275373 4143 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p8hcd\" (UniqueName: \"kubernetes.io/projected/d5a19b80-d488-46d3-a4a8-0b80361077e1-kube-api-access-p8hcd\") pod \"olm-operator-d64cfc9db-rfqb9\" (UID: \"d5a19b80-d488-46d3-a4a8-0b80361077e1\") " pod="openshift-operator-lifecycle-manager/olm-operator-d64cfc9db-rfqb9" Mar 13 12:37:21.275466 master-0 kubenswrapper[4143]: I0313 12:37:21.275410 4143 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/77ef7e49-eb85-4f5e-94d3-a6a8619a6243-kube-api-access\") pod \"kube-controller-manager-operator-86d7cdfdfb-br96g\" (UID: \"77ef7e49-eb85-4f5e-94d3-a6a8619a6243\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-86d7cdfdfb-br96g" Mar 13 12:37:21.276405 master-0 kubenswrapper[4143]: I0313 12:37:21.276380 4143 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/2f79578c-bbfb-4968-893a-730deb4c01f9-bound-sa-token\") pod \"ingress-operator-677db989d6-ckl2j\" (UID: \"2f79578c-bbfb-4968-893a-730deb4c01f9\") " pod="openshift-ingress-operator/ingress-operator-677db989d6-ckl2j" Mar 13 12:37:21.276477 master-0 kubenswrapper[4143]: I0313 12:37:21.276377 4143 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0da84bb7-e936-49a0-96b5-614a1305d6a4-kube-api-access\") pod \"openshift-kube-scheduler-operator-5c74bfc494-m8mqj\" (UID: \"0da84bb7-e936-49a0-96b5-614a1305d6a4\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5c74bfc494-m8mqj" Mar 13 12:37:21.277860 master-0 kubenswrapper[4143]: I0313 12:37:21.277831 4143 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-clrz7\" (UniqueName: \"kubernetes.io/projected/15b592d6-3c48-45d4-9172-d28632ae8995-kube-api-access-clrz7\") pod \"etcd-operator-5884b9cd56-hjzms\" (UID: \"15b592d6-3c48-45d4-9172-d28632ae8995\") " pod="openshift-etcd-operator/etcd-operator-5884b9cd56-hjzms" Mar 13 12:37:21.280082 master-0 kubenswrapper[4143]: I0313 12:37:21.280048 4143 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pmfxj\" (UniqueName: \"kubernetes.io/projected/887d261f-d07f-4ef0-a230-6568f47acf4d-kube-api-access-pmfxj\") pod 
\"cluster-olm-operator-77899cf6d-7nvbn\" (UID: \"887d261f-d07f-4ef0-a230-6568f47acf4d\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-77899cf6d-7nvbn" Mar 13 12:37:21.280599 master-0 kubenswrapper[4143]: I0313 12:37:21.280568 4143 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ec5ec2e2-f7b3-43a1-87da-fbbe0ee5b118-kube-api-access\") pod \"kube-apiserver-operator-68bd585b-qxmnf\" (UID: \"ec5ec2e2-f7b3-43a1-87da-fbbe0ee5b118\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-68bd585b-qxmnf" Mar 13 12:37:21.281914 master-0 kubenswrapper[4143]: I0313 12:37:21.281882 4143 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vg8tz\" (UniqueName: \"kubernetes.io/projected/089cfabc-9d3d-4260-bb16-8b5eaf73b3fa-kube-api-access-vg8tz\") pod \"openshift-apiserver-operator-799b6db4d7-xchrj\" (UID: \"089cfabc-9d3d-4260-bb16-8b5eaf73b3fa\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-799b6db4d7-xchrj" Mar 13 12:37:21.283466 master-0 kubenswrapper[4143]: I0313 12:37:21.283438 4143 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8cf2v\" (UniqueName: \"kubernetes.io/projected/8c62b15f-001a-4b64-b85f-348aefde5d1b-kube-api-access-8cf2v\") pod \"openshift-controller-manager-operator-8565d84698-hj2wk\" (UID: \"8c62b15f-001a-4b64-b85f-348aefde5d1b\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-8565d84698-hj2wk" Mar 13 12:37:21.289843 master-0 kubenswrapper[4143]: I0313 12:37:21.289802 4143 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m4tnq\" (UniqueName: \"kubernetes.io/projected/d11f8baa-6e8e-4ac0-9b23-1c44efd0ab2a-kube-api-access-m4tnq\") pod \"authentication-operator-7c6989d6c4-tc4ht\" (UID: \"d11f8baa-6e8e-4ac0-9b23-1c44efd0ab2a\") " 
pod="openshift-authentication-operator/authentication-operator-7c6989d6c4-tc4ht" Mar 13 12:37:21.308193 master-0 kubenswrapper[4143]: I0313 12:37:21.308130 4143 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9psfn\" (UniqueName: \"kubernetes.io/projected/4c0b18db-06ad-4d58-a353-f6fd96309dea-kube-api-access-9psfn\") pod \"multus-admission-controller-8d675b596-96gds\" (UID: \"4c0b18db-06ad-4d58-a353-f6fd96309dea\") " pod="openshift-multus/multus-admission-controller-8d675b596-96gds" Mar 13 12:37:21.310273 master-0 kubenswrapper[4143]: I0313 12:37:21.310240 4143 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7c6989d6c4-tc4ht" Mar 13 12:37:21.325079 master-0 kubenswrapper[4143]: I0313 12:37:21.325038 4143 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bcf05594-4c10-4b54-a47c-d55e323f1f87-bound-sa-token\") pod \"cluster-image-registry-operator-86d6d77c7c-q287n\" (UID: \"bcf05594-4c10-4b54-a47c-d55e323f1f87\") " pod="openshift-image-registry/cluster-image-registry-operator-86d6d77c7c-q287n" Mar 13 12:37:21.335301 master-0 kubenswrapper[4143]: I0313 12:37:21.335271 4143 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-5884b9cd56-hjzms" Mar 13 12:37:21.339131 master-0 kubenswrapper[4143]: I0313 12:37:21.339093 4143 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-7f65c457f5-hrm82" Mar 13 12:37:21.346877 master-0 kubenswrapper[4143]: I0313 12:37:21.346613 4143 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-olm-operator/cluster-olm-operator-77899cf6d-7nvbn" Mar 13 12:37:21.349162 master-0 kubenswrapper[4143]: I0313 12:37:21.349064 4143 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xstz5\" (UniqueName: \"kubernetes.io/projected/08e2bc8e-ca80-454c-81dc-211d122e32e0-kube-api-access-xstz5\") pod \"iptables-alerter-qz6pg\" (UID: \"08e2bc8e-ca80-454c-81dc-211d122e32e0\") " pod="openshift-network-operator/iptables-alerter-qz6pg" Mar 13 12:37:21.349502 master-0 kubenswrapper[4143]: I0313 12:37:21.349280 4143 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/08e2bc8e-ca80-454c-81dc-211d122e32e0-iptables-alerter-script\") pod \"iptables-alerter-qz6pg\" (UID: \"08e2bc8e-ca80-454c-81dc-211d122e32e0\") " pod="openshift-network-operator/iptables-alerter-qz6pg" Mar 13 12:37:21.349502 master-0 kubenswrapper[4143]: I0313 12:37:21.349303 4143 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/08e2bc8e-ca80-454c-81dc-211d122e32e0-host-slash\") pod \"iptables-alerter-qz6pg\" (UID: \"08e2bc8e-ca80-454c-81dc-211d122e32e0\") " pod="openshift-network-operator/iptables-alerter-qz6pg" Mar 13 12:37:21.350278 master-0 kubenswrapper[4143]: I0313 12:37:21.350218 4143 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/08e2bc8e-ca80-454c-81dc-211d122e32e0-host-slash\") pod \"iptables-alerter-qz6pg\" (UID: \"08e2bc8e-ca80-454c-81dc-211d122e32e0\") " pod="openshift-network-operator/iptables-alerter-qz6pg" Mar 13 12:37:21.350404 master-0 kubenswrapper[4143]: I0313 12:37:21.350378 4143 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"iptables-alerter-script\" (UniqueName: 
\"kubernetes.io/configmap/08e2bc8e-ca80-454c-81dc-211d122e32e0-iptables-alerter-script\") pod \"iptables-alerter-qz6pg\" (UID: \"08e2bc8e-ca80-454c-81dc-211d122e32e0\") " pod="openshift-network-operator/iptables-alerter-qz6pg" Mar 13 12:37:21.376669 master-0 kubenswrapper[4143]: I0313 12:37:21.376591 4143 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bwjz5\" (UniqueName: \"kubernetes.io/projected/4e279dcc-35e2-4503-babc-978ac208c150-kube-api-access-bwjz5\") pod \"csi-snapshot-controller-operator-5685fbc7d-97wkd\" (UID: \"4e279dcc-35e2-4503-babc-978ac208c150\") " pod="openshift-cluster-storage-operator/csi-snapshot-controller-operator-5685fbc7d-97wkd" Mar 13 12:37:21.397497 master-0 kubenswrapper[4143]: I0313 12:37:21.397450 4143 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8sk7j\" (UniqueName: \"kubernetes.io/projected/604456a0-4997-43bc-87ef-283a002111fe-kube-api-access-8sk7j\") pod \"cluster-monitoring-operator-674cbfbd9d-zwtdz\" (UID: \"604456a0-4997-43bc-87ef-283a002111fe\") " pod="openshift-monitoring/cluster-monitoring-operator-674cbfbd9d-zwtdz" Mar 13 12:37:21.402260 master-0 kubenswrapper[4143]: I0313 12:37:21.400456 4143 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5c74bfc494-m8mqj" Mar 13 12:37:21.411735 master-0 kubenswrapper[4143]: I0313 12:37:21.411694 4143 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c24hd\" (UniqueName: \"kubernetes.io/projected/3020d236-03e0-4916-97dd-f1085632ca43-kube-api-access-c24hd\") pod \"cluster-node-tuning-operator-66c7586884-cz8pc\" (UID: \"3020d236-03e0-4916-97dd-f1085632ca43\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-66c7586884-cz8pc" Mar 13 12:37:21.434033 master-0 kubenswrapper[4143]: I0313 12:37:21.433992 4143 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-storage-operator/csi-snapshot-controller-operator-5685fbc7d-97wkd" Mar 13 12:37:21.437052 master-0 kubenswrapper[4143]: I0313 12:37:21.437004 4143 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-86d7cdfdfb-br96g" Mar 13 12:37:21.439188 master-0 kubenswrapper[4143]: I0313 12:37:21.437996 4143 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zbk4f\" (UniqueName: \"kubernetes.io/projected/10944f9c-8ce9-44e6-9c36-a0ea19d8cae3-kube-api-access-zbk4f\") pod \"catalog-operator-7d9c49f57b-tlnkd\" (UID: \"10944f9c-8ce9-44e6-9c36-a0ea19d8cae3\") " pod="openshift-operator-lifecycle-manager/catalog-operator-7d9c49f57b-tlnkd" Mar 13 12:37:21.462543 master-0 kubenswrapper[4143]: I0313 12:37:21.461477 4143 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-8565d84698-hj2wk" Mar 13 12:37:21.472216 master-0 kubenswrapper[4143]: I0313 12:37:21.472098 4143 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-68bd585b-qxmnf" Mar 13 12:37:21.478986 master-0 kubenswrapper[4143]: I0313 12:37:21.478951 4143 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xstz5\" (UniqueName: \"kubernetes.io/projected/08e2bc8e-ca80-454c-81dc-211d122e32e0-kube-api-access-xstz5\") pod \"iptables-alerter-qz6pg\" (UID: \"08e2bc8e-ca80-454c-81dc-211d122e32e0\") " pod="openshift-network-operator/iptables-alerter-qz6pg" Mar 13 12:37:21.537494 master-0 kubenswrapper[4143]: I0313 12:37:21.536322 4143 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-7f65c457f5-hrm82"] Mar 13 12:37:21.545372 master-0 kubenswrapper[4143]: I0313 12:37:21.545329 4143 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-799b6db4d7-xchrj" Mar 13 12:37:21.550091 master-0 kubenswrapper[4143]: I0313 12:37:21.548717 4143 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-7c6989d6c4-tc4ht"] Mar 13 12:37:21.551891 master-0 kubenswrapper[4143]: I0313 12:37:21.551653 4143 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/29b6aa89-0416-4595-9deb-10b290521d86-metrics-certs\") pod \"network-metrics-daemon-r9lmb\" (UID: \"29b6aa89-0416-4595-9deb-10b290521d86\") " pod="openshift-multus/network-metrics-daemon-r9lmb" Mar 13 12:37:21.551990 master-0 kubenswrapper[4143]: E0313 12:37:21.551923 4143 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: secret "metrics-daemon-secret" not found Mar 13 12:37:21.552091 master-0 kubenswrapper[4143]: E0313 12:37:21.552071 4143 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/secret/29b6aa89-0416-4595-9deb-10b290521d86-metrics-certs podName:29b6aa89-0416-4595-9deb-10b290521d86 nodeName:}" failed. No retries permitted until 2026-03-13 12:38:25.551998393 +0000 UTC m=+211.299142757 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/29b6aa89-0416-4595-9deb-10b290521d86-metrics-certs") pod "network-metrics-daemon-r9lmb" (UID: "29b6aa89-0416-4595-9deb-10b290521d86") : secret "metrics-daemon-secret" not found Mar 13 12:37:21.560773 master-0 kubenswrapper[4143]: I0313 12:37:21.559991 4143 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-5884b9cd56-hjzms"] Mar 13 12:37:21.579934 master-0 kubenswrapper[4143]: W0313 12:37:21.572450 4143 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd11f8baa_6e8e_4ac0_9b23_1c44efd0ab2a.slice/crio-b6b12c0272b98e12411fc073869054a756107907b9e525ec9dbf8b8648e84805 WatchSource:0}: Error finding container b6b12c0272b98e12411fc073869054a756107907b9e525ec9dbf8b8648e84805: Status 404 returned error can't find the container with id b6b12c0272b98e12411fc073869054a756107907b9e525ec9dbf8b8648e84805 Mar 13 12:37:21.588164 master-0 kubenswrapper[4143]: W0313 12:37:21.583093 4143 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod15b592d6_3c48_45d4_9172_d28632ae8995.slice/crio-5e03538f7a196b4948a3a7782b34246a467d9e14e18b21bed24c1061ee7390ce WatchSource:0}: Error finding container 5e03538f7a196b4948a3a7782b34246a467d9e14e18b21bed24c1061ee7390ce: Status 404 returned error can't find the container with id 5e03538f7a196b4948a3a7782b34246a467d9e14e18b21bed24c1061ee7390ce Mar 13 12:37:21.602717 master-0 kubenswrapper[4143]: I0313 12:37:21.596101 4143 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-cluster-olm-operator/cluster-olm-operator-77899cf6d-7nvbn"] Mar 13 12:37:21.666349 master-0 kubenswrapper[4143]: I0313 12:37:21.666296 4143 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-qz6pg" Mar 13 12:37:21.701543 master-0 kubenswrapper[4143]: I0313 12:37:21.701492 4143 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5c74bfc494-m8mqj"] Mar 13 12:37:21.712798 master-0 kubenswrapper[4143]: I0313 12:37:21.712694 4143 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-storage-operator/csi-snapshot-controller-operator-5685fbc7d-97wkd"] Mar 13 12:37:21.722226 master-0 kubenswrapper[4143]: W0313 12:37:21.720296 4143 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4e279dcc_35e2_4503_babc_978ac208c150.slice/crio-4100d060137e4638140caf3273251902712a7f8176df0de3da8bd3abf9194231 WatchSource:0}: Error finding container 4100d060137e4638140caf3273251902712a7f8176df0de3da8bd3abf9194231: Status 404 returned error can't find the container with id 4100d060137e4638140caf3273251902712a7f8176df0de3da8bd3abf9194231 Mar 13 12:37:21.734260 master-0 kubenswrapper[4143]: I0313 12:37:21.734216 4143 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-68bd585b-qxmnf"] Mar 13 12:37:21.745627 master-0 kubenswrapper[4143]: W0313 12:37:21.745584 4143 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podec5ec2e2_f7b3_43a1_87da_fbbe0ee5b118.slice/crio-f1cb9ab9a282ce90062e66d658d9cac8cb109a67f4786999b66ddea942eec412 WatchSource:0}: Error finding container f1cb9ab9a282ce90062e66d658d9cac8cb109a67f4786999b66ddea942eec412: Status 404 returned error can't find the container with id 
f1cb9ab9a282ce90062e66d658d9cac8cb109a67f4786999b66ddea942eec412 Mar 13 12:37:21.753077 master-0 kubenswrapper[4143]: I0313 12:37:21.753038 4143 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-86d7cdfdfb-br96g"] Mar 13 12:37:21.754718 master-0 kubenswrapper[4143]: I0313 12:37:21.754643 4143 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/604456a0-4997-43bc-87ef-283a002111fe-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-674cbfbd9d-zwtdz\" (UID: \"604456a0-4997-43bc-87ef-283a002111fe\") " pod="openshift-monitoring/cluster-monitoring-operator-674cbfbd9d-zwtdz" Mar 13 12:37:21.754870 master-0 kubenswrapper[4143]: E0313 12:37:21.754830 4143 secret.go:189] Couldn't get secret openshift-monitoring/cluster-monitoring-operator-tls: secret "cluster-monitoring-operator-tls" not found Mar 13 12:37:21.754870 master-0 kubenswrapper[4143]: I0313 12:37:21.754855 4143 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/10944f9c-8ce9-44e6-9c36-a0ea19d8cae3-srv-cert\") pod \"catalog-operator-7d9c49f57b-tlnkd\" (UID: \"10944f9c-8ce9-44e6-9c36-a0ea19d8cae3\") " pod="openshift-operator-lifecycle-manager/catalog-operator-7d9c49f57b-tlnkd" Mar 13 12:37:21.754964 master-0 kubenswrapper[4143]: E0313 12:37:21.754904 4143 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/604456a0-4997-43bc-87ef-283a002111fe-cluster-monitoring-operator-tls podName:604456a0-4997-43bc-87ef-283a002111fe nodeName:}" failed. No retries permitted until 2026-03-13 12:37:22.754885774 +0000 UTC m=+148.502030098 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "cluster-monitoring-operator-tls" (UniqueName: "kubernetes.io/secret/604456a0-4997-43bc-87ef-283a002111fe-cluster-monitoring-operator-tls") pod "cluster-monitoring-operator-674cbfbd9d-zwtdz" (UID: "604456a0-4997-43bc-87ef-283a002111fe") : secret "cluster-monitoring-operator-tls" not found Mar 13 12:37:21.755088 master-0 kubenswrapper[4143]: I0313 12:37:21.754979 4143 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-tuning-operator-tls\" (UniqueName: \"kubernetes.io/secret/3020d236-03e0-4916-97dd-f1085632ca43-node-tuning-operator-tls\") pod \"cluster-node-tuning-operator-66c7586884-cz8pc\" (UID: \"3020d236-03e0-4916-97dd-f1085632ca43\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-66c7586884-cz8pc" Mar 13 12:37:21.755088 master-0 kubenswrapper[4143]: E0313 12:37:21.755008 4143 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/catalog-operator-serving-cert: secret "catalog-operator-serving-cert" not found Mar 13 12:37:21.755088 master-0 kubenswrapper[4143]: E0313 12:37:21.755066 4143 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/10944f9c-8ce9-44e6-9c36-a0ea19d8cae3-srv-cert podName:10944f9c-8ce9-44e6-9c36-a0ea19d8cae3 nodeName:}" failed. No retries permitted until 2026-03-13 12:37:22.755048316 +0000 UTC m=+148.502192660 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/10944f9c-8ce9-44e6-9c36-a0ea19d8cae3-srv-cert") pod "catalog-operator-7d9c49f57b-tlnkd" (UID: "10944f9c-8ce9-44e6-9c36-a0ea19d8cae3") : secret "catalog-operator-serving-cert" not found Mar 13 12:37:21.755088 master-0 kubenswrapper[4143]: E0313 12:37:21.755081 4143 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/node-tuning-operator-tls: secret "node-tuning-operator-tls" not found Mar 13 12:37:21.755279 master-0 kubenswrapper[4143]: I0313 12:37:21.755129 4143 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3d653e1a-5903-4a02-9357-df145f028c0d-package-server-manager-serving-cert\") pod \"package-server-manager-854648ff6d-669qk\" (UID: \"3d653e1a-5903-4a02-9357-df145f028c0d\") " pod="openshift-operator-lifecycle-manager/package-server-manager-854648ff6d-669qk" Mar 13 12:37:21.755279 master-0 kubenswrapper[4143]: E0313 12:37:21.755175 4143 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/package-server-manager-serving-cert: secret "package-server-manager-serving-cert" not found Mar 13 12:37:21.755279 master-0 kubenswrapper[4143]: I0313 12:37:21.755197 4143 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/d5a19b80-d488-46d3-a4a8-0b80361077e1-srv-cert\") pod \"olm-operator-d64cfc9db-rfqb9\" (UID: \"d5a19b80-d488-46d3-a4a8-0b80361077e1\") " pod="openshift-operator-lifecycle-manager/olm-operator-d64cfc9db-rfqb9" Mar 13 12:37:21.755279 master-0 kubenswrapper[4143]: E0313 12:37:21.755209 4143 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3d653e1a-5903-4a02-9357-df145f028c0d-package-server-manager-serving-cert podName:3d653e1a-5903-4a02-9357-df145f028c0d nodeName:}" failed. 
No retries permitted until 2026-03-13 12:37:22.755194188 +0000 UTC m=+148.502338512 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "package-server-manager-serving-cert" (UniqueName: "kubernetes.io/secret/3d653e1a-5903-4a02-9357-df145f028c0d-package-server-manager-serving-cert") pod "package-server-manager-854648ff6d-669qk" (UID: "3d653e1a-5903-4a02-9357-df145f028c0d") : secret "package-server-manager-serving-cert" not found Mar 13 12:37:21.755279 master-0 kubenswrapper[4143]: E0313 12:37:21.755224 4143 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3020d236-03e0-4916-97dd-f1085632ca43-node-tuning-operator-tls podName:3020d236-03e0-4916-97dd-f1085632ca43 nodeName:}" failed. No retries permitted until 2026-03-13 12:37:22.755218518 +0000 UTC m=+148.502362842 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "node-tuning-operator-tls" (UniqueName: "kubernetes.io/secret/3020d236-03e0-4916-97dd-f1085632ca43-node-tuning-operator-tls") pod "cluster-node-tuning-operator-66c7586884-cz8pc" (UID: "3020d236-03e0-4916-97dd-f1085632ca43") : secret "node-tuning-operator-tls" not found Mar 13 12:37:21.755465 master-0 kubenswrapper[4143]: E0313 12:37:21.755279 4143 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/olm-operator-serving-cert: secret "olm-operator-serving-cert" not found Mar 13 12:37:21.755465 master-0 kubenswrapper[4143]: E0313 12:37:21.755317 4143 secret.go:189] Couldn't get secret openshift-multus/multus-admission-controller-secret: secret "multus-admission-controller-secret" not found Mar 13 12:37:21.755465 master-0 kubenswrapper[4143]: E0313 12:37:21.755331 4143 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d5a19b80-d488-46d3-a4a8-0b80361077e1-srv-cert podName:d5a19b80-d488-46d3-a4a8-0b80361077e1 nodeName:}" failed. No retries permitted until 2026-03-13 12:37:22.7553182 +0000 UTC m=+148.502462524 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/d5a19b80-d488-46d3-a4a8-0b80361077e1-srv-cert") pod "olm-operator-d64cfc9db-rfqb9" (UID: "d5a19b80-d488-46d3-a4a8-0b80361077e1") : secret "olm-operator-serving-cert" not found Mar 13 12:37:21.755465 master-0 kubenswrapper[4143]: I0313 12:37:21.755283 4143 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/4c0b18db-06ad-4d58-a353-f6fd96309dea-webhook-certs\") pod \"multus-admission-controller-8d675b596-96gds\" (UID: \"4c0b18db-06ad-4d58-a353-f6fd96309dea\") " pod="openshift-multus/multus-admission-controller-8d675b596-96gds" Mar 13 12:37:21.755465 master-0 kubenswrapper[4143]: E0313 12:37:21.755351 4143 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4c0b18db-06ad-4d58-a353-f6fd96309dea-webhook-certs podName:4c0b18db-06ad-4d58-a353-f6fd96309dea nodeName:}" failed. No retries permitted until 2026-03-13 12:37:22.75534201 +0000 UTC m=+148.502486424 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/4c0b18db-06ad-4d58-a353-f6fd96309dea-webhook-certs") pod "multus-admission-controller-8d675b596-96gds" (UID: "4c0b18db-06ad-4d58-a353-f6fd96309dea") : secret "multus-admission-controller-secret" not found Mar 13 12:37:21.755465 master-0 kubenswrapper[4143]: I0313 12:37:21.755389 4143 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/13f32761-b386-4f93-b3c0-b16ea53d338a-metrics-tls\") pod \"dns-operator-589895fbb7-mmwk7\" (UID: \"13f32761-b386-4f93-b3c0-b16ea53d338a\") " pod="openshift-dns-operator/dns-operator-589895fbb7-mmwk7" Mar 13 12:37:21.755465 master-0 kubenswrapper[4143]: I0313 12:37:21.755417 4143 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/3020d236-03e0-4916-97dd-f1085632ca43-apiservice-cert\") pod \"cluster-node-tuning-operator-66c7586884-cz8pc\" (UID: \"3020d236-03e0-4916-97dd-f1085632ca43\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-66c7586884-cz8pc" Mar 13 12:37:21.756122 master-0 kubenswrapper[4143]: E0313 12:37:21.755511 4143 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/performance-addon-operator-webhook-cert: secret "performance-addon-operator-webhook-cert" not found Mar 13 12:37:21.756122 master-0 kubenswrapper[4143]: E0313 12:37:21.755536 4143 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3020d236-03e0-4916-97dd-f1085632ca43-apiservice-cert podName:3020d236-03e0-4916-97dd-f1085632ca43 nodeName:}" failed. No retries permitted until 2026-03-13 12:37:22.755529353 +0000 UTC m=+148.502673677 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "apiservice-cert" (UniqueName: "kubernetes.io/secret/3020d236-03e0-4916-97dd-f1085632ca43-apiservice-cert") pod "cluster-node-tuning-operator-66c7586884-cz8pc" (UID: "3020d236-03e0-4916-97dd-f1085632ca43") : secret "performance-addon-operator-webhook-cert" not found Mar 13 12:37:21.756122 master-0 kubenswrapper[4143]: E0313 12:37:21.755580 4143 secret.go:189] Couldn't get secret openshift-dns-operator/metrics-tls: secret "metrics-tls" not found Mar 13 12:37:21.756122 master-0 kubenswrapper[4143]: E0313 12:37:21.755600 4143 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/13f32761-b386-4f93-b3c0-b16ea53d338a-metrics-tls podName:13f32761-b386-4f93-b3c0-b16ea53d338a nodeName:}" failed. No retries permitted until 2026-03-13 12:37:22.755595243 +0000 UTC m=+148.502739567 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/13f32761-b386-4f93-b3c0-b16ea53d338a-metrics-tls") pod "dns-operator-589895fbb7-mmwk7" (UID: "13f32761-b386-4f93-b3c0-b16ea53d338a") : secret "metrics-tls" not found Mar 13 12:37:21.773321 master-0 kubenswrapper[4143]: I0313 12:37:21.767908 4143 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-8565d84698-hj2wk"] Mar 13 12:37:21.773321 master-0 kubenswrapper[4143]: W0313 12:37:21.768861 4143 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod77ef7e49_eb85_4f5e_94d3_a6a8619a6243.slice/crio-842bc57e6bbe56242bef7b88438357fe374fd511b54a67e77b67b5f32ad709e8 WatchSource:0}: Error finding container 842bc57e6bbe56242bef7b88438357fe374fd511b54a67e77b67b5f32ad709e8: Status 404 returned error can't find the container with id 842bc57e6bbe56242bef7b88438357fe374fd511b54a67e77b67b5f32ad709e8 Mar 13 12:37:21.775971 master-0 kubenswrapper[4143]: W0313 12:37:21.775938 4143 
manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8c62b15f_001a_4b64_b85f_348aefde5d1b.slice/crio-754a980682251c2faf310af15f0042fda13df9ae03c81a3a698c0d687faffa20 WatchSource:0}: Error finding container 754a980682251c2faf310af15f0042fda13df9ae03c81a3a698c0d687faffa20: Status 404 returned error can't find the container with id 754a980682251c2faf310af15f0042fda13df9ae03c81a3a698c0d687faffa20 Mar 13 12:37:21.807775 master-0 kubenswrapper[4143]: I0313 12:37:21.807123 4143 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-799b6db4d7-xchrj"] Mar 13 12:37:21.815173 master-0 kubenswrapper[4143]: W0313 12:37:21.814474 4143 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod089cfabc_9d3d_4260_bb16_8b5eaf73b3fa.slice/crio-c947bd9963641afb60859a3b7c244810b57b25926def17f475843b4b80fe1d04 WatchSource:0}: Error finding container c947bd9963641afb60859a3b7c244810b57b25926def17f475843b4b80fe1d04: Status 404 returned error can't find the container with id c947bd9963641afb60859a3b7c244810b57b25926def17f475843b4b80fe1d04 Mar 13 12:37:21.857443 master-0 kubenswrapper[4143]: I0313 12:37:21.857410 4143 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-8565d84698-hj2wk" event={"ID":"8c62b15f-001a-4b64-b85f-348aefde5d1b","Type":"ContainerStarted","Data":"754a980682251c2faf310af15f0042fda13df9ae03c81a3a698c0d687faffa20"} Mar 13 12:37:21.858560 master-0 kubenswrapper[4143]: I0313 12:37:21.858530 4143 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-olm-operator/cluster-olm-operator-77899cf6d-7nvbn" event={"ID":"887d261f-d07f-4ef0-a230-6568f47acf4d","Type":"ContainerStarted","Data":"abc95f00c9e0c52ab8e7354cef7b322da886c1a2e03c03fc7c2109630be9ce0b"} Mar 13 12:37:21.859719 
master-0 kubenswrapper[4143]: I0313 12:37:21.859678 4143 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-7c6989d6c4-tc4ht" event={"ID":"d11f8baa-6e8e-4ac0-9b23-1c44efd0ab2a","Type":"ContainerStarted","Data":"b6b12c0272b98e12411fc073869054a756107907b9e525ec9dbf8b8648e84805"} Mar 13 12:37:21.860537 master-0 kubenswrapper[4143]: I0313 12:37:21.860507 4143 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/csi-snapshot-controller-operator-5685fbc7d-97wkd" event={"ID":"4e279dcc-35e2-4503-babc-978ac208c150","Type":"ContainerStarted","Data":"4100d060137e4638140caf3273251902712a7f8176df0de3da8bd3abf9194231"} Mar 13 12:37:21.861613 master-0 kubenswrapper[4143]: I0313 12:37:21.861563 4143 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-7f65c457f5-hrm82" event={"ID":"f5775266-5e58-44ed-81cb-dfe3faf38add","Type":"ContainerStarted","Data":"1f8e6ca57afc2c7f1b75640b9d76490f87697f57e3507366ea9d48c029b1f4d6"} Mar 13 12:37:21.862862 master-0 kubenswrapper[4143]: I0313 12:37:21.862820 4143 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-86d7cdfdfb-br96g" event={"ID":"77ef7e49-eb85-4f5e-94d3-a6a8619a6243","Type":"ContainerStarted","Data":"842bc57e6bbe56242bef7b88438357fe374fd511b54a67e77b67b5f32ad709e8"} Mar 13 12:37:21.863948 master-0 kubenswrapper[4143]: I0313 12:37:21.863909 4143 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-qz6pg" event={"ID":"08e2bc8e-ca80-454c-81dc-211d122e32e0","Type":"ContainerStarted","Data":"4641cab9868e3327d01299b932a32e6567401ef53f9b8cc74562f50d7b0926ca"} Mar 13 12:37:21.864923 master-0 kubenswrapper[4143]: I0313 12:37:21.864888 4143 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5c74bfc494-m8mqj" event={"ID":"0da84bb7-e936-49a0-96b5-614a1305d6a4","Type":"ContainerStarted","Data":"bad7583a8d87a54f610f7ff59977a30650055c862ace4c5e9beab2a18620861a"} Mar 13 12:37:21.866352 master-0 kubenswrapper[4143]: I0313 12:37:21.866316 4143 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-5884b9cd56-hjzms" event={"ID":"15b592d6-3c48-45d4-9172-d28632ae8995","Type":"ContainerStarted","Data":"5e03538f7a196b4948a3a7782b34246a467d9e14e18b21bed24c1061ee7390ce"} Mar 13 12:37:21.867760 master-0 kubenswrapper[4143]: I0313 12:37:21.867728 4143 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-799b6db4d7-xchrj" event={"ID":"089cfabc-9d3d-4260-bb16-8b5eaf73b3fa","Type":"ContainerStarted","Data":"c947bd9963641afb60859a3b7c244810b57b25926def17f475843b4b80fe1d04"} Mar 13 12:37:21.868754 master-0 kubenswrapper[4143]: I0313 12:37:21.868729 4143 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-68bd585b-qxmnf" event={"ID":"ec5ec2e2-f7b3-43a1-87da-fbbe0ee5b118","Type":"ContainerStarted","Data":"f1cb9ab9a282ce90062e66d658d9cac8cb109a67f4786999b66ddea942eec412"} Mar 13 12:37:21.935222 master-0 kubenswrapper[4143]: I0313 12:37:21.934632 4143 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Mar 13 12:37:21.978188 master-0 kubenswrapper[4143]: I0313 12:37:21.978091 4143 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Mar 13 12:37:22.033163 master-0 kubenswrapper[4143]: I0313 12:37:22.033089 4143 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Mar 13 12:37:22.040843 master-0 kubenswrapper[4143]: I0313 12:37:22.040802 4143 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/034aaf8e-95df-4171-bae4-e7abe58d15f7-serving-cert\") pod \"service-ca-operator-69b6fc6b88-vmscz\" (UID: \"034aaf8e-95df-4171-bae4-e7abe58d15f7\") " pod="openshift-service-ca-operator/service-ca-operator-69b6fc6b88-vmscz" Mar 13 12:37:22.093309 master-0 kubenswrapper[4143]: I0313 12:37:22.091093 4143 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Mar 13 12:37:22.096947 master-0 kubenswrapper[4143]: I0313 12:37:22.096883 4143 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/034aaf8e-95df-4171-bae4-e7abe58d15f7-config\") pod \"service-ca-operator-69b6fc6b88-vmscz\" (UID: \"034aaf8e-95df-4171-bae4-e7abe58d15f7\") " pod="openshift-service-ca-operator/service-ca-operator-69b6fc6b88-vmscz" Mar 13 12:37:22.163431 master-0 kubenswrapper[4143]: I0313 12:37:22.163394 4143 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Mar 13 12:37:22.168582 master-0 kubenswrapper[4143]: E0313 12:37:22.168550 4143 secret.go:189] Couldn't get secret openshift-marketplace/marketplace-operator-metrics: secret "marketplace-operator-metrics" not found Mar 13 12:37:22.168688 master-0 kubenswrapper[4143]: E0313 12:37:22.168632 4143 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d3d998ee-b26f-4e30-83bc-f94f8c68060a-marketplace-operator-metrics podName:d3d998ee-b26f-4e30-83bc-f94f8c68060a nodeName:}" failed. No retries permitted until 2026-03-13 12:37:22.668600528 +0000 UTC m=+148.415744852 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "marketplace-operator-metrics" (UniqueName: "kubernetes.io/secret/d3d998ee-b26f-4e30-83bc-f94f8c68060a-marketplace-operator-metrics") pod "marketplace-operator-64bf9778cb-7qhr4" (UID: "d3d998ee-b26f-4e30-83bc-f94f8c68060a") : secret "marketplace-operator-metrics" not found Mar 13 12:37:22.227306 master-0 kubenswrapper[4143]: I0313 12:37:22.227260 4143 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Mar 13 12:37:22.235694 master-0 kubenswrapper[4143]: I0313 12:37:22.235581 4143 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x5nb7\" (UniqueName: \"kubernetes.io/projected/d3d998ee-b26f-4e30-83bc-f94f8c68060a-kube-api-access-x5nb7\") pod \"marketplace-operator-64bf9778cb-7qhr4\" (UID: \"d3d998ee-b26f-4e30-83bc-f94f8c68060a\") " pod="openshift-marketplace/marketplace-operator-64bf9778cb-7qhr4" Mar 13 12:37:22.237861 master-0 kubenswrapper[4143]: E0313 12:37:22.237840 4143 secret.go:189] Couldn't get secret openshift-ingress-operator/metrics-tls: failed to sync secret cache: timed out waiting for the condition Mar 13 12:37:22.237910 master-0 kubenswrapper[4143]: E0313 12:37:22.237897 4143 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2f79578c-bbfb-4968-893a-730deb4c01f9-metrics-tls podName:2f79578c-bbfb-4968-893a-730deb4c01f9 nodeName:}" failed. No retries permitted until 2026-03-13 12:37:22.737882059 +0000 UTC m=+148.485026383 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/2f79578c-bbfb-4968-893a-730deb4c01f9-metrics-tls") pod "ingress-operator-677db989d6-ckl2j" (UID: "2f79578c-bbfb-4968-893a-730deb4c01f9") : failed to sync secret cache: timed out waiting for the condition Mar 13 12:37:22.241398 master-0 kubenswrapper[4143]: E0313 12:37:22.241375 4143 secret.go:189] Couldn't get secret openshift-image-registry/image-registry-operator-tls: failed to sync secret cache: timed out waiting for the condition Mar 13 12:37:22.241472 master-0 kubenswrapper[4143]: E0313 12:37:22.241405 4143 configmap.go:193] Couldn't get configMap openshift-marketplace/marketplace-trusted-ca: failed to sync configmap cache: timed out waiting for the condition Mar 13 12:37:22.241472 master-0 kubenswrapper[4143]: E0313 12:37:22.241417 4143 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/bcf05594-4c10-4b54-a47c-d55e323f1f87-image-registry-operator-tls podName:bcf05594-4c10-4b54-a47c-d55e323f1f87 nodeName:}" failed. No retries permitted until 2026-03-13 12:37:22.741406819 +0000 UTC m=+148.488551143 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "image-registry-operator-tls" (UniqueName: "kubernetes.io/secret/bcf05594-4c10-4b54-a47c-d55e323f1f87-image-registry-operator-tls") pod "cluster-image-registry-operator-86d6d77c7c-q287n" (UID: "bcf05594-4c10-4b54-a47c-d55e323f1f87") : failed to sync secret cache: timed out waiting for the condition Mar 13 12:37:22.241472 master-0 kubenswrapper[4143]: E0313 12:37:22.241447 4143 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/d3d998ee-b26f-4e30-83bc-f94f8c68060a-marketplace-trusted-ca podName:d3d998ee-b26f-4e30-83bc-f94f8c68060a nodeName:}" failed. No retries permitted until 2026-03-13 12:37:22.74143737 +0000 UTC m=+148.488581694 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "marketplace-trusted-ca" (UniqueName: "kubernetes.io/configmap/d3d998ee-b26f-4e30-83bc-f94f8c68060a-marketplace-trusted-ca") pod "marketplace-operator-64bf9778cb-7qhr4" (UID: "d3d998ee-b26f-4e30-83bc-f94f8c68060a") : failed to sync configmap cache: timed out waiting for the condition Mar 13 12:37:22.242643 master-0 kubenswrapper[4143]: I0313 12:37:22.242224 4143 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Mar 13 12:37:22.242643 master-0 kubenswrapper[4143]: E0313 12:37:22.242237 4143 configmap.go:193] Couldn't get configMap openshift-ingress-operator/trusted-ca: failed to sync configmap cache: timed out waiting for the condition Mar 13 12:37:22.242643 master-0 kubenswrapper[4143]: E0313 12:37:22.242331 4143 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/2f79578c-bbfb-4968-893a-730deb4c01f9-trusted-ca podName:2f79578c-bbfb-4968-893a-730deb4c01f9 nodeName:}" failed. No retries permitted until 2026-03-13 12:37:22.742314233 +0000 UTC m=+148.489458557 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "trusted-ca" (UniqueName: "kubernetes.io/configmap/2f79578c-bbfb-4968-893a-730deb4c01f9-trusted-ca") pod "ingress-operator-677db989d6-ckl2j" (UID: "2f79578c-bbfb-4968-893a-730deb4c01f9") : failed to sync configmap cache: timed out waiting for the condition Mar 13 12:37:22.243680 master-0 kubenswrapper[4143]: I0313 12:37:22.243599 4143 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Mar 13 12:37:22.248157 master-0 kubenswrapper[4143]: I0313 12:37:22.248122 4143 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5w5r2\" (UniqueName: \"kubernetes.io/projected/034aaf8e-95df-4171-bae4-e7abe58d15f7-kube-api-access-5w5r2\") pod \"service-ca-operator-69b6fc6b88-vmscz\" (UID: \"034aaf8e-95df-4171-bae4-e7abe58d15f7\") " pod="openshift-service-ca-operator/service-ca-operator-69b6fc6b88-vmscz" Mar 13 12:37:22.251032 master-0 kubenswrapper[4143]: I0313 12:37:22.250868 4143 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bcf05594-4c10-4b54-a47c-d55e323f1f87-trusted-ca\") pod \"cluster-image-registry-operator-86d6d77c7c-q287n\" (UID: \"bcf05594-4c10-4b54-a47c-d55e323f1f87\") " pod="openshift-image-registry/cluster-image-registry-operator-86d6d77c7c-q287n" Mar 13 12:37:22.267813 master-0 kubenswrapper[4143]: E0313 12:37:22.267775 4143 projected.go:288] Couldn't get configMap openshift-ingress-operator/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition Mar 13 12:37:22.267934 master-0 kubenswrapper[4143]: E0313 12:37:22.267833 4143 projected.go:194] Error preparing data for projected volume kube-api-access-f9hks for pod openshift-ingress-operator/ingress-operator-677db989d6-ckl2j: failed to sync configmap cache: timed out waiting for the condition Mar 13 12:37:22.267934 master-0 kubenswrapper[4143]: E0313 12:37:22.267902 
4143 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/2f79578c-bbfb-4968-893a-730deb4c01f9-kube-api-access-f9hks podName:2f79578c-bbfb-4968-893a-730deb4c01f9 nodeName:}" failed. No retries permitted until 2026-03-13 12:37:22.767880598 +0000 UTC m=+148.515024992 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-f9hks" (UniqueName: "kubernetes.io/projected/2f79578c-bbfb-4968-893a-730deb4c01f9-kube-api-access-f9hks") pod "ingress-operator-677db989d6-ckl2j" (UID: "2f79578c-bbfb-4968-893a-730deb4c01f9") : failed to sync configmap cache: timed out waiting for the condition Mar 13 12:37:22.274773 master-0 kubenswrapper[4143]: E0313 12:37:22.274744 4143 projected.go:288] Couldn't get configMap openshift-config-operator/openshift-service-ca.crt: failed to sync configmap cache: timed out waiting for the condition Mar 13 12:37:22.274773 master-0 kubenswrapper[4143]: E0313 12:37:22.274770 4143 projected.go:194] Error preparing data for projected volume kube-api-access-lwkdj for pod openshift-config-operator/openshift-config-operator-64488f9d78-t8fb4: failed to sync configmap cache: timed out waiting for the condition Mar 13 12:37:22.274886 master-0 kubenswrapper[4143]: E0313 12:37:22.274816 4143 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f0803181-4e37-43fa-8ddc-9c76d3f61817-kube-api-access-lwkdj podName:f0803181-4e37-43fa-8ddc-9c76d3f61817 nodeName:}" failed. No retries permitted until 2026-03-13 12:37:22.774801647 +0000 UTC m=+148.521946041 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-lwkdj" (UniqueName: "kubernetes.io/projected/f0803181-4e37-43fa-8ddc-9c76d3f61817-kube-api-access-lwkdj") pod "openshift-config-operator-64488f9d78-t8fb4" (UID: "f0803181-4e37-43fa-8ddc-9c76d3f61817") : failed to sync configmap cache: timed out waiting for the condition Mar 13 12:37:22.308898 master-0 kubenswrapper[4143]: I0313 12:37:22.308692 4143 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Mar 13 12:37:22.318291 master-0 kubenswrapper[4143]: I0313 12:37:22.318263 4143 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Mar 13 12:37:22.415643 master-0 kubenswrapper[4143]: I0313 12:37:22.415609 4143 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Mar 13 12:37:22.433352 master-0 kubenswrapper[4143]: I0313 12:37:22.432990 4143 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-69b6fc6b88-vmscz" Mar 13 12:37:22.496856 master-0 kubenswrapper[4143]: I0313 12:37:22.496774 4143 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Mar 13 12:37:22.501547 master-0 kubenswrapper[4143]: I0313 12:37:22.501425 4143 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Mar 13 12:37:22.510849 master-0 kubenswrapper[4143]: I0313 12:37:22.510784 4143 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j4hd6\" (UniqueName: \"kubernetes.io/projected/bcf05594-4c10-4b54-a47c-d55e323f1f87-kube-api-access-j4hd6\") pod \"cluster-image-registry-operator-86d6d77c7c-q287n\" (UID: \"bcf05594-4c10-4b54-a47c-d55e323f1f87\") " pod="openshift-image-registry/cluster-image-registry-operator-86d6d77c7c-q287n" Mar 13 12:37:22.589228 master-0 kubenswrapper[4143]: I0313 12:37:22.588847 4143 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Mar 13 12:37:22.608632 master-0 kubenswrapper[4143]: I0313 12:37:22.606607 4143 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Mar 13 12:37:22.631461 master-0 kubenswrapper[4143]: I0313 12:37:22.631411 4143 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-69b6fc6b88-vmscz"] Mar 13 12:37:22.642450 master-0 kubenswrapper[4143]: W0313 12:37:22.642414 4143 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod034aaf8e_95df_4171_bae4_e7abe58d15f7.slice/crio-8f2520a5a8a4d59a3a9c1df60e2638463688675ec7d03c44c89816280d167889 WatchSource:0}: Error finding container 8f2520a5a8a4d59a3a9c1df60e2638463688675ec7d03c44c89816280d167889: Status 404 returned error 
can't find the container with id 8f2520a5a8a4d59a3a9c1df60e2638463688675ec7d03c44c89816280d167889 Mar 13 12:37:22.766597 master-0 kubenswrapper[4143]: I0313 12:37:22.766545 4143 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/13f32761-b386-4f93-b3c0-b16ea53d338a-metrics-tls\") pod \"dns-operator-589895fbb7-mmwk7\" (UID: \"13f32761-b386-4f93-b3c0-b16ea53d338a\") " pod="openshift-dns-operator/dns-operator-589895fbb7-mmwk7" Mar 13 12:37:22.766864 master-0 kubenswrapper[4143]: I0313 12:37:22.766613 4143 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/3020d236-03e0-4916-97dd-f1085632ca43-apiservice-cert\") pod \"cluster-node-tuning-operator-66c7586884-cz8pc\" (UID: \"3020d236-03e0-4916-97dd-f1085632ca43\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-66c7586884-cz8pc" Mar 13 12:37:22.766864 master-0 kubenswrapper[4143]: I0313 12:37:22.766653 4143 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/604456a0-4997-43bc-87ef-283a002111fe-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-674cbfbd9d-zwtdz\" (UID: \"604456a0-4997-43bc-87ef-283a002111fe\") " pod="openshift-monitoring/cluster-monitoring-operator-674cbfbd9d-zwtdz" Mar 13 12:37:22.766864 master-0 kubenswrapper[4143]: I0313 12:37:22.766693 4143 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/2f79578c-bbfb-4968-893a-730deb4c01f9-metrics-tls\") pod \"ingress-operator-677db989d6-ckl2j\" (UID: \"2f79578c-bbfb-4968-893a-730deb4c01f9\") " pod="openshift-ingress-operator/ingress-operator-677db989d6-ckl2j" Mar 13 12:37:22.766864 master-0 kubenswrapper[4143]: I0313 12:37:22.766726 4143 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/10944f9c-8ce9-44e6-9c36-a0ea19d8cae3-srv-cert\") pod \"catalog-operator-7d9c49f57b-tlnkd\" (UID: \"10944f9c-8ce9-44e6-9c36-a0ea19d8cae3\") " pod="openshift-operator-lifecycle-manager/catalog-operator-7d9c49f57b-tlnkd" Mar 13 12:37:22.766864 master-0 kubenswrapper[4143]: I0313 12:37:22.766767 4143 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bcf05594-4c10-4b54-a47c-d55e323f1f87-image-registry-operator-tls\") pod \"cluster-image-registry-operator-86d6d77c7c-q287n\" (UID: \"bcf05594-4c10-4b54-a47c-d55e323f1f87\") " pod="openshift-image-registry/cluster-image-registry-operator-86d6d77c7c-q287n" Mar 13 12:37:22.766864 master-0 kubenswrapper[4143]: I0313 12:37:22.766818 4143 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-tuning-operator-tls\" (UniqueName: \"kubernetes.io/secret/3020d236-03e0-4916-97dd-f1085632ca43-node-tuning-operator-tls\") pod \"cluster-node-tuning-operator-66c7586884-cz8pc\" (UID: \"3020d236-03e0-4916-97dd-f1085632ca43\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-66c7586884-cz8pc" Mar 13 12:37:22.766864 master-0 kubenswrapper[4143]: I0313 12:37:22.766843 4143 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/d3d998ee-b26f-4e30-83bc-f94f8c68060a-marketplace-trusted-ca\") pod \"marketplace-operator-64bf9778cb-7qhr4\" (UID: \"d3d998ee-b26f-4e30-83bc-f94f8c68060a\") " pod="openshift-marketplace/marketplace-operator-64bf9778cb-7qhr4" Mar 13 12:37:22.766864 master-0 kubenswrapper[4143]: I0313 12:37:22.766868 4143 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/2f79578c-bbfb-4968-893a-730deb4c01f9-trusted-ca\") pod \"ingress-operator-677db989d6-ckl2j\" 
(UID: \"2f79578c-bbfb-4968-893a-730deb4c01f9\") " pod="openshift-ingress-operator/ingress-operator-677db989d6-ckl2j" Mar 13 12:37:22.767460 master-0 kubenswrapper[4143]: I0313 12:37:22.766892 4143 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3d653e1a-5903-4a02-9357-df145f028c0d-package-server-manager-serving-cert\") pod \"package-server-manager-854648ff6d-669qk\" (UID: \"3d653e1a-5903-4a02-9357-df145f028c0d\") " pod="openshift-operator-lifecycle-manager/package-server-manager-854648ff6d-669qk" Mar 13 12:37:22.767460 master-0 kubenswrapper[4143]: I0313 12:37:22.766963 4143 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/d5a19b80-d488-46d3-a4a8-0b80361077e1-srv-cert\") pod \"olm-operator-d64cfc9db-rfqb9\" (UID: \"d5a19b80-d488-46d3-a4a8-0b80361077e1\") " pod="openshift-operator-lifecycle-manager/olm-operator-d64cfc9db-rfqb9" Mar 13 12:37:22.767460 master-0 kubenswrapper[4143]: I0313 12:37:22.766994 4143 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/4c0b18db-06ad-4d58-a353-f6fd96309dea-webhook-certs\") pod \"multus-admission-controller-8d675b596-96gds\" (UID: \"4c0b18db-06ad-4d58-a353-f6fd96309dea\") " pod="openshift-multus/multus-admission-controller-8d675b596-96gds" Mar 13 12:37:22.767460 master-0 kubenswrapper[4143]: I0313 12:37:22.767019 4143 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/d3d998ee-b26f-4e30-83bc-f94f8c68060a-marketplace-operator-metrics\") pod \"marketplace-operator-64bf9778cb-7qhr4\" (UID: \"d3d998ee-b26f-4e30-83bc-f94f8c68060a\") " pod="openshift-marketplace/marketplace-operator-64bf9778cb-7qhr4" Mar 13 12:37:22.767460 master-0 kubenswrapper[4143]: E0313 12:37:22.767183 4143 
secret.go:189] Couldn't get secret openshift-marketplace/marketplace-operator-metrics: secret "marketplace-operator-metrics" not found Mar 13 12:37:22.767460 master-0 kubenswrapper[4143]: E0313 12:37:22.767243 4143 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d3d998ee-b26f-4e30-83bc-f94f8c68060a-marketplace-operator-metrics podName:d3d998ee-b26f-4e30-83bc-f94f8c68060a nodeName:}" failed. No retries permitted until 2026-03-13 12:37:23.767225777 +0000 UTC m=+149.514370101 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "marketplace-operator-metrics" (UniqueName: "kubernetes.io/secret/d3d998ee-b26f-4e30-83bc-f94f8c68060a-marketplace-operator-metrics") pod "marketplace-operator-64bf9778cb-7qhr4" (UID: "d3d998ee-b26f-4e30-83bc-f94f8c68060a") : secret "marketplace-operator-metrics" not found Mar 13 12:37:22.767460 master-0 kubenswrapper[4143]: E0313 12:37:22.767402 4143 secret.go:189] Couldn't get secret openshift-image-registry/image-registry-operator-tls: secret "image-registry-operator-tls" not found Mar 13 12:37:22.767789 master-0 kubenswrapper[4143]: E0313 12:37:22.767626 4143 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/node-tuning-operator-tls: secret "node-tuning-operator-tls" not found Mar 13 12:37:22.768912 master-0 kubenswrapper[4143]: E0313 12:37:22.768870 4143 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/bcf05594-4c10-4b54-a47c-d55e323f1f87-image-registry-operator-tls podName:bcf05594-4c10-4b54-a47c-d55e323f1f87 nodeName:}" failed. No retries permitted until 2026-03-13 12:37:23.767546442 +0000 UTC m=+149.514690786 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "image-registry-operator-tls" (UniqueName: "kubernetes.io/secret/bcf05594-4c10-4b54-a47c-d55e323f1f87-image-registry-operator-tls") pod "cluster-image-registry-operator-86d6d77c7c-q287n" (UID: "bcf05594-4c10-4b54-a47c-d55e323f1f87") : secret "image-registry-operator-tls" not found Mar 13 12:37:22.768912 master-0 kubenswrapper[4143]: E0313 12:37:22.768908 4143 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3020d236-03e0-4916-97dd-f1085632ca43-node-tuning-operator-tls podName:3020d236-03e0-4916-97dd-f1085632ca43 nodeName:}" failed. No retries permitted until 2026-03-13 12:37:24.768898691 +0000 UTC m=+150.516043085 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "node-tuning-operator-tls" (UniqueName: "kubernetes.io/secret/3020d236-03e0-4916-97dd-f1085632ca43-node-tuning-operator-tls") pod "cluster-node-tuning-operator-66c7586884-cz8pc" (UID: "3020d236-03e0-4916-97dd-f1085632ca43") : secret "node-tuning-operator-tls" not found Mar 13 12:37:22.769099 master-0 kubenswrapper[4143]: E0313 12:37:22.768991 4143 secret.go:189] Couldn't get secret openshift-dns-operator/metrics-tls: secret "metrics-tls" not found Mar 13 12:37:22.769099 master-0 kubenswrapper[4143]: E0313 12:37:22.769035 4143 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/13f32761-b386-4f93-b3c0-b16ea53d338a-metrics-tls podName:13f32761-b386-4f93-b3c0-b16ea53d338a nodeName:}" failed. No retries permitted until 2026-03-13 12:37:24.769026033 +0000 UTC m=+150.516170457 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/13f32761-b386-4f93-b3c0-b16ea53d338a-metrics-tls") pod "dns-operator-589895fbb7-mmwk7" (UID: "13f32761-b386-4f93-b3c0-b16ea53d338a") : secret "metrics-tls" not found Mar 13 12:37:22.769099 master-0 kubenswrapper[4143]: I0313 12:37:22.769052 4143 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/d3d998ee-b26f-4e30-83bc-f94f8c68060a-marketplace-trusted-ca\") pod \"marketplace-operator-64bf9778cb-7qhr4\" (UID: \"d3d998ee-b26f-4e30-83bc-f94f8c68060a\") " pod="openshift-marketplace/marketplace-operator-64bf9778cb-7qhr4" Mar 13 12:37:22.769099 master-0 kubenswrapper[4143]: E0313 12:37:22.769079 4143 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/performance-addon-operator-webhook-cert: secret "performance-addon-operator-webhook-cert" not found Mar 13 12:37:22.769099 master-0 kubenswrapper[4143]: E0313 12:37:22.769103 4143 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3020d236-03e0-4916-97dd-f1085632ca43-apiservice-cert podName:3020d236-03e0-4916-97dd-f1085632ca43 nodeName:}" failed. No retries permitted until 2026-03-13 12:37:24.769096464 +0000 UTC m=+150.516240908 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "apiservice-cert" (UniqueName: "kubernetes.io/secret/3020d236-03e0-4916-97dd-f1085632ca43-apiservice-cert") pod "cluster-node-tuning-operator-66c7586884-cz8pc" (UID: "3020d236-03e0-4916-97dd-f1085632ca43") : secret "performance-addon-operator-webhook-cert" not found Mar 13 12:37:22.769974 master-0 kubenswrapper[4143]: E0313 12:37:22.769168 4143 secret.go:189] Couldn't get secret openshift-monitoring/cluster-monitoring-operator-tls: secret "cluster-monitoring-operator-tls" not found Mar 13 12:37:22.769974 master-0 kubenswrapper[4143]: E0313 12:37:22.769192 4143 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/604456a0-4997-43bc-87ef-283a002111fe-cluster-monitoring-operator-tls podName:604456a0-4997-43bc-87ef-283a002111fe nodeName:}" failed. No retries permitted until 2026-03-13 12:37:24.769185175 +0000 UTC m=+150.516329599 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "cluster-monitoring-operator-tls" (UniqueName: "kubernetes.io/secret/604456a0-4997-43bc-87ef-283a002111fe-cluster-monitoring-operator-tls") pod "cluster-monitoring-operator-674cbfbd9d-zwtdz" (UID: "604456a0-4997-43bc-87ef-283a002111fe") : secret "cluster-monitoring-operator-tls" not found Mar 13 12:37:22.769974 master-0 kubenswrapper[4143]: E0313 12:37:22.769237 4143 secret.go:189] Couldn't get secret openshift-ingress-operator/metrics-tls: secret "metrics-tls" not found Mar 13 12:37:22.769974 master-0 kubenswrapper[4143]: E0313 12:37:22.769257 4143 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2f79578c-bbfb-4968-893a-730deb4c01f9-metrics-tls podName:2f79578c-bbfb-4968-893a-730deb4c01f9 nodeName:}" failed. No retries permitted until 2026-03-13 12:37:23.769250506 +0000 UTC m=+149.516394930 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/2f79578c-bbfb-4968-893a-730deb4c01f9-metrics-tls") pod "ingress-operator-677db989d6-ckl2j" (UID: "2f79578c-bbfb-4968-893a-730deb4c01f9") : secret "metrics-tls" not found Mar 13 12:37:22.769974 master-0 kubenswrapper[4143]: E0313 12:37:22.769268 4143 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/olm-operator-serving-cert: secret "olm-operator-serving-cert" not found Mar 13 12:37:22.769974 master-0 kubenswrapper[4143]: E0313 12:37:22.769303 4143 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d5a19b80-d488-46d3-a4a8-0b80361077e1-srv-cert podName:d5a19b80-d488-46d3-a4a8-0b80361077e1 nodeName:}" failed. No retries permitted until 2026-03-13 12:37:24.769293006 +0000 UTC m=+150.516437330 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/d5a19b80-d488-46d3-a4a8-0b80361077e1-srv-cert") pod "olm-operator-d64cfc9db-rfqb9" (UID: "d5a19b80-d488-46d3-a4a8-0b80361077e1") : secret "olm-operator-serving-cert" not found Mar 13 12:37:22.769974 master-0 kubenswrapper[4143]: E0313 12:37:22.769305 4143 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/catalog-operator-serving-cert: secret "catalog-operator-serving-cert" not found Mar 13 12:37:22.769974 master-0 kubenswrapper[4143]: E0313 12:37:22.769331 4143 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/10944f9c-8ce9-44e6-9c36-a0ea19d8cae3-srv-cert podName:10944f9c-8ce9-44e6-9c36-a0ea19d8cae3 nodeName:}" failed. No retries permitted until 2026-03-13 12:37:24.769324687 +0000 UTC m=+150.516469121 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/10944f9c-8ce9-44e6-9c36-a0ea19d8cae3-srv-cert") pod "catalog-operator-7d9c49f57b-tlnkd" (UID: "10944f9c-8ce9-44e6-9c36-a0ea19d8cae3") : secret "catalog-operator-serving-cert" not found Mar 13 12:37:22.769974 master-0 kubenswrapper[4143]: E0313 12:37:22.769339 4143 secret.go:189] Couldn't get secret openshift-multus/multus-admission-controller-secret: secret "multus-admission-controller-secret" not found Mar 13 12:37:22.769974 master-0 kubenswrapper[4143]: E0313 12:37:22.769360 4143 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4c0b18db-06ad-4d58-a353-f6fd96309dea-webhook-certs podName:4c0b18db-06ad-4d58-a353-f6fd96309dea nodeName:}" failed. No retries permitted until 2026-03-13 12:37:24.769354037 +0000 UTC m=+150.516498351 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/4c0b18db-06ad-4d58-a353-f6fd96309dea-webhook-certs") pod "multus-admission-controller-8d675b596-96gds" (UID: "4c0b18db-06ad-4d58-a353-f6fd96309dea") : secret "multus-admission-controller-secret" not found Mar 13 12:37:22.769974 master-0 kubenswrapper[4143]: E0313 12:37:22.769633 4143 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/package-server-manager-serving-cert: secret "package-server-manager-serving-cert" not found Mar 13 12:37:22.769974 master-0 kubenswrapper[4143]: E0313 12:37:22.769732 4143 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3d653e1a-5903-4a02-9357-df145f028c0d-package-server-manager-serving-cert podName:3d653e1a-5903-4a02-9357-df145f028c0d nodeName:}" failed. No retries permitted until 2026-03-13 12:37:24.769706512 +0000 UTC m=+150.516850866 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "package-server-manager-serving-cert" (UniqueName: "kubernetes.io/secret/3d653e1a-5903-4a02-9357-df145f028c0d-package-server-manager-serving-cert") pod "package-server-manager-854648ff6d-669qk" (UID: "3d653e1a-5903-4a02-9357-df145f028c0d") : secret "package-server-manager-serving-cert" not found Mar 13 12:37:22.770494 master-0 kubenswrapper[4143]: I0313 12:37:22.770043 4143 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/2f79578c-bbfb-4968-893a-730deb4c01f9-trusted-ca\") pod \"ingress-operator-677db989d6-ckl2j\" (UID: \"2f79578c-bbfb-4968-893a-730deb4c01f9\") " pod="openshift-ingress-operator/ingress-operator-677db989d6-ckl2j" Mar 13 12:37:22.868191 master-0 kubenswrapper[4143]: I0313 12:37:22.868144 4143 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f9hks\" (UniqueName: \"kubernetes.io/projected/2f79578c-bbfb-4968-893a-730deb4c01f9-kube-api-access-f9hks\") pod \"ingress-operator-677db989d6-ckl2j\" (UID: \"2f79578c-bbfb-4968-893a-730deb4c01f9\") " pod="openshift-ingress-operator/ingress-operator-677db989d6-ckl2j" Mar 13 12:37:22.868191 master-0 kubenswrapper[4143]: I0313 12:37:22.868199 4143 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lwkdj\" (UniqueName: \"kubernetes.io/projected/f0803181-4e37-43fa-8ddc-9c76d3f61817-kube-api-access-lwkdj\") pod \"openshift-config-operator-64488f9d78-t8fb4\" (UID: \"f0803181-4e37-43fa-8ddc-9c76d3f61817\") " pod="openshift-config-operator/openshift-config-operator-64488f9d78-t8fb4" Mar 13 12:37:22.872599 master-0 kubenswrapper[4143]: I0313 12:37:22.872551 4143 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lwkdj\" (UniqueName: \"kubernetes.io/projected/f0803181-4e37-43fa-8ddc-9c76d3f61817-kube-api-access-lwkdj\") pod \"openshift-config-operator-64488f9d78-t8fb4\" (UID: 
\"f0803181-4e37-43fa-8ddc-9c76d3f61817\") " pod="openshift-config-operator/openshift-config-operator-64488f9d78-t8fb4" Mar 13 12:37:22.879265 master-0 kubenswrapper[4143]: I0313 12:37:22.879224 4143 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-69b6fc6b88-vmscz" event={"ID":"034aaf8e-95df-4171-bae4-e7abe58d15f7","Type":"ContainerStarted","Data":"8f2520a5a8a4d59a3a9c1df60e2638463688675ec7d03c44c89816280d167889"} Mar 13 12:37:22.882153 master-0 kubenswrapper[4143]: I0313 12:37:22.881779 4143 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-68bd585b-qxmnf" event={"ID":"ec5ec2e2-f7b3-43a1-87da-fbbe0ee5b118","Type":"ContainerStarted","Data":"69f6736e401004be8e5844a5f9b7891b28a4228a05eb13fc36ff3b64b8740138"} Mar 13 12:37:22.888637 master-0 kubenswrapper[4143]: I0313 12:37:22.888597 4143 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f9hks\" (UniqueName: \"kubernetes.io/projected/2f79578c-bbfb-4968-893a-730deb4c01f9-kube-api-access-f9hks\") pod \"ingress-operator-677db989d6-ckl2j\" (UID: \"2f79578c-bbfb-4968-893a-730deb4c01f9\") " pod="openshift-ingress-operator/ingress-operator-677db989d6-ckl2j" Mar 13 12:37:23.159865 master-0 kubenswrapper[4143]: I0313 12:37:23.159747 4143 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-config-operator/openshift-config-operator-64488f9d78-t8fb4" Mar 13 12:37:23.781555 master-0 kubenswrapper[4143]: I0313 12:37:23.781467 4143 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/2f79578c-bbfb-4968-893a-730deb4c01f9-metrics-tls\") pod \"ingress-operator-677db989d6-ckl2j\" (UID: \"2f79578c-bbfb-4968-893a-730deb4c01f9\") " pod="openshift-ingress-operator/ingress-operator-677db989d6-ckl2j" Mar 13 12:37:23.781733 master-0 kubenswrapper[4143]: E0313 12:37:23.781699 4143 secret.go:189] Couldn't get secret openshift-ingress-operator/metrics-tls: secret "metrics-tls" not found Mar 13 12:37:23.781919 master-0 kubenswrapper[4143]: I0313 12:37:23.781760 4143 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bcf05594-4c10-4b54-a47c-d55e323f1f87-image-registry-operator-tls\") pod \"cluster-image-registry-operator-86d6d77c7c-q287n\" (UID: \"bcf05594-4c10-4b54-a47c-d55e323f1f87\") " pod="openshift-image-registry/cluster-image-registry-operator-86d6d77c7c-q287n" Mar 13 12:37:23.781919 master-0 kubenswrapper[4143]: E0313 12:37:23.781790 4143 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2f79578c-bbfb-4968-893a-730deb4c01f9-metrics-tls podName:2f79578c-bbfb-4968-893a-730deb4c01f9 nodeName:}" failed. No retries permitted until 2026-03-13 12:37:25.781766352 +0000 UTC m=+151.528910676 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/2f79578c-bbfb-4968-893a-730deb4c01f9-metrics-tls") pod "ingress-operator-677db989d6-ckl2j" (UID: "2f79578c-bbfb-4968-893a-730deb4c01f9") : secret "metrics-tls" not found Mar 13 12:37:23.782233 master-0 kubenswrapper[4143]: E0313 12:37:23.781930 4143 secret.go:189] Couldn't get secret openshift-image-registry/image-registry-operator-tls: secret "image-registry-operator-tls" not found Mar 13 12:37:23.782233 master-0 kubenswrapper[4143]: I0313 12:37:23.781951 4143 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/d3d998ee-b26f-4e30-83bc-f94f8c68060a-marketplace-operator-metrics\") pod \"marketplace-operator-64bf9778cb-7qhr4\" (UID: \"d3d998ee-b26f-4e30-83bc-f94f8c68060a\") " pod="openshift-marketplace/marketplace-operator-64bf9778cb-7qhr4" Mar 13 12:37:23.782233 master-0 kubenswrapper[4143]: E0313 12:37:23.781996 4143 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/bcf05594-4c10-4b54-a47c-d55e323f1f87-image-registry-operator-tls podName:bcf05594-4c10-4b54-a47c-d55e323f1f87 nodeName:}" failed. No retries permitted until 2026-03-13 12:37:25.781982554 +0000 UTC m=+151.529126878 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "image-registry-operator-tls" (UniqueName: "kubernetes.io/secret/bcf05594-4c10-4b54-a47c-d55e323f1f87-image-registry-operator-tls") pod "cluster-image-registry-operator-86d6d77c7c-q287n" (UID: "bcf05594-4c10-4b54-a47c-d55e323f1f87") : secret "image-registry-operator-tls" not found Mar 13 12:37:23.782233 master-0 kubenswrapper[4143]: E0313 12:37:23.782029 4143 secret.go:189] Couldn't get secret openshift-marketplace/marketplace-operator-metrics: secret "marketplace-operator-metrics" not found Mar 13 12:37:23.782233 master-0 kubenswrapper[4143]: E0313 12:37:23.782072 4143 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d3d998ee-b26f-4e30-83bc-f94f8c68060a-marketplace-operator-metrics podName:d3d998ee-b26f-4e30-83bc-f94f8c68060a nodeName:}" failed. No retries permitted until 2026-03-13 12:37:25.782060296 +0000 UTC m=+151.529204690 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "marketplace-operator-metrics" (UniqueName: "kubernetes.io/secret/d3d998ee-b26f-4e30-83bc-f94f8c68060a-marketplace-operator-metrics") pod "marketplace-operator-64bf9778cb-7qhr4" (UID: "d3d998ee-b26f-4e30-83bc-f94f8c68060a") : secret "marketplace-operator-metrics" not found Mar 13 12:37:23.945590 master-0 kubenswrapper[4143]: I0313 12:37:23.943130 4143 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-68bd585b-qxmnf" podStartSLOduration=114.943110168 podStartE2EDuration="1m54.943110168s" podCreationTimestamp="2026-03-13 12:35:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 12:37:22.902973368 +0000 UTC m=+148.650117722" watchObservedRunningTime="2026-03-13 12:37:23.943110168 +0000 UTC m=+149.690254512" Mar 13 12:37:23.945590 master-0 kubenswrapper[4143]: I0313 12:37:23.943784 4143 kubelet.go:2428] "SyncLoop UPDATE" 
source="api" pods=["openshift-config-operator/openshift-config-operator-64488f9d78-t8fb4"] Mar 13 12:37:24.794619 master-0 kubenswrapper[4143]: I0313 12:37:24.794016 4143 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/10944f9c-8ce9-44e6-9c36-a0ea19d8cae3-srv-cert\") pod \"catalog-operator-7d9c49f57b-tlnkd\" (UID: \"10944f9c-8ce9-44e6-9c36-a0ea19d8cae3\") " pod="openshift-operator-lifecycle-manager/catalog-operator-7d9c49f57b-tlnkd" Mar 13 12:37:24.794619 master-0 kubenswrapper[4143]: I0313 12:37:24.794344 4143 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-tuning-operator-tls\" (UniqueName: \"kubernetes.io/secret/3020d236-03e0-4916-97dd-f1085632ca43-node-tuning-operator-tls\") pod \"cluster-node-tuning-operator-66c7586884-cz8pc\" (UID: \"3020d236-03e0-4916-97dd-f1085632ca43\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-66c7586884-cz8pc" Mar 13 12:37:24.794619 master-0 kubenswrapper[4143]: I0313 12:37:24.794371 4143 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3d653e1a-5903-4a02-9357-df145f028c0d-package-server-manager-serving-cert\") pod \"package-server-manager-854648ff6d-669qk\" (UID: \"3d653e1a-5903-4a02-9357-df145f028c0d\") " pod="openshift-operator-lifecycle-manager/package-server-manager-854648ff6d-669qk" Mar 13 12:37:24.794619 master-0 kubenswrapper[4143]: E0313 12:37:24.794228 4143 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/catalog-operator-serving-cert: secret "catalog-operator-serving-cert" not found Mar 13 12:37:24.794619 master-0 kubenswrapper[4143]: I0313 12:37:24.794402 4143 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/d5a19b80-d488-46d3-a4a8-0b80361077e1-srv-cert\") pod 
\"olm-operator-d64cfc9db-rfqb9\" (UID: \"d5a19b80-d488-46d3-a4a8-0b80361077e1\") " pod="openshift-operator-lifecycle-manager/olm-operator-d64cfc9db-rfqb9" Mar 13 12:37:24.794619 master-0 kubenswrapper[4143]: E0313 12:37:24.794471 4143 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/10944f9c-8ce9-44e6-9c36-a0ea19d8cae3-srv-cert podName:10944f9c-8ce9-44e6-9c36-a0ea19d8cae3 nodeName:}" failed. No retries permitted until 2026-03-13 12:37:28.7944469 +0000 UTC m=+154.541591294 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/10944f9c-8ce9-44e6-9c36-a0ea19d8cae3-srv-cert") pod "catalog-operator-7d9c49f57b-tlnkd" (UID: "10944f9c-8ce9-44e6-9c36-a0ea19d8cae3") : secret "catalog-operator-serving-cert" not found Mar 13 12:37:24.794619 master-0 kubenswrapper[4143]: I0313 12:37:24.794507 4143 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/4c0b18db-06ad-4d58-a353-f6fd96309dea-webhook-certs\") pod \"multus-admission-controller-8d675b596-96gds\" (UID: \"4c0b18db-06ad-4d58-a353-f6fd96309dea\") " pod="openshift-multus/multus-admission-controller-8d675b596-96gds" Mar 13 12:37:24.794619 master-0 kubenswrapper[4143]: E0313 12:37:24.794529 4143 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/olm-operator-serving-cert: secret "olm-operator-serving-cert" not found Mar 13 12:37:24.794619 master-0 kubenswrapper[4143]: E0313 12:37:24.794565 4143 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/node-tuning-operator-tls: secret "node-tuning-operator-tls" not found Mar 13 12:37:24.794619 master-0 kubenswrapper[4143]: E0313 12:37:24.794585 4143 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d5a19b80-d488-46d3-a4a8-0b80361077e1-srv-cert podName:d5a19b80-d488-46d3-a4a8-0b80361077e1 nodeName:}" failed. 
No retries permitted until 2026-03-13 12:37:28.794568172 +0000 UTC m=+154.541712576 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/d5a19b80-d488-46d3-a4a8-0b80361077e1-srv-cert") pod "olm-operator-d64cfc9db-rfqb9" (UID: "d5a19b80-d488-46d3-a4a8-0b80361077e1") : secret "olm-operator-serving-cert" not found Mar 13 12:37:24.795499 master-0 kubenswrapper[4143]: E0313 12:37:24.794673 4143 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3020d236-03e0-4916-97dd-f1085632ca43-node-tuning-operator-tls podName:3020d236-03e0-4916-97dd-f1085632ca43 nodeName:}" failed. No retries permitted until 2026-03-13 12:37:28.794645833 +0000 UTC m=+154.541790157 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "node-tuning-operator-tls" (UniqueName: "kubernetes.io/secret/3020d236-03e0-4916-97dd-f1085632ca43-node-tuning-operator-tls") pod "cluster-node-tuning-operator-66c7586884-cz8pc" (UID: "3020d236-03e0-4916-97dd-f1085632ca43") : secret "node-tuning-operator-tls" not found Mar 13 12:37:24.795499 master-0 kubenswrapper[4143]: E0313 12:37:24.794699 4143 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/package-server-manager-serving-cert: secret "package-server-manager-serving-cert" not found Mar 13 12:37:24.795499 master-0 kubenswrapper[4143]: I0313 12:37:24.794704 4143 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/13f32761-b386-4f93-b3c0-b16ea53d338a-metrics-tls\") pod \"dns-operator-589895fbb7-mmwk7\" (UID: \"13f32761-b386-4f93-b3c0-b16ea53d338a\") " pod="openshift-dns-operator/dns-operator-589895fbb7-mmwk7" Mar 13 12:37:24.795499 master-0 kubenswrapper[4143]: E0313 12:37:24.794733 4143 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3d653e1a-5903-4a02-9357-df145f028c0d-package-server-manager-serving-cert 
podName:3d653e1a-5903-4a02-9357-df145f028c0d nodeName:}" failed. No retries permitted until 2026-03-13 12:37:28.794720074 +0000 UTC m=+154.541864488 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "package-server-manager-serving-cert" (UniqueName: "kubernetes.io/secret/3d653e1a-5903-4a02-9357-df145f028c0d-package-server-manager-serving-cert") pod "package-server-manager-854648ff6d-669qk" (UID: "3d653e1a-5903-4a02-9357-df145f028c0d") : secret "package-server-manager-serving-cert" not found Mar 13 12:37:24.795499 master-0 kubenswrapper[4143]: I0313 12:37:24.794761 4143 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/3020d236-03e0-4916-97dd-f1085632ca43-apiservice-cert\") pod \"cluster-node-tuning-operator-66c7586884-cz8pc\" (UID: \"3020d236-03e0-4916-97dd-f1085632ca43\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-66c7586884-cz8pc" Mar 13 12:37:24.795499 master-0 kubenswrapper[4143]: E0313 12:37:24.794771 4143 secret.go:189] Couldn't get secret openshift-dns-operator/metrics-tls: secret "metrics-tls" not found Mar 13 12:37:24.795499 master-0 kubenswrapper[4143]: E0313 12:37:24.794784 4143 secret.go:189] Couldn't get secret openshift-multus/multus-admission-controller-secret: secret "multus-admission-controller-secret" not found Mar 13 12:37:24.795499 master-0 kubenswrapper[4143]: E0313 12:37:24.794794 4143 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/13f32761-b386-4f93-b3c0-b16ea53d338a-metrics-tls podName:13f32761-b386-4f93-b3c0-b16ea53d338a nodeName:}" failed. No retries permitted until 2026-03-13 12:37:28.794787155 +0000 UTC m=+154.541931479 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/13f32761-b386-4f93-b3c0-b16ea53d338a-metrics-tls") pod "dns-operator-589895fbb7-mmwk7" (UID: "13f32761-b386-4f93-b3c0-b16ea53d338a") : secret "metrics-tls" not found Mar 13 12:37:24.795499 master-0 kubenswrapper[4143]: I0313 12:37:24.794807 4143 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/604456a0-4997-43bc-87ef-283a002111fe-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-674cbfbd9d-zwtdz\" (UID: \"604456a0-4997-43bc-87ef-283a002111fe\") " pod="openshift-monitoring/cluster-monitoring-operator-674cbfbd9d-zwtdz" Mar 13 12:37:24.795499 master-0 kubenswrapper[4143]: E0313 12:37:24.794833 4143 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4c0b18db-06ad-4d58-a353-f6fd96309dea-webhook-certs podName:4c0b18db-06ad-4d58-a353-f6fd96309dea nodeName:}" failed. No retries permitted until 2026-03-13 12:37:28.794810346 +0000 UTC m=+154.541954730 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/4c0b18db-06ad-4d58-a353-f6fd96309dea-webhook-certs") pod "multus-admission-controller-8d675b596-96gds" (UID: "4c0b18db-06ad-4d58-a353-f6fd96309dea") : secret "multus-admission-controller-secret" not found Mar 13 12:37:24.795499 master-0 kubenswrapper[4143]: E0313 12:37:24.794869 4143 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/performance-addon-operator-webhook-cert: secret "performance-addon-operator-webhook-cert" not found Mar 13 12:37:24.795499 master-0 kubenswrapper[4143]: E0313 12:37:24.794924 4143 secret.go:189] Couldn't get secret openshift-monitoring/cluster-monitoring-operator-tls: secret "cluster-monitoring-operator-tls" not found Mar 13 12:37:24.795499 master-0 kubenswrapper[4143]: E0313 12:37:24.794925 4143 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3020d236-03e0-4916-97dd-f1085632ca43-apiservice-cert podName:3020d236-03e0-4916-97dd-f1085632ca43 nodeName:}" failed. No retries permitted until 2026-03-13 12:37:28.794913297 +0000 UTC m=+154.542057621 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "apiservice-cert" (UniqueName: "kubernetes.io/secret/3020d236-03e0-4916-97dd-f1085632ca43-apiservice-cert") pod "cluster-node-tuning-operator-66c7586884-cz8pc" (UID: "3020d236-03e0-4916-97dd-f1085632ca43") : secret "performance-addon-operator-webhook-cert" not found Mar 13 12:37:24.795499 master-0 kubenswrapper[4143]: E0313 12:37:24.794989 4143 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/604456a0-4997-43bc-87ef-283a002111fe-cluster-monitoring-operator-tls podName:604456a0-4997-43bc-87ef-283a002111fe nodeName:}" failed. No retries permitted until 2026-03-13 12:37:28.794982448 +0000 UTC m=+154.542126772 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "cluster-monitoring-operator-tls" (UniqueName: "kubernetes.io/secret/604456a0-4997-43bc-87ef-283a002111fe-cluster-monitoring-operator-tls") pod "cluster-monitoring-operator-674cbfbd9d-zwtdz" (UID: "604456a0-4997-43bc-87ef-283a002111fe") : secret "cluster-monitoring-operator-tls" not found Mar 13 12:37:24.833098 master-0 kubenswrapper[4143]: W0313 12:37:24.832292 4143 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf0803181_4e37_43fa_8ddc_9c76d3f61817.slice/crio-5f581d90a0a82a94fc080eaf7d47e92e9bf51aec1be87f8c182f38bf6bb3aa3c WatchSource:0}: Error finding container 5f581d90a0a82a94fc080eaf7d47e92e9bf51aec1be87f8c182f38bf6bb3aa3c: Status 404 returned error can't find the container with id 5f581d90a0a82a94fc080eaf7d47e92e9bf51aec1be87f8c182f38bf6bb3aa3c Mar 13 12:37:24.887686 master-0 kubenswrapper[4143]: I0313 12:37:24.887613 4143 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-64488f9d78-t8fb4" event={"ID":"f0803181-4e37-43fa-8ddc-9c76d3f61817","Type":"ContainerStarted","Data":"5f581d90a0a82a94fc080eaf7d47e92e9bf51aec1be87f8c182f38bf6bb3aa3c"} Mar 13 12:37:25.264169 master-0 systemd[1]: Stopping Kubernetes Kubelet... Mar 13 12:37:25.284311 master-0 systemd[1]: kubelet.service: Deactivated successfully. Mar 13 12:37:25.284581 master-0 systemd[1]: Stopped Kubernetes Kubelet. Mar 13 12:37:25.286376 master-0 systemd[1]: kubelet.service: Consumed 10.380s CPU time. Mar 13 12:37:25.306948 master-0 systemd[1]: Starting Kubernetes Kubelet... Mar 13 12:37:25.458357 master-0 kubenswrapper[7518]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Mar 13 12:37:25.458357 master-0 kubenswrapper[7518]: Flag --minimum-container-ttl-duration has been deprecated, Use --eviction-hard or --eviction-soft instead. Will be removed in a future version. Mar 13 12:37:25.458357 master-0 kubenswrapper[7518]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Mar 13 12:37:25.458858 master-0 kubenswrapper[7518]: Flag --register-with-taints has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Mar 13 12:37:25.458858 master-0 kubenswrapper[7518]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Mar 13 12:37:25.458858 master-0 kubenswrapper[7518]: Flag --system-reserved has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Mar 13 12:37:25.458941 master-0 kubenswrapper[7518]: I0313 12:37:25.458724 7518 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Mar 13 12:37:25.463800 master-0 kubenswrapper[7518]: W0313 12:37:25.463772 7518 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Mar 13 12:37:25.463800 master-0 kubenswrapper[7518]: W0313 12:37:25.463791 7518 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Mar 13 12:37:25.463800 master-0 kubenswrapper[7518]: W0313 12:37:25.463796 7518 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Mar 13 12:37:25.463888 master-0 kubenswrapper[7518]: W0313 12:37:25.463802 7518 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Mar 13 12:37:25.463888 master-0 kubenswrapper[7518]: W0313 12:37:25.463826 7518 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Mar 13 12:37:25.463888 master-0 kubenswrapper[7518]: W0313 12:37:25.463874 7518 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Mar 13 12:37:25.463993 master-0 kubenswrapper[7518]: W0313 12:37:25.463943 7518 feature_gate.go:330] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings
Mar 13 12:37:25.463993 master-0 kubenswrapper[7518]: W0313 12:37:25.463987 7518 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Mar 13 12:37:25.463993 master-0 kubenswrapper[7518]: W0313 12:37:25.463992 7518 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Mar 13 12:37:25.464074 master-0 kubenswrapper[7518]: W0313 12:37:25.463997 7518 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Mar 13 12:37:25.464074 master-0 kubenswrapper[7518]: W0313 12:37:25.464020 7518 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Mar 13 12:37:25.464074 master-0 kubenswrapper[7518]: W0313 12:37:25.464025 7518 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Mar 13 12:37:25.464074 master-0 kubenswrapper[7518]: W0313 12:37:25.464029 7518 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Mar 13 12:37:25.464074 master-0 kubenswrapper[7518]: W0313 12:37:25.464032 7518 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Mar 13 12:37:25.464074 master-0 kubenswrapper[7518]: W0313 12:37:25.464036 7518 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Mar 13 12:37:25.464074 master-0 kubenswrapper[7518]: W0313 12:37:25.464067 7518 feature_gate.go:330] unrecognized feature gate: NewOLM
Mar 13 12:37:25.464074 master-0 kubenswrapper[7518]: W0313 12:37:25.464072 7518 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Mar 13 12:37:25.464373 master-0 kubenswrapper[7518]: W0313 12:37:25.464076 7518 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Mar 13 12:37:25.464373 master-0 kubenswrapper[7518]: W0313 12:37:25.464096 7518 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Mar 13 12:37:25.464373 master-0 kubenswrapper[7518]: W0313 12:37:25.464100 7518 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Mar 13 12:37:25.464373 master-0 kubenswrapper[7518]: W0313 12:37:25.464104 7518 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Mar 13 12:37:25.464373 master-0 kubenswrapper[7518]: W0313 12:37:25.464108 7518 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Mar 13 12:37:25.464373 master-0 kubenswrapper[7518]: W0313 12:37:25.464112 7518 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Mar 13 12:37:25.464373 master-0 kubenswrapper[7518]: W0313 12:37:25.464117 7518 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Mar 13 12:37:25.464373 master-0 kubenswrapper[7518]: W0313 12:37:25.464123 7518 feature_gate.go:330] unrecognized feature gate: OVNObservability
Mar 13 12:37:25.464373 master-0 kubenswrapper[7518]: W0313 12:37:25.464127 7518 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Mar 13 12:37:25.464373 master-0 kubenswrapper[7518]: W0313 12:37:25.464131 7518 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Mar 13 12:37:25.464373 master-0 kubenswrapper[7518]: W0313 12:37:25.464156 7518 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Mar 13 12:37:25.464373 master-0 kubenswrapper[7518]: W0313 12:37:25.464162 7518 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Mar 13 12:37:25.464373 master-0 kubenswrapper[7518]: W0313 12:37:25.464166 7518 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Mar 13 12:37:25.464373 master-0 kubenswrapper[7518]: W0313 12:37:25.464198 7518 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Mar 13 12:37:25.464373 master-0 kubenswrapper[7518]: W0313 12:37:25.464204 7518 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Mar 13 12:37:25.464373 master-0 kubenswrapper[7518]: W0313 12:37:25.464208 7518 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Mar 13 12:37:25.464373 master-0 kubenswrapper[7518]: W0313 12:37:25.464211 7518 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Mar 13 12:37:25.464373 master-0 kubenswrapper[7518]: W0313 12:37:25.464231 7518 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Mar 13 12:37:25.464373 master-0 kubenswrapper[7518]: W0313 12:37:25.464236 7518 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Mar 13 12:37:25.464816 master-0 kubenswrapper[7518]: W0313 12:37:25.464240 7518 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Mar 13 12:37:25.464816 master-0 kubenswrapper[7518]: W0313 12:37:25.464244 7518 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Mar 13 12:37:25.464816 master-0 kubenswrapper[7518]: W0313 12:37:25.464248 7518 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Mar 13 12:37:25.464816 master-0 kubenswrapper[7518]: W0313 12:37:25.464253 7518 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Mar 13 12:37:25.464816 master-0 kubenswrapper[7518]: W0313 12:37:25.464257 7518 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Mar 13 12:37:25.464816 master-0 kubenswrapper[7518]: W0313 12:37:25.464261 7518 feature_gate.go:330] unrecognized feature gate: Example
Mar 13 12:37:25.464816 master-0 kubenswrapper[7518]: W0313 12:37:25.464265 7518 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Mar 13 12:37:25.464816 master-0 kubenswrapper[7518]: W0313 12:37:25.464269 7518 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Mar 13 12:37:25.464816 master-0 kubenswrapper[7518]: W0313 12:37:25.464273 7518 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Mar 13 12:37:25.464816 master-0 kubenswrapper[7518]: W0313 12:37:25.464350 7518 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Mar 13 12:37:25.464816 master-0 kubenswrapper[7518]: W0313 12:37:25.464356 7518 feature_gate.go:330] unrecognized feature gate: SignatureStores
Mar 13 12:37:25.464816 master-0 kubenswrapper[7518]: W0313 12:37:25.464364 7518 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Mar 13 12:37:25.464816 master-0 kubenswrapper[7518]: W0313 12:37:25.464425 7518 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Mar 13 12:37:25.464816 master-0 kubenswrapper[7518]: W0313 12:37:25.464473 7518 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Mar 13 12:37:25.464816 master-0 kubenswrapper[7518]: W0313 12:37:25.464480 7518 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Mar 13 12:37:25.464816 master-0 kubenswrapper[7518]: W0313 12:37:25.464502 7518 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Mar 13 12:37:25.464816 master-0 kubenswrapper[7518]: W0313 12:37:25.464507 7518 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Mar 13 12:37:25.464816 master-0 kubenswrapper[7518]: W0313 12:37:25.464511 7518 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Mar 13 12:37:25.464816 master-0 kubenswrapper[7518]: W0313 12:37:25.464516 7518 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Mar 13 12:37:25.465281 master-0 kubenswrapper[7518]: W0313 12:37:25.464520 7518 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Mar 13 12:37:25.465281 master-0 kubenswrapper[7518]: W0313 12:37:25.464525 7518 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Mar 13 12:37:25.465281 master-0 kubenswrapper[7518]: W0313 12:37:25.464530 7518 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Mar 13 12:37:25.465281 master-0 kubenswrapper[7518]: W0313 12:37:25.464534 7518 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Mar 13 12:37:25.465281 master-0 kubenswrapper[7518]: W0313 12:37:25.464580 7518 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Mar 13 12:37:25.465281 master-0 kubenswrapper[7518]: W0313 12:37:25.464585 7518 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Mar 13 12:37:25.465281 master-0 kubenswrapper[7518]: W0313 12:37:25.464589 7518 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Mar 13 12:37:25.465281 master-0 kubenswrapper[7518]: W0313 12:37:25.464593 7518 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Mar 13 12:37:25.465281 master-0 kubenswrapper[7518]: W0313 12:37:25.464597 7518 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Mar 13 12:37:25.465281 master-0 kubenswrapper[7518]: W0313 12:37:25.464600 7518 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Mar 13 12:37:25.465281 master-0 kubenswrapper[7518]: W0313 12:37:25.464604 7518 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Mar 13 12:37:25.465281 master-0 kubenswrapper[7518]: W0313 12:37:25.464607 7518 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Mar 13 12:37:25.465281 master-0 kubenswrapper[7518]: W0313 12:37:25.464611 7518 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Mar 13 12:37:25.465281 master-0 kubenswrapper[7518]: W0313 12:37:25.464614 7518 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Mar 13 12:37:25.465281 master-0 kubenswrapper[7518]: W0313 12:37:25.464618 7518 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Mar 13 12:37:25.465281 master-0 kubenswrapper[7518]: W0313 12:37:25.464621 7518 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Mar 13 12:37:25.465281 master-0 kubenswrapper[7518]: W0313 12:37:25.464625 7518 feature_gate.go:330] unrecognized feature gate: PinnedImages
Mar 13 12:37:25.465281 master-0 kubenswrapper[7518]: I0313 12:37:25.464776 7518 flags.go:64] FLAG: --address="0.0.0.0"
Mar 13 12:37:25.465281 master-0 kubenswrapper[7518]: I0313 12:37:25.464788 7518 flags.go:64] FLAG: --allowed-unsafe-sysctls="[]"
Mar 13 12:37:25.465281 master-0 kubenswrapper[7518]: I0313 12:37:25.464796 7518 flags.go:64] FLAG: --anonymous-auth="true"
Mar 13 12:37:25.465281 master-0 kubenswrapper[7518]: I0313 12:37:25.464802 7518 flags.go:64] FLAG: --application-metrics-count-limit="100"
Mar 13 12:37:25.465721 master-0 kubenswrapper[7518]: I0313 12:37:25.464809 7518 flags.go:64] FLAG: --authentication-token-webhook="false"
Mar 13 12:37:25.465721 master-0 kubenswrapper[7518]: I0313 12:37:25.464813 7518 flags.go:64] FLAG: --authentication-token-webhook-cache-ttl="2m0s"
Mar 13 12:37:25.465721 master-0 kubenswrapper[7518]: I0313 12:37:25.464820 7518 flags.go:64] FLAG: --authorization-mode="AlwaysAllow"
Mar 13 12:37:25.465721 master-0 kubenswrapper[7518]: I0313 12:37:25.464825 7518 flags.go:64] FLAG: --authorization-webhook-cache-authorized-ttl="5m0s"
Mar 13 12:37:25.465721 master-0 kubenswrapper[7518]: I0313 12:37:25.464830 7518 flags.go:64] FLAG: --authorization-webhook-cache-unauthorized-ttl="30s"
Mar 13 12:37:25.465721 master-0 kubenswrapper[7518]: I0313 12:37:25.464834 7518 flags.go:64] FLAG: --boot-id-file="/proc/sys/kernel/random/boot_id"
Mar 13 12:37:25.465721 master-0 kubenswrapper[7518]: I0313 12:37:25.464839 7518 flags.go:64] FLAG: --bootstrap-kubeconfig="/etc/kubernetes/kubeconfig"
Mar 13 12:37:25.465721 master-0 kubenswrapper[7518]: I0313 12:37:25.464843 7518 flags.go:64] FLAG: --cert-dir="/var/lib/kubelet/pki"
Mar 13 12:37:25.465721 master-0 kubenswrapper[7518]: I0313 12:37:25.464848 7518 flags.go:64] FLAG: --cgroup-driver="cgroupfs"
Mar 13 12:37:25.465721 master-0 kubenswrapper[7518]: I0313 12:37:25.464852 7518 flags.go:64] FLAG: --cgroup-root=""
Mar 13 12:37:25.465721 master-0 kubenswrapper[7518]: I0313 12:37:25.464856 7518 flags.go:64] FLAG: --cgroups-per-qos="true"
Mar 13 12:37:25.465721 master-0 kubenswrapper[7518]: I0313 12:37:25.464860 7518 flags.go:64] FLAG: --client-ca-file=""
Mar 13 12:37:25.465721 master-0 kubenswrapper[7518]: I0313 12:37:25.464864 7518 flags.go:64] FLAG: --cloud-config=""
Mar 13 12:37:25.465721 master-0 kubenswrapper[7518]: I0313 12:37:25.464868 7518 flags.go:64] FLAG: --cloud-provider=""
Mar 13 12:37:25.465721 master-0 kubenswrapper[7518]: I0313 12:37:25.464872 7518 flags.go:64] FLAG: --cluster-dns="[]"
Mar 13 12:37:25.465721 master-0 kubenswrapper[7518]: I0313 12:37:25.464876 7518 flags.go:64] FLAG: --cluster-domain=""
Mar 13 12:37:25.465721 master-0 kubenswrapper[7518]: I0313 12:37:25.464880 7518 flags.go:64] FLAG: --config="/etc/kubernetes/kubelet.conf"
Mar 13 12:37:25.465721 master-0 kubenswrapper[7518]: I0313 12:37:25.464884 7518 flags.go:64] FLAG: --config-dir=""
Mar 13 12:37:25.465721 master-0 kubenswrapper[7518]: I0313 12:37:25.464888 7518 flags.go:64] FLAG: --container-hints="/etc/cadvisor/container_hints.json"
Mar 13 12:37:25.465721 master-0 kubenswrapper[7518]: I0313 12:37:25.464893 7518 flags.go:64] FLAG: --container-log-max-files="5"
Mar 13 12:37:25.465721 master-0 kubenswrapper[7518]: I0313 12:37:25.464898 7518 flags.go:64] FLAG: --container-log-max-size="10Mi"
Mar 13 12:37:25.465721 master-0 kubenswrapper[7518]: I0313 12:37:25.464902 7518 flags.go:64] FLAG: --container-runtime-endpoint="/var/run/crio/crio.sock"
Mar 13 12:37:25.465721 master-0 kubenswrapper[7518]: I0313 12:37:25.464906 7518 flags.go:64] FLAG: --containerd="/run/containerd/containerd.sock"
Mar 13 12:37:25.465721 master-0 kubenswrapper[7518]: I0313 12:37:25.464912 7518 flags.go:64] FLAG: --containerd-namespace="k8s.io"
Mar 13 12:37:25.466250 master-0 kubenswrapper[7518]: I0313 12:37:25.464916 7518 flags.go:64] FLAG: --contention-profiling="false"
Mar 13 12:37:25.466250 master-0 kubenswrapper[7518]: I0313 12:37:25.464920 7518 flags.go:64] FLAG: --cpu-cfs-quota="true"
Mar 13 12:37:25.466250 master-0 kubenswrapper[7518]: I0313 12:37:25.464924 7518 flags.go:64] FLAG: --cpu-cfs-quota-period="100ms"
Mar 13 12:37:25.466250 master-0 kubenswrapper[7518]: I0313 12:37:25.464929 7518 flags.go:64] FLAG: --cpu-manager-policy="none"
Mar 13 12:37:25.466250 master-0 kubenswrapper[7518]: I0313 12:37:25.464933 7518 flags.go:64] FLAG: --cpu-manager-policy-options=""
Mar 13 12:37:25.466250 master-0 kubenswrapper[7518]: I0313 12:37:25.464938 7518 flags.go:64] FLAG: --cpu-manager-reconcile-period="10s"
Mar 13 12:37:25.466250 master-0 kubenswrapper[7518]: I0313 12:37:25.464942 7518 flags.go:64] FLAG: --enable-controller-attach-detach="true"
Mar 13 12:37:25.466250 master-0 kubenswrapper[7518]: I0313 12:37:25.464949 7518 flags.go:64] FLAG: --enable-debugging-handlers="true"
Mar 13 12:37:25.466250 master-0 kubenswrapper[7518]: I0313 12:37:25.464956 7518 flags.go:64] FLAG: --enable-load-reader="false"
Mar 13 12:37:25.466250 master-0 kubenswrapper[7518]: I0313 12:37:25.464960 7518 flags.go:64] FLAG: --enable-server="true"
Mar 13 12:37:25.466250 master-0 kubenswrapper[7518]: I0313 12:37:25.464965 7518 flags.go:64] FLAG: --enforce-node-allocatable="[pods]"
Mar 13 12:37:25.466250 master-0 kubenswrapper[7518]: I0313 12:37:25.464971 7518 flags.go:64] FLAG: --event-burst="100"
Mar 13 12:37:25.466250 master-0 kubenswrapper[7518]: I0313 12:37:25.464975 7518 flags.go:64] FLAG: --event-qps="50"
Mar 13 12:37:25.466250 master-0 kubenswrapper[7518]: I0313 12:37:25.464979 7518 flags.go:64] FLAG: --event-storage-age-limit="default=0"
Mar 13 12:37:25.466250 master-0 kubenswrapper[7518]: I0313 12:37:25.464984 7518 flags.go:64] FLAG: --event-storage-event-limit="default=0"
Mar 13 12:37:25.466250 master-0 kubenswrapper[7518]: I0313 12:37:25.464988 7518 flags.go:64] FLAG: --eviction-hard=""
Mar 13 12:37:25.466250 master-0 kubenswrapper[7518]: I0313 12:37:25.464994 7518 flags.go:64] FLAG: --eviction-max-pod-grace-period="0"
Mar 13 12:37:25.466250 master-0 kubenswrapper[7518]: I0313 12:37:25.464998 7518 flags.go:64] FLAG: --eviction-minimum-reclaim=""
Mar 13 12:37:25.466250 master-0 kubenswrapper[7518]: I0313 12:37:25.465001 7518 flags.go:64] FLAG: --eviction-pressure-transition-period="5m0s"
Mar 13 12:37:25.466250 master-0 kubenswrapper[7518]: I0313 12:37:25.465006 7518 flags.go:64] FLAG: --eviction-soft=""
Mar 13 12:37:25.466250 master-0 kubenswrapper[7518]: I0313 12:37:25.465010 7518 flags.go:64] FLAG: --eviction-soft-grace-period=""
Mar 13 12:37:25.466250 master-0 kubenswrapper[7518]: I0313 12:37:25.465014 7518 flags.go:64] FLAG: --exit-on-lock-contention="false"
Mar 13 12:37:25.466250 master-0 kubenswrapper[7518]: I0313 12:37:25.465018 7518 flags.go:64] FLAG: --experimental-allocatable-ignore-eviction="false"
Mar 13 12:37:25.466250 master-0 kubenswrapper[7518]: I0313 12:37:25.465022 7518 flags.go:64] FLAG: --experimental-mounter-path=""
Mar 13 12:37:25.466250 master-0 kubenswrapper[7518]: I0313 12:37:25.465026 7518 flags.go:64] FLAG: --fail-cgroupv1="false"
Mar 13 12:37:25.466784 master-0 kubenswrapper[7518]: I0313 12:37:25.465029 7518 flags.go:64] FLAG: --fail-swap-on="true"
Mar 13 12:37:25.466784 master-0 kubenswrapper[7518]: I0313 12:37:25.465033 7518 flags.go:64] FLAG: --feature-gates=""
Mar 13 12:37:25.466784 master-0 kubenswrapper[7518]: I0313 12:37:25.465038 7518 flags.go:64] FLAG: --file-check-frequency="20s"
Mar 13 12:37:25.466784 master-0 kubenswrapper[7518]: I0313 12:37:25.465043 7518 flags.go:64] FLAG: --global-housekeeping-interval="1m0s"
Mar 13 12:37:25.466784 master-0 kubenswrapper[7518]: I0313 12:37:25.465047 7518 flags.go:64] FLAG: --hairpin-mode="promiscuous-bridge"
Mar 13 12:37:25.466784 master-0 kubenswrapper[7518]: I0313 12:37:25.465051 7518 flags.go:64] FLAG: --healthz-bind-address="127.0.0.1"
Mar 13 12:37:25.466784 master-0 kubenswrapper[7518]: I0313 12:37:25.465056 7518 flags.go:64] FLAG: --healthz-port="10248"
Mar 13 12:37:25.466784 master-0 kubenswrapper[7518]: I0313 12:37:25.465060 7518 flags.go:64] FLAG: --help="false"
Mar 13 12:37:25.466784 master-0 kubenswrapper[7518]: I0313 12:37:25.465064 7518 flags.go:64] FLAG: --hostname-override=""
Mar 13 12:37:25.466784 master-0 kubenswrapper[7518]: I0313 12:37:25.465068 7518 flags.go:64] FLAG: --housekeeping-interval="10s"
Mar 13 12:37:25.466784 master-0 kubenswrapper[7518]: I0313 12:37:25.465072 7518 flags.go:64] FLAG: --http-check-frequency="20s"
Mar 13 12:37:25.466784 master-0 kubenswrapper[7518]: I0313 12:37:25.465076 7518 flags.go:64] FLAG: --image-credential-provider-bin-dir=""
Mar 13 12:37:25.466784 master-0 kubenswrapper[7518]: I0313 12:37:25.465080 7518 flags.go:64] FLAG: --image-credential-provider-config=""
Mar 13 12:37:25.466784 master-0 kubenswrapper[7518]: I0313 12:37:25.465084 7518 flags.go:64] FLAG: --image-gc-high-threshold="85"
Mar 13 12:37:25.466784 master-0 kubenswrapper[7518]: I0313 12:37:25.465089 7518 flags.go:64] FLAG: --image-gc-low-threshold="80"
Mar 13 12:37:25.466784 master-0 kubenswrapper[7518]: I0313 12:37:25.465093 7518 flags.go:64] FLAG: --image-service-endpoint=""
Mar 13 12:37:25.466784 master-0 kubenswrapper[7518]: I0313 12:37:25.465097 7518 flags.go:64] FLAG: --kernel-memcg-notification="false"
Mar 13 12:37:25.466784 master-0 kubenswrapper[7518]: I0313 12:37:25.465101 7518 flags.go:64] FLAG: --kube-api-burst="100"
Mar 13 12:37:25.466784 master-0 kubenswrapper[7518]: I0313 12:37:25.465105 7518 flags.go:64] FLAG: --kube-api-content-type="application/vnd.kubernetes.protobuf"
Mar 13 12:37:25.466784 master-0 kubenswrapper[7518]: I0313 12:37:25.465111 7518 flags.go:64] FLAG: --kube-api-qps="50"
Mar 13 12:37:25.466784 master-0 kubenswrapper[7518]: I0313 12:37:25.465115 7518 flags.go:64] FLAG: --kube-reserved=""
Mar 13 12:37:25.466784 master-0 kubenswrapper[7518]: I0313 12:37:25.465119 7518 flags.go:64] FLAG: --kube-reserved-cgroup=""
Mar 13 12:37:25.466784 master-0 kubenswrapper[7518]: I0313 12:37:25.465123 7518 flags.go:64] FLAG: --kubeconfig="/var/lib/kubelet/kubeconfig"
Mar 13 12:37:25.466784 master-0 kubenswrapper[7518]: I0313 12:37:25.465127 7518 flags.go:64] FLAG: --kubelet-cgroups=""
Mar 13 12:37:25.466784 master-0 kubenswrapper[7518]: I0313 12:37:25.465131 7518 flags.go:64] FLAG: --local-storage-capacity-isolation="true"
Mar 13 12:37:25.466784 master-0 kubenswrapper[7518]: I0313 12:37:25.465151 7518 flags.go:64] FLAG: --lock-file=""
Mar 13 12:37:25.467443 master-0 kubenswrapper[7518]: I0313 12:37:25.465155 7518 flags.go:64] FLAG: --log-cadvisor-usage="false"
Mar 13 12:37:25.467443 master-0 kubenswrapper[7518]: I0313 12:37:25.465159 7518 flags.go:64] FLAG: --log-flush-frequency="5s"
Mar 13 12:37:25.467443 master-0 kubenswrapper[7518]: I0313 12:37:25.465164 7518 flags.go:64] FLAG: --log-json-info-buffer-size="0"
Mar 13 12:37:25.467443 master-0 kubenswrapper[7518]: I0313 12:37:25.465174 7518 flags.go:64] FLAG: --log-json-split-stream="false"
Mar 13 12:37:25.467443 master-0 kubenswrapper[7518]: I0313 12:37:25.465178 7518 flags.go:64] FLAG: --log-text-info-buffer-size="0"
Mar 13 12:37:25.467443 master-0 kubenswrapper[7518]: I0313 12:37:25.465182 7518 flags.go:64] FLAG: --log-text-split-stream="false"
Mar 13 12:37:25.467443 master-0 kubenswrapper[7518]: I0313 12:37:25.465187 7518 flags.go:64] FLAG: --logging-format="text"
Mar 13 12:37:25.467443 master-0 kubenswrapper[7518]: I0313 12:37:25.465190 7518 flags.go:64] FLAG: --machine-id-file="/etc/machine-id,/var/lib/dbus/machine-id"
Mar 13 12:37:25.467443 master-0 kubenswrapper[7518]: I0313 12:37:25.465195 7518 flags.go:64] FLAG: --make-iptables-util-chains="true"
Mar 13 12:37:25.467443 master-0 kubenswrapper[7518]: I0313 12:37:25.465199 7518 flags.go:64] FLAG: --manifest-url=""
Mar 13 12:37:25.467443 master-0 kubenswrapper[7518]: I0313 12:37:25.465203 7518 flags.go:64] FLAG: --manifest-url-header=""
Mar 13 12:37:25.467443 master-0 kubenswrapper[7518]: I0313 12:37:25.465210 7518 flags.go:64] FLAG: --max-housekeeping-interval="15s"
Mar 13 12:37:25.467443 master-0 kubenswrapper[7518]: I0313 12:37:25.465214 7518 flags.go:64] FLAG: --max-open-files="1000000"
Mar 13 12:37:25.467443 master-0 kubenswrapper[7518]: I0313 12:37:25.465219 7518 flags.go:64] FLAG: --max-pods="110"
Mar 13 12:37:25.467443 master-0 kubenswrapper[7518]: I0313 12:37:25.465223 7518 flags.go:64] FLAG: --maximum-dead-containers="-1"
Mar 13 12:37:25.467443 master-0 kubenswrapper[7518]: I0313 12:37:25.465227 7518 flags.go:64] FLAG: --maximum-dead-containers-per-container="1"
Mar 13 12:37:25.467443 master-0 kubenswrapper[7518]: I0313 12:37:25.465231 7518 flags.go:64] FLAG: --memory-manager-policy="None"
Mar 13 12:37:25.467443 master-0 kubenswrapper[7518]: I0313 12:37:25.465236 7518 flags.go:64] FLAG: --minimum-container-ttl-duration="6m0s"
Mar 13 12:37:25.467443 master-0 kubenswrapper[7518]: I0313 12:37:25.465240 7518 flags.go:64] FLAG: --minimum-image-ttl-duration="2m0s"
Mar 13 12:37:25.467443 master-0 kubenswrapper[7518]: I0313 12:37:25.465245 7518 flags.go:64] FLAG: --node-ip="192.168.32.10"
Mar 13 12:37:25.467443 master-0 kubenswrapper[7518]: I0313 12:37:25.465249 7518 flags.go:64] FLAG: --node-labels="node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,node.openshift.io/os_id=rhcos"
Mar 13 12:37:25.467443 master-0 kubenswrapper[7518]: I0313 12:37:25.465262 7518 flags.go:64] FLAG: --node-status-max-images="50"
Mar 13 12:37:25.467443 master-0 kubenswrapper[7518]: I0313 12:37:25.465266 7518 flags.go:64] FLAG: --node-status-update-frequency="10s"
Mar 13 12:37:25.467443 master-0 kubenswrapper[7518]: I0313 12:37:25.465270 7518 flags.go:64] FLAG: --oom-score-adj="-999"
Mar 13 12:37:25.467973 master-0 kubenswrapper[7518]: I0313 12:37:25.465275 7518 flags.go:64] FLAG: --pod-cidr=""
Mar 13 12:37:25.467973 master-0 kubenswrapper[7518]: I0313 12:37:25.465278 7518 flags.go:64] FLAG: --pod-infra-container-image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1d605384f31a8085f78a96145c2c3dc51afe22721144196140a2699b7c07ebe3"
Mar 13 12:37:25.467973 master-0 kubenswrapper[7518]: I0313 12:37:25.465286 7518 flags.go:64] FLAG: --pod-manifest-path=""
Mar 13 12:37:25.467973 master-0 kubenswrapper[7518]: I0313 12:37:25.465290 7518 flags.go:64] FLAG: --pod-max-pids="-1"
Mar 13 12:37:25.467973 master-0 kubenswrapper[7518]: I0313 12:37:25.465295 7518 flags.go:64] FLAG: --pods-per-core="0"
Mar 13 12:37:25.467973 master-0 kubenswrapper[7518]: I0313 12:37:25.465300 7518 flags.go:64] FLAG: --port="10250"
Mar 13 12:37:25.467973 master-0 kubenswrapper[7518]: I0313 12:37:25.465306 7518 flags.go:64] FLAG: --protect-kernel-defaults="false"
Mar 13 12:37:25.467973 master-0 kubenswrapper[7518]: I0313 12:37:25.465310 7518 flags.go:64] FLAG: --provider-id=""
Mar 13 12:37:25.467973 master-0 kubenswrapper[7518]: I0313 12:37:25.465314 7518 flags.go:64] FLAG: --qos-reserved=""
Mar 13 12:37:25.467973 master-0 kubenswrapper[7518]: I0313 12:37:25.465318 7518 flags.go:64] FLAG: --read-only-port="10255"
Mar 13 12:37:25.467973 master-0 kubenswrapper[7518]: I0313 12:37:25.465323 7518 flags.go:64] FLAG: --register-node="true"
Mar 13 12:37:25.467973 master-0 kubenswrapper[7518]: I0313 12:37:25.465327 7518 flags.go:64] FLAG: --register-schedulable="true"
Mar 13 12:37:25.467973 master-0 kubenswrapper[7518]: I0313 12:37:25.465331 7518 flags.go:64] FLAG: --register-with-taints="node-role.kubernetes.io/master=:NoSchedule"
Mar 13 12:37:25.467973 master-0 kubenswrapper[7518]: I0313 12:37:25.465339 7518 flags.go:64] FLAG: --registry-burst="10"
Mar 13 12:37:25.467973 master-0 kubenswrapper[7518]: I0313 12:37:25.465343 7518 flags.go:64] FLAG: --registry-qps="5"
Mar 13 12:37:25.467973 master-0 kubenswrapper[7518]: I0313 12:37:25.465347 7518 flags.go:64] FLAG: --reserved-cpus=""
Mar 13 12:37:25.467973 master-0 kubenswrapper[7518]: I0313 12:37:25.465351 7518 flags.go:64] FLAG: --reserved-memory=""
Mar 13 12:37:25.467973 master-0 kubenswrapper[7518]: I0313 12:37:25.465356 7518 flags.go:64] FLAG: --resolv-conf="/etc/resolv.conf"
Mar 13 12:37:25.467973 master-0 kubenswrapper[7518]: I0313 12:37:25.465361 7518 flags.go:64] FLAG: --root-dir="/var/lib/kubelet"
Mar 13 12:37:25.467973 master-0 kubenswrapper[7518]: I0313 12:37:25.465365 7518 flags.go:64] FLAG: --rotate-certificates="false"
Mar 13 12:37:25.467973 master-0 kubenswrapper[7518]: I0313 12:37:25.465369 7518 flags.go:64] FLAG: --rotate-server-certificates="false"
Mar 13 12:37:25.467973 master-0 kubenswrapper[7518]: I0313 12:37:25.465373 7518 flags.go:64] FLAG: --runonce="false"
Mar 13 12:37:25.467973 master-0 kubenswrapper[7518]: I0313 12:37:25.465377 7518 flags.go:64] FLAG: --runtime-cgroups="/system.slice/crio.service"
Mar 13 12:37:25.467973 master-0 kubenswrapper[7518]: I0313 12:37:25.465381 7518 flags.go:64] FLAG: --runtime-request-timeout="2m0s"
Mar 13 12:37:25.467973 master-0 kubenswrapper[7518]: I0313 12:37:25.465386 7518 flags.go:64] FLAG: --seccomp-default="false"
Mar 13 12:37:25.468587 master-0 kubenswrapper[7518]: I0313 12:37:25.465390 7518 flags.go:64] FLAG: --serialize-image-pulls="true"
Mar 13 12:37:25.468587 master-0 kubenswrapper[7518]: I0313 12:37:25.465394 7518 flags.go:64] FLAG: --storage-driver-buffer-duration="1m0s"
Mar 13 12:37:25.468587 master-0 kubenswrapper[7518]: I0313 12:37:25.465399 7518 flags.go:64] FLAG: --storage-driver-db="cadvisor"
Mar 13 12:37:25.468587 master-0 kubenswrapper[7518]: I0313 12:37:25.465403 7518 flags.go:64] FLAG: --storage-driver-host="localhost:8086"
Mar 13 12:37:25.468587 master-0 kubenswrapper[7518]: I0313 12:37:25.465407 7518 flags.go:64] FLAG: --storage-driver-password="root"
Mar 13 12:37:25.468587 master-0 kubenswrapper[7518]: I0313 12:37:25.465411 7518 flags.go:64] FLAG: --storage-driver-secure="false"
Mar 13 12:37:25.468587 master-0 kubenswrapper[7518]: I0313 12:37:25.465415 7518 flags.go:64] FLAG: --storage-driver-table="stats"
Mar 13 12:37:25.468587 master-0 kubenswrapper[7518]: I0313 12:37:25.465420 7518 flags.go:64] FLAG: --storage-driver-user="root"
Mar 13 12:37:25.468587 master-0 kubenswrapper[7518]: I0313 12:37:25.465424 7518 flags.go:64] FLAG: --streaming-connection-idle-timeout="4h0m0s"
Mar 13 12:37:25.468587 master-0 kubenswrapper[7518]: I0313 12:37:25.465428 7518 flags.go:64] FLAG: --sync-frequency="1m0s"
Mar 13 12:37:25.468587 master-0 kubenswrapper[7518]: I0313 12:37:25.465432 7518 flags.go:64] FLAG: --system-cgroups=""
Mar 13 12:37:25.468587 master-0 kubenswrapper[7518]: I0313 12:37:25.465436 7518 flags.go:64] FLAG: --system-reserved="cpu=500m,ephemeral-storage=1Gi,memory=1Gi"
Mar 13 12:37:25.468587 master-0 kubenswrapper[7518]: I0313 12:37:25.465442 7518 flags.go:64] FLAG: --system-reserved-cgroup=""
Mar 13 12:37:25.468587 master-0 kubenswrapper[7518]: I0313 12:37:25.465446 7518 flags.go:64] FLAG: --tls-cert-file=""
Mar 13 12:37:25.468587 master-0 kubenswrapper[7518]: I0313 12:37:25.465450 7518 flags.go:64] FLAG: --tls-cipher-suites="[]"
Mar 13 12:37:25.468587 master-0 kubenswrapper[7518]: I0313 12:37:25.465455 7518 flags.go:64] FLAG: --tls-min-version=""
Mar 13 12:37:25.468587 master-0 kubenswrapper[7518]: I0313 12:37:25.465464 7518 flags.go:64] FLAG: --tls-private-key-file=""
Mar 13 12:37:25.468587 master-0 kubenswrapper[7518]: I0313 12:37:25.465469 7518 flags.go:64] FLAG: --topology-manager-policy="none"
Mar 13 12:37:25.468587 master-0 kubenswrapper[7518]: I0313 12:37:25.465473 7518 flags.go:64] FLAG: --topology-manager-policy-options=""
Mar 13 12:37:25.468587 master-0 kubenswrapper[7518]: I0313 12:37:25.465477 7518 flags.go:64] FLAG: --topology-manager-scope="container"
Mar 13 12:37:25.468587 master-0 kubenswrapper[7518]: I0313 12:37:25.465482 7518 flags.go:64] FLAG: --v="2"
Mar 13 12:37:25.468587 master-0 kubenswrapper[7518]: I0313 12:37:25.465487 7518 flags.go:64] FLAG: --version="false"
Mar 13 12:37:25.468587 master-0 kubenswrapper[7518]: I0313 12:37:25.465492 7518 flags.go:64] FLAG: --vmodule=""
Mar 13 12:37:25.468587 master-0 kubenswrapper[7518]: I0313 12:37:25.465497 7518 flags.go:64] FLAG: --volume-plugin-dir="/etc/kubernetes/kubelet-plugins/volume/exec"
Mar 13 12:37:25.468587 master-0 kubenswrapper[7518]: I0313 12:37:25.465501 7518 flags.go:64] FLAG: --volume-stats-agg-period="1m0s"
Mar 13 12:37:25.469339 master-0 kubenswrapper[7518]: W0313 12:37:25.465621 7518 feature_gate.go:330] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings
Mar 13 12:37:25.469339 master-0 kubenswrapper[7518]: W0313 12:37:25.465627 7518 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Mar 13 12:37:25.469339 master-0 kubenswrapper[7518]: W0313 12:37:25.465632 7518 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Mar 13 12:37:25.469339 master-0 kubenswrapper[7518]: W0313 12:37:25.465635 7518 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Mar 13 12:37:25.469339 master-0 kubenswrapper[7518]: W0313 12:37:25.465639 7518 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Mar 13 12:37:25.469339 master-0 kubenswrapper[7518]: W0313 12:37:25.465643 7518 feature_gate.go:330] unrecognized feature gate: PinnedImages
Mar 13 12:37:25.469339 master-0 kubenswrapper[7518]: W0313 12:37:25.465646 7518 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Mar 13 12:37:25.469339 master-0 kubenswrapper[7518]: W0313 12:37:25.465650 7518 feature_gate.go:330] unrecognized feature gate: SignatureStores
Mar 13 12:37:25.469339 master-0 kubenswrapper[7518]: W0313 12:37:25.465654 7518 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Mar 13 12:37:25.469339 master-0 kubenswrapper[7518]: W0313 12:37:25.465659 7518 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Mar 13 12:37:25.469339 master-0 kubenswrapper[7518]: W0313 12:37:25.465663 7518 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Mar 13 12:37:25.469339 master-0 kubenswrapper[7518]: W0313 12:37:25.465667 7518 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Mar 13 12:37:25.469339 master-0 kubenswrapper[7518]: W0313 12:37:25.465697 7518 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Mar 13 12:37:25.469339 master-0 kubenswrapper[7518]: W0313 12:37:25.465703 7518 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Mar 13 12:37:25.469339 master-0 kubenswrapper[7518]: W0313 12:37:25.465706 7518 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Mar 13 12:37:25.469339 master-0 kubenswrapper[7518]: W0313 12:37:25.465710 7518 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Mar 13 12:37:25.469339 master-0 kubenswrapper[7518]: W0313 12:37:25.465715 7518 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Mar 13 12:37:25.469339 master-0 kubenswrapper[7518]: W0313 12:37:25.465720 7518 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Mar 13 12:37:25.469757 master-0 kubenswrapper[7518]: W0313 12:37:25.465725 7518 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Mar 13 12:37:25.469757 master-0 kubenswrapper[7518]: W0313 12:37:25.465729 7518 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Mar 13 12:37:25.469757 master-0 kubenswrapper[7518]: W0313 12:37:25.465734 7518 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Mar 13 12:37:25.469757 master-0 kubenswrapper[7518]: W0313 12:37:25.465738 7518 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Mar 13 12:37:25.469757 master-0 kubenswrapper[7518]: W0313 12:37:25.465742 7518 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Mar 13 12:37:25.469757 master-0 kubenswrapper[7518]: W0313 12:37:25.465747 7518 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Mar 13 12:37:25.469757 master-0 kubenswrapper[7518]: W0313 12:37:25.465751 7518 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Mar 13 12:37:25.469757 master-0 kubenswrapper[7518]: W0313 12:37:25.465755 7518 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Mar 13 12:37:25.469757 master-0 kubenswrapper[7518]: W0313 12:37:25.465759 7518 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Mar 13 12:37:25.469757 master-0 kubenswrapper[7518]: W0313 12:37:25.465772 7518 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Mar 13 12:37:25.469757 master-0 kubenswrapper[7518]: W0313 12:37:25.465776 7518 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Mar 13 12:37:25.469757 master-0 kubenswrapper[7518]: W0313 12:37:25.465780 7518 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Mar 13 12:37:25.469757 master-0 kubenswrapper[7518]: W0313 12:37:25.465784 7518 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Mar 13 12:37:25.469757 master-0 kubenswrapper[7518]: W0313 12:37:25.465788 7518 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Mar 13 12:37:25.469757 master-0 kubenswrapper[7518]: W0313 12:37:25.465791 7518 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Mar 13 12:37:25.469757 master-0 kubenswrapper[7518]: W0313 12:37:25.465795 7518 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Mar 13 12:37:25.469757 master-0 kubenswrapper[7518]: W0313 12:37:25.465799 7518 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Mar 13 12:37:25.469757 master-0 kubenswrapper[7518]: W0313 12:37:25.465803 7518 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Mar 13 12:37:25.469757 master-0 kubenswrapper[7518]: W0313 12:37:25.465807 7518 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Mar 13 12:37:25.469757 master-0 kubenswrapper[7518]: W0313 12:37:25.465810 7518 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Mar 13 12:37:25.469757 master-0 kubenswrapper[7518]: W0313 12:37:25.465814 7518 feature_gate.go:330] unrecognized feature gate: NewOLM
Mar 13 12:37:25.470227 master-0 kubenswrapper[7518]: W0313 12:37:25.465818 7518 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Mar 13 12:37:25.470227 master-0 kubenswrapper[7518]: W0313 12:37:25.465826 7518 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Mar 13 12:37:25.470227 master-0 kubenswrapper[7518]: W0313 12:37:25.465829 7518 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Mar 13 12:37:25.470227 master-0 kubenswrapper[7518]: W0313 12:37:25.465833 7518 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Mar 13 12:37:25.470227 master-0 kubenswrapper[7518]: W0313 12:37:25.465836 7518 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Mar 13 12:37:25.470227 master-0 kubenswrapper[7518]: W0313 12:37:25.465839 7518 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Mar 13 12:37:25.470227 master-0 kubenswrapper[7518]: W0313 12:37:25.465843 7518 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Mar 13 12:37:25.470227 master-0 kubenswrapper[7518]: W0313 12:37:25.465847 7518 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Mar 13 12:37:25.470227 master-0 kubenswrapper[7518]: W0313 12:37:25.465850 7518 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Mar 13 12:37:25.470227 master-0 kubenswrapper[7518]: W0313 12:37:25.465853 7518 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Mar 13 12:37:25.470227 master-0 kubenswrapper[7518]: W0313 12:37:25.465857 7518 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Mar 13 12:37:25.470227 master-0 kubenswrapper[7518]: W0313 12:37:25.465860 7518 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Mar 13 12:37:25.470227 master-0 kubenswrapper[7518]: W0313 12:37:25.465864 7518 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Mar 13 12:37:25.470227 master-0 kubenswrapper[7518]: W0313 12:37:25.465867 7518 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Mar 13 12:37:25.470227 master-0 kubenswrapper[7518]: W0313 12:37:25.465871 7518 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Mar 13 12:37:25.470227 master-0 kubenswrapper[7518]: W0313 12:37:25.465875 7518 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Mar 13 12:37:25.470227 master-0 kubenswrapper[7518]: W0313 12:37:25.465878 7518 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Mar 13 12:37:25.470227 master-0 kubenswrapper[7518]: W0313 12:37:25.465882 7518 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Mar 13 12:37:25.470227 master-0 kubenswrapper[7518]: W0313 12:37:25.465885 7518 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Mar 13 12:37:25.470227 master-0 kubenswrapper[7518]: W0313 12:37:25.465889 7518 feature_gate.go:330] unrecognized feature gate: Example
Mar 13 12:37:25.470227 master-0 kubenswrapper[7518]: W0313 12:37:25.465892 7518 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Mar 13 12:37:25.470914 master-0 kubenswrapper[7518]: W0313 12:37:25.465896 7518 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Mar 13 12:37:25.470914 master-0 kubenswrapper[7518]: W0313 12:37:25.465899 7518 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Mar 13 12:37:25.470914 master-0 kubenswrapper[7518]: W0313 12:37:25.465903 7518 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Mar 13 12:37:25.470914 master-0 kubenswrapper[7518]: W0313 12:37:25.465914 7518 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Mar 13 12:37:25.470914 master-0 kubenswrapper[7518]: W0313 12:37:25.465919 7518 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Mar 13 12:37:25.470914 master-0 kubenswrapper[7518]: W0313 12:37:25.465923 7518 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Mar 13 12:37:25.470914 master-0 kubenswrapper[7518]: W0313 12:37:25.465927 7518 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Mar 13 12:37:25.470914 master-0 kubenswrapper[7518]: W0313 12:37:25.465932 7518 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Mar 13 12:37:25.470914 master-0 kubenswrapper[7518]: W0313 12:37:25.465935 7518 feature_gate.go:330] unrecognized feature gate: OVNObservability
Mar 13 12:37:25.470914 master-0 kubenswrapper[7518]: W0313 12:37:25.465940 7518 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Mar 13 12:37:25.470914 master-0 kubenswrapper[7518]: W0313 12:37:25.465945 7518 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Mar 13 12:37:25.470914 master-0 kubenswrapper[7518]: W0313 12:37:25.465949 7518 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Mar 13 12:37:25.470914 master-0 kubenswrapper[7518]: I0313 12:37:25.465965 7518 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false StreamingCollectionEncodingToJSON:true StreamingCollectionEncodingToProtobuf:true TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]}
Mar 13 12:37:25.481503 master-0 kubenswrapper[7518]: I0313 12:37:25.481446 7518 server.go:491] "Kubelet version" kubeletVersion="v1.31.14"
Mar 13 12:37:25.481503 master-0 kubenswrapper[7518]: I0313 12:37:25.481493 7518 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Mar 13 12:37:25.481729 master-0 kubenswrapper[7518]: W0313 12:37:25.481613 7518 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Mar 13 12:37:25.481729 master-0 kubenswrapper[7518]: W0313 12:37:25.481626 7518 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Mar 13 12:37:25.481729 master-0 kubenswrapper[7518]: W0313 12:37:25.481633 7518 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Mar 13 12:37:25.481729 master-0 kubenswrapper[7518]: W0313 12:37:25.481640 7518 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Mar 13 12:37:25.481729 master-0 kubenswrapper[7518]: W0313 12:37:25.481647 7518 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Mar 13 12:37:25.481729 master-0 kubenswrapper[7518]: W0313 12:37:25.481653 7518 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Mar 13 12:37:25.481729 master-0 kubenswrapper[7518]: W0313 12:37:25.481659 7518 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Mar 13 12:37:25.481729 master-0 kubenswrapper[7518]: W0313 12:37:25.481666 7518 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Mar 13 12:37:25.481729 master-0 kubenswrapper[7518]: W0313 12:37:25.481672 7518 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Mar 13 12:37:25.481729 master-0 kubenswrapper[7518]: W0313 12:37:25.481679 7518 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Mar 13 12:37:25.481729 master-0 kubenswrapper[7518]: W0313 12:37:25.481686 7518 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Mar 13 12:37:25.481729 master-0 kubenswrapper[7518]: W0313 12:37:25.481694 7518 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Mar 13 12:37:25.481729 master-0 kubenswrapper[7518]: W0313 12:37:25.481700 7518 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Mar 13 12:37:25.481729 master-0 kubenswrapper[7518]: W0313 12:37:25.481706 7518 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Mar 13 12:37:25.481729 master-0 kubenswrapper[7518]: W0313 12:37:25.481713 7518 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Mar 13 12:37:25.481729 master-0 kubenswrapper[7518]: W0313 12:37:25.481719 7518 feature_gate.go:330] unrecognized feature gate: OVNObservability
Mar 13 12:37:25.481729 master-0 kubenswrapper[7518]: W0313 12:37:25.481725 7518 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Mar 13 12:37:25.481729 master-0 kubenswrapper[7518]: W0313 12:37:25.481731 7518 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Mar 13 12:37:25.481729 master-0 kubenswrapper[7518]: W0313 12:37:25.481737 7518 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Mar 13 12:37:25.481729 master-0 kubenswrapper[7518]: W0313 12:37:25.481743 7518 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Mar 13 12:37:25.482379 master-0 kubenswrapper[7518]: W0313 12:37:25.481749 7518 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Mar 13 12:37:25.482379 master-0 kubenswrapper[7518]: W0313 12:37:25.481756 7518 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Mar 13 12:37:25.482379 master-0 kubenswrapper[7518]: W0313 12:37:25.481762 7518 feature_gate.go:330] unrecognized feature gate: SignatureStores
Mar 13 12:37:25.482379 master-0 kubenswrapper[7518]: W0313 12:37:25.481767 7518 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Mar 13 12:37:25.482379 master-0 kubenswrapper[7518]: W0313 12:37:25.481773 7518 feature_gate.go:330] unrecognized feature gate: PinnedImages
Mar 13 12:37:25.482379 master-0 kubenswrapper[7518]: W0313 12:37:25.481779 7518 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Mar 13 12:37:25.482379 master-0 kubenswrapper[7518]: W0313 12:37:25.481786 7518 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Mar 13 12:37:25.482379 master-0 kubenswrapper[7518]: W0313 12:37:25.481792 7518 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Mar 13 12:37:25.482379 master-0 kubenswrapper[7518]: W0313 12:37:25.481798 7518 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Mar 13 12:37:25.482379 master-0 kubenswrapper[7518]: W0313 12:37:25.481807 7518 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Mar 13 12:37:25.482379 master-0 kubenswrapper[7518]: W0313 12:37:25.481814 7518 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Mar 13 12:37:25.482379 master-0 kubenswrapper[7518]: W0313 12:37:25.481822 7518 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Mar 13 12:37:25.482379 master-0 kubenswrapper[7518]: W0313 12:37:25.481829 7518 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Mar 13 12:37:25.482379 master-0 kubenswrapper[7518]: W0313 12:37:25.481835 7518 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Mar 13 12:37:25.482379 master-0 kubenswrapper[7518]: W0313 12:37:25.481841 7518 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Mar 13 12:37:25.482379 master-0 kubenswrapper[7518]: W0313 12:37:25.481847 7518 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Mar 13 12:37:25.482379 master-0 kubenswrapper[7518]: W0313 12:37:25.481853 7518 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Mar 13 12:37:25.482379 master-0 kubenswrapper[7518]: W0313 12:37:25.481860 7518 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Mar 13 12:37:25.482379 master-0 kubenswrapper[7518]: W0313 12:37:25.481867 7518 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Mar 13 12:37:25.482948 master-0 kubenswrapper[7518]: W0313 12:37:25.481873 7518 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Mar 13 12:37:25.482948 master-0 kubenswrapper[7518]: W0313 12:37:25.481879 7518 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Mar 13 12:37:25.482948 master-0 kubenswrapper[7518]: W0313 12:37:25.481885 7518 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Mar 13 12:37:25.482948 master-0 kubenswrapper[7518]: W0313 12:37:25.481890 7518 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Mar 13 12:37:25.482948 master-0 kubenswrapper[7518]: W0313 12:37:25.481895 7518 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Mar 13 12:37:25.482948 master-0 kubenswrapper[7518]: W0313 12:37:25.481901 7518 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Mar 13 12:37:25.482948 master-0 kubenswrapper[7518]: W0313 12:37:25.481906 7518 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Mar 13 12:37:25.482948 master-0 kubenswrapper[7518]: W0313 12:37:25.481912 7518 feature_gate.go:330] unrecognized feature gate: Example
Mar 13 12:37:25.482948 master-0 kubenswrapper[7518]: W0313 12:37:25.481917 7518 feature_gate.go:330] unrecognized feature gate: NewOLM
Mar 13 12:37:25.482948 master-0 kubenswrapper[7518]: W0313 12:37:25.481925 7518 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Mar 13 12:37:25.482948 master-0 kubenswrapper[7518]: W0313 12:37:25.481933 7518 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Mar 13 12:37:25.482948 master-0 kubenswrapper[7518]: W0313 12:37:25.481940 7518 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Mar 13 12:37:25.482948 master-0 kubenswrapper[7518]: W0313 12:37:25.481946 7518 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Mar 13 12:37:25.482948 master-0 kubenswrapper[7518]: W0313 12:37:25.481954 7518 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Mar 13 12:37:25.482948 master-0 kubenswrapper[7518]: W0313 12:37:25.481961 7518 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Mar 13 12:37:25.482948 master-0 kubenswrapper[7518]: W0313 12:37:25.481967 7518 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Mar 13 12:37:25.482948 master-0 kubenswrapper[7518]: W0313 12:37:25.481972 7518 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Mar 13 12:37:25.482948 master-0 kubenswrapper[7518]: W0313 12:37:25.481978 7518 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Mar 13 12:37:25.482948 master-0 kubenswrapper[7518]: W0313 12:37:25.481984 7518 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Mar 13 12:37:25.482948 master-0 kubenswrapper[7518]: W0313 12:37:25.481990 7518 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Mar 13 12:37:25.483671 master-0 kubenswrapper[7518]: W0313 12:37:25.481996 7518 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Mar 13 12:37:25.483671 master-0 kubenswrapper[7518]: W0313 12:37:25.482002 7518 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Mar 13 12:37:25.483671 master-0 kubenswrapper[7518]: W0313 12:37:25.482008 7518 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Mar 13 12:37:25.483671 master-0 kubenswrapper[7518]: W0313 12:37:25.482015 7518 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Mar 13 12:37:25.483671 master-0 kubenswrapper[7518]: W0313 12:37:25.482022 7518 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Mar 13 12:37:25.483671 master-0 kubenswrapper[7518]: W0313 12:37:25.482029 7518 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Mar 13 12:37:25.483671 master-0 kubenswrapper[7518]: W0313 12:37:25.482035 7518 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Mar 13 12:37:25.483671 master-0 kubenswrapper[7518]: W0313 12:37:25.482041 7518 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Mar 13 12:37:25.483671 master-0 kubenswrapper[7518]: W0313 12:37:25.482048 7518 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Mar 13 12:37:25.483671 master-0 kubenswrapper[7518]: W0313 12:37:25.482055 7518 feature_gate.go:330] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings
Mar 13 12:37:25.483671 master-0 kubenswrapper[7518]: W0313 12:37:25.482061 7518 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Mar 13 12:37:25.483671 master-0 kubenswrapper[7518]: W0313 12:37:25.482068 7518 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Mar 13 12:37:25.483671 master-0 kubenswrapper[7518]: W0313 12:37:25.482075 7518 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Mar 13 12:37:25.483671 master-0 kubenswrapper[7518]: I0313 12:37:25.482315 7518 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false StreamingCollectionEncodingToJSON:true StreamingCollectionEncodingToProtobuf:true TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]}
Mar 13 12:37:25.483671 master-0 kubenswrapper[7518]: W0313 12:37:25.482497 7518 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Mar 13 12:37:25.484383 master-0 kubenswrapper[7518]: W0313 12:37:25.482508 7518 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Mar 13 12:37:25.484383 master-0 kubenswrapper[7518]: W0313 12:37:25.482515 7518 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Mar 13 12:37:25.484383 master-0 kubenswrapper[7518]: W0313 12:37:25.482522 7518 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Mar 13 12:37:25.484383 master-0 kubenswrapper[7518]: W0313 12:37:25.482529 7518 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Mar 13 12:37:25.484383 master-0 kubenswrapper[7518]: W0313 12:37:25.482534 7518 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Mar 13 12:37:25.484383 master-0 kubenswrapper[7518]: W0313 12:37:25.482540 7518 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Mar 13 12:37:25.484383 master-0 kubenswrapper[7518]: W0313 12:37:25.482545 7518 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Mar 13 12:37:25.484383 master-0 kubenswrapper[7518]: W0313 12:37:25.482551 7518 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Mar 13 12:37:25.484383 master-0 kubenswrapper[7518]: W0313 12:37:25.482557 7518 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Mar 13 12:37:25.484383 master-0 kubenswrapper[7518]: W0313 12:37:25.482563 7518 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Mar 13 12:37:25.484383 master-0 kubenswrapper[7518]: W0313 12:37:25.482575 7518 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Mar 13 12:37:25.484383 master-0 kubenswrapper[7518]: W0313 12:37:25.482586 7518 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Mar 13 12:37:25.484383 master-0 kubenswrapper[7518]: W0313 12:37:25.482594 7518 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Mar 13 12:37:25.484383 master-0 kubenswrapper[7518]: W0313 12:37:25.482603 7518 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Mar 13 12:37:25.484383 master-0 kubenswrapper[7518]: W0313 12:37:25.482612 7518 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Mar 13 12:37:25.484383 master-0 kubenswrapper[7518]: W0313 12:37:25.482618 7518 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Mar 13 12:37:25.484383 master-0 kubenswrapper[7518]: W0313 12:37:25.482625 7518 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Mar 13 12:37:25.484383 master-0 kubenswrapper[7518]: W0313 12:37:25.482631 7518 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Mar 13 12:37:25.484383 master-0 kubenswrapper[7518]: W0313 12:37:25.482638 7518 feature_gate.go:330] unrecognized feature gate: OVNObservability
Mar 13 12:37:25.484383 master-0 kubenswrapper[7518]: W0313 12:37:25.482644 7518 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Mar 13 12:37:25.484858 master-0 kubenswrapper[7518]: W0313 12:37:25.482651 7518 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Mar 13 12:37:25.484858 master-0 kubenswrapper[7518]: W0313 12:37:25.482658 7518 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Mar 13 12:37:25.484858 master-0 kubenswrapper[7518]: W0313 12:37:25.482665 7518 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Mar 13 12:37:25.484858 master-0 kubenswrapper[7518]: W0313 12:37:25.482672 7518 feature_gate.go:330] unrecognized feature gate: NewOLM
Mar 13 12:37:25.484858 master-0 kubenswrapper[7518]: W0313 12:37:25.482679 7518 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Mar 13 12:37:25.484858 master-0 kubenswrapper[7518]: W0313 12:37:25.482686 7518 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Mar 13 12:37:25.484858 master-0 kubenswrapper[7518]: W0313 12:37:25.482691 7518 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Mar 13 12:37:25.484858 master-0 kubenswrapper[7518]: W0313 12:37:25.482698 7518 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Mar 13 12:37:25.484858 master-0 kubenswrapper[7518]: W0313 12:37:25.482706 7518 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Mar 13 12:37:25.484858 master-0 kubenswrapper[7518]: W0313 12:37:25.482715 7518 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Mar 13 12:37:25.484858 master-0 kubenswrapper[7518]: W0313 12:37:25.482722 7518 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Mar 13 12:37:25.484858 master-0 kubenswrapper[7518]: W0313 12:37:25.482729 7518 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Mar 13 12:37:25.484858 master-0 kubenswrapper[7518]: W0313 12:37:25.482736 7518 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Mar 13 12:37:25.484858 master-0 kubenswrapper[7518]: W0313 12:37:25.482743 7518 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Mar 13 12:37:25.484858 master-0 kubenswrapper[7518]: W0313 12:37:25.482751 7518 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Mar 13 12:37:25.484858 master-0 kubenswrapper[7518]: W0313 12:37:25.482758 7518 feature_gate.go:330] unrecognized feature gate: SignatureStores
Mar 13 12:37:25.484858 master-0 kubenswrapper[7518]: W0313 12:37:25.482764 7518 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Mar 13 12:37:25.484858 master-0 kubenswrapper[7518]: W0313 12:37:25.482773 7518 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Mar 13 12:37:25.484858 master-0 kubenswrapper[7518]: W0313 12:37:25.482779 7518 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Mar 13 12:37:25.484858 master-0 kubenswrapper[7518]: W0313 12:37:25.482786 7518 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Mar 13 12:37:25.485605 master-0 kubenswrapper[7518]: W0313 12:37:25.482984 7518 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Mar 13 12:37:25.485605 master-0 kubenswrapper[7518]: W0313 12:37:25.482995 7518 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Mar 13 12:37:25.485605 master-0 kubenswrapper[7518]: W0313 12:37:25.483002 7518 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Mar 13 12:37:25.485605 master-0 kubenswrapper[7518]: W0313 12:37:25.483009 7518 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Mar 13 12:37:25.485605 master-0 kubenswrapper[7518]: W0313 12:37:25.483017 7518 feature_gate.go:330] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings
Mar 13 12:37:25.485605 master-0 kubenswrapper[7518]: W0313 12:37:25.483028 7518 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Mar 13 12:37:25.485605 master-0 kubenswrapper[7518]: W0313 12:37:25.483037 7518 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Mar 13 12:37:25.485605 master-0 kubenswrapper[7518]: W0313 12:37:25.483044 7518 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Mar 13 12:37:25.485605 master-0 kubenswrapper[7518]: W0313 12:37:25.483051 7518 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Mar 13 12:37:25.485605 master-0 kubenswrapper[7518]: W0313 12:37:25.483058 7518 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Mar 13 12:37:25.485605 master-0 kubenswrapper[7518]: W0313 12:37:25.483065 7518 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Mar 13 12:37:25.485605 master-0 kubenswrapper[7518]: W0313 12:37:25.483071 7518 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Mar 13 12:37:25.485605 master-0 kubenswrapper[7518]: W0313 12:37:25.483077 7518 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Mar 13 12:37:25.485605 master-0 kubenswrapper[7518]: W0313 12:37:25.483083 7518 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Mar 13 12:37:25.485605 master-0 kubenswrapper[7518]: W0313 12:37:25.483088 7518 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Mar 13 12:37:25.485605 master-0 kubenswrapper[7518]: W0313 12:37:25.483095 7518 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Mar 13 12:37:25.485605 master-0 kubenswrapper[7518]: W0313 12:37:25.483100 7518 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Mar 13 12:37:25.485605 master-0 kubenswrapper[7518]: W0313 12:37:25.483277 7518 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Mar 13 12:37:25.486049 master-0 kubenswrapper[7518]: W0313 12:37:25.483319 7518 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Mar 13 12:37:25.486049 master-0 kubenswrapper[7518]: W0313 12:37:25.483329 7518 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Mar 13 12:37:25.486049 master-0 kubenswrapper[7518]: W0313 12:37:25.483336 7518 feature_gate.go:330] unrecognized feature gate: Example
Mar 13 12:37:25.486049 master-0 kubenswrapper[7518]: W0313 12:37:25.483343 7518 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Mar 13 12:37:25.486049 master-0 kubenswrapper[7518]: W0313 12:37:25.483381 7518 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Mar 13 12:37:25.486049 master-0 kubenswrapper[7518]: W0313 12:37:25.483388 7518 feature_gate.go:330] unrecognized feature gate: PinnedImages
Mar 13 12:37:25.486049 master-0 kubenswrapper[7518]: W0313 12:37:25.483395 7518 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Mar 13 12:37:25.486049 master-0 kubenswrapper[7518]: W0313 12:37:25.483402 7518 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Mar 13 12:37:25.486049 master-0 kubenswrapper[7518]: W0313 12:37:25.483408 7518 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Mar 13 12:37:25.486049 master-0 kubenswrapper[7518]: W0313 12:37:25.483415 7518 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Mar 13 12:37:25.486049 master-0 kubenswrapper[7518]: W0313 12:37:25.483422 7518 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Mar 13 12:37:25.486049 master-0 kubenswrapper[7518]: W0313 12:37:25.483429 7518 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Mar 13 12:37:25.486049 master-0 kubenswrapper[7518]: W0313 12:37:25.483435 7518 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Mar 13 12:37:25.486049 master-0 kubenswrapper[7518]: I0313 12:37:25.483446 7518 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false StreamingCollectionEncodingToJSON:true StreamingCollectionEncodingToProtobuf:true TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]}
Mar 13 12:37:25.486049 master-0 kubenswrapper[7518]: I0313 12:37:25.483734 7518 server.go:940] "Client rotation is on, will bootstrap in background"
Mar 13 12:37:25.486457 master-0 kubenswrapper[7518]: I0313 12:37:25.485637 7518 bootstrap.go:85] "Current kubeconfig file contents are still valid, no bootstrap necessary"
Mar 13 12:37:25.486457 master-0 kubenswrapper[7518]: I0313 12:37:25.485730 7518 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
Mar 13 12:37:25.486457 master-0 kubenswrapper[7518]: I0313 12:37:25.486068 7518 server.go:997] "Starting client certificate rotation" Mar 13 12:37:25.486457 master-0 kubenswrapper[7518]: I0313 12:37:25.486092 7518 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate rotation is enabled Mar 13 12:37:25.486457 master-0 kubenswrapper[7518]: I0313 12:37:25.486321 7518 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2026-03-14 12:26:40 +0000 UTC, rotation deadline is 2026-03-14 08:35:05.204393907 +0000 UTC Mar 13 12:37:25.486457 master-0 kubenswrapper[7518]: I0313 12:37:25.486385 7518 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Waiting 19h57m39.718010694s for next certificate rotation Mar 13 12:37:25.486951 master-0 kubenswrapper[7518]: I0313 12:37:25.486917 7518 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Mar 13 12:37:25.490021 master-0 kubenswrapper[7518]: I0313 12:37:25.489976 7518 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Mar 13 12:37:25.495328 master-0 kubenswrapper[7518]: I0313 12:37:25.495295 7518 log.go:25] "Validated CRI v1 runtime API" Mar 13 12:37:25.499455 master-0 kubenswrapper[7518]: I0313 12:37:25.499408 7518 log.go:25] "Validated CRI v1 image API" Mar 13 12:37:25.500508 master-0 kubenswrapper[7518]: I0313 12:37:25.500479 7518 server.go:1437] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Mar 13 12:37:25.506192 master-0 kubenswrapper[7518]: I0313 12:37:25.506113 7518 fs.go:135] Filesystem UUIDs: map[7B77-95E7:/dev/vda2 910678ff-f77e-4a7d-8d53-86f2ac47a823:/dev/vda4 bee91dc0-9d5b-4e60-b908-76b0c18f6366:/dev/vda3] Mar 13 12:37:25.507716 master-0 kubenswrapper[7518]: I0313 12:37:25.506182 7518 fs.go:136] Filesystem partitions: map[/dev/shm:{mountpoint:/dev/shm major:0 
minor:22 fsType:tmpfs blockSize:0} /dev/vda3:{mountpoint:/boot major:252 minor:3 fsType:ext4 blockSize:0} /dev/vda4:{mountpoint:/var major:252 minor:4 fsType:xfs blockSize:0} /run:{mountpoint:/run major:0 minor:24 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/1f8e6ca57afc2c7f1b75640b9d76490f87697f57e3507366ea9d48c029b1f4d6/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/1f8e6ca57afc2c7f1b75640b9d76490f87697f57e3507366ea9d48c029b1f4d6/userdata/shm major:0 minor:242 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/2bd86a5a786b8cd9854f1e649c41cebb309a3c1ac190ae67ed40c19b3eec0d04/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/2bd86a5a786b8cd9854f1e649c41cebb309a3c1ac190ae67ed40c19b3eec0d04/userdata/shm major:0 minor:119 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/4100d060137e4638140caf3273251902712a7f8176df0de3da8bd3abf9194231/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/4100d060137e4638140caf3273251902712a7f8176df0de3da8bd3abf9194231/userdata/shm major:0 minor:252 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/4641cab9868e3327d01299b932a32e6567401ef53f9b8cc74562f50d7b0926ca/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/4641cab9868e3327d01299b932a32e6567401ef53f9b8cc74562f50d7b0926ca/userdata/shm major:0 minor:271 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/4a6cc550d523ce1bfed748c19240f1c4e3a9202060aead91cc14af91ea48f5ce/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/4a6cc550d523ce1bfed748c19240f1c4e3a9202060aead91cc14af91ea48f5ce/userdata/shm major:0 minor:50 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/534692e5957aae2c3d6d9152a87bd37d178574b231da74f33889bcb3869aae82/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/534692e5957aae2c3d6d9152a87bd37d178574b231da74f33889bcb3869aae82/userdata/shm major:0 
minor:105 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/5e03538f7a196b4948a3a7782b34246a467d9e14e18b21bed24c1061ee7390ce/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/5e03538f7a196b4948a3a7782b34246a467d9e14e18b21bed24c1061ee7390ce/userdata/shm major:0 minor:240 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/5f581d90a0a82a94fc080eaf7d47e92e9bf51aec1be87f8c182f38bf6bb3aa3c/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/5f581d90a0a82a94fc080eaf7d47e92e9bf51aec1be87f8c182f38bf6bb3aa3c/userdata/shm major:0 minor:303 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/7066c2bb7f28cfd07ac1eb011cdc9849969ed5f37788da395910309c70481aa9/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/7066c2bb7f28cfd07ac1eb011cdc9849969ed5f37788da395910309c70481aa9/userdata/shm major:0 minor:128 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/716ce6662fa89fc5efc984950f9c70517944c523cdede22247748de4ca23948d/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/716ce6662fa89fc5efc984950f9c70517944c523cdede22247748de4ca23948d/userdata/shm major:0 minor:41 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/754a980682251c2faf310af15f0042fda13df9ae03c81a3a698c0d687faffa20/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/754a980682251c2faf310af15f0042fda13df9ae03c81a3a698c0d687faffa20/userdata/shm major:0 minor:257 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/842bc57e6bbe56242bef7b88438357fe374fd511b54a67e77b67b5f32ad709e8/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/842bc57e6bbe56242bef7b88438357fe374fd511b54a67e77b67b5f32ad709e8/userdata/shm major:0 minor:253 fsType:tmpfs blockSize:0} 
/run/containers/storage/overlay-containers/8f2520a5a8a4d59a3a9c1df60e2638463688675ec7d03c44c89816280d167889/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/8f2520a5a8a4d59a3a9c1df60e2638463688675ec7d03c44c89816280d167889/userdata/shm major:0 minor:296 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/8f7395682c642b2e4f7ba2a9b79331d0b9afd8c7d7923a7bbdfc90aaeb45a6c2/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/8f7395682c642b2e4f7ba2a9b79331d0b9afd8c7d7923a7bbdfc90aaeb45a6c2/userdata/shm major:0 minor:109 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/9b912cc2fb7f1246b6e0fb7957cb5c167f818087772406214ca1bd3f180298fb/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/9b912cc2fb7f1246b6e0fb7957cb5c167f818087772406214ca1bd3f180298fb/userdata/shm major:0 minor:54 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/a13f1b34007cf32fe962f7d50d2988f0f66eb3022aee3b3a767d84bde6caed30/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/a13f1b34007cf32fe962f7d50d2988f0f66eb3022aee3b3a767d84bde6caed30/userdata/shm major:0 minor:58 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/abc95f00c9e0c52ab8e7354cef7b322da886c1a2e03c03fc7c2109630be9ce0b/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/abc95f00c9e0c52ab8e7354cef7b322da886c1a2e03c03fc7c2109630be9ce0b/userdata/shm major:0 minor:244 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/b6b12c0272b98e12411fc073869054a756107907b9e525ec9dbf8b8648e84805/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/b6b12c0272b98e12411fc073869054a756107907b9e525ec9dbf8b8648e84805/userdata/shm major:0 minor:237 fsType:tmpfs blockSize:0} 
/run/containers/storage/overlay-containers/bad7583a8d87a54f610f7ff59977a30650055c862ace4c5e9beab2a18620861a/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/bad7583a8d87a54f610f7ff59977a30650055c862ace4c5e9beab2a18620861a/userdata/shm major:0 minor:248 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/c947bd9963641afb60859a3b7c244810b57b25926def17f475843b4b80fe1d04/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/c947bd9963641afb60859a3b7c244810b57b25926def17f475843b4b80fe1d04/userdata/shm major:0 minor:263 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/d54f9c86fd46be5581997805399dc61e82749fea5be883d188b4c6364d1d55b9/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/d54f9c86fd46be5581997805399dc61e82749fea5be883d188b4c6364d1d55b9/userdata/shm major:0 minor:46 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/e2d9f98170b9be57120af2a3d4ad3e87888e64c3d58e7180a2211b7ab3fd61c6/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/e2d9f98170b9be57120af2a3d4ad3e87888e64c3d58e7180a2211b7ab3fd61c6/userdata/shm major:0 minor:154 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/f1cb9ab9a282ce90062e66d658d9cac8cb109a67f4786999b66ddea942eec412/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/f1cb9ab9a282ce90062e66d658d9cac8cb109a67f4786999b66ddea942eec412/userdata/shm major:0 minor:258 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/f4bdadfb01202ddc6464892800ff63c99a7021c118d9d6dada777648c97106ba/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/f4bdadfb01202ddc6464892800ff63c99a7021c118d9d6dada777648c97106ba/userdata/shm major:0 minor:129 fsType:tmpfs blockSize:0} /tmp:{mountpoint:/tmp major:0 minor:30 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/034aaf8e-95df-4171-bae4-e7abe58d15f7/volumes/kubernetes.io~projected/kube-api-access-5w5r2:{mountpoint:/var/lib/kubelet/pods/034aaf8e-95df-4171-bae4-e7abe58d15f7/volumes/kubernetes.io~projected/kube-api-access-5w5r2 major:0 minor:295 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/034aaf8e-95df-4171-bae4-e7abe58d15f7/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/034aaf8e-95df-4171-bae4-e7abe58d15f7/volumes/kubernetes.io~secret/serving-cert major:0 minor:289 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/089cfabc-9d3d-4260-bb16-8b5eaf73b3fa/volumes/kubernetes.io~projected/kube-api-access-vg8tz:{mountpoint:/var/lib/kubelet/pods/089cfabc-9d3d-4260-bb16-8b5eaf73b3fa/volumes/kubernetes.io~projected/kube-api-access-vg8tz major:0 minor:230 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/089cfabc-9d3d-4260-bb16-8b5eaf73b3fa/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/089cfabc-9d3d-4260-bb16-8b5eaf73b3fa/volumes/kubernetes.io~secret/serving-cert major:0 minor:209 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/08e2bc8e-ca80-454c-81dc-211d122e32e0/volumes/kubernetes.io~projected/kube-api-access-xstz5:{mountpoint:/var/lib/kubelet/pods/08e2bc8e-ca80-454c-81dc-211d122e32e0/volumes/kubernetes.io~projected/kube-api-access-xstz5 major:0 minor:256 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/0da84bb7-e936-49a0-96b5-614a1305d6a4/volumes/kubernetes.io~projected/kube-api-access:{mountpoint:/var/lib/kubelet/pods/0da84bb7-e936-49a0-96b5-614a1305d6a4/volumes/kubernetes.io~projected/kube-api-access major:0 minor:225 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/0da84bb7-e936-49a0-96b5-614a1305d6a4/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/0da84bb7-e936-49a0-96b5-614a1305d6a4/volumes/kubernetes.io~secret/serving-cert major:0 minor:214 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/10944f9c-8ce9-44e6-9c36-a0ea19d8cae3/volumes/kubernetes.io~projected/kube-api-access-zbk4f:{mountpoint:/var/lib/kubelet/pods/10944f9c-8ce9-44e6-9c36-a0ea19d8cae3/volumes/kubernetes.io~projected/kube-api-access-zbk4f major:0 minor:251 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/13f32761-b386-4f93-b3c0-b16ea53d338a/volumes/kubernetes.io~projected/kube-api-access-m2p67:{mountpoint:/var/lib/kubelet/pods/13f32761-b386-4f93-b3c0-b16ea53d338a/volumes/kubernetes.io~projected/kube-api-access-m2p67 major:0 minor:229 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/152689b1-5875-4a9a-bb25-bee858523168/volumes/kubernetes.io~projected/kube-api-access-km69t:{mountpoint:/var/lib/kubelet/pods/152689b1-5875-4a9a-bb25-bee858523168/volumes/kubernetes.io~projected/kube-api-access-km69t major:0 minor:115 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/15b592d6-3c48-45d4-9172-d28632ae8995/volumes/kubernetes.io~projected/kube-api-access-clrz7:{mountpoint:/var/lib/kubelet/pods/15b592d6-3c48-45d4-9172-d28632ae8995/volumes/kubernetes.io~projected/kube-api-access-clrz7 major:0 minor:232 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/15b592d6-3c48-45d4-9172-d28632ae8995/volumes/kubernetes.io~secret/etcd-client:{mountpoint:/var/lib/kubelet/pods/15b592d6-3c48-45d4-9172-d28632ae8995/volumes/kubernetes.io~secret/etcd-client major:0 minor:213 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/15b592d6-3c48-45d4-9172-d28632ae8995/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/15b592d6-3c48-45d4-9172-d28632ae8995/volumes/kubernetes.io~secret/serving-cert major:0 minor:215 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/1f43b4e7-5cd1-46d2-a02e-0d846b2e5182/volumes/kubernetes.io~projected/kube-api-access-brzd4:{mountpoint:/var/lib/kubelet/pods/1f43b4e7-5cd1-46d2-a02e-0d846b2e5182/volumes/kubernetes.io~projected/kube-api-access-brzd4 major:0 minor:138 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/1f43b4e7-5cd1-46d2-a02e-0d846b2e5182/volumes/kubernetes.io~secret/webhook-cert:{mountpoint:/var/lib/kubelet/pods/1f43b4e7-5cd1-46d2-a02e-0d846b2e5182/volumes/kubernetes.io~secret/webhook-cert major:0 minor:139 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/29b6aa89-0416-4595-9deb-10b290521d86/volumes/kubernetes.io~projected/kube-api-access-cbtjs:{mountpoint:/var/lib/kubelet/pods/29b6aa89-0416-4595-9deb-10b290521d86/volumes/kubernetes.io~projected/kube-api-access-cbtjs major:0 minor:123 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/2f79578c-bbfb-4968-893a-730deb4c01f9/volumes/kubernetes.io~projected/bound-sa-token:{mountpoint:/var/lib/kubelet/pods/2f79578c-bbfb-4968-893a-730deb4c01f9/volumes/kubernetes.io~projected/bound-sa-token major:0 minor:231 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/2f79578c-bbfb-4968-893a-730deb4c01f9/volumes/kubernetes.io~projected/kube-api-access-f9hks:{mountpoint:/var/lib/kubelet/pods/2f79578c-bbfb-4968-893a-730deb4c01f9/volumes/kubernetes.io~projected/kube-api-access-f9hks major:0 minor:302 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/3020d236-03e0-4916-97dd-f1085632ca43/volumes/kubernetes.io~projected/kube-api-access-c24hd:{mountpoint:/var/lib/kubelet/pods/3020d236-03e0-4916-97dd-f1085632ca43/volumes/kubernetes.io~projected/kube-api-access-c24hd major:0 minor:250 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/3d653e1a-5903-4a02-9357-df145f028c0d/volumes/kubernetes.io~projected/kube-api-access-6x8kz:{mountpoint:/var/lib/kubelet/pods/3d653e1a-5903-4a02-9357-df145f028c0d/volumes/kubernetes.io~projected/kube-api-access-6x8kz major:0 minor:222 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/4c0b18db-06ad-4d58-a353-f6fd96309dea/volumes/kubernetes.io~projected/kube-api-access-9psfn:{mountpoint:/var/lib/kubelet/pods/4c0b18db-06ad-4d58-a353-f6fd96309dea/volumes/kubernetes.io~projected/kube-api-access-9psfn major:0 minor:236 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/4dd0fc2f-f2ee-4447-a747-04a178288cf0/volumes/kubernetes.io~projected/kube-api-access-fnw9d:{mountpoint:/var/lib/kubelet/pods/4dd0fc2f-f2ee-4447-a747-04a178288cf0/volumes/kubernetes.io~projected/kube-api-access-fnw9d major:0 minor:104 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/4dd0fc2f-f2ee-4447-a747-04a178288cf0/volumes/kubernetes.io~secret/metrics-tls:{mountpoint:/var/lib/kubelet/pods/4dd0fc2f-f2ee-4447-a747-04a178288cf0/volumes/kubernetes.io~secret/metrics-tls major:0 minor:98 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/4e279dcc-35e2-4503-babc-978ac208c150/volumes/kubernetes.io~projected/kube-api-access-bwjz5:{mountpoint:/var/lib/kubelet/pods/4e279dcc-35e2-4503-babc-978ac208c150/volumes/kubernetes.io~projected/kube-api-access-bwjz5 major:0 minor:246 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/5ae41cff-0949-47f8-aae9-ae133191476d/volumes/kubernetes.io~projected/kube-api-access-mlvjp:{mountpoint:/var/lib/kubelet/pods/5ae41cff-0949-47f8-aae9-ae133191476d/volumes/kubernetes.io~projected/kube-api-access-mlvjp major:0 minor:125 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/5ae41cff-0949-47f8-aae9-ae133191476d/volumes/kubernetes.io~secret/ovn-control-plane-metrics-cert:{mountpoint:/var/lib/kubelet/pods/5ae41cff-0949-47f8-aae9-ae133191476d/volumes/kubernetes.io~secret/ovn-control-plane-metrics-cert major:0 minor:124 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/604456a0-4997-43bc-87ef-283a002111fe/volumes/kubernetes.io~projected/kube-api-access-8sk7j:{mountpoint:/var/lib/kubelet/pods/604456a0-4997-43bc-87ef-283a002111fe/volumes/kubernetes.io~projected/kube-api-access-8sk7j major:0 minor:247 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/77ef7e49-eb85-4f5e-94d3-a6a8619a6243/volumes/kubernetes.io~projected/kube-api-access:{mountpoint:/var/lib/kubelet/pods/77ef7e49-eb85-4f5e-94d3-a6a8619a6243/volumes/kubernetes.io~projected/kube-api-access major:0 minor:224 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/77ef7e49-eb85-4f5e-94d3-a6a8619a6243/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/77ef7e49-eb85-4f5e-94d3-a6a8619a6243/volumes/kubernetes.io~secret/serving-cert major:0 minor:220 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/887d261f-d07f-4ef0-a230-6568f47acf4d/volumes/kubernetes.io~projected/kube-api-access-pmfxj:{mountpoint:/var/lib/kubelet/pods/887d261f-d07f-4ef0-a230-6568f47acf4d/volumes/kubernetes.io~projected/kube-api-access-pmfxj major:0 minor:227 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/887d261f-d07f-4ef0-a230-6568f47acf4d/volumes/kubernetes.io~secret/cluster-olm-operator-serving-cert:{mountpoint:/var/lib/kubelet/pods/887d261f-d07f-4ef0-a230-6568f47acf4d/volumes/kubernetes.io~secret/cluster-olm-operator-serving-cert major:0 minor:219 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/8c62b15f-001a-4b64-b85f-348aefde5d1b/volumes/kubernetes.io~projected/kube-api-access-8cf2v:{mountpoint:/var/lib/kubelet/pods/8c62b15f-001a-4b64-b85f-348aefde5d1b/volumes/kubernetes.io~projected/kube-api-access-8cf2v major:0 minor:234 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/8c62b15f-001a-4b64-b85f-348aefde5d1b/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/8c62b15f-001a-4b64-b85f-348aefde5d1b/volumes/kubernetes.io~secret/serving-cert major:0 minor:216 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/bcf05594-4c10-4b54-a47c-d55e323f1f87/volumes/kubernetes.io~projected/bound-sa-token:{mountpoint:/var/lib/kubelet/pods/bcf05594-4c10-4b54-a47c-d55e323f1f87/volumes/kubernetes.io~projected/bound-sa-token major:0 minor:239 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/bcf05594-4c10-4b54-a47c-d55e323f1f87/volumes/kubernetes.io~projected/kube-api-access-j4hd6:{mountpoint:/var/lib/kubelet/pods/bcf05594-4c10-4b54-a47c-d55e323f1f87/volumes/kubernetes.io~projected/kube-api-access-j4hd6 major:0 minor:298 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/ce3a655a-0684-4bc5-ac36-5878507537c7/volumes/kubernetes.io~projected/kube-api-access-vgbvr:{mountpoint:/var/lib/kubelet/pods/ce3a655a-0684-4bc5-ac36-5878507537c7/volumes/kubernetes.io~projected/kube-api-access-vgbvr major:0 minor:103 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/d11f8baa-6e8e-4ac0-9b23-1c44efd0ab2a/volumes/kubernetes.io~projected/kube-api-access-m4tnq:{mountpoint:/var/lib/kubelet/pods/d11f8baa-6e8e-4ac0-9b23-1c44efd0ab2a/volumes/kubernetes.io~projected/kube-api-access-m4tnq major:0 minor:235 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/d11f8baa-6e8e-4ac0-9b23-1c44efd0ab2a/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/d11f8baa-6e8e-4ac0-9b23-1c44efd0ab2a/volumes/kubernetes.io~secret/serving-cert major:0 minor:221 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/d3d998ee-b26f-4e30-83bc-f94f8c68060a/volumes/kubernetes.io~projected/kube-api-access-x5nb7:{mountpoint:/var/lib/kubelet/pods/d3d998ee-b26f-4e30-83bc-f94f8c68060a/volumes/kubernetes.io~projected/kube-api-access-x5nb7 major:0 minor:294 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/d5a19b80-d488-46d3-a4a8-0b80361077e1/volumes/kubernetes.io~projected/kube-api-access-p8hcd:{mountpoint:/var/lib/kubelet/pods/d5a19b80-d488-46d3-a4a8-0b80361077e1/volumes/kubernetes.io~projected/kube-api-access-p8hcd major:0 minor:226 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/d6226325-c4d9-497e-8d19-a71adc66c5ac/volume-subpaths/run-systemd/ovnkube-controller/6:{mountpoint:/var/lib/kubelet/pods/d6226325-c4d9-497e-8d19-a71adc66c5ac/volume-subpaths/run-systemd/ovnkube-controller/6 major:0 minor:24 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/d6226325-c4d9-497e-8d19-a71adc66c5ac/volumes/kubernetes.io~projected/kube-api-access-4j5fc:{mountpoint:/var/lib/kubelet/pods/d6226325-c4d9-497e-8d19-a71adc66c5ac/volumes/kubernetes.io~projected/kube-api-access-4j5fc major:0 minor:127 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/d6226325-c4d9-497e-8d19-a71adc66c5ac/volumes/kubernetes.io~secret/ovn-node-metrics-cert:{mountpoint:/var/lib/kubelet/pods/d6226325-c4d9-497e-8d19-a71adc66c5ac/volumes/kubernetes.io~secret/ovn-node-metrics-cert major:0 minor:126 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/ec5ec2e2-f7b3-43a1-87da-fbbe0ee5b118/volumes/kubernetes.io~projected/kube-api-access:{mountpoint:/var/lib/kubelet/pods/ec5ec2e2-f7b3-43a1-87da-fbbe0ee5b118/volumes/kubernetes.io~projected/kube-api-access major:0 minor:233 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/ec5ec2e2-f7b3-43a1-87da-fbbe0ee5b118/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/ec5ec2e2-f7b3-43a1-87da-fbbe0ee5b118/volumes/kubernetes.io~secret/serving-cert major:0 minor:223 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/f0803181-4e37-43fa-8ddc-9c76d3f61817/volumes/kubernetes.io~projected/kube-api-access-lwkdj:{mountpoint:/var/lib/kubelet/pods/f0803181-4e37-43fa-8ddc-9c76d3f61817/volumes/kubernetes.io~projected/kube-api-access-lwkdj major:0 minor:301 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/f0803181-4e37-43fa-8ddc-9c76d3f61817/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/f0803181-4e37-43fa-8ddc-9c76d3f61817/volumes/kubernetes.io~secret/serving-cert major:0 minor:217 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/f39d7f76-0075-44c3-9101-eb2607cb176a/volumes/kubernetes.io~projected/kube-api-access:{mountpoint:/var/lib/kubelet/pods/f39d7f76-0075-44c3-9101-eb2607cb176a/volumes/kubernetes.io~projected/kube-api-access major:0 minor:102 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/f5775266-5e58-44ed-81cb-dfe3faf38add/volumes/kubernetes.io~projected/kube-api-access-9q2qc:{mountpoint:/var/lib/kubelet/pods/f5775266-5e58-44ed-81cb-dfe3faf38add/volumes/kubernetes.io~projected/kube-api-access-9q2qc major:0 minor:228 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/f5775266-5e58-44ed-81cb-dfe3faf38add/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/f5775266-5e58-44ed-81cb-dfe3faf38add/volumes/kubernetes.io~secret/serving-cert major:0 minor:218 fsType:tmpfs blockSize:0} overlay_0-107:{mountpoint:/var/lib/containers/storage/overlay/2e20967f955dc17b81fb0fd2c7d0fdd1a3bd0b7fe7919562d47cfdf1c031f722/merged major:0 minor:107 fsType:overlay blockSize:0} overlay_0-111:{mountpoint:/var/lib/containers/storage/overlay/e1a5e4a7e8219ade54c4cd7205ee7702377e47b938ab01de7ed63fc320fdffe8/merged major:0 minor:111 fsType:overlay blockSize:0} overlay_0-113:{mountpoint:/var/lib/containers/storage/overlay/3d4e764e3d81e00bc1c750a7459bc527f78f04e84e1f4979b63e8456cdc56641/merged major:0 minor:113 fsType:overlay blockSize:0} overlay_0-121:{mountpoint:/var/lib/containers/storage/overlay/f1d57f788e37ef5c0e479319625b466d3e91514b0832a7d9092dd06ac8c13183/merged major:0 minor:121 fsType:overlay blockSize:0} overlay_0-132:{mountpoint:/var/lib/containers/storage/overlay/ed8be71c5b5603cd6326e578c5e816d938fec664fa9b8276a9ea50c6b0d2bd63/merged major:0 minor:132 fsType:overlay blockSize:0} overlay_0-134:{mountpoint:/var/lib/containers/storage/overlay/92891694cb609e66dd1b6dab387e2bd26eb240246c90f46af94477fe31145696/merged major:0 minor:134 fsType:overlay blockSize:0} overlay_0-136:{mountpoint:/var/lib/containers/storage/overlay/cc877fde7f31fc5e1cbeb6e1720f4cded636aea9ffce928b6f8f1ad54dfa4bef/merged major:0 minor:136 fsType:overlay blockSize:0} overlay_0-140:{mountpoint:/var/lib/containers/storage/overlay/32a015fcb0f3b86479f20b31c219510bd81633f65ed4c1bc5928379aa6014692/merged major:0 minor:140 fsType:overlay blockSize:0} overlay_0-142:{mountpoint:/var/lib/containers/storage/overlay/917e007a1d7671d203e3ff2802697fcc5ae819981ba4cde9010d57b643423205/merged major:0 minor:142 fsType:overlay blockSize:0} 
overlay_0-150:{mountpoint:/var/lib/containers/storage/overlay/30dda23e81937d403326de7acfd8728bc31009bebeafe86e2f25766ca11a77f7/merged major:0 minor:150 fsType:overlay blockSize:0} overlay_0-151:{mountpoint:/var/lib/containers/storage/overlay/2f33f3257bc017f1c0e7c24dd80e10ff82e96a9615ce3103020cecf679dfd40d/merged major:0 minor:151 fsType:overlay blockSize:0} overlay_0-153:{mountpoint:/var/lib/containers/storage/overlay/a6aefe25a8d1b14eb3a83d79493aad0983b61f6700b994c8505e399421e7e863/merged major:0 minor:153 fsType:overlay blockSize:0} overlay_0-158:{mountpoint:/var/lib/containers/storage/overlay/1cbb429ef96723b5cf42252acc5f5f5b9bab63141ec60f3305acdb0e807ba787/merged major:0 minor:158 fsType:overlay blockSize:0} overlay_0-160:{mountpoint:/var/lib/containers/storage/overlay/d445aaf8f256a6968524045c854f6d590df3c9e79f7df4fe1337e4a9fd8e9f3c/merged major:0 minor:160 fsType:overlay blockSize:0} overlay_0-174:{mountpoint:/var/lib/containers/storage/overlay/a511d4e530bae2ea7098b77a49806ce53b838feb7974fa6f0484cfe286ec3208/merged major:0 minor:174 fsType:overlay blockSize:0} overlay_0-179:{mountpoint:/var/lib/containers/storage/overlay/cef9a59a3746f747de2392e7a97329114e96f38de9e8c8d3124cdadb09f17578/merged major:0 minor:179 fsType:overlay blockSize:0} overlay_0-184:{mountpoint:/var/lib/containers/storage/overlay/665af69a019334a47287bba3b8cde3e58d4c9c66f1ef7473e12a80bf336c6342/merged major:0 minor:184 fsType:overlay blockSize:0} overlay_0-189:{mountpoint:/var/lib/containers/storage/overlay/40f978e05b5e73f972dc067d6ff8509e2d7e989cc0468ccaaa09583ef2a945c1/merged major:0 minor:189 fsType:overlay blockSize:0} overlay_0-194:{mountpoint:/var/lib/containers/storage/overlay/c37a231a38815630c08e4ace5fbd4143e34bfbbc88cf5e8853a85dc21308d166/merged major:0 minor:194 fsType:overlay blockSize:0} overlay_0-195:{mountpoint:/var/lib/containers/storage/overlay/3a2bf2da98d53d69d5d830e3f5b340a7cf2ee2b2169e35cdc23a9fab508baf7a/merged major:0 minor:195 fsType:overlay blockSize:0} 
overlay_0-204:{mountpoint:/var/lib/containers/storage/overlay/7b27df09205a2be77819040a90c82da640712121a65d1c35a2797016e6b8f6e7/merged major:0 minor:204 fsType:overlay blockSize:0} overlay_0-261:{mountpoint:/var/lib/containers/storage/overlay/13ba0ddbcf6ff704e81c9b6ee7d79d60c7f86e80636650977e2ca49fca1e98cb/merged major:0 minor:261 fsType:overlay blockSize:0} overlay_0-264:{mountpoint:/var/lib/containers/storage/overlay/83efe2b653a10c507af6a3ca04d802921b3bc306bda0d4ac262aeda23cef02ec/merged major:0 minor:264 fsType:overlay blockSize:0} overlay_0-267:{mountpoint:/var/lib/containers/storage/overlay/188bc1bb25c3329d69e0b2ba9ba680434c270e1c1b6f3dbcd2464553cbaa86f2/merged major:0 minor:267 fsType:overlay blockSize:0} overlay_0-269:{mountpoint:/var/lib/containers/storage/overlay/ef8db0ef29fd46395bf6e3a4e90239c04edd0e99000425614f04a2a030ff44d3/merged major:0 minor:269 fsType:overlay blockSize:0} overlay_0-273:{mountpoint:/var/lib/containers/storage/overlay/6c9c2015aafcee37e6cd06805941be811b2d987cbfb8153e130383d3f03f3b68/merged major:0 minor:273 fsType:overlay blockSize:0} overlay_0-275:{mountpoint:/var/lib/containers/storage/overlay/a523ce52b43121deda8adacadb4793f75c6619dbb6dc3df0398ceec1528a6ec6/merged major:0 minor:275 fsType:overlay blockSize:0} overlay_0-277:{mountpoint:/var/lib/containers/storage/overlay/f6446735e2f517b4ad9cef6b863e7473947bfaba6d5fd64224c45a1a8f81aeb3/merged major:0 minor:277 fsType:overlay blockSize:0} overlay_0-279:{mountpoint:/var/lib/containers/storage/overlay/a191ec8d75040b4bd0e25736a264b345dadb1622055e3c28f0d15effeb7c2cc6/merged major:0 minor:279 fsType:overlay blockSize:0} overlay_0-281:{mountpoint:/var/lib/containers/storage/overlay/befe27508365f7050634ba2bfdc53f83973c31050528d191653465a81c1106be/merged major:0 minor:281 fsType:overlay blockSize:0} overlay_0-283:{mountpoint:/var/lib/containers/storage/overlay/6c37a5368bd943cda5e0b5b3e3793e2d05cfc6a0c7daf4bb43b0bdac2503e4b5/merged major:0 minor:283 fsType:overlay blockSize:0} 
overlay_0-285:{mountpoint:/var/lib/containers/storage/overlay/c9a64229e865ac00a85cb8001ef4ee39fb9edb1082b0f9296b78ed1cee3637fd/merged major:0 minor:285 fsType:overlay blockSize:0} overlay_0-287:{mountpoint:/var/lib/containers/storage/overlay/4ab9dd7348814840bbeecb872752be3b29b6ec6b55dde6c134ee08991c6ed5ad/merged major:0 minor:287 fsType:overlay blockSize:0} overlay_0-299:{mountpoint:/var/lib/containers/storage/overlay/52cc66ee674d2377b9d5c2856d59c5b71aff76b47a7cc8a57e1b100114e5e526/merged major:0 minor:299 fsType:overlay blockSize:0} overlay_0-305:{mountpoint:/var/lib/containers/storage/overlay/e90c90f7997a66aa487abd64e7bd3603979c19989ca3ae0c43d3bd647dc71c96/merged major:0 minor:305 fsType:overlay blockSize:0} overlay_0-43:{mountpoint:/var/lib/containers/storage/overlay/478211bd19761919254bb622c8b59161c3e2ea2f7e3fb0e3dfaa62c6fcf6366d/merged major:0 minor:43 fsType:overlay blockSize:0} overlay_0-44:{mountpoint:/var/lib/containers/storage/overlay/4e080c6a781ee12c06401fe2864efde381f874b6664131e60e2a215b2d0c3eb4/merged major:0 minor:44 fsType:overlay blockSize:0} overlay_0-48:{mountpoint:/var/lib/containers/storage/overlay/9f11aa105356c5eb39852398ee96f0fd0115f0825bd86059c3993bb540ffc95d/merged major:0 minor:48 fsType:overlay blockSize:0} overlay_0-52:{mountpoint:/var/lib/containers/storage/overlay/bfaca44bc14fee169a996e444985d168af638c9803a281635ac77812283d3a6d/merged major:0 minor:52 fsType:overlay blockSize:0} overlay_0-56:{mountpoint:/var/lib/containers/storage/overlay/c6382117c31802e2b4809dcee430355989731eed4ea867c67ab7c66758edba51/merged major:0 minor:56 fsType:overlay blockSize:0} overlay_0-60:{mountpoint:/var/lib/containers/storage/overlay/a497fa913e710b3b8670c469e6fc8ab93225ea9d88e583ed2c7fee2c7d322ca8/merged major:0 minor:60 fsType:overlay blockSize:0} overlay_0-62:{mountpoint:/var/lib/containers/storage/overlay/3c5218f7ef6e71f25858adc271e7153a8520dbdf9b8e03037b3add4c6ed54006/merged major:0 minor:62 fsType:overlay blockSize:0} 
overlay_0-64:{mountpoint:/var/lib/containers/storage/overlay/e645a5b810830f0cd11a73fd98a3b91a9d4db4aad9dd44c62f72853c29d8efe4/merged major:0 minor:64 fsType:overlay blockSize:0} overlay_0-66:{mountpoint:/var/lib/containers/storage/overlay/1fefda25276535d6aa735689e8a71be615f16d720261f54c86c0fdbf7fe6138f/merged major:0 minor:66 fsType:overlay blockSize:0} overlay_0-74:{mountpoint:/var/lib/containers/storage/overlay/a7b5c7a0be843f19cd3c2e16a03878bdf537f7c7183a16d9f7ddc807c60ab2e8/merged major:0 minor:74 fsType:overlay blockSize:0} overlay_0-76:{mountpoint:/var/lib/containers/storage/overlay/3ea6ecce2d3f65cdab52aaf151729184b91a5b88c7b4ba8cad58cc4f5bbfd731/merged major:0 minor:76 fsType:overlay blockSize:0} overlay_0-84:{mountpoint:/var/lib/containers/storage/overlay/d200744d56ae1ecc9f8954d4f9c8ccdc2581a642e9a4a4caee61d0c09c93fd66/merged major:0 minor:84 fsType:overlay blockSize:0} overlay_0-89:{mountpoint:/var/lib/containers/storage/overlay/306a24d16944c9022adb0bdf925061b6f5059ecf4c139eac2a8b01a00c5ad084/merged major:0 minor:89 fsType:overlay blockSize:0}] Mar 13 12:37:25.544071 master-0 kubenswrapper[7518]: I0313 12:37:25.543070 7518 manager.go:217] Machine: {Timestamp:2026-03-13 12:37:25.541752494 +0000 UTC m=+0.174821691 CPUVendorID:AuthenticAMD NumCores:12 NumPhysicalCores:1 NumSockets:12 CpuFrequency:2799998 MemoryCapacity:33654128640 SwapCapacity:0 MemoryByType:map[] NVMInfo:{MemoryModeCapacity:0 AppDirectModeCapacity:0 AvgPowerBudget:0} HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] MachineID:8daa6345b1f242d1bcc5f3b6bc2ba573 SystemUUID:8daa6345-b1f2-42d1-bcc5-f3b6bc2ba573 BootID:5a21c0be-2989-406d-99e7-723bbc7963b9 Filesystems:[{Device:overlay_0-195 DeviceMajor:0 DeviceMinor:195 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/887d261f-d07f-4ef0-a230-6568f47acf4d/volumes/kubernetes.io~projected/kube-api-access-pmfxj DeviceMajor:0 DeviceMinor:227 Capacity:32475529216 Type:vfs Inodes:4108170 
HasInodes:true} {Device:/run/containers/storage/overlay-containers/a13f1b34007cf32fe962f7d50d2988f0f66eb3022aee3b3a767d84bde6caed30/userdata/shm DeviceMajor:0 DeviceMinor:58 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-113 DeviceMajor:0 DeviceMinor:113 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-153 DeviceMajor:0 DeviceMinor:153 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/3020d236-03e0-4916-97dd-f1085632ca43/volumes/kubernetes.io~projected/kube-api-access-c24hd DeviceMajor:0 DeviceMinor:250 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-160 DeviceMajor:0 DeviceMinor:160 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/0da84bb7-e936-49a0-96b5-614a1305d6a4/volumes/kubernetes.io~projected/kube-api-access DeviceMajor:0 DeviceMinor:225 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run/containers/storage/overlay-containers/c947bd9963641afb60859a3b7c244810b57b25926def17f475843b4b80fe1d04/userdata/shm DeviceMajor:0 DeviceMinor:263 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run/containers/storage/overlay-containers/534692e5957aae2c3d6d9152a87bd37d178574b231da74f33889bcb3869aae82/userdata/shm DeviceMajor:0 DeviceMinor:105 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-158 DeviceMajor:0 DeviceMinor:158 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/d3d998ee-b26f-4e30-83bc-f94f8c68060a/volumes/kubernetes.io~projected/kube-api-access-x5nb7 DeviceMajor:0 DeviceMinor:294 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/dev/vda3 DeviceMajor:252 DeviceMinor:3 Capacity:366869504 Type:vfs Inodes:98304 HasInodes:true} {Device:/var/lib/kubelet/pods/887d261f-d07f-4ef0-a230-6568f47acf4d/volumes/kubernetes.io~secret/cluster-olm-operator-serving-cert 
DeviceMajor:0 DeviceMinor:219 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-283 DeviceMajor:0 DeviceMinor:283 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-285 DeviceMajor:0 DeviceMinor:285 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/d6226325-c4d9-497e-8d19-a71adc66c5ac/volume-subpaths/run-systemd/ovnkube-controller/6 DeviceMajor:0 DeviceMinor:24 Capacity:6730825728 Type:vfs Inodes:819200 HasInodes:true} {Device:/run/containers/storage/overlay-containers/bad7583a8d87a54f610f7ff59977a30650055c862ace4c5e9beab2a18620861a/userdata/shm DeviceMajor:0 DeviceMinor:248 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-142 DeviceMajor:0 DeviceMinor:142 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-174 DeviceMajor:0 DeviceMinor:174 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/d5a19b80-d488-46d3-a4a8-0b80361077e1/volumes/kubernetes.io~projected/kube-api-access-p8hcd DeviceMajor:0 DeviceMinor:226 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-275 DeviceMajor:0 DeviceMinor:275 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-84 DeviceMajor:0 DeviceMinor:84 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/5ae41cff-0949-47f8-aae9-ae133191476d/volumes/kubernetes.io~projected/kube-api-access-mlvjp DeviceMajor:0 DeviceMinor:125 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run/containers/storage/overlay-containers/e2d9f98170b9be57120af2a3d4ad3e87888e64c3d58e7180a2211b7ab3fd61c6/userdata/shm DeviceMajor:0 DeviceMinor:154 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/f5775266-5e58-44ed-81cb-dfe3faf38add/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:218 
Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/ec5ec2e2-f7b3-43a1-87da-fbbe0ee5b118/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:223 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-264 DeviceMajor:0 DeviceMinor:264 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/4a6cc550d523ce1bfed748c19240f1c4e3a9202060aead91cc14af91ea48f5ce/userdata/shm DeviceMajor:0 DeviceMinor:50 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-89 DeviceMajor:0 DeviceMinor:89 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/0da84bb7-e936-49a0-96b5-614a1305d6a4/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:214 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/5ae41cff-0949-47f8-aae9-ae133191476d/volumes/kubernetes.io~secret/ovn-control-plane-metrics-cert DeviceMajor:0 DeviceMinor:124 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-134 DeviceMajor:0 DeviceMinor:134 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/034aaf8e-95df-4171-bae4-e7abe58d15f7/volumes/kubernetes.io~projected/kube-api-access-5w5r2 DeviceMajor:0 DeviceMinor:295 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-76 DeviceMajor:0 DeviceMinor:76 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/8f7395682c642b2e4f7ba2a9b79331d0b9afd8c7d7923a7bbdfc90aaeb45a6c2/userdata/shm DeviceMajor:0 DeviceMinor:109 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/034aaf8e-95df-4171-bae4-e7abe58d15f7/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:289 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} 
{Device:/var/lib/kubelet/pods/f0803181-4e37-43fa-8ddc-9c76d3f61817/volumes/kubernetes.io~projected/kube-api-access-lwkdj DeviceMajor:0 DeviceMinor:301 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/29b6aa89-0416-4595-9deb-10b290521d86/volumes/kubernetes.io~projected/kube-api-access-cbtjs DeviceMajor:0 DeviceMinor:123 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run/containers/storage/overlay-containers/b6b12c0272b98e12411fc073869054a756107907b9e525ec9dbf8b8648e84805/userdata/shm DeviceMajor:0 DeviceMinor:237 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-204 DeviceMajor:0 DeviceMinor:204 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/089cfabc-9d3d-4260-bb16-8b5eaf73b3fa/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:209 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/4e279dcc-35e2-4503-babc-978ac208c150/volumes/kubernetes.io~projected/kube-api-access-bwjz5 DeviceMajor:0 DeviceMinor:246 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/2f79578c-bbfb-4968-893a-730deb4c01f9/volumes/kubernetes.io~projected/kube-api-access-f9hks DeviceMajor:0 DeviceMinor:302 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-140 DeviceMajor:0 DeviceMinor:140 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-179 DeviceMajor:0 DeviceMinor:179 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/4100d060137e4638140caf3273251902712a7f8176df0de3da8bd3abf9194231/userdata/shm DeviceMajor:0 DeviceMinor:252 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-44 DeviceMajor:0 DeviceMinor:44 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-64 DeviceMajor:0 DeviceMinor:64 
Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/10944f9c-8ce9-44e6-9c36-a0ea19d8cae3/volumes/kubernetes.io~projected/kube-api-access-zbk4f DeviceMajor:0 DeviceMinor:251 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/8c62b15f-001a-4b64-b85f-348aefde5d1b/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:216 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-279 DeviceMajor:0 DeviceMinor:279 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/5f581d90a0a82a94fc080eaf7d47e92e9bf51aec1be87f8c182f38bf6bb3aa3c/userdata/shm DeviceMajor:0 DeviceMinor:303 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/4dd0fc2f-f2ee-4447-a747-04a178288cf0/volumes/kubernetes.io~secret/metrics-tls DeviceMajor:0 DeviceMinor:98 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-136 DeviceMajor:0 DeviceMinor:136 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-151 DeviceMajor:0 DeviceMinor:151 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/15b592d6-3c48-45d4-9172-d28632ae8995/volumes/kubernetes.io~secret/etcd-client DeviceMajor:0 DeviceMinor:213 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/d11f8baa-6e8e-4ac0-9b23-1c44efd0ab2a/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:221 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/dev/shm DeviceMajor:0 DeviceMinor:22 Capacity:16827064320 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-52 DeviceMajor:0 DeviceMinor:52 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/5e03538f7a196b4948a3a7782b34246a467d9e14e18b21bed24c1061ee7390ce/userdata/shm 
DeviceMajor:0 DeviceMinor:240 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run/containers/storage/overlay-containers/abc95f00c9e0c52ab8e7354cef7b322da886c1a2e03c03fc7c2109630be9ce0b/userdata/shm DeviceMajor:0 DeviceMinor:244 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-277 DeviceMajor:0 DeviceMinor:277 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/d54f9c86fd46be5581997805399dc61e82749fea5be883d188b4c6364d1d55b9/userdata/shm DeviceMajor:0 DeviceMinor:46 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/152689b1-5875-4a9a-bb25-bee858523168/volumes/kubernetes.io~projected/kube-api-access-km69t DeviceMajor:0 DeviceMinor:115 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run/containers/storage/overlay-containers/2bd86a5a786b8cd9854f1e649c41cebb309a3c1ac190ae67ed40c19b3eec0d04/userdata/shm DeviceMajor:0 DeviceMinor:119 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/1f43b4e7-5cd1-46d2-a02e-0d846b2e5182/volumes/kubernetes.io~projected/kube-api-access-brzd4 DeviceMajor:0 DeviceMinor:138 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/d6226325-c4d9-497e-8d19-a71adc66c5ac/volumes/kubernetes.io~secret/ovn-node-metrics-cert DeviceMajor:0 DeviceMinor:126 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-48 DeviceMajor:0 DeviceMinor:48 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-60 DeviceMajor:0 DeviceMinor:60 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/77ef7e49-eb85-4f5e-94d3-a6a8619a6243/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:220 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} 
{Device:/var/lib/kubelet/pods/4c0b18db-06ad-4d58-a353-f6fd96309dea/volumes/kubernetes.io~projected/kube-api-access-9psfn DeviceMajor:0 DeviceMinor:236 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/8c62b15f-001a-4b64-b85f-348aefde5d1b/volumes/kubernetes.io~projected/kube-api-access-8cf2v DeviceMajor:0 DeviceMinor:234 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run/containers/storage/overlay-containers/1f8e6ca57afc2c7f1b75640b9d76490f87697f57e3507366ea9d48c029b1f4d6/userdata/shm DeviceMajor:0 DeviceMinor:242 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-305 DeviceMajor:0 DeviceMinor:305 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-56 DeviceMajor:0 DeviceMinor:56 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-62 DeviceMajor:0 DeviceMinor:62 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/13f32761-b386-4f93-b3c0-b16ea53d338a/volumes/kubernetes.io~projected/kube-api-access-m2p67 DeviceMajor:0 DeviceMinor:229 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run/containers/storage/overlay-containers/4641cab9868e3327d01299b932a32e6567401ef53f9b8cc74562f50d7b0926ca/userdata/shm DeviceMajor:0 DeviceMinor:271 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-273 DeviceMajor:0 DeviceMinor:273 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-107 DeviceMajor:0 DeviceMinor:107 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-111 DeviceMajor:0 DeviceMinor:111 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/f0803181-4e37-43fa-8ddc-9c76d3f61817/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:217 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} 
{Device:/var/lib/kubelet/pods/604456a0-4997-43bc-87ef-283a002111fe/volumes/kubernetes.io~projected/kube-api-access-8sk7j DeviceMajor:0 DeviceMinor:247 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run/containers/storage/overlay-containers/8f2520a5a8a4d59a3a9c1df60e2638463688675ec7d03c44c89816280d167889/userdata/shm DeviceMajor:0 DeviceMinor:296 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run/containers/storage/overlay-containers/7066c2bb7f28cfd07ac1eb011cdc9849969ed5f37788da395910309c70481aa9/userdata/shm DeviceMajor:0 DeviceMinor:128 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/15b592d6-3c48-45d4-9172-d28632ae8995/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:215 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-261 DeviceMajor:0 DeviceMinor:261 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-267 DeviceMajor:0 DeviceMinor:267 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-299 DeviceMajor:0 DeviceMinor:299 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-121 DeviceMajor:0 DeviceMinor:121 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-132 DeviceMajor:0 DeviceMinor:132 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/3d653e1a-5903-4a02-9357-df145f028c0d/volumes/kubernetes.io~projected/kube-api-access-6x8kz DeviceMajor:0 DeviceMinor:222 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/f5775266-5e58-44ed-81cb-dfe3faf38add/volumes/kubernetes.io~projected/kube-api-access-9q2qc DeviceMajor:0 DeviceMinor:228 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-269 DeviceMajor:0 DeviceMinor:269 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-43 
DeviceMajor:0 DeviceMinor:43 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-189 DeviceMajor:0 DeviceMinor:189 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/1f43b4e7-5cd1-46d2-a02e-0d846b2e5182/volumes/kubernetes.io~secret/webhook-cert DeviceMajor:0 DeviceMinor:139 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/tmp DeviceMajor:0 DeviceMinor:30 Capacity:16827064320 Type:vfs Inodes:1048576 HasInodes:true} {Device:/var/lib/kubelet/pods/4dd0fc2f-f2ee-4447-a747-04a178288cf0/volumes/kubernetes.io~projected/kube-api-access-fnw9d DeviceMajor:0 DeviceMinor:104 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-150 DeviceMajor:0 DeviceMinor:150 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/d11f8baa-6e8e-4ac0-9b23-1c44efd0ab2a/volumes/kubernetes.io~projected/kube-api-access-m4tnq DeviceMajor:0 DeviceMinor:235 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run/containers/storage/overlay-containers/716ce6662fa89fc5efc984950f9c70517944c523cdede22247748de4ca23948d/userdata/shm DeviceMajor:0 DeviceMinor:41 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/bcf05594-4c10-4b54-a47c-d55e323f1f87/volumes/kubernetes.io~projected/kube-api-access-j4hd6 DeviceMajor:0 DeviceMinor:298 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/bcf05594-4c10-4b54-a47c-d55e323f1f87/volumes/kubernetes.io~projected/bound-sa-token DeviceMajor:0 DeviceMinor:239 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/08e2bc8e-ca80-454c-81dc-211d122e32e0/volumes/kubernetes.io~projected/kube-api-access-xstz5 DeviceMajor:0 DeviceMinor:256 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-74 DeviceMajor:0 DeviceMinor:74 Capacity:214143315968 Type:vfs Inodes:104594880 
HasInodes:true} {Device:/var/lib/kubelet/pods/2f79578c-bbfb-4968-893a-730deb4c01f9/volumes/kubernetes.io~projected/bound-sa-token DeviceMajor:0 DeviceMinor:231 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-184 DeviceMajor:0 DeviceMinor:184 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/842bc57e6bbe56242bef7b88438357fe374fd511b54a67e77b67b5f32ad709e8/userdata/shm DeviceMajor:0 DeviceMinor:253 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-66 DeviceMajor:0 DeviceMinor:66 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/ce3a655a-0684-4bc5-ac36-5878507537c7/volumes/kubernetes.io~projected/kube-api-access-vgbvr DeviceMajor:0 DeviceMinor:103 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/ec5ec2e2-f7b3-43a1-87da-fbbe0ee5b118/volumes/kubernetes.io~projected/kube-api-access DeviceMajor:0 DeviceMinor:233 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run/containers/storage/overlay-containers/f1cb9ab9a282ce90062e66d658d9cac8cb109a67f4786999b66ddea942eec412/userdata/shm DeviceMajor:0 DeviceMinor:258 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-287 DeviceMajor:0 DeviceMinor:287 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run DeviceMajor:0 DeviceMinor:24 Capacity:6730825728 Type:vfs Inodes:819200 HasInodes:true} {Device:/run/containers/storage/overlay-containers/f4bdadfb01202ddc6464892800ff63c99a7021c118d9d6dada777648c97106ba/userdata/shm DeviceMajor:0 DeviceMinor:129 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/d6226325-c4d9-497e-8d19-a71adc66c5ac/volumes/kubernetes.io~projected/kube-api-access-4j5fc DeviceMajor:0 DeviceMinor:127 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} 
{Device:/var/lib/kubelet/pods/77ef7e49-eb85-4f5e-94d3-a6a8619a6243/volumes/kubernetes.io~projected/kube-api-access DeviceMajor:0 DeviceMinor:224 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/089cfabc-9d3d-4260-bb16-8b5eaf73b3fa/volumes/kubernetes.io~projected/kube-api-access-vg8tz DeviceMajor:0 DeviceMinor:230 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run/containers/storage/overlay-containers/754a980682251c2faf310af15f0042fda13df9ae03c81a3a698c0d687faffa20/userdata/shm DeviceMajor:0 DeviceMinor:257 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run/containers/storage/overlay-containers/9b912cc2fb7f1246b6e0fb7957cb5c167f818087772406214ca1bd3f180298fb/userdata/shm DeviceMajor:0 DeviceMinor:54 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-194 DeviceMajor:0 DeviceMinor:194 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/15b592d6-3c48-45d4-9172-d28632ae8995/volumes/kubernetes.io~projected/kube-api-access-clrz7 DeviceMajor:0 DeviceMinor:232 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-281 DeviceMajor:0 DeviceMinor:281 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/dev/vda4 DeviceMajor:252 DeviceMinor:4 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/f39d7f76-0075-44c3-9101-eb2607cb176a/volumes/kubernetes.io~projected/kube-api-access DeviceMajor:0 DeviceMinor:102 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true}] DiskMap:map[252:0:{Name:vda Major:252 Minor:0 Size:214748364800 Scheduler:none} 252:16:{Name:vdb Major:252 Minor:16 Size:21474836480 Scheduler:none} 252:32:{Name:vdc Major:252 Minor:32 Size:21474836480 Scheduler:none} 252:48:{Name:vdd Major:252 Minor:48 Size:21474836480 Scheduler:none} 252:64:{Name:vde Major:252 Minor:64 Size:21474836480 Scheduler:none}] 
NetworkDevices:[{Name:1f8e6ca57afc2c7 MacAddress:6a:e6:36:bd:b6:e5 Speed:10000 Mtu:8900} {Name:4100d060137e463 MacAddress:2a:9f:c3:7e:23:30 Speed:10000 Mtu:8900} {Name:5e03538f7a196b4 MacAddress:92:c4:5f:6f:78:e4 Speed:10000 Mtu:8900} {Name:5f581d90a0a82a9 MacAddress:8e:68:1e:e2:85:44 Speed:10000 Mtu:8900} {Name:754a980682251c2 MacAddress:7a:52:f5:8d:1b:8f Speed:10000 Mtu:8900} {Name:842bc57e6bbe562 MacAddress:32:a0:fa:9e:a7:0b Speed:10000 Mtu:8900} {Name:8f2520a5a8a4d59 MacAddress:46:80:2b:15:ab:26 Speed:10000 Mtu:8900} {Name:abc95f00c9e0c52 MacAddress:e6:20:8d:42:c0:49 Speed:10000 Mtu:8900} {Name:b6b12c0272b98e1 MacAddress:02:7b:5d:87:32:71 Speed:10000 Mtu:8900} {Name:bad7583a8d87a54 MacAddress:ae:80:27:37:da:25 Speed:10000 Mtu:8900} {Name:br-ex MacAddress:fa:16:9e:81:f6:10 Speed:0 Mtu:9000} {Name:br-int MacAddress:b2:18:3f:c3:47:7c Speed:0 Mtu:8900} {Name:c947bd9963641af MacAddress:22:e1:dc:59:f8:5c Speed:10000 Mtu:8900} {Name:eth0 MacAddress:fa:16:9e:81:f6:10 Speed:-1 Mtu:9000} {Name:eth1 MacAddress:fa:16:3e:68:13:a8 Speed:-1 Mtu:9000} {Name:eth2 MacAddress:fa:16:3e:45:8d:c9 Speed:-1 Mtu:9000} {Name:f1cb9ab9a282ce9 MacAddress:3e:63:55:f4:78:6a Speed:10000 Mtu:8900} {Name:ovn-k8s-mp0 MacAddress:0a:58:0a:80:00:02 Speed:0 Mtu:8900} {Name:ovs-system MacAddress:0e:62:76:22:8d:d1 Speed:0 Mtu:1500}] Topology:[{Id:0 Memory:33654128640 HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] Cores:[{Id:0 Threads:[0] Caches:[{Id:0 Size:32768 Type:Data Level:1} {Id:0 Size:32768 Type:Instruction Level:1} {Id:0 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:0 Size:16777216 Type:Unified Level:3}] SocketID:0 BookID: DrawerID:} {Id:0 Threads:[1] Caches:[{Id:1 Size:32768 Type:Data Level:1} {Id:1 Size:32768 Type:Instruction Level:1} {Id:1 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:1 Size:16777216 Type:Unified Level:3}] SocketID:1 BookID: DrawerID:} {Id:0 Threads:[10] Caches:[{Id:10 Size:32768 Type:Data Level:1} {Id:10 Size:32768 Type:Instruction 
Level:1} {Id:10 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:10 Size:16777216 Type:Unified Level:3}] SocketID:10 BookID: DrawerID:} {Id:0 Threads:[11] Caches:[{Id:11 Size:32768 Type:Data Level:1} {Id:11 Size:32768 Type:Instruction Level:1} {Id:11 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:11 Size:16777216 Type:Unified Level:3}] SocketID:11 BookID: DrawerID:} {Id:0 Threads:[2] Caches:[{Id:2 Size:32768 Type:Data Level:1} {Id:2 Size:32768 Type:Instruction Level:1} {Id:2 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:2 Size:16777216 Type:Unified Level:3}] SocketID:2 BookID: DrawerID:} {Id:0 Threads:[3] Caches:[{Id:3 Size:32768 Type:Data Level:1} {Id:3 Size:32768 Type:Instruction Level:1} {Id:3 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:3 Size:16777216 Type:Unified Level:3}] SocketID:3 BookID: DrawerID:} {Id:0 Threads:[4] Caches:[{Id:4 Size:32768 Type:Data Level:1} {Id:4 Size:32768 Type:Instruction Level:1} {Id:4 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:4 Size:16777216 Type:Unified Level:3}] SocketID:4 BookID: DrawerID:} {Id:0 Threads:[5] Caches:[{Id:5 Size:32768 Type:Data Level:1} {Id:5 Size:32768 Type:Instruction Level:1} {Id:5 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:5 Size:16777216 Type:Unified Level:3}] SocketID:5 BookID: DrawerID:} {Id:0 Threads:[6] Caches:[{Id:6 Size:32768 Type:Data Level:1} {Id:6 Size:32768 Type:Instruction Level:1} {Id:6 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:6 Size:16777216 Type:Unified Level:3}] SocketID:6 BookID: DrawerID:} {Id:0 Threads:[7] Caches:[{Id:7 Size:32768 Type:Data Level:1} {Id:7 Size:32768 Type:Instruction Level:1} {Id:7 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:7 Size:16777216 Type:Unified Level:3}] SocketID:7 BookID: DrawerID:} {Id:0 Threads:[8] Caches:[{Id:8 Size:32768 Type:Data Level:1} {Id:8 Size:32768 Type:Instruction Level:1} {Id:8 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:8 Size:16777216 Type:Unified Level:3}] SocketID:8 BookID: 
DrawerID:} {Id:0 Threads:[9] Caches:[{Id:9 Size:32768 Type:Data Level:1} {Id:9 Size:32768 Type:Instruction Level:1} {Id:9 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:9 Size:16777216 Type:Unified Level:3}] SocketID:9 BookID: DrawerID:}] Caches:[] Distances:[10]}] CloudProvider:Unknown InstanceType:Unknown InstanceID:None} Mar 13 12:37:25.544071 master-0 kubenswrapper[7518]: I0313 12:37:25.544024 7518 manager_no_libpfm.go:29] cAdvisor is build without cgo and/or libpfm support. Perf event counters are not available. Mar 13 12:37:25.544492 master-0 kubenswrapper[7518]: I0313 12:37:25.544248 7518 manager.go:233] Version: {KernelVersion:5.14.0-427.111.1.el9_4.x86_64 ContainerOsVersion:Red Hat Enterprise Linux CoreOS 418.94.202602172219-0 DockerVersion: DockerAPIVersion: CadvisorVersion: CadvisorRevision:} Mar 13 12:37:25.544826 master-0 kubenswrapper[7518]: I0313 12:37:25.544764 7518 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Mar 13 12:37:25.545117 master-0 kubenswrapper[7518]: I0313 12:37:25.545055 7518 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Mar 13 12:37:25.545535 master-0 kubenswrapper[7518]: I0313 12:37:25.545113 7518 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" 
nodeConfig={"NodeName":"master-0","RuntimeCgroupsName":"/system.slice/crio.service","SystemCgroupsName":"/system.slice","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":true,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":{"cpu":"500m","ephemeral-storage":"1Gi","memory":"1Gi"},"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":4096,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Mar 13 12:37:25.545588 master-0 kubenswrapper[7518]: I0313 12:37:25.545559 7518 topology_manager.go:138] "Creating topology manager with none policy" Mar 13 12:37:25.545588 master-0 kubenswrapper[7518]: I0313 12:37:25.545569 7518 container_manager_linux.go:303] "Creating device plugin manager" Mar 13 12:37:25.545588 master-0 kubenswrapper[7518]: I0313 12:37:25.545577 7518 manager.go:142] 
"Creating Device Plugin manager" path="/var/lib/kubelet/device-plugins/kubelet.sock" Mar 13 12:37:25.545722 master-0 kubenswrapper[7518]: I0313 12:37:25.545698 7518 server.go:66] "Creating device plugin registration server" version="v1beta1" socket="/var/lib/kubelet/device-plugins/kubelet.sock" Mar 13 12:37:25.545919 master-0 kubenswrapper[7518]: I0313 12:37:25.545898 7518 state_mem.go:36] "Initialized new in-memory state store" Mar 13 12:37:25.546097 master-0 kubenswrapper[7518]: I0313 12:37:25.546077 7518 server.go:1245] "Using root directory" path="/var/lib/kubelet" Mar 13 12:37:25.546272 master-0 kubenswrapper[7518]: I0313 12:37:25.546248 7518 kubelet.go:418] "Attempting to sync node with API server" Mar 13 12:37:25.546302 master-0 kubenswrapper[7518]: I0313 12:37:25.546272 7518 kubelet.go:313] "Adding static pod path" path="/etc/kubernetes/manifests" Mar 13 12:37:25.546335 master-0 kubenswrapper[7518]: I0313 12:37:25.546322 7518 file.go:69] "Watching path" path="/etc/kubernetes/manifests" Mar 13 12:37:25.546364 master-0 kubenswrapper[7518]: I0313 12:37:25.546337 7518 kubelet.go:324] "Adding apiserver pod source" Mar 13 12:37:25.546401 master-0 kubenswrapper[7518]: I0313 12:37:25.546385 7518 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Mar 13 12:37:25.549205 master-0 kubenswrapper[7518]: I0313 12:37:25.549163 7518 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="cri-o" version="1.31.13-8.rhaos4.18.gitd78977c.el9" apiVersion="v1" Mar 13 12:37:25.549447 master-0 kubenswrapper[7518]: I0313 12:37:25.549421 7518 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-server-current.pem". 
Mar 13 12:37:25.549773 master-0 kubenswrapper[7518]: I0313 12:37:25.549746 7518 kubelet.go:854] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Mar 13 12:37:25.549986 master-0 kubenswrapper[7518]: I0313 12:37:25.549960 7518 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/portworx-volume" Mar 13 12:37:25.549986 master-0 kubenswrapper[7518]: I0313 12:37:25.549983 7518 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/empty-dir" Mar 13 12:37:25.550078 master-0 kubenswrapper[7518]: I0313 12:37:25.549990 7518 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/git-repo" Mar 13 12:37:25.550078 master-0 kubenswrapper[7518]: I0313 12:37:25.549997 7518 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/host-path" Mar 13 12:37:25.550078 master-0 kubenswrapper[7518]: I0313 12:37:25.550003 7518 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/nfs" Mar 13 12:37:25.550078 master-0 kubenswrapper[7518]: I0313 12:37:25.550010 7518 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/secret" Mar 13 12:37:25.550078 master-0 kubenswrapper[7518]: I0313 12:37:25.550017 7518 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/iscsi" Mar 13 12:37:25.550078 master-0 kubenswrapper[7518]: I0313 12:37:25.550041 7518 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/downward-api" Mar 13 12:37:25.550078 master-0 kubenswrapper[7518]: I0313 12:37:25.550049 7518 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/fc" Mar 13 12:37:25.550078 master-0 kubenswrapper[7518]: I0313 12:37:25.550055 7518 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/configmap" Mar 13 12:37:25.550078 master-0 kubenswrapper[7518]: I0313 12:37:25.550075 7518 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/projected" Mar 13 12:37:25.550078 master-0 kubenswrapper[7518]: I0313 12:37:25.550088 7518 plugins.go:603] "Loaded volume plugin" 
pluginName="kubernetes.io/local-volume" Mar 13 12:37:25.550404 master-0 kubenswrapper[7518]: I0313 12:37:25.550121 7518 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/csi" Mar 13 12:37:25.550611 master-0 kubenswrapper[7518]: I0313 12:37:25.550582 7518 server.go:1280] "Started kubelet" Mar 13 12:37:25.551655 master-0 kubenswrapper[7518]: I0313 12:37:25.551610 7518 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Mar 13 12:37:25.552094 master-0 systemd[1]: Started Kubernetes Kubelet. Mar 13 12:37:25.554498 master-0 kubenswrapper[7518]: I0313 12:37:25.554070 7518 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Mar 13 12:37:25.554498 master-0 kubenswrapper[7518]: I0313 12:37:25.554377 7518 server_v1.go:47] "podresources" method="list" useActivePods=true Mar 13 12:37:25.554601 master-0 kubenswrapper[7518]: I0313 12:37:25.554534 7518 server.go:449] "Adding debug handlers to kubelet server" Mar 13 12:37:25.555113 master-0 kubenswrapper[7518]: I0313 12:37:25.554814 7518 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Mar 13 12:37:25.560838 master-0 kubenswrapper[7518]: I0313 12:37:25.560796 7518 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate rotation is enabled Mar 13 12:37:25.560838 master-0 kubenswrapper[7518]: I0313 12:37:25.560840 7518 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Mar 13 12:37:25.561115 master-0 kubenswrapper[7518]: I0313 12:37:25.561076 7518 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Mar 13 12:37:25.561322 master-0 kubenswrapper[7518]: I0313 12:37:25.561294 7518 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Mar 13 12:37:25.563090 master-0 kubenswrapper[7518]: I0313 12:37:25.563054 7518 volume_manager.go:287] "The desired_state_of_world populator starts" Mar 13 
12:37:25.563090 master-0 kubenswrapper[7518]: I0313 12:37:25.563079 7518 volume_manager.go:289] "Starting Kubelet Volume Manager" Mar 13 12:37:25.563302 master-0 kubenswrapper[7518]: I0313 12:37:25.563276 7518 desired_state_of_world_populator.go:147] "Desired state populator starts to run" Mar 13 12:37:25.564707 master-0 kubenswrapper[7518]: I0313 12:37:25.564678 7518 factory.go:55] Registering systemd factory Mar 13 12:37:25.564707 master-0 kubenswrapper[7518]: I0313 12:37:25.564705 7518 factory.go:221] Registration of the systemd container factory successfully Mar 13 12:37:25.565128 master-0 kubenswrapper[7518]: I0313 12:37:25.564994 7518 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-03-14 12:26:40 +0000 UTC, rotation deadline is 2026-03-14 06:50:36.481860577 +0000 UTC Mar 13 12:37:25.565177 master-0 kubenswrapper[7518]: I0313 12:37:25.565168 7518 certificate_manager.go:356] kubernetes.io/kubelet-serving: Waiting 18h13m10.916696218s for next certificate rotation Mar 13 12:37:25.566294 master-0 kubenswrapper[7518]: I0313 12:37:25.566274 7518 factory.go:153] Registering CRI-O factory Mar 13 12:37:25.566406 master-0 kubenswrapper[7518]: I0313 12:37:25.566297 7518 factory.go:221] Registration of the crio container factory successfully Mar 13 12:37:25.568039 master-0 kubenswrapper[7518]: I0313 12:37:25.566466 7518 factory.go:219] Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot unix dial containerd api service: dial unix /run/containerd/containerd.sock: connect: no such file or directory Mar 13 12:37:25.568039 master-0 kubenswrapper[7518]: I0313 12:37:25.566495 7518 factory.go:103] Registering Raw factory Mar 13 12:37:25.568039 master-0 kubenswrapper[7518]: I0313 12:37:25.566508 7518 manager.go:1196] Started watching for new ooms in manager Mar 13 12:37:25.568039 master-0 kubenswrapper[7518]: I0313 12:37:25.566552 7518 reflector.go:368] Caches populated for 
*v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Mar 13 12:37:25.568039 master-0 kubenswrapper[7518]: I0313 12:37:25.566983 7518 manager.go:319] Starting recovery of all containers Mar 13 12:37:25.569526 master-0 kubenswrapper[7518]: I0313 12:37:25.569424 7518 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="034aaf8e-95df-4171-bae4-e7abe58d15f7" volumeName="kubernetes.io/projected/034aaf8e-95df-4171-bae4-e7abe58d15f7-kube-api-access-5w5r2" seLinuxMountContext="" Mar 13 12:37:25.569526 master-0 kubenswrapper[7518]: I0313 12:37:25.569514 7518 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="08e2bc8e-ca80-454c-81dc-211d122e32e0" volumeName="kubernetes.io/configmap/08e2bc8e-ca80-454c-81dc-211d122e32e0-iptables-alerter-script" seLinuxMountContext="" Mar 13 12:37:25.569675 master-0 kubenswrapper[7518]: I0313 12:37:25.569530 7518 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="15b592d6-3c48-45d4-9172-d28632ae8995" volumeName="kubernetes.io/secret/15b592d6-3c48-45d4-9172-d28632ae8995-serving-cert" seLinuxMountContext="" Mar 13 12:37:25.569675 master-0 kubenswrapper[7518]: I0313 12:37:25.569588 7518 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="29b6aa89-0416-4595-9deb-10b290521d86" volumeName="kubernetes.io/projected/29b6aa89-0416-4595-9deb-10b290521d86-kube-api-access-cbtjs" seLinuxMountContext="" Mar 13 12:37:25.569675 master-0 kubenswrapper[7518]: I0313 12:37:25.569604 7518 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ce3a655a-0684-4bc5-ac36-5878507537c7" volumeName="kubernetes.io/configmap/ce3a655a-0684-4bc5-ac36-5878507537c7-cni-binary-copy" seLinuxMountContext="" Mar 13 12:37:25.569675 master-0 kubenswrapper[7518]: I0313 12:37:25.569615 7518 reconstruct.go:130] "Volume is marked as uncertain and 
added into the actual state" pod="" podName="ec5ec2e2-f7b3-43a1-87da-fbbe0ee5b118" volumeName="kubernetes.io/secret/ec5ec2e2-f7b3-43a1-87da-fbbe0ee5b118-serving-cert" seLinuxMountContext="" Mar 13 12:37:25.569675 master-0 kubenswrapper[7518]: I0313 12:37:25.569626 7518 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="10944f9c-8ce9-44e6-9c36-a0ea19d8cae3" volumeName="kubernetes.io/projected/10944f9c-8ce9-44e6-9c36-a0ea19d8cae3-kube-api-access-zbk4f" seLinuxMountContext="" Mar 13 12:37:25.570625 master-0 kubenswrapper[7518]: I0313 12:37:25.569669 7518 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="152689b1-5875-4a9a-bb25-bee858523168" volumeName="kubernetes.io/configmap/152689b1-5875-4a9a-bb25-bee858523168-cni-binary-copy" seLinuxMountContext="" Mar 13 12:37:25.570625 master-0 kubenswrapper[7518]: I0313 12:37:25.569707 7518 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="2f79578c-bbfb-4968-893a-730deb4c01f9" volumeName="kubernetes.io/projected/2f79578c-bbfb-4968-893a-730deb4c01f9-kube-api-access-f9hks" seLinuxMountContext="" Mar 13 12:37:25.570625 master-0 kubenswrapper[7518]: I0313 12:37:25.569752 7518 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bcf05594-4c10-4b54-a47c-d55e323f1f87" volumeName="kubernetes.io/projected/bcf05594-4c10-4b54-a47c-d55e323f1f87-bound-sa-token" seLinuxMountContext="" Mar 13 12:37:25.570625 master-0 kubenswrapper[7518]: I0313 12:37:25.569766 7518 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d11f8baa-6e8e-4ac0-9b23-1c44efd0ab2a" volumeName="kubernetes.io/configmap/d11f8baa-6e8e-4ac0-9b23-1c44efd0ab2a-trusted-ca-bundle" seLinuxMountContext="" Mar 13 12:37:25.570625 master-0 kubenswrapper[7518]: I0313 12:37:25.569778 7518 reconstruct.go:130] "Volume is marked as uncertain and added 
into the actual state" pod="" podName="1f43b4e7-5cd1-46d2-a02e-0d846b2e5182" volumeName="kubernetes.io/configmap/1f43b4e7-5cd1-46d2-a02e-0d846b2e5182-ovnkube-identity-cm" seLinuxMountContext="" Mar 13 12:37:25.570625 master-0 kubenswrapper[7518]: I0313 12:37:25.569790 7518 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d11f8baa-6e8e-4ac0-9b23-1c44efd0ab2a" volumeName="kubernetes.io/configmap/d11f8baa-6e8e-4ac0-9b23-1c44efd0ab2a-config" seLinuxMountContext="" Mar 13 12:37:25.570625 master-0 kubenswrapper[7518]: I0313 12:37:25.569807 7518 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d3d998ee-b26f-4e30-83bc-f94f8c68060a" volumeName="kubernetes.io/configmap/d3d998ee-b26f-4e30-83bc-f94f8c68060a-marketplace-trusted-ca" seLinuxMountContext="" Mar 13 12:37:25.570625 master-0 kubenswrapper[7518]: I0313 12:37:25.569819 7518 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d6226325-c4d9-497e-8d19-a71adc66c5ac" volumeName="kubernetes.io/configmap/d6226325-c4d9-497e-8d19-a71adc66c5ac-ovnkube-config" seLinuxMountContext="" Mar 13 12:37:25.570625 master-0 kubenswrapper[7518]: I0313 12:37:25.569832 7518 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ec5ec2e2-f7b3-43a1-87da-fbbe0ee5b118" volumeName="kubernetes.io/projected/ec5ec2e2-f7b3-43a1-87da-fbbe0ee5b118-kube-api-access" seLinuxMountContext="" Mar 13 12:37:25.570625 master-0 kubenswrapper[7518]: I0313 12:37:25.569865 7518 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f0803181-4e37-43fa-8ddc-9c76d3f61817" volumeName="kubernetes.io/empty-dir/f0803181-4e37-43fa-8ddc-9c76d3f61817-available-featuregates" seLinuxMountContext="" Mar 13 12:37:25.570625 master-0 kubenswrapper[7518]: I0313 12:37:25.569876 7518 reconstruct.go:130] "Volume is marked as uncertain and added into 
the actual state" pod="" podName="034aaf8e-95df-4171-bae4-e7abe58d15f7" volumeName="kubernetes.io/configmap/034aaf8e-95df-4171-bae4-e7abe58d15f7-config" seLinuxMountContext="" Mar 13 12:37:25.570625 master-0 kubenswrapper[7518]: I0313 12:37:25.569888 7518 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="152689b1-5875-4a9a-bb25-bee858523168" volumeName="kubernetes.io/projected/152689b1-5875-4a9a-bb25-bee858523168-kube-api-access-km69t" seLinuxMountContext="" Mar 13 12:37:25.570625 master-0 kubenswrapper[7518]: I0313 12:37:25.569901 7518 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4dd0fc2f-f2ee-4447-a747-04a178288cf0" volumeName="kubernetes.io/secret/4dd0fc2f-f2ee-4447-a747-04a178288cf0-metrics-tls" seLinuxMountContext="" Mar 13 12:37:25.570625 master-0 kubenswrapper[7518]: I0313 12:37:25.569912 7518 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d3d998ee-b26f-4e30-83bc-f94f8c68060a" volumeName="kubernetes.io/projected/d3d998ee-b26f-4e30-83bc-f94f8c68060a-kube-api-access-x5nb7" seLinuxMountContext="" Mar 13 12:37:25.570625 master-0 kubenswrapper[7518]: I0313 12:37:25.569926 7518 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f5775266-5e58-44ed-81cb-dfe3faf38add" volumeName="kubernetes.io/projected/f5775266-5e58-44ed-81cb-dfe3faf38add-kube-api-access-9q2qc" seLinuxMountContext="" Mar 13 12:37:25.570625 master-0 kubenswrapper[7518]: I0313 12:37:25.569936 7518 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="034aaf8e-95df-4171-bae4-e7abe58d15f7" volumeName="kubernetes.io/secret/034aaf8e-95df-4171-bae4-e7abe58d15f7-serving-cert" seLinuxMountContext="" Mar 13 12:37:25.570625 master-0 kubenswrapper[7518]: I0313 12:37:25.569947 7518 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" 
pod="" podName="3020d236-03e0-4916-97dd-f1085632ca43" volumeName="kubernetes.io/configmap/3020d236-03e0-4916-97dd-f1085632ca43-trusted-ca" seLinuxMountContext="" Mar 13 12:37:25.570625 master-0 kubenswrapper[7518]: I0313 12:37:25.569960 7518 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="77ef7e49-eb85-4f5e-94d3-a6a8619a6243" volumeName="kubernetes.io/projected/77ef7e49-eb85-4f5e-94d3-a6a8619a6243-kube-api-access" seLinuxMountContext="" Mar 13 12:37:25.570625 master-0 kubenswrapper[7518]: I0313 12:37:25.569972 7518 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d6226325-c4d9-497e-8d19-a71adc66c5ac" volumeName="kubernetes.io/secret/d6226325-c4d9-497e-8d19-a71adc66c5ac-ovn-node-metrics-cert" seLinuxMountContext="" Mar 13 12:37:25.570625 master-0 kubenswrapper[7518]: I0313 12:37:25.570016 7518 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f0803181-4e37-43fa-8ddc-9c76d3f61817" volumeName="kubernetes.io/projected/f0803181-4e37-43fa-8ddc-9c76d3f61817-kube-api-access-lwkdj" seLinuxMountContext="" Mar 13 12:37:25.570625 master-0 kubenswrapper[7518]: I0313 12:37:25.570033 7518 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3d653e1a-5903-4a02-9357-df145f028c0d" volumeName="kubernetes.io/projected/3d653e1a-5903-4a02-9357-df145f028c0d-kube-api-access-6x8kz" seLinuxMountContext="" Mar 13 12:37:25.570625 master-0 kubenswrapper[7518]: I0313 12:37:25.570046 7518 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="604456a0-4997-43bc-87ef-283a002111fe" volumeName="kubernetes.io/projected/604456a0-4997-43bc-87ef-283a002111fe-kube-api-access-8sk7j" seLinuxMountContext="" Mar 13 12:37:25.570625 master-0 kubenswrapper[7518]: I0313 12:37:25.570059 7518 reconstruct.go:130] "Volume is marked as uncertain and added into the actual 
state" pod="" podName="bcf05594-4c10-4b54-a47c-d55e323f1f87" volumeName="kubernetes.io/configmap/bcf05594-4c10-4b54-a47c-d55e323f1f87-trusted-ca" seLinuxMountContext="" Mar 13 12:37:25.570625 master-0 kubenswrapper[7518]: I0313 12:37:25.570072 7518 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d11f8baa-6e8e-4ac0-9b23-1c44efd0ab2a" volumeName="kubernetes.io/secret/d11f8baa-6e8e-4ac0-9b23-1c44efd0ab2a-serving-cert" seLinuxMountContext="" Mar 13 12:37:25.570625 master-0 kubenswrapper[7518]: I0313 12:37:25.570084 7518 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="089cfabc-9d3d-4260-bb16-8b5eaf73b3fa" volumeName="kubernetes.io/projected/089cfabc-9d3d-4260-bb16-8b5eaf73b3fa-kube-api-access-vg8tz" seLinuxMountContext="" Mar 13 12:37:25.570625 master-0 kubenswrapper[7518]: I0313 12:37:25.570096 7518 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="15b592d6-3c48-45d4-9172-d28632ae8995" volumeName="kubernetes.io/configmap/15b592d6-3c48-45d4-9172-d28632ae8995-etcd-service-ca" seLinuxMountContext="" Mar 13 12:37:25.570625 master-0 kubenswrapper[7518]: I0313 12:37:25.570108 7518 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="15b592d6-3c48-45d4-9172-d28632ae8995" volumeName="kubernetes.io/projected/15b592d6-3c48-45d4-9172-d28632ae8995-kube-api-access-clrz7" seLinuxMountContext="" Mar 13 12:37:25.570625 master-0 kubenswrapper[7518]: I0313 12:37:25.570119 7518 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="152689b1-5875-4a9a-bb25-bee858523168" volumeName="kubernetes.io/configmap/152689b1-5875-4a9a-bb25-bee858523168-cni-sysctl-allowlist" seLinuxMountContext="" Mar 13 12:37:25.570625 master-0 kubenswrapper[7518]: I0313 12:37:25.570158 7518 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" 
pod="" podName="5ae41cff-0949-47f8-aae9-ae133191476d" volumeName="kubernetes.io/projected/5ae41cff-0949-47f8-aae9-ae133191476d-kube-api-access-mlvjp" seLinuxMountContext="" Mar 13 12:37:25.570625 master-0 kubenswrapper[7518]: I0313 12:37:25.570174 7518 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="604456a0-4997-43bc-87ef-283a002111fe" volumeName="kubernetes.io/configmap/604456a0-4997-43bc-87ef-283a002111fe-telemetry-config" seLinuxMountContext="" Mar 13 12:37:25.570625 master-0 kubenswrapper[7518]: I0313 12:37:25.570212 7518 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d11f8baa-6e8e-4ac0-9b23-1c44efd0ab2a" volumeName="kubernetes.io/configmap/d11f8baa-6e8e-4ac0-9b23-1c44efd0ab2a-service-ca-bundle" seLinuxMountContext="" Mar 13 12:37:25.570625 master-0 kubenswrapper[7518]: I0313 12:37:25.570232 7518 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1f43b4e7-5cd1-46d2-a02e-0d846b2e5182" volumeName="kubernetes.io/projected/1f43b4e7-5cd1-46d2-a02e-0d846b2e5182-kube-api-access-brzd4" seLinuxMountContext="" Mar 13 12:37:25.570625 master-0 kubenswrapper[7518]: I0313 12:37:25.570246 7518 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ec5ec2e2-f7b3-43a1-87da-fbbe0ee5b118" volumeName="kubernetes.io/configmap/ec5ec2e2-f7b3-43a1-87da-fbbe0ee5b118-config" seLinuxMountContext="" Mar 13 12:37:25.570625 master-0 kubenswrapper[7518]: I0313 12:37:25.570258 7518 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f39d7f76-0075-44c3-9101-eb2607cb176a" volumeName="kubernetes.io/projected/f39d7f76-0075-44c3-9101-eb2607cb176a-kube-api-access" seLinuxMountContext="" Mar 13 12:37:25.570625 master-0 kubenswrapper[7518]: I0313 12:37:25.570298 7518 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="089cfabc-9d3d-4260-bb16-8b5eaf73b3fa" volumeName="kubernetes.io/secret/089cfabc-9d3d-4260-bb16-8b5eaf73b3fa-serving-cert" seLinuxMountContext="" Mar 13 12:37:25.570625 master-0 kubenswrapper[7518]: I0313 12:37:25.570316 7518 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="2f79578c-bbfb-4968-893a-730deb4c01f9" volumeName="kubernetes.io/projected/2f79578c-bbfb-4968-893a-730deb4c01f9-bound-sa-token" seLinuxMountContext="" Mar 13 12:37:25.570625 master-0 kubenswrapper[7518]: I0313 12:37:25.570327 7518 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4e279dcc-35e2-4503-babc-978ac208c150" volumeName="kubernetes.io/projected/4e279dcc-35e2-4503-babc-978ac208c150-kube-api-access-bwjz5" seLinuxMountContext="" Mar 13 12:37:25.570625 master-0 kubenswrapper[7518]: I0313 12:37:25.570341 7518 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="887d261f-d07f-4ef0-a230-6568f47acf4d" volumeName="kubernetes.io/secret/887d261f-d07f-4ef0-a230-6568f47acf4d-cluster-olm-operator-serving-cert" seLinuxMountContext="" Mar 13 12:37:25.570625 master-0 kubenswrapper[7518]: I0313 12:37:25.570353 7518 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8c62b15f-001a-4b64-b85f-348aefde5d1b" volumeName="kubernetes.io/configmap/8c62b15f-001a-4b64-b85f-348aefde5d1b-config" seLinuxMountContext="" Mar 13 12:37:25.570625 master-0 kubenswrapper[7518]: I0313 12:37:25.570364 7518 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ce3a655a-0684-4bc5-ac36-5878507537c7" volumeName="kubernetes.io/configmap/ce3a655a-0684-4bc5-ac36-5878507537c7-multus-daemon-config" seLinuxMountContext="" Mar 13 12:37:25.570625 master-0 kubenswrapper[7518]: I0313 12:37:25.570375 7518 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="f5775266-5e58-44ed-81cb-dfe3faf38add" volumeName="kubernetes.io/secret/f5775266-5e58-44ed-81cb-dfe3faf38add-serving-cert" seLinuxMountContext="" Mar 13 12:37:25.570625 master-0 kubenswrapper[7518]: I0313 12:37:25.570386 7518 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="089cfabc-9d3d-4260-bb16-8b5eaf73b3fa" volumeName="kubernetes.io/configmap/089cfabc-9d3d-4260-bb16-8b5eaf73b3fa-config" seLinuxMountContext="" Mar 13 12:37:25.570625 master-0 kubenswrapper[7518]: I0313 12:37:25.570397 7518 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="08e2bc8e-ca80-454c-81dc-211d122e32e0" volumeName="kubernetes.io/projected/08e2bc8e-ca80-454c-81dc-211d122e32e0-kube-api-access-xstz5" seLinuxMountContext="" Mar 13 12:37:25.570625 master-0 kubenswrapper[7518]: I0313 12:37:25.570408 7518 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="15b592d6-3c48-45d4-9172-d28632ae8995" volumeName="kubernetes.io/secret/15b592d6-3c48-45d4-9172-d28632ae8995-etcd-client" seLinuxMountContext="" Mar 13 12:37:25.570625 master-0 kubenswrapper[7518]: I0313 12:37:25.570419 7518 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4dd0fc2f-f2ee-4447-a747-04a178288cf0" volumeName="kubernetes.io/projected/4dd0fc2f-f2ee-4447-a747-04a178288cf0-kube-api-access-fnw9d" seLinuxMountContext="" Mar 13 12:37:25.570625 master-0 kubenswrapper[7518]: I0313 12:37:25.570466 7518 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5ae41cff-0949-47f8-aae9-ae133191476d" volumeName="kubernetes.io/configmap/5ae41cff-0949-47f8-aae9-ae133191476d-env-overrides" seLinuxMountContext="" Mar 13 12:37:25.570625 master-0 kubenswrapper[7518]: I0313 12:37:25.570482 7518 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="77ef7e49-eb85-4f5e-94d3-a6a8619a6243" volumeName="kubernetes.io/secret/77ef7e49-eb85-4f5e-94d3-a6a8619a6243-serving-cert" seLinuxMountContext="" Mar 13 12:37:25.570625 master-0 kubenswrapper[7518]: I0313 12:37:25.570494 7518 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="15b592d6-3c48-45d4-9172-d28632ae8995" volumeName="kubernetes.io/configmap/15b592d6-3c48-45d4-9172-d28632ae8995-etcd-ca" seLinuxMountContext="" Mar 13 12:37:25.570625 master-0 kubenswrapper[7518]: I0313 12:37:25.570507 7518 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5ae41cff-0949-47f8-aae9-ae133191476d" volumeName="kubernetes.io/secret/5ae41cff-0949-47f8-aae9-ae133191476d-ovn-control-plane-metrics-cert" seLinuxMountContext="" Mar 13 12:37:25.570625 master-0 kubenswrapper[7518]: I0313 12:37:25.570521 7518 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f39d7f76-0075-44c3-9101-eb2607cb176a" volumeName="kubernetes.io/configmap/f39d7f76-0075-44c3-9101-eb2607cb176a-service-ca" seLinuxMountContext="" Mar 13 12:37:25.570625 master-0 kubenswrapper[7518]: I0313 12:37:25.570533 7518 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0da84bb7-e936-49a0-96b5-614a1305d6a4" volumeName="kubernetes.io/configmap/0da84bb7-e936-49a0-96b5-614a1305d6a4-config" seLinuxMountContext="" Mar 13 12:37:25.570625 master-0 kubenswrapper[7518]: I0313 12:37:25.570545 7518 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="13f32761-b386-4f93-b3c0-b16ea53d338a" volumeName="kubernetes.io/projected/13f32761-b386-4f93-b3c0-b16ea53d338a-kube-api-access-m2p67" seLinuxMountContext="" Mar 13 12:37:25.570625 master-0 kubenswrapper[7518]: I0313 12:37:25.570557 7518 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="2f79578c-bbfb-4968-893a-730deb4c01f9" volumeName="kubernetes.io/configmap/2f79578c-bbfb-4968-893a-730deb4c01f9-trusted-ca" seLinuxMountContext="" Mar 13 12:37:25.570625 master-0 kubenswrapper[7518]: I0313 12:37:25.570569 7518 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5ae41cff-0949-47f8-aae9-ae133191476d" volumeName="kubernetes.io/configmap/5ae41cff-0949-47f8-aae9-ae133191476d-ovnkube-config" seLinuxMountContext="" Mar 13 12:37:25.570625 master-0 kubenswrapper[7518]: I0313 12:37:25.570581 7518 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8c62b15f-001a-4b64-b85f-348aefde5d1b" volumeName="kubernetes.io/secret/8c62b15f-001a-4b64-b85f-348aefde5d1b-serving-cert" seLinuxMountContext="" Mar 13 12:37:25.570625 master-0 kubenswrapper[7518]: I0313 12:37:25.570593 7518 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bcf05594-4c10-4b54-a47c-d55e323f1f87" volumeName="kubernetes.io/projected/bcf05594-4c10-4b54-a47c-d55e323f1f87-kube-api-access-j4hd6" seLinuxMountContext="" Mar 13 12:37:25.570625 master-0 kubenswrapper[7518]: I0313 12:37:25.570604 7518 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d11f8baa-6e8e-4ac0-9b23-1c44efd0ab2a" volumeName="kubernetes.io/projected/d11f8baa-6e8e-4ac0-9b23-1c44efd0ab2a-kube-api-access-m4tnq" seLinuxMountContext="" Mar 13 12:37:25.570625 master-0 kubenswrapper[7518]: I0313 12:37:25.570616 7518 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ce3a655a-0684-4bc5-ac36-5878507537c7" volumeName="kubernetes.io/projected/ce3a655a-0684-4bc5-ac36-5878507537c7-kube-api-access-vgbvr" seLinuxMountContext="" Mar 13 12:37:25.570625 master-0 kubenswrapper[7518]: I0313 12:37:25.570627 7518 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="152689b1-5875-4a9a-bb25-bee858523168" volumeName="kubernetes.io/configmap/152689b1-5875-4a9a-bb25-bee858523168-whereabouts-configmap" seLinuxMountContext="" Mar 13 12:37:25.570625 master-0 kubenswrapper[7518]: I0313 12:37:25.570640 7518 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1f43b4e7-5cd1-46d2-a02e-0d846b2e5182" volumeName="kubernetes.io/configmap/1f43b4e7-5cd1-46d2-a02e-0d846b2e5182-env-overrides" seLinuxMountContext="" Mar 13 12:37:25.570625 master-0 kubenswrapper[7518]: I0313 12:37:25.570651 7518 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1f43b4e7-5cd1-46d2-a02e-0d846b2e5182" volumeName="kubernetes.io/secret/1f43b4e7-5cd1-46d2-a02e-0d846b2e5182-webhook-cert" seLinuxMountContext="" Mar 13 12:37:25.570625 master-0 kubenswrapper[7518]: I0313 12:37:25.570664 7518 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3020d236-03e0-4916-97dd-f1085632ca43" volumeName="kubernetes.io/projected/3020d236-03e0-4916-97dd-f1085632ca43-kube-api-access-c24hd" seLinuxMountContext="" Mar 13 12:37:25.570625 master-0 kubenswrapper[7518]: I0313 12:37:25.570675 7518 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="887d261f-d07f-4ef0-a230-6568f47acf4d" volumeName="kubernetes.io/empty-dir/887d261f-d07f-4ef0-a230-6568f47acf4d-operand-assets" seLinuxMountContext="" Mar 13 12:37:25.570625 master-0 kubenswrapper[7518]: I0313 12:37:25.570694 7518 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="887d261f-d07f-4ef0-a230-6568f47acf4d" volumeName="kubernetes.io/projected/887d261f-d07f-4ef0-a230-6568f47acf4d-kube-api-access-pmfxj" seLinuxMountContext="" Mar 13 12:37:25.570625 master-0 kubenswrapper[7518]: I0313 12:37:25.570706 7518 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="8c62b15f-001a-4b64-b85f-348aefde5d1b" volumeName="kubernetes.io/projected/8c62b15f-001a-4b64-b85f-348aefde5d1b-kube-api-access-8cf2v" seLinuxMountContext="" Mar 13 12:37:25.574318 master-0 kubenswrapper[7518]: I0313 12:37:25.570757 7518 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d6226325-c4d9-497e-8d19-a71adc66c5ac" volumeName="kubernetes.io/configmap/d6226325-c4d9-497e-8d19-a71adc66c5ac-env-overrides" seLinuxMountContext="" Mar 13 12:37:25.574318 master-0 kubenswrapper[7518]: I0313 12:37:25.570772 7518 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0da84bb7-e936-49a0-96b5-614a1305d6a4" volumeName="kubernetes.io/projected/0da84bb7-e936-49a0-96b5-614a1305d6a4-kube-api-access" seLinuxMountContext="" Mar 13 12:37:25.574318 master-0 kubenswrapper[7518]: I0313 12:37:25.570783 7518 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0da84bb7-e936-49a0-96b5-614a1305d6a4" volumeName="kubernetes.io/secret/0da84bb7-e936-49a0-96b5-614a1305d6a4-serving-cert" seLinuxMountContext="" Mar 13 12:37:25.574318 master-0 kubenswrapper[7518]: I0313 12:37:25.570828 7518 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4c0b18db-06ad-4d58-a353-f6fd96309dea" volumeName="kubernetes.io/projected/4c0b18db-06ad-4d58-a353-f6fd96309dea-kube-api-access-9psfn" seLinuxMountContext="" Mar 13 12:37:25.574318 master-0 kubenswrapper[7518]: I0313 12:37:25.570852 7518 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d5a19b80-d488-46d3-a4a8-0b80361077e1" volumeName="kubernetes.io/projected/d5a19b80-d488-46d3-a4a8-0b80361077e1-kube-api-access-p8hcd" seLinuxMountContext="" Mar 13 12:37:25.574318 master-0 kubenswrapper[7518]: I0313 12:37:25.570882 7518 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="f5775266-5e58-44ed-81cb-dfe3faf38add" volumeName="kubernetes.io/configmap/f5775266-5e58-44ed-81cb-dfe3faf38add-config" seLinuxMountContext="" Mar 13 12:37:25.574318 master-0 kubenswrapper[7518]: I0313 12:37:25.570910 7518 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="15b592d6-3c48-45d4-9172-d28632ae8995" volumeName="kubernetes.io/configmap/15b592d6-3c48-45d4-9172-d28632ae8995-config" seLinuxMountContext="" Mar 13 12:37:25.574318 master-0 kubenswrapper[7518]: I0313 12:37:25.570921 7518 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="77ef7e49-eb85-4f5e-94d3-a6a8619a6243" volumeName="kubernetes.io/configmap/77ef7e49-eb85-4f5e-94d3-a6a8619a6243-config" seLinuxMountContext="" Mar 13 12:37:25.574318 master-0 kubenswrapper[7518]: I0313 12:37:25.570933 7518 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d6226325-c4d9-497e-8d19-a71adc66c5ac" volumeName="kubernetes.io/configmap/d6226325-c4d9-497e-8d19-a71adc66c5ac-ovnkube-script-lib" seLinuxMountContext="" Mar 13 12:37:25.574318 master-0 kubenswrapper[7518]: I0313 12:37:25.570956 7518 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d6226325-c4d9-497e-8d19-a71adc66c5ac" volumeName="kubernetes.io/projected/d6226325-c4d9-497e-8d19-a71adc66c5ac-kube-api-access-4j5fc" seLinuxMountContext="" Mar 13 12:37:25.574318 master-0 kubenswrapper[7518]: I0313 12:37:25.570971 7518 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f0803181-4e37-43fa-8ddc-9c76d3f61817" volumeName="kubernetes.io/secret/f0803181-4e37-43fa-8ddc-9c76d3f61817-serving-cert" seLinuxMountContext="" Mar 13 12:37:25.574318 master-0 kubenswrapper[7518]: I0313 12:37:25.570982 7518 reconstruct.go:97] "Volume reconstruction finished" Mar 13 12:37:25.574318 master-0 kubenswrapper[7518]: I0313 12:37:25.570995 
7518 reconciler.go:26] "Reconciler: start to sync state" Mar 13 12:37:25.574824 master-0 kubenswrapper[7518]: I0313 12:37:25.574746 7518 reconstruct.go:205] "DevicePaths of reconstructed volumes updated" Mar 13 12:37:25.595531 master-0 kubenswrapper[7518]: I0313 12:37:25.595463 7518 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Mar 13 12:37:25.596871 master-0 kubenswrapper[7518]: I0313 12:37:25.596830 7518 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Mar 13 12:37:25.596947 master-0 kubenswrapper[7518]: I0313 12:37:25.596905 7518 status_manager.go:217] "Starting to sync pod status with apiserver" Mar 13 12:37:25.596947 master-0 kubenswrapper[7518]: I0313 12:37:25.596925 7518 kubelet.go:2335] "Starting kubelet main sync loop" Mar 13 12:37:25.597017 master-0 kubenswrapper[7518]: E0313 12:37:25.596991 7518 kubelet.go:2359] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Mar 13 12:37:25.598983 master-0 kubenswrapper[7518]: I0313 12:37:25.598912 7518 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Mar 13 12:37:25.611740 master-0 kubenswrapper[7518]: I0313 12:37:25.611688 7518 generic.go:334] "Generic (PLEG): container finished" podID="72ba330e-35ca-4d05-8641-a880bf30c0e7" containerID="1af7a53388bbd243cf9640d283230185be1782a2bdb43e5850dd6d341044a303" exitCode=0 Mar 13 12:37:25.615298 master-0 kubenswrapper[7518]: I0313 12:37:25.615240 7518 generic.go:334] "Generic (PLEG): container finished" podID="3dca6a91-7c31-44d2-89eb-c2c5f941e983" containerID="3c4695e1552ba9205d33b8d7524c5a76469234a9b454c27b01c396a95436c2b9" exitCode=0 Mar 13 12:37:25.629216 master-0 kubenswrapper[7518]: I0313 12:37:25.629176 7518 generic.go:334] "Generic (PLEG): container finished" podID="152689b1-5875-4a9a-bb25-bee858523168" 
containerID="1e34a2d26492b3df232459c166da8fc0ebb8dbb2c47bdf38857a1fe49a541e66" exitCode=0 Mar 13 12:37:25.629216 master-0 kubenswrapper[7518]: I0313 12:37:25.629206 7518 generic.go:334] "Generic (PLEG): container finished" podID="152689b1-5875-4a9a-bb25-bee858523168" containerID="e1467141e26d577aa41ff200895deb27986a626bccdf77e649db90ad9f882528" exitCode=0 Mar 13 12:37:25.629216 master-0 kubenswrapper[7518]: I0313 12:37:25.629215 7518 generic.go:334] "Generic (PLEG): container finished" podID="152689b1-5875-4a9a-bb25-bee858523168" containerID="7c57d841a99a5e2cd1a42f48f3248a346104a0d155b92d640bd1a07ffd81b262" exitCode=0 Mar 13 12:37:25.629216 master-0 kubenswrapper[7518]: I0313 12:37:25.629222 7518 generic.go:334] "Generic (PLEG): container finished" podID="152689b1-5875-4a9a-bb25-bee858523168" containerID="ec83ba0b787947b6a285aac754b05fb294210ab326a2dc10a91b47f74ad8a542" exitCode=0 Mar 13 12:37:25.629487 master-0 kubenswrapper[7518]: I0313 12:37:25.629229 7518 generic.go:334] "Generic (PLEG): container finished" podID="152689b1-5875-4a9a-bb25-bee858523168" containerID="134471a7b38bb354ac04a0f22e311d7bea5264435a237eafabc1ded333b762d2" exitCode=0 Mar 13 12:37:25.629487 master-0 kubenswrapper[7518]: I0313 12:37:25.629258 7518 generic.go:334] "Generic (PLEG): container finished" podID="152689b1-5875-4a9a-bb25-bee858523168" containerID="ae6f8708327259b51cf004983ebe879d244aef1bf9515e029c5674f436c5c187" exitCode=0 Mar 13 12:37:25.637209 master-0 kubenswrapper[7518]: I0313 12:37:25.637176 7518 generic.go:334] "Generic (PLEG): container finished" podID="d6226325-c4d9-497e-8d19-a71adc66c5ac" containerID="cf1959de89eea014cb32ef2948333cb70b4954efbb9bc7376a990fcbbdb918ce" exitCode=0 Mar 13 12:37:25.644355 master-0 kubenswrapper[7518]: I0313 12:37:25.644316 7518 generic.go:334] "Generic (PLEG): container finished" podID="f78c05e1499b533b83f091333d61f045" containerID="9976faf535c3de998191b8eb2224b47994a3c8d30cd6f57ea4e1d4aff13da677" exitCode=1 Mar 13 12:37:25.648269 master-0 
kubenswrapper[7518]: I0313 12:37:25.648244 7518 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-config-operator_kube-rbac-proxy-crio-master-0_e9add8df47182fc2eaf8cd78016ebe72/kube-rbac-proxy-crio/2.log" Mar 13 12:37:25.648638 master-0 kubenswrapper[7518]: I0313 12:37:25.648606 7518 generic.go:334] "Generic (PLEG): container finished" podID="e9add8df47182fc2eaf8cd78016ebe72" containerID="9c887f2b6cfcfcc1f3ea186daee81cbe3bce3c155cfd4e9bbac88f712c489339" exitCode=1 Mar 13 12:37:25.648638 master-0 kubenswrapper[7518]: I0313 12:37:25.648635 7518 generic.go:334] "Generic (PLEG): container finished" podID="e9add8df47182fc2eaf8cd78016ebe72" containerID="d97124951202d97d2b090945a6d5c9c5add42850ba499052ed07d95631932324" exitCode=0 Mar 13 12:37:25.657501 master-0 kubenswrapper[7518]: I0313 12:37:25.657467 7518 generic.go:334] "Generic (PLEG): container finished" podID="5f77c8e18b751d90bc0dfe2d4e304050" containerID="a3279720d4c802c349d222cf1b96260384211d9adc25c84b50972505c95ca211" exitCode=0 Mar 13 12:37:25.695441 master-0 kubenswrapper[7518]: I0313 12:37:25.695413 7518 manager.go:324] Recovery completed Mar 13 12:37:25.697058 master-0 kubenswrapper[7518]: E0313 12:37:25.697036 7518 kubelet.go:2359] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Mar 13 12:37:25.734950 master-0 kubenswrapper[7518]: I0313 12:37:25.734910 7518 cpu_manager.go:225] "Starting CPU manager" policy="none" Mar 13 12:37:25.734950 master-0 kubenswrapper[7518]: I0313 12:37:25.734931 7518 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s" Mar 13 12:37:25.734950 master-0 kubenswrapper[7518]: I0313 12:37:25.734950 7518 state_mem.go:36] "Initialized new in-memory state store" Mar 13 12:37:25.735249 master-0 kubenswrapper[7518]: I0313 12:37:25.735231 7518 state_mem.go:88] "Updated default CPUSet" cpuSet="" Mar 13 12:37:25.735290 master-0 kubenswrapper[7518]: I0313 12:37:25.735247 7518 state_mem.go:96] "Updated CPUSet assignments" 
assignments={} Mar 13 12:37:25.735290 master-0 kubenswrapper[7518]: I0313 12:37:25.735266 7518 state_checkpoint.go:136] "State checkpoint: restored state from checkpoint" Mar 13 12:37:25.735290 master-0 kubenswrapper[7518]: I0313 12:37:25.735272 7518 state_checkpoint.go:137] "State checkpoint: defaultCPUSet" defaultCpuSet="" Mar 13 12:37:25.735290 master-0 kubenswrapper[7518]: I0313 12:37:25.735278 7518 policy_none.go:49] "None policy: Start" Mar 13 12:37:25.736947 master-0 kubenswrapper[7518]: I0313 12:37:25.736927 7518 memory_manager.go:170] "Starting memorymanager" policy="None" Mar 13 12:37:25.736997 master-0 kubenswrapper[7518]: I0313 12:37:25.736951 7518 state_mem.go:35] "Initializing new in-memory state store" Mar 13 12:37:25.737467 master-0 kubenswrapper[7518]: I0313 12:37:25.737449 7518 state_mem.go:75] "Updated machine memory state" Mar 13 12:37:25.737504 master-0 kubenswrapper[7518]: I0313 12:37:25.737468 7518 state_checkpoint.go:82] "State checkpoint: restored state from checkpoint" Mar 13 12:37:25.747501 master-0 kubenswrapper[7518]: I0313 12:37:25.747481 7518 manager.go:334] "Starting Device Plugin manager" Mar 13 12:37:25.747575 master-0 kubenswrapper[7518]: I0313 12:37:25.747553 7518 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Mar 13 12:37:25.747611 master-0 kubenswrapper[7518]: I0313 12:37:25.747576 7518 server.go:79] "Starting device plugin registration server" Mar 13 12:37:25.748047 master-0 kubenswrapper[7518]: I0313 12:37:25.748026 7518 eviction_manager.go:189] "Eviction manager: starting control loop" Mar 13 12:37:25.748127 master-0 kubenswrapper[7518]: I0313 12:37:25.748045 7518 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Mar 13 12:37:25.748270 master-0 kubenswrapper[7518]: I0313 12:37:25.748253 7518 plugin_watcher.go:51] "Plugin Watcher Start" path="/var/lib/kubelet/plugins_registry" Mar 13 
12:37:25.748395 master-0 kubenswrapper[7518]: I0313 12:37:25.748371 7518 plugin_manager.go:116] "The desired_state_of_world populator (plugin watcher) starts" Mar 13 12:37:25.748395 master-0 kubenswrapper[7518]: I0313 12:37:25.748392 7518 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Mar 13 12:37:25.848641 master-0 kubenswrapper[7518]: I0313 12:37:25.848443 7518 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 13 12:37:25.852450 master-0 kubenswrapper[7518]: I0313 12:37:25.852421 7518 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Mar 13 12:37:25.852528 master-0 kubenswrapper[7518]: I0313 12:37:25.852459 7518 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Mar 13 12:37:25.852528 master-0 kubenswrapper[7518]: I0313 12:37:25.852470 7518 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Mar 13 12:37:25.852600 master-0 kubenswrapper[7518]: I0313 12:37:25.852528 7518 kubelet_node_status.go:76] "Attempting to register node" node="master-0" Mar 13 12:37:25.862337 master-0 kubenswrapper[7518]: I0313 12:37:25.862281 7518 kubelet_node_status.go:115] "Node was previously registered" node="master-0" Mar 13 12:37:25.862452 master-0 kubenswrapper[7518]: I0313 12:37:25.862393 7518 kubelet_node_status.go:79] "Successfully registered node" node="master-0" Mar 13 12:37:25.897748 master-0 kubenswrapper[7518]: I0313 12:37:25.897692 7518 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7bb07b8ac3a9143900e44c8646ee6fb8d832847a79c050ce5b93154ab39c7aad" Mar 13 12:37:25.897937 master-0 kubenswrapper[7518]: I0313 12:37:25.897753 7518 kubelet.go:2421] "SyncLoop ADD" source="file" 
pods=["kube-system/bootstrap-kube-scheduler-master-0","openshift-machine-config-operator/kube-rbac-proxy-crio-master-0","openshift-etcd/etcd-master-0-master-0","openshift-kube-apiserver/bootstrap-kube-apiserver-master-0","kube-system/bootstrap-kube-controller-manager-master-0"] Mar 13 12:37:25.898279 master-0 kubenswrapper[7518]: I0313 12:37:25.898195 7518 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-scheduler-master-0" event={"ID":"a1a56802af72ce1aac6b5077f1695ac0","Type":"ContainerStarted","Data":"23aef1d459d801451207b22b103d82e16b0fb29eac9febd8e8918cd59b44679c"} Mar 13 12:37:25.898279 master-0 kubenswrapper[7518]: I0313 12:37:25.898273 7518 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-scheduler-master-0" event={"ID":"a1a56802af72ce1aac6b5077f1695ac0","Type":"ContainerStarted","Data":"d54f9c86fd46be5581997805399dc61e82749fea5be883d188b4c6364d1d55b9"} Mar 13 12:37:25.898405 master-0 kubenswrapper[7518]: I0313 12:37:25.898295 7518 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="aab59fe84d74f1f2dbe3af4167877250fbae9e62f4ef0e21a64f79bf2216fbcc" Mar 13 12:37:25.898405 master-0 kubenswrapper[7518]: I0313 12:37:25.898310 7518 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="03d9961b19ee86b8273aed8f3384d8fd0d0f86ad2c87207be32ea4592b6ddf9b" Mar 13 12:37:25.898405 master-0 kubenswrapper[7518]: I0313 12:37:25.898317 7518 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0-master-0" event={"ID":"354f29997baa583b6238f7de9108ee10","Type":"ContainerStarted","Data":"c8e034500e686ef70dacdb42d92b730454c21d98abd545c3173a8492bf764cbb"} Mar 13 12:37:25.898405 master-0 kubenswrapper[7518]: I0313 12:37:25.898326 7518 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0-master-0" 
event={"ID":"354f29997baa583b6238f7de9108ee10","Type":"ContainerStarted","Data":"e408fc0e8cb4ee12255385245e6376d6aaefa9c98b225370a726fb0b9f89662c"} Mar 13 12:37:25.898405 master-0 kubenswrapper[7518]: I0313 12:37:25.898334 7518 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0-master-0" event={"ID":"354f29997baa583b6238f7de9108ee10","Type":"ContainerStarted","Data":"716ce6662fa89fc5efc984950f9c70517944c523cdede22247748de4ca23948d"} Mar 13 12:37:25.898405 master-0 kubenswrapper[7518]: I0313 12:37:25.898397 7518 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-controller-manager-master-0" event={"ID":"f78c05e1499b533b83f091333d61f045","Type":"ContainerStarted","Data":"5f035fb00c2f1c52dbc78fa55ac7bc8d27c14c42f3da11b968e1fb6e88e80856"} Mar 13 12:37:25.898405 master-0 kubenswrapper[7518]: I0313 12:37:25.898410 7518 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-controller-manager-master-0" event={"ID":"f78c05e1499b533b83f091333d61f045","Type":"ContainerStarted","Data":"982c1c225b535e0fa3c9e5b01c4c3960b52c601ea135812c4af51bc13c9b4e1a"} Mar 13 12:37:25.898597 master-0 kubenswrapper[7518]: I0313 12:37:25.898419 7518 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-controller-manager-master-0" event={"ID":"f78c05e1499b533b83f091333d61f045","Type":"ContainerDied","Data":"9976faf535c3de998191b8eb2224b47994a3c8d30cd6f57ea4e1d4aff13da677"} Mar 13 12:37:25.898597 master-0 kubenswrapper[7518]: I0313 12:37:25.898522 7518 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-controller-manager-master-0" event={"ID":"f78c05e1499b533b83f091333d61f045","Type":"ContainerStarted","Data":"9b912cc2fb7f1246b6e0fb7957cb5c167f818087772406214ca1bd3f180298fb"} Mar 13 12:37:25.898597 master-0 kubenswrapper[7518]: I0313 12:37:25.898536 7518 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" event={"ID":"e9add8df47182fc2eaf8cd78016ebe72","Type":"ContainerStarted","Data":"958b1ab7ab943f0d9820d78ce8605298936c74cbbe3326599eac945aeec4ecce"} Mar 13 12:37:25.898597 master-0 kubenswrapper[7518]: I0313 12:37:25.898545 7518 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" event={"ID":"e9add8df47182fc2eaf8cd78016ebe72","Type":"ContainerDied","Data":"9c887f2b6cfcfcc1f3ea186daee81cbe3bce3c155cfd4e9bbac88f712c489339"} Mar 13 12:37:25.898597 master-0 kubenswrapper[7518]: I0313 12:37:25.898554 7518 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" event={"ID":"e9add8df47182fc2eaf8cd78016ebe72","Type":"ContainerDied","Data":"d97124951202d97d2b090945a6d5c9c5add42850ba499052ed07d95631932324"} Mar 13 12:37:25.898597 master-0 kubenswrapper[7518]: I0313 12:37:25.898561 7518 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" event={"ID":"e9add8df47182fc2eaf8cd78016ebe72","Type":"ContainerStarted","Data":"a13f1b34007cf32fe962f7d50d2988f0f66eb3022aee3b3a767d84bde6caed30"} Mar 13 12:37:25.898597 master-0 kubenswrapper[7518]: I0313 12:37:25.898578 7518 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" event={"ID":"5f77c8e18b751d90bc0dfe2d4e304050","Type":"ContainerStarted","Data":"838f1203bfc2909f5be268d039e5903c4aada457bcd573b0395f4215bfc0c446"} Mar 13 12:37:25.898597 master-0 kubenswrapper[7518]: I0313 12:37:25.898587 7518 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" event={"ID":"5f77c8e18b751d90bc0dfe2d4e304050","Type":"ContainerStarted","Data":"f3be2171b1690f9bafcc889e55d83ff1a441baaed77d90117edebfc3db8ff2b9"} Mar 13 12:37:25.898597 master-0 kubenswrapper[7518]: 
I0313 12:37:25.898595 7518 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" event={"ID":"5f77c8e18b751d90bc0dfe2d4e304050","Type":"ContainerDied","Data":"a3279720d4c802c349d222cf1b96260384211d9adc25c84b50972505c95ca211"} Mar 13 12:37:25.898597 master-0 kubenswrapper[7518]: I0313 12:37:25.898604 7518 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" event={"ID":"5f77c8e18b751d90bc0dfe2d4e304050","Type":"ContainerStarted","Data":"4a6cc550d523ce1bfed748c19240f1c4e3a9202060aead91cc14af91ea48f5ce"} Mar 13 12:37:25.909688 master-0 kubenswrapper[7518]: E0313 12:37:25.909629 7518 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"kube-rbac-proxy-crio-master-0\" already exists" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" Mar 13 12:37:25.911467 master-0 kubenswrapper[7518]: E0313 12:37:25.911434 7518 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"bootstrap-kube-scheduler-master-0\" already exists" pod="kube-system/bootstrap-kube-scheduler-master-0" Mar 13 12:37:25.912007 master-0 kubenswrapper[7518]: W0313 12:37:25.911949 7518 warnings.go:70] would violate PodSecurity "restricted:latest": host namespaces (hostNetwork=true), hostPort (container "etcd" uses hostPorts 2379, 2380), privileged (containers "etcdctl", "etcd" must not set securityContext.privileged=true), allowPrivilegeEscalation != false (containers "etcdctl", "etcd" must set securityContext.allowPrivilegeEscalation=false), unrestricted capabilities (containers "etcdctl", "etcd" must set securityContext.capabilities.drop=["ALL"]), restricted volume types (volumes "certs", "data-dir" use restricted volume type "hostPath"), runAsNonRoot != true (pod or containers "etcdctl", "etcd" must set securityContext.runAsNonRoot=true), seccompProfile (pod or containers "etcdctl", "etcd" must set securityContext.seccompProfile.type to 
"RuntimeDefault" or "Localhost") Mar 13 12:37:25.912007 master-0 kubenswrapper[7518]: E0313 12:37:25.911984 7518 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"etcd-master-0-master-0\" already exists" pod="openshift-etcd/etcd-master-0-master-0" Mar 13 12:37:25.912883 master-0 kubenswrapper[7518]: E0313 12:37:25.912428 7518 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"bootstrap-kube-apiserver-master-0\" already exists" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Mar 13 12:37:25.912883 master-0 kubenswrapper[7518]: E0313 12:37:25.912792 7518 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"bootstrap-kube-controller-manager-master-0\" already exists" pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 13 12:37:25.976736 master-0 kubenswrapper[7518]: I0313 12:37:25.976664 7518 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/e9add8df47182fc2eaf8cd78016ebe72-etc-kube\") pod \"kube-rbac-proxy-crio-master-0\" (UID: \"e9add8df47182fc2eaf8cd78016ebe72\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" Mar 13 12:37:25.976736 master-0 kubenswrapper[7518]: I0313 12:37:25.976716 7518 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/e9add8df47182fc2eaf8cd78016ebe72-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-master-0\" (UID: \"e9add8df47182fc2eaf8cd78016ebe72\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" Mar 13 12:37:25.976983 master-0 kubenswrapper[7518]: I0313 12:37:25.976823 7518 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssl-certs-host\" (UniqueName: \"kubernetes.io/host-path/5f77c8e18b751d90bc0dfe2d4e304050-ssl-certs-host\") pod \"bootstrap-kube-apiserver-master-0\" (UID: 
\"5f77c8e18b751d90bc0dfe2d4e304050\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Mar 13 12:37:25.976983 master-0 kubenswrapper[7518]: I0313 12:37:25.976890 7518 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/f78c05e1499b533b83f091333d61f045-logs\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"f78c05e1499b533b83f091333d61f045\") " pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 13 12:37:25.976983 master-0 kubenswrapper[7518]: I0313 12:37:25.976912 7518 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/a1a56802af72ce1aac6b5077f1695ac0-secrets\") pod \"bootstrap-kube-scheduler-master-0\" (UID: \"a1a56802af72ce1aac6b5077f1695ac0\") " pod="kube-system/bootstrap-kube-scheduler-master-0" Mar 13 12:37:25.976983 master-0 kubenswrapper[7518]: I0313 12:37:25.976930 7518 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/a1a56802af72ce1aac6b5077f1695ac0-logs\") pod \"bootstrap-kube-scheduler-master-0\" (UID: \"a1a56802af72ce1aac6b5077f1695ac0\") " pod="kube-system/bootstrap-kube-scheduler-master-0" Mar 13 12:37:25.976983 master-0 kubenswrapper[7518]: I0313 12:37:25.976947 7518 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/host-path/354f29997baa583b6238f7de9108ee10-certs\") pod \"etcd-master-0-master-0\" (UID: \"354f29997baa583b6238f7de9108ee10\") " pod="openshift-etcd/etcd-master-0-master-0" Mar 13 12:37:25.976983 master-0 kubenswrapper[7518]: I0313 12:37:25.976962 7518 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/f78c05e1499b533b83f091333d61f045-secrets\") 
pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"f78c05e1499b533b83f091333d61f045\") " pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 13 12:37:25.976983 master-0 kubenswrapper[7518]: I0313 12:37:25.976979 7518 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssl-certs-host\" (UniqueName: \"kubernetes.io/host-path/f78c05e1499b533b83f091333d61f045-ssl-certs-host\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"f78c05e1499b533b83f091333d61f045\") " pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 13 12:37:25.977239 master-0 kubenswrapper[7518]: I0313 12:37:25.977044 7518 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes-cloud\" (UniqueName: \"kubernetes.io/host-path/f78c05e1499b533b83f091333d61f045-etc-kubernetes-cloud\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"f78c05e1499b533b83f091333d61f045\") " pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 13 12:37:25.977239 master-0 kubenswrapper[7518]: I0313 12:37:25.977078 7518 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/354f29997baa583b6238f7de9108ee10-data-dir\") pod \"etcd-master-0-master-0\" (UID: \"354f29997baa583b6238f7de9108ee10\") " pod="openshift-etcd/etcd-master-0-master-0" Mar 13 12:37:25.977239 master-0 kubenswrapper[7518]: I0313 12:37:25.977113 7518 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/5f77c8e18b751d90bc0dfe2d4e304050-secrets\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"5f77c8e18b751d90bc0dfe2d4e304050\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Mar 13 12:37:25.977239 master-0 kubenswrapper[7518]: I0313 12:37:25.977153 7518 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes-cloud\" (UniqueName: \"kubernetes.io/host-path/5f77c8e18b751d90bc0dfe2d4e304050-etc-kubernetes-cloud\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"5f77c8e18b751d90bc0dfe2d4e304050\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Mar 13 12:37:25.977239 master-0 kubenswrapper[7518]: I0313 12:37:25.977176 7518 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/host-path/5f77c8e18b751d90bc0dfe2d4e304050-config\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"5f77c8e18b751d90bc0dfe2d4e304050\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Mar 13 12:37:25.977239 master-0 kubenswrapper[7518]: I0313 12:37:25.977208 7518 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/5f77c8e18b751d90bc0dfe2d4e304050-audit-dir\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"5f77c8e18b751d90bc0dfe2d4e304050\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Mar 13 12:37:25.977239 master-0 kubenswrapper[7518]: I0313 12:37:25.977230 7518 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/5f77c8e18b751d90bc0dfe2d4e304050-logs\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"5f77c8e18b751d90bc0dfe2d4e304050\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Mar 13 12:37:25.977455 master-0 kubenswrapper[7518]: I0313 12:37:25.977249 7518 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/host-path/f78c05e1499b533b83f091333d61f045-config\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"f78c05e1499b533b83f091333d61f045\") " 
pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 13 12:37:26.077565 master-0 kubenswrapper[7518]: I0313 12:37:26.077465 7518 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/a1a56802af72ce1aac6b5077f1695ac0-secrets\") pod \"bootstrap-kube-scheduler-master-0\" (UID: \"a1a56802af72ce1aac6b5077f1695ac0\") " pod="kube-system/bootstrap-kube-scheduler-master-0" Mar 13 12:37:26.077565 master-0 kubenswrapper[7518]: I0313 12:37:26.077524 7518 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/a1a56802af72ce1aac6b5077f1695ac0-logs\") pod \"bootstrap-kube-scheduler-master-0\" (UID: \"a1a56802af72ce1aac6b5077f1695ac0\") " pod="kube-system/bootstrap-kube-scheduler-master-0" Mar 13 12:37:26.077565 master-0 kubenswrapper[7518]: I0313 12:37:26.077551 7518 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/host-path/354f29997baa583b6238f7de9108ee10-certs\") pod \"etcd-master-0-master-0\" (UID: \"354f29997baa583b6238f7de9108ee10\") " pod="openshift-etcd/etcd-master-0-master-0" Mar 13 12:37:26.077565 master-0 kubenswrapper[7518]: I0313 12:37:26.077574 7518 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/f78c05e1499b533b83f091333d61f045-secrets\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"f78c05e1499b533b83f091333d61f045\") " pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 13 12:37:26.077565 master-0 kubenswrapper[7518]: I0313 12:37:26.077572 7518 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/a1a56802af72ce1aac6b5077f1695ac0-secrets\") pod \"bootstrap-kube-scheduler-master-0\" (UID: \"a1a56802af72ce1aac6b5077f1695ac0\") " pod="kube-system/bootstrap-kube-scheduler-master-0" 
Mar 13 12:37:26.078102 master-0 kubenswrapper[7518]: I0313 12:37:26.077596 7518 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssl-certs-host\" (UniqueName: \"kubernetes.io/host-path/f78c05e1499b533b83f091333d61f045-ssl-certs-host\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"f78c05e1499b533b83f091333d61f045\") " pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 13 12:37:26.078102 master-0 kubenswrapper[7518]: I0313 12:37:26.077624 7518 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssl-certs-host\" (UniqueName: \"kubernetes.io/host-path/f78c05e1499b533b83f091333d61f045-ssl-certs-host\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"f78c05e1499b533b83f091333d61f045\") " pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 13 12:37:26.078102 master-0 kubenswrapper[7518]: I0313 12:37:26.077658 7518 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/f78c05e1499b533b83f091333d61f045-secrets\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"f78c05e1499b533b83f091333d61f045\") " pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 13 12:37:26.078102 master-0 kubenswrapper[7518]: I0313 12:37:26.077665 7518 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/host-path/354f29997baa583b6238f7de9108ee10-certs\") pod \"etcd-master-0-master-0\" (UID: \"354f29997baa583b6238f7de9108ee10\") " pod="openshift-etcd/etcd-master-0-master-0" Mar 13 12:37:26.078102 master-0 kubenswrapper[7518]: I0313 12:37:26.077715 7518 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes-cloud\" (UniqueName: \"kubernetes.io/host-path/f78c05e1499b533b83f091333d61f045-etc-kubernetes-cloud\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"f78c05e1499b533b83f091333d61f045\") " 
pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 13 12:37:26.078102 master-0 kubenswrapper[7518]: I0313 12:37:26.077735 7518 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/354f29997baa583b6238f7de9108ee10-data-dir\") pod \"etcd-master-0-master-0\" (UID: \"354f29997baa583b6238f7de9108ee10\") " pod="openshift-etcd/etcd-master-0-master-0" Mar 13 12:37:26.078102 master-0 kubenswrapper[7518]: I0313 12:37:26.077751 7518 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/5f77c8e18b751d90bc0dfe2d4e304050-secrets\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"5f77c8e18b751d90bc0dfe2d4e304050\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Mar 13 12:37:26.078102 master-0 kubenswrapper[7518]: I0313 12:37:26.077768 7518 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes-cloud\" (UniqueName: \"kubernetes.io/host-path/5f77c8e18b751d90bc0dfe2d4e304050-etc-kubernetes-cloud\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"5f77c8e18b751d90bc0dfe2d4e304050\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Mar 13 12:37:26.078102 master-0 kubenswrapper[7518]: I0313 12:37:26.077782 7518 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/a1a56802af72ce1aac6b5077f1695ac0-logs\") pod \"bootstrap-kube-scheduler-master-0\" (UID: \"a1a56802af72ce1aac6b5077f1695ac0\") " pod="kube-system/bootstrap-kube-scheduler-master-0" Mar 13 12:37:26.078102 master-0 kubenswrapper[7518]: I0313 12:37:26.077808 7518 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes-cloud\" (UniqueName: \"kubernetes.io/host-path/5f77c8e18b751d90bc0dfe2d4e304050-etc-kubernetes-cloud\") pod \"bootstrap-kube-apiserver-master-0\" (UID: 
\"5f77c8e18b751d90bc0dfe2d4e304050\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"
Mar 13 12:37:26.078102 master-0 kubenswrapper[7518]: I0313 12:37:26.077813 7518 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes-cloud\" (UniqueName: \"kubernetes.io/host-path/f78c05e1499b533b83f091333d61f045-etc-kubernetes-cloud\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"f78c05e1499b533b83f091333d61f045\") " pod="kube-system/bootstrap-kube-controller-manager-master-0"
Mar 13 12:37:26.078102 master-0 kubenswrapper[7518]: I0313 12:37:26.077844 7518 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/354f29997baa583b6238f7de9108ee10-data-dir\") pod \"etcd-master-0-master-0\" (UID: \"354f29997baa583b6238f7de9108ee10\") " pod="openshift-etcd/etcd-master-0-master-0"
Mar 13 12:37:26.078102 master-0 kubenswrapper[7518]: I0313 12:37:26.077888 7518 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/host-path/5f77c8e18b751d90bc0dfe2d4e304050-config\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"5f77c8e18b751d90bc0dfe2d4e304050\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"
Mar 13 12:37:26.078102 master-0 kubenswrapper[7518]: I0313 12:37:26.077910 7518 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/5f77c8e18b751d90bc0dfe2d4e304050-audit-dir\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"5f77c8e18b751d90bc0dfe2d4e304050\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"
Mar 13 12:37:26.078102 master-0 kubenswrapper[7518]: I0313 12:37:26.077933 7518 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/5f77c8e18b751d90bc0dfe2d4e304050-secrets\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"5f77c8e18b751d90bc0dfe2d4e304050\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"
Mar 13 12:37:26.078102 master-0 kubenswrapper[7518]: I0313 12:37:26.077966 7518 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/host-path/5f77c8e18b751d90bc0dfe2d4e304050-config\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"5f77c8e18b751d90bc0dfe2d4e304050\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"
Mar 13 12:37:26.078102 master-0 kubenswrapper[7518]: I0313 12:37:26.078001 7518 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/5f77c8e18b751d90bc0dfe2d4e304050-audit-dir\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"5f77c8e18b751d90bc0dfe2d4e304050\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"
Mar 13 12:37:26.078102 master-0 kubenswrapper[7518]: I0313 12:37:26.077934 7518 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/5f77c8e18b751d90bc0dfe2d4e304050-logs\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"5f77c8e18b751d90bc0dfe2d4e304050\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"
Mar 13 12:37:26.078102 master-0 kubenswrapper[7518]: I0313 12:37:26.078056 7518 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/host-path/f78c05e1499b533b83f091333d61f045-config\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"f78c05e1499b533b83f091333d61f045\") " pod="kube-system/bootstrap-kube-controller-manager-master-0"
Mar 13 12:37:26.078102 master-0 kubenswrapper[7518]: I0313 12:37:26.078089 7518 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/e9add8df47182fc2eaf8cd78016ebe72-etc-kube\") pod \"kube-rbac-proxy-crio-master-0\" (UID: \"e9add8df47182fc2eaf8cd78016ebe72\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0"
Mar 13 12:37:26.078102 master-0 kubenswrapper[7518]: I0313 12:37:26.078104 7518 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/5f77c8e18b751d90bc0dfe2d4e304050-logs\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"5f77c8e18b751d90bc0dfe2d4e304050\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"
Mar 13 12:37:26.078102 master-0 kubenswrapper[7518]: I0313 12:37:26.078110 7518 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/e9add8df47182fc2eaf8cd78016ebe72-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-master-0\" (UID: \"e9add8df47182fc2eaf8cd78016ebe72\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0"
Mar 13 12:37:26.078102 master-0 kubenswrapper[7518]: I0313 12:37:26.078153 7518 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssl-certs-host\" (UniqueName: \"kubernetes.io/host-path/5f77c8e18b751d90bc0dfe2d4e304050-ssl-certs-host\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"5f77c8e18b751d90bc0dfe2d4e304050\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"
Mar 13 12:37:26.078102 master-0 kubenswrapper[7518]: I0313 12:37:26.078155 7518 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/e9add8df47182fc2eaf8cd78016ebe72-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-master-0\" (UID: \"e9add8df47182fc2eaf8cd78016ebe72\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0"
Mar 13 12:37:26.078102 master-0 kubenswrapper[7518]: I0313 12:37:26.078175 7518 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/f78c05e1499b533b83f091333d61f045-logs\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"f78c05e1499b533b83f091333d61f045\") " pod="kube-system/bootstrap-kube-controller-manager-master-0"
Mar 13 12:37:26.079383 master-0 kubenswrapper[7518]: I0313 12:37:26.078197 7518 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/host-path/f78c05e1499b533b83f091333d61f045-config\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"f78c05e1499b533b83f091333d61f045\") " pod="kube-system/bootstrap-kube-controller-manager-master-0"
Mar 13 12:37:26.079383 master-0 kubenswrapper[7518]: I0313 12:37:26.078206 7518 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/f78c05e1499b533b83f091333d61f045-logs\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"f78c05e1499b533b83f091333d61f045\") " pod="kube-system/bootstrap-kube-controller-manager-master-0"
Mar 13 12:37:26.079383 master-0 kubenswrapper[7518]: I0313 12:37:26.078235 7518 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/e9add8df47182fc2eaf8cd78016ebe72-etc-kube\") pod \"kube-rbac-proxy-crio-master-0\" (UID: \"e9add8df47182fc2eaf8cd78016ebe72\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0"
Mar 13 12:37:26.079383 master-0 kubenswrapper[7518]: I0313 12:37:26.078238 7518 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssl-certs-host\" (UniqueName: \"kubernetes.io/host-path/5f77c8e18b751d90bc0dfe2d4e304050-ssl-certs-host\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"5f77c8e18b751d90bc0dfe2d4e304050\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"
Mar 13 12:37:26.454193 master-0 kubenswrapper[7518]: I0313 12:37:26.454078 7518 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="kube-system/bootstrap-kube-controller-manager-master-0"
Mar 13 12:37:26.458510 master-0 kubenswrapper[7518]: I0313 12:37:26.458465 7518 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="kube-system/bootstrap-kube-controller-manager-master-0"
Mar 13 12:37:26.547368 master-0 kubenswrapper[7518]: I0313 12:37:26.547311 7518 apiserver.go:52] "Watching apiserver"
Mar 13 12:37:26.556919 master-0 kubenswrapper[7518]: I0313 12:37:26.556834 7518 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66
Mar 13 12:37:26.563262 master-0 kubenswrapper[7518]: I0313 12:37:26.563200 7518 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-d64cfc9db-rfqb9","openshift-controller-manager-operator/openshift-controller-manager-operator-8565d84698-hj2wk","openshift-network-node-identity/network-node-identity-qg8q5","openshift-monitoring/cluster-monitoring-operator-674cbfbd9d-zwtdz","openshift-multus/multus-additional-cni-plugins-78p2k","openshift-authentication-operator/authentication-operator-7c6989d6c4-tc4ht","openshift-cluster-version/cluster-version-operator-745944c6b7-mbjxt","openshift-etcd/etcd-master-0-master-0","openshift-image-registry/cluster-image-registry-operator-86d6d77c7c-q287n","kube-system/bootstrap-kube-scheduler-master-0","openshift-apiserver-operator/openshift-apiserver-operator-799b6db4d7-xchrj","openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-7f65c457f5-hrm82","openshift-multus/network-metrics-daemon-r9lmb","openshift-operator-lifecycle-manager/catalog-operator-7d9c49f57b-tlnkd","openshift-ovn-kubernetes/ovnkube-control-plane-66b55d57d-5cww5","openshift-dns-operator/dns-operator-589895fbb7-mmwk7","openshift-ingress-operator/ingress-operator-677db989d6-ckl2j","openshift-kube-controller-manager-operator/kube-controller-manager-operator-86d7cdfdfb-br96g","openshift-machine-config-operator/kube-rbac-proxy-crio-master-0","openshift-marketplace/marketplace-operator-64bf9778cb-7qhr4","openshift-ovn-kubernetes/ovnkube-node-h8fwp","openshift-cluster-storage-operator/csi-snapshot-controller-operator-5685fbc7d-97wkd","openshift-kube-apiserver/bootstrap-kube-apiserver-master-0","openshift-multus/multus-admission-controller-8d675b596-96gds","openshift-network-operator/network-operator-7c649bf6d4-kh6n9","openshift-config-operator/openshift-config-operator-64488f9d78-t8fb4","openshift-etcd-operator/etcd-operator-5884b9cd56-hjzms","openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5c74bfc494-m8mqj","openshift-network-diagnostics/network-check-target-pnwsc","openshift-operator-lifecycle-manager/package-server-manager-854648ff6d-669qk","assisted-installer/assisted-installer-controller-bqsgz","openshift-cluster-olm-operator/cluster-olm-operator-77899cf6d-7nvbn","openshift-kube-apiserver-operator/kube-apiserver-operator-68bd585b-qxmnf","openshift-multus/multus-bnn7n","openshift-network-operator/iptables-alerter-qz6pg","openshift-service-ca-operator/service-ca-operator-69b6fc6b88-vmscz","kube-system/bootstrap-kube-controller-manager-master-0","openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-66c7586884-cz8pc"]
Mar 13 12:37:26.563502 master-0 kubenswrapper[7518]: I0313 12:37:26.563473 7518 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="assisted-installer/assisted-installer-controller-bqsgz"
Mar 13 12:37:26.564495 master-0 kubenswrapper[7518]: I0313 12:37:26.564460 7518 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-7d9c49f57b-tlnkd"
Mar 13 12:37:26.564609 master-0 kubenswrapper[7518]: I0313 12:37:26.564574 7518 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-745944c6b7-mbjxt"
Mar 13 12:37:26.564656 master-0 kubenswrapper[7518]: I0313 12:37:26.564624 7518 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-677db989d6-ckl2j"
Mar 13 12:37:26.565006 master-0 kubenswrapper[7518]: I0313 12:37:26.564975 7518 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-589895fbb7-mmwk7"
Mar 13 12:37:26.568459 master-0 kubenswrapper[7518]: I0313 12:37:26.568219 7518 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-storage-operator"/"kube-root-ca.crt"
Mar 13 12:37:26.569956 master-0 kubenswrapper[7518]: I0313 12:37:26.569396 7518 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt"
Mar 13 12:37:26.569956 master-0 kubenswrapper[7518]: I0313 12:37:26.569636 7518 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert"
Mar 13 12:37:26.569956 master-0 kubenswrapper[7518]: I0313 12:37:26.569898 7518 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt"
Mar 13 12:37:26.570203 master-0 kubenswrapper[7518]: I0313 12:37:26.570169 7518 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-storage-operator"/"openshift-service-ca.crt"
Mar 13 12:37:26.570259 master-0 kubenswrapper[7518]: I0313 12:37:26.570239 7518 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config"
Mar 13 12:37:26.570371 master-0 kubenswrapper[7518]: I0313 12:37:26.570247 7518 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert"
Mar 13 12:37:26.570547 master-0 kubenswrapper[7518]: I0313 12:37:26.570508 7518 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-854648ff6d-669qk"
Mar 13 12:37:26.570547 master-0 kubenswrapper[7518]: I0313 12:37:26.570536 7518 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert"
Mar 13 12:37:26.570649 master-0 kubenswrapper[7518]: I0313 12:37:26.570586 7518 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt"
Mar 13 12:37:26.570694 master-0 kubenswrapper[7518]: I0313 12:37:26.570659 7518 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config"
Mar 13 12:37:26.570933 master-0 kubenswrapper[7518]: I0313 12:37:26.570895 7518 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt"
Mar 13 12:37:26.571211 master-0 kubenswrapper[7518]: I0313 12:37:26.571189 7518 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls"
Mar 13 12:37:26.571372 master-0 kubenswrapper[7518]: I0313 12:37:26.571337 7518 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt"
Mar 13 12:37:26.571569 master-0 kubenswrapper[7518]: I0313 12:37:26.571545 7518 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt"
Mar 13 12:37:26.572683 master-0 kubenswrapper[7518]: I0313 12:37:26.572658 7518 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config"
Mar 13 12:37:26.572830 master-0 kubenswrapper[7518]: I0313 12:37:26.572747 7518 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt"
Mar 13 12:37:26.574710 master-0 kubenswrapper[7518]: I0313 12:37:26.574674 7518 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-d64cfc9db-rfqb9"
Mar 13 12:37:26.575661 master-0 kubenswrapper[7518]: I0313 12:37:26.575424 7518 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt"
Mar 13 12:37:26.577063 master-0 kubenswrapper[7518]: I0313 12:37:26.577033 7518 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert"
Mar 13 12:37:26.577340 master-0 kubenswrapper[7518]: I0313 12:37:26.577319 7518 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls"
Mar 13 12:37:26.577546 master-0 kubenswrapper[7518]: I0313 12:37:26.577519 7518 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt"
Mar 13 12:37:26.577684 master-0 kubenswrapper[7518]: I0313 12:37:26.577672 7518 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-8d675b596-96gds"
Mar 13 12:37:26.577761 master-0 kubenswrapper[7518]: I0313 12:37:26.577684 7518 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-r9lmb"
Mar 13 12:37:26.577761 master-0 kubenswrapper[7518]: I0313 12:37:26.577719 7518 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/cluster-monitoring-operator-674cbfbd9d-zwtdz"
Mar 13 12:37:26.577896 master-0 kubenswrapper[7518]: I0313 12:37:26.577881 7518 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-64bf9778cb-7qhr4"
Mar 13 12:37:26.577968 master-0 kubenswrapper[7518]: I0313 12:37:26.577928 7518 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-86d6d77c7c-q287n"
Mar 13 12:37:26.578064 master-0 kubenswrapper[7518]: I0313 12:37:26.578037 7518 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-66c7586884-cz8pc"
Mar 13 12:37:26.579279 master-0 kubenswrapper[7518]: I0313 12:37:26.578906 7518 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt"
Mar 13 12:37:26.579279 master-0 kubenswrapper[7518]: I0313 12:37:26.579259 7518 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config"
Mar 13 12:37:26.579712 master-0 kubenswrapper[7518]: I0313 12:37:26.579438 7518 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt"
Mar 13 12:37:26.579712 master-0 kubenswrapper[7518]: I0313 12:37:26.579706 7518 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert"
Mar 13 12:37:26.580615 master-0 kubenswrapper[7518]: I0313 12:37:26.579732 7518 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt"
Mar 13 12:37:26.580615 master-0 kubenswrapper[7518]: I0313 12:37:26.580088 7518 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8c62b15f-001a-4b64-b85f-348aefde5d1b-config\") pod \"openshift-controller-manager-operator-8565d84698-hj2wk\" (UID: \"8c62b15f-001a-4b64-b85f-348aefde5d1b\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-8565d84698-hj2wk"
Mar 13 12:37:26.580615 master-0 kubenswrapper[7518]: I0313 12:37:26.580123 7518 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/15b592d6-3c48-45d4-9172-d28632ae8995-config\") pod \"etcd-operator-5884b9cd56-hjzms\" (UID: \"15b592d6-3c48-45d4-9172-d28632ae8995\") " pod="openshift-etcd-operator/etcd-operator-5884b9cd56-hjzms"
Mar 13 12:37:26.580615 master-0 kubenswrapper[7518]: I0313 12:37:26.580169 7518 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/2f79578c-bbfb-4968-893a-730deb4c01f9-trusted-ca\") pod \"ingress-operator-677db989d6-ckl2j\" (UID: \"2f79578c-bbfb-4968-893a-730deb4c01f9\") " pod="openshift-ingress-operator/ingress-operator-677db989d6-ckl2j"
Mar 13 12:37:26.580615 master-0 kubenswrapper[7518]: I0313 12:37:26.580191 7518 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/77ef7e49-eb85-4f5e-94d3-a6a8619a6243-config\") pod \"kube-controller-manager-operator-86d7cdfdfb-br96g\" (UID: \"77ef7e49-eb85-4f5e-94d3-a6a8619a6243\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-86d7cdfdfb-br96g"
Mar 13 12:37:26.580615 master-0 kubenswrapper[7518]: I0313 12:37:26.580198 7518 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-pnwsc"
Mar 13 12:37:26.580615 master-0 kubenswrapper[7518]: I0313 12:37:26.580263 7518 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zbk4f\" (UniqueName: \"kubernetes.io/projected/10944f9c-8ce9-44e6-9c36-a0ea19d8cae3-kube-api-access-zbk4f\") pod \"catalog-operator-7d9c49f57b-tlnkd\" (UID: \"10944f9c-8ce9-44e6-9c36-a0ea19d8cae3\") " pod="openshift-operator-lifecycle-manager/catalog-operator-7d9c49f57b-tlnkd"
Mar 13 12:37:26.580615 master-0 kubenswrapper[7518]: I0313 12:37:26.580299 7518 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/15b592d6-3c48-45d4-9172-d28632ae8995-serving-cert\") pod \"etcd-operator-5884b9cd56-hjzms\" (UID: \"15b592d6-3c48-45d4-9172-d28632ae8995\") " pod="openshift-etcd-operator/etcd-operator-5884b9cd56-hjzms"
Mar 13 12:37:26.580615 master-0 kubenswrapper[7518]: I0313 12:37:26.580332 7518 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/f39d7f76-0075-44c3-9101-eb2607cb176a-etc-cvo-updatepayloads\") pod \"cluster-version-operator-745944c6b7-mbjxt\" (UID: \"f39d7f76-0075-44c3-9101-eb2607cb176a\") " pod="openshift-cluster-version/cluster-version-operator-745944c6b7-mbjxt"
Mar 13 12:37:26.580615 master-0 kubenswrapper[7518]: I0313 12:37:26.580365 7518 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8c62b15f-001a-4b64-b85f-348aefde5d1b-serving-cert\") pod \"openshift-controller-manager-operator-8565d84698-hj2wk\" (UID: \"8c62b15f-001a-4b64-b85f-348aefde5d1b\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-8565d84698-hj2wk"
Mar 13 12:37:26.580615 master-0 kubenswrapper[7518]: I0313 12:37:26.580392 7518 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/f39d7f76-0075-44c3-9101-eb2607cb176a-service-ca\") pod \"cluster-version-operator-745944c6b7-mbjxt\" (UID: \"f39d7f76-0075-44c3-9101-eb2607cb176a\") " pod="openshift-cluster-version/cluster-version-operator-745944c6b7-mbjxt"
Mar 13 12:37:26.580615 master-0 kubenswrapper[7518]: I0313 12:37:26.580414 7518 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ec5ec2e2-f7b3-43a1-87da-fbbe0ee5b118-serving-cert\") pod \"kube-apiserver-operator-68bd585b-qxmnf\" (UID: \"ec5ec2e2-f7b3-43a1-87da-fbbe0ee5b118\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-68bd585b-qxmnf"
Mar 13 12:37:26.580615 master-0 kubenswrapper[7518]: I0313 12:37:26.580435 7518 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f9hks\" (UniqueName: \"kubernetes.io/projected/2f79578c-bbfb-4968-893a-730deb4c01f9-kube-api-access-f9hks\") pod \"ingress-operator-677db989d6-ckl2j\" (UID: \"2f79578c-bbfb-4968-893a-730deb4c01f9\") " pod="openshift-ingress-operator/ingress-operator-677db989d6-ckl2j"
Mar 13 12:37:26.580615 master-0 kubenswrapper[7518]: I0313 12:37:26.580459 7518 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/15b592d6-3c48-45d4-9172-d28632ae8995-etcd-client\") pod \"etcd-operator-5884b9cd56-hjzms\" (UID: \"15b592d6-3c48-45d4-9172-d28632ae8995\") " pod="openshift-etcd-operator/etcd-operator-5884b9cd56-hjzms"
Mar 13 12:37:26.580615 master-0 kubenswrapper[7518]: I0313 12:37:26.580485 7518 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/77ef7e49-eb85-4f5e-94d3-a6a8619a6243-kube-api-access\") pod \"kube-controller-manager-operator-86d7cdfdfb-br96g\" (UID: \"77ef7e49-eb85-4f5e-94d3-a6a8619a6243\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-86d7cdfdfb-br96g"
Mar 13 12:37:26.580615 master-0 kubenswrapper[7518]: I0313 12:37:26.580509 7518 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ec5ec2e2-f7b3-43a1-87da-fbbe0ee5b118-config\") pod \"kube-apiserver-operator-68bd585b-qxmnf\" (UID: \"ec5ec2e2-f7b3-43a1-87da-fbbe0ee5b118\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-68bd585b-qxmnf"
Mar 13 12:37:26.580615 master-0 kubenswrapper[7518]: I0313 12:37:26.580566 7518 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8c62b15f-001a-4b64-b85f-348aefde5d1b-config\") pod \"openshift-controller-manager-operator-8565d84698-hj2wk\" (UID: \"8c62b15f-001a-4b64-b85f-348aefde5d1b\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-8565d84698-hj2wk"
Mar 13 12:37:26.581835 master-0 kubenswrapper[7518]: I0313 12:37:26.581408 7518 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt"
Mar 13 12:37:26.581835 master-0 kubenswrapper[7518]: I0313 12:37:26.581424 7518 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert"
Mar 13 12:37:26.581835 master-0 kubenswrapper[7518]: I0313 12:37:26.581621 7518 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0da84bb7-e936-49a0-96b5-614a1305d6a4-serving-cert\") pod \"openshift-kube-scheduler-operator-5c74bfc494-m8mqj\" (UID: \"0da84bb7-e936-49a0-96b5-614a1305d6a4\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5c74bfc494-m8mqj"
Mar 13 12:37:26.581835 master-0 kubenswrapper[7518]: I0313 12:37:26.581663 7518 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f39d7f76-0075-44c3-9101-eb2607cb176a-kube-api-access\") pod \"cluster-version-operator-745944c6b7-mbjxt\" (UID: \"f39d7f76-0075-44c3-9101-eb2607cb176a\") " pod="openshift-cluster-version/cluster-version-operator-745944c6b7-mbjxt"
Mar 13 12:37:26.581835 master-0 kubenswrapper[7518]: I0313 12:37:26.581690 7518 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/2f79578c-bbfb-4968-893a-730deb4c01f9-bound-sa-token\") pod \"ingress-operator-677db989d6-ckl2j\" (UID: \"2f79578c-bbfb-4968-893a-730deb4c01f9\") " pod="openshift-ingress-operator/ingress-operator-677db989d6-ckl2j"
Mar 13 12:37:26.581835 master-0 kubenswrapper[7518]: I0313 12:37:26.581718 7518 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f39d7f76-0075-44c3-9101-eb2607cb176a-serving-cert\") pod \"cluster-version-operator-745944c6b7-mbjxt\" (UID: \"f39d7f76-0075-44c3-9101-eb2607cb176a\") " pod="openshift-cluster-version/cluster-version-operator-745944c6b7-mbjxt"
Mar 13 12:37:26.581835 master-0 kubenswrapper[7518]: I0313 12:37:26.581756 7518 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/77ef7e49-eb85-4f5e-94d3-a6a8619a6243-serving-cert\") pod \"kube-controller-manager-operator-86d7cdfdfb-br96g\" (UID: \"77ef7e49-eb85-4f5e-94d3-a6a8619a6243\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-86d7cdfdfb-br96g"
Mar 13 12:37:26.581835 master-0 kubenswrapper[7518]: I0313 12:37:26.581792 7518 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/f0803181-4e37-43fa-8ddc-9c76d3f61817-available-featuregates\") pod \"openshift-config-operator-64488f9d78-t8fb4\" (UID: \"f0803181-4e37-43fa-8ddc-9c76d3f61817\") " pod="openshift-config-operator/openshift-config-operator-64488f9d78-t8fb4"
Mar 13 12:37:26.581835 master-0 kubenswrapper[7518]: I0313 12:37:26.581814 7518 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lwkdj\" (UniqueName: \"kubernetes.io/projected/f0803181-4e37-43fa-8ddc-9c76d3f61817-kube-api-access-lwkdj\") pod \"openshift-config-operator-64488f9d78-t8fb4\" (UID: \"f0803181-4e37-43fa-8ddc-9c76d3f61817\") " pod="openshift-config-operator/openshift-config-operator-64488f9d78-t8fb4"
Mar 13 12:37:26.581835 master-0 kubenswrapper[7518]: I0313 12:37:26.581820 7518 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0da84bb7-e936-49a0-96b5-614a1305d6a4-serving-cert\") pod \"openshift-kube-scheduler-operator-5c74bfc494-m8mqj\" (UID: \"0da84bb7-e936-49a0-96b5-614a1305d6a4\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5c74bfc494-m8mqj"
Mar 13 12:37:26.582341 master-0 kubenswrapper[7518]: I0313 12:37:26.581841 7518 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/10944f9c-8ce9-44e6-9c36-a0ea19d8cae3-srv-cert\") pod \"catalog-operator-7d9c49f57b-tlnkd\" (UID: \"10944f9c-8ce9-44e6-9c36-a0ea19d8cae3\") " pod="openshift-operator-lifecycle-manager/catalog-operator-7d9c49f57b-tlnkd"
Mar 13 12:37:26.582341 master-0 kubenswrapper[7518]: I0313 12:37:26.581875 7518 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/15b592d6-3c48-45d4-9172-d28632ae8995-etcd-ca\") pod \"etcd-operator-5884b9cd56-hjzms\" (UID: \"15b592d6-3c48-45d4-9172-d28632ae8995\") " pod="openshift-etcd-operator/etcd-operator-5884b9cd56-hjzms"
Mar 13 12:37:26.582341 master-0 kubenswrapper[7518]: I0313 12:37:26.582020 7518 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/f0803181-4e37-43fa-8ddc-9c76d3f61817-available-featuregates\") pod \"openshift-config-operator-64488f9d78-t8fb4\" (UID: \"f0803181-4e37-43fa-8ddc-9c76d3f61817\") " pod="openshift-config-operator/openshift-config-operator-64488f9d78-t8fb4"
Mar 13 12:37:26.582341 master-0 kubenswrapper[7518]: I0313 12:37:26.582020 7518 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/77ef7e49-eb85-4f5e-94d3-a6a8619a6243-serving-cert\") pod \"kube-controller-manager-operator-86d7cdfdfb-br96g\" (UID: \"77ef7e49-eb85-4f5e-94d3-a6a8619a6243\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-86d7cdfdfb-br96g"
Mar 13 12:37:26.582341 master-0 kubenswrapper[7518]: I0313 12:37:26.582075 7518 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/2f79578c-bbfb-4968-893a-730deb4c01f9-metrics-tls\") pod \"ingress-operator-677db989d6-ckl2j\" (UID: \"2f79578c-bbfb-4968-893a-730deb4c01f9\") " pod="openshift-ingress-operator/ingress-operator-677db989d6-ckl2j"
Mar 13 12:37:26.582341 master-0 kubenswrapper[7518]: I0313 12:37:26.582105 7518 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/f39d7f76-0075-44c3-9101-eb2607cb176a-etc-ssl-certs\") pod \"cluster-version-operator-745944c6b7-mbjxt\" (UID: \"f39d7f76-0075-44c3-9101-eb2607cb176a\") " pod="openshift-cluster-version/cluster-version-operator-745944c6b7-mbjxt"
Mar 13 12:37:26.582341 master-0 kubenswrapper[7518]: I0313 12:37:26.582220 7518 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ec5ec2e2-f7b3-43a1-87da-fbbe0ee5b118-kube-api-access\") pod \"kube-apiserver-operator-68bd585b-qxmnf\" (UID: \"ec5ec2e2-f7b3-43a1-87da-fbbe0ee5b118\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-68bd585b-qxmnf"
Mar 13 12:37:26.582341 master-0 kubenswrapper[7518]: I0313 12:37:26.582255 7518 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0da84bb7-e936-49a0-96b5-614a1305d6a4-config\") pod \"openshift-kube-scheduler-operator-5c74bfc494-m8mqj\" (UID: \"0da84bb7-e936-49a0-96b5-614a1305d6a4\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5c74bfc494-m8mqj"
Mar 13 12:37:26.582341 master-0 kubenswrapper[7518]: I0313 12:37:26.582259 7518 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ec5ec2e2-f7b3-43a1-87da-fbbe0ee5b118-serving-cert\") pod \"kube-apiserver-operator-68bd585b-qxmnf\" (UID: \"ec5ec2e2-f7b3-43a1-87da-fbbe0ee5b118\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-68bd585b-qxmnf"
Mar 13 12:37:26.582341 master-0 kubenswrapper[7518]: I0313 12:37:26.582311 7518 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/15b592d6-3c48-45d4-9172-d28632ae8995-etcd-service-ca\") pod \"etcd-operator-5884b9cd56-hjzms\" (UID: \"15b592d6-3c48-45d4-9172-d28632ae8995\") " pod="openshift-etcd-operator/etcd-operator-5884b9cd56-hjzms"
Mar 13 12:37:26.582710 master-0 kubenswrapper[7518]: I0313 12:37:26.582359 7518 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config"
Mar 13 12:37:26.582710 master-0 kubenswrapper[7518]: I0313 12:37:26.582380 7518 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8c62b15f-001a-4b64-b85f-348aefde5d1b-serving-cert\") pod \"openshift-controller-manager-operator-8565d84698-hj2wk\" (UID: \"8c62b15f-001a-4b64-b85f-348aefde5d1b\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-8565d84698-hj2wk"
Mar 13 12:37:26.582710 master-0 kubenswrapper[7518]: I0313 12:37:26.582427 7518 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-clrz7\" (UniqueName: \"kubernetes.io/projected/15b592d6-3c48-45d4-9172-d28632ae8995-kube-api-access-clrz7\") pod \"etcd-operator-5884b9cd56-hjzms\" (UID: \"15b592d6-3c48-45d4-9172-d28632ae8995\") " pod="openshift-etcd-operator/etcd-operator-5884b9cd56-hjzms"
Mar 13 12:37:26.582710 master-0 kubenswrapper[7518]: I0313 12:37:26.582509 7518 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0da84bb7-e936-49a0-96b5-614a1305d6a4-config\") pod \"openshift-kube-scheduler-operator-5c74bfc494-m8mqj\" (UID: \"0da84bb7-e936-49a0-96b5-614a1305d6a4\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5c74bfc494-m8mqj"
Mar 13 12:37:26.582710 master-0 kubenswrapper[7518]: I0313 12:37:26.582561 7518 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f0803181-4e37-43fa-8ddc-9c76d3f61817-serving-cert\") pod \"openshift-config-operator-64488f9d78-t8fb4\" (UID: \"f0803181-4e37-43fa-8ddc-9c76d3f61817\") " pod="openshift-config-operator/openshift-config-operator-64488f9d78-t8fb4"
Mar 13 12:37:26.582710 master-0 kubenswrapper[7518]: I0313 12:37:26.582612 7518 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bwjz5\" (UniqueName: \"kubernetes.io/projected/4e279dcc-35e2-4503-babc-978ac208c150-kube-api-access-bwjz5\") pod \"csi-snapshot-controller-operator-5685fbc7d-97wkd\" (UID: \"4e279dcc-35e2-4503-babc-978ac208c150\") " pod="openshift-cluster-storage-operator/csi-snapshot-controller-operator-5685fbc7d-97wkd"
Mar 13 12:37:26.583049 master-0 kubenswrapper[7518]: I0313 12:37:26.582735 7518 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8cf2v\" (UniqueName: \"kubernetes.io/projected/8c62b15f-001a-4b64-b85f-348aefde5d1b-kube-api-access-8cf2v\") pod \"openshift-controller-manager-operator-8565d84698-hj2wk\" (UID: \"8c62b15f-001a-4b64-b85f-348aefde5d1b\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-8565d84698-hj2wk"
Mar 13 12:37:26.583049 master-0 kubenswrapper[7518]: I0313 12:37:26.582764 7518 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt"
Mar 13 12:37:26.583049 master-0 kubenswrapper[7518]: I0313 12:37:26.582827 7518 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0da84bb7-e936-49a0-96b5-614a1305d6a4-kube-api-access\") pod \"openshift-kube-scheduler-operator-5c74bfc494-m8mqj\" (UID: \"0da84bb7-e936-49a0-96b5-614a1305d6a4\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5c74bfc494-m8mqj"
Mar 13 12:37:26.583212 master-0 kubenswrapper[7518]: I0313 12:37:26.583079 7518 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/77ef7e49-eb85-4f5e-94d3-a6a8619a6243-config\") pod \"kube-controller-manager-operator-86d7cdfdfb-br96g\" (UID: \"77ef7e49-eb85-4f5e-94d3-a6a8619a6243\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-86d7cdfdfb-br96g"
Mar 13 12:37:26.583320 master-0 kubenswrapper[7518]: I0313 12:37:26.583291 7518 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert"
Mar 13 12:37:26.586963 master-0 kubenswrapper[7518]: I0313 12:37:26.583941 7518 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt"
Mar 13 12:37:26.586963 master-0 kubenswrapper[7518]: I0313 12:37:26.583889 7518 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f0803181-4e37-43fa-8ddc-9c76d3f61817-serving-cert\") pod \"openshift-config-operator-64488f9d78-t8fb4\" (UID: \"f0803181-4e37-43fa-8ddc-9c76d3f61817\") " pod="openshift-config-operator/openshift-config-operator-64488f9d78-t8fb4"
Mar 13 12:37:26.586963 master-0 kubenswrapper[7518]: I0313 12:37:26.584240 7518 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert"
Mar 13 12:37:26.586963 master-0 kubenswrapper[7518]: I0313 12:37:26.585259 7518 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca"
Mar 13 12:37:26.586963 master-0 kubenswrapper[7518]: I0313 12:37:26.586391 7518 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt"
Mar 13 12:37:26.586963 master-0 kubenswrapper[7518]: I0313 12:37:26.586505 7518 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt"
Mar 13 12:37:26.586963 master-0 kubenswrapper[7518]: I0313 12:37:26.586896 7518 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert"
Mar 13 12:37:26.592307 master-0 kubenswrapper[7518]: I0313 12:37:26.587119 7518 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config"
Mar 13 12:37:26.592307 master-0 kubenswrapper[7518]: I0313 12:37:26.590077 7518 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client"
Mar 13 12:37:26.592307 master-0 
kubenswrapper[7518]: I0313 12:37:26.590243 7518 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Mar 13 12:37:26.592307 master-0 kubenswrapper[7518]: I0313 12:37:26.590318 7518 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Mar 13 12:37:26.592307 master-0 kubenswrapper[7518]: I0313 12:37:26.590806 7518 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/2f79578c-bbfb-4968-893a-730deb4c01f9-trusted-ca\") pod \"ingress-operator-677db989d6-ckl2j\" (UID: \"2f79578c-bbfb-4968-893a-730deb4c01f9\") " pod="openshift-ingress-operator/ingress-operator-677db989d6-ckl2j" Mar 13 12:37:26.592307 master-0 kubenswrapper[7518]: I0313 12:37:26.591381 7518 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Mar 13 12:37:26.592307 master-0 kubenswrapper[7518]: I0313 12:37:26.591394 7518 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Mar 13 12:37:26.592307 master-0 kubenswrapper[7518]: I0313 12:37:26.591557 7518 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-olm-operator"/"openshift-service-ca.crt" Mar 13 12:37:26.592307 master-0 kubenswrapper[7518]: I0313 12:37:26.591710 7518 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Mar 13 12:37:26.592307 master-0 kubenswrapper[7518]: I0313 12:37:26.591996 7518 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/15b592d6-3c48-45d4-9172-d28632ae8995-config\") pod \"etcd-operator-5884b9cd56-hjzms\" (UID: \"15b592d6-3c48-45d4-9172-d28632ae8995\") " 
pod="openshift-etcd-operator/etcd-operator-5884b9cd56-hjzms" Mar 13 12:37:26.592307 master-0 kubenswrapper[7518]: I0313 12:37:26.592008 7518 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ec5ec2e2-f7b3-43a1-87da-fbbe0ee5b118-config\") pod \"kube-apiserver-operator-68bd585b-qxmnf\" (UID: \"ec5ec2e2-f7b3-43a1-87da-fbbe0ee5b118\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-68bd585b-qxmnf" Mar 13 12:37:26.593245 master-0 kubenswrapper[7518]: I0313 12:37:26.592879 7518 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Mar 13 12:37:26.593245 master-0 kubenswrapper[7518]: I0313 12:37:26.593240 7518 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Mar 13 12:37:26.593499 master-0 kubenswrapper[7518]: I0313 12:37:26.593460 7518 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Mar 13 12:37:26.593667 master-0 kubenswrapper[7518]: I0313 12:37:26.593466 7518 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Mar 13 12:37:26.593730 master-0 kubenswrapper[7518]: I0313 12:37:26.593705 7518 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Mar 13 12:37:26.593879 master-0 kubenswrapper[7518]: I0313 12:37:26.593845 7518 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Mar 13 12:37:26.593976 master-0 kubenswrapper[7518]: I0313 12:37:26.593959 7518 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-node-tuning-operator"/"node-tuning-operator-tls" Mar 13 12:37:26.594287 master-0 kubenswrapper[7518]: I0313 12:37:26.594266 7518 
reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Mar 13 12:37:26.594449 master-0 kubenswrapper[7518]: I0313 12:37:26.594418 7518 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Mar 13 12:37:26.594506 master-0 kubenswrapper[7518]: I0313 12:37:26.594496 7518 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-node-tuning-operator"/"kube-root-ca.crt" Mar 13 12:37:26.594799 master-0 kubenswrapper[7518]: I0313 12:37:26.594780 7518 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Mar 13 12:37:26.599182 master-0 kubenswrapper[7518]: I0313 12:37:26.595689 7518 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/15b592d6-3c48-45d4-9172-d28632ae8995-serving-cert\") pod \"etcd-operator-5884b9cd56-hjzms\" (UID: \"15b592d6-3c48-45d4-9172-d28632ae8995\") " pod="openshift-etcd-operator/etcd-operator-5884b9cd56-hjzms" Mar 13 12:37:26.599182 master-0 kubenswrapper[7518]: I0313 12:37:26.595824 7518 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/15b592d6-3c48-45d4-9172-d28632ae8995-etcd-client\") pod \"etcd-operator-5884b9cd56-hjzms\" (UID: \"15b592d6-3c48-45d4-9172-d28632ae8995\") " pod="openshift-etcd-operator/etcd-operator-5884b9cd56-hjzms" Mar 13 12:37:26.599182 master-0 kubenswrapper[7518]: I0313 12:37:26.595366 7518 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Mar 13 12:37:26.599182 master-0 kubenswrapper[7518]: I0313 12:37:26.596048 7518 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Mar 13 12:37:26.599182 master-0 kubenswrapper[7518]: I0313 12:37:26.596105 7518 
reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Mar 13 12:37:26.599182 master-0 kubenswrapper[7518]: I0313 12:37:26.595423 7518 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Mar 13 12:37:26.599182 master-0 kubenswrapper[7518]: I0313 12:37:26.596348 7518 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/f39d7f76-0075-44c3-9101-eb2607cb176a-service-ca\") pod \"cluster-version-operator-745944c6b7-mbjxt\" (UID: \"f39d7f76-0075-44c3-9101-eb2607cb176a\") " pod="openshift-cluster-version/cluster-version-operator-745944c6b7-mbjxt" Mar 13 12:37:26.599182 master-0 kubenswrapper[7518]: I0313 12:37:26.596420 7518 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Mar 13 12:37:26.599182 master-0 kubenswrapper[7518]: I0313 12:37:26.596901 7518 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Mar 13 12:37:26.599182 master-0 kubenswrapper[7518]: I0313 12:37:26.597051 7518 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Mar 13 12:37:26.599182 master-0 kubenswrapper[7518]: I0313 12:37:26.597119 7518 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Mar 13 12:37:26.599182 master-0 kubenswrapper[7518]: I0313 12:37:26.597359 7518 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Mar 13 12:37:26.599182 master-0 kubenswrapper[7518]: I0313 12:37:26.597468 7518 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Mar 13 12:37:26.599182 master-0 kubenswrapper[7518]: I0313 12:37:26.597597 7518 reflector.go:368] Caches populated for 
*v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Mar 13 12:37:26.599182 master-0 kubenswrapper[7518]: I0313 12:37:26.597643 7518 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Mar 13 12:37:26.599182 master-0 kubenswrapper[7518]: I0313 12:37:26.597798 7518 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Mar 13 12:37:26.599182 master-0 kubenswrapper[7518]: I0313 12:37:26.597830 7518 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-olm-operator"/"kube-root-ca.crt" Mar 13 12:37:26.599182 master-0 kubenswrapper[7518]: I0313 12:37:26.597964 7518 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Mar 13 12:37:26.599182 master-0 kubenswrapper[7518]: I0313 12:37:26.598374 7518 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Mar 13 12:37:26.600126 master-0 kubenswrapper[7518]: I0313 12:37:26.600097 7518 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"openshift-service-ca.crt" Mar 13 12:37:26.601172 master-0 kubenswrapper[7518]: I0313 12:37:26.600370 7518 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Mar 13 12:37:26.601172 master-0 kubenswrapper[7518]: I0313 12:37:26.600517 7518 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Mar 13 12:37:26.601172 master-0 kubenswrapper[7518]: I0313 12:37:26.600631 7518 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"cluster-monitoring-operator-tls" Mar 13 12:37:26.601172 master-0 kubenswrapper[7518]: I0313 12:37:26.600637 7518 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-ovn-kubernetes"/"ovnkube-config" Mar 13 12:37:26.601172 master-0 kubenswrapper[7518]: I0313 12:37:26.597600 7518 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"whereabouts-config" Mar 13 12:37:26.601354 master-0 kubenswrapper[7518]: I0313 12:37:26.601329 7518 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Mar 13 12:37:26.602451 master-0 kubenswrapper[7518]: I0313 12:37:26.601398 7518 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Mar 13 12:37:26.602451 master-0 kubenswrapper[7518]: I0313 12:37:26.601808 7518 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Mar 13 12:37:26.602451 master-0 kubenswrapper[7518]: I0313 12:37:26.602017 7518 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-node-tuning-operator"/"openshift-service-ca.crt" Mar 13 12:37:26.602451 master-0 kubenswrapper[7518]: I0313 12:37:26.595516 7518 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-olm-operator"/"cluster-olm-operator-serving-cert" Mar 13 12:37:26.602944 master-0 kubenswrapper[7518]: I0313 12:37:26.602555 7518 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Mar 13 12:37:26.602944 master-0 kubenswrapper[7518]: I0313 12:37:26.602590 7518 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Mar 13 12:37:26.602944 master-0 kubenswrapper[7518]: I0313 12:37:26.602568 7518 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/15b592d6-3c48-45d4-9172-d28632ae8995-etcd-ca\") pod \"etcd-operator-5884b9cd56-hjzms\" (UID: \"15b592d6-3c48-45d4-9172-d28632ae8995\") " 
pod="openshift-etcd-operator/etcd-operator-5884b9cd56-hjzms" Mar 13 12:37:26.602944 master-0 kubenswrapper[7518]: I0313 12:37:26.602828 7518 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/15b592d6-3c48-45d4-9172-d28632ae8995-etcd-service-ca\") pod \"etcd-operator-5884b9cd56-hjzms\" (UID: \"15b592d6-3c48-45d4-9172-d28632ae8995\") " pod="openshift-etcd-operator/etcd-operator-5884b9cd56-hjzms" Mar 13 12:37:26.602944 master-0 kubenswrapper[7518]: I0313 12:37:26.602864 7518 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Mar 13 12:37:26.603394 master-0 kubenswrapper[7518]: I0313 12:37:26.603332 7518 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Mar 13 12:37:26.603951 master-0 kubenswrapper[7518]: I0313 12:37:26.603733 7518 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"kube-root-ca.crt" Mar 13 12:37:26.604113 master-0 kubenswrapper[7518]: I0313 12:37:26.604092 7518 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Mar 13 12:37:26.604181 master-0 kubenswrapper[7518]: I0313 12:37:26.604115 7518 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-node-tuning-operator"/"performance-addon-operator-webhook-cert" Mar 13 12:37:26.604455 master-0 kubenswrapper[7518]: I0313 12:37:26.604347 7518 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Mar 13 12:37:26.604455 master-0 kubenswrapper[7518]: I0313 12:37:26.604436 7518 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Mar 13 12:37:26.605331 master-0 kubenswrapper[7518]: I0313 12:37:26.605302 7518 reflector.go:368] Caches populated 
for *v1.ConfigMap from object-"openshift-monitoring"/"telemetry-config" Mar 13 12:37:26.610766 master-0 kubenswrapper[7518]: I0313 12:37:26.610736 7518 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lwkdj\" (UniqueName: \"kubernetes.io/projected/f0803181-4e37-43fa-8ddc-9c76d3f61817-kube-api-access-lwkdj\") pod \"openshift-config-operator-64488f9d78-t8fb4\" (UID: \"f0803181-4e37-43fa-8ddc-9c76d3f61817\") " pod="openshift-config-operator/openshift-config-operator-64488f9d78-t8fb4" Mar 13 12:37:26.612329 master-0 kubenswrapper[7518]: I0313 12:37:26.612297 7518 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f39d7f76-0075-44c3-9101-eb2607cb176a-kube-api-access\") pod \"cluster-version-operator-745944c6b7-mbjxt\" (UID: \"f39d7f76-0075-44c3-9101-eb2607cb176a\") " pod="openshift-cluster-version/cluster-version-operator-745944c6b7-mbjxt" Mar 13 12:37:26.613985 master-0 kubenswrapper[7518]: I0313 12:37:26.613960 7518 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/2f79578c-bbfb-4968-893a-730deb4c01f9-bound-sa-token\") pod \"ingress-operator-677db989d6-ckl2j\" (UID: \"2f79578c-bbfb-4968-893a-730deb4c01f9\") " pod="openshift-ingress-operator/ingress-operator-677db989d6-ckl2j" Mar 13 12:37:26.616877 master-0 kubenswrapper[7518]: I0313 12:37:26.616851 7518 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Mar 13 12:37:26.617768 master-0 kubenswrapper[7518]: I0313 12:37:26.617745 7518 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-node-tuning-operator"/"trusted-ca" Mar 13 12:37:26.617954 master-0 kubenswrapper[7518]: I0313 12:37:26.617881 7518 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Mar 13 12:37:26.618067 
master-0 kubenswrapper[7518]: I0313 12:37:26.618048 7518 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Mar 13 12:37:26.625392 master-0 kubenswrapper[7518]: I0313 12:37:26.625354 7518 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f9hks\" (UniqueName: \"kubernetes.io/projected/2f79578c-bbfb-4968-893a-730deb4c01f9-kube-api-access-f9hks\") pod \"ingress-operator-677db989d6-ckl2j\" (UID: \"2f79578c-bbfb-4968-893a-730deb4c01f9\") " pod="openshift-ingress-operator/ingress-operator-677db989d6-ckl2j" Mar 13 12:37:26.641770 master-0 kubenswrapper[7518]: I0313 12:37:26.641723 7518 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/77ef7e49-eb85-4f5e-94d3-a6a8619a6243-kube-api-access\") pod \"kube-controller-manager-operator-86d7cdfdfb-br96g\" (UID: \"77ef7e49-eb85-4f5e-94d3-a6a8619a6243\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-86d7cdfdfb-br96g" Mar 13 12:37:26.661677 master-0 kubenswrapper[7518]: I0313 12:37:26.661619 7518 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zbk4f\" (UniqueName: \"kubernetes.io/projected/10944f9c-8ce9-44e6-9c36-a0ea19d8cae3-kube-api-access-zbk4f\") pod \"catalog-operator-7d9c49f57b-tlnkd\" (UID: \"10944f9c-8ce9-44e6-9c36-a0ea19d8cae3\") " pod="openshift-operator-lifecycle-manager/catalog-operator-7d9c49f57b-tlnkd" Mar 13 12:37:26.666228 master-0 kubenswrapper[7518]: I0313 12:37:26.666200 7518 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" Mar 13 12:37:26.681849 master-0 kubenswrapper[7518]: I0313 12:37:26.681785 7518 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ec5ec2e2-f7b3-43a1-87da-fbbe0ee5b118-kube-api-access\") pod \"kube-apiserver-operator-68bd585b-qxmnf\" (UID: 
\"ec5ec2e2-f7b3-43a1-87da-fbbe0ee5b118\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-68bd585b-qxmnf" Mar 13 12:37:26.683500 master-0 kubenswrapper[7518]: I0313 12:37:26.683461 7518 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/d6226325-c4d9-497e-8d19-a71adc66c5ac-host-run-ovn-kubernetes\") pod \"ovnkube-node-h8fwp\" (UID: \"d6226325-c4d9-497e-8d19-a71adc66c5ac\") " pod="openshift-ovn-kubernetes/ovnkube-node-h8fwp" Mar 13 12:37:26.683566 master-0 kubenswrapper[7518]: I0313 12:37:26.683508 7518 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9q2qc\" (UniqueName: \"kubernetes.io/projected/f5775266-5e58-44ed-81cb-dfe3faf38add-kube-api-access-9q2qc\") pod \"kube-storage-version-migrator-operator-7f65c457f5-hrm82\" (UID: \"f5775266-5e58-44ed-81cb-dfe3faf38add\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-7f65c457f5-hrm82" Mar 13 12:37:26.683566 master-0 kubenswrapper[7518]: I0313 12:37:26.683548 7518 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/ce3a655a-0684-4bc5-ac36-5878507537c7-host-run-multus-certs\") pod \"multus-bnn7n\" (UID: \"ce3a655a-0684-4bc5-ac36-5878507537c7\") " pod="openshift-multus/multus-bnn7n" Mar 13 12:37:26.683634 master-0 kubenswrapper[7518]: I0313 12:37:26.683570 7518 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/152689b1-5875-4a9a-bb25-bee858523168-os-release\") pod \"multus-additional-cni-plugins-78p2k\" (UID: \"152689b1-5875-4a9a-bb25-bee858523168\") " pod="openshift-multus/multus-additional-cni-plugins-78p2k" Mar 13 12:37:26.683634 master-0 kubenswrapper[7518]: I0313 12:37:26.683597 7518 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/f39d7f76-0075-44c3-9101-eb2607cb176a-etc-cvo-updatepayloads\") pod \"cluster-version-operator-745944c6b7-mbjxt\" (UID: \"f39d7f76-0075-44c3-9101-eb2607cb176a\") " pod="openshift-cluster-version/cluster-version-operator-745944c6b7-mbjxt" Mar 13 12:37:26.683634 master-0 kubenswrapper[7518]: I0313 12:37:26.683621 7518 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d6226325-c4d9-497e-8d19-a71adc66c5ac-host-slash\") pod \"ovnkube-node-h8fwp\" (UID: \"d6226325-c4d9-497e-8d19-a71adc66c5ac\") " pod="openshift-ovn-kubernetes/ovnkube-node-h8fwp" Mar 13 12:37:26.683709 master-0 kubenswrapper[7518]: I0313 12:37:26.683639 7518 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4j5fc\" (UniqueName: \"kubernetes.io/projected/d6226325-c4d9-497e-8d19-a71adc66c5ac-kube-api-access-4j5fc\") pod \"ovnkube-node-h8fwp\" (UID: \"d6226325-c4d9-497e-8d19-a71adc66c5ac\") " pod="openshift-ovn-kubernetes/ovnkube-node-h8fwp" Mar 13 12:37:26.683709 master-0 kubenswrapper[7518]: I0313 12:37:26.683658 7518 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/ce3a655a-0684-4bc5-ac36-5878507537c7-system-cni-dir\") pod \"multus-bnn7n\" (UID: \"ce3a655a-0684-4bc5-ac36-5878507537c7\") " pod="openshift-multus/multus-bnn7n" Mar 13 12:37:26.683709 master-0 kubenswrapper[7518]: I0313 12:37:26.683678 7518 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/ce3a655a-0684-4bc5-ac36-5878507537c7-host-run-k8s-cni-cncf-io\") pod \"multus-bnn7n\" (UID: \"ce3a655a-0684-4bc5-ac36-5878507537c7\") " 
pod="openshift-multus/multus-bnn7n" Mar 13 12:37:26.683709 master-0 kubenswrapper[7518]: I0313 12:37:26.683700 7518 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/d3d998ee-b26f-4e30-83bc-f94f8c68060a-marketplace-operator-metrics\") pod \"marketplace-operator-64bf9778cb-7qhr4\" (UID: \"d3d998ee-b26f-4e30-83bc-f94f8c68060a\") " pod="openshift-marketplace/marketplace-operator-64bf9778cb-7qhr4" Mar 13 12:37:26.683865 master-0 kubenswrapper[7518]: I0313 12:37:26.683724 7518 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8sk7j\" (UniqueName: \"kubernetes.io/projected/604456a0-4997-43bc-87ef-283a002111fe-kube-api-access-8sk7j\") pod \"cluster-monitoring-operator-674cbfbd9d-zwtdz\" (UID: \"604456a0-4997-43bc-87ef-283a002111fe\") " pod="openshift-monitoring/cluster-monitoring-operator-674cbfbd9d-zwtdz" Mar 13 12:37:26.683865 master-0 kubenswrapper[7518]: I0313 12:37:26.683749 7518 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/d6226325-c4d9-497e-8d19-a71adc66c5ac-run-systemd\") pod \"ovnkube-node-h8fwp\" (UID: \"d6226325-c4d9-497e-8d19-a71adc66c5ac\") " pod="openshift-ovn-kubernetes/ovnkube-node-h8fwp" Mar 13 12:37:26.684317 master-0 kubenswrapper[7518]: I0313 12:37:26.684236 7518 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/f39d7f76-0075-44c3-9101-eb2607cb176a-etc-cvo-updatepayloads\") pod \"cluster-version-operator-745944c6b7-mbjxt\" (UID: \"f39d7f76-0075-44c3-9101-eb2607cb176a\") " pod="openshift-cluster-version/cluster-version-operator-745944c6b7-mbjxt" Mar 13 12:37:26.684402 master-0 kubenswrapper[7518]: I0313 12:37:26.683765 7518 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/d6226325-c4d9-497e-8d19-a71adc66c5ac-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-h8fwp\" (UID: \"d6226325-c4d9-497e-8d19-a71adc66c5ac\") " pod="openshift-ovn-kubernetes/ovnkube-node-h8fwp" Mar 13 12:37:26.684674 master-0 kubenswrapper[7518]: I0313 12:37:26.684407 7518 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/4dd0fc2f-f2ee-4447-a747-04a178288cf0-metrics-tls\") pod \"network-operator-7c649bf6d4-kh6n9\" (UID: \"4dd0fc2f-f2ee-4447-a747-04a178288cf0\") " pod="openshift-network-operator/network-operator-7c649bf6d4-kh6n9" Mar 13 12:37:26.684674 master-0 kubenswrapper[7518]: I0313 12:37:26.684460 7518 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/1f43b4e7-5cd1-46d2-a02e-0d846b2e5182-ovnkube-identity-cm\") pod \"network-node-identity-qg8q5\" (UID: \"1f43b4e7-5cd1-46d2-a02e-0d846b2e5182\") " pod="openshift-network-node-identity/network-node-identity-qg8q5" Mar 13 12:37:26.684674 master-0 kubenswrapper[7518]: I0313 12:37:26.684534 7518 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6x8kz\" (UniqueName: \"kubernetes.io/projected/3d653e1a-5903-4a02-9357-df145f028c0d-kube-api-access-6x8kz\") pod \"package-server-manager-854648ff6d-669qk\" (UID: \"3d653e1a-5903-4a02-9357-df145f028c0d\") " pod="openshift-operator-lifecycle-manager/package-server-manager-854648ff6d-669qk" Mar 13 12:37:26.684674 master-0 kubenswrapper[7518]: I0313 12:37:26.684624 7518 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/08e2bc8e-ca80-454c-81dc-211d122e32e0-host-slash\") pod \"iptables-alerter-qz6pg\" (UID: \"08e2bc8e-ca80-454c-81dc-211d122e32e0\") " 
pod="openshift-network-operator/iptables-alerter-qz6pg" Mar 13 12:37:26.684814 master-0 kubenswrapper[7518]: I0313 12:37:26.684724 7518 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/d6226325-c4d9-497e-8d19-a71adc66c5ac-node-log\") pod \"ovnkube-node-h8fwp\" (UID: \"d6226325-c4d9-497e-8d19-a71adc66c5ac\") " pod="openshift-ovn-kubernetes/ovnkube-node-h8fwp" Mar 13 12:37:26.684814 master-0 kubenswrapper[7518]: I0313 12:37:26.684796 7518 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/4dd0fc2f-f2ee-4447-a747-04a178288cf0-metrics-tls\") pod \"network-operator-7c649bf6d4-kh6n9\" (UID: \"4dd0fc2f-f2ee-4447-a747-04a178288cf0\") " pod="openshift-network-operator/network-operator-7c649bf6d4-kh6n9" Mar 13 12:37:26.685673 master-0 kubenswrapper[7518]: I0313 12:37:26.684761 7518 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-tuning-operator-tls\" (UniqueName: \"kubernetes.io/secret/3020d236-03e0-4916-97dd-f1085632ca43-node-tuning-operator-tls\") pod \"cluster-node-tuning-operator-66c7586884-cz8pc\" (UID: \"3020d236-03e0-4916-97dd-f1085632ca43\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-66c7586884-cz8pc" Mar 13 12:37:26.685731 master-0 kubenswrapper[7518]: I0313 12:37:26.685694 7518 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mlvjp\" (UniqueName: \"kubernetes.io/projected/5ae41cff-0949-47f8-aae9-ae133191476d-kube-api-access-mlvjp\") pod \"ovnkube-control-plane-66b55d57d-5cww5\" (UID: \"5ae41cff-0949-47f8-aae9-ae133191476d\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-66b55d57d-5cww5" Mar 13 12:37:26.686253 master-0 kubenswrapper[7518]: I0313 12:37:26.685906 7518 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/d6226325-c4d9-497e-8d19-a71adc66c5ac-etc-openvswitch\") pod \"ovnkube-node-h8fwp\" (UID: \"d6226325-c4d9-497e-8d19-a71adc66c5ac\") " pod="openshift-ovn-kubernetes/ovnkube-node-h8fwp" Mar 13 12:37:26.686253 master-0 kubenswrapper[7518]: I0313 12:37:26.686246 7518 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/d6226325-c4d9-497e-8d19-a71adc66c5ac-host-cni-bin\") pod \"ovnkube-node-h8fwp\" (UID: \"d6226325-c4d9-497e-8d19-a71adc66c5ac\") " pod="openshift-ovn-kubernetes/ovnkube-node-h8fwp" Mar 13 12:37:26.686352 master-0 kubenswrapper[7518]: I0313 12:37:26.686294 7518 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/4c0b18db-06ad-4d58-a353-f6fd96309dea-webhook-certs\") pod \"multus-admission-controller-8d675b596-96gds\" (UID: \"4c0b18db-06ad-4d58-a353-f6fd96309dea\") " pod="openshift-multus/multus-admission-controller-8d675b596-96gds" Mar 13 12:37:26.686352 master-0 kubenswrapper[7518]: I0313 12:37:26.686321 7518 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d11f8baa-6e8e-4ac0-9b23-1c44efd0ab2a-config\") pod \"authentication-operator-7c6989d6c4-tc4ht\" (UID: \"d11f8baa-6e8e-4ac0-9b23-1c44efd0ab2a\") " pod="openshift-authentication-operator/authentication-operator-7c6989d6c4-tc4ht" Mar 13 12:37:26.686352 master-0 kubenswrapper[7518]: I0313 12:37:26.686344 7518 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/5ae41cff-0949-47f8-aae9-ae133191476d-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-66b55d57d-5cww5\" (UID: \"5ae41cff-0949-47f8-aae9-ae133191476d\") " 
pod="openshift-ovn-kubernetes/ovnkube-control-plane-66b55d57d-5cww5" Mar 13 12:37:26.686611 master-0 kubenswrapper[7518]: I0313 12:37:26.686559 7518 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d11f8baa-6e8e-4ac0-9b23-1c44efd0ab2a-config\") pod \"authentication-operator-7c6989d6c4-tc4ht\" (UID: \"d11f8baa-6e8e-4ac0-9b23-1c44efd0ab2a\") " pod="openshift-authentication-operator/authentication-operator-7c6989d6c4-tc4ht" Mar 13 12:37:26.686611 master-0 kubenswrapper[7518]: I0313 12:37:26.686567 7518 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/5ae41cff-0949-47f8-aae9-ae133191476d-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-66b55d57d-5cww5\" (UID: \"5ae41cff-0949-47f8-aae9-ae133191476d\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-66b55d57d-5cww5" Mar 13 12:37:26.686707 master-0 kubenswrapper[7518]: I0313 12:37:26.686368 7518 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/d6226325-c4d9-497e-8d19-a71adc66c5ac-log-socket\") pod \"ovnkube-node-h8fwp\" (UID: \"d6226325-c4d9-497e-8d19-a71adc66c5ac\") " pod="openshift-ovn-kubernetes/ovnkube-node-h8fwp" Mar 13 12:37:26.686707 master-0 kubenswrapper[7518]: I0313 12:37:26.686658 7518 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/3020d236-03e0-4916-97dd-f1085632ca43-trusted-ca\") pod \"cluster-node-tuning-operator-66c7586884-cz8pc\" (UID: \"3020d236-03e0-4916-97dd-f1085632ca43\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-66c7586884-cz8pc" Mar 13 12:37:26.686707 master-0 kubenswrapper[7518]: I0313 12:37:26.686696 7518 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/ce3a655a-0684-4bc5-ac36-5878507537c7-multus-conf-dir\") pod \"multus-bnn7n\" (UID: \"ce3a655a-0684-4bc5-ac36-5878507537c7\") " pod="openshift-multus/multus-bnn7n" Mar 13 12:37:26.686893 master-0 kubenswrapper[7518]: I0313 12:37:26.686721 7518 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vgbvr\" (UniqueName: \"kubernetes.io/projected/ce3a655a-0684-4bc5-ac36-5878507537c7-kube-api-access-vgbvr\") pod \"multus-bnn7n\" (UID: \"ce3a655a-0684-4bc5-ac36-5878507537c7\") " pod="openshift-multus/multus-bnn7n" Mar 13 12:37:26.686893 master-0 kubenswrapper[7518]: I0313 12:37:26.686773 7518 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/089cfabc-9d3d-4260-bb16-8b5eaf73b3fa-config\") pod \"openshift-apiserver-operator-799b6db4d7-xchrj\" (UID: \"089cfabc-9d3d-4260-bb16-8b5eaf73b3fa\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-799b6db4d7-xchrj" Mar 13 12:37:26.686893 master-0 kubenswrapper[7518]: I0313 12:37:26.686800 7518 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vg8tz\" (UniqueName: \"kubernetes.io/projected/089cfabc-9d3d-4260-bb16-8b5eaf73b3fa-kube-api-access-vg8tz\") pod \"openshift-apiserver-operator-799b6db4d7-xchrj\" (UID: \"089cfabc-9d3d-4260-bb16-8b5eaf73b3fa\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-799b6db4d7-xchrj" Mar 13 12:37:26.686893 master-0 kubenswrapper[7518]: I0313 12:37:26.686821 7518 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c24hd\" (UniqueName: \"kubernetes.io/projected/3020d236-03e0-4916-97dd-f1085632ca43-kube-api-access-c24hd\") pod \"cluster-node-tuning-operator-66c7586884-cz8pc\" (UID: \"3020d236-03e0-4916-97dd-f1085632ca43\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-66c7586884-cz8pc" Mar 
13 12:37:26.686893 master-0 kubenswrapper[7518]: I0313 12:37:26.686837 7518 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/d3d998ee-b26f-4e30-83bc-f94f8c68060a-marketplace-trusted-ca\") pod \"marketplace-operator-64bf9778cb-7qhr4\" (UID: \"d3d998ee-b26f-4e30-83bc-f94f8c68060a\") " pod="openshift-marketplace/marketplace-operator-64bf9778cb-7qhr4" Mar 13 12:37:26.686893 master-0 kubenswrapper[7518]: I0313 12:37:26.686864 7518 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-btf8q\" (UniqueName: \"kubernetes.io/projected/269aedfd-4274-4998-bd0d-603b67257666-kube-api-access-btf8q\") pod \"network-check-target-pnwsc\" (UID: \"269aedfd-4274-4998-bd0d-603b67257666\") " pod="openshift-network-diagnostics/network-check-target-pnwsc" Mar 13 12:37:26.686893 master-0 kubenswrapper[7518]: I0313 12:37:26.686880 7518 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/034aaf8e-95df-4171-bae4-e7abe58d15f7-serving-cert\") pod \"service-ca-operator-69b6fc6b88-vmscz\" (UID: \"034aaf8e-95df-4171-bae4-e7abe58d15f7\") " pod="openshift-service-ca-operator/service-ca-operator-69b6fc6b88-vmscz" Mar 13 12:37:26.687126 master-0 kubenswrapper[7518]: I0313 12:37:26.686922 7518 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-olm-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/887d261f-d07f-4ef0-a230-6568f47acf4d-cluster-olm-operator-serving-cert\") pod \"cluster-olm-operator-77899cf6d-7nvbn\" (UID: \"887d261f-d07f-4ef0-a230-6568f47acf4d\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-77899cf6d-7nvbn" Mar 13 12:37:26.687126 master-0 kubenswrapper[7518]: I0313 12:37:26.686938 7518 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" 
(UniqueName: \"kubernetes.io/secret/3020d236-03e0-4916-97dd-f1085632ca43-apiservice-cert\") pod \"cluster-node-tuning-operator-66c7586884-cz8pc\" (UID: \"3020d236-03e0-4916-97dd-f1085632ca43\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-66c7586884-cz8pc" Mar 13 12:37:26.687126 master-0 kubenswrapper[7518]: I0313 12:37:26.686954 7518 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/ce3a655a-0684-4bc5-ac36-5878507537c7-cnibin\") pod \"multus-bnn7n\" (UID: \"ce3a655a-0684-4bc5-ac36-5878507537c7\") " pod="openshift-multus/multus-bnn7n" Mar 13 12:37:26.687126 master-0 kubenswrapper[7518]: I0313 12:37:26.686969 7518 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/ce3a655a-0684-4bc5-ac36-5878507537c7-host-run-netns\") pod \"multus-bnn7n\" (UID: \"ce3a655a-0684-4bc5-ac36-5878507537c7\") " pod="openshift-multus/multus-bnn7n" Mar 13 12:37:26.687126 master-0 kubenswrapper[7518]: I0313 12:37:26.686983 7518 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/d6226325-c4d9-497e-8d19-a71adc66c5ac-host-run-netns\") pod \"ovnkube-node-h8fwp\" (UID: \"d6226325-c4d9-497e-8d19-a71adc66c5ac\") " pod="openshift-ovn-kubernetes/ovnkube-node-h8fwp" Mar 13 12:37:26.687126 master-0 kubenswrapper[7518]: I0313 12:37:26.686998 7518 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/d6226325-c4d9-497e-8d19-a71adc66c5ac-env-overrides\") pod \"ovnkube-node-h8fwp\" (UID: \"d6226325-c4d9-497e-8d19-a71adc66c5ac\") " pod="openshift-ovn-kubernetes/ovnkube-node-h8fwp" Mar 13 12:37:26.687126 master-0 kubenswrapper[7518]: I0313 12:37:26.687014 7518 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/034aaf8e-95df-4171-bae4-e7abe58d15f7-config\") pod \"service-ca-operator-69b6fc6b88-vmscz\" (UID: \"034aaf8e-95df-4171-bae4-e7abe58d15f7\") " pod="openshift-service-ca-operator/service-ca-operator-69b6fc6b88-vmscz" Mar 13 12:37:26.687126 master-0 kubenswrapper[7518]: I0313 12:37:26.687045 7518 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x5nb7\" (UniqueName: \"kubernetes.io/projected/d3d998ee-b26f-4e30-83bc-f94f8c68060a-kube-api-access-x5nb7\") pod \"marketplace-operator-64bf9778cb-7qhr4\" (UID: \"d3d998ee-b26f-4e30-83bc-f94f8c68060a\") " pod="openshift-marketplace/marketplace-operator-64bf9778cb-7qhr4" Mar 13 12:37:26.687126 master-0 kubenswrapper[7518]: I0313 12:37:26.687063 7518 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bcf05594-4c10-4b54-a47c-d55e323f1f87-image-registry-operator-tls\") pod \"cluster-image-registry-operator-86d6d77c7c-q287n\" (UID: \"bcf05594-4c10-4b54-a47c-d55e323f1f87\") " pod="openshift-image-registry/cluster-image-registry-operator-86d6d77c7c-q287n" Mar 13 12:37:26.687126 master-0 kubenswrapper[7518]: I0313 12:37:26.687081 7518 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bcf05594-4c10-4b54-a47c-d55e323f1f87-bound-sa-token\") pod \"cluster-image-registry-operator-86d6d77c7c-q287n\" (UID: \"bcf05594-4c10-4b54-a47c-d55e323f1f87\") " pod="openshift-image-registry/cluster-image-registry-operator-86d6d77c7c-q287n" Mar 13 12:37:26.687126 master-0 kubenswrapper[7518]: I0313 12:37:26.687096 7518 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/13f32761-b386-4f93-b3c0-b16ea53d338a-metrics-tls\") pod 
\"dns-operator-589895fbb7-mmwk7\" (UID: \"13f32761-b386-4f93-b3c0-b16ea53d338a\") " pod="openshift-dns-operator/dns-operator-589895fbb7-mmwk7" Mar 13 12:37:26.687126 master-0 kubenswrapper[7518]: I0313 12:37:26.687122 7518 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/d6226325-c4d9-497e-8d19-a71adc66c5ac-host-cni-netd\") pod \"ovnkube-node-h8fwp\" (UID: \"d6226325-c4d9-497e-8d19-a71adc66c5ac\") " pod="openshift-ovn-kubernetes/ovnkube-node-h8fwp" Mar 13 12:37:26.687463 master-0 kubenswrapper[7518]: I0313 12:37:26.687162 7518 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/5ae41cff-0949-47f8-aae9-ae133191476d-env-overrides\") pod \"ovnkube-control-plane-66b55d57d-5cww5\" (UID: \"5ae41cff-0949-47f8-aae9-ae133191476d\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-66b55d57d-5cww5" Mar 13 12:37:26.687463 master-0 kubenswrapper[7518]: I0313 12:37:26.687198 7518 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-config\" (UniqueName: \"kubernetes.io/configmap/604456a0-4997-43bc-87ef-283a002111fe-telemetry-config\") pod \"cluster-monitoring-operator-674cbfbd9d-zwtdz\" (UID: \"604456a0-4997-43bc-87ef-283a002111fe\") " pod="openshift-monitoring/cluster-monitoring-operator-674cbfbd9d-zwtdz" Mar 13 12:37:26.687463 master-0 kubenswrapper[7518]: I0313 12:37:26.687216 7518 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/29b6aa89-0416-4595-9deb-10b290521d86-metrics-certs\") pod \"network-metrics-daemon-r9lmb\" (UID: \"29b6aa89-0416-4595-9deb-10b290521d86\") " pod="openshift-multus/network-metrics-daemon-r9lmb" Mar 13 12:37:26.687463 master-0 kubenswrapper[7518]: I0313 12:37:26.687232 7518 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/ce3a655a-0684-4bc5-ac36-5878507537c7-hostroot\") pod \"multus-bnn7n\" (UID: \"ce3a655a-0684-4bc5-ac36-5878507537c7\") " pod="openshift-multus/multus-bnn7n" Mar 13 12:37:26.687463 master-0 kubenswrapper[7518]: I0313 12:37:26.687253 7518 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/ce3a655a-0684-4bc5-ac36-5878507537c7-etc-kubernetes\") pod \"multus-bnn7n\" (UID: \"ce3a655a-0684-4bc5-ac36-5878507537c7\") " pod="openshift-multus/multus-bnn7n" Mar 13 12:37:26.687463 master-0 kubenswrapper[7518]: I0313 12:37:26.687412 7518 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/d6226325-c4d9-497e-8d19-a71adc66c5ac-env-overrides\") pod \"ovnkube-node-h8fwp\" (UID: \"d6226325-c4d9-497e-8d19-a71adc66c5ac\") " pod="openshift-ovn-kubernetes/ovnkube-node-h8fwp" Mar 13 12:37:26.687463 master-0 kubenswrapper[7518]: I0313 12:37:26.687433 7518 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/3020d236-03e0-4916-97dd-f1085632ca43-trusted-ca\") pod \"cluster-node-tuning-operator-66c7586884-cz8pc\" (UID: \"3020d236-03e0-4916-97dd-f1085632ca43\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-66c7586884-cz8pc" Mar 13 12:37:26.687463 master-0 kubenswrapper[7518]: I0313 12:37:26.687441 7518 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/089cfabc-9d3d-4260-bb16-8b5eaf73b3fa-config\") pod \"openshift-apiserver-operator-799b6db4d7-xchrj\" (UID: \"089cfabc-9d3d-4260-bb16-8b5eaf73b3fa\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-799b6db4d7-xchrj" Mar 13 12:37:26.687663 master-0 kubenswrapper[7518]: I0313 
12:37:26.687554 7518 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/034aaf8e-95df-4171-bae4-e7abe58d15f7-serving-cert\") pod \"service-ca-operator-69b6fc6b88-vmscz\" (UID: \"034aaf8e-95df-4171-bae4-e7abe58d15f7\") " pod="openshift-service-ca-operator/service-ca-operator-69b6fc6b88-vmscz" Mar 13 12:37:26.687663 master-0 kubenswrapper[7518]: I0313 12:37:26.687584 7518 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/034aaf8e-95df-4171-bae4-e7abe58d15f7-config\") pod \"service-ca-operator-69b6fc6b88-vmscz\" (UID: \"034aaf8e-95df-4171-bae4-e7abe58d15f7\") " pod="openshift-service-ca-operator/service-ca-operator-69b6fc6b88-vmscz" Mar 13 12:37:26.687663 master-0 kubenswrapper[7518]: I0313 12:37:26.687587 7518 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bcf05594-4c10-4b54-a47c-d55e323f1f87-trusted-ca\") pod \"cluster-image-registry-operator-86d6d77c7c-q287n\" (UID: \"bcf05594-4c10-4b54-a47c-d55e323f1f87\") " pod="openshift-image-registry/cluster-image-registry-operator-86d6d77c7c-q287n" Mar 13 12:37:26.688278 master-0 kubenswrapper[7518]: I0313 12:37:26.687800 7518 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bcf05594-4c10-4b54-a47c-d55e323f1f87-trusted-ca\") pod \"cluster-image-registry-operator-86d6d77c7c-q287n\" (UID: \"bcf05594-4c10-4b54-a47c-d55e323f1f87\") " pod="openshift-image-registry/cluster-image-registry-operator-86d6d77c7c-q287n" Mar 13 12:37:26.688278 master-0 kubenswrapper[7518]: I0313 12:37:26.687835 7518 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"whereabouts-configmap\" (UniqueName: \"kubernetes.io/configmap/152689b1-5875-4a9a-bb25-bee858523168-whereabouts-configmap\") pod \"multus-additional-cni-plugins-78p2k\" (UID: 
\"152689b1-5875-4a9a-bb25-bee858523168\") " pod="openshift-multus/multus-additional-cni-plugins-78p2k" Mar 13 12:37:26.688278 master-0 kubenswrapper[7518]: I0313 12:37:26.687930 7518 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/f39d7f76-0075-44c3-9101-eb2607cb176a-etc-ssl-certs\") pod \"cluster-version-operator-745944c6b7-mbjxt\" (UID: \"f39d7f76-0075-44c3-9101-eb2607cb176a\") " pod="openshift-cluster-version/cluster-version-operator-745944c6b7-mbjxt" Mar 13 12:37:26.688278 master-0 kubenswrapper[7518]: I0313 12:37:26.687937 7518 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cluster-olm-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/887d261f-d07f-4ef0-a230-6568f47acf4d-cluster-olm-operator-serving-cert\") pod \"cluster-olm-operator-77899cf6d-7nvbn\" (UID: \"887d261f-d07f-4ef0-a230-6568f47acf4d\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-77899cf6d-7nvbn" Mar 13 12:37:26.688278 master-0 kubenswrapper[7518]: I0313 12:37:26.688004 7518 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"whereabouts-configmap\" (UniqueName: \"kubernetes.io/configmap/152689b1-5875-4a9a-bb25-bee858523168-whereabouts-configmap\") pod \"multus-additional-cni-plugins-78p2k\" (UID: \"152689b1-5875-4a9a-bb25-bee858523168\") " pod="openshift-multus/multus-additional-cni-plugins-78p2k" Mar 13 12:37:26.688278 master-0 kubenswrapper[7518]: I0313 12:37:26.688032 7518 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/f39d7f76-0075-44c3-9101-eb2607cb176a-etc-ssl-certs\") pod \"cluster-version-operator-745944c6b7-mbjxt\" (UID: \"f39d7f76-0075-44c3-9101-eb2607cb176a\") " pod="openshift-cluster-version/cluster-version-operator-745944c6b7-mbjxt" Mar 13 12:37:26.688278 master-0 kubenswrapper[7518]: I0313 12:37:26.688096 7518 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/604456a0-4997-43bc-87ef-283a002111fe-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-674cbfbd9d-zwtdz\" (UID: \"604456a0-4997-43bc-87ef-283a002111fe\") " pod="openshift-monitoring/cluster-monitoring-operator-674cbfbd9d-zwtdz" Mar 13 12:37:26.688278 master-0 kubenswrapper[7518]: I0313 12:37:26.688146 7518 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/d6226325-c4d9-497e-8d19-a71adc66c5ac-host-kubelet\") pod \"ovnkube-node-h8fwp\" (UID: \"d6226325-c4d9-497e-8d19-a71adc66c5ac\") " pod="openshift-ovn-kubernetes/ovnkube-node-h8fwp" Mar 13 12:37:26.688278 master-0 kubenswrapper[7518]: I0313 12:37:26.688171 7518 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/ce3a655a-0684-4bc5-ac36-5878507537c7-multus-socket-dir-parent\") pod \"multus-bnn7n\" (UID: \"ce3a655a-0684-4bc5-ac36-5878507537c7\") " pod="openshift-multus/multus-bnn7n" Mar 13 12:37:26.688278 master-0 kubenswrapper[7518]: I0313 12:37:26.688198 7518 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9psfn\" (UniqueName: \"kubernetes.io/projected/4c0b18db-06ad-4d58-a353-f6fd96309dea-kube-api-access-9psfn\") pod \"multus-admission-controller-8d675b596-96gds\" (UID: \"4c0b18db-06ad-4d58-a353-f6fd96309dea\") " pod="openshift-multus/multus-admission-controller-8d675b596-96gds" Mar 13 12:37:26.688278 master-0 kubenswrapper[7518]: I0313 12:37:26.688199 7518 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/5ae41cff-0949-47f8-aae9-ae133191476d-env-overrides\") pod \"ovnkube-control-plane-66b55d57d-5cww5\" (UID: 
\"5ae41cff-0949-47f8-aae9-ae133191476d\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-66b55d57d-5cww5" Mar 13 12:37:26.688278 master-0 kubenswrapper[7518]: I0313 12:37:26.688246 7518 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/d6226325-c4d9-497e-8d19-a71adc66c5ac-run-ovn\") pod \"ovnkube-node-h8fwp\" (UID: \"d6226325-c4d9-497e-8d19-a71adc66c5ac\") " pod="openshift-ovn-kubernetes/ovnkube-node-h8fwp" Mar 13 12:37:26.688671 master-0 kubenswrapper[7518]: I0313 12:37:26.688309 7518 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/ce3a655a-0684-4bc5-ac36-5878507537c7-multus-cni-dir\") pod \"multus-bnn7n\" (UID: \"ce3a655a-0684-4bc5-ac36-5878507537c7\") " pod="openshift-multus/multus-bnn7n" Mar 13 12:37:26.688671 master-0 kubenswrapper[7518]: I0313 12:37:26.688365 7518 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/ce3a655a-0684-4bc5-ac36-5878507537c7-multus-daemon-config\") pod \"multus-bnn7n\" (UID: \"ce3a655a-0684-4bc5-ac36-5878507537c7\") " pod="openshift-multus/multus-bnn7n" Mar 13 12:37:26.688671 master-0 kubenswrapper[7518]: I0313 12:37:26.688395 7518 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/5ae41cff-0949-47f8-aae9-ae133191476d-ovnkube-config\") pod \"ovnkube-control-plane-66b55d57d-5cww5\" (UID: \"5ae41cff-0949-47f8-aae9-ae133191476d\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-66b55d57d-5cww5" Mar 13 12:37:26.688671 master-0 kubenswrapper[7518]: I0313 12:37:26.688413 7518 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: 
\"kubernetes.io/host-path/d6226325-c4d9-497e-8d19-a71adc66c5ac-systemd-units\") pod \"ovnkube-node-h8fwp\" (UID: \"d6226325-c4d9-497e-8d19-a71adc66c5ac\") " pod="openshift-ovn-kubernetes/ovnkube-node-h8fwp" Mar 13 12:37:26.688671 master-0 kubenswrapper[7518]: I0313 12:37:26.688420 7518 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-config\" (UniqueName: \"kubernetes.io/configmap/604456a0-4997-43bc-87ef-283a002111fe-telemetry-config\") pod \"cluster-monitoring-operator-674cbfbd9d-zwtdz\" (UID: \"604456a0-4997-43bc-87ef-283a002111fe\") " pod="openshift-monitoring/cluster-monitoring-operator-674cbfbd9d-zwtdz" Mar 13 12:37:26.688671 master-0 kubenswrapper[7518]: I0313 12:37:26.688471 7518 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/089cfabc-9d3d-4260-bb16-8b5eaf73b3fa-serving-cert\") pod \"openshift-apiserver-operator-799b6db4d7-xchrj\" (UID: \"089cfabc-9d3d-4260-bb16-8b5eaf73b3fa\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-799b6db4d7-xchrj" Mar 13 12:37:26.688671 master-0 kubenswrapper[7518]: I0313 12:37:26.688530 7518 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p8hcd\" (UniqueName: \"kubernetes.io/projected/d5a19b80-d488-46d3-a4a8-0b80361077e1-kube-api-access-p8hcd\") pod \"olm-operator-d64cfc9db-rfqb9\" (UID: \"d5a19b80-d488-46d3-a4a8-0b80361077e1\") " pod="openshift-operator-lifecycle-manager/olm-operator-d64cfc9db-rfqb9" Mar 13 12:37:26.688671 master-0 kubenswrapper[7518]: I0313 12:37:26.688560 7518 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fnw9d\" (UniqueName: \"kubernetes.io/projected/4dd0fc2f-f2ee-4447-a747-04a178288cf0-kube-api-access-fnw9d\") pod \"network-operator-7c649bf6d4-kh6n9\" (UID: \"4dd0fc2f-f2ee-4447-a747-04a178288cf0\") " pod="openshift-network-operator/network-operator-7c649bf6d4-kh6n9" Mar 13 
12:37:26.688671 master-0 kubenswrapper[7518]: I0313 12:37:26.688570 7518 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/ce3a655a-0684-4bc5-ac36-5878507537c7-multus-daemon-config\") pod \"multus-bnn7n\" (UID: \"ce3a655a-0684-4bc5-ac36-5878507537c7\") " pod="openshift-multus/multus-bnn7n" Mar 13 12:37:26.688671 master-0 kubenswrapper[7518]: I0313 12:37:26.688605 7518 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/5ae41cff-0949-47f8-aae9-ae133191476d-ovnkube-config\") pod \"ovnkube-control-plane-66b55d57d-5cww5\" (UID: \"5ae41cff-0949-47f8-aae9-ae133191476d\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-66b55d57d-5cww5" Mar 13 12:37:26.688671 master-0 kubenswrapper[7518]: I0313 12:37:26.688617 7518 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/152689b1-5875-4a9a-bb25-bee858523168-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-78p2k\" (UID: \"152689b1-5875-4a9a-bb25-bee858523168\") " pod="openshift-multus/multus-additional-cni-plugins-78p2k" Mar 13 12:37:26.688671 master-0 kubenswrapper[7518]: I0313 12:37:26.688647 7518 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-brzd4\" (UniqueName: \"kubernetes.io/projected/1f43b4e7-5cd1-46d2-a02e-0d846b2e5182-kube-api-access-brzd4\") pod \"network-node-identity-qg8q5\" (UID: \"1f43b4e7-5cd1-46d2-a02e-0d846b2e5182\") " pod="openshift-network-node-identity/network-node-identity-qg8q5" Mar 13 12:37:26.688671 master-0 kubenswrapper[7518]: I0313 12:37:26.688675 7518 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/089cfabc-9d3d-4260-bb16-8b5eaf73b3fa-serving-cert\") pod \"openshift-apiserver-operator-799b6db4d7-xchrj\" (UID: 
\"089cfabc-9d3d-4260-bb16-8b5eaf73b3fa\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-799b6db4d7-xchrj" Mar 13 12:37:26.689071 master-0 kubenswrapper[7518]: I0313 12:37:26.688719 7518 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/d5a19b80-d488-46d3-a4a8-0b80361077e1-srv-cert\") pod \"olm-operator-d64cfc9db-rfqb9\" (UID: \"d5a19b80-d488-46d3-a4a8-0b80361077e1\") " pod="openshift-operator-lifecycle-manager/olm-operator-d64cfc9db-rfqb9" Mar 13 12:37:26.689071 master-0 kubenswrapper[7518]: I0313 12:37:26.688787 7518 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/d6226325-c4d9-497e-8d19-a71adc66c5ac-ovnkube-script-lib\") pod \"ovnkube-node-h8fwp\" (UID: \"d6226325-c4d9-497e-8d19-a71adc66c5ac\") " pod="openshift-ovn-kubernetes/ovnkube-node-h8fwp" Mar 13 12:37:26.689071 master-0 kubenswrapper[7518]: I0313 12:37:26.688819 7518 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f5775266-5e58-44ed-81cb-dfe3faf38add-config\") pod \"kube-storage-version-migrator-operator-7f65c457f5-hrm82\" (UID: \"f5775266-5e58-44ed-81cb-dfe3faf38add\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-7f65c457f5-hrm82" Mar 13 12:37:26.689071 master-0 kubenswrapper[7518]: I0313 12:37:26.688833 7518 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/152689b1-5875-4a9a-bb25-bee858523168-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-78p2k\" (UID: \"152689b1-5875-4a9a-bb25-bee858523168\") " pod="openshift-multus/multus-additional-cni-plugins-78p2k" Mar 13 12:37:26.689071 master-0 kubenswrapper[7518]: I0313 12:37:26.688868 7518 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/ce3a655a-0684-4bc5-ac36-5878507537c7-host-var-lib-cni-bin\") pod \"multus-bnn7n\" (UID: \"ce3a655a-0684-4bc5-ac36-5878507537c7\") " pod="openshift-multus/multus-bnn7n" Mar 13 12:37:26.689071 master-0 kubenswrapper[7518]: I0313 12:37:26.688893 7518 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/ce3a655a-0684-4bc5-ac36-5878507537c7-host-var-lib-kubelet\") pod \"multus-bnn7n\" (UID: \"ce3a655a-0684-4bc5-ac36-5878507537c7\") " pod="openshift-multus/multus-bnn7n" Mar 13 12:37:26.689071 master-0 kubenswrapper[7518]: I0313 12:37:26.688916 7518 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/2f79578c-bbfb-4968-893a-730deb4c01f9-metrics-tls\") pod \"ingress-operator-677db989d6-ckl2j\" (UID: \"2f79578c-bbfb-4968-893a-730deb4c01f9\") " pod="openshift-ingress-operator/ingress-operator-677db989d6-ckl2j" Mar 13 12:37:26.689071 master-0 kubenswrapper[7518]: I0313 12:37:26.689029 7518 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/10944f9c-8ce9-44e6-9c36-a0ea19d8cae3-srv-cert\") pod \"catalog-operator-7d9c49f57b-tlnkd\" (UID: \"10944f9c-8ce9-44e6-9c36-a0ea19d8cae3\") " pod="openshift-operator-lifecycle-manager/catalog-operator-7d9c49f57b-tlnkd" Mar 13 12:37:26.689071 master-0 kubenswrapper[7518]: I0313 12:37:26.689038 7518 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f5775266-5e58-44ed-81cb-dfe3faf38add-config\") pod \"kube-storage-version-migrator-operator-7f65c457f5-hrm82\" (UID: \"f5775266-5e58-44ed-81cb-dfe3faf38add\") " 
pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-7f65c457f5-hrm82"
Mar 13 12:37:26.689071 master-0 kubenswrapper[7518]: E0313 12:37:26.689047 7518 secret.go:189] Couldn't get secret openshift-ingress-operator/metrics-tls: secret "metrics-tls" not found
Mar 13 12:37:26.689071 master-0 kubenswrapper[7518]: I0313 12:37:26.689060 7518 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cbtjs\" (UniqueName: \"kubernetes.io/projected/29b6aa89-0416-4595-9deb-10b290521d86-kube-api-access-cbtjs\") pod \"network-metrics-daemon-r9lmb\" (UID: \"29b6aa89-0416-4595-9deb-10b290521d86\") " pod="openshift-multus/network-metrics-daemon-r9lmb"
Mar 13 12:37:26.689822 master-0 kubenswrapper[7518]: E0313 12:37:26.689109 7518 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2f79578c-bbfb-4968-893a-730deb4c01f9-metrics-tls podName:2f79578c-bbfb-4968-893a-730deb4c01f9 nodeName:}" failed. No retries permitted until 2026-03-13 12:37:27.189092488 +0000 UTC m=+1.822161675 (durationBeforeRetry 500ms).
Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/2f79578c-bbfb-4968-893a-730deb4c01f9-metrics-tls") pod "ingress-operator-677db989d6-ckl2j" (UID: "2f79578c-bbfb-4968-893a-730deb4c01f9") : secret "metrics-tls" not found
Mar 13 12:37:26.689822 master-0 kubenswrapper[7518]: I0313 12:37:26.689132 7518 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xstz5\" (UniqueName: \"kubernetes.io/projected/08e2bc8e-ca80-454c-81dc-211d122e32e0-kube-api-access-xstz5\") pod \"iptables-alerter-qz6pg\" (UID: \"08e2bc8e-ca80-454c-81dc-211d122e32e0\") " pod="openshift-network-operator/iptables-alerter-qz6pg"
Mar 13 12:37:26.689822 master-0 kubenswrapper[7518]: I0313 12:37:26.689209 7518 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m2p67\" (UniqueName: \"kubernetes.io/projected/13f32761-b386-4f93-b3c0-b16ea53d338a-kube-api-access-m2p67\") pod \"dns-operator-589895fbb7-mmwk7\" (UID: \"13f32761-b386-4f93-b3c0-b16ea53d338a\") " pod="openshift-dns-operator/dns-operator-589895fbb7-mmwk7"
Mar 13 12:37:26.689822 master-0 kubenswrapper[7518]: E0313 12:37:26.689244 7518 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/catalog-operator-serving-cert: secret "catalog-operator-serving-cert" not found
Mar 13 12:37:26.689822 master-0 kubenswrapper[7518]: E0313 12:37:26.689280 7518 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/10944f9c-8ce9-44e6-9c36-a0ea19d8cae3-srv-cert podName:10944f9c-8ce9-44e6-9c36-a0ea19d8cae3 nodeName:}" failed. No retries permitted until 2026-03-13 12:37:27.18926917 +0000 UTC m=+1.822338357 (durationBeforeRetry 500ms).
Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/10944f9c-8ce9-44e6-9c36-a0ea19d8cae3-srv-cert") pod "catalog-operator-7d9c49f57b-tlnkd" (UID: "10944f9c-8ce9-44e6-9c36-a0ea19d8cae3") : secret "catalog-operator-serving-cert" not found
Mar 13 12:37:26.689822 master-0 kubenswrapper[7518]: I0313 12:37:26.689311 7518 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/d6226325-c4d9-497e-8d19-a71adc66c5ac-ovnkube-config\") pod \"ovnkube-node-h8fwp\" (UID: \"d6226325-c4d9-497e-8d19-a71adc66c5ac\") " pod="openshift-ovn-kubernetes/ovnkube-node-h8fwp"
Mar 13 12:37:26.689822 master-0 kubenswrapper[7518]: I0313 12:37:26.689336 7518 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/1f43b4e7-5cd1-46d2-a02e-0d846b2e5182-webhook-cert\") pod \"network-node-identity-qg8q5\" (UID: \"1f43b4e7-5cd1-46d2-a02e-0d846b2e5182\") " pod="openshift-network-node-identity/network-node-identity-qg8q5"
Mar 13 12:37:26.689822 master-0 kubenswrapper[7518]: I0313 12:37:26.689382 7518 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pmfxj\" (UniqueName: \"kubernetes.io/projected/887d261f-d07f-4ef0-a230-6568f47acf4d-kube-api-access-pmfxj\") pod \"cluster-olm-operator-77899cf6d-7nvbn\" (UID: \"887d261f-d07f-4ef0-a230-6568f47acf4d\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-77899cf6d-7nvbn"
Mar 13 12:37:26.689822 master-0 kubenswrapper[7518]: I0313 12:37:26.689405 7518 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/ce3a655a-0684-4bc5-ac36-5878507537c7-host-var-lib-cni-multus\") pod \"multus-bnn7n\" (UID: \"ce3a655a-0684-4bc5-ac36-5878507537c7\") " pod="openshift-multus/multus-bnn7n"
Mar 13 12:37:26.689822 master-0 kubenswrapper[7518]: I0313 12:37:26.689430 7518 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d11f8baa-6e8e-4ac0-9b23-1c44efd0ab2a-trusted-ca-bundle\") pod \"authentication-operator-7c6989d6c4-tc4ht\" (UID: \"d11f8baa-6e8e-4ac0-9b23-1c44efd0ab2a\") " pod="openshift-authentication-operator/authentication-operator-7c6989d6c4-tc4ht"
Mar 13 12:37:26.689822 master-0 kubenswrapper[7518]: I0313 12:37:26.689465 7518 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/d6226325-c4d9-497e-8d19-a71adc66c5ac-var-lib-openvswitch\") pod \"ovnkube-node-h8fwp\" (UID: \"d6226325-c4d9-497e-8d19-a71adc66c5ac\") " pod="openshift-ovn-kubernetes/ovnkube-node-h8fwp"
Mar 13 12:37:26.689822 master-0 kubenswrapper[7518]: I0313 12:37:26.689481 7518 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operand-assets\" (UniqueName: \"kubernetes.io/empty-dir/887d261f-d07f-4ef0-a230-6568f47acf4d-operand-assets\") pod \"cluster-olm-operator-77899cf6d-7nvbn\" (UID: \"887d261f-d07f-4ef0-a230-6568f47acf4d\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-77899cf6d-7nvbn"
Mar 13 12:37:26.689822 master-0 kubenswrapper[7518]: I0313 12:37:26.689498 7518 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/4dd0fc2f-f2ee-4447-a747-04a178288cf0-host-etc-kube\") pod \"network-operator-7c649bf6d4-kh6n9\" (UID: \"4dd0fc2f-f2ee-4447-a747-04a178288cf0\") " pod="openshift-network-operator/network-operator-7c649bf6d4-kh6n9"
Mar 13 12:37:26.689822 master-0 kubenswrapper[7518]: I0313 12:37:26.689517 7518 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5w5r2\" (UniqueName: \"kubernetes.io/projected/034aaf8e-95df-4171-bae4-e7abe58d15f7-kube-api-access-5w5r2\") pod \"service-ca-operator-69b6fc6b88-vmscz\" (UID: \"034aaf8e-95df-4171-bae4-e7abe58d15f7\") " pod="openshift-service-ca-operator/service-ca-operator-69b6fc6b88-vmscz"
Mar 13 12:37:26.689822 master-0 kubenswrapper[7518]: I0313 12:37:26.689554 7518 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-km69t\" (UniqueName: \"kubernetes.io/projected/152689b1-5875-4a9a-bb25-bee858523168-kube-api-access-km69t\") pod \"multus-additional-cni-plugins-78p2k\" (UID: \"152689b1-5875-4a9a-bb25-bee858523168\") " pod="openshift-multus/multus-additional-cni-plugins-78p2k"
Mar 13 12:37:26.689822 master-0 kubenswrapper[7518]: I0313 12:37:26.689639 7518 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operand-assets\" (UniqueName: \"kubernetes.io/empty-dir/887d261f-d07f-4ef0-a230-6568f47acf4d-operand-assets\") pod \"cluster-olm-operator-77899cf6d-7nvbn\" (UID: \"887d261f-d07f-4ef0-a230-6568f47acf4d\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-77899cf6d-7nvbn"
Mar 13 12:37:26.689822 master-0 kubenswrapper[7518]: I0313 12:37:26.689706 7518 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f5775266-5e58-44ed-81cb-dfe3faf38add-serving-cert\") pod \"kube-storage-version-migrator-operator-7f65c457f5-hrm82\" (UID: \"f5775266-5e58-44ed-81cb-dfe3faf38add\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-7f65c457f5-hrm82"
Mar 13 12:37:26.689822 master-0 kubenswrapper[7518]: I0313 12:37:26.689730 7518 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d11f8baa-6e8e-4ac0-9b23-1c44efd0ab2a-serving-cert\") pod \"authentication-operator-7c6989d6c4-tc4ht\" (UID: \"d11f8baa-6e8e-4ac0-9b23-1c44efd0ab2a\") " pod="openshift-authentication-operator/authentication-operator-7c6989d6c4-tc4ht"
Mar 13 12:37:26.689822 master-0 kubenswrapper[7518]: I0313 12:37:26.689748 7518 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/1f43b4e7-5cd1-46d2-a02e-0d846b2e5182-env-overrides\") pod \"network-node-identity-qg8q5\" (UID: \"1f43b4e7-5cd1-46d2-a02e-0d846b2e5182\") " pod="openshift-network-node-identity/network-node-identity-qg8q5"
Mar 13 12:37:26.689822 master-0 kubenswrapper[7518]: I0313 12:37:26.689714 7518 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/d6226325-c4d9-497e-8d19-a71adc66c5ac-ovnkube-config\") pod \"ovnkube-node-h8fwp\" (UID: \"d6226325-c4d9-497e-8d19-a71adc66c5ac\") " pod="openshift-ovn-kubernetes/ovnkube-node-h8fwp"
Mar 13 12:37:26.690949 master-0 kubenswrapper[7518]: I0313 12:37:26.689854 7518 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/d6226325-c4d9-497e-8d19-a71adc66c5ac-run-openvswitch\") pod \"ovnkube-node-h8fwp\" (UID: \"d6226325-c4d9-497e-8d19-a71adc66c5ac\") " pod="openshift-ovn-kubernetes/ovnkube-node-h8fwp"
Mar 13 12:37:26.690949 master-0 kubenswrapper[7518]: I0313 12:37:26.689865 7518 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f5775266-5e58-44ed-81cb-dfe3faf38add-serving-cert\") pod \"kube-storage-version-migrator-operator-7f65c457f5-hrm82\" (UID: \"f5775266-5e58-44ed-81cb-dfe3faf38add\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-7f65c457f5-hrm82"
Mar 13 12:37:26.690949 master-0 kubenswrapper[7518]: I0313 12:37:26.689886 7518 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d11f8baa-6e8e-4ac0-9b23-1c44efd0ab2a-serving-cert\") pod \"authentication-operator-7c6989d6c4-tc4ht\" (UID: \"d11f8baa-6e8e-4ac0-9b23-1c44efd0ab2a\") " pod="openshift-authentication-operator/authentication-operator-7c6989d6c4-tc4ht"
Mar 13 12:37:26.690949 master-0 kubenswrapper[7518]: I0313 12:37:26.689895 7518 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/ce3a655a-0684-4bc5-ac36-5878507537c7-os-release\") pod \"multus-bnn7n\" (UID: \"ce3a655a-0684-4bc5-ac36-5878507537c7\") " pod="openshift-multus/multus-bnn7n"
Mar 13 12:37:26.690949 master-0 kubenswrapper[7518]: I0313 12:37:26.689921 7518 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/ce3a655a-0684-4bc5-ac36-5878507537c7-cni-binary-copy\") pod \"multus-bnn7n\" (UID: \"ce3a655a-0684-4bc5-ac36-5878507537c7\") " pod="openshift-multus/multus-bnn7n"
Mar 13 12:37:26.690949 master-0 kubenswrapper[7518]: I0313 12:37:26.690073 7518 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/d6226325-c4d9-497e-8d19-a71adc66c5ac-ovn-node-metrics-cert\") pod \"ovnkube-node-h8fwp\" (UID: \"d6226325-c4d9-497e-8d19-a71adc66c5ac\") " pod="openshift-ovn-kubernetes/ovnkube-node-h8fwp"
Mar 13 12:37:26.690949 master-0 kubenswrapper[7518]: I0313 12:37:26.690089 7518 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/ce3a655a-0684-4bc5-ac36-5878507537c7-cni-binary-copy\") pod \"multus-bnn7n\" (UID: \"ce3a655a-0684-4bc5-ac36-5878507537c7\") " pod="openshift-multus/multus-bnn7n"
Mar 13 12:37:26.690949 master-0 kubenswrapper[7518]: I0313 12:37:26.690147 7518 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/152689b1-5875-4a9a-bb25-bee858523168-system-cni-dir\") pod \"multus-additional-cni-plugins-78p2k\" (UID: \"152689b1-5875-4a9a-bb25-bee858523168\") " pod="openshift-multus/multus-additional-cni-plugins-78p2k"
Mar 13 12:37:26.690949 master-0 kubenswrapper[7518]: I0313 12:37:26.690178 7518 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/152689b1-5875-4a9a-bb25-bee858523168-cni-binary-copy\") pod \"multus-additional-cni-plugins-78p2k\" (UID: \"152689b1-5875-4a9a-bb25-bee858523168\") " pod="openshift-multus/multus-additional-cni-plugins-78p2k"
Mar 13 12:37:26.690949 master-0 kubenswrapper[7518]: I0313 12:37:26.690208 7518 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f39d7f76-0075-44c3-9101-eb2607cb176a-serving-cert\") pod \"cluster-version-operator-745944c6b7-mbjxt\" (UID: \"f39d7f76-0075-44c3-9101-eb2607cb176a\") " pod="openshift-cluster-version/cluster-version-operator-745944c6b7-mbjxt"
Mar 13 12:37:26.690949 master-0 kubenswrapper[7518]: I0313 12:37:26.690266 7518 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j4hd6\" (UniqueName: \"kubernetes.io/projected/bcf05594-4c10-4b54-a47c-d55e323f1f87-kube-api-access-j4hd6\") pod \"cluster-image-registry-operator-86d6d77c7c-q287n\" (UID: \"bcf05594-4c10-4b54-a47c-d55e323f1f87\") " pod="openshift-image-registry/cluster-image-registry-operator-86d6d77c7c-q287n"
Mar 13 12:37:26.690949 master-0 kubenswrapper[7518]: I0313 12:37:26.690361 7518 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d11f8baa-6e8e-4ac0-9b23-1c44efd0ab2a-trusted-ca-bundle\") pod \"authentication-operator-7c6989d6c4-tc4ht\" (UID: \"d11f8baa-6e8e-4ac0-9b23-1c44efd0ab2a\") " pod="openshift-authentication-operator/authentication-operator-7c6989d6c4-tc4ht"
Mar 13 12:37:26.690949 master-0 kubenswrapper[7518]: E0313 12:37:26.690374 7518 secret.go:189] Couldn't get secret openshift-cluster-version/cluster-version-operator-serving-cert: secret "cluster-version-operator-serving-cert" not found
Mar 13 12:37:26.690949 master-0 kubenswrapper[7518]: I0313 12:37:26.690425 7518 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/152689b1-5875-4a9a-bb25-bee858523168-cni-binary-copy\") pod \"multus-additional-cni-plugins-78p2k\" (UID: \"152689b1-5875-4a9a-bb25-bee858523168\") " pod="openshift-multus/multus-additional-cni-plugins-78p2k"
Mar 13 12:37:26.690949 master-0 kubenswrapper[7518]: E0313 12:37:26.690446 7518 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f39d7f76-0075-44c3-9101-eb2607cb176a-serving-cert podName:f39d7f76-0075-44c3-9101-eb2607cb176a nodeName:}" failed. No retries permitted until 2026-03-13 12:37:27.190425537 +0000 UTC m=+1.823494774 (durationBeforeRetry 500ms).
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/f39d7f76-0075-44c3-9101-eb2607cb176a-serving-cert") pod "cluster-version-operator-745944c6b7-mbjxt" (UID: "f39d7f76-0075-44c3-9101-eb2607cb176a") : secret "cluster-version-operator-serving-cert" not found
Mar 13 12:37:26.690949 master-0 kubenswrapper[7518]: I0313 12:37:26.690396 7518 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/152689b1-5875-4a9a-bb25-bee858523168-tuning-conf-dir\") pod \"multus-additional-cni-plugins-78p2k\" (UID: \"152689b1-5875-4a9a-bb25-bee858523168\") " pod="openshift-multus/multus-additional-cni-plugins-78p2k"
Mar 13 12:37:26.690949 master-0 kubenswrapper[7518]: I0313 12:37:26.690531 7518 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3d653e1a-5903-4a02-9357-df145f028c0d-package-server-manager-serving-cert\") pod \"package-server-manager-854648ff6d-669qk\" (UID: \"3d653e1a-5903-4a02-9357-df145f028c0d\") " pod="openshift-operator-lifecycle-manager/package-server-manager-854648ff6d-669qk"
Mar 13 12:37:26.690949 master-0 kubenswrapper[7518]: I0313 12:37:26.690597 7518 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/08e2bc8e-ca80-454c-81dc-211d122e32e0-iptables-alerter-script\") pod \"iptables-alerter-qz6pg\" (UID: \"08e2bc8e-ca80-454c-81dc-211d122e32e0\") " pod="openshift-network-operator/iptables-alerter-qz6pg"
Mar 13 12:37:26.690949 master-0 kubenswrapper[7518]: I0313 12:37:26.690628 7518 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d11f8baa-6e8e-4ac0-9b23-1c44efd0ab2a-service-ca-bundle\") pod \"authentication-operator-7c6989d6c4-tc4ht\" (UID: \"d11f8baa-6e8e-4ac0-9b23-1c44efd0ab2a\") " pod="openshift-authentication-operator/authentication-operator-7c6989d6c4-tc4ht"
Mar 13 12:37:26.690949 master-0 kubenswrapper[7518]: I0313 12:37:26.690683 7518 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m4tnq\" (UniqueName: \"kubernetes.io/projected/d11f8baa-6e8e-4ac0-9b23-1c44efd0ab2a-kube-api-access-m4tnq\") pod \"authentication-operator-7c6989d6c4-tc4ht\" (UID: \"d11f8baa-6e8e-4ac0-9b23-1c44efd0ab2a\") " pod="openshift-authentication-operator/authentication-operator-7c6989d6c4-tc4ht"
Mar 13 12:37:26.690949 master-0 kubenswrapper[7518]: I0313 12:37:26.690708 7518 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/152689b1-5875-4a9a-bb25-bee858523168-cnibin\") pod \"multus-additional-cni-plugins-78p2k\" (UID: \"152689b1-5875-4a9a-bb25-bee858523168\") " pod="openshift-multus/multus-additional-cni-plugins-78p2k"
Mar 13 12:37:26.690949 master-0 kubenswrapper[7518]: I0313 12:37:26.690803 7518 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d11f8baa-6e8e-4ac0-9b23-1c44efd0ab2a-service-ca-bundle\") pod \"authentication-operator-7c6989d6c4-tc4ht\" (UID: \"d11f8baa-6e8e-4ac0-9b23-1c44efd0ab2a\") " pod="openshift-authentication-operator/authentication-operator-7c6989d6c4-tc4ht"
Mar 13 12:37:26.692030 master-0 kubenswrapper[7518]: I0313 12:37:26.691700 7518 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/d3d998ee-b26f-4e30-83bc-f94f8c68060a-marketplace-trusted-ca\") pod \"marketplace-operator-64bf9778cb-7qhr4\" (UID: \"d3d998ee-b26f-4e30-83bc-f94f8c68060a\") " pod="openshift-marketplace/marketplace-operator-64bf9778cb-7qhr4"
Mar 13 12:37:26.702497 master-0 kubenswrapper[7518]: I0313 12:37:26.702461 7518 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bwjz5\" (UniqueName: \"kubernetes.io/projected/4e279dcc-35e2-4503-babc-978ac208c150-kube-api-access-bwjz5\") pod \"csi-snapshot-controller-operator-5685fbc7d-97wkd\" (UID: \"4e279dcc-35e2-4503-babc-978ac208c150\") " pod="openshift-cluster-storage-operator/csi-snapshot-controller-operator-5685fbc7d-97wkd"
Mar 13 12:37:26.728005 master-0 kubenswrapper[7518]: I0313 12:37:26.727878 7518 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8cf2v\" (UniqueName: \"kubernetes.io/projected/8c62b15f-001a-4b64-b85f-348aefde5d1b-kube-api-access-8cf2v\") pod \"openshift-controller-manager-operator-8565d84698-hj2wk\" (UID: \"8c62b15f-001a-4b64-b85f-348aefde5d1b\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-8565d84698-hj2wk"
Mar 13 12:37:26.739572 master-0 kubenswrapper[7518]: I0313 12:37:26.739496 7518 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0da84bb7-e936-49a0-96b5-614a1305d6a4-kube-api-access\") pod \"openshift-kube-scheduler-operator-5c74bfc494-m8mqj\" (UID: \"0da84bb7-e936-49a0-96b5-614a1305d6a4\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5c74bfc494-m8mqj"
Mar 13 12:37:26.769541 master-0 kubenswrapper[7518]: I0313 12:37:26.769472 7518 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-clrz7\" (UniqueName: \"kubernetes.io/projected/15b592d6-3c48-45d4-9172-d28632ae8995-kube-api-access-clrz7\") pod \"etcd-operator-5884b9cd56-hjzms\" (UID: \"15b592d6-3c48-45d4-9172-d28632ae8995\") " pod="openshift-etcd-operator/etcd-operator-5884b9cd56-hjzms"
Mar 13 12:37:26.769762 master-0 kubenswrapper[7518]: I0313 12:37:26.769730 7518 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides"
Mar 13 12:37:26.770364 master-0 kubenswrapper[7518]: I0313 12:37:26.770321 7518 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/1f43b4e7-5cd1-46d2-a02e-0d846b2e5182-env-overrides\") pod \"network-node-identity-qg8q5\" (UID: \"1f43b4e7-5cd1-46d2-a02e-0d846b2e5182\") " pod="openshift-network-node-identity/network-node-identity-qg8q5"
Mar 13 12:37:26.789831 master-0 kubenswrapper[7518]: I0313 12:37:26.789769 7518 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert"
Mar 13 12:37:26.791594 master-0 kubenswrapper[7518]: I0313 12:37:26.791533 7518 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/ce3a655a-0684-4bc5-ac36-5878507537c7-system-cni-dir\") pod \"multus-bnn7n\" (UID: \"ce3a655a-0684-4bc5-ac36-5878507537c7\") " pod="openshift-multus/multus-bnn7n"
Mar 13 12:37:26.791684 master-0 kubenswrapper[7518]: I0313 12:37:26.791621 7518 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/ce3a655a-0684-4bc5-ac36-5878507537c7-host-run-k8s-cni-cncf-io\") pod \"multus-bnn7n\" (UID: \"ce3a655a-0684-4bc5-ac36-5878507537c7\") " pod="openshift-multus/multus-bnn7n"
Mar 13 12:37:26.791761 master-0 kubenswrapper[7518]: I0313 12:37:26.791713 7518 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/ce3a655a-0684-4bc5-ac36-5878507537c7-host-run-k8s-cni-cncf-io\") pod \"multus-bnn7n\" (UID: \"ce3a655a-0684-4bc5-ac36-5878507537c7\") " pod="openshift-multus/multus-bnn7n"
Mar 13 12:37:26.791837 master-0 kubenswrapper[7518]: I0313 12:37:26.791750 7518 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/ce3a655a-0684-4bc5-ac36-5878507537c7-system-cni-dir\") pod \"multus-bnn7n\" (UID: \"ce3a655a-0684-4bc5-ac36-5878507537c7\") " pod="openshift-multus/multus-bnn7n"
Mar 13 12:37:26.791837 master-0 kubenswrapper[7518]: I0313 12:37:26.791795 7518 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/d3d998ee-b26f-4e30-83bc-f94f8c68060a-marketplace-operator-metrics\") pod \"marketplace-operator-64bf9778cb-7qhr4\" (UID: \"d3d998ee-b26f-4e30-83bc-f94f8c68060a\") " pod="openshift-marketplace/marketplace-operator-64bf9778cb-7qhr4"
Mar 13 12:37:26.791837 master-0 kubenswrapper[7518]: I0313 12:37:26.791831 7518 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d6226325-c4d9-497e-8d19-a71adc66c5ac-host-slash\") pod \"ovnkube-node-h8fwp\" (UID: \"d6226325-c4d9-497e-8d19-a71adc66c5ac\") " pod="openshift-ovn-kubernetes/ovnkube-node-h8fwp"
Mar 13 12:37:26.791981 master-0 kubenswrapper[7518]: I0313 12:37:26.791862 7518 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/d6226325-c4d9-497e-8d19-a71adc66c5ac-run-systemd\") pod \"ovnkube-node-h8fwp\" (UID: \"d6226325-c4d9-497e-8d19-a71adc66c5ac\") " pod="openshift-ovn-kubernetes/ovnkube-node-h8fwp"
Mar 13 12:37:26.791981 master-0 kubenswrapper[7518]: I0313 12:37:26.791888 7518 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/d6226325-c4d9-497e-8d19-a71adc66c5ac-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-h8fwp\" (UID: \"d6226325-c4d9-497e-8d19-a71adc66c5ac\") " pod="openshift-ovn-kubernetes/ovnkube-node-h8fwp"
Mar 13 12:37:26.791981 master-0 kubenswrapper[7518]: I0313 12:37:26.791931 7518 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/08e2bc8e-ca80-454c-81dc-211d122e32e0-host-slash\") pod \"iptables-alerter-qz6pg\" (UID: \"08e2bc8e-ca80-454c-81dc-211d122e32e0\") " pod="openshift-network-operator/iptables-alerter-qz6pg"
Mar 13 12:37:26.791981 master-0 kubenswrapper[7518]: I0313 12:37:26.791950 7518 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/d6226325-c4d9-497e-8d19-a71adc66c5ac-node-log\") pod \"ovnkube-node-h8fwp\" (UID: \"d6226325-c4d9-497e-8d19-a71adc66c5ac\") " pod="openshift-ovn-kubernetes/ovnkube-node-h8fwp"
Mar 13 12:37:26.791981 master-0 kubenswrapper[7518]: I0313 12:37:26.791967 7518 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-tuning-operator-tls\" (UniqueName: \"kubernetes.io/secret/3020d236-03e0-4916-97dd-f1085632ca43-node-tuning-operator-tls\") pod \"cluster-node-tuning-operator-66c7586884-cz8pc\" (UID: \"3020d236-03e0-4916-97dd-f1085632ca43\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-66c7586884-cz8pc"
Mar 13 12:37:26.792158 master-0 kubenswrapper[7518]: I0313 12:37:26.791989 7518 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/4c0b18db-06ad-4d58-a353-f6fd96309dea-webhook-certs\") pod \"multus-admission-controller-8d675b596-96gds\" (UID: \"4c0b18db-06ad-4d58-a353-f6fd96309dea\") " pod="openshift-multus/multus-admission-controller-8d675b596-96gds"
Mar 13 12:37:26.792158 master-0 kubenswrapper[7518]: I0313 12:37:26.792015 7518 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/d6226325-c4d9-497e-8d19-a71adc66c5ac-etc-openvswitch\") pod \"ovnkube-node-h8fwp\" (UID: \"d6226325-c4d9-497e-8d19-a71adc66c5ac\") " pod="openshift-ovn-kubernetes/ovnkube-node-h8fwp"
Mar 13 12:37:26.792158 master-0 kubenswrapper[7518]: E0313 12:37:26.792021 7518 secret.go:189] Couldn't get secret openshift-marketplace/marketplace-operator-metrics: secret "marketplace-operator-metrics" not found
Mar 13 12:37:26.792158 master-0 kubenswrapper[7518]: I0313 12:37:26.792039 7518 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/d6226325-c4d9-497e-8d19-a71adc66c5ac-host-cni-bin\") pod \"ovnkube-node-h8fwp\" (UID: \"d6226325-c4d9-497e-8d19-a71adc66c5ac\") " pod="openshift-ovn-kubernetes/ovnkube-node-h8fwp"
Mar 13 12:37:26.792158 master-0 kubenswrapper[7518]: I0313 12:37:26.792062 7518 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/d6226325-c4d9-497e-8d19-a71adc66c5ac-log-socket\") pod \"ovnkube-node-h8fwp\" (UID: \"d6226325-c4d9-497e-8d19-a71adc66c5ac\") " pod="openshift-ovn-kubernetes/ovnkube-node-h8fwp"
Mar 13 12:37:26.792158 master-0 kubenswrapper[7518]: E0313 12:37:26.792080 7518 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d3d998ee-b26f-4e30-83bc-f94f8c68060a-marketplace-operator-metrics podName:d3d998ee-b26f-4e30-83bc-f94f8c68060a nodeName:}" failed. No retries permitted until 2026-03-13 12:37:27.29205954 +0000 UTC m=+1.925128727 (durationBeforeRetry 500ms).
Error: MountVolume.SetUp failed for volume "marketplace-operator-metrics" (UniqueName: "kubernetes.io/secret/d3d998ee-b26f-4e30-83bc-f94f8c68060a-marketplace-operator-metrics") pod "marketplace-operator-64bf9778cb-7qhr4" (UID: "d3d998ee-b26f-4e30-83bc-f94f8c68060a") : secret "marketplace-operator-metrics" not found
Mar 13 12:37:26.792158 master-0 kubenswrapper[7518]: I0313 12:37:26.792094 7518 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/d6226325-c4d9-497e-8d19-a71adc66c5ac-log-socket\") pod \"ovnkube-node-h8fwp\" (UID: \"d6226325-c4d9-497e-8d19-a71adc66c5ac\") " pod="openshift-ovn-kubernetes/ovnkube-node-h8fwp"
Mar 13 12:37:26.792158 master-0 kubenswrapper[7518]: I0313 12:37:26.792099 7518 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/ce3a655a-0684-4bc5-ac36-5878507537c7-multus-conf-dir\") pod \"multus-bnn7n\" (UID: \"ce3a655a-0684-4bc5-ac36-5878507537c7\") " pod="openshift-multus/multus-bnn7n"
Mar 13 12:37:26.792158 master-0 kubenswrapper[7518]: I0313 12:37:26.792123 7518 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/d6226325-c4d9-497e-8d19-a71adc66c5ac-run-systemd\") pod \"ovnkube-node-h8fwp\" (UID: \"d6226325-c4d9-497e-8d19-a71adc66c5ac\") " pod="openshift-ovn-kubernetes/ovnkube-node-h8fwp"
Mar 13 12:37:26.792555 master-0 kubenswrapper[7518]: I0313 12:37:26.792181 7518 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/08e2bc8e-ca80-454c-81dc-211d122e32e0-host-slash\") pod \"iptables-alerter-qz6pg\" (UID: \"08e2bc8e-ca80-454c-81dc-211d122e32e0\") " pod="openshift-network-operator/iptables-alerter-qz6pg"
Mar 13 12:37:26.792555 master-0 kubenswrapper[7518]: I0313 12:37:26.792187 7518 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-btf8q\" (UniqueName: \"kubernetes.io/projected/269aedfd-4274-4998-bd0d-603b67257666-kube-api-access-btf8q\") pod \"network-check-target-pnwsc\" (UID: \"269aedfd-4274-4998-bd0d-603b67257666\") " pod="openshift-network-diagnostics/network-check-target-pnwsc"
Mar 13 12:37:26.792555 master-0 kubenswrapper[7518]: I0313 12:37:26.791927 7518 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d6226325-c4d9-497e-8d19-a71adc66c5ac-host-slash\") pod \"ovnkube-node-h8fwp\" (UID: \"d6226325-c4d9-497e-8d19-a71adc66c5ac\") " pod="openshift-ovn-kubernetes/ovnkube-node-h8fwp"
Mar 13 12:37:26.792555 master-0 kubenswrapper[7518]: I0313 12:37:26.792215 7518 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/ce3a655a-0684-4bc5-ac36-5878507537c7-host-run-netns\") pod \"multus-bnn7n\" (UID: \"ce3a655a-0684-4bc5-ac36-5878507537c7\") " pod="openshift-multus/multus-bnn7n"
Mar 13 12:37:26.792555 master-0 kubenswrapper[7518]: I0313 12:37:26.792232 7518 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/d6226325-c4d9-497e-8d19-a71adc66c5ac-node-log\") pod \"ovnkube-node-h8fwp\" (UID: \"d6226325-c4d9-497e-8d19-a71adc66c5ac\") " pod="openshift-ovn-kubernetes/ovnkube-node-h8fwp"
Mar 13 12:37:26.792555 master-0 kubenswrapper[7518]: I0313 12:37:26.792262 7518 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/ce3a655a-0684-4bc5-ac36-5878507537c7-multus-conf-dir\") pod \"multus-bnn7n\" (UID: \"ce3a655a-0684-4bc5-ac36-5878507537c7\") " pod="openshift-multus/multus-bnn7n"
Mar 13 12:37:26.792555 master-0 kubenswrapper[7518]: E0313 12:37:26.792292 7518 secret.go:189] Couldn't get secret openshift-multus/multus-admission-controller-secret: secret "multus-admission-controller-secret" not found
Mar 13 12:37:26.792555 master-0 kubenswrapper[7518]: E0313 12:37:26.792314 7518 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/node-tuning-operator-tls: secret "node-tuning-operator-tls" not found
Mar 13 12:37:26.792555 master-0 kubenswrapper[7518]: I0313 12:37:26.792405 7518 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/d6226325-c4d9-497e-8d19-a71adc66c5ac-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-h8fwp\" (UID: \"d6226325-c4d9-497e-8d19-a71adc66c5ac\") " pod="openshift-ovn-kubernetes/ovnkube-node-h8fwp"
Mar 13 12:37:26.792555 master-0 kubenswrapper[7518]: I0313 12:37:26.792431 7518 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/d6226325-c4d9-497e-8d19-a71adc66c5ac-host-cni-bin\") pod \"ovnkube-node-h8fwp\" (UID: \"d6226325-c4d9-497e-8d19-a71adc66c5ac\") " pod="openshift-ovn-kubernetes/ovnkube-node-h8fwp"
Mar 13 12:37:26.792555 master-0 kubenswrapper[7518]: E0313 12:37:26.792465 7518 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4c0b18db-06ad-4d58-a353-f6fd96309dea-webhook-certs podName:4c0b18db-06ad-4d58-a353-f6fd96309dea nodeName:}" failed. No retries permitted until 2026-03-13 12:37:27.292454265 +0000 UTC m=+1.925523452 (durationBeforeRetry 500ms).
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/4c0b18db-06ad-4d58-a353-f6fd96309dea-webhook-certs") pod "multus-admission-controller-8d675b596-96gds" (UID: "4c0b18db-06ad-4d58-a353-f6fd96309dea") : secret "multus-admission-controller-secret" not found
Mar 13 12:37:26.792555 master-0 kubenswrapper[7518]: I0313 12:37:26.792483 7518 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/d6226325-c4d9-497e-8d19-a71adc66c5ac-etc-openvswitch\") pod \"ovnkube-node-h8fwp\" (UID: \"d6226325-c4d9-497e-8d19-a71adc66c5ac\") " pod="openshift-ovn-kubernetes/ovnkube-node-h8fwp"
Mar 13 12:37:26.792941 master-0 kubenswrapper[7518]: E0313 12:37:26.792579 7518 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3020d236-03e0-4916-97dd-f1085632ca43-node-tuning-operator-tls podName:3020d236-03e0-4916-97dd-f1085632ca43 nodeName:}" failed. No retries permitted until 2026-03-13 12:37:27.292567407 +0000 UTC m=+1.925636594 (durationBeforeRetry 500ms).
Error: MountVolume.SetUp failed for volume "node-tuning-operator-tls" (UniqueName: "kubernetes.io/secret/3020d236-03e0-4916-97dd-f1085632ca43-node-tuning-operator-tls") pod "cluster-node-tuning-operator-66c7586884-cz8pc" (UID: "3020d236-03e0-4916-97dd-f1085632ca43") : secret "node-tuning-operator-tls" not found
Mar 13 12:37:26.792941 master-0 kubenswrapper[7518]: I0313 12:37:26.792625 7518 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/3020d236-03e0-4916-97dd-f1085632ca43-apiservice-cert\") pod \"cluster-node-tuning-operator-66c7586884-cz8pc\" (UID: \"3020d236-03e0-4916-97dd-f1085632ca43\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-66c7586884-cz8pc"
Mar 13 12:37:26.792941 master-0 kubenswrapper[7518]: I0313 12:37:26.792665 7518 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/ce3a655a-0684-4bc5-ac36-5878507537c7-cnibin\") pod \"multus-bnn7n\" (UID: \"ce3a655a-0684-4bc5-ac36-5878507537c7\") " pod="openshift-multus/multus-bnn7n"
Mar 13 12:37:26.792941 master-0 kubenswrapper[7518]: I0313 12:37:26.792685 7518 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/ce3a655a-0684-4bc5-ac36-5878507537c7-host-run-netns\") pod \"multus-bnn7n\" (UID: \"ce3a655a-0684-4bc5-ac36-5878507537c7\") " pod="openshift-multus/multus-bnn7n"
Mar 13 12:37:26.792941 master-0 kubenswrapper[7518]: I0313 12:37:26.792699 7518 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/13f32761-b386-4f93-b3c0-b16ea53d338a-metrics-tls\") pod \"dns-operator-589895fbb7-mmwk7\" (UID: \"13f32761-b386-4f93-b3c0-b16ea53d338a\") " pod="openshift-dns-operator/dns-operator-589895fbb7-mmwk7"
Mar 13 12:37:26.792941 master-0 kubenswrapper[7518]: I0313 12:37:26.792735 7518 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/d6226325-c4d9-497e-8d19-a71adc66c5ac-host-run-netns\") pod \"ovnkube-node-h8fwp\" (UID: \"d6226325-c4d9-497e-8d19-a71adc66c5ac\") " pod="openshift-ovn-kubernetes/ovnkube-node-h8fwp"
Mar 13 12:37:26.792941 master-0 kubenswrapper[7518]: E0313 12:37:26.792744 7518 secret.go:189] Couldn't get secret openshift-dns-operator/metrics-tls: secret "metrics-tls" not found
Mar 13 12:37:26.792941 master-0 kubenswrapper[7518]: E0313 12:37:26.792784 7518 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/13f32761-b386-4f93-b3c0-b16ea53d338a-metrics-tls podName:13f32761-b386-4f93-b3c0-b16ea53d338a nodeName:}" failed. No retries permitted until 2026-03-13 12:37:27.29276101 +0000 UTC m=+1.925830197 (durationBeforeRetry 500ms).
Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/13f32761-b386-4f93-b3c0-b16ea53d338a-metrics-tls") pod "dns-operator-589895fbb7-mmwk7" (UID: "13f32761-b386-4f93-b3c0-b16ea53d338a") : secret "metrics-tls" not found
Mar 13 12:37:26.792941 master-0 kubenswrapper[7518]: I0313 12:37:26.792791 7518 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/d6226325-c4d9-497e-8d19-a71adc66c5ac-host-run-netns\") pod \"ovnkube-node-h8fwp\" (UID: \"d6226325-c4d9-497e-8d19-a71adc66c5ac\") " pod="openshift-ovn-kubernetes/ovnkube-node-h8fwp"
Mar 13 12:37:26.792941 master-0 kubenswrapper[7518]: I0313 12:37:26.792799 7518 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bcf05594-4c10-4b54-a47c-d55e323f1f87-image-registry-operator-tls\") pod \"cluster-image-registry-operator-86d6d77c7c-q287n\" (UID: \"bcf05594-4c10-4b54-a47c-d55e323f1f87\") "
pod="openshift-image-registry/cluster-image-registry-operator-86d6d77c7c-q287n" Mar 13 12:37:26.792941 master-0 kubenswrapper[7518]: I0313 12:37:26.792816 7518 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/d6226325-c4d9-497e-8d19-a71adc66c5ac-host-cni-netd\") pod \"ovnkube-node-h8fwp\" (UID: \"d6226325-c4d9-497e-8d19-a71adc66c5ac\") " pod="openshift-ovn-kubernetes/ovnkube-node-h8fwp" Mar 13 12:37:26.792941 master-0 kubenswrapper[7518]: I0313 12:37:26.792834 7518 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/29b6aa89-0416-4595-9deb-10b290521d86-metrics-certs\") pod \"network-metrics-daemon-r9lmb\" (UID: \"29b6aa89-0416-4595-9deb-10b290521d86\") " pod="openshift-multus/network-metrics-daemon-r9lmb" Mar 13 12:37:26.792941 master-0 kubenswrapper[7518]: E0313 12:37:26.792837 7518 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/performance-addon-operator-webhook-cert: secret "performance-addon-operator-webhook-cert" not found Mar 13 12:37:26.792941 master-0 kubenswrapper[7518]: E0313 12:37:26.792860 7518 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3020d236-03e0-4916-97dd-f1085632ca43-apiservice-cert podName:3020d236-03e0-4916-97dd-f1085632ca43 nodeName:}" failed. No retries permitted until 2026-03-13 12:37:27.292854531 +0000 UTC m=+1.925923718 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "apiservice-cert" (UniqueName: "kubernetes.io/secret/3020d236-03e0-4916-97dd-f1085632ca43-apiservice-cert") pod "cluster-node-tuning-operator-66c7586884-cz8pc" (UID: "3020d236-03e0-4916-97dd-f1085632ca43") : secret "performance-addon-operator-webhook-cert" not found Mar 13 12:37:26.792941 master-0 kubenswrapper[7518]: I0313 12:37:26.792871 7518 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/ce3a655a-0684-4bc5-ac36-5878507537c7-cnibin\") pod \"multus-bnn7n\" (UID: \"ce3a655a-0684-4bc5-ac36-5878507537c7\") " pod="openshift-multus/multus-bnn7n" Mar 13 12:37:26.792941 master-0 kubenswrapper[7518]: E0313 12:37:26.792895 7518 secret.go:189] Couldn't get secret openshift-image-registry/image-registry-operator-tls: secret "image-registry-operator-tls" not found Mar 13 12:37:26.792941 master-0 kubenswrapper[7518]: I0313 12:37:26.792896 7518 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/ce3a655a-0684-4bc5-ac36-5878507537c7-hostroot\") pod \"multus-bnn7n\" (UID: \"ce3a655a-0684-4bc5-ac36-5878507537c7\") " pod="openshift-multus/multus-bnn7n" Mar 13 12:37:26.792941 master-0 kubenswrapper[7518]: E0313 12:37:26.792912 7518 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/bcf05594-4c10-4b54-a47c-d55e323f1f87-image-registry-operator-tls podName:bcf05594-4c10-4b54-a47c-d55e323f1f87 nodeName:}" failed. No retries permitted until 2026-03-13 12:37:27.292906472 +0000 UTC m=+1.925975659 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "image-registry-operator-tls" (UniqueName: "kubernetes.io/secret/bcf05594-4c10-4b54-a47c-d55e323f1f87-image-registry-operator-tls") pod "cluster-image-registry-operator-86d6d77c7c-q287n" (UID: "bcf05594-4c10-4b54-a47c-d55e323f1f87") : secret "image-registry-operator-tls" not found Mar 13 12:37:26.792941 master-0 kubenswrapper[7518]: I0313 12:37:26.792924 7518 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/ce3a655a-0684-4bc5-ac36-5878507537c7-etc-kubernetes\") pod \"multus-bnn7n\" (UID: \"ce3a655a-0684-4bc5-ac36-5878507537c7\") " pod="openshift-multus/multus-bnn7n" Mar 13 12:37:26.792941 master-0 kubenswrapper[7518]: I0313 12:37:26.792925 7518 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/ce3a655a-0684-4bc5-ac36-5878507537c7-hostroot\") pod \"multus-bnn7n\" (UID: \"ce3a655a-0684-4bc5-ac36-5878507537c7\") " pod="openshift-multus/multus-bnn7n" Mar 13 12:37:26.792941 master-0 kubenswrapper[7518]: I0313 12:37:26.792940 7518 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/ce3a655a-0684-4bc5-ac36-5878507537c7-multus-socket-dir-parent\") pod \"multus-bnn7n\" (UID: \"ce3a655a-0684-4bc5-ac36-5878507537c7\") " pod="openshift-multus/multus-bnn7n" Mar 13 12:37:26.792941 master-0 kubenswrapper[7518]: I0313 12:37:26.792965 7518 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/604456a0-4997-43bc-87ef-283a002111fe-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-674cbfbd9d-zwtdz\" (UID: \"604456a0-4997-43bc-87ef-283a002111fe\") " pod="openshift-monitoring/cluster-monitoring-operator-674cbfbd9d-zwtdz" Mar 13 12:37:26.793556 master-0 kubenswrapper[7518]: E0313 
12:37:26.792970 7518 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: secret "metrics-daemon-secret" not found Mar 13 12:37:26.793556 master-0 kubenswrapper[7518]: I0313 12:37:26.792984 7518 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/d6226325-c4d9-497e-8d19-a71adc66c5ac-host-cni-netd\") pod \"ovnkube-node-h8fwp\" (UID: \"d6226325-c4d9-497e-8d19-a71adc66c5ac\") " pod="openshift-ovn-kubernetes/ovnkube-node-h8fwp" Mar 13 12:37:26.793556 master-0 kubenswrapper[7518]: E0313 12:37:26.792990 7518 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/29b6aa89-0416-4595-9deb-10b290521d86-metrics-certs podName:29b6aa89-0416-4595-9deb-10b290521d86 nodeName:}" failed. No retries permitted until 2026-03-13 12:37:27.292985453 +0000 UTC m=+1.926054640 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/29b6aa89-0416-4595-9deb-10b290521d86-metrics-certs") pod "network-metrics-daemon-r9lmb" (UID: "29b6aa89-0416-4595-9deb-10b290521d86") : secret "metrics-daemon-secret" not found Mar 13 12:37:26.793556 master-0 kubenswrapper[7518]: I0313 12:37:26.793008 7518 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/ce3a655a-0684-4bc5-ac36-5878507537c7-etc-kubernetes\") pod \"multus-bnn7n\" (UID: \"ce3a655a-0684-4bc5-ac36-5878507537c7\") " pod="openshift-multus/multus-bnn7n" Mar 13 12:37:26.793556 master-0 kubenswrapper[7518]: I0313 12:37:26.793008 7518 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/d6226325-c4d9-497e-8d19-a71adc66c5ac-host-kubelet\") pod \"ovnkube-node-h8fwp\" (UID: \"d6226325-c4d9-497e-8d19-a71adc66c5ac\") " pod="openshift-ovn-kubernetes/ovnkube-node-h8fwp" Mar 13 12:37:26.793556 master-0 kubenswrapper[7518]: I0313 
12:37:26.793058 7518 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/d6226325-c4d9-497e-8d19-a71adc66c5ac-host-kubelet\") pod \"ovnkube-node-h8fwp\" (UID: \"d6226325-c4d9-497e-8d19-a71adc66c5ac\") " pod="openshift-ovn-kubernetes/ovnkube-node-h8fwp" Mar 13 12:37:26.793556 master-0 kubenswrapper[7518]: I0313 12:37:26.793065 7518 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/d6226325-c4d9-497e-8d19-a71adc66c5ac-run-ovn\") pod \"ovnkube-node-h8fwp\" (UID: \"d6226325-c4d9-497e-8d19-a71adc66c5ac\") " pod="openshift-ovn-kubernetes/ovnkube-node-h8fwp" Mar 13 12:37:26.793556 master-0 kubenswrapper[7518]: I0313 12:37:26.793083 7518 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/ce3a655a-0684-4bc5-ac36-5878507537c7-multus-cni-dir\") pod \"multus-bnn7n\" (UID: \"ce3a655a-0684-4bc5-ac36-5878507537c7\") " pod="openshift-multus/multus-bnn7n" Mar 13 12:37:26.793556 master-0 kubenswrapper[7518]: I0313 12:37:26.793098 7518 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/ce3a655a-0684-4bc5-ac36-5878507537c7-multus-socket-dir-parent\") pod \"multus-bnn7n\" (UID: \"ce3a655a-0684-4bc5-ac36-5878507537c7\") " pod="openshift-multus/multus-bnn7n" Mar 13 12:37:26.793556 master-0 kubenswrapper[7518]: I0313 12:37:26.793102 7518 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/d6226325-c4d9-497e-8d19-a71adc66c5ac-systemd-units\") pod \"ovnkube-node-h8fwp\" (UID: \"d6226325-c4d9-497e-8d19-a71adc66c5ac\") " pod="openshift-ovn-kubernetes/ovnkube-node-h8fwp" Mar 13 12:37:26.793556 master-0 kubenswrapper[7518]: I0313 12:37:26.793115 7518 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/d6226325-c4d9-497e-8d19-a71adc66c5ac-systemd-units\") pod \"ovnkube-node-h8fwp\" (UID: \"d6226325-c4d9-497e-8d19-a71adc66c5ac\") " pod="openshift-ovn-kubernetes/ovnkube-node-h8fwp" Mar 13 12:37:26.793556 master-0 kubenswrapper[7518]: E0313 12:37:26.793168 7518 secret.go:189] Couldn't get secret openshift-monitoring/cluster-monitoring-operator-tls: secret "cluster-monitoring-operator-tls" not found Mar 13 12:37:26.793556 master-0 kubenswrapper[7518]: I0313 12:37:26.793171 7518 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/d5a19b80-d488-46d3-a4a8-0b80361077e1-srv-cert\") pod \"olm-operator-d64cfc9db-rfqb9\" (UID: \"d5a19b80-d488-46d3-a4a8-0b80361077e1\") " pod="openshift-operator-lifecycle-manager/olm-operator-d64cfc9db-rfqb9" Mar 13 12:37:26.793556 master-0 kubenswrapper[7518]: E0313 12:37:26.793189 7518 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/604456a0-4997-43bc-87ef-283a002111fe-cluster-monitoring-operator-tls podName:604456a0-4997-43bc-87ef-283a002111fe nodeName:}" failed. No retries permitted until 2026-03-13 12:37:27.293183296 +0000 UTC m=+1.926252483 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "cluster-monitoring-operator-tls" (UniqueName: "kubernetes.io/secret/604456a0-4997-43bc-87ef-283a002111fe-cluster-monitoring-operator-tls") pod "cluster-monitoring-operator-674cbfbd9d-zwtdz" (UID: "604456a0-4997-43bc-87ef-283a002111fe") : secret "cluster-monitoring-operator-tls" not found Mar 13 12:37:26.793556 master-0 kubenswrapper[7518]: I0313 12:37:26.793208 7518 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/ce3a655a-0684-4bc5-ac36-5878507537c7-host-var-lib-cni-bin\") pod \"multus-bnn7n\" (UID: \"ce3a655a-0684-4bc5-ac36-5878507537c7\") " pod="openshift-multus/multus-bnn7n" Mar 13 12:37:26.793556 master-0 kubenswrapper[7518]: E0313 12:37:26.793221 7518 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/olm-operator-serving-cert: secret "olm-operator-serving-cert" not found Mar 13 12:37:26.793556 master-0 kubenswrapper[7518]: I0313 12:37:26.793223 7518 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/ce3a655a-0684-4bc5-ac36-5878507537c7-host-var-lib-kubelet\") pod \"multus-bnn7n\" (UID: \"ce3a655a-0684-4bc5-ac36-5878507537c7\") " pod="openshift-multus/multus-bnn7n" Mar 13 12:37:26.793556 master-0 kubenswrapper[7518]: E0313 12:37:26.793248 7518 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d5a19b80-d488-46d3-a4a8-0b80361077e1-srv-cert podName:d5a19b80-d488-46d3-a4a8-0b80361077e1 nodeName:}" failed. No retries permitted until 2026-03-13 12:37:27.293237347 +0000 UTC m=+1.926306534 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/d5a19b80-d488-46d3-a4a8-0b80361077e1-srv-cert") pod "olm-operator-d64cfc9db-rfqb9" (UID: "d5a19b80-d488-46d3-a4a8-0b80361077e1") : secret "olm-operator-serving-cert" not found Mar 13 12:37:26.793556 master-0 kubenswrapper[7518]: I0313 12:37:26.793254 7518 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/ce3a655a-0684-4bc5-ac36-5878507537c7-host-var-lib-kubelet\") pod \"multus-bnn7n\" (UID: \"ce3a655a-0684-4bc5-ac36-5878507537c7\") " pod="openshift-multus/multus-bnn7n" Mar 13 12:37:26.793556 master-0 kubenswrapper[7518]: I0313 12:37:26.793279 7518 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/d6226325-c4d9-497e-8d19-a71adc66c5ac-run-ovn\") pod \"ovnkube-node-h8fwp\" (UID: \"d6226325-c4d9-497e-8d19-a71adc66c5ac\") " pod="openshift-ovn-kubernetes/ovnkube-node-h8fwp" Mar 13 12:37:26.793556 master-0 kubenswrapper[7518]: I0313 12:37:26.793301 7518 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/ce3a655a-0684-4bc5-ac36-5878507537c7-host-var-lib-cni-bin\") pod \"multus-bnn7n\" (UID: \"ce3a655a-0684-4bc5-ac36-5878507537c7\") " pod="openshift-multus/multus-bnn7n" Mar 13 12:37:26.793556 master-0 kubenswrapper[7518]: I0313 12:37:26.793341 7518 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/ce3a655a-0684-4bc5-ac36-5878507537c7-host-var-lib-cni-multus\") pod \"multus-bnn7n\" (UID: \"ce3a655a-0684-4bc5-ac36-5878507537c7\") " pod="openshift-multus/multus-bnn7n" Mar 13 12:37:26.793556 master-0 kubenswrapper[7518]: I0313 12:37:26.793368 7518 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: 
\"kubernetes.io/host-path/d6226325-c4d9-497e-8d19-a71adc66c5ac-var-lib-openvswitch\") pod \"ovnkube-node-h8fwp\" (UID: \"d6226325-c4d9-497e-8d19-a71adc66c5ac\") " pod="openshift-ovn-kubernetes/ovnkube-node-h8fwp" Mar 13 12:37:26.793556 master-0 kubenswrapper[7518]: I0313 12:37:26.793385 7518 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/4dd0fc2f-f2ee-4447-a747-04a178288cf0-host-etc-kube\") pod \"network-operator-7c649bf6d4-kh6n9\" (UID: \"4dd0fc2f-f2ee-4447-a747-04a178288cf0\") " pod="openshift-network-operator/network-operator-7c649bf6d4-kh6n9" Mar 13 12:37:26.793556 master-0 kubenswrapper[7518]: I0313 12:37:26.793401 7518 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/ce3a655a-0684-4bc5-ac36-5878507537c7-multus-cni-dir\") pod \"multus-bnn7n\" (UID: \"ce3a655a-0684-4bc5-ac36-5878507537c7\") " pod="openshift-multus/multus-bnn7n" Mar 13 12:37:26.793556 master-0 kubenswrapper[7518]: I0313 12:37:26.793407 7518 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/d6226325-c4d9-497e-8d19-a71adc66c5ac-run-openvswitch\") pod \"ovnkube-node-h8fwp\" (UID: \"d6226325-c4d9-497e-8d19-a71adc66c5ac\") " pod="openshift-ovn-kubernetes/ovnkube-node-h8fwp" Mar 13 12:37:26.793556 master-0 kubenswrapper[7518]: I0313 12:37:26.793427 7518 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/d6226325-c4d9-497e-8d19-a71adc66c5ac-run-openvswitch\") pod \"ovnkube-node-h8fwp\" (UID: \"d6226325-c4d9-497e-8d19-a71adc66c5ac\") " pod="openshift-ovn-kubernetes/ovnkube-node-h8fwp" Mar 13 12:37:26.793556 master-0 kubenswrapper[7518]: I0313 12:37:26.793427 7518 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: 
\"kubernetes.io/host-path/ce3a655a-0684-4bc5-ac36-5878507537c7-os-release\") pod \"multus-bnn7n\" (UID: \"ce3a655a-0684-4bc5-ac36-5878507537c7\") " pod="openshift-multus/multus-bnn7n" Mar 13 12:37:26.793556 master-0 kubenswrapper[7518]: I0313 12:37:26.793462 7518 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/152689b1-5875-4a9a-bb25-bee858523168-system-cni-dir\") pod \"multus-additional-cni-plugins-78p2k\" (UID: \"152689b1-5875-4a9a-bb25-bee858523168\") " pod="openshift-multus/multus-additional-cni-plugins-78p2k" Mar 13 12:37:26.793556 master-0 kubenswrapper[7518]: I0313 12:37:26.793500 7518 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/152689b1-5875-4a9a-bb25-bee858523168-tuning-conf-dir\") pod \"multus-additional-cni-plugins-78p2k\" (UID: \"152689b1-5875-4a9a-bb25-bee858523168\") " pod="openshift-multus/multus-additional-cni-plugins-78p2k" Mar 13 12:37:26.793556 master-0 kubenswrapper[7518]: I0313 12:37:26.793517 7518 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/152689b1-5875-4a9a-bb25-bee858523168-cnibin\") pod \"multus-additional-cni-plugins-78p2k\" (UID: \"152689b1-5875-4a9a-bb25-bee858523168\") " pod="openshift-multus/multus-additional-cni-plugins-78p2k" Mar 13 12:37:26.793556 master-0 kubenswrapper[7518]: I0313 12:37:26.793528 7518 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/ce3a655a-0684-4bc5-ac36-5878507537c7-os-release\") pod \"multus-bnn7n\" (UID: \"ce3a655a-0684-4bc5-ac36-5878507537c7\") " pod="openshift-multus/multus-bnn7n" Mar 13 12:37:26.793556 master-0 kubenswrapper[7518]: I0313 12:37:26.793534 7518 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" 
(UniqueName: \"kubernetes.io/secret/3d653e1a-5903-4a02-9357-df145f028c0d-package-server-manager-serving-cert\") pod \"package-server-manager-854648ff6d-669qk\" (UID: \"3d653e1a-5903-4a02-9357-df145f028c0d\") " pod="openshift-operator-lifecycle-manager/package-server-manager-854648ff6d-669qk" Mar 13 12:37:26.793556 master-0 kubenswrapper[7518]: I0313 12:37:26.793565 7518 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/d6226325-c4d9-497e-8d19-a71adc66c5ac-host-run-ovn-kubernetes\") pod \"ovnkube-node-h8fwp\" (UID: \"d6226325-c4d9-497e-8d19-a71adc66c5ac\") " pod="openshift-ovn-kubernetes/ovnkube-node-h8fwp" Mar 13 12:37:26.793556 master-0 kubenswrapper[7518]: I0313 12:37:26.793564 7518 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/4dd0fc2f-f2ee-4447-a747-04a178288cf0-host-etc-kube\") pod \"network-operator-7c649bf6d4-kh6n9\" (UID: \"4dd0fc2f-f2ee-4447-a747-04a178288cf0\") " pod="openshift-network-operator/network-operator-7c649bf6d4-kh6n9" Mar 13 12:37:26.794957 master-0 kubenswrapper[7518]: I0313 12:37:26.793592 7518 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/ce3a655a-0684-4bc5-ac36-5878507537c7-host-run-multus-certs\") pod \"multus-bnn7n\" (UID: \"ce3a655a-0684-4bc5-ac36-5878507537c7\") " pod="openshift-multus/multus-bnn7n" Mar 13 12:37:26.794957 master-0 kubenswrapper[7518]: I0313 12:37:26.793597 7518 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/d6226325-c4d9-497e-8d19-a71adc66c5ac-host-run-ovn-kubernetes\") pod \"ovnkube-node-h8fwp\" (UID: \"d6226325-c4d9-497e-8d19-a71adc66c5ac\") " pod="openshift-ovn-kubernetes/ovnkube-node-h8fwp" Mar 13 12:37:26.794957 master-0 kubenswrapper[7518]: I0313 12:37:26.793613 
7518 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/152689b1-5875-4a9a-bb25-bee858523168-os-release\") pod \"multus-additional-cni-plugins-78p2k\" (UID: \"152689b1-5875-4a9a-bb25-bee858523168\") " pod="openshift-multus/multus-additional-cni-plugins-78p2k" Mar 13 12:37:26.794957 master-0 kubenswrapper[7518]: I0313 12:37:26.793620 7518 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/152689b1-5875-4a9a-bb25-bee858523168-system-cni-dir\") pod \"multus-additional-cni-plugins-78p2k\" (UID: \"152689b1-5875-4a9a-bb25-bee858523168\") " pod="openshift-multus/multus-additional-cni-plugins-78p2k" Mar 13 12:37:26.794957 master-0 kubenswrapper[7518]: I0313 12:37:26.793656 7518 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/ce3a655a-0684-4bc5-ac36-5878507537c7-host-var-lib-cni-multus\") pod \"multus-bnn7n\" (UID: \"ce3a655a-0684-4bc5-ac36-5878507537c7\") " pod="openshift-multus/multus-bnn7n" Mar 13 12:37:26.794957 master-0 kubenswrapper[7518]: I0313 12:37:26.793659 7518 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/152689b1-5875-4a9a-bb25-bee858523168-tuning-conf-dir\") pod \"multus-additional-cni-plugins-78p2k\" (UID: \"152689b1-5875-4a9a-bb25-bee858523168\") " pod="openshift-multus/multus-additional-cni-plugins-78p2k" Mar 13 12:37:26.794957 master-0 kubenswrapper[7518]: I0313 12:37:26.793677 7518 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/d6226325-c4d9-497e-8d19-a71adc66c5ac-var-lib-openvswitch\") pod \"ovnkube-node-h8fwp\" (UID: \"d6226325-c4d9-497e-8d19-a71adc66c5ac\") " pod="openshift-ovn-kubernetes/ovnkube-node-h8fwp" Mar 13 12:37:26.794957 master-0 
kubenswrapper[7518]: E0313 12:37:26.793700 7518 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/package-server-manager-serving-cert: secret "package-server-manager-serving-cert" not found Mar 13 12:37:26.794957 master-0 kubenswrapper[7518]: E0313 12:37:26.793724 7518 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3d653e1a-5903-4a02-9357-df145f028c0d-package-server-manager-serving-cert podName:3d653e1a-5903-4a02-9357-df145f028c0d nodeName:}" failed. No retries permitted until 2026-03-13 12:37:27.293715623 +0000 UTC m=+1.926784810 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "package-server-manager-serving-cert" (UniqueName: "kubernetes.io/secret/3d653e1a-5903-4a02-9357-df145f028c0d-package-server-manager-serving-cert") pod "package-server-manager-854648ff6d-669qk" (UID: "3d653e1a-5903-4a02-9357-df145f028c0d") : secret "package-server-manager-serving-cert" not found Mar 13 12:37:26.794957 master-0 kubenswrapper[7518]: I0313 12:37:26.793698 7518 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/152689b1-5875-4a9a-bb25-bee858523168-cnibin\") pod \"multus-additional-cni-plugins-78p2k\" (UID: \"152689b1-5875-4a9a-bb25-bee858523168\") " pod="openshift-multus/multus-additional-cni-plugins-78p2k" Mar 13 12:37:26.794957 master-0 kubenswrapper[7518]: I0313 12:37:26.793744 7518 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/ce3a655a-0684-4bc5-ac36-5878507537c7-host-run-multus-certs\") pod \"multus-bnn7n\" (UID: \"ce3a655a-0684-4bc5-ac36-5878507537c7\") " pod="openshift-multus/multus-bnn7n" Mar 13 12:37:26.794957 master-0 kubenswrapper[7518]: I0313 12:37:26.793762 7518 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/152689b1-5875-4a9a-bb25-bee858523168-os-release\") pod 
\"multus-additional-cni-plugins-78p2k\" (UID: \"152689b1-5875-4a9a-bb25-bee858523168\") " pod="openshift-multus/multus-additional-cni-plugins-78p2k" Mar 13 12:37:26.800032 master-0 kubenswrapper[7518]: I0313 12:37:26.799991 7518 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/1f43b4e7-5cd1-46d2-a02e-0d846b2e5182-webhook-cert\") pod \"network-node-identity-qg8q5\" (UID: \"1f43b4e7-5cd1-46d2-a02e-0d846b2e5182\") " pod="openshift-network-node-identity/network-node-identity-qg8q5" Mar 13 12:37:26.809339 master-0 kubenswrapper[7518]: I0313 12:37:26.809308 7518 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Mar 13 12:37:26.828879 master-0 kubenswrapper[7518]: I0313 12:37:26.828828 7518 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Mar 13 12:37:26.884397 master-0 kubenswrapper[7518]: I0313 12:37:26.881968 7518 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Mar 13 12:37:26.884397 master-0 kubenswrapper[7518]: I0313 12:37:26.883507 7518 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Mar 13 12:37:26.884397 master-0 kubenswrapper[7518]: I0313 12:37:26.883808 7518 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Mar 13 12:37:26.885520 master-0 kubenswrapper[7518]: I0313 12:37:26.885486 7518 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/1f43b4e7-5cd1-46d2-a02e-0d846b2e5182-ovnkube-identity-cm\") pod \"network-node-identity-qg8q5\" (UID: \"1f43b4e7-5cd1-46d2-a02e-0d846b2e5182\") " pod="openshift-network-node-identity/network-node-identity-qg8q5" Mar 13 12:37:26.889581 master-0 
kubenswrapper[7518]: I0313 12:37:26.889527 7518 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Mar 13 12:37:26.889816 master-0 kubenswrapper[7518]: I0313 12:37:26.889772 7518 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/d6226325-c4d9-497e-8d19-a71adc66c5ac-ovnkube-script-lib\") pod \"ovnkube-node-h8fwp\" (UID: \"d6226325-c4d9-497e-8d19-a71adc66c5ac\") " pod="openshift-ovn-kubernetes/ovnkube-node-h8fwp" Mar 13 12:37:26.890994 master-0 kubenswrapper[7518]: I0313 12:37:26.890952 7518 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/d6226325-c4d9-497e-8d19-a71adc66c5ac-ovn-node-metrics-cert\") pod \"ovnkube-node-h8fwp\" (UID: \"d6226325-c4d9-497e-8d19-a71adc66c5ac\") " pod="openshift-ovn-kubernetes/ovnkube-node-h8fwp" Mar 13 12:37:26.909015 master-0 kubenswrapper[7518]: I0313 12:37:26.908947 7518 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Mar 13 12:37:26.911902 master-0 kubenswrapper[7518]: I0313 12:37:26.911863 7518 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/08e2bc8e-ca80-454c-81dc-211d122e32e0-iptables-alerter-script\") pod \"iptables-alerter-qz6pg\" (UID: \"08e2bc8e-ca80-454c-81dc-211d122e32e0\") " pod="openshift-network-operator/iptables-alerter-qz6pg" Mar 13 12:37:26.955772 master-0 kubenswrapper[7518]: E0313 12:37:26.955677 7518 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"bootstrap-kube-apiserver-master-0\" already exists" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Mar 13 12:37:26.974870 master-0 kubenswrapper[7518]: W0313 12:37:26.974837 7518 warnings.go:70] would violate PodSecurity "restricted:latest": host namespaces 
(hostNetwork=true), hostPort (container "etcd" uses hostPorts 2379, 2380), privileged (containers "etcdctl", "etcd" must not set securityContext.privileged=true), allowPrivilegeEscalation != false (containers "etcdctl", "etcd" must set securityContext.allowPrivilegeEscalation=false), unrestricted capabilities (containers "etcdctl", "etcd" must set securityContext.capabilities.drop=["ALL"]), restricted volume types (volumes "certs", "data-dir" use restricted volume type "hostPath"), runAsNonRoot != true (pod or containers "etcdctl", "etcd" must set securityContext.runAsNonRoot=true), seccompProfile (pod or containers "etcdctl", "etcd" must set securityContext.seccompProfile.type to "RuntimeDefault" or "Localhost") Mar 13 12:37:26.975396 master-0 kubenswrapper[7518]: E0313 12:37:26.974887 7518 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"etcd-master-0-master-0\" already exists" pod="openshift-etcd/etcd-master-0-master-0" Mar 13 12:37:26.998942 master-0 kubenswrapper[7518]: E0313 12:37:26.998799 7518 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"bootstrap-kube-controller-manager-master-0\" already exists" pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 13 12:37:27.021404 master-0 kubenswrapper[7518]: E0313 12:37:27.021362 7518 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"kube-rbac-proxy-crio-master-0\" already exists" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" Mar 13 12:37:27.034832 master-0 kubenswrapper[7518]: E0313 12:37:27.034782 7518 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"bootstrap-kube-scheduler-master-0\" already exists" pod="kube-system/bootstrap-kube-scheduler-master-0" Mar 13 12:37:27.062829 master-0 kubenswrapper[7518]: I0313 12:37:27.062798 7518 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4j5fc\" (UniqueName: 
\"kubernetes.io/projected/d6226325-c4d9-497e-8d19-a71adc66c5ac-kube-api-access-4j5fc\") pod \"ovnkube-node-h8fwp\" (UID: \"d6226325-c4d9-497e-8d19-a71adc66c5ac\") " pod="openshift-ovn-kubernetes/ovnkube-node-h8fwp" Mar 13 12:37:27.082717 master-0 kubenswrapper[7518]: I0313 12:37:27.082655 7518 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9q2qc\" (UniqueName: \"kubernetes.io/projected/f5775266-5e58-44ed-81cb-dfe3faf38add-kube-api-access-9q2qc\") pod \"kube-storage-version-migrator-operator-7f65c457f5-hrm82\" (UID: \"f5775266-5e58-44ed-81cb-dfe3faf38add\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-7f65c457f5-hrm82" Mar 13 12:37:27.102347 master-0 kubenswrapper[7518]: I0313 12:37:27.102298 7518 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8sk7j\" (UniqueName: \"kubernetes.io/projected/604456a0-4997-43bc-87ef-283a002111fe-kube-api-access-8sk7j\") pod \"cluster-monitoring-operator-674cbfbd9d-zwtdz\" (UID: \"604456a0-4997-43bc-87ef-283a002111fe\") " pod="openshift-monitoring/cluster-monitoring-operator-674cbfbd9d-zwtdz" Mar 13 12:37:27.126735 master-0 kubenswrapper[7518]: I0313 12:37:27.126677 7518 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6x8kz\" (UniqueName: \"kubernetes.io/projected/3d653e1a-5903-4a02-9357-df145f028c0d-kube-api-access-6x8kz\") pod \"package-server-manager-854648ff6d-669qk\" (UID: \"3d653e1a-5903-4a02-9357-df145f028c0d\") " pod="openshift-operator-lifecycle-manager/package-server-manager-854648ff6d-669qk" Mar 13 12:37:27.141699 master-0 kubenswrapper[7518]: I0313 12:37:27.141668 7518 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mlvjp\" (UniqueName: \"kubernetes.io/projected/5ae41cff-0949-47f8-aae9-ae133191476d-kube-api-access-mlvjp\") pod \"ovnkube-control-plane-66b55d57d-5cww5\" (UID: \"5ae41cff-0949-47f8-aae9-ae133191476d\") " 
pod="openshift-ovn-kubernetes/ovnkube-control-plane-66b55d57d-5cww5" Mar 13 12:37:27.179880 master-0 kubenswrapper[7518]: I0313 12:37:27.179826 7518 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vgbvr\" (UniqueName: \"kubernetes.io/projected/ce3a655a-0684-4bc5-ac36-5878507537c7-kube-api-access-vgbvr\") pod \"multus-bnn7n\" (UID: \"ce3a655a-0684-4bc5-ac36-5878507537c7\") " pod="openshift-multus/multus-bnn7n" Mar 13 12:37:27.198161 master-0 kubenswrapper[7518]: I0313 12:37:27.198097 7518 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x5nb7\" (UniqueName: \"kubernetes.io/projected/d3d998ee-b26f-4e30-83bc-f94f8c68060a-kube-api-access-x5nb7\") pod \"marketplace-operator-64bf9778cb-7qhr4\" (UID: \"d3d998ee-b26f-4e30-83bc-f94f8c68060a\") " pod="openshift-marketplace/marketplace-operator-64bf9778cb-7qhr4" Mar 13 12:37:27.198904 master-0 kubenswrapper[7518]: I0313 12:37:27.198787 7518 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/10944f9c-8ce9-44e6-9c36-a0ea19d8cae3-srv-cert\") pod \"catalog-operator-7d9c49f57b-tlnkd\" (UID: \"10944f9c-8ce9-44e6-9c36-a0ea19d8cae3\") " pod="openshift-operator-lifecycle-manager/catalog-operator-7d9c49f57b-tlnkd" Mar 13 12:37:27.199178 master-0 kubenswrapper[7518]: E0313 12:37:27.198978 7518 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/catalog-operator-serving-cert: secret "catalog-operator-serving-cert" not found Mar 13 12:37:27.199178 master-0 kubenswrapper[7518]: E0313 12:37:27.199093 7518 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/10944f9c-8ce9-44e6-9c36-a0ea19d8cae3-srv-cert podName:10944f9c-8ce9-44e6-9c36-a0ea19d8cae3 nodeName:}" failed. No retries permitted until 2026-03-13 12:37:28.199046289 +0000 UTC m=+2.832115476 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/10944f9c-8ce9-44e6-9c36-a0ea19d8cae3-srv-cert") pod "catalog-operator-7d9c49f57b-tlnkd" (UID: "10944f9c-8ce9-44e6-9c36-a0ea19d8cae3") : secret "catalog-operator-serving-cert" not found Mar 13 12:37:27.199403 master-0 kubenswrapper[7518]: I0313 12:37:27.199190 7518 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/2f79578c-bbfb-4968-893a-730deb4c01f9-metrics-tls\") pod \"ingress-operator-677db989d6-ckl2j\" (UID: \"2f79578c-bbfb-4968-893a-730deb4c01f9\") " pod="openshift-ingress-operator/ingress-operator-677db989d6-ckl2j" Mar 13 12:37:27.199403 master-0 kubenswrapper[7518]: I0313 12:37:27.199306 7518 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f39d7f76-0075-44c3-9101-eb2607cb176a-serving-cert\") pod \"cluster-version-operator-745944c6b7-mbjxt\" (UID: \"f39d7f76-0075-44c3-9101-eb2607cb176a\") " pod="openshift-cluster-version/cluster-version-operator-745944c6b7-mbjxt" Mar 13 12:37:27.200110 master-0 kubenswrapper[7518]: E0313 12:37:27.199540 7518 secret.go:189] Couldn't get secret openshift-cluster-version/cluster-version-operator-serving-cert: secret "cluster-version-operator-serving-cert" not found Mar 13 12:37:27.200110 master-0 kubenswrapper[7518]: E0313 12:37:27.199579 7518 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f39d7f76-0075-44c3-9101-eb2607cb176a-serving-cert podName:f39d7f76-0075-44c3-9101-eb2607cb176a nodeName:}" failed. No retries permitted until 2026-03-13 12:37:28.199569376 +0000 UTC m=+2.832638563 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/f39d7f76-0075-44c3-9101-eb2607cb176a-serving-cert") pod "cluster-version-operator-745944c6b7-mbjxt" (UID: "f39d7f76-0075-44c3-9101-eb2607cb176a") : secret "cluster-version-operator-serving-cert" not found Mar 13 12:37:27.200110 master-0 kubenswrapper[7518]: E0313 12:37:27.199644 7518 secret.go:189] Couldn't get secret openshift-ingress-operator/metrics-tls: secret "metrics-tls" not found Mar 13 12:37:27.200110 master-0 kubenswrapper[7518]: E0313 12:37:27.199676 7518 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2f79578c-bbfb-4968-893a-730deb4c01f9-metrics-tls podName:2f79578c-bbfb-4968-893a-730deb4c01f9 nodeName:}" failed. No retries permitted until 2026-03-13 12:37:28.199668487 +0000 UTC m=+2.832737674 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/2f79578c-bbfb-4968-893a-730deb4c01f9-metrics-tls") pod "ingress-operator-677db989d6-ckl2j" (UID: "2f79578c-bbfb-4968-893a-730deb4c01f9") : secret "metrics-tls" not found Mar 13 12:37:27.203119 master-0 kubenswrapper[7518]: I0313 12:37:27.203081 7518 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bcf05594-4c10-4b54-a47c-d55e323f1f87-bound-sa-token\") pod \"cluster-image-registry-operator-86d6d77c7c-q287n\" (UID: \"bcf05594-4c10-4b54-a47c-d55e323f1f87\") " pod="openshift-image-registry/cluster-image-registry-operator-86d6d77c7c-q287n" Mar 13 12:37:27.219867 master-0 kubenswrapper[7518]: I0313 12:37:27.219820 7518 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vg8tz\" (UniqueName: \"kubernetes.io/projected/089cfabc-9d3d-4260-bb16-8b5eaf73b3fa-kube-api-access-vg8tz\") pod \"openshift-apiserver-operator-799b6db4d7-xchrj\" (UID: \"089cfabc-9d3d-4260-bb16-8b5eaf73b3fa\") " 
pod="openshift-apiserver-operator/openshift-apiserver-operator-799b6db4d7-xchrj" Mar 13 12:37:27.232221 master-0 kubenswrapper[7518]: E0313 12:37:27.232129 7518 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee46e13e26156c904e5784e2d64511021ed0974a169ccd6476b05bff1c44ec56" Mar 13 12:37:27.237785 master-0 kubenswrapper[7518]: E0313 12:37:27.232473 7518 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-controller-manager-operator,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee46e13e26156c904e5784e2d64511021ed0974a169ccd6476b05bff1c44ec56,Command:[cluster-kube-controller-manager-operator operator],Args:[--config=/var/run/configmaps/config/config.yaml],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:0,ContainerPort:8443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fca00eb71b1f03e5b5180a66f3871f5626d337b56196622f5842cfc165523b4,ValueFrom:nil,},EnvVar{Name:OPERATOR_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee46e13e26156c904e5784e2d64511021ed0974a169ccd6476b05bff1c44ec56,ValueFrom:nil,},EnvVar{Name:CLUSTER_POLICY_CONTROLLER_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a324f47cf789c0480fa4bcb0812152abc3cd844318bab193108fe4349eed609,ValueFrom:nil,},EnvVar{Name:TOOLS_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35768a0c3eb24134dd38633e8acfc7db69ee96b2fd660e9bba3b8c996452fef7,ValueFrom:nil,},EnvVar{Name:OPERATOR_IMAGE_VERSION,Value:4.18.34,ValueFrom:nil,},EnvVar{Name:OPERAND_IMAGE_VERSION,Value:1.31.14,ValueFrom:nil,},EnvVar{Name:POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:
ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:false,MountPath:/var/run/configmaps/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:serving-cert,ReadOnly:false,MountPath:/var/run/secrets/serving-cert,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod kube-controller-manager-operator-86d7cdfdfb-br96g_openshift-kube-controller-manager-operator(77ef7e49-eb85-4f5e-94d3-a6a8619a6243): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Mar 13 12:37:27.237785 master-0 kubenswrapper[7518]: E0313 12:37:27.235148 7518 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager-operator\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-86d7cdfdfb-br96g" podUID="77ef7e49-eb85-4f5e-94d3-a6a8619a6243" Mar 13 12:37:27.245839 master-0 
kubenswrapper[7518]: I0313 12:37:27.245781 7518 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c24hd\" (UniqueName: \"kubernetes.io/projected/3020d236-03e0-4916-97dd-f1085632ca43-kube-api-access-c24hd\") pod \"cluster-node-tuning-operator-66c7586884-cz8pc\" (UID: \"3020d236-03e0-4916-97dd-f1085632ca43\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-66c7586884-cz8pc" Mar 13 12:37:27.260869 master-0 kubenswrapper[7518]: I0313 12:37:27.260751 7518 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9psfn\" (UniqueName: \"kubernetes.io/projected/4c0b18db-06ad-4d58-a353-f6fd96309dea-kube-api-access-9psfn\") pod \"multus-admission-controller-8d675b596-96gds\" (UID: \"4c0b18db-06ad-4d58-a353-f6fd96309dea\") " pod="openshift-multus/multus-admission-controller-8d675b596-96gds" Mar 13 12:37:27.280914 master-0 kubenswrapper[7518]: I0313 12:37:27.280834 7518 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p8hcd\" (UniqueName: \"kubernetes.io/projected/d5a19b80-d488-46d3-a4a8-0b80361077e1-kube-api-access-p8hcd\") pod \"olm-operator-d64cfc9db-rfqb9\" (UID: \"d5a19b80-d488-46d3-a4a8-0b80361077e1\") " pod="openshift-operator-lifecycle-manager/olm-operator-d64cfc9db-rfqb9" Mar 13 12:37:27.301484 master-0 kubenswrapper[7518]: I0313 12:37:27.301410 7518 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fnw9d\" (UniqueName: \"kubernetes.io/projected/4dd0fc2f-f2ee-4447-a747-04a178288cf0-kube-api-access-fnw9d\") pod \"network-operator-7c649bf6d4-kh6n9\" (UID: \"4dd0fc2f-f2ee-4447-a747-04a178288cf0\") " pod="openshift-network-operator/network-operator-7c649bf6d4-kh6n9" Mar 13 12:37:27.302185 master-0 kubenswrapper[7518]: I0313 12:37:27.302033 7518 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: 
\"kubernetes.io/secret/d5a19b80-d488-46d3-a4a8-0b80361077e1-srv-cert\") pod \"olm-operator-d64cfc9db-rfqb9\" (UID: \"d5a19b80-d488-46d3-a4a8-0b80361077e1\") " pod="openshift-operator-lifecycle-manager/olm-operator-d64cfc9db-rfqb9" Mar 13 12:37:27.303044 master-0 kubenswrapper[7518]: E0313 12:37:27.302375 7518 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/olm-operator-serving-cert: secret "olm-operator-serving-cert" not found Mar 13 12:37:27.303044 master-0 kubenswrapper[7518]: E0313 12:37:27.302521 7518 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/package-server-manager-serving-cert: secret "package-server-manager-serving-cert" not found Mar 13 12:37:27.303044 master-0 kubenswrapper[7518]: E0313 12:37:27.302517 7518 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d5a19b80-d488-46d3-a4a8-0b80361077e1-srv-cert podName:d5a19b80-d488-46d3-a4a8-0b80361077e1 nodeName:}" failed. No retries permitted until 2026-03-13 12:37:28.302450027 +0000 UTC m=+2.935519264 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/d5a19b80-d488-46d3-a4a8-0b80361077e1-srv-cert") pod "olm-operator-d64cfc9db-rfqb9" (UID: "d5a19b80-d488-46d3-a4a8-0b80361077e1") : secret "olm-operator-serving-cert" not found Mar 13 12:37:27.303044 master-0 kubenswrapper[7518]: E0313 12:37:27.302588 7518 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3d653e1a-5903-4a02-9357-df145f028c0d-package-server-manager-serving-cert podName:3d653e1a-5903-4a02-9357-df145f028c0d nodeName:}" failed. No retries permitted until 2026-03-13 12:37:28.302572049 +0000 UTC m=+2.935641236 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "package-server-manager-serving-cert" (UniqueName: "kubernetes.io/secret/3d653e1a-5903-4a02-9357-df145f028c0d-package-server-manager-serving-cert") pod "package-server-manager-854648ff6d-669qk" (UID: "3d653e1a-5903-4a02-9357-df145f028c0d") : secret "package-server-manager-serving-cert" not found Mar 13 12:37:27.303044 master-0 kubenswrapper[7518]: I0313 12:37:27.302393 7518 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3d653e1a-5903-4a02-9357-df145f028c0d-package-server-manager-serving-cert\") pod \"package-server-manager-854648ff6d-669qk\" (UID: \"3d653e1a-5903-4a02-9357-df145f028c0d\") " pod="openshift-operator-lifecycle-manager/package-server-manager-854648ff6d-669qk" Mar 13 12:37:27.303044 master-0 kubenswrapper[7518]: I0313 12:37:27.302647 7518 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/d3d998ee-b26f-4e30-83bc-f94f8c68060a-marketplace-operator-metrics\") pod \"marketplace-operator-64bf9778cb-7qhr4\" (UID: \"d3d998ee-b26f-4e30-83bc-f94f8c68060a\") " pod="openshift-marketplace/marketplace-operator-64bf9778cb-7qhr4" Mar 13 12:37:27.303044 master-0 kubenswrapper[7518]: I0313 12:37:27.302676 7518 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-tuning-operator-tls\" (UniqueName: \"kubernetes.io/secret/3020d236-03e0-4916-97dd-f1085632ca43-node-tuning-operator-tls\") pod \"cluster-node-tuning-operator-66c7586884-cz8pc\" (UID: \"3020d236-03e0-4916-97dd-f1085632ca43\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-66c7586884-cz8pc" Mar 13 12:37:27.303044 master-0 kubenswrapper[7518]: I0313 12:37:27.302703 7518 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: 
\"kubernetes.io/secret/4c0b18db-06ad-4d58-a353-f6fd96309dea-webhook-certs\") pod \"multus-admission-controller-8d675b596-96gds\" (UID: \"4c0b18db-06ad-4d58-a353-f6fd96309dea\") " pod="openshift-multus/multus-admission-controller-8d675b596-96gds" Mar 13 12:37:27.303044 master-0 kubenswrapper[7518]: I0313 12:37:27.302738 7518 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/3020d236-03e0-4916-97dd-f1085632ca43-apiservice-cert\") pod \"cluster-node-tuning-operator-66c7586884-cz8pc\" (UID: \"3020d236-03e0-4916-97dd-f1085632ca43\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-66c7586884-cz8pc" Mar 13 12:37:27.303044 master-0 kubenswrapper[7518]: I0313 12:37:27.302762 7518 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bcf05594-4c10-4b54-a47c-d55e323f1f87-image-registry-operator-tls\") pod \"cluster-image-registry-operator-86d6d77c7c-q287n\" (UID: \"bcf05594-4c10-4b54-a47c-d55e323f1f87\") " pod="openshift-image-registry/cluster-image-registry-operator-86d6d77c7c-q287n" Mar 13 12:37:27.303044 master-0 kubenswrapper[7518]: I0313 12:37:27.302784 7518 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/13f32761-b386-4f93-b3c0-b16ea53d338a-metrics-tls\") pod \"dns-operator-589895fbb7-mmwk7\" (UID: \"13f32761-b386-4f93-b3c0-b16ea53d338a\") " pod="openshift-dns-operator/dns-operator-589895fbb7-mmwk7" Mar 13 12:37:27.303044 master-0 kubenswrapper[7518]: I0313 12:37:27.302807 7518 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/29b6aa89-0416-4595-9deb-10b290521d86-metrics-certs\") pod \"network-metrics-daemon-r9lmb\" (UID: \"29b6aa89-0416-4595-9deb-10b290521d86\") " pod="openshift-multus/network-metrics-daemon-r9lmb" 
Mar 13 12:37:27.303044 master-0 kubenswrapper[7518]: I0313 12:37:27.302829 7518 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/604456a0-4997-43bc-87ef-283a002111fe-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-674cbfbd9d-zwtdz\" (UID: \"604456a0-4997-43bc-87ef-283a002111fe\") " pod="openshift-monitoring/cluster-monitoring-operator-674cbfbd9d-zwtdz" Mar 13 12:37:27.303044 master-0 kubenswrapper[7518]: E0313 12:37:27.302978 7518 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/performance-addon-operator-webhook-cert: secret "performance-addon-operator-webhook-cert" not found Mar 13 12:37:27.303044 master-0 kubenswrapper[7518]: E0313 12:37:27.303015 7518 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3020d236-03e0-4916-97dd-f1085632ca43-apiservice-cert podName:3020d236-03e0-4916-97dd-f1085632ca43 nodeName:}" failed. No retries permitted until 2026-03-13 12:37:28.303004815 +0000 UTC m=+2.936074002 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "apiservice-cert" (UniqueName: "kubernetes.io/secret/3020d236-03e0-4916-97dd-f1085632ca43-apiservice-cert") pod "cluster-node-tuning-operator-66c7586884-cz8pc" (UID: "3020d236-03e0-4916-97dd-f1085632ca43") : secret "performance-addon-operator-webhook-cert" not found Mar 13 12:37:27.303934 master-0 kubenswrapper[7518]: E0313 12:37:27.303067 7518 secret.go:189] Couldn't get secret openshift-marketplace/marketplace-operator-metrics: secret "marketplace-operator-metrics" not found Mar 13 12:37:27.303934 master-0 kubenswrapper[7518]: E0313 12:37:27.303098 7518 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d3d998ee-b26f-4e30-83bc-f94f8c68060a-marketplace-operator-metrics podName:d3d998ee-b26f-4e30-83bc-f94f8c68060a nodeName:}" failed. 
No retries permitted until 2026-03-13 12:37:28.303087866 +0000 UTC m=+2.936157133 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "marketplace-operator-metrics" (UniqueName: "kubernetes.io/secret/d3d998ee-b26f-4e30-83bc-f94f8c68060a-marketplace-operator-metrics") pod "marketplace-operator-64bf9778cb-7qhr4" (UID: "d3d998ee-b26f-4e30-83bc-f94f8c68060a") : secret "marketplace-operator-metrics" not found Mar 13 12:37:27.303934 master-0 kubenswrapper[7518]: E0313 12:37:27.303179 7518 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/node-tuning-operator-tls: secret "node-tuning-operator-tls" not found Mar 13 12:37:27.303934 master-0 kubenswrapper[7518]: E0313 12:37:27.303208 7518 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3020d236-03e0-4916-97dd-f1085632ca43-node-tuning-operator-tls podName:3020d236-03e0-4916-97dd-f1085632ca43 nodeName:}" failed. No retries permitted until 2026-03-13 12:37:28.303200157 +0000 UTC m=+2.936269344 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "node-tuning-operator-tls" (UniqueName: "kubernetes.io/secret/3020d236-03e0-4916-97dd-f1085632ca43-node-tuning-operator-tls") pod "cluster-node-tuning-operator-66c7586884-cz8pc" (UID: "3020d236-03e0-4916-97dd-f1085632ca43") : secret "node-tuning-operator-tls" not found Mar 13 12:37:27.303934 master-0 kubenswrapper[7518]: E0313 12:37:27.303262 7518 secret.go:189] Couldn't get secret openshift-multus/multus-admission-controller-secret: secret "multus-admission-controller-secret" not found Mar 13 12:37:27.303934 master-0 kubenswrapper[7518]: E0313 12:37:27.303285 7518 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4c0b18db-06ad-4d58-a353-f6fd96309dea-webhook-certs podName:4c0b18db-06ad-4d58-a353-f6fd96309dea nodeName:}" failed. No retries permitted until 2026-03-13 12:37:28.303278479 +0000 UTC m=+2.936347756 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/4c0b18db-06ad-4d58-a353-f6fd96309dea-webhook-certs") pod "multus-admission-controller-8d675b596-96gds" (UID: "4c0b18db-06ad-4d58-a353-f6fd96309dea") : secret "multus-admission-controller-secret" not found Mar 13 12:37:27.303934 master-0 kubenswrapper[7518]: E0313 12:37:27.303335 7518 secret.go:189] Couldn't get secret openshift-dns-operator/metrics-tls: secret "metrics-tls" not found Mar 13 12:37:27.303934 master-0 kubenswrapper[7518]: E0313 12:37:27.303362 7518 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/13f32761-b386-4f93-b3c0-b16ea53d338a-metrics-tls podName:13f32761-b386-4f93-b3c0-b16ea53d338a nodeName:}" failed. No retries permitted until 2026-03-13 12:37:28.30335407 +0000 UTC m=+2.936423347 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/13f32761-b386-4f93-b3c0-b16ea53d338a-metrics-tls") pod "dns-operator-589895fbb7-mmwk7" (UID: "13f32761-b386-4f93-b3c0-b16ea53d338a") : secret "metrics-tls" not found Mar 13 12:37:27.303934 master-0 kubenswrapper[7518]: E0313 12:37:27.303420 7518 secret.go:189] Couldn't get secret openshift-image-registry/image-registry-operator-tls: secret "image-registry-operator-tls" not found Mar 13 12:37:27.303934 master-0 kubenswrapper[7518]: E0313 12:37:27.303444 7518 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/bcf05594-4c10-4b54-a47c-d55e323f1f87-image-registry-operator-tls podName:bcf05594-4c10-4b54-a47c-d55e323f1f87 nodeName:}" failed. No retries permitted until 2026-03-13 12:37:28.303437151 +0000 UTC m=+2.936506338 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "image-registry-operator-tls" (UniqueName: "kubernetes.io/secret/bcf05594-4c10-4b54-a47c-d55e323f1f87-image-registry-operator-tls") pod "cluster-image-registry-operator-86d6d77c7c-q287n" (UID: "bcf05594-4c10-4b54-a47c-d55e323f1f87") : secret "image-registry-operator-tls" not found Mar 13 12:37:27.303934 master-0 kubenswrapper[7518]: E0313 12:37:27.303493 7518 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: secret "metrics-daemon-secret" not found Mar 13 12:37:27.303934 master-0 kubenswrapper[7518]: E0313 12:37:27.303549 7518 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/29b6aa89-0416-4595-9deb-10b290521d86-metrics-certs podName:29b6aa89-0416-4595-9deb-10b290521d86 nodeName:}" failed. No retries permitted until 2026-03-13 12:37:28.303541203 +0000 UTC m=+2.936610380 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/29b6aa89-0416-4595-9deb-10b290521d86-metrics-certs") pod "network-metrics-daemon-r9lmb" (UID: "29b6aa89-0416-4595-9deb-10b290521d86") : secret "metrics-daemon-secret" not found Mar 13 12:37:27.304534 master-0 kubenswrapper[7518]: E0313 12:37:27.304446 7518 secret.go:189] Couldn't get secret openshift-monitoring/cluster-monitoring-operator-tls: secret "cluster-monitoring-operator-tls" not found Mar 13 12:37:27.304534 master-0 kubenswrapper[7518]: E0313 12:37:27.304505 7518 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/604456a0-4997-43bc-87ef-283a002111fe-cluster-monitoring-operator-tls podName:604456a0-4997-43bc-87ef-283a002111fe nodeName:}" failed. No retries permitted until 2026-03-13 12:37:28.304493876 +0000 UTC m=+2.937563123 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "cluster-monitoring-operator-tls" (UniqueName: "kubernetes.io/secret/604456a0-4997-43bc-87ef-283a002111fe-cluster-monitoring-operator-tls") pod "cluster-monitoring-operator-674cbfbd9d-zwtdz" (UID: "604456a0-4997-43bc-87ef-283a002111fe") : secret "cluster-monitoring-operator-tls" not found
Mar 13 12:37:27.322991 master-0 kubenswrapper[7518]: I0313 12:37:27.322918 7518 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-brzd4\" (UniqueName: \"kubernetes.io/projected/1f43b4e7-5cd1-46d2-a02e-0d846b2e5182-kube-api-access-brzd4\") pod \"network-node-identity-qg8q5\" (UID: \"1f43b4e7-5cd1-46d2-a02e-0d846b2e5182\") " pod="openshift-network-node-identity/network-node-identity-qg8q5"
Mar 13 12:37:27.342283 master-0 kubenswrapper[7518]: I0313 12:37:27.342231 7518 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m2p67\" (UniqueName: \"kubernetes.io/projected/13f32761-b386-4f93-b3c0-b16ea53d338a-kube-api-access-m2p67\") pod \"dns-operator-589895fbb7-mmwk7\" (UID: \"13f32761-b386-4f93-b3c0-b16ea53d338a\") " pod="openshift-dns-operator/dns-operator-589895fbb7-mmwk7"
Mar 13 12:37:27.361719 master-0 kubenswrapper[7518]: I0313 12:37:27.361660 7518 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xstz5\" (UniqueName: \"kubernetes.io/projected/08e2bc8e-ca80-454c-81dc-211d122e32e0-kube-api-access-xstz5\") pod \"iptables-alerter-qz6pg\" (UID: \"08e2bc8e-ca80-454c-81dc-211d122e32e0\") " pod="openshift-network-operator/iptables-alerter-qz6pg"
Mar 13 12:37:27.380396 master-0 kubenswrapper[7518]: I0313 12:37:27.380322 7518 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cbtjs\" (UniqueName: \"kubernetes.io/projected/29b6aa89-0416-4595-9deb-10b290521d86-kube-api-access-cbtjs\") pod \"network-metrics-daemon-r9lmb\" (UID: \"29b6aa89-0416-4595-9deb-10b290521d86\") " pod="openshift-multus/network-metrics-daemon-r9lmb"
Mar 13 12:37:27.400234 master-0 kubenswrapper[7518]: I0313 12:37:27.400160 7518 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pmfxj\" (UniqueName: \"kubernetes.io/projected/887d261f-d07f-4ef0-a230-6568f47acf4d-kube-api-access-pmfxj\") pod \"cluster-olm-operator-77899cf6d-7nvbn\" (UID: \"887d261f-d07f-4ef0-a230-6568f47acf4d\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-77899cf6d-7nvbn"
Mar 13 12:37:27.419448 master-0 kubenswrapper[7518]: I0313 12:37:27.419396 7518 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5w5r2\" (UniqueName: \"kubernetes.io/projected/034aaf8e-95df-4171-bae4-e7abe58d15f7-kube-api-access-5w5r2\") pod \"service-ca-operator-69b6fc6b88-vmscz\" (UID: \"034aaf8e-95df-4171-bae4-e7abe58d15f7\") " pod="openshift-service-ca-operator/service-ca-operator-69b6fc6b88-vmscz"
Mar 13 12:37:27.440798 master-0 kubenswrapper[7518]: I0313 12:37:27.440758 7518 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-km69t\" (UniqueName: \"kubernetes.io/projected/152689b1-5875-4a9a-bb25-bee858523168-kube-api-access-km69t\") pod \"multus-additional-cni-plugins-78p2k\" (UID: \"152689b1-5875-4a9a-bb25-bee858523168\") " pod="openshift-multus/multus-additional-cni-plugins-78p2k"
Mar 13 12:37:27.511313 master-0 kubenswrapper[7518]: I0313 12:37:27.510413 7518 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j4hd6\" (UniqueName: \"kubernetes.io/projected/bcf05594-4c10-4b54-a47c-d55e323f1f87-kube-api-access-j4hd6\") pod \"cluster-image-registry-operator-86d6d77c7c-q287n\" (UID: \"bcf05594-4c10-4b54-a47c-d55e323f1f87\") " pod="openshift-image-registry/cluster-image-registry-operator-86d6d77c7c-q287n"
Mar 13 12:37:27.525156 master-0 kubenswrapper[7518]: I0313 12:37:27.524993 7518 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
Mar 13 12:37:27.525156 master-0 kubenswrapper[7518]: I0313 12:37:27.525124 7518 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m4tnq\" (UniqueName: \"kubernetes.io/projected/d11f8baa-6e8e-4ac0-9b23-1c44efd0ab2a-kube-api-access-m4tnq\") pod \"authentication-operator-7c6989d6c4-tc4ht\" (UID: \"d11f8baa-6e8e-4ac0-9b23-1c44efd0ab2a\") " pod="openshift-authentication-operator/authentication-operator-7c6989d6c4-tc4ht"
Mar 13 12:37:27.529836 master-0 kubenswrapper[7518]: I0313 12:37:27.529765 7518 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-btf8q\" (UniqueName: \"kubernetes.io/projected/269aedfd-4274-4998-bd0d-603b67257666-kube-api-access-btf8q\") pod \"network-check-target-pnwsc\" (UID: \"269aedfd-4274-4998-bd0d-603b67257666\") " pod="openshift-network-diagnostics/network-check-target-pnwsc"
Mar 13 12:37:27.588839 master-0 kubenswrapper[7518]: I0313 12:37:27.588806 7518 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-h8fwp"
Mar 13 12:37:27.613876 master-0 kubenswrapper[7518]: I0313 12:37:27.613533 7518 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-h8fwp"
Mar 13 12:37:27.780889 master-0 kubenswrapper[7518]: I0313 12:37:27.780859 7518 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-pnwsc"
Mar 13 12:37:27.913976 master-0 kubenswrapper[7518]: E0313 12:37:27.913915 7518 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:526c5c02a8fa86a2fa83a7087d4a5c4b1c4072c0f3906163494cc3b3c1295e9b"
Mar 13 12:37:27.914203 master-0 kubenswrapper[7518]: E0313 12:37:27.914072 7518 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:openshift-controller-manager-operator,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:526c5c02a8fa86a2fa83a7087d4a5c4b1c4072c0f3906163494cc3b3c1295e9b,Command:[cluster-openshift-controller-manager-operator operator],Args:[--config=/var/run/configmaps/config/config.yaml],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:0,ContainerPort:8443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:RELEASE_VERSION,Value:4.18.34,ValueFrom:nil,},EnvVar{Name:IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:eb82e437a701ce83b70e56be8477d987da67578714dda3d9fa6628804b1b56f5,ValueFrom:nil,},EnvVar{Name:OPERATOR_IMAGE_VERSION,Value:4.18.34,ValueFrom:nil,},EnvVar{Name:ROUTE_CONTROLLER_MANAGER_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fe5144b1f72bdcf5d5a52130f02ed86fbec3875cc4ac108ead00eaac1659e06,ValueFrom:nil,},EnvVar{Name:POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:false,MountPath:/var/run/configmaps/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:serving-cert,ReadOnly:false,MountPath:/var/run/secrets/serving-cert,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8cf2v,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod openshift-controller-manager-operator-8565d84698-hj2wk_openshift-controller-manager-operator(8c62b15f-001a-4b64-b85f-348aefde5d1b): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError"
Mar 13 12:37:27.915369 master-0 kubenswrapper[7518]: E0313 12:37:27.915315 7518 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"openshift-controller-manager-operator\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-8565d84698-hj2wk" podUID="8c62b15f-001a-4b64-b85f-348aefde5d1b"
Mar 13 12:37:28.215694 master-0 kubenswrapper[7518]: I0313 12:37:28.215577 7518 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/2f79578c-bbfb-4968-893a-730deb4c01f9-metrics-tls\") pod \"ingress-operator-677db989d6-ckl2j\" (UID: \"2f79578c-bbfb-4968-893a-730deb4c01f9\") " pod="openshift-ingress-operator/ingress-operator-677db989d6-ckl2j"
Mar 13 12:37:28.215694 master-0 kubenswrapper[7518]: I0313 12:37:28.215623 7518 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/10944f9c-8ce9-44e6-9c36-a0ea19d8cae3-srv-cert\") pod \"catalog-operator-7d9c49f57b-tlnkd\" (UID: \"10944f9c-8ce9-44e6-9c36-a0ea19d8cae3\") " pod="openshift-operator-lifecycle-manager/catalog-operator-7d9c49f57b-tlnkd"
Mar 13 12:37:28.215694 master-0 kubenswrapper[7518]: I0313 12:37:28.215653 7518 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f39d7f76-0075-44c3-9101-eb2607cb176a-serving-cert\") pod \"cluster-version-operator-745944c6b7-mbjxt\" (UID: \"f39d7f76-0075-44c3-9101-eb2607cb176a\") " pod="openshift-cluster-version/cluster-version-operator-745944c6b7-mbjxt"
Mar 13 12:37:28.215937 master-0 kubenswrapper[7518]: E0313 12:37:28.215796 7518 secret.go:189] Couldn't get secret openshift-ingress-operator/metrics-tls: secret "metrics-tls" not found
Mar 13 12:37:28.215937 master-0 kubenswrapper[7518]: E0313 12:37:28.215894 7518 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2f79578c-bbfb-4968-893a-730deb4c01f9-metrics-tls podName:2f79578c-bbfb-4968-893a-730deb4c01f9 nodeName:}" failed. No retries permitted until 2026-03-13 12:37:30.215866896 +0000 UTC m=+4.848936133 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/2f79578c-bbfb-4968-893a-730deb4c01f9-metrics-tls") pod "ingress-operator-677db989d6-ckl2j" (UID: "2f79578c-bbfb-4968-893a-730deb4c01f9") : secret "metrics-tls" not found
Mar 13 12:37:28.215996 master-0 kubenswrapper[7518]: E0313 12:37:28.215948 7518 secret.go:189] Couldn't get secret openshift-cluster-version/cluster-version-operator-serving-cert: secret "cluster-version-operator-serving-cert" not found
Mar 13 12:37:28.216024 master-0 kubenswrapper[7518]: E0313 12:37:28.216008 7518 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f39d7f76-0075-44c3-9101-eb2607cb176a-serving-cert podName:f39d7f76-0075-44c3-9101-eb2607cb176a nodeName:}" failed. No retries permitted until 2026-03-13 12:37:30.215991698 +0000 UTC m=+4.849060885 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/f39d7f76-0075-44c3-9101-eb2607cb176a-serving-cert") pod "cluster-version-operator-745944c6b7-mbjxt" (UID: "f39d7f76-0075-44c3-9101-eb2607cb176a") : secret "cluster-version-operator-serving-cert" not found
Mar 13 12:37:28.216161 master-0 kubenswrapper[7518]: E0313 12:37:28.216101 7518 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/catalog-operator-serving-cert: secret "catalog-operator-serving-cert" not found
Mar 13 12:37:28.216239 master-0 kubenswrapper[7518]: E0313 12:37:28.216223 7518 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/10944f9c-8ce9-44e6-9c36-a0ea19d8cae3-srv-cert podName:10944f9c-8ce9-44e6-9c36-a0ea19d8cae3 nodeName:}" failed. No retries permitted until 2026-03-13 12:37:30.216198851 +0000 UTC m=+4.849268088 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/10944f9c-8ce9-44e6-9c36-a0ea19d8cae3-srv-cert") pod "catalog-operator-7d9c49f57b-tlnkd" (UID: "10944f9c-8ce9-44e6-9c36-a0ea19d8cae3") : secret "catalog-operator-serving-cert" not found
Mar 13 12:37:28.316742 master-0 kubenswrapper[7518]: I0313 12:37:28.316687 7518 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-tuning-operator-tls\" (UniqueName: \"kubernetes.io/secret/3020d236-03e0-4916-97dd-f1085632ca43-node-tuning-operator-tls\") pod \"cluster-node-tuning-operator-66c7586884-cz8pc\" (UID: \"3020d236-03e0-4916-97dd-f1085632ca43\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-66c7586884-cz8pc"
Mar 13 12:37:28.317009 master-0 kubenswrapper[7518]: I0313 12:37:28.316956 7518 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/4c0b18db-06ad-4d58-a353-f6fd96309dea-webhook-certs\") pod \"multus-admission-controller-8d675b596-96gds\" (UID: \"4c0b18db-06ad-4d58-a353-f6fd96309dea\") " pod="openshift-multus/multus-admission-controller-8d675b596-96gds"
Mar 13 12:37:28.317061 master-0 kubenswrapper[7518]: I0313 12:37:28.317043 7518 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/3020d236-03e0-4916-97dd-f1085632ca43-apiservice-cert\") pod \"cluster-node-tuning-operator-66c7586884-cz8pc\" (UID: \"3020d236-03e0-4916-97dd-f1085632ca43\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-66c7586884-cz8pc"
Mar 13 12:37:28.317208 master-0 kubenswrapper[7518]: I0313 12:37:28.317187 7518 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bcf05594-4c10-4b54-a47c-d55e323f1f87-image-registry-operator-tls\") pod \"cluster-image-registry-operator-86d6d77c7c-q287n\" (UID: \"bcf05594-4c10-4b54-a47c-d55e323f1f87\") " pod="openshift-image-registry/cluster-image-registry-operator-86d6d77c7c-q287n"
Mar 13 12:37:28.317260 master-0 kubenswrapper[7518]: E0313 12:37:28.317201 7518 secret.go:189] Couldn't get secret openshift-multus/multus-admission-controller-secret: secret "multus-admission-controller-secret" not found
Mar 13 12:37:28.317301 master-0 kubenswrapper[7518]: E0313 12:37:28.317273 7518 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/node-tuning-operator-tls: secret "node-tuning-operator-tls" not found
Mar 13 12:37:28.317367 master-0 kubenswrapper[7518]: E0313 12:37:28.317344 7518 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/performance-addon-operator-webhook-cert: secret "performance-addon-operator-webhook-cert" not found
Mar 13 12:37:28.317521 master-0 kubenswrapper[7518]: I0313 12:37:28.317217 7518 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/13f32761-b386-4f93-b3c0-b16ea53d338a-metrics-tls\") pod \"dns-operator-589895fbb7-mmwk7\" (UID: \"13f32761-b386-4f93-b3c0-b16ea53d338a\") " pod="openshift-dns-operator/dns-operator-589895fbb7-mmwk7"
Mar 13 12:37:28.317521 master-0 kubenswrapper[7518]: E0313 12:37:28.317309 7518 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4c0b18db-06ad-4d58-a353-f6fd96309dea-webhook-certs podName:4c0b18db-06ad-4d58-a353-f6fd96309dea nodeName:}" failed. No retries permitted until 2026-03-13 12:37:30.317281047 +0000 UTC m=+4.950350304 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/4c0b18db-06ad-4d58-a353-f6fd96309dea-webhook-certs") pod "multus-admission-controller-8d675b596-96gds" (UID: "4c0b18db-06ad-4d58-a353-f6fd96309dea") : secret "multus-admission-controller-secret" not found
Mar 13 12:37:28.317636 master-0 kubenswrapper[7518]: E0313 12:37:28.317315 7518 secret.go:189] Couldn't get secret openshift-image-registry/image-registry-operator-tls: secret "image-registry-operator-tls" not found
Mar 13 12:37:28.317636 master-0 kubenswrapper[7518]: E0313 12:37:28.317452 7518 secret.go:189] Couldn't get secret openshift-dns-operator/metrics-tls: secret "metrics-tls" not found
Mar 13 12:37:28.317721 master-0 kubenswrapper[7518]: E0313 12:37:28.317643 7518 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3020d236-03e0-4916-97dd-f1085632ca43-node-tuning-operator-tls podName:3020d236-03e0-4916-97dd-f1085632ca43 nodeName:}" failed. No retries permitted until 2026-03-13 12:37:30.31753068 +0000 UTC m=+4.950599867 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "node-tuning-operator-tls" (UniqueName: "kubernetes.io/secret/3020d236-03e0-4916-97dd-f1085632ca43-node-tuning-operator-tls") pod "cluster-node-tuning-operator-66c7586884-cz8pc" (UID: "3020d236-03e0-4916-97dd-f1085632ca43") : secret "node-tuning-operator-tls" not found
Mar 13 12:37:28.317721 master-0 kubenswrapper[7518]: E0313 12:37:28.317663 7518 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3020d236-03e0-4916-97dd-f1085632ca43-apiservice-cert podName:3020d236-03e0-4916-97dd-f1085632ca43 nodeName:}" failed. No retries permitted until 2026-03-13 12:37:30.317654952 +0000 UTC m=+4.950724139 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "apiservice-cert" (UniqueName: "kubernetes.io/secret/3020d236-03e0-4916-97dd-f1085632ca43-apiservice-cert") pod "cluster-node-tuning-operator-66c7586884-cz8pc" (UID: "3020d236-03e0-4916-97dd-f1085632ca43") : secret "performance-addon-operator-webhook-cert" not found
Mar 13 12:37:28.317721 master-0 kubenswrapper[7518]: I0313 12:37:28.317682 7518 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/29b6aa89-0416-4595-9deb-10b290521d86-metrics-certs\") pod \"network-metrics-daemon-r9lmb\" (UID: \"29b6aa89-0416-4595-9deb-10b290521d86\") " pod="openshift-multus/network-metrics-daemon-r9lmb"
Mar 13 12:37:28.317887 master-0 kubenswrapper[7518]: I0313 12:37:28.317743 7518 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/604456a0-4997-43bc-87ef-283a002111fe-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-674cbfbd9d-zwtdz\" (UID: \"604456a0-4997-43bc-87ef-283a002111fe\") " pod="openshift-monitoring/cluster-monitoring-operator-674cbfbd9d-zwtdz"
Mar 13 12:37:28.317887 master-0 kubenswrapper[7518]: E0313 12:37:28.317767 7518 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: secret "metrics-daemon-secret" not found
Mar 13 12:37:28.317887 master-0 kubenswrapper[7518]: E0313 12:37:28.317845 7518 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/bcf05594-4c10-4b54-a47c-d55e323f1f87-image-registry-operator-tls podName:bcf05594-4c10-4b54-a47c-d55e323f1f87 nodeName:}" failed. No retries permitted until 2026-03-13 12:37:30.317832794 +0000 UTC m=+4.950902011 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "image-registry-operator-tls" (UniqueName: "kubernetes.io/secret/bcf05594-4c10-4b54-a47c-d55e323f1f87-image-registry-operator-tls") pod "cluster-image-registry-operator-86d6d77c7c-q287n" (UID: "bcf05594-4c10-4b54-a47c-d55e323f1f87") : secret "image-registry-operator-tls" not found
Mar 13 12:37:28.317999 master-0 kubenswrapper[7518]: E0313 12:37:28.317910 7518 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/13f32761-b386-4f93-b3c0-b16ea53d338a-metrics-tls podName:13f32761-b386-4f93-b3c0-b16ea53d338a nodeName:}" failed. No retries permitted until 2026-03-13 12:37:30.317853244 +0000 UTC m=+4.950922511 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/13f32761-b386-4f93-b3c0-b16ea53d338a-metrics-tls") pod "dns-operator-589895fbb7-mmwk7" (UID: "13f32761-b386-4f93-b3c0-b16ea53d338a") : secret "metrics-tls" not found
Mar 13 12:37:28.317999 master-0 kubenswrapper[7518]: E0313 12:37:28.317916 7518 secret.go:189] Couldn't get secret openshift-monitoring/cluster-monitoring-operator-tls: secret "cluster-monitoring-operator-tls" not found
Mar 13 12:37:28.317999 master-0 kubenswrapper[7518]: E0313 12:37:28.317927 7518 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/29b6aa89-0416-4595-9deb-10b290521d86-metrics-certs podName:29b6aa89-0416-4595-9deb-10b290521d86 nodeName:}" failed. No retries permitted until 2026-03-13 12:37:30.317921015 +0000 UTC m=+4.950990202 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/29b6aa89-0416-4595-9deb-10b290521d86-metrics-certs") pod "network-metrics-daemon-r9lmb" (UID: "29b6aa89-0416-4595-9deb-10b290521d86") : secret "metrics-daemon-secret" not found
Mar 13 12:37:28.317999 master-0 kubenswrapper[7518]: E0313 12:37:28.317961 7518 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/604456a0-4997-43bc-87ef-283a002111fe-cluster-monitoring-operator-tls podName:604456a0-4997-43bc-87ef-283a002111fe nodeName:}" failed. No retries permitted until 2026-03-13 12:37:30.317947396 +0000 UTC m=+4.951016583 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "cluster-monitoring-operator-tls" (UniqueName: "kubernetes.io/secret/604456a0-4997-43bc-87ef-283a002111fe-cluster-monitoring-operator-tls") pod "cluster-monitoring-operator-674cbfbd9d-zwtdz" (UID: "604456a0-4997-43bc-87ef-283a002111fe") : secret "cluster-monitoring-operator-tls" not found
Mar 13 12:37:28.317999 master-0 kubenswrapper[7518]: E0313 12:37:28.317972 7518 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/olm-operator-serving-cert: secret "olm-operator-serving-cert" not found
Mar 13 12:37:28.317999 master-0 kubenswrapper[7518]: E0313 12:37:28.318005 7518 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d5a19b80-d488-46d3-a4a8-0b80361077e1-srv-cert podName:d5a19b80-d488-46d3-a4a8-0b80361077e1 nodeName:}" failed. No retries permitted until 2026-03-13 12:37:30.317992416 +0000 UTC m=+4.951061613 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/d5a19b80-d488-46d3-a4a8-0b80361077e1-srv-cert") pod "olm-operator-d64cfc9db-rfqb9" (UID: "d5a19b80-d488-46d3-a4a8-0b80361077e1") : secret "olm-operator-serving-cert" not found
Mar 13 12:37:28.318749 master-0 kubenswrapper[7518]: I0313 12:37:28.318007 7518 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/d5a19b80-d488-46d3-a4a8-0b80361077e1-srv-cert\") pod \"olm-operator-d64cfc9db-rfqb9\" (UID: \"d5a19b80-d488-46d3-a4a8-0b80361077e1\") " pod="openshift-operator-lifecycle-manager/olm-operator-d64cfc9db-rfqb9"
Mar 13 12:37:28.318749 master-0 kubenswrapper[7518]: I0313 12:37:28.318473 7518 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3d653e1a-5903-4a02-9357-df145f028c0d-package-server-manager-serving-cert\") pod \"package-server-manager-854648ff6d-669qk\" (UID: \"3d653e1a-5903-4a02-9357-df145f028c0d\") " pod="openshift-operator-lifecycle-manager/package-server-manager-854648ff6d-669qk"
Mar 13 12:37:28.318749 master-0 kubenswrapper[7518]: I0313 12:37:28.318522 7518 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/d3d998ee-b26f-4e30-83bc-f94f8c68060a-marketplace-operator-metrics\") pod \"marketplace-operator-64bf9778cb-7qhr4\" (UID: \"d3d998ee-b26f-4e30-83bc-f94f8c68060a\") " pod="openshift-marketplace/marketplace-operator-64bf9778cb-7qhr4"
Mar 13 12:37:28.318749 master-0 kubenswrapper[7518]: E0313 12:37:28.318591 7518 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/package-server-manager-serving-cert: secret "package-server-manager-serving-cert" not found
Mar 13 12:37:28.318749 master-0 kubenswrapper[7518]: E0313 12:37:28.318598 7518 secret.go:189] Couldn't get secret openshift-marketplace/marketplace-operator-metrics: secret "marketplace-operator-metrics" not found
Mar 13 12:37:28.318749 master-0 kubenswrapper[7518]: E0313 12:37:28.318634 7518 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3d653e1a-5903-4a02-9357-df145f028c0d-package-server-manager-serving-cert podName:3d653e1a-5903-4a02-9357-df145f028c0d nodeName:}" failed. No retries permitted until 2026-03-13 12:37:30.318619765 +0000 UTC m=+4.951688952 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "package-server-manager-serving-cert" (UniqueName: "kubernetes.io/secret/3d653e1a-5903-4a02-9357-df145f028c0d-package-server-manager-serving-cert") pod "package-server-manager-854648ff6d-669qk" (UID: "3d653e1a-5903-4a02-9357-df145f028c0d") : secret "package-server-manager-serving-cert" not found
Mar 13 12:37:28.318749 master-0 kubenswrapper[7518]: E0313 12:37:28.318649 7518 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d3d998ee-b26f-4e30-83bc-f94f8c68060a-marketplace-operator-metrics podName:d3d998ee-b26f-4e30-83bc-f94f8c68060a nodeName:}" failed. No retries permitted until 2026-03-13 12:37:30.318642666 +0000 UTC m=+4.951711853 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "marketplace-operator-metrics" (UniqueName: "kubernetes.io/secret/d3d998ee-b26f-4e30-83bc-f94f8c68060a-marketplace-operator-metrics") pod "marketplace-operator-64bf9778cb-7qhr4" (UID: "d3d998ee-b26f-4e30-83bc-f94f8c68060a") : secret "marketplace-operator-metrics" not found
Mar 13 12:37:28.666156 master-0 kubenswrapper[7518]: I0313 12:37:28.666100 7518 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Mar 13 12:37:28.666645 master-0 kubenswrapper[7518]: I0313 12:37:28.666187 7518 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Mar 13 12:37:28.719742 master-0 kubenswrapper[7518]: E0313 12:37:28.719660 7518 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d11f13e867f4df046ca6789bb7273da5d0c08895b3dea00949c8a5458f9e22f9"
Mar 13 12:37:28.720014 master-0 kubenswrapper[7518]: E0313 12:37:28.719920 7518 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-storage-version-migrator-operator,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d11f13e867f4df046ca6789bb7273da5d0c08895b3dea00949c8a5458f9e22f9,Command:[cluster-kube-storage-version-migrator-operator start],Args:[--config=/var/run/configmaps/config/config.yaml],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:0,ContainerPort:8443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cf9670d0f269f8d49fd9ef4981999be195f6624a4146aa93d9201eb8acc81053,ValueFrom:nil,},EnvVar{Name:OPERATOR_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d11f13e867f4df046ca6789bb7273da5d0c08895b3dea00949c8a5458f9e22f9,ValueFrom:nil,},EnvVar{Name:OPERATOR_IMAGE_VERSION,Value:4.18.34,ValueFrom:nil,},EnvVar{Name:OPERAND_IMAGE_VERSION,Value:4.18.34,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:false,MountPath:/var/run/configmaps/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:serving-cert,ReadOnly:false,MountPath:/var/run/secrets/serving-cert,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-9q2qc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1001,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod kube-storage-version-migrator-operator-7f65c457f5-hrm82_openshift-kube-storage-version-migrator-operator(f5775266-5e58-44ed-81cb-dfe3faf38add): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError"
Mar 13 12:37:28.822211 master-0 kubenswrapper[7518]: E0313 12:37:28.723266 7518 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-storage-version-migrator-operator\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-7f65c457f5-hrm82" podUID="f5775266-5e58-44ed-81cb-dfe3faf38add"
Mar 13 12:37:29.404109 master-0 kubenswrapper[7518]: I0313 12:37:29.404014 7518 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"
Mar 13 12:37:29.410518 master-0 kubenswrapper[7518]: I0313 12:37:29.410214 7518 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"
Mar 13 12:37:29.472505 master-0 kubenswrapper[7518]: E0313 12:37:29.472439 7518 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6088910bdc1583b275fab261e3234c0b63b4cc16d01bcea697b6a7f6db13bdf3"
Mar 13 12:37:29.472828 master-0 kubenswrapper[7518]: E0313 12:37:29.472758 7518 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:etcd-operator,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6088910bdc1583b275fab261e3234c0b63b4cc16d01bcea697b6a7f6db13bdf3,Command:[cluster-etcd-operator operator],Args:[--config=/var/run/configmaps/config/config.yaml --terminate-on-files=/var/run/secrets/serving-cert/tls.crt --terminate-on-files=/var/run/secrets/serving-cert/tls.key --terminate-on-files=/var/run/secrets/etcd-client/tls.crt --terminate-on-files=/var/run/secrets/etcd-client/tls.key --terminate-on-files=/var/run/configmaps/etcd-ca/ca-bundle.crt --terminate-on-files=/var/run/configmaps/etcd-service-ca/service-ca.crt],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:0,ContainerPort:8443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cc20748723f55f960cfb6328d1591880bbd1b3452155633996d4f41fc7c5f46b,ValueFrom:nil,},EnvVar{Name:OPERATOR_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6088910bdc1583b275fab261e3234c0b63b4cc16d01bcea697b6a7f6db13bdf3,ValueFrom:nil,},EnvVar{Name:OPERATOR_IMAGE_VERSION,Value:4.18.34,ValueFrom:nil,},EnvVar{Name:OPERAND_IMAGE_VERSION,Value:4.18.34,ValueFrom:nil,},EnvVar{Name:OPENSHIFT_PROFILE,Value:web,ValueFrom:nil,},EnvVar{Name:POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:false,MountPath:/var/run/configmaps/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:serving-cert,ReadOnly:false,MountPath:/var/run/secrets/serving-cert,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:etcd-ca,ReadOnly:false,MountPath:/var/run/configmaps/etcd-ca,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:etcd-service-ca,ReadOnly:false,MountPath:/var/run/configmaps/etcd-service-ca,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:etcd-client,ReadOnly:false,MountPath:/var/run/secrets/etcd-client,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-clrz7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:healthz,Port:{0 8443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:30,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod etcd-operator-5884b9cd56-hjzms_openshift-etcd-operator(15b592d6-3c48-45d4-9172-d28632ae8995): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError"
Mar 13 12:37:29.474087 master-0 kubenswrapper[7518]: E0313 12:37:29.474032 7518 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"etcd-operator\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openshift-etcd-operator/etcd-operator-5884b9cd56-hjzms" podUID="15b592d6-3c48-45d4-9172-d28632ae8995"
Mar 13 12:37:29.611101 master-0 kubenswrapper[7518]: I0313 12:37:29.611057 7518 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"
Mar 13 12:37:29.615517 master-0 kubenswrapper[7518]: I0313 12:37:29.615474 7518 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"
Mar 13 12:37:29.973110 master-0 kubenswrapper[7518]: E0313 12:37:29.973027 7518 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d601c8437b4d8bbe2da0f3b08f1bd8693f5a4ef6d835377ec029c79d9dca5dab"
Mar 13 12:37:29.973660 master-0 kubenswrapper[7518]: E0313 12:37:29.973274 7518 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:openshift-apiserver-operator,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d601c8437b4d8bbe2da0f3b08f1bd8693f5a4ef6d835377ec029c79d9dca5dab,Command:[cluster-openshift-apiserver-operator operator],Args:[--config=/var/run/configmaps/config/config.yaml],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:0,ContainerPort:8443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1ec9d3dbcc6f9817c0f6d09f64c0d98c91b03afbb1fcb3c1e1718aca900754b,ValueFrom:nil,},EnvVar{Name:OPERATOR_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d601c8437b4d8bbe2da0f3b08f1bd8693f5a4ef6d835377ec029c79d9dca5dab,ValueFrom:nil,},EnvVar{Name:OPERATOR_IMAGE_VERSION,Value:4.18.34,ValueFrom:nil,},EnvVar{Name:OPERAND_IMAGE_VERSION,Value:4.18.34,ValueFrom:nil,},EnvVar{Name:KUBE_APISERVER_OPERATOR_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5500329ab50804678fb8a90b96bf2a469bca16b620fb6dd2f5f5a17106e94898,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:false,MountPath:/var/run/configmaps/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:serving-cert,ReadOnly:false,MountPath:/var/run/secrets/serving-cert,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-vg8tz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},Terminat
ionMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod openshift-apiserver-operator-799b6db4d7-xchrj_openshift-apiserver-operator(089cfabc-9d3d-4260-bb16-8b5eaf73b3fa): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Mar 13 12:37:29.974529 master-0 kubenswrapper[7518]: E0313 12:37:29.974485 7518 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"openshift-apiserver-operator\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openshift-apiserver-operator/openshift-apiserver-operator-799b6db4d7-xchrj" podUID="089cfabc-9d3d-4260-bb16-8b5eaf73b3fa" Mar 13 12:37:30.244880 master-0 kubenswrapper[7518]: I0313 12:37:30.244778 7518 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/2f79578c-bbfb-4968-893a-730deb4c01f9-metrics-tls\") pod \"ingress-operator-677db989d6-ckl2j\" (UID: \"2f79578c-bbfb-4968-893a-730deb4c01f9\") " pod="openshift-ingress-operator/ingress-operator-677db989d6-ckl2j" Mar 13 12:37:30.244880 master-0 kubenswrapper[7518]: I0313 12:37:30.244824 7518 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/10944f9c-8ce9-44e6-9c36-a0ea19d8cae3-srv-cert\") pod \"catalog-operator-7d9c49f57b-tlnkd\" (UID: \"10944f9c-8ce9-44e6-9c36-a0ea19d8cae3\") " pod="openshift-operator-lifecycle-manager/catalog-operator-7d9c49f57b-tlnkd" Mar 13 12:37:30.244880 master-0 kubenswrapper[7518]: I0313 12:37:30.244848 7518 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f39d7f76-0075-44c3-9101-eb2607cb176a-serving-cert\") pod \"cluster-version-operator-745944c6b7-mbjxt\" (UID: 
\"f39d7f76-0075-44c3-9101-eb2607cb176a\") " pod="openshift-cluster-version/cluster-version-operator-745944c6b7-mbjxt" Mar 13 12:37:30.245092 master-0 kubenswrapper[7518]: E0313 12:37:30.245028 7518 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/catalog-operator-serving-cert: secret "catalog-operator-serving-cert" not found Mar 13 12:37:30.245127 master-0 kubenswrapper[7518]: E0313 12:37:30.245111 7518 secret.go:189] Couldn't get secret openshift-cluster-version/cluster-version-operator-serving-cert: secret "cluster-version-operator-serving-cert" not found Mar 13 12:37:30.245177 master-0 kubenswrapper[7518]: E0313 12:37:30.245118 7518 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/10944f9c-8ce9-44e6-9c36-a0ea19d8cae3-srv-cert podName:10944f9c-8ce9-44e6-9c36-a0ea19d8cae3 nodeName:}" failed. No retries permitted until 2026-03-13 12:37:34.245093968 +0000 UTC m=+8.878163155 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/10944f9c-8ce9-44e6-9c36-a0ea19d8cae3-srv-cert") pod "catalog-operator-7d9c49f57b-tlnkd" (UID: "10944f9c-8ce9-44e6-9c36-a0ea19d8cae3") : secret "catalog-operator-serving-cert" not found Mar 13 12:37:30.245177 master-0 kubenswrapper[7518]: E0313 12:37:30.245175 7518 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f39d7f76-0075-44c3-9101-eb2607cb176a-serving-cert podName:f39d7f76-0075-44c3-9101-eb2607cb176a nodeName:}" failed. No retries permitted until 2026-03-13 12:37:34.245159669 +0000 UTC m=+8.878228936 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/f39d7f76-0075-44c3-9101-eb2607cb176a-serving-cert") pod "cluster-version-operator-745944c6b7-mbjxt" (UID: "f39d7f76-0075-44c3-9101-eb2607cb176a") : secret "cluster-version-operator-serving-cert" not found Mar 13 12:37:30.246385 master-0 kubenswrapper[7518]: E0313 12:37:30.245338 7518 secret.go:189] Couldn't get secret openshift-ingress-operator/metrics-tls: secret "metrics-tls" not found Mar 13 12:37:30.246385 master-0 kubenswrapper[7518]: E0313 12:37:30.245447 7518 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2f79578c-bbfb-4968-893a-730deb4c01f9-metrics-tls podName:2f79578c-bbfb-4968-893a-730deb4c01f9 nodeName:}" failed. No retries permitted until 2026-03-13 12:37:34.245424853 +0000 UTC m=+8.878494040 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/2f79578c-bbfb-4968-893a-730deb4c01f9-metrics-tls") pod "ingress-operator-677db989d6-ckl2j" (UID: "2f79578c-bbfb-4968-893a-730deb4c01f9") : secret "metrics-tls" not found Mar 13 12:37:30.346000 master-0 kubenswrapper[7518]: I0313 12:37:30.345531 7518 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/d5a19b80-d488-46d3-a4a8-0b80361077e1-srv-cert\") pod \"olm-operator-d64cfc9db-rfqb9\" (UID: \"d5a19b80-d488-46d3-a4a8-0b80361077e1\") " pod="openshift-operator-lifecycle-manager/olm-operator-d64cfc9db-rfqb9" Mar 13 12:37:30.346000 master-0 kubenswrapper[7518]: E0313 12:37:30.345707 7518 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/olm-operator-serving-cert: secret "olm-operator-serving-cert" not found Mar 13 12:37:30.346000 master-0 kubenswrapper[7518]: E0313 12:37:30.345798 7518 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d5a19b80-d488-46d3-a4a8-0b80361077e1-srv-cert 
podName:d5a19b80-d488-46d3-a4a8-0b80361077e1 nodeName:}" failed. No retries permitted until 2026-03-13 12:37:34.345776567 +0000 UTC m=+8.978845804 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/d5a19b80-d488-46d3-a4a8-0b80361077e1-srv-cert") pod "olm-operator-d64cfc9db-rfqb9" (UID: "d5a19b80-d488-46d3-a4a8-0b80361077e1") : secret "olm-operator-serving-cert" not found Mar 13 12:37:30.346417 master-0 kubenswrapper[7518]: I0313 12:37:30.346300 7518 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3d653e1a-5903-4a02-9357-df145f028c0d-package-server-manager-serving-cert\") pod \"package-server-manager-854648ff6d-669qk\" (UID: \"3d653e1a-5903-4a02-9357-df145f028c0d\") " pod="openshift-operator-lifecycle-manager/package-server-manager-854648ff6d-669qk" Mar 13 12:37:30.346417 master-0 kubenswrapper[7518]: I0313 12:37:30.346343 7518 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/d3d998ee-b26f-4e30-83bc-f94f8c68060a-marketplace-operator-metrics\") pod \"marketplace-operator-64bf9778cb-7qhr4\" (UID: \"d3d998ee-b26f-4e30-83bc-f94f8c68060a\") " pod="openshift-marketplace/marketplace-operator-64bf9778cb-7qhr4" Mar 13 12:37:30.346521 master-0 kubenswrapper[7518]: I0313 12:37:30.346415 7518 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-tuning-operator-tls\" (UniqueName: \"kubernetes.io/secret/3020d236-03e0-4916-97dd-f1085632ca43-node-tuning-operator-tls\") pod \"cluster-node-tuning-operator-66c7586884-cz8pc\" (UID: \"3020d236-03e0-4916-97dd-f1085632ca43\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-66c7586884-cz8pc" Mar 13 12:37:30.346521 master-0 kubenswrapper[7518]: I0313 12:37:30.346459 7518 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/4c0b18db-06ad-4d58-a353-f6fd96309dea-webhook-certs\") pod \"multus-admission-controller-8d675b596-96gds\" (UID: \"4c0b18db-06ad-4d58-a353-f6fd96309dea\") " pod="openshift-multus/multus-admission-controller-8d675b596-96gds" Mar 13 12:37:30.346521 master-0 kubenswrapper[7518]: I0313 12:37:30.346491 7518 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/3020d236-03e0-4916-97dd-f1085632ca43-apiservice-cert\") pod \"cluster-node-tuning-operator-66c7586884-cz8pc\" (UID: \"3020d236-03e0-4916-97dd-f1085632ca43\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-66c7586884-cz8pc" Mar 13 12:37:30.346521 master-0 kubenswrapper[7518]: I0313 12:37:30.346516 7518 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bcf05594-4c10-4b54-a47c-d55e323f1f87-image-registry-operator-tls\") pod \"cluster-image-registry-operator-86d6d77c7c-q287n\" (UID: \"bcf05594-4c10-4b54-a47c-d55e323f1f87\") " pod="openshift-image-registry/cluster-image-registry-operator-86d6d77c7c-q287n" Mar 13 12:37:30.346656 master-0 kubenswrapper[7518]: I0313 12:37:30.346543 7518 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/13f32761-b386-4f93-b3c0-b16ea53d338a-metrics-tls\") pod \"dns-operator-589895fbb7-mmwk7\" (UID: \"13f32761-b386-4f93-b3c0-b16ea53d338a\") " pod="openshift-dns-operator/dns-operator-589895fbb7-mmwk7" Mar 13 12:37:30.346656 master-0 kubenswrapper[7518]: I0313 12:37:30.346570 7518 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/29b6aa89-0416-4595-9deb-10b290521d86-metrics-certs\") pod \"network-metrics-daemon-r9lmb\" (UID: 
\"29b6aa89-0416-4595-9deb-10b290521d86\") " pod="openshift-multus/network-metrics-daemon-r9lmb" Mar 13 12:37:30.346656 master-0 kubenswrapper[7518]: I0313 12:37:30.346604 7518 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/604456a0-4997-43bc-87ef-283a002111fe-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-674cbfbd9d-zwtdz\" (UID: \"604456a0-4997-43bc-87ef-283a002111fe\") " pod="openshift-monitoring/cluster-monitoring-operator-674cbfbd9d-zwtdz" Mar 13 12:37:30.346739 master-0 kubenswrapper[7518]: E0313 12:37:30.346684 7518 secret.go:189] Couldn't get secret openshift-monitoring/cluster-monitoring-operator-tls: secret "cluster-monitoring-operator-tls" not found Mar 13 12:37:30.346739 master-0 kubenswrapper[7518]: E0313 12:37:30.346717 7518 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/604456a0-4997-43bc-87ef-283a002111fe-cluster-monitoring-operator-tls podName:604456a0-4997-43bc-87ef-283a002111fe nodeName:}" failed. No retries permitted until 2026-03-13 12:37:34.346706761 +0000 UTC m=+8.979775948 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "cluster-monitoring-operator-tls" (UniqueName: "kubernetes.io/secret/604456a0-4997-43bc-87ef-283a002111fe-cluster-monitoring-operator-tls") pod "cluster-monitoring-operator-674cbfbd9d-zwtdz" (UID: "604456a0-4997-43bc-87ef-283a002111fe") : secret "cluster-monitoring-operator-tls" not found Mar 13 12:37:30.346799 master-0 kubenswrapper[7518]: E0313 12:37:30.346765 7518 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/package-server-manager-serving-cert: secret "package-server-manager-serving-cert" not found Mar 13 12:37:30.346799 master-0 kubenswrapper[7518]: E0313 12:37:30.346791 7518 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3d653e1a-5903-4a02-9357-df145f028c0d-package-server-manager-serving-cert podName:3d653e1a-5903-4a02-9357-df145f028c0d nodeName:}" failed. No retries permitted until 2026-03-13 12:37:34.346782792 +0000 UTC m=+8.979851979 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "package-server-manager-serving-cert" (UniqueName: "kubernetes.io/secret/3d653e1a-5903-4a02-9357-df145f028c0d-package-server-manager-serving-cert") pod "package-server-manager-854648ff6d-669qk" (UID: "3d653e1a-5903-4a02-9357-df145f028c0d") : secret "package-server-manager-serving-cert" not found Mar 13 12:37:30.346895 master-0 kubenswrapper[7518]: E0313 12:37:30.346874 7518 secret.go:189] Couldn't get secret openshift-marketplace/marketplace-operator-metrics: secret "marketplace-operator-metrics" not found Mar 13 12:37:30.346931 master-0 kubenswrapper[7518]: E0313 12:37:30.346909 7518 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d3d998ee-b26f-4e30-83bc-f94f8c68060a-marketplace-operator-metrics podName:d3d998ee-b26f-4e30-83bc-f94f8c68060a nodeName:}" failed. No retries permitted until 2026-03-13 12:37:34.346900094 +0000 UTC m=+8.979969351 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "marketplace-operator-metrics" (UniqueName: "kubernetes.io/secret/d3d998ee-b26f-4e30-83bc-f94f8c68060a-marketplace-operator-metrics") pod "marketplace-operator-64bf9778cb-7qhr4" (UID: "d3d998ee-b26f-4e30-83bc-f94f8c68060a") : secret "marketplace-operator-metrics" not found Mar 13 12:37:30.347007 master-0 kubenswrapper[7518]: E0313 12:37:30.346976 7518 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/node-tuning-operator-tls: secret "node-tuning-operator-tls" not found Mar 13 12:37:30.347051 master-0 kubenswrapper[7518]: E0313 12:37:30.347010 7518 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3020d236-03e0-4916-97dd-f1085632ca43-node-tuning-operator-tls podName:3020d236-03e0-4916-97dd-f1085632ca43 nodeName:}" failed. No retries permitted until 2026-03-13 12:37:34.347000355 +0000 UTC m=+8.980069542 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "node-tuning-operator-tls" (UniqueName: "kubernetes.io/secret/3020d236-03e0-4916-97dd-f1085632ca43-node-tuning-operator-tls") pod "cluster-node-tuning-operator-66c7586884-cz8pc" (UID: "3020d236-03e0-4916-97dd-f1085632ca43") : secret "node-tuning-operator-tls" not found Mar 13 12:37:30.347130 master-0 kubenswrapper[7518]: E0313 12:37:30.347063 7518 secret.go:189] Couldn't get secret openshift-multus/multus-admission-controller-secret: secret "multus-admission-controller-secret" not found Mar 13 12:37:30.347130 master-0 kubenswrapper[7518]: E0313 12:37:30.347088 7518 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4c0b18db-06ad-4d58-a353-f6fd96309dea-webhook-certs podName:4c0b18db-06ad-4d58-a353-f6fd96309dea nodeName:}" failed. No retries permitted until 2026-03-13 12:37:34.347080126 +0000 UTC m=+8.980149413 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/4c0b18db-06ad-4d58-a353-f6fd96309dea-webhook-certs") pod "multus-admission-controller-8d675b596-96gds" (UID: "4c0b18db-06ad-4d58-a353-f6fd96309dea") : secret "multus-admission-controller-secret" not found Mar 13 12:37:30.347130 master-0 kubenswrapper[7518]: E0313 12:37:30.347128 7518 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/performance-addon-operator-webhook-cert: secret "performance-addon-operator-webhook-cert" not found Mar 13 12:37:30.347375 master-0 kubenswrapper[7518]: E0313 12:37:30.347171 7518 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3020d236-03e0-4916-97dd-f1085632ca43-apiservice-cert podName:3020d236-03e0-4916-97dd-f1085632ca43 nodeName:}" failed. No retries permitted until 2026-03-13 12:37:34.347163127 +0000 UTC m=+8.980232314 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "apiservice-cert" (UniqueName: "kubernetes.io/secret/3020d236-03e0-4916-97dd-f1085632ca43-apiservice-cert") pod "cluster-node-tuning-operator-66c7586884-cz8pc" (UID: "3020d236-03e0-4916-97dd-f1085632ca43") : secret "performance-addon-operator-webhook-cert" not found Mar 13 12:37:30.347375 master-0 kubenswrapper[7518]: E0313 12:37:30.347216 7518 secret.go:189] Couldn't get secret openshift-image-registry/image-registry-operator-tls: secret "image-registry-operator-tls" not found Mar 13 12:37:30.347375 master-0 kubenswrapper[7518]: E0313 12:37:30.347254 7518 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/bcf05594-4c10-4b54-a47c-d55e323f1f87-image-registry-operator-tls podName:bcf05594-4c10-4b54-a47c-d55e323f1f87 nodeName:}" failed. No retries permitted until 2026-03-13 12:37:34.347246038 +0000 UTC m=+8.980315225 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "image-registry-operator-tls" (UniqueName: "kubernetes.io/secret/bcf05594-4c10-4b54-a47c-d55e323f1f87-image-registry-operator-tls") pod "cluster-image-registry-operator-86d6d77c7c-q287n" (UID: "bcf05594-4c10-4b54-a47c-d55e323f1f87") : secret "image-registry-operator-tls" not found Mar 13 12:37:30.347375 master-0 kubenswrapper[7518]: E0313 12:37:30.347316 7518 secret.go:189] Couldn't get secret openshift-dns-operator/metrics-tls: secret "metrics-tls" not found Mar 13 12:37:30.347375 master-0 kubenswrapper[7518]: E0313 12:37:30.347345 7518 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/13f32761-b386-4f93-b3c0-b16ea53d338a-metrics-tls podName:13f32761-b386-4f93-b3c0-b16ea53d338a nodeName:}" failed. No retries permitted until 2026-03-13 12:37:34.347337249 +0000 UTC m=+8.980406436 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/13f32761-b386-4f93-b3c0-b16ea53d338a-metrics-tls") pod "dns-operator-589895fbb7-mmwk7" (UID: "13f32761-b386-4f93-b3c0-b16ea53d338a") : secret "metrics-tls" not found Mar 13 12:37:30.347523 master-0 kubenswrapper[7518]: E0313 12:37:30.347391 7518 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: secret "metrics-daemon-secret" not found Mar 13 12:37:30.347523 master-0 kubenswrapper[7518]: E0313 12:37:30.347418 7518 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/29b6aa89-0416-4595-9deb-10b290521d86-metrics-certs podName:29b6aa89-0416-4595-9deb-10b290521d86 nodeName:}" failed. No retries permitted until 2026-03-13 12:37:34.347410401 +0000 UTC m=+8.980479588 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/29b6aa89-0416-4595-9deb-10b290521d86-metrics-certs") pod "network-metrics-daemon-r9lmb" (UID: "29b6aa89-0416-4595-9deb-10b290521d86") : secret "metrics-daemon-secret" not found Mar 13 12:37:30.522670 master-0 kubenswrapper[7518]: E0313 12:37:30.522615 7518 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7220d16ea511c0f0410cf45db45aaafcc64847c9cb5732ad1eff39ceb482cdba" Mar 13 12:37:30.522847 master-0 kubenswrapper[7518]: E0313 12:37:30.522799 7518 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:service-ca-operator,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7220d16ea511c0f0410cf45db45aaafcc64847c9cb5732ad1eff39ceb482cdba,Command:[service-ca-operator operator],Args:[--config=/var/run/configmaps/config/operator-config.yaml -v=2],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONTROLLER_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7220d16ea511c0f0410cf45db45aaafcc64847c9cb5732ad1eff39ceb482cdba,ValueFrom:nil,},EnvVar{Name:OPERATOR_IMAGE_VERSION,Value:4.18.34,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{83886080 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:false,MountPath:/var/run/configmaps/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:serving-cert,ReadOnly:false,MountPath:/var/run/secrets/serving-cert,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-5w5r2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod service-ca-operator-69b6fc6b88-vmscz_openshift-service-ca-operator(034aaf8e-95df-4171-bae4-e7abe58d15f7): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Mar 13 12:37:30.524779 master-0 kubenswrapper[7518]: E0313 12:37:30.524175 7518 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"service-ca-operator\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openshift-service-ca-operator/service-ca-operator-69b6fc6b88-vmscz" podUID="034aaf8e-95df-4171-bae4-e7abe58d15f7" Mar 13 12:37:31.400187 master-0 kubenswrapper[7518]: E0313 12:37:31.400120 7518 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = reading blob 
sha256:a08ee80b9f3d3c47fdf94f5d2693a567b117b5834147ff3504c40915260d8ffa: Get \"https://quay.io/v2/openshift-release-dev/ocp-v4.0-art-dev/blobs/sha256:a08ee80b9f3d3c47fdf94f5d2693a567b117b5834147ff3504c40915260d8ffa\": context canceled" image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8ceca1efee55b9fd5089428476bbc401fe73db7c0b0f5e16d4ad28ed0f0f9d43" Mar 13 12:37:31.400850 master-0 kubenswrapper[7518]: E0313 12:37:31.400815 7518 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:openshift-api,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8ceca1efee55b9fd5089428476bbc401fe73db7c0b0f5e16d4ad28ed0f0f9d43,Command:[write-available-featuresets --asset-output-dir=/available-featuregates --payload-version=$(OPERATOR_IMAGE_VERSION)],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:OPERATOR_IMAGE_VERSION,Value:4.18.34,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:available-featuregates,ReadOnly:false,MountPath:/available-featuregates,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-lwkdj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod openshift-config-operator-64488f9d78-t8fb4_openshift-config-operator(f0803181-4e37-43fa-8ddc-9c76d3f61817): ErrImagePull: rpc error: code = Canceled desc = reading blob sha256:a08ee80b9f3d3c47fdf94f5d2693a567b117b5834147ff3504c40915260d8ffa: Get \"https://quay.io/v2/openshift-release-dev/ocp-v4.0-art-dev/blobs/sha256:a08ee80b9f3d3c47fdf94f5d2693a567b117b5834147ff3504c40915260d8ffa\": context canceled" logger="UnhandledError" Mar 13 12:37:31.403167 master-0 kubenswrapper[7518]: E0313 12:37:31.403031 7518 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"openshift-api\" with ErrImagePull: \"rpc error: code = Canceled desc = reading blob sha256:a08ee80b9f3d3c47fdf94f5d2693a567b117b5834147ff3504c40915260d8ffa: Get \\\"https://quay.io/v2/openshift-release-dev/ocp-v4.0-art-dev/blobs/sha256:a08ee80b9f3d3c47fdf94f5d2693a567b117b5834147ff3504c40915260d8ffa\\\": context canceled\"" 
pod="openshift-config-operator/openshift-config-operator-64488f9d78-t8fb4" podUID="f0803181-4e37-43fa-8ddc-9c76d3f61817" Mar 13 12:37:31.422243 master-0 kubenswrapper[7518]: E0313 12:37:31.422194 7518 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1575be013a898f153cbf012aeaf28ce720022f934dc05bdffbe479e30999d460" Mar 13 12:37:31.422651 master-0 kubenswrapper[7518]: E0313 12:37:31.422565 7518 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:iptables-alerter,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1575be013a898f153cbf012aeaf28ce720022f934dc05bdffbe479e30999d460,Command:[/iptables-alerter/iptables-alerter.sh],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONTAINER_RUNTIME_ENDPOINT,Value:unix:///run/crio/crio.sock,ValueFrom:nil,},EnvVar{Name:ALERTER_POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{68157440 0} {} 65Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:iptables-alerter-script,ReadOnly:false,MountPath:/iptables-alerter,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-slash,ReadOnly:true,MountPath:/host,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-xstz5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod iptables-alerter-qz6pg_openshift-network-operator(08e2bc8e-ca80-454c-81dc-211d122e32e0): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Mar 13 12:37:31.423853 master-0 kubenswrapper[7518]: E0313 12:37:31.423788 7518 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"iptables-alerter\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openshift-network-operator/iptables-alerter-qz6pg" podUID="08e2bc8e-ca80-454c-81dc-211d122e32e0" Mar 13 12:37:31.631383 master-0 kubenswrapper[7518]: I0313 12:37:31.630941 7518 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-network-diagnostics/network-check-target-pnwsc"] Mar 13 12:37:31.646460 master-0 kubenswrapper[7518]: W0313 
12:37:31.646408 7518 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod269aedfd_4274_4998_bd0d_603b67257666.slice/crio-ac4a42c40018650481568cd3e3f0125e785e9eec1d03bfa3009fd0ee7e80a629 WatchSource:0}: Error finding container ac4a42c40018650481568cd3e3f0125e785e9eec1d03bfa3009fd0ee7e80a629: Status 404 returned error can't find the container with id ac4a42c40018650481568cd3e3f0125e785e9eec1d03bfa3009fd0ee7e80a629 Mar 13 12:37:31.687390 master-0 kubenswrapper[7518]: I0313 12:37:31.678043 7518 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-7c6989d6c4-tc4ht" event={"ID":"d11f8baa-6e8e-4ac0-9b23-1c44efd0ab2a","Type":"ContainerStarted","Data":"13a298fff8d915caaf89a785573e9b3488b88852d2c326a75e61c523b3cd60a0"} Mar 13 12:37:31.687390 master-0 kubenswrapper[7518]: I0313 12:37:31.680559 7518 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-pnwsc" event={"ID":"269aedfd-4274-4998-bd0d-603b67257666","Type":"ContainerStarted","Data":"ac4a42c40018650481568cd3e3f0125e785e9eec1d03bfa3009fd0ee7e80a629"} Mar 13 12:37:31.687390 master-0 kubenswrapper[7518]: I0313 12:37:31.682569 7518 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-olm-operator/cluster-olm-operator-77899cf6d-7nvbn" event={"ID":"887d261f-d07f-4ef0-a230-6568f47acf4d","Type":"ContainerDied","Data":"531c8b5824f7a1f7f686e430cb7bccc435fffb1f3a305f83070f80c2e1535620"} Mar 13 12:37:31.687390 master-0 kubenswrapper[7518]: I0313 12:37:31.682510 7518 generic.go:334] "Generic (PLEG): container finished" podID="887d261f-d07f-4ef0-a230-6568f47acf4d" containerID="531c8b5824f7a1f7f686e430cb7bccc435fffb1f3a305f83070f80c2e1535620" exitCode=0 Mar 13 12:37:31.687390 master-0 kubenswrapper[7518]: I0313 12:37:31.685753 7518 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-cluster-storage-operator/csi-snapshot-controller-operator-5685fbc7d-97wkd" event={"ID":"4e279dcc-35e2-4503-babc-978ac208c150","Type":"ContainerStarted","Data":"6d3a11a8a9fe0d5dca51d9ed392850f6788ebc18ced1ae2a2591ab3c73418318"}
Mar 13 12:37:32.967242 master-0 kubenswrapper[7518]: I0313 12:37:32.961772 7518 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5c74bfc494-m8mqj" event={"ID":"0da84bb7-e936-49a0-96b5-614a1305d6a4","Type":"ContainerStarted","Data":"7049109a836522af070e6bb63ef4a03a6cf57954c7a7d1ea2471e59144150127"}
Mar 13 12:37:32.967242 master-0 kubenswrapper[7518]: I0313 12:37:32.963643 7518 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-pnwsc" event={"ID":"269aedfd-4274-4998-bd0d-603b67257666","Type":"ContainerStarted","Data":"b052f3c7a3c4c78d68f40c9d29ccab5224ececce15cf2b2f1ec3f0c092b6b2f2"}
Mar 13 12:37:33.573522 master-0 kubenswrapper[7518]: I0313 12:37:33.573415 7518 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-h8fwp"
Mar 13 12:37:33.573789 master-0 kubenswrapper[7518]: I0313 12:37:33.573678 7518 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Mar 13 12:37:33.573789 master-0 kubenswrapper[7518]: I0313 12:37:33.573689 7518 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Mar 13 12:37:33.614008 master-0 kubenswrapper[7518]: I0313 12:37:33.613886 7518 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-h8fwp"
Mar 13 12:37:33.798164 master-0 kubenswrapper[7518]: I0313 12:37:33.794468 7518 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-network-diagnostics/network-check-target-pnwsc"
Mar 13 12:37:33.966484 master-0 kubenswrapper[7518]: I0313 12:37:33.966375 7518 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Mar 13 12:37:34.146417 master-0 kubenswrapper[7518]: I0313 12:37:34.146207 7518 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="kube-system/bootstrap-kube-controller-manager-master-0"
Mar 13 12:37:34.156873 master-0 kubenswrapper[7518]: I0313 12:37:34.156831 7518 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="kube-system/bootstrap-kube-controller-manager-master-0"
Mar 13 12:37:34.271889 master-0 kubenswrapper[7518]: I0313 12:37:34.271278 7518 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-storage-operator/csi-snapshot-controller-7577d6f48-pjpn2"]
Mar 13 12:37:34.271889 master-0 kubenswrapper[7518]: E0313 12:37:34.271489 7518 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="72ba330e-35ca-4d05-8641-a880bf30c0e7" containerName="assisted-installer-controller"
Mar 13 12:37:34.271889 master-0 kubenswrapper[7518]: I0313 12:37:34.271507 7518 state_mem.go:107] "Deleted CPUSet assignment" podUID="72ba330e-35ca-4d05-8641-a880bf30c0e7" containerName="assisted-installer-controller"
Mar 13 12:37:34.271889 master-0 kubenswrapper[7518]: E0313 12:37:34.271519 7518 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3dca6a91-7c31-44d2-89eb-c2c5f941e983" containerName="prober"
Mar 13 12:37:34.271889 master-0 kubenswrapper[7518]: I0313 12:37:34.271525 7518 state_mem.go:107] "Deleted CPUSet assignment" podUID="3dca6a91-7c31-44d2-89eb-c2c5f941e983" containerName="prober"
Mar 13 12:37:34.271889 master-0 kubenswrapper[7518]: I0313 12:37:34.271589 7518 memory_manager.go:354] "RemoveStaleState removing state" podUID="3dca6a91-7c31-44d2-89eb-c2c5f941e983" containerName="prober"
Mar 13 12:37:34.271889 master-0 kubenswrapper[7518]: I0313 12:37:34.271599 7518 memory_manager.go:354] "RemoveStaleState removing state" podUID="72ba330e-35ca-4d05-8641-a880bf30c0e7" containerName="assisted-installer-controller"
Mar 13 12:37:34.272300 master-0 kubenswrapper[7518]: I0313 12:37:34.271930 7518 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-storage-operator/csi-snapshot-controller-7577d6f48-pjpn2"
Mar 13 12:37:34.299380 master-0 kubenswrapper[7518]: I0313 12:37:34.299322 7518 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/2f79578c-bbfb-4968-893a-730deb4c01f9-metrics-tls\") pod \"ingress-operator-677db989d6-ckl2j\" (UID: \"2f79578c-bbfb-4968-893a-730deb4c01f9\") " pod="openshift-ingress-operator/ingress-operator-677db989d6-ckl2j"
Mar 13 12:37:34.299380 master-0 kubenswrapper[7518]: I0313 12:37:34.299360 7518 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/10944f9c-8ce9-44e6-9c36-a0ea19d8cae3-srv-cert\") pod \"catalog-operator-7d9c49f57b-tlnkd\" (UID: \"10944f9c-8ce9-44e6-9c36-a0ea19d8cae3\") " pod="openshift-operator-lifecycle-manager/catalog-operator-7d9c49f57b-tlnkd"
Mar 13 12:37:34.299380 master-0 kubenswrapper[7518]: I0313 12:37:34.299387 7518 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f39d7f76-0075-44c3-9101-eb2607cb176a-serving-cert\") pod \"cluster-version-operator-745944c6b7-mbjxt\" (UID: \"f39d7f76-0075-44c3-9101-eb2607cb176a\") " pod="openshift-cluster-version/cluster-version-operator-745944c6b7-mbjxt"
Mar 13 12:37:34.299613 master-0 kubenswrapper[7518]: E0313 12:37:34.299494 7518 secret.go:189] Couldn't get secret openshift-ingress-operator/metrics-tls: secret "metrics-tls" not found
Mar 13 12:37:34.299613 master-0 kubenswrapper[7518]: E0313 12:37:34.299545 7518 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2f79578c-bbfb-4968-893a-730deb4c01f9-metrics-tls podName:2f79578c-bbfb-4968-893a-730deb4c01f9 nodeName:}" failed. No retries permitted until 2026-03-13 12:37:42.299530364 +0000 UTC m=+16.932599551 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/2f79578c-bbfb-4968-893a-730deb4c01f9-metrics-tls") pod "ingress-operator-677db989d6-ckl2j" (UID: "2f79578c-bbfb-4968-893a-730deb4c01f9") : secret "metrics-tls" not found
Mar 13 12:37:34.299613 master-0 kubenswrapper[7518]: E0313 12:37:34.299589 7518 secret.go:189] Couldn't get secret openshift-cluster-version/cluster-version-operator-serving-cert: secret "cluster-version-operator-serving-cert" not found
Mar 13 12:37:34.299719 master-0 kubenswrapper[7518]: E0313 12:37:34.299674 7518 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f39d7f76-0075-44c3-9101-eb2607cb176a-serving-cert podName:f39d7f76-0075-44c3-9101-eb2607cb176a nodeName:}" failed. No retries permitted until 2026-03-13 12:37:42.299655226 +0000 UTC m=+16.932724473 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/f39d7f76-0075-44c3-9101-eb2607cb176a-serving-cert") pod "cluster-version-operator-745944c6b7-mbjxt" (UID: "f39d7f76-0075-44c3-9101-eb2607cb176a") : secret "cluster-version-operator-serving-cert" not found
Mar 13 12:37:34.299844 master-0 kubenswrapper[7518]: E0313 12:37:34.299821 7518 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/catalog-operator-serving-cert: secret "catalog-operator-serving-cert" not found
Mar 13 12:37:34.299888 master-0 kubenswrapper[7518]: E0313 12:37:34.299877 7518 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/10944f9c-8ce9-44e6-9c36-a0ea19d8cae3-srv-cert podName:10944f9c-8ce9-44e6-9c36-a0ea19d8cae3 nodeName:}" failed. No retries permitted until 2026-03-13 12:37:42.299858959 +0000 UTC m=+16.932928146 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/10944f9c-8ce9-44e6-9c36-a0ea19d8cae3-srv-cert") pod "catalog-operator-7d9c49f57b-tlnkd" (UID: "10944f9c-8ce9-44e6-9c36-a0ea19d8cae3") : secret "catalog-operator-serving-cert" not found
Mar 13 12:37:34.376706 master-0 kubenswrapper[7518]: I0313 12:37:34.376634 7518 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-storage-operator/csi-snapshot-controller-7577d6f48-pjpn2"]
Mar 13 12:37:34.401184 master-0 kubenswrapper[7518]: I0313 12:37:34.400279 7518 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/d3d998ee-b26f-4e30-83bc-f94f8c68060a-marketplace-operator-metrics\") pod \"marketplace-operator-64bf9778cb-7qhr4\" (UID: \"d3d998ee-b26f-4e30-83bc-f94f8c68060a\") " pod="openshift-marketplace/marketplace-operator-64bf9778cb-7qhr4"
Mar 13 12:37:34.401184 master-0 kubenswrapper[7518]: I0313 12:37:34.400345 7518 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-tuning-operator-tls\" (UniqueName: \"kubernetes.io/secret/3020d236-03e0-4916-97dd-f1085632ca43-node-tuning-operator-tls\") pod \"cluster-node-tuning-operator-66c7586884-cz8pc\" (UID: \"3020d236-03e0-4916-97dd-f1085632ca43\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-66c7586884-cz8pc"
Mar 13 12:37:34.401184 master-0 kubenswrapper[7518]: I0313 12:37:34.400366 7518 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/4c0b18db-06ad-4d58-a353-f6fd96309dea-webhook-certs\") pod \"multus-admission-controller-8d675b596-96gds\" (UID: \"4c0b18db-06ad-4d58-a353-f6fd96309dea\") " pod="openshift-multus/multus-admission-controller-8d675b596-96gds"
Mar 13 12:37:34.401184 master-0 kubenswrapper[7518]: E0313 12:37:34.400505 7518 secret.go:189] Couldn't get secret openshift-marketplace/marketplace-operator-metrics: secret "marketplace-operator-metrics" not found
Mar 13 12:37:34.401184 master-0 kubenswrapper[7518]: E0313 12:37:34.400595 7518 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d3d998ee-b26f-4e30-83bc-f94f8c68060a-marketplace-operator-metrics podName:d3d998ee-b26f-4e30-83bc-f94f8c68060a nodeName:}" failed. No retries permitted until 2026-03-13 12:37:42.400573839 +0000 UTC m=+17.033643046 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "marketplace-operator-metrics" (UniqueName: "kubernetes.io/secret/d3d998ee-b26f-4e30-83bc-f94f8c68060a-marketplace-operator-metrics") pod "marketplace-operator-64bf9778cb-7qhr4" (UID: "d3d998ee-b26f-4e30-83bc-f94f8c68060a") : secret "marketplace-operator-metrics" not found
Mar 13 12:37:34.401184 master-0 kubenswrapper[7518]: I0313 12:37:34.400640 7518 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/3020d236-03e0-4916-97dd-f1085632ca43-apiservice-cert\") pod \"cluster-node-tuning-operator-66c7586884-cz8pc\" (UID: \"3020d236-03e0-4916-97dd-f1085632ca43\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-66c7586884-cz8pc"
Mar 13 12:37:34.401184 master-0 kubenswrapper[7518]: I0313 12:37:34.400688 7518 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bcf05594-4c10-4b54-a47c-d55e323f1f87-image-registry-operator-tls\") pod \"cluster-image-registry-operator-86d6d77c7c-q287n\" (UID: \"bcf05594-4c10-4b54-a47c-d55e323f1f87\") " pod="openshift-image-registry/cluster-image-registry-operator-86d6d77c7c-q287n"
Mar 13 12:37:34.401184 master-0 kubenswrapper[7518]: I0313 12:37:34.400717 7518 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/13f32761-b386-4f93-b3c0-b16ea53d338a-metrics-tls\") pod \"dns-operator-589895fbb7-mmwk7\" (UID: \"13f32761-b386-4f93-b3c0-b16ea53d338a\") " pod="openshift-dns-operator/dns-operator-589895fbb7-mmwk7"
Mar 13 12:37:34.401184 master-0 kubenswrapper[7518]: I0313 12:37:34.400748 7518 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/29b6aa89-0416-4595-9deb-10b290521d86-metrics-certs\") pod \"network-metrics-daemon-r9lmb\" (UID: \"29b6aa89-0416-4595-9deb-10b290521d86\") " pod="openshift-multus/network-metrics-daemon-r9lmb"
Mar 13 12:37:34.401184 master-0 kubenswrapper[7518]: I0313 12:37:34.400770 7518 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/604456a0-4997-43bc-87ef-283a002111fe-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-674cbfbd9d-zwtdz\" (UID: \"604456a0-4997-43bc-87ef-283a002111fe\") " pod="openshift-monitoring/cluster-monitoring-operator-674cbfbd9d-zwtdz"
Mar 13 12:37:34.401184 master-0 kubenswrapper[7518]: I0313 12:37:34.400826 7518 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8t2jl\" (UniqueName: \"kubernetes.io/projected/c642c18f-f960-4418-bcb7-df884f8f8ad5-kube-api-access-8t2jl\") pod \"csi-snapshot-controller-7577d6f48-pjpn2\" (UID: \"c642c18f-f960-4418-bcb7-df884f8f8ad5\") " pod="openshift-cluster-storage-operator/csi-snapshot-controller-7577d6f48-pjpn2"
Mar 13 12:37:34.401184 master-0 kubenswrapper[7518]: I0313 12:37:34.400861 7518 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/d5a19b80-d488-46d3-a4a8-0b80361077e1-srv-cert\") pod \"olm-operator-d64cfc9db-rfqb9\" (UID: \"d5a19b80-d488-46d3-a4a8-0b80361077e1\") " pod="openshift-operator-lifecycle-manager/olm-operator-d64cfc9db-rfqb9"
Mar 13 12:37:34.401184 master-0 kubenswrapper[7518]: E0313 12:37:34.401037 7518 secret.go:189] Couldn't get secret openshift-multus/multus-admission-controller-secret: secret "multus-admission-controller-secret" not found
Mar 13 12:37:34.401184 master-0 kubenswrapper[7518]: E0313 12:37:34.401050 7518 secret.go:189] Couldn't get secret openshift-dns-operator/metrics-tls: secret "metrics-tls" not found
Mar 13 12:37:34.401184 master-0 kubenswrapper[7518]: E0313 12:37:34.401078 7518 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4c0b18db-06ad-4d58-a353-f6fd96309dea-webhook-certs podName:4c0b18db-06ad-4d58-a353-f6fd96309dea nodeName:}" failed. No retries permitted until 2026-03-13 12:37:42.401065696 +0000 UTC m=+17.034134973 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/4c0b18db-06ad-4d58-a353-f6fd96309dea-webhook-certs") pod "multus-admission-controller-8d675b596-96gds" (UID: "4c0b18db-06ad-4d58-a353-f6fd96309dea") : secret "multus-admission-controller-secret" not found
Mar 13 12:37:34.401184 master-0 kubenswrapper[7518]: E0313 12:37:34.401158 7518 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/13f32761-b386-4f93-b3c0-b16ea53d338a-metrics-tls podName:13f32761-b386-4f93-b3c0-b16ea53d338a nodeName:}" failed. No retries permitted until 2026-03-13 12:37:42.401117347 +0000 UTC m=+17.034186554 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/13f32761-b386-4f93-b3c0-b16ea53d338a-metrics-tls") pod "dns-operator-589895fbb7-mmwk7" (UID: "13f32761-b386-4f93-b3c0-b16ea53d338a") : secret "metrics-tls" not found
Mar 13 12:37:34.401184 master-0 kubenswrapper[7518]: E0313 12:37:34.401168 7518 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/olm-operator-serving-cert: secret "olm-operator-serving-cert" not found
Mar 13 12:37:34.401184 master-0 kubenswrapper[7518]: E0313 12:37:34.401199 7518 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d5a19b80-d488-46d3-a4a8-0b80361077e1-srv-cert podName:d5a19b80-d488-46d3-a4a8-0b80361077e1 nodeName:}" failed. No retries permitted until 2026-03-13 12:37:42.401187458 +0000 UTC m=+17.034256665 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/d5a19b80-d488-46d3-a4a8-0b80361077e1-srv-cert") pod "olm-operator-d64cfc9db-rfqb9" (UID: "d5a19b80-d488-46d3-a4a8-0b80361077e1") : secret "olm-operator-serving-cert" not found
Mar 13 12:37:34.401879 master-0 kubenswrapper[7518]: E0313 12:37:34.401228 7518 secret.go:189] Couldn't get secret openshift-image-registry/image-registry-operator-tls: secret "image-registry-operator-tls" not found
Mar 13 12:37:34.401879 master-0 kubenswrapper[7518]: E0313 12:37:34.401239 7518 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/performance-addon-operator-webhook-cert: secret "performance-addon-operator-webhook-cert" not found
Mar 13 12:37:34.401879 master-0 kubenswrapper[7518]: E0313 12:37:34.401269 7518 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/bcf05594-4c10-4b54-a47c-d55e323f1f87-image-registry-operator-tls podName:bcf05594-4c10-4b54-a47c-d55e323f1f87 nodeName:}" failed. No retries permitted until 2026-03-13 12:37:42.401248649 +0000 UTC m=+17.034317846 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "image-registry-operator-tls" (UniqueName: "kubernetes.io/secret/bcf05594-4c10-4b54-a47c-d55e323f1f87-image-registry-operator-tls") pod "cluster-image-registry-operator-86d6d77c7c-q287n" (UID: "bcf05594-4c10-4b54-a47c-d55e323f1f87") : secret "image-registry-operator-tls" not found
Mar 13 12:37:34.401879 master-0 kubenswrapper[7518]: E0313 12:37:34.401289 7518 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3020d236-03e0-4916-97dd-f1085632ca43-apiservice-cert podName:3020d236-03e0-4916-97dd-f1085632ca43 nodeName:}" failed. No retries permitted until 2026-03-13 12:37:42.401279099 +0000 UTC m=+17.034348296 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "apiservice-cert" (UniqueName: "kubernetes.io/secret/3020d236-03e0-4916-97dd-f1085632ca43-apiservice-cert") pod "cluster-node-tuning-operator-66c7586884-cz8pc" (UID: "3020d236-03e0-4916-97dd-f1085632ca43") : secret "performance-addon-operator-webhook-cert" not found
Mar 13 12:37:34.401879 master-0 kubenswrapper[7518]: E0313 12:37:34.401305 7518 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: secret "metrics-daemon-secret" not found
Mar 13 12:37:34.401879 master-0 kubenswrapper[7518]: E0313 12:37:34.401332 7518 secret.go:189] Couldn't get secret openshift-monitoring/cluster-monitoring-operator-tls: secret "cluster-monitoring-operator-tls" not found
Mar 13 12:37:34.401879 master-0 kubenswrapper[7518]: E0313 12:37:34.401330 7518 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/29b6aa89-0416-4595-9deb-10b290521d86-metrics-certs podName:29b6aa89-0416-4595-9deb-10b290521d86 nodeName:}" failed. No retries permitted until 2026-03-13 12:37:42.40132054 +0000 UTC m=+17.034389747 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/29b6aa89-0416-4595-9deb-10b290521d86-metrics-certs") pod "network-metrics-daemon-r9lmb" (UID: "29b6aa89-0416-4595-9deb-10b290521d86") : secret "metrics-daemon-secret" not found
Mar 13 12:37:34.401879 master-0 kubenswrapper[7518]: E0313 12:37:34.401354 7518 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/node-tuning-operator-tls: secret "node-tuning-operator-tls" not found
Mar 13 12:37:34.401879 master-0 kubenswrapper[7518]: I0313 12:37:34.401420 7518 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3d653e1a-5903-4a02-9357-df145f028c0d-package-server-manager-serving-cert\") pod \"package-server-manager-854648ff6d-669qk\" (UID: \"3d653e1a-5903-4a02-9357-df145f028c0d\") " pod="openshift-operator-lifecycle-manager/package-server-manager-854648ff6d-669qk"
Mar 13 12:37:34.401879 master-0 kubenswrapper[7518]: E0313 12:37:34.401434 7518 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3020d236-03e0-4916-97dd-f1085632ca43-node-tuning-operator-tls podName:3020d236-03e0-4916-97dd-f1085632ca43 nodeName:}" failed. No retries permitted until 2026-03-13 12:37:42.401414461 +0000 UTC m=+17.034483638 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "node-tuning-operator-tls" (UniqueName: "kubernetes.io/secret/3020d236-03e0-4916-97dd-f1085632ca43-node-tuning-operator-tls") pod "cluster-node-tuning-operator-66c7586884-cz8pc" (UID: "3020d236-03e0-4916-97dd-f1085632ca43") : secret "node-tuning-operator-tls" not found
Mar 13 12:37:34.401879 master-0 kubenswrapper[7518]: E0313 12:37:34.401480 7518 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/604456a0-4997-43bc-87ef-283a002111fe-cluster-monitoring-operator-tls podName:604456a0-4997-43bc-87ef-283a002111fe nodeName:}" failed. No retries permitted until 2026-03-13 12:37:42.401471182 +0000 UTC m=+17.034540369 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "cluster-monitoring-operator-tls" (UniqueName: "kubernetes.io/secret/604456a0-4997-43bc-87ef-283a002111fe-cluster-monitoring-operator-tls") pod "cluster-monitoring-operator-674cbfbd9d-zwtdz" (UID: "604456a0-4997-43bc-87ef-283a002111fe") : secret "cluster-monitoring-operator-tls" not found
Mar 13 12:37:34.401879 master-0 kubenswrapper[7518]: E0313 12:37:34.401512 7518 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/package-server-manager-serving-cert: secret "package-server-manager-serving-cert" not found
Mar 13 12:37:34.401879 master-0 kubenswrapper[7518]: E0313 12:37:34.401565 7518 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3d653e1a-5903-4a02-9357-df145f028c0d-package-server-manager-serving-cert podName:3d653e1a-5903-4a02-9357-df145f028c0d nodeName:}" failed. No retries permitted until 2026-03-13 12:37:42.401551243 +0000 UTC m=+17.034620500 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "package-server-manager-serving-cert" (UniqueName: "kubernetes.io/secret/3d653e1a-5903-4a02-9357-df145f028c0d-package-server-manager-serving-cert") pod "package-server-manager-854648ff6d-669qk" (UID: "3d653e1a-5903-4a02-9357-df145f028c0d") : secret "package-server-manager-serving-cert" not found
Mar 13 12:37:34.507498 master-0 kubenswrapper[7518]: I0313 12:37:34.507431 7518 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8t2jl\" (UniqueName: \"kubernetes.io/projected/c642c18f-f960-4418-bcb7-df884f8f8ad5-kube-api-access-8t2jl\") pod \"csi-snapshot-controller-7577d6f48-pjpn2\" (UID: \"c642c18f-f960-4418-bcb7-df884f8f8ad5\") " pod="openshift-cluster-storage-operator/csi-snapshot-controller-7577d6f48-pjpn2"
Mar 13 12:37:34.545690 master-0 kubenswrapper[7518]: I0313 12:37:34.545576 7518 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8t2jl\" (UniqueName: \"kubernetes.io/projected/c642c18f-f960-4418-bcb7-df884f8f8ad5-kube-api-access-8t2jl\") pod \"csi-snapshot-controller-7577d6f48-pjpn2\" (UID: \"c642c18f-f960-4418-bcb7-df884f8f8ad5\") " pod="openshift-cluster-storage-operator/csi-snapshot-controller-7577d6f48-pjpn2"
Mar 13 12:37:34.595374 master-0 kubenswrapper[7518]: I0313 12:37:34.595328 7518 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-storage-operator/csi-snapshot-controller-7577d6f48-pjpn2"
Mar 13 12:37:35.216015 master-0 kubenswrapper[7518]: I0313 12:37:35.215981 7518 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="kube-system/bootstrap-kube-controller-manager-master-0"
Mar 13 12:37:35.219508 master-0 kubenswrapper[7518]: I0313 12:37:35.219482 7518 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="kube-system/bootstrap-kube-controller-manager-master-0"
Mar 13 12:37:35.463810 master-0 kubenswrapper[7518]: I0313 12:37:35.463756 7518 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-storage-operator/csi-snapshot-controller-7577d6f48-pjpn2"]
Mar 13 12:37:35.785045 master-0 kubenswrapper[7518]: I0313 12:37:35.784729 7518 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-h8fwp"
Mar 13 12:37:35.785303 master-0 kubenswrapper[7518]: I0313 12:37:35.785192 7518 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Mar 13 12:37:35.806277 master-0 kubenswrapper[7518]: I0313 12:37:35.806012 7518 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-h8fwp"
Mar 13 12:37:35.976580 master-0 kubenswrapper[7518]: I0313 12:37:35.976517 7518 generic.go:334] "Generic (PLEG): container finished" podID="887d261f-d07f-4ef0-a230-6568f47acf4d" containerID="5cd273040496c4efd233900f344ee1edf468c14a89e07cdd24f71287c4f355e0" exitCode=0
Mar 13 12:37:35.976964 master-0 kubenswrapper[7518]: I0313 12:37:35.976802 7518 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-olm-operator/cluster-olm-operator-77899cf6d-7nvbn" event={"ID":"887d261f-d07f-4ef0-a230-6568f47acf4d","Type":"ContainerDied","Data":"5cd273040496c4efd233900f344ee1edf468c14a89e07cdd24f71287c4f355e0"}
Mar 13 12:37:35.977713 master-0 kubenswrapper[7518]: I0313 12:37:35.977571 7518 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/csi-snapshot-controller-7577d6f48-pjpn2" event={"ID":"c642c18f-f960-4418-bcb7-df884f8f8ad5","Type":"ContainerStarted","Data":"a4591749866252389a99d8d167ffc17036d5b09d044139535fc2027e3c84b038"}
Mar 13 12:37:36.069880 master-0 kubenswrapper[7518]: I0313 12:37:36.069515 7518 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="kube-system/bootstrap-kube-controller-manager-master-0"
Mar 13 12:37:36.075304 master-0 kubenswrapper[7518]: I0313 12:37:36.075250 7518 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="kube-system/bootstrap-kube-controller-manager-master-0"
Mar 13 12:37:37.990463 master-0 kubenswrapper[7518]: I0313 12:37:37.989858 7518 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/csi-snapshot-controller-7577d6f48-pjpn2" event={"ID":"c642c18f-f960-4418-bcb7-df884f8f8ad5","Type":"ContainerStarted","Data":"5f9a44760abbfd1a103c3cb10f98bd42571ee701936731fde14d2460a8ada811"}
Mar 13 12:37:38.012583 master-0 kubenswrapper[7518]: I0313 12:37:38.012344 7518 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-storage-operator/csi-snapshot-controller-7577d6f48-pjpn2" podStartSLOduration=2.04920743 podStartE2EDuration="4.012298026s" podCreationTimestamp="2026-03-13 12:37:34 +0000 UTC" firstStartedPulling="2026-03-13 12:37:35.474460832 +0000 UTC m=+10.107530019" lastFinishedPulling="2026-03-13 12:37:37.437551388 +0000 UTC m=+12.070620615" observedRunningTime="2026-03-13 12:37:38.007103021 +0000 UTC m=+12.640172218" watchObservedRunningTime="2026-03-13 12:37:38.012298026 +0000 UTC m=+12.645367213"
Mar 13 12:37:38.993467 master-0 kubenswrapper[7518]: I0313 12:37:38.993316 7518 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-olm-operator/cluster-olm-operator-77899cf6d-7nvbn" event={"ID":"887d261f-d07f-4ef0-a230-6568f47acf4d","Type":"ContainerStarted","Data":"ac30e49a3ae0e3ef59ed9c3728ae1c26bf004ec3b0fe4cf00ec315598faa9cf4"}
Mar 13 12:37:39.999117 master-0 kubenswrapper[7518]: I0313 12:37:39.999061 7518 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-8565d84698-hj2wk" event={"ID":"8c62b15f-001a-4b64-b85f-348aefde5d1b","Type":"ContainerStarted","Data":"50a86534e82c318c07e40c2eda167d8236002efbe5ace1ee2b94525f4f64c25b"}
Mar 13 12:37:40.001778 master-0 kubenswrapper[7518]: I0313 12:37:40.001738 7518 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-86d7cdfdfb-br96g" event={"ID":"77ef7e49-eb85-4f5e-94d3-a6a8619a6243","Type":"ContainerStarted","Data":"3add725e66228351c75651bb4a7357a39de488d2f8d517621841a317712aba3a"}
Mar 13 12:37:40.995702 master-0 kubenswrapper[7518]: I0313 12:37:40.995644 7518 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-6f7fd6c796-sjzpb"]
Mar 13 12:37:40.996128 master-0 kubenswrapper[7518]: I0313 12:37:40.996108 7518 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6f7fd6c796-sjzpb"
Mar 13 12:37:40.998313 master-0 kubenswrapper[7518]: I0313 12:37:40.998270 7518 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert"
Mar 13 12:37:40.998395 master-0 kubenswrapper[7518]: I0313 12:37:40.998312 7518 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca"
Mar 13 12:37:40.998395 master-0 kubenswrapper[7518]: I0313 12:37:40.998278 7518 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config"
Mar 13 12:37:40.998853 master-0 kubenswrapper[7518]: I0313 12:37:40.998825 7518 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca"
Mar 13 12:37:40.999100 master-0 kubenswrapper[7518]: I0313 12:37:40.999076 7518 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt"
Mar 13 12:37:41.000894 master-0 kubenswrapper[7518]: I0313 12:37:41.000855 7518 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt"
Mar 13 12:37:41.008386 master-0 kubenswrapper[7518]: I0313 12:37:41.007718 7518 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-6f7fd6c796-sjzpb"]
Mar 13 12:37:41.181650 master-0 kubenswrapper[7518]: I0313 12:37:41.181543 7518 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0d1149a6-9d35-470a-aaf2-e5d2f1de19ba-serving-cert\") pod \"controller-manager-6f7fd6c796-sjzpb\" (UID: \"0d1149a6-9d35-470a-aaf2-e5d2f1de19ba\") " pod="openshift-controller-manager/controller-manager-6f7fd6c796-sjzpb"
Mar 13 12:37:41.181650 master-0 kubenswrapper[7518]: I0313 12:37:41.181617 7518 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/0d1149a6-9d35-470a-aaf2-e5d2f1de19ba-client-ca\") pod \"controller-manager-6f7fd6c796-sjzpb\" (UID: \"0d1149a6-9d35-470a-aaf2-e5d2f1de19ba\") " pod="openshift-controller-manager/controller-manager-6f7fd6c796-sjzpb"
Mar 13 12:37:41.181922 master-0 kubenswrapper[7518]: I0313 12:37:41.181825 7518 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-46x2g\" (UniqueName: \"kubernetes.io/projected/0d1149a6-9d35-470a-aaf2-e5d2f1de19ba-kube-api-access-46x2g\") pod \"controller-manager-6f7fd6c796-sjzpb\" (UID: \"0d1149a6-9d35-470a-aaf2-e5d2f1de19ba\") " pod="openshift-controller-manager/controller-manager-6f7fd6c796-sjzpb"
Mar 13 12:37:41.181922 master-0 kubenswrapper[7518]: I0313 12:37:41.181875 7518 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/0d1149a6-9d35-470a-aaf2-e5d2f1de19ba-proxy-ca-bundles\") pod \"controller-manager-6f7fd6c796-sjzpb\" (UID: \"0d1149a6-9d35-470a-aaf2-e5d2f1de19ba\") " pod="openshift-controller-manager/controller-manager-6f7fd6c796-sjzpb"
Mar 13 12:37:41.181922 master-0 kubenswrapper[7518]: I0313 12:37:41.181904 7518 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0d1149a6-9d35-470a-aaf2-e5d2f1de19ba-config\") pod \"controller-manager-6f7fd6c796-sjzpb\" (UID: \"0d1149a6-9d35-470a-aaf2-e5d2f1de19ba\") " pod="openshift-controller-manager/controller-manager-6f7fd6c796-sjzpb"
Mar 13 12:37:41.282513 master-0 kubenswrapper[7518]: I0313 12:37:41.282418 7518 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0d1149a6-9d35-470a-aaf2-e5d2f1de19ba-serving-cert\") pod \"controller-manager-6f7fd6c796-sjzpb\" (UID: \"0d1149a6-9d35-470a-aaf2-e5d2f1de19ba\") " pod="openshift-controller-manager/controller-manager-6f7fd6c796-sjzpb"
Mar 13 12:37:41.282513 master-0 kubenswrapper[7518]: I0313 12:37:41.282506 7518 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/0d1149a6-9d35-470a-aaf2-e5d2f1de19ba-client-ca\") pod \"controller-manager-6f7fd6c796-sjzpb\" (UID: \"0d1149a6-9d35-470a-aaf2-e5d2f1de19ba\") " pod="openshift-controller-manager/controller-manager-6f7fd6c796-sjzpb"
Mar 13 12:37:41.282814 master-0 kubenswrapper[7518]: I0313 12:37:41.282782 7518 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-46x2g\" (UniqueName: \"kubernetes.io/projected/0d1149a6-9d35-470a-aaf2-e5d2f1de19ba-kube-api-access-46x2g\") pod \"controller-manager-6f7fd6c796-sjzpb\" (UID: \"0d1149a6-9d35-470a-aaf2-e5d2f1de19ba\") " pod="openshift-controller-manager/controller-manager-6f7fd6c796-sjzpb"
Mar 13 12:37:41.282874 master-0 kubenswrapper[7518]: I0313 12:37:41.282843 7518 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/0d1149a6-9d35-470a-aaf2-e5d2f1de19ba-proxy-ca-bundles\") pod \"controller-manager-6f7fd6c796-sjzpb\" (UID: \"0d1149a6-9d35-470a-aaf2-e5d2f1de19ba\") " pod="openshift-controller-manager/controller-manager-6f7fd6c796-sjzpb"
Mar 13 12:37:41.282933 master-0 kubenswrapper[7518]: I0313 12:37:41.282915 7518 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0d1149a6-9d35-470a-aaf2-e5d2f1de19ba-config\") pod \"controller-manager-6f7fd6c796-sjzpb\" (UID: \"0d1149a6-9d35-470a-aaf2-e5d2f1de19ba\") " pod="openshift-controller-manager/controller-manager-6f7fd6c796-sjzpb"
Mar 13 12:37:41.283350 master-0 kubenswrapper[7518]: E0313 12:37:41.283067 7518 configmap.go:193] Couldn't get configMap openshift-controller-manager/client-ca: configmap "client-ca" not found
Mar 13 12:37:41.283350 master-0 kubenswrapper[7518]: E0313 12:37:41.283108 7518 configmap.go:193] Couldn't get configMap openshift-controller-manager/openshift-global-ca: configmap "openshift-global-ca" not found
Mar 13 12:37:41.283350 master-0 kubenswrapper[7518]: E0313 12:37:41.283110 7518 configmap.go:193] Couldn't get configMap openshift-controller-manager/config: configmap "config" not found
Mar 13 12:37:41.283350 master-0 kubenswrapper[7518]: E0313 12:37:41.283210 7518 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/0d1149a6-9d35-470a-aaf2-e5d2f1de19ba-client-ca podName:0d1149a6-9d35-470a-aaf2-e5d2f1de19ba nodeName:}" failed. No retries permitted until 2026-03-13 12:37:41.783186659 +0000 UTC m=+16.416255936 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/0d1149a6-9d35-470a-aaf2-e5d2f1de19ba-client-ca") pod "controller-manager-6f7fd6c796-sjzpb" (UID: "0d1149a6-9d35-470a-aaf2-e5d2f1de19ba") : configmap "client-ca" not found
Mar 13 12:37:41.283350 master-0 kubenswrapper[7518]: E0313 12:37:41.283240 7518 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/0d1149a6-9d35-470a-aaf2-e5d2f1de19ba-proxy-ca-bundles podName:0d1149a6-9d35-470a-aaf2-e5d2f1de19ba nodeName:}" failed. No retries permitted until 2026-03-13 12:37:41.78322607 +0000 UTC m=+16.416295257 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "proxy-ca-bundles" (UniqueName: "kubernetes.io/configmap/0d1149a6-9d35-470a-aaf2-e5d2f1de19ba-proxy-ca-bundles") pod "controller-manager-6f7fd6c796-sjzpb" (UID: "0d1149a6-9d35-470a-aaf2-e5d2f1de19ba") : configmap "openshift-global-ca" not found
Mar 13 12:37:41.283350 master-0 kubenswrapper[7518]: E0313 12:37:41.283256 7518 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/0d1149a6-9d35-470a-aaf2-e5d2f1de19ba-config podName:0d1149a6-9d35-470a-aaf2-e5d2f1de19ba nodeName:}" failed. No retries permitted until 2026-03-13 12:37:41.78324845 +0000 UTC m=+16.416317767 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/0d1149a6-9d35-470a-aaf2-e5d2f1de19ba-config") pod "controller-manager-6f7fd6c796-sjzpb" (UID: "0d1149a6-9d35-470a-aaf2-e5d2f1de19ba") : configmap "config" not found
Mar 13 12:37:41.283350 master-0 kubenswrapper[7518]: E0313 12:37:41.283252 7518 secret.go:189] Couldn't get secret openshift-controller-manager/serving-cert: secret "serving-cert" not found
Mar 13 12:37:41.283678 master-0 kubenswrapper[7518]: E0313 12:37:41.283359 7518 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0d1149a6-9d35-470a-aaf2-e5d2f1de19ba-serving-cert podName:0d1149a6-9d35-470a-aaf2-e5d2f1de19ba nodeName:}" failed. No retries permitted until 2026-03-13 12:37:41.783328611 +0000 UTC m=+16.416397888 (durationBeforeRetry 500ms).
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/0d1149a6-9d35-470a-aaf2-e5d2f1de19ba-serving-cert") pod "controller-manager-6f7fd6c796-sjzpb" (UID: "0d1149a6-9d35-470a-aaf2-e5d2f1de19ba") : secret "serving-cert" not found Mar 13 12:37:41.309939 master-0 kubenswrapper[7518]: I0313 12:37:41.309629 7518 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-46x2g\" (UniqueName: \"kubernetes.io/projected/0d1149a6-9d35-470a-aaf2-e5d2f1de19ba-kube-api-access-46x2g\") pod \"controller-manager-6f7fd6c796-sjzpb\" (UID: \"0d1149a6-9d35-470a-aaf2-e5d2f1de19ba\") " pod="openshift-controller-manager/controller-manager-6f7fd6c796-sjzpb" Mar 13 12:37:41.788049 master-0 kubenswrapper[7518]: I0313 12:37:41.787963 7518 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/0d1149a6-9d35-470a-aaf2-e5d2f1de19ba-proxy-ca-bundles\") pod \"controller-manager-6f7fd6c796-sjzpb\" (UID: \"0d1149a6-9d35-470a-aaf2-e5d2f1de19ba\") " pod="openshift-controller-manager/controller-manager-6f7fd6c796-sjzpb" Mar 13 12:37:41.788321 master-0 kubenswrapper[7518]: E0313 12:37:41.788099 7518 configmap.go:193] Couldn't get configMap openshift-controller-manager/openshift-global-ca: configmap "openshift-global-ca" not found Mar 13 12:37:41.788321 master-0 kubenswrapper[7518]: I0313 12:37:41.788176 7518 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0d1149a6-9d35-470a-aaf2-e5d2f1de19ba-config\") pod \"controller-manager-6f7fd6c796-sjzpb\" (UID: \"0d1149a6-9d35-470a-aaf2-e5d2f1de19ba\") " pod="openshift-controller-manager/controller-manager-6f7fd6c796-sjzpb" Mar 13 12:37:41.788321 master-0 kubenswrapper[7518]: E0313 12:37:41.788190 7518 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/0d1149a6-9d35-470a-aaf2-e5d2f1de19ba-proxy-ca-bundles 
podName:0d1149a6-9d35-470a-aaf2-e5d2f1de19ba nodeName:}" failed. No retries permitted until 2026-03-13 12:37:42.788171249 +0000 UTC m=+17.421240436 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "proxy-ca-bundles" (UniqueName: "kubernetes.io/configmap/0d1149a6-9d35-470a-aaf2-e5d2f1de19ba-proxy-ca-bundles") pod "controller-manager-6f7fd6c796-sjzpb" (UID: "0d1149a6-9d35-470a-aaf2-e5d2f1de19ba") : configmap "openshift-global-ca" not found Mar 13 12:37:41.788321 master-0 kubenswrapper[7518]: I0313 12:37:41.788278 7518 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0d1149a6-9d35-470a-aaf2-e5d2f1de19ba-serving-cert\") pod \"controller-manager-6f7fd6c796-sjzpb\" (UID: \"0d1149a6-9d35-470a-aaf2-e5d2f1de19ba\") " pod="openshift-controller-manager/controller-manager-6f7fd6c796-sjzpb" Mar 13 12:37:41.788321 master-0 kubenswrapper[7518]: E0313 12:37:41.788291 7518 configmap.go:193] Couldn't get configMap openshift-controller-manager/config: configmap "config" not found Mar 13 12:37:41.788536 master-0 kubenswrapper[7518]: I0313 12:37:41.788325 7518 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/0d1149a6-9d35-470a-aaf2-e5d2f1de19ba-client-ca\") pod \"controller-manager-6f7fd6c796-sjzpb\" (UID: \"0d1149a6-9d35-470a-aaf2-e5d2f1de19ba\") " pod="openshift-controller-manager/controller-manager-6f7fd6c796-sjzpb" Mar 13 12:37:41.788536 master-0 kubenswrapper[7518]: E0313 12:37:41.788330 7518 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/0d1149a6-9d35-470a-aaf2-e5d2f1de19ba-config podName:0d1149a6-9d35-470a-aaf2-e5d2f1de19ba nodeName:}" failed. No retries permitted until 2026-03-13 12:37:42.788319751 +0000 UTC m=+17.421388928 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/0d1149a6-9d35-470a-aaf2-e5d2f1de19ba-config") pod "controller-manager-6f7fd6c796-sjzpb" (UID: "0d1149a6-9d35-470a-aaf2-e5d2f1de19ba") : configmap "config" not found Mar 13 12:37:41.788536 master-0 kubenswrapper[7518]: E0313 12:37:41.788396 7518 secret.go:189] Couldn't get secret openshift-controller-manager/serving-cert: secret "serving-cert" not found Mar 13 12:37:41.788536 master-0 kubenswrapper[7518]: E0313 12:37:41.788436 7518 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0d1149a6-9d35-470a-aaf2-e5d2f1de19ba-serving-cert podName:0d1149a6-9d35-470a-aaf2-e5d2f1de19ba nodeName:}" failed. No retries permitted until 2026-03-13 12:37:42.788424863 +0000 UTC m=+17.421494050 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/0d1149a6-9d35-470a-aaf2-e5d2f1de19ba-serving-cert") pod "controller-manager-6f7fd6c796-sjzpb" (UID: "0d1149a6-9d35-470a-aaf2-e5d2f1de19ba") : secret "serving-cert" not found Mar 13 12:37:41.788536 master-0 kubenswrapper[7518]: E0313 12:37:41.788480 7518 configmap.go:193] Couldn't get configMap openshift-controller-manager/client-ca: configmap "client-ca" not found Mar 13 12:37:41.788536 master-0 kubenswrapper[7518]: E0313 12:37:41.788531 7518 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/0d1149a6-9d35-470a-aaf2-e5d2f1de19ba-client-ca podName:0d1149a6-9d35-470a-aaf2-e5d2f1de19ba nodeName:}" failed. No retries permitted until 2026-03-13 12:37:42.788513414 +0000 UTC m=+17.421582621 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/0d1149a6-9d35-470a-aaf2-e5d2f1de19ba-client-ca") pod "controller-manager-6f7fd6c796-sjzpb" (UID: "0d1149a6-9d35-470a-aaf2-e5d2f1de19ba") : configmap "client-ca" not found Mar 13 12:37:42.264889 master-0 kubenswrapper[7518]: I0313 12:37:42.264721 7518 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-6f7fd6c796-sjzpb"] Mar 13 12:37:42.271278 master-0 kubenswrapper[7518]: E0313 12:37:42.271229 7518 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[client-ca config proxy-ca-bundles serving-cert], unattached volumes=[], failed to process volumes=[]: context canceled" pod="openshift-controller-manager/controller-manager-6f7fd6c796-sjzpb" podUID="0d1149a6-9d35-470a-aaf2-e5d2f1de19ba" Mar 13 12:37:42.284478 master-0 kubenswrapper[7518]: I0313 12:37:42.284432 7518 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-54f4b89bbb-sb5x4"] Mar 13 12:37:42.284943 master-0 kubenswrapper[7518]: I0313 12:37:42.284918 7518 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-54f4b89bbb-sb5x4" Mar 13 12:37:42.286304 master-0 kubenswrapper[7518]: I0313 12:37:42.286218 7518 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Mar 13 12:37:42.287491 master-0 kubenswrapper[7518]: I0313 12:37:42.287461 7518 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Mar 13 12:37:42.287571 master-0 kubenswrapper[7518]: I0313 12:37:42.287492 7518 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Mar 13 12:37:42.287571 master-0 kubenswrapper[7518]: I0313 12:37:42.287499 7518 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Mar 13 12:37:42.288100 master-0 kubenswrapper[7518]: I0313 12:37:42.288072 7518 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Mar 13 12:37:42.293987 master-0 kubenswrapper[7518]: I0313 12:37:42.293931 7518 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-54f4b89bbb-sb5x4"] Mar 13 12:37:42.392450 master-0 kubenswrapper[7518]: I0313 12:37:42.392380 7518 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/75651bfd-ceaf-4bda-95a3-68ca11ec5abe-client-ca\") pod \"route-controller-manager-54f4b89bbb-sb5x4\" (UID: \"75651bfd-ceaf-4bda-95a3-68ca11ec5abe\") " pod="openshift-route-controller-manager/route-controller-manager-54f4b89bbb-sb5x4" Mar 13 12:37:42.392450 master-0 kubenswrapper[7518]: I0313 12:37:42.392456 7518 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xb9t6\" (UniqueName: 
\"kubernetes.io/projected/75651bfd-ceaf-4bda-95a3-68ca11ec5abe-kube-api-access-xb9t6\") pod \"route-controller-manager-54f4b89bbb-sb5x4\" (UID: \"75651bfd-ceaf-4bda-95a3-68ca11ec5abe\") " pod="openshift-route-controller-manager/route-controller-manager-54f4b89bbb-sb5x4" Mar 13 12:37:42.392686 master-0 kubenswrapper[7518]: I0313 12:37:42.392509 7518 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/2f79578c-bbfb-4968-893a-730deb4c01f9-metrics-tls\") pod \"ingress-operator-677db989d6-ckl2j\" (UID: \"2f79578c-bbfb-4968-893a-730deb4c01f9\") " pod="openshift-ingress-operator/ingress-operator-677db989d6-ckl2j" Mar 13 12:37:42.392686 master-0 kubenswrapper[7518]: I0313 12:37:42.392536 7518 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/10944f9c-8ce9-44e6-9c36-a0ea19d8cae3-srv-cert\") pod \"catalog-operator-7d9c49f57b-tlnkd\" (UID: \"10944f9c-8ce9-44e6-9c36-a0ea19d8cae3\") " pod="openshift-operator-lifecycle-manager/catalog-operator-7d9c49f57b-tlnkd" Mar 13 12:37:42.392686 master-0 kubenswrapper[7518]: I0313 12:37:42.392572 7518 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/75651bfd-ceaf-4bda-95a3-68ca11ec5abe-config\") pod \"route-controller-manager-54f4b89bbb-sb5x4\" (UID: \"75651bfd-ceaf-4bda-95a3-68ca11ec5abe\") " pod="openshift-route-controller-manager/route-controller-manager-54f4b89bbb-sb5x4" Mar 13 12:37:42.392686 master-0 kubenswrapper[7518]: I0313 12:37:42.392601 7518 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f39d7f76-0075-44c3-9101-eb2607cb176a-serving-cert\") pod \"cluster-version-operator-745944c6b7-mbjxt\" (UID: \"f39d7f76-0075-44c3-9101-eb2607cb176a\") " 
pod="openshift-cluster-version/cluster-version-operator-745944c6b7-mbjxt" Mar 13 12:37:42.392686 master-0 kubenswrapper[7518]: I0313 12:37:42.392646 7518 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/75651bfd-ceaf-4bda-95a3-68ca11ec5abe-serving-cert\") pod \"route-controller-manager-54f4b89bbb-sb5x4\" (UID: \"75651bfd-ceaf-4bda-95a3-68ca11ec5abe\") " pod="openshift-route-controller-manager/route-controller-manager-54f4b89bbb-sb5x4" Mar 13 12:37:42.392893 master-0 kubenswrapper[7518]: E0313 12:37:42.392805 7518 secret.go:189] Couldn't get secret openshift-ingress-operator/metrics-tls: secret "metrics-tls" not found Mar 13 12:37:42.392893 master-0 kubenswrapper[7518]: E0313 12:37:42.392847 7518 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2f79578c-bbfb-4968-893a-730deb4c01f9-metrics-tls podName:2f79578c-bbfb-4968-893a-730deb4c01f9 nodeName:}" failed. No retries permitted until 2026-03-13 12:37:58.392832814 +0000 UTC m=+33.025902001 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/2f79578c-bbfb-4968-893a-730deb4c01f9-metrics-tls") pod "ingress-operator-677db989d6-ckl2j" (UID: "2f79578c-bbfb-4968-893a-730deb4c01f9") : secret "metrics-tls" not found Mar 13 12:37:42.393324 master-0 kubenswrapper[7518]: E0313 12:37:42.393287 7518 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/catalog-operator-serving-cert: secret "catalog-operator-serving-cert" not found Mar 13 12:37:42.393393 master-0 kubenswrapper[7518]: E0313 12:37:42.393329 7518 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/10944f9c-8ce9-44e6-9c36-a0ea19d8cae3-srv-cert podName:10944f9c-8ce9-44e6-9c36-a0ea19d8cae3 nodeName:}" failed. No retries permitted until 2026-03-13 12:37:58.393321041 +0000 UTC m=+33.026390228 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/10944f9c-8ce9-44e6-9c36-a0ea19d8cae3-srv-cert") pod "catalog-operator-7d9c49f57b-tlnkd" (UID: "10944f9c-8ce9-44e6-9c36-a0ea19d8cae3") : secret "catalog-operator-serving-cert" not found Mar 13 12:37:42.393393 master-0 kubenswrapper[7518]: E0313 12:37:42.393374 7518 secret.go:189] Couldn't get secret openshift-cluster-version/cluster-version-operator-serving-cert: secret "cluster-version-operator-serving-cert" not found Mar 13 12:37:42.393393 master-0 kubenswrapper[7518]: E0313 12:37:42.393391 7518 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f39d7f76-0075-44c3-9101-eb2607cb176a-serving-cert podName:f39d7f76-0075-44c3-9101-eb2607cb176a nodeName:}" failed. No retries permitted until 2026-03-13 12:37:58.393386162 +0000 UTC m=+33.026455349 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/f39d7f76-0075-44c3-9101-eb2607cb176a-serving-cert") pod "cluster-version-operator-745944c6b7-mbjxt" (UID: "f39d7f76-0075-44c3-9101-eb2607cb176a") : secret "cluster-version-operator-serving-cert" not found Mar 13 12:37:42.493503 master-0 kubenswrapper[7518]: I0313 12:37:42.493419 7518 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/75651bfd-ceaf-4bda-95a3-68ca11ec5abe-config\") pod \"route-controller-manager-54f4b89bbb-sb5x4\" (UID: \"75651bfd-ceaf-4bda-95a3-68ca11ec5abe\") " pod="openshift-route-controller-manager/route-controller-manager-54f4b89bbb-sb5x4" Mar 13 12:37:42.493503 master-0 kubenswrapper[7518]: I0313 12:37:42.493496 7518 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3d653e1a-5903-4a02-9357-df145f028c0d-package-server-manager-serving-cert\") pod \"package-server-manager-854648ff6d-669qk\" (UID: 
\"3d653e1a-5903-4a02-9357-df145f028c0d\") " pod="openshift-operator-lifecycle-manager/package-server-manager-854648ff6d-669qk" Mar 13 12:37:42.493503 master-0 kubenswrapper[7518]: I0313 12:37:42.493517 7518 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/d3d998ee-b26f-4e30-83bc-f94f8c68060a-marketplace-operator-metrics\") pod \"marketplace-operator-64bf9778cb-7qhr4\" (UID: \"d3d998ee-b26f-4e30-83bc-f94f8c68060a\") " pod="openshift-marketplace/marketplace-operator-64bf9778cb-7qhr4" Mar 13 12:37:42.493817 master-0 kubenswrapper[7518]: I0313 12:37:42.493539 7518 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-tuning-operator-tls\" (UniqueName: \"kubernetes.io/secret/3020d236-03e0-4916-97dd-f1085632ca43-node-tuning-operator-tls\") pod \"cluster-node-tuning-operator-66c7586884-cz8pc\" (UID: \"3020d236-03e0-4916-97dd-f1085632ca43\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-66c7586884-cz8pc" Mar 13 12:37:42.493817 master-0 kubenswrapper[7518]: I0313 12:37:42.493554 7518 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/75651bfd-ceaf-4bda-95a3-68ca11ec5abe-serving-cert\") pod \"route-controller-manager-54f4b89bbb-sb5x4\" (UID: \"75651bfd-ceaf-4bda-95a3-68ca11ec5abe\") " pod="openshift-route-controller-manager/route-controller-manager-54f4b89bbb-sb5x4" Mar 13 12:37:42.493817 master-0 kubenswrapper[7518]: I0313 12:37:42.493571 7518 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/4c0b18db-06ad-4d58-a353-f6fd96309dea-webhook-certs\") pod \"multus-admission-controller-8d675b596-96gds\" (UID: \"4c0b18db-06ad-4d58-a353-f6fd96309dea\") " pod="openshift-multus/multus-admission-controller-8d675b596-96gds" Mar 13 12:37:42.493817 master-0 kubenswrapper[7518]: 
I0313 12:37:42.493592 7518 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/3020d236-03e0-4916-97dd-f1085632ca43-apiservice-cert\") pod \"cluster-node-tuning-operator-66c7586884-cz8pc\" (UID: \"3020d236-03e0-4916-97dd-f1085632ca43\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-66c7586884-cz8pc" Mar 13 12:37:42.493817 master-0 kubenswrapper[7518]: I0313 12:37:42.493615 7518 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bcf05594-4c10-4b54-a47c-d55e323f1f87-image-registry-operator-tls\") pod \"cluster-image-registry-operator-86d6d77c7c-q287n\" (UID: \"bcf05594-4c10-4b54-a47c-d55e323f1f87\") " pod="openshift-image-registry/cluster-image-registry-operator-86d6d77c7c-q287n" Mar 13 12:37:42.493817 master-0 kubenswrapper[7518]: I0313 12:37:42.493650 7518 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/13f32761-b386-4f93-b3c0-b16ea53d338a-metrics-tls\") pod \"dns-operator-589895fbb7-mmwk7\" (UID: \"13f32761-b386-4f93-b3c0-b16ea53d338a\") " pod="openshift-dns-operator/dns-operator-589895fbb7-mmwk7" Mar 13 12:37:42.493817 master-0 kubenswrapper[7518]: I0313 12:37:42.493667 7518 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/75651bfd-ceaf-4bda-95a3-68ca11ec5abe-client-ca\") pod \"route-controller-manager-54f4b89bbb-sb5x4\" (UID: \"75651bfd-ceaf-4bda-95a3-68ca11ec5abe\") " pod="openshift-route-controller-manager/route-controller-manager-54f4b89bbb-sb5x4" Mar 13 12:37:42.493817 master-0 kubenswrapper[7518]: I0313 12:37:42.493682 7518 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: 
\"kubernetes.io/secret/29b6aa89-0416-4595-9deb-10b290521d86-metrics-certs\") pod \"network-metrics-daemon-r9lmb\" (UID: \"29b6aa89-0416-4595-9deb-10b290521d86\") " pod="openshift-multus/network-metrics-daemon-r9lmb" Mar 13 12:37:42.493817 master-0 kubenswrapper[7518]: I0313 12:37:42.493702 7518 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/604456a0-4997-43bc-87ef-283a002111fe-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-674cbfbd9d-zwtdz\" (UID: \"604456a0-4997-43bc-87ef-283a002111fe\") " pod="openshift-monitoring/cluster-monitoring-operator-674cbfbd9d-zwtdz" Mar 13 12:37:42.493817 master-0 kubenswrapper[7518]: I0313 12:37:42.493724 7518 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xb9t6\" (UniqueName: \"kubernetes.io/projected/75651bfd-ceaf-4bda-95a3-68ca11ec5abe-kube-api-access-xb9t6\") pod \"route-controller-manager-54f4b89bbb-sb5x4\" (UID: \"75651bfd-ceaf-4bda-95a3-68ca11ec5abe\") " pod="openshift-route-controller-manager/route-controller-manager-54f4b89bbb-sb5x4" Mar 13 12:37:42.493817 master-0 kubenswrapper[7518]: I0313 12:37:42.493748 7518 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/d5a19b80-d488-46d3-a4a8-0b80361077e1-srv-cert\") pod \"olm-operator-d64cfc9db-rfqb9\" (UID: \"d5a19b80-d488-46d3-a4a8-0b80361077e1\") " pod="openshift-operator-lifecycle-manager/olm-operator-d64cfc9db-rfqb9" Mar 13 12:37:42.494252 master-0 kubenswrapper[7518]: E0313 12:37:42.493854 7518 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/olm-operator-serving-cert: secret "olm-operator-serving-cert" not found Mar 13 12:37:42.494252 master-0 kubenswrapper[7518]: E0313 12:37:42.493896 7518 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d5a19b80-d488-46d3-a4a8-0b80361077e1-srv-cert 
podName:d5a19b80-d488-46d3-a4a8-0b80361077e1 nodeName:}" failed. No retries permitted until 2026-03-13 12:37:58.493883459 +0000 UTC m=+33.126952646 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/d5a19b80-d488-46d3-a4a8-0b80361077e1-srv-cert") pod "olm-operator-d64cfc9db-rfqb9" (UID: "d5a19b80-d488-46d3-a4a8-0b80361077e1") : secret "olm-operator-serving-cert" not found Mar 13 12:37:42.494361 master-0 kubenswrapper[7518]: E0313 12:37:42.494291 7518 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/package-server-manager-serving-cert: secret "package-server-manager-serving-cert" not found Mar 13 12:37:42.494361 master-0 kubenswrapper[7518]: E0313 12:37:42.494317 7518 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3d653e1a-5903-4a02-9357-df145f028c0d-package-server-manager-serving-cert podName:3d653e1a-5903-4a02-9357-df145f028c0d nodeName:}" failed. No retries permitted until 2026-03-13 12:37:58.494309175 +0000 UTC m=+33.127378362 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "package-server-manager-serving-cert" (UniqueName: "kubernetes.io/secret/3d653e1a-5903-4a02-9357-df145f028c0d-package-server-manager-serving-cert") pod "package-server-manager-854648ff6d-669qk" (UID: "3d653e1a-5903-4a02-9357-df145f028c0d") : secret "package-server-manager-serving-cert" not found Mar 13 12:37:42.494361 master-0 kubenswrapper[7518]: E0313 12:37:42.494349 7518 secret.go:189] Couldn't get secret openshift-marketplace/marketplace-operator-metrics: secret "marketplace-operator-metrics" not found Mar 13 12:37:42.494498 master-0 kubenswrapper[7518]: E0313 12:37:42.494369 7518 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d3d998ee-b26f-4e30-83bc-f94f8c68060a-marketplace-operator-metrics podName:d3d998ee-b26f-4e30-83bc-f94f8c68060a nodeName:}" failed. 
No retries permitted until 2026-03-13 12:37:58.494363275 +0000 UTC m=+33.127432462 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "marketplace-operator-metrics" (UniqueName: "kubernetes.io/secret/d3d998ee-b26f-4e30-83bc-f94f8c68060a-marketplace-operator-metrics") pod "marketplace-operator-64bf9778cb-7qhr4" (UID: "d3d998ee-b26f-4e30-83bc-f94f8c68060a") : secret "marketplace-operator-metrics" not found Mar 13 12:37:42.494498 master-0 kubenswrapper[7518]: E0313 12:37:42.494398 7518 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/node-tuning-operator-tls: secret "node-tuning-operator-tls" not found Mar 13 12:37:42.494498 master-0 kubenswrapper[7518]: E0313 12:37:42.494415 7518 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3020d236-03e0-4916-97dd-f1085632ca43-node-tuning-operator-tls podName:3020d236-03e0-4916-97dd-f1085632ca43 nodeName:}" failed. No retries permitted until 2026-03-13 12:37:58.494409666 +0000 UTC m=+33.127478853 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "node-tuning-operator-tls" (UniqueName: "kubernetes.io/secret/3020d236-03e0-4916-97dd-f1085632ca43-node-tuning-operator-tls") pod "cluster-node-tuning-operator-66c7586884-cz8pc" (UID: "3020d236-03e0-4916-97dd-f1085632ca43") : secret "node-tuning-operator-tls" not found Mar 13 12:37:42.494498 master-0 kubenswrapper[7518]: E0313 12:37:42.494442 7518 secret.go:189] Couldn't get secret openshift-route-controller-manager/serving-cert: secret "serving-cert" not found Mar 13 12:37:42.494498 master-0 kubenswrapper[7518]: I0313 12:37:42.494447 7518 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/75651bfd-ceaf-4bda-95a3-68ca11ec5abe-config\") pod \"route-controller-manager-54f4b89bbb-sb5x4\" (UID: \"75651bfd-ceaf-4bda-95a3-68ca11ec5abe\") " pod="openshift-route-controller-manager/route-controller-manager-54f4b89bbb-sb5x4" Mar 13 12:37:42.494498 master-0 kubenswrapper[7518]: E0313 12:37:42.494458 7518 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/75651bfd-ceaf-4bda-95a3-68ca11ec5abe-serving-cert podName:75651bfd-ceaf-4bda-95a3-68ca11ec5abe nodeName:}" failed. No retries permitted until 2026-03-13 12:37:42.994453207 +0000 UTC m=+17.627522394 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/75651bfd-ceaf-4bda-95a3-68ca11ec5abe-serving-cert") pod "route-controller-manager-54f4b89bbb-sb5x4" (UID: "75651bfd-ceaf-4bda-95a3-68ca11ec5abe") : secret "serving-cert" not found Mar 13 12:37:42.494498 master-0 kubenswrapper[7518]: E0313 12:37:42.494502 7518 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/performance-addon-operator-webhook-cert: secret "performance-addon-operator-webhook-cert" not found Mar 13 12:37:42.494800 master-0 kubenswrapper[7518]: E0313 12:37:42.494518 7518 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3020d236-03e0-4916-97dd-f1085632ca43-apiservice-cert podName:3020d236-03e0-4916-97dd-f1085632ca43 nodeName:}" failed. No retries permitted until 2026-03-13 12:37:58.494513567 +0000 UTC m=+33.127582754 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "apiservice-cert" (UniqueName: "kubernetes.io/secret/3020d236-03e0-4916-97dd-f1085632ca43-apiservice-cert") pod "cluster-node-tuning-operator-66c7586884-cz8pc" (UID: "3020d236-03e0-4916-97dd-f1085632ca43") : secret "performance-addon-operator-webhook-cert" not found Mar 13 12:37:42.494800 master-0 kubenswrapper[7518]: E0313 12:37:42.494542 7518 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: secret "metrics-daemon-secret" not found Mar 13 12:37:42.494800 master-0 kubenswrapper[7518]: E0313 12:37:42.494550 7518 secret.go:189] Couldn't get secret openshift-monitoring/cluster-monitoring-operator-tls: secret "cluster-monitoring-operator-tls" not found Mar 13 12:37:42.494800 master-0 kubenswrapper[7518]: E0313 12:37:42.494566 7518 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/604456a0-4997-43bc-87ef-283a002111fe-cluster-monitoring-operator-tls podName:604456a0-4997-43bc-87ef-283a002111fe nodeName:}" failed. 
No retries permitted until 2026-03-13 12:37:58.494561318 +0000 UTC m=+33.127630505 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "cluster-monitoring-operator-tls" (UniqueName: "kubernetes.io/secret/604456a0-4997-43bc-87ef-283a002111fe-cluster-monitoring-operator-tls") pod "cluster-monitoring-operator-674cbfbd9d-zwtdz" (UID: "604456a0-4997-43bc-87ef-283a002111fe") : secret "cluster-monitoring-operator-tls" not found Mar 13 12:37:42.494800 master-0 kubenswrapper[7518]: E0313 12:37:42.494575 7518 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/29b6aa89-0416-4595-9deb-10b290521d86-metrics-certs podName:29b6aa89-0416-4595-9deb-10b290521d86 nodeName:}" failed. No retries permitted until 2026-03-13 12:37:58.494570588 +0000 UTC m=+33.127639775 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/29b6aa89-0416-4595-9deb-10b290521d86-metrics-certs") pod "network-metrics-daemon-r9lmb" (UID: "29b6aa89-0416-4595-9deb-10b290521d86") : secret "metrics-daemon-secret" not found Mar 13 12:37:42.494800 master-0 kubenswrapper[7518]: E0313 12:37:42.494609 7518 secret.go:189] Couldn't get secret openshift-image-registry/image-registry-operator-tls: secret "image-registry-operator-tls" not found Mar 13 12:37:42.494800 master-0 kubenswrapper[7518]: E0313 12:37:42.494642 7518 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/bcf05594-4c10-4b54-a47c-d55e323f1f87-image-registry-operator-tls podName:bcf05594-4c10-4b54-a47c-d55e323f1f87 nodeName:}" failed. No retries permitted until 2026-03-13 12:37:58.494628909 +0000 UTC m=+33.127698096 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "image-registry-operator-tls" (UniqueName: "kubernetes.io/secret/bcf05594-4c10-4b54-a47c-d55e323f1f87-image-registry-operator-tls") pod "cluster-image-registry-operator-86d6d77c7c-q287n" (UID: "bcf05594-4c10-4b54-a47c-d55e323f1f87") : secret "image-registry-operator-tls" not found Mar 13 12:37:42.494800 master-0 kubenswrapper[7518]: E0313 12:37:42.494687 7518 secret.go:189] Couldn't get secret openshift-dns-operator/metrics-tls: secret "metrics-tls" not found Mar 13 12:37:42.494800 master-0 kubenswrapper[7518]: E0313 12:37:42.494713 7518 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/13f32761-b386-4f93-b3c0-b16ea53d338a-metrics-tls podName:13f32761-b386-4f93-b3c0-b16ea53d338a nodeName:}" failed. No retries permitted until 2026-03-13 12:37:58.49470486 +0000 UTC m=+33.127774047 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/13f32761-b386-4f93-b3c0-b16ea53d338a-metrics-tls") pod "dns-operator-589895fbb7-mmwk7" (UID: "13f32761-b386-4f93-b3c0-b16ea53d338a") : secret "metrics-tls" not found Mar 13 12:37:42.494800 master-0 kubenswrapper[7518]: E0313 12:37:42.494739 7518 secret.go:189] Couldn't get secret openshift-multus/multus-admission-controller-secret: secret "multus-admission-controller-secret" not found Mar 13 12:37:42.494800 master-0 kubenswrapper[7518]: E0313 12:37:42.494747 7518 configmap.go:193] Couldn't get configMap openshift-route-controller-manager/client-ca: configmap "client-ca" not found Mar 13 12:37:42.494800 master-0 kubenswrapper[7518]: E0313 12:37:42.494760 7518 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4c0b18db-06ad-4d58-a353-f6fd96309dea-webhook-certs podName:4c0b18db-06ad-4d58-a353-f6fd96309dea nodeName:}" failed. No retries permitted until 2026-03-13 12:37:58.494753721 +0000 UTC m=+33.127822908 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/4c0b18db-06ad-4d58-a353-f6fd96309dea-webhook-certs") pod "multus-admission-controller-8d675b596-96gds" (UID: "4c0b18db-06ad-4d58-a353-f6fd96309dea") : secret "multus-admission-controller-secret" not found Mar 13 12:37:42.494800 master-0 kubenswrapper[7518]: E0313 12:37:42.494773 7518 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/75651bfd-ceaf-4bda-95a3-68ca11ec5abe-client-ca podName:75651bfd-ceaf-4bda-95a3-68ca11ec5abe nodeName:}" failed. No retries permitted until 2026-03-13 12:37:42.994764541 +0000 UTC m=+17.627833728 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/75651bfd-ceaf-4bda-95a3-68ca11ec5abe-client-ca") pod "route-controller-manager-54f4b89bbb-sb5x4" (UID: "75651bfd-ceaf-4bda-95a3-68ca11ec5abe") : configmap "client-ca" not found Mar 13 12:37:42.516540 master-0 kubenswrapper[7518]: I0313 12:37:42.516486 7518 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xb9t6\" (UniqueName: \"kubernetes.io/projected/75651bfd-ceaf-4bda-95a3-68ca11ec5abe-kube-api-access-xb9t6\") pod \"route-controller-manager-54f4b89bbb-sb5x4\" (UID: \"75651bfd-ceaf-4bda-95a3-68ca11ec5abe\") " pod="openshift-route-controller-manager/route-controller-manager-54f4b89bbb-sb5x4" Mar 13 12:37:42.796118 master-0 kubenswrapper[7518]: I0313 12:37:42.796005 7518 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/0d1149a6-9d35-470a-aaf2-e5d2f1de19ba-proxy-ca-bundles\") pod \"controller-manager-6f7fd6c796-sjzpb\" (UID: \"0d1149a6-9d35-470a-aaf2-e5d2f1de19ba\") " pod="openshift-controller-manager/controller-manager-6f7fd6c796-sjzpb" Mar 13 12:37:42.796118 master-0 kubenswrapper[7518]: I0313 12:37:42.796061 7518 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0d1149a6-9d35-470a-aaf2-e5d2f1de19ba-config\") pod \"controller-manager-6f7fd6c796-sjzpb\" (UID: \"0d1149a6-9d35-470a-aaf2-e5d2f1de19ba\") " pod="openshift-controller-manager/controller-manager-6f7fd6c796-sjzpb" Mar 13 12:37:42.796118 master-0 kubenswrapper[7518]: I0313 12:37:42.796098 7518 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0d1149a6-9d35-470a-aaf2-e5d2f1de19ba-serving-cert\") pod \"controller-manager-6f7fd6c796-sjzpb\" (UID: \"0d1149a6-9d35-470a-aaf2-e5d2f1de19ba\") " pod="openshift-controller-manager/controller-manager-6f7fd6c796-sjzpb" Mar 13 12:37:42.796118 master-0 kubenswrapper[7518]: I0313 12:37:42.796114 7518 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/0d1149a6-9d35-470a-aaf2-e5d2f1de19ba-client-ca\") pod \"controller-manager-6f7fd6c796-sjzpb\" (UID: \"0d1149a6-9d35-470a-aaf2-e5d2f1de19ba\") " pod="openshift-controller-manager/controller-manager-6f7fd6c796-sjzpb" Mar 13 12:37:42.796377 master-0 kubenswrapper[7518]: E0313 12:37:42.796264 7518 configmap.go:193] Couldn't get configMap openshift-controller-manager/client-ca: configmap "client-ca" not found Mar 13 12:37:42.796377 master-0 kubenswrapper[7518]: E0313 12:37:42.796311 7518 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/0d1149a6-9d35-470a-aaf2-e5d2f1de19ba-client-ca podName:0d1149a6-9d35-470a-aaf2-e5d2f1de19ba nodeName:}" failed. No retries permitted until 2026-03-13 12:37:44.796298473 +0000 UTC m=+19.429367660 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/0d1149a6-9d35-470a-aaf2-e5d2f1de19ba-client-ca") pod "controller-manager-6f7fd6c796-sjzpb" (UID: "0d1149a6-9d35-470a-aaf2-e5d2f1de19ba") : configmap "client-ca" not found Mar 13 12:37:42.797817 master-0 kubenswrapper[7518]: I0313 12:37:42.797774 7518 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/0d1149a6-9d35-470a-aaf2-e5d2f1de19ba-proxy-ca-bundles\") pod \"controller-manager-6f7fd6c796-sjzpb\" (UID: \"0d1149a6-9d35-470a-aaf2-e5d2f1de19ba\") " pod="openshift-controller-manager/controller-manager-6f7fd6c796-sjzpb" Mar 13 12:37:42.798494 master-0 kubenswrapper[7518]: I0313 12:37:42.798460 7518 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0d1149a6-9d35-470a-aaf2-e5d2f1de19ba-config\") pod \"controller-manager-6f7fd6c796-sjzpb\" (UID: \"0d1149a6-9d35-470a-aaf2-e5d2f1de19ba\") " pod="openshift-controller-manager/controller-manager-6f7fd6c796-sjzpb" Mar 13 12:37:42.798567 master-0 kubenswrapper[7518]: E0313 12:37:42.798552 7518 secret.go:189] Couldn't get secret openshift-controller-manager/serving-cert: secret "serving-cert" not found Mar 13 12:37:42.798601 master-0 kubenswrapper[7518]: E0313 12:37:42.798594 7518 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0d1149a6-9d35-470a-aaf2-e5d2f1de19ba-serving-cert podName:0d1149a6-9d35-470a-aaf2-e5d2f1de19ba nodeName:}" failed. No retries permitted until 2026-03-13 12:37:44.798581815 +0000 UTC m=+19.431651002 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/0d1149a6-9d35-470a-aaf2-e5d2f1de19ba-serving-cert") pod "controller-manager-6f7fd6c796-sjzpb" (UID: "0d1149a6-9d35-470a-aaf2-e5d2f1de19ba") : secret "serving-cert" not found Mar 13 12:37:42.997996 master-0 kubenswrapper[7518]: I0313 12:37:42.997910 7518 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/75651bfd-ceaf-4bda-95a3-68ca11ec5abe-serving-cert\") pod \"route-controller-manager-54f4b89bbb-sb5x4\" (UID: \"75651bfd-ceaf-4bda-95a3-68ca11ec5abe\") " pod="openshift-route-controller-manager/route-controller-manager-54f4b89bbb-sb5x4" Mar 13 12:37:42.998265 master-0 kubenswrapper[7518]: E0313 12:37:42.998082 7518 secret.go:189] Couldn't get secret openshift-route-controller-manager/serving-cert: secret "serving-cert" not found Mar 13 12:37:42.998265 master-0 kubenswrapper[7518]: E0313 12:37:42.998180 7518 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/75651bfd-ceaf-4bda-95a3-68ca11ec5abe-serving-cert podName:75651bfd-ceaf-4bda-95a3-68ca11ec5abe nodeName:}" failed. No retries permitted until 2026-03-13 12:37:43.998161638 +0000 UTC m=+18.631230825 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/75651bfd-ceaf-4bda-95a3-68ca11ec5abe-serving-cert") pod "route-controller-manager-54f4b89bbb-sb5x4" (UID: "75651bfd-ceaf-4bda-95a3-68ca11ec5abe") : secret "serving-cert" not found Mar 13 12:37:42.998265 master-0 kubenswrapper[7518]: I0313 12:37:42.998222 7518 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/75651bfd-ceaf-4bda-95a3-68ca11ec5abe-client-ca\") pod \"route-controller-manager-54f4b89bbb-sb5x4\" (UID: \"75651bfd-ceaf-4bda-95a3-68ca11ec5abe\") " pod="openshift-route-controller-manager/route-controller-manager-54f4b89bbb-sb5x4" Mar 13 12:37:42.998389 master-0 kubenswrapper[7518]: E0313 12:37:42.998283 7518 configmap.go:193] Couldn't get configMap openshift-route-controller-manager/client-ca: configmap "client-ca" not found Mar 13 12:37:42.998389 master-0 kubenswrapper[7518]: E0313 12:37:42.998306 7518 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/75651bfd-ceaf-4bda-95a3-68ca11ec5abe-client-ca podName:75651bfd-ceaf-4bda-95a3-68ca11ec5abe nodeName:}" failed. No retries permitted until 2026-03-13 12:37:43.99830006 +0000 UTC m=+18.631369247 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/75651bfd-ceaf-4bda-95a3-68ca11ec5abe-client-ca") pod "route-controller-manager-54f4b89bbb-sb5x4" (UID: "75651bfd-ceaf-4bda-95a3-68ca11ec5abe") : configmap "client-ca" not found Mar 13 12:37:43.014467 master-0 kubenswrapper[7518]: I0313 12:37:43.014409 7518 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-69b6fc6b88-vmscz" event={"ID":"034aaf8e-95df-4171-bae4-e7abe58d15f7","Type":"ContainerStarted","Data":"c27448fad258056de304ba3c30b9268468cc1c542046d6c37c21797efa146b54"} Mar 13 12:37:43.014467 master-0 kubenswrapper[7518]: I0313 12:37:43.014451 7518 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6f7fd6c796-sjzpb" Mar 13 12:37:43.025179 master-0 kubenswrapper[7518]: I0313 12:37:43.025117 7518 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6f7fd6c796-sjzpb" Mar 13 12:37:43.199684 master-0 kubenswrapper[7518]: I0313 12:37:43.199512 7518 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/0d1149a6-9d35-470a-aaf2-e5d2f1de19ba-proxy-ca-bundles\") pod \"0d1149a6-9d35-470a-aaf2-e5d2f1de19ba\" (UID: \"0d1149a6-9d35-470a-aaf2-e5d2f1de19ba\") " Mar 13 12:37:43.199684 master-0 kubenswrapper[7518]: I0313 12:37:43.199598 7518 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-46x2g\" (UniqueName: \"kubernetes.io/projected/0d1149a6-9d35-470a-aaf2-e5d2f1de19ba-kube-api-access-46x2g\") pod \"0d1149a6-9d35-470a-aaf2-e5d2f1de19ba\" (UID: \"0d1149a6-9d35-470a-aaf2-e5d2f1de19ba\") " Mar 13 12:37:43.200003 master-0 kubenswrapper[7518]: I0313 12:37:43.199792 7518 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" 
(UniqueName: \"kubernetes.io/configmap/0d1149a6-9d35-470a-aaf2-e5d2f1de19ba-config\") pod \"0d1149a6-9d35-470a-aaf2-e5d2f1de19ba\" (UID: \"0d1149a6-9d35-470a-aaf2-e5d2f1de19ba\") " Mar 13 12:37:43.200294 master-0 kubenswrapper[7518]: I0313 12:37:43.200089 7518 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0d1149a6-9d35-470a-aaf2-e5d2f1de19ba-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "0d1149a6-9d35-470a-aaf2-e5d2f1de19ba" (UID: "0d1149a6-9d35-470a-aaf2-e5d2f1de19ba"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 13 12:37:43.200426 master-0 kubenswrapper[7518]: I0313 12:37:43.200338 7518 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0d1149a6-9d35-470a-aaf2-e5d2f1de19ba-config" (OuterVolumeSpecName: "config") pod "0d1149a6-9d35-470a-aaf2-e5d2f1de19ba" (UID: "0d1149a6-9d35-470a-aaf2-e5d2f1de19ba"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 13 12:37:43.202214 master-0 kubenswrapper[7518]: I0313 12:37:43.202162 7518 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0d1149a6-9d35-470a-aaf2-e5d2f1de19ba-kube-api-access-46x2g" (OuterVolumeSpecName: "kube-api-access-46x2g") pod "0d1149a6-9d35-470a-aaf2-e5d2f1de19ba" (UID: "0d1149a6-9d35-470a-aaf2-e5d2f1de19ba"). InnerVolumeSpecName "kube-api-access-46x2g". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 12:37:43.300913 master-0 kubenswrapper[7518]: I0313 12:37:43.300861 7518 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/0d1149a6-9d35-470a-aaf2-e5d2f1de19ba-proxy-ca-bundles\") on node \"master-0\" DevicePath \"\"" Mar 13 12:37:43.300913 master-0 kubenswrapper[7518]: I0313 12:37:43.300904 7518 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-46x2g\" (UniqueName: \"kubernetes.io/projected/0d1149a6-9d35-470a-aaf2-e5d2f1de19ba-kube-api-access-46x2g\") on node \"master-0\" DevicePath \"\"" Mar 13 12:37:43.300913 master-0 kubenswrapper[7518]: I0313 12:37:43.300919 7518 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0d1149a6-9d35-470a-aaf2-e5d2f1de19ba-config\") on node \"master-0\" DevicePath \"\"" Mar 13 12:37:44.019784 master-0 kubenswrapper[7518]: I0313 12:37:44.019747 7518 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-6f7fd6c796-sjzpb" Mar 13 12:37:44.020224 master-0 kubenswrapper[7518]: I0313 12:37:44.020196 7518 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-799b6db4d7-xchrj" event={"ID":"089cfabc-9d3d-4260-bb16-8b5eaf73b3fa","Type":"ContainerStarted","Data":"814a1adb650838a7837cee0a591e9eba8984a73367ffe7b1b579ae47de6fda2a"} Mar 13 12:37:44.021336 master-0 kubenswrapper[7518]: I0313 12:37:44.021299 7518 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/75651bfd-ceaf-4bda-95a3-68ca11ec5abe-serving-cert\") pod \"route-controller-manager-54f4b89bbb-sb5x4\" (UID: \"75651bfd-ceaf-4bda-95a3-68ca11ec5abe\") " pod="openshift-route-controller-manager/route-controller-manager-54f4b89bbb-sb5x4" Mar 13 12:37:44.021441 master-0 kubenswrapper[7518]: E0313 12:37:44.021420 7518 secret.go:189] Couldn't get secret openshift-route-controller-manager/serving-cert: secret "serving-cert" not found Mar 13 12:37:44.021625 master-0 kubenswrapper[7518]: E0313 12:37:44.021595 7518 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/75651bfd-ceaf-4bda-95a3-68ca11ec5abe-serving-cert podName:75651bfd-ceaf-4bda-95a3-68ca11ec5abe nodeName:}" failed. No retries permitted until 2026-03-13 12:37:46.021463119 +0000 UTC m=+20.654532306 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/75651bfd-ceaf-4bda-95a3-68ca11ec5abe-serving-cert") pod "route-controller-manager-54f4b89bbb-sb5x4" (UID: "75651bfd-ceaf-4bda-95a3-68ca11ec5abe") : secret "serving-cert" not found Mar 13 12:37:44.021698 master-0 kubenswrapper[7518]: I0313 12:37:44.021677 7518 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/75651bfd-ceaf-4bda-95a3-68ca11ec5abe-client-ca\") pod \"route-controller-manager-54f4b89bbb-sb5x4\" (UID: \"75651bfd-ceaf-4bda-95a3-68ca11ec5abe\") " pod="openshift-route-controller-manager/route-controller-manager-54f4b89bbb-sb5x4" Mar 13 12:37:44.021972 master-0 kubenswrapper[7518]: E0313 12:37:44.021939 7518 configmap.go:193] Couldn't get configMap openshift-route-controller-manager/client-ca: configmap "client-ca" not found Mar 13 12:37:44.022031 master-0 kubenswrapper[7518]: E0313 12:37:44.021980 7518 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/75651bfd-ceaf-4bda-95a3-68ca11ec5abe-client-ca podName:75651bfd-ceaf-4bda-95a3-68ca11ec5abe nodeName:}" failed. No retries permitted until 2026-03-13 12:37:46.021970436 +0000 UTC m=+20.655039623 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/75651bfd-ceaf-4bda-95a3-68ca11ec5abe-client-ca") pod "route-controller-manager-54f4b89bbb-sb5x4" (UID: "75651bfd-ceaf-4bda-95a3-68ca11ec5abe") : configmap "client-ca" not found Mar 13 12:37:44.077792 master-0 kubenswrapper[7518]: I0313 12:37:44.077408 7518 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-5b8775d4fd-2s9f6"] Mar 13 12:37:44.078964 master-0 kubenswrapper[7518]: I0313 12:37:44.078167 7518 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-6f7fd6c796-sjzpb"] Mar 13 12:37:44.078964 master-0 kubenswrapper[7518]: I0313 12:37:44.078257 7518 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-5b8775d4fd-2s9f6" Mar 13 12:37:44.083225 master-0 kubenswrapper[7518]: I0313 12:37:44.081963 7518 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Mar 13 12:37:44.083225 master-0 kubenswrapper[7518]: I0313 12:37:44.082208 7518 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Mar 13 12:37:44.083225 master-0 kubenswrapper[7518]: I0313 12:37:44.082404 7518 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Mar 13 12:37:44.083225 master-0 kubenswrapper[7518]: I0313 12:37:44.082701 7518 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Mar 13 12:37:44.083225 master-0 kubenswrapper[7518]: I0313 12:37:44.082808 7518 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Mar 13 12:37:44.083510 master-0 kubenswrapper[7518]: I0313 12:37:44.083258 7518 kubelet.go:2431] "SyncLoop REMOVE" source="api" 
pods=["openshift-controller-manager/controller-manager-6f7fd6c796-sjzpb"] Mar 13 12:37:44.099504 master-0 kubenswrapper[7518]: I0313 12:37:44.094924 7518 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Mar 13 12:37:44.099504 master-0 kubenswrapper[7518]: I0313 12:37:44.097375 7518 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-5b8775d4fd-2s9f6"] Mar 13 12:37:44.127048 master-0 kubenswrapper[7518]: I0313 12:37:44.125969 7518 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/854220e0-eb13-42fe-a80e-b87c309f027a-serving-cert\") pod \"controller-manager-5b8775d4fd-2s9f6\" (UID: \"854220e0-eb13-42fe-a80e-b87c309f027a\") " pod="openshift-controller-manager/controller-manager-5b8775d4fd-2s9f6" Mar 13 12:37:44.127048 master-0 kubenswrapper[7518]: I0313 12:37:44.126561 7518 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gzxwd\" (UniqueName: \"kubernetes.io/projected/854220e0-eb13-42fe-a80e-b87c309f027a-kube-api-access-gzxwd\") pod \"controller-manager-5b8775d4fd-2s9f6\" (UID: \"854220e0-eb13-42fe-a80e-b87c309f027a\") " pod="openshift-controller-manager/controller-manager-5b8775d4fd-2s9f6" Mar 13 12:37:44.127048 master-0 kubenswrapper[7518]: I0313 12:37:44.126797 7518 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/854220e0-eb13-42fe-a80e-b87c309f027a-client-ca\") pod \"controller-manager-5b8775d4fd-2s9f6\" (UID: \"854220e0-eb13-42fe-a80e-b87c309f027a\") " pod="openshift-controller-manager/controller-manager-5b8775d4fd-2s9f6" Mar 13 12:37:44.127048 master-0 kubenswrapper[7518]: I0313 12:37:44.126844 7518 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"config\" (UniqueName: \"kubernetes.io/configmap/854220e0-eb13-42fe-a80e-b87c309f027a-config\") pod \"controller-manager-5b8775d4fd-2s9f6\" (UID: \"854220e0-eb13-42fe-a80e-b87c309f027a\") " pod="openshift-controller-manager/controller-manager-5b8775d4fd-2s9f6" Mar 13 12:37:44.127048 master-0 kubenswrapper[7518]: I0313 12:37:44.126904 7518 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/854220e0-eb13-42fe-a80e-b87c309f027a-proxy-ca-bundles\") pod \"controller-manager-5b8775d4fd-2s9f6\" (UID: \"854220e0-eb13-42fe-a80e-b87c309f027a\") " pod="openshift-controller-manager/controller-manager-5b8775d4fd-2s9f6" Mar 13 12:37:44.127048 master-0 kubenswrapper[7518]: I0313 12:37:44.127037 7518 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0d1149a6-9d35-470a-aaf2-e5d2f1de19ba-serving-cert\") on node \"master-0\" DevicePath \"\"" Mar 13 12:37:44.127048 master-0 kubenswrapper[7518]: I0313 12:37:44.127055 7518 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/0d1149a6-9d35-470a-aaf2-e5d2f1de19ba-client-ca\") on node \"master-0\" DevicePath \"\"" Mar 13 12:37:44.158044 master-0 kubenswrapper[7518]: I0313 12:37:44.155178 7518 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-5b8775d4fd-2s9f6"] Mar 13 12:37:44.158044 master-0 kubenswrapper[7518]: E0313 12:37:44.155604 7518 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[client-ca config kube-api-access-gzxwd proxy-ca-bundles serving-cert], unattached volumes=[], failed to process volumes=[]: context canceled" pod="openshift-controller-manager/controller-manager-5b8775d4fd-2s9f6" podUID="854220e0-eb13-42fe-a80e-b87c309f027a" Mar 13 12:37:44.228686 master-0 kubenswrapper[7518]: I0313 12:37:44.228621 7518 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/854220e0-eb13-42fe-a80e-b87c309f027a-serving-cert\") pod \"controller-manager-5b8775d4fd-2s9f6\" (UID: \"854220e0-eb13-42fe-a80e-b87c309f027a\") " pod="openshift-controller-manager/controller-manager-5b8775d4fd-2s9f6" Mar 13 12:37:44.228899 master-0 kubenswrapper[7518]: I0313 12:37:44.228697 7518 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gzxwd\" (UniqueName: \"kubernetes.io/projected/854220e0-eb13-42fe-a80e-b87c309f027a-kube-api-access-gzxwd\") pod \"controller-manager-5b8775d4fd-2s9f6\" (UID: \"854220e0-eb13-42fe-a80e-b87c309f027a\") " pod="openshift-controller-manager/controller-manager-5b8775d4fd-2s9f6" Mar 13 12:37:44.228899 master-0 kubenswrapper[7518]: E0313 12:37:44.228824 7518 secret.go:189] Couldn't get secret openshift-controller-manager/serving-cert: secret "serving-cert" not found Mar 13 12:37:44.228899 master-0 kubenswrapper[7518]: E0313 12:37:44.228895 7518 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/854220e0-eb13-42fe-a80e-b87c309f027a-serving-cert podName:854220e0-eb13-42fe-a80e-b87c309f027a nodeName:}" failed. No retries permitted until 2026-03-13 12:37:44.728875814 +0000 UTC m=+19.361945011 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/854220e0-eb13-42fe-a80e-b87c309f027a-serving-cert") pod "controller-manager-5b8775d4fd-2s9f6" (UID: "854220e0-eb13-42fe-a80e-b87c309f027a") : secret "serving-cert" not found Mar 13 12:37:44.229030 master-0 kubenswrapper[7518]: I0313 12:37:44.228987 7518 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/854220e0-eb13-42fe-a80e-b87c309f027a-client-ca\") pod \"controller-manager-5b8775d4fd-2s9f6\" (UID: \"854220e0-eb13-42fe-a80e-b87c309f027a\") " pod="openshift-controller-manager/controller-manager-5b8775d4fd-2s9f6" Mar 13 12:37:44.229069 master-0 kubenswrapper[7518]: I0313 12:37:44.229033 7518 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/854220e0-eb13-42fe-a80e-b87c309f027a-config\") pod \"controller-manager-5b8775d4fd-2s9f6\" (UID: \"854220e0-eb13-42fe-a80e-b87c309f027a\") " pod="openshift-controller-manager/controller-manager-5b8775d4fd-2s9f6" Mar 13 12:37:44.229097 master-0 kubenswrapper[7518]: I0313 12:37:44.229070 7518 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/854220e0-eb13-42fe-a80e-b87c309f027a-proxy-ca-bundles\") pod \"controller-manager-5b8775d4fd-2s9f6\" (UID: \"854220e0-eb13-42fe-a80e-b87c309f027a\") " pod="openshift-controller-manager/controller-manager-5b8775d4fd-2s9f6" Mar 13 12:37:44.230744 master-0 kubenswrapper[7518]: E0313 12:37:44.230390 7518 configmap.go:193] Couldn't get configMap openshift-controller-manager/client-ca: configmap "client-ca" not found Mar 13 12:37:44.230744 master-0 kubenswrapper[7518]: E0313 12:37:44.230450 7518 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/854220e0-eb13-42fe-a80e-b87c309f027a-client-ca podName:854220e0-eb13-42fe-a80e-b87c309f027a 
nodeName:}" failed. No retries permitted until 2026-03-13 12:37:44.730437236 +0000 UTC m=+19.363506423 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/854220e0-eb13-42fe-a80e-b87c309f027a-client-ca") pod "controller-manager-5b8775d4fd-2s9f6" (UID: "854220e0-eb13-42fe-a80e-b87c309f027a") : configmap "client-ca" not found Mar 13 12:37:44.230744 master-0 kubenswrapper[7518]: I0313 12:37:44.230402 7518 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/854220e0-eb13-42fe-a80e-b87c309f027a-proxy-ca-bundles\") pod \"controller-manager-5b8775d4fd-2s9f6\" (UID: \"854220e0-eb13-42fe-a80e-b87c309f027a\") " pod="openshift-controller-manager/controller-manager-5b8775d4fd-2s9f6" Mar 13 12:37:44.231097 master-0 kubenswrapper[7518]: I0313 12:37:44.230968 7518 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/854220e0-eb13-42fe-a80e-b87c309f027a-config\") pod \"controller-manager-5b8775d4fd-2s9f6\" (UID: \"854220e0-eb13-42fe-a80e-b87c309f027a\") " pod="openshift-controller-manager/controller-manager-5b8775d4fd-2s9f6" Mar 13 12:37:44.249885 master-0 kubenswrapper[7518]: I0313 12:37:44.249831 7518 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gzxwd\" (UniqueName: \"kubernetes.io/projected/854220e0-eb13-42fe-a80e-b87c309f027a-kube-api-access-gzxwd\") pod \"controller-manager-5b8775d4fd-2s9f6\" (UID: \"854220e0-eb13-42fe-a80e-b87c309f027a\") " pod="openshift-controller-manager/controller-manager-5b8775d4fd-2s9f6" Mar 13 12:37:44.734887 master-0 kubenswrapper[7518]: I0313 12:37:44.734825 7518 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/854220e0-eb13-42fe-a80e-b87c309f027a-serving-cert\") pod \"controller-manager-5b8775d4fd-2s9f6\" (UID: 
\"854220e0-eb13-42fe-a80e-b87c309f027a\") " pod="openshift-controller-manager/controller-manager-5b8775d4fd-2s9f6" Mar 13 12:37:44.735617 master-0 kubenswrapper[7518]: E0313 12:37:44.734948 7518 secret.go:189] Couldn't get secret openshift-controller-manager/serving-cert: secret "serving-cert" not found Mar 13 12:37:44.735617 master-0 kubenswrapper[7518]: I0313 12:37:44.735024 7518 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/854220e0-eb13-42fe-a80e-b87c309f027a-client-ca\") pod \"controller-manager-5b8775d4fd-2s9f6\" (UID: \"854220e0-eb13-42fe-a80e-b87c309f027a\") " pod="openshift-controller-manager/controller-manager-5b8775d4fd-2s9f6" Mar 13 12:37:44.735617 master-0 kubenswrapper[7518]: E0313 12:37:44.735044 7518 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/854220e0-eb13-42fe-a80e-b87c309f027a-serving-cert podName:854220e0-eb13-42fe-a80e-b87c309f027a nodeName:}" failed. No retries permitted until 2026-03-13 12:37:45.73502101 +0000 UTC m=+20.368090237 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/854220e0-eb13-42fe-a80e-b87c309f027a-serving-cert") pod "controller-manager-5b8775d4fd-2s9f6" (UID: "854220e0-eb13-42fe-a80e-b87c309f027a") : secret "serving-cert" not found Mar 13 12:37:44.735617 master-0 kubenswrapper[7518]: E0313 12:37:44.735112 7518 configmap.go:193] Couldn't get configMap openshift-controller-manager/client-ca: configmap "client-ca" not found Mar 13 12:37:44.735617 master-0 kubenswrapper[7518]: E0313 12:37:44.735201 7518 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/854220e0-eb13-42fe-a80e-b87c309f027a-client-ca podName:854220e0-eb13-42fe-a80e-b87c309f027a nodeName:}" failed. No retries permitted until 2026-03-13 12:37:45.735179253 +0000 UTC m=+20.368248540 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/854220e0-eb13-42fe-a80e-b87c309f027a-client-ca") pod "controller-manager-5b8775d4fd-2s9f6" (UID: "854220e0-eb13-42fe-a80e-b87c309f027a") : configmap "client-ca" not found Mar 13 12:37:45.025422 master-0 kubenswrapper[7518]: I0313 12:37:45.025377 7518 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-5884b9cd56-hjzms" event={"ID":"15b592d6-3c48-45d4-9172-d28632ae8995","Type":"ContainerStarted","Data":"c3cc4d20a3385510f2813df129cea65d1b836444e4586b47995a2d6b48933eba"} Mar 13 12:37:45.027271 master-0 kubenswrapper[7518]: I0313 12:37:45.027251 7518 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-5b8775d4fd-2s9f6" Mar 13 12:37:45.027507 master-0 kubenswrapper[7518]: I0313 12:37:45.027491 7518 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-7f65c457f5-hrm82" event={"ID":"f5775266-5e58-44ed-81cb-dfe3faf38add","Type":"ContainerStarted","Data":"b93548b4b4252ac17adfb04acbab06411e860b90fed7b1160d6dcde46321cd0a"} Mar 13 12:37:45.032871 master-0 kubenswrapper[7518]: I0313 12:37:45.032851 7518 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-5b8775d4fd-2s9f6" Mar 13 12:37:45.140642 master-0 kubenswrapper[7518]: I0313 12:37:45.140591 7518 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/854220e0-eb13-42fe-a80e-b87c309f027a-proxy-ca-bundles\") pod \"854220e0-eb13-42fe-a80e-b87c309f027a\" (UID: \"854220e0-eb13-42fe-a80e-b87c309f027a\") " Mar 13 12:37:45.140642 master-0 kubenswrapper[7518]: I0313 12:37:45.140640 7518 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gzxwd\" (UniqueName: \"kubernetes.io/projected/854220e0-eb13-42fe-a80e-b87c309f027a-kube-api-access-gzxwd\") pod \"854220e0-eb13-42fe-a80e-b87c309f027a\" (UID: \"854220e0-eb13-42fe-a80e-b87c309f027a\") " Mar 13 12:37:45.140924 master-0 kubenswrapper[7518]: I0313 12:37:45.140678 7518 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/854220e0-eb13-42fe-a80e-b87c309f027a-config\") pod \"854220e0-eb13-42fe-a80e-b87c309f027a\" (UID: \"854220e0-eb13-42fe-a80e-b87c309f027a\") " Mar 13 12:37:45.142108 master-0 kubenswrapper[7518]: I0313 12:37:45.141691 7518 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/854220e0-eb13-42fe-a80e-b87c309f027a-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "854220e0-eb13-42fe-a80e-b87c309f027a" (UID: "854220e0-eb13-42fe-a80e-b87c309f027a"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 13 12:37:45.142108 master-0 kubenswrapper[7518]: I0313 12:37:45.141821 7518 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/854220e0-eb13-42fe-a80e-b87c309f027a-config" (OuterVolumeSpecName: "config") pod "854220e0-eb13-42fe-a80e-b87c309f027a" (UID: "854220e0-eb13-42fe-a80e-b87c309f027a"). 
InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 13 12:37:45.144701 master-0 kubenswrapper[7518]: I0313 12:37:45.144666 7518 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/854220e0-eb13-42fe-a80e-b87c309f027a-kube-api-access-gzxwd" (OuterVolumeSpecName: "kube-api-access-gzxwd") pod "854220e0-eb13-42fe-a80e-b87c309f027a" (UID: "854220e0-eb13-42fe-a80e-b87c309f027a"). InnerVolumeSpecName "kube-api-access-gzxwd". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 12:37:45.241851 master-0 kubenswrapper[7518]: I0313 12:37:45.241786 7518 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/854220e0-eb13-42fe-a80e-b87c309f027a-proxy-ca-bundles\") on node \"master-0\" DevicePath \"\"" Mar 13 12:37:45.241851 master-0 kubenswrapper[7518]: I0313 12:37:45.241822 7518 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gzxwd\" (UniqueName: \"kubernetes.io/projected/854220e0-eb13-42fe-a80e-b87c309f027a-kube-api-access-gzxwd\") on node \"master-0\" DevicePath \"\"" Mar 13 12:37:45.241851 master-0 kubenswrapper[7518]: I0313 12:37:45.241837 7518 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/854220e0-eb13-42fe-a80e-b87c309f027a-config\") on node \"master-0\" DevicePath \"\"" Mar 13 12:37:45.362786 master-0 kubenswrapper[7518]: I0313 12:37:45.362685 7518 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca/service-ca-84bfdbbb7f-4pksg"] Mar 13 12:37:45.363194 master-0 kubenswrapper[7518]: I0313 12:37:45.363134 7518 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca/service-ca-84bfdbbb7f-4pksg" Mar 13 12:37:45.366435 master-0 kubenswrapper[7518]: I0313 12:37:45.365764 7518 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Mar 13 12:37:45.366435 master-0 kubenswrapper[7518]: I0313 12:37:45.365995 7518 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Mar 13 12:37:45.366435 master-0 kubenswrapper[7518]: I0313 12:37:45.366160 7518 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Mar 13 12:37:45.366435 master-0 kubenswrapper[7518]: I0313 12:37:45.366324 7518 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Mar 13 12:37:45.375165 master-0 kubenswrapper[7518]: I0313 12:37:45.375107 7518 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-84bfdbbb7f-4pksg"] Mar 13 12:37:45.544792 master-0 kubenswrapper[7518]: I0313 12:37:45.544739 7518 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/c0f3e81c-f61d-430a-98e8-82e3b283fc73-signing-cabundle\") pod \"service-ca-84bfdbbb7f-4pksg\" (UID: \"c0f3e81c-f61d-430a-98e8-82e3b283fc73\") " pod="openshift-service-ca/service-ca-84bfdbbb7f-4pksg" Mar 13 12:37:45.544973 master-0 kubenswrapper[7518]: I0313 12:37:45.544835 7518 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-65ts9\" (UniqueName: \"kubernetes.io/projected/c0f3e81c-f61d-430a-98e8-82e3b283fc73-kube-api-access-65ts9\") pod \"service-ca-84bfdbbb7f-4pksg\" (UID: \"c0f3e81c-f61d-430a-98e8-82e3b283fc73\") " pod="openshift-service-ca/service-ca-84bfdbbb7f-4pksg" Mar 13 12:37:45.545037 master-0 kubenswrapper[7518]: I0313 12:37:45.544999 7518 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/c0f3e81c-f61d-430a-98e8-82e3b283fc73-signing-key\") pod \"service-ca-84bfdbbb7f-4pksg\" (UID: \"c0f3e81c-f61d-430a-98e8-82e3b283fc73\") " pod="openshift-service-ca/service-ca-84bfdbbb7f-4pksg" Mar 13 12:37:45.610453 master-0 kubenswrapper[7518]: I0313 12:37:45.604254 7518 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0d1149a6-9d35-470a-aaf2-e5d2f1de19ba" path="/var/lib/kubelet/pods/0d1149a6-9d35-470a-aaf2-e5d2f1de19ba/volumes" Mar 13 12:37:45.646162 master-0 kubenswrapper[7518]: I0313 12:37:45.646097 7518 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/c0f3e81c-f61d-430a-98e8-82e3b283fc73-signing-cabundle\") pod \"service-ca-84bfdbbb7f-4pksg\" (UID: \"c0f3e81c-f61d-430a-98e8-82e3b283fc73\") " pod="openshift-service-ca/service-ca-84bfdbbb7f-4pksg" Mar 13 12:37:45.646481 master-0 kubenswrapper[7518]: I0313 12:37:45.646464 7518 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-65ts9\" (UniqueName: \"kubernetes.io/projected/c0f3e81c-f61d-430a-98e8-82e3b283fc73-kube-api-access-65ts9\") pod \"service-ca-84bfdbbb7f-4pksg\" (UID: \"c0f3e81c-f61d-430a-98e8-82e3b283fc73\") " pod="openshift-service-ca/service-ca-84bfdbbb7f-4pksg" Mar 13 12:37:45.646797 master-0 kubenswrapper[7518]: I0313 12:37:45.646780 7518 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/c0f3e81c-f61d-430a-98e8-82e3b283fc73-signing-key\") pod \"service-ca-84bfdbbb7f-4pksg\" (UID: \"c0f3e81c-f61d-430a-98e8-82e3b283fc73\") " pod="openshift-service-ca/service-ca-84bfdbbb7f-4pksg" Mar 13 12:37:45.648560 master-0 kubenswrapper[7518]: I0313 12:37:45.648131 7518 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-cabundle\" (UniqueName: 
\"kubernetes.io/configmap/c0f3e81c-f61d-430a-98e8-82e3b283fc73-signing-cabundle\") pod \"service-ca-84bfdbbb7f-4pksg\" (UID: \"c0f3e81c-f61d-430a-98e8-82e3b283fc73\") " pod="openshift-service-ca/service-ca-84bfdbbb7f-4pksg" Mar 13 12:37:45.650748 master-0 kubenswrapper[7518]: I0313 12:37:45.650682 7518 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/c0f3e81c-f61d-430a-98e8-82e3b283fc73-signing-key\") pod \"service-ca-84bfdbbb7f-4pksg\" (UID: \"c0f3e81c-f61d-430a-98e8-82e3b283fc73\") " pod="openshift-service-ca/service-ca-84bfdbbb7f-4pksg" Mar 13 12:37:45.665324 master-0 kubenswrapper[7518]: I0313 12:37:45.665242 7518 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-65ts9\" (UniqueName: \"kubernetes.io/projected/c0f3e81c-f61d-430a-98e8-82e3b283fc73-kube-api-access-65ts9\") pod \"service-ca-84bfdbbb7f-4pksg\" (UID: \"c0f3e81c-f61d-430a-98e8-82e3b283fc73\") " pod="openshift-service-ca/service-ca-84bfdbbb7f-4pksg" Mar 13 12:37:45.685869 master-0 kubenswrapper[7518]: I0313 12:37:45.685802 7518 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca/service-ca-84bfdbbb7f-4pksg" Mar 13 12:37:45.749312 master-0 kubenswrapper[7518]: I0313 12:37:45.749253 7518 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/854220e0-eb13-42fe-a80e-b87c309f027a-client-ca\") pod \"controller-manager-5b8775d4fd-2s9f6\" (UID: \"854220e0-eb13-42fe-a80e-b87c309f027a\") " pod="openshift-controller-manager/controller-manager-5b8775d4fd-2s9f6" Mar 13 12:37:45.749803 master-0 kubenswrapper[7518]: I0313 12:37:45.749768 7518 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/854220e0-eb13-42fe-a80e-b87c309f027a-serving-cert\") pod \"controller-manager-5b8775d4fd-2s9f6\" (UID: \"854220e0-eb13-42fe-a80e-b87c309f027a\") " pod="openshift-controller-manager/controller-manager-5b8775d4fd-2s9f6" Mar 13 12:37:45.750253 master-0 kubenswrapper[7518]: E0313 12:37:45.749880 7518 secret.go:189] Couldn't get secret openshift-controller-manager/serving-cert: secret "serving-cert" not found Mar 13 12:37:45.750253 master-0 kubenswrapper[7518]: E0313 12:37:45.749933 7518 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/854220e0-eb13-42fe-a80e-b87c309f027a-serving-cert podName:854220e0-eb13-42fe-a80e-b87c309f027a nodeName:}" failed. No retries permitted until 2026-03-13 12:37:47.74991875 +0000 UTC m=+22.382987937 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/854220e0-eb13-42fe-a80e-b87c309f027a-serving-cert") pod "controller-manager-5b8775d4fd-2s9f6" (UID: "854220e0-eb13-42fe-a80e-b87c309f027a") : secret "serving-cert" not found Mar 13 12:37:45.750253 master-0 kubenswrapper[7518]: E0313 12:37:45.749999 7518 configmap.go:193] Couldn't get configMap openshift-controller-manager/client-ca: configmap "client-ca" not found Mar 13 12:37:45.750253 master-0 kubenswrapper[7518]: E0313 12:37:45.750065 7518 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/854220e0-eb13-42fe-a80e-b87c309f027a-client-ca podName:854220e0-eb13-42fe-a80e-b87c309f027a nodeName:}" failed. No retries permitted until 2026-03-13 12:37:47.750046832 +0000 UTC m=+22.383116069 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/854220e0-eb13-42fe-a80e-b87c309f027a-client-ca") pod "controller-manager-5b8775d4fd-2s9f6" (UID: "854220e0-eb13-42fe-a80e-b87c309f027a") : configmap "client-ca" not found Mar 13 12:37:45.945930 master-0 kubenswrapper[7518]: I0313 12:37:45.945621 7518 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-84bfdbbb7f-4pksg"] Mar 13 12:37:46.031654 master-0 kubenswrapper[7518]: I0313 12:37:46.031594 7518 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-84bfdbbb7f-4pksg" event={"ID":"c0f3e81c-f61d-430a-98e8-82e3b283fc73","Type":"ContainerStarted","Data":"3f65f8f162278830720a8d0df1f4af830419eb457612c65a706c42ccf3c12587"} Mar 13 12:37:46.031654 master-0 kubenswrapper[7518]: I0313 12:37:46.031621 7518 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-5b8775d4fd-2s9f6" Mar 13 12:37:46.055117 master-0 kubenswrapper[7518]: I0313 12:37:46.055050 7518 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/75651bfd-ceaf-4bda-95a3-68ca11ec5abe-serving-cert\") pod \"route-controller-manager-54f4b89bbb-sb5x4\" (UID: \"75651bfd-ceaf-4bda-95a3-68ca11ec5abe\") " pod="openshift-route-controller-manager/route-controller-manager-54f4b89bbb-sb5x4" Mar 13 12:37:46.055354 master-0 kubenswrapper[7518]: I0313 12:37:46.055167 7518 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/75651bfd-ceaf-4bda-95a3-68ca11ec5abe-client-ca\") pod \"route-controller-manager-54f4b89bbb-sb5x4\" (UID: \"75651bfd-ceaf-4bda-95a3-68ca11ec5abe\") " pod="openshift-route-controller-manager/route-controller-manager-54f4b89bbb-sb5x4" Mar 13 12:37:46.055519 master-0 kubenswrapper[7518]: E0313 12:37:46.055333 7518 secret.go:189] Couldn't get secret openshift-route-controller-manager/serving-cert: secret "serving-cert" not found Mar 13 12:37:46.055519 master-0 kubenswrapper[7518]: E0313 12:37:46.055447 7518 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/75651bfd-ceaf-4bda-95a3-68ca11ec5abe-serving-cert podName:75651bfd-ceaf-4bda-95a3-68ca11ec5abe nodeName:}" failed. No retries permitted until 2026-03-13 12:37:50.055419868 +0000 UTC m=+24.688489085 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/75651bfd-ceaf-4bda-95a3-68ca11ec5abe-serving-cert") pod "route-controller-manager-54f4b89bbb-sb5x4" (UID: "75651bfd-ceaf-4bda-95a3-68ca11ec5abe") : secret "serving-cert" not found Mar 13 12:37:46.055628 master-0 kubenswrapper[7518]: E0313 12:37:46.055559 7518 configmap.go:193] Couldn't get configMap openshift-route-controller-manager/client-ca: configmap "client-ca" not found Mar 13 12:37:46.057679 master-0 kubenswrapper[7518]: E0313 12:37:46.055820 7518 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/75651bfd-ceaf-4bda-95a3-68ca11ec5abe-client-ca podName:75651bfd-ceaf-4bda-95a3-68ca11ec5abe nodeName:}" failed. No retries permitted until 2026-03-13 12:37:50.055796614 +0000 UTC m=+24.688865801 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/75651bfd-ceaf-4bda-95a3-68ca11ec5abe-client-ca") pod "route-controller-manager-54f4b89bbb-sb5x4" (UID: "75651bfd-ceaf-4bda-95a3-68ca11ec5abe") : configmap "client-ca" not found Mar 13 12:37:46.090178 master-0 kubenswrapper[7518]: I0313 12:37:46.090106 7518 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-988c89bfb-rl6tb"] Mar 13 12:37:46.100217 master-0 kubenswrapper[7518]: I0313 12:37:46.093526 7518 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-988c89bfb-rl6tb" Mar 13 12:37:46.100217 master-0 kubenswrapper[7518]: I0313 12:37:46.095725 7518 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Mar 13 12:37:46.100217 master-0 kubenswrapper[7518]: I0313 12:37:46.096121 7518 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Mar 13 12:37:46.100217 master-0 kubenswrapper[7518]: I0313 12:37:46.096321 7518 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Mar 13 12:37:46.100217 master-0 kubenswrapper[7518]: I0313 12:37:46.096575 7518 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-5b8775d4fd-2s9f6"] Mar 13 12:37:46.105463 master-0 kubenswrapper[7518]: I0313 12:37:46.102269 7518 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Mar 13 12:37:46.105463 master-0 kubenswrapper[7518]: I0313 12:37:46.102493 7518 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-5b8775d4fd-2s9f6"] Mar 13 12:37:46.105463 master-0 kubenswrapper[7518]: I0313 12:37:46.102567 7518 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Mar 13 12:37:46.105778 master-0 kubenswrapper[7518]: I0313 12:37:46.105723 7518 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-988c89bfb-rl6tb"] Mar 13 12:37:46.105874 master-0 kubenswrapper[7518]: I0313 12:37:46.105788 7518 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Mar 13 12:37:46.258218 master-0 kubenswrapper[7518]: I0313 12:37:46.257852 7518 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7ac704c6-e2a9-4a53-99d5-5be1db776558-client-ca\") pod \"controller-manager-988c89bfb-rl6tb\" (UID: \"7ac704c6-e2a9-4a53-99d5-5be1db776558\") " pod="openshift-controller-manager/controller-manager-988c89bfb-rl6tb" Mar 13 12:37:46.258218 master-0 kubenswrapper[7518]: I0313 12:37:46.257906 7518 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7ac704c6-e2a9-4a53-99d5-5be1db776558-serving-cert\") pod \"controller-manager-988c89bfb-rl6tb\" (UID: \"7ac704c6-e2a9-4a53-99d5-5be1db776558\") " pod="openshift-controller-manager/controller-manager-988c89bfb-rl6tb" Mar 13 12:37:46.258218 master-0 kubenswrapper[7518]: I0313 12:37:46.257988 7518 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hsf79\" (UniqueName: \"kubernetes.io/projected/7ac704c6-e2a9-4a53-99d5-5be1db776558-kube-api-access-hsf79\") pod \"controller-manager-988c89bfb-rl6tb\" (UID: \"7ac704c6-e2a9-4a53-99d5-5be1db776558\") " pod="openshift-controller-manager/controller-manager-988c89bfb-rl6tb" Mar 13 12:37:46.258218 master-0 kubenswrapper[7518]: I0313 12:37:46.258029 7518 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7ac704c6-e2a9-4a53-99d5-5be1db776558-config\") pod \"controller-manager-988c89bfb-rl6tb\" (UID: \"7ac704c6-e2a9-4a53-99d5-5be1db776558\") " pod="openshift-controller-manager/controller-manager-988c89bfb-rl6tb" Mar 13 12:37:46.258218 master-0 kubenswrapper[7518]: I0313 12:37:46.258074 7518 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7ac704c6-e2a9-4a53-99d5-5be1db776558-proxy-ca-bundles\") pod 
\"controller-manager-988c89bfb-rl6tb\" (UID: \"7ac704c6-e2a9-4a53-99d5-5be1db776558\") " pod="openshift-controller-manager/controller-manager-988c89bfb-rl6tb" Mar 13 12:37:46.258218 master-0 kubenswrapper[7518]: I0313 12:37:46.258124 7518 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/854220e0-eb13-42fe-a80e-b87c309f027a-client-ca\") on node \"master-0\" DevicePath \"\"" Mar 13 12:37:46.258218 master-0 kubenswrapper[7518]: I0313 12:37:46.258147 7518 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/854220e0-eb13-42fe-a80e-b87c309f027a-serving-cert\") on node \"master-0\" DevicePath \"\"" Mar 13 12:37:46.359383 master-0 kubenswrapper[7518]: I0313 12:37:46.359327 7518 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7ac704c6-e2a9-4a53-99d5-5be1db776558-client-ca\") pod \"controller-manager-988c89bfb-rl6tb\" (UID: \"7ac704c6-e2a9-4a53-99d5-5be1db776558\") " pod="openshift-controller-manager/controller-manager-988c89bfb-rl6tb" Mar 13 12:37:46.359383 master-0 kubenswrapper[7518]: I0313 12:37:46.359376 7518 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7ac704c6-e2a9-4a53-99d5-5be1db776558-serving-cert\") pod \"controller-manager-988c89bfb-rl6tb\" (UID: \"7ac704c6-e2a9-4a53-99d5-5be1db776558\") " pod="openshift-controller-manager/controller-manager-988c89bfb-rl6tb" Mar 13 12:37:46.359594 master-0 kubenswrapper[7518]: E0313 12:37:46.359511 7518 configmap.go:193] Couldn't get configMap openshift-controller-manager/client-ca: configmap "client-ca" not found Mar 13 12:37:46.359623 master-0 kubenswrapper[7518]: E0313 12:37:46.359599 7518 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/7ac704c6-e2a9-4a53-99d5-5be1db776558-client-ca 
podName:7ac704c6-e2a9-4a53-99d5-5be1db776558 nodeName:}" failed. No retries permitted until 2026-03-13 12:37:46.859577107 +0000 UTC m=+21.492646294 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/7ac704c6-e2a9-4a53-99d5-5be1db776558-client-ca") pod "controller-manager-988c89bfb-rl6tb" (UID: "7ac704c6-e2a9-4a53-99d5-5be1db776558") : configmap "client-ca" not found Mar 13 12:37:46.361892 master-0 kubenswrapper[7518]: E0313 12:37:46.359660 7518 secret.go:189] Couldn't get secret openshift-controller-manager/serving-cert: secret "serving-cert" not found Mar 13 12:37:46.361892 master-0 kubenswrapper[7518]: I0313 12:37:46.359668 7518 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hsf79\" (UniqueName: \"kubernetes.io/projected/7ac704c6-e2a9-4a53-99d5-5be1db776558-kube-api-access-hsf79\") pod \"controller-manager-988c89bfb-rl6tb\" (UID: \"7ac704c6-e2a9-4a53-99d5-5be1db776558\") " pod="openshift-controller-manager/controller-manager-988c89bfb-rl6tb" Mar 13 12:37:46.361892 master-0 kubenswrapper[7518]: E0313 12:37:46.359719 7518 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7ac704c6-e2a9-4a53-99d5-5be1db776558-serving-cert podName:7ac704c6-e2a9-4a53-99d5-5be1db776558 nodeName:}" failed. No retries permitted until 2026-03-13 12:37:46.859705569 +0000 UTC m=+21.492774756 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/7ac704c6-e2a9-4a53-99d5-5be1db776558-serving-cert") pod "controller-manager-988c89bfb-rl6tb" (UID: "7ac704c6-e2a9-4a53-99d5-5be1db776558") : secret "serving-cert" not found Mar 13 12:37:46.361892 master-0 kubenswrapper[7518]: I0313 12:37:46.359877 7518 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7ac704c6-e2a9-4a53-99d5-5be1db776558-config\") pod \"controller-manager-988c89bfb-rl6tb\" (UID: \"7ac704c6-e2a9-4a53-99d5-5be1db776558\") " pod="openshift-controller-manager/controller-manager-988c89bfb-rl6tb" Mar 13 12:37:46.361892 master-0 kubenswrapper[7518]: I0313 12:37:46.360086 7518 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7ac704c6-e2a9-4a53-99d5-5be1db776558-proxy-ca-bundles\") pod \"controller-manager-988c89bfb-rl6tb\" (UID: \"7ac704c6-e2a9-4a53-99d5-5be1db776558\") " pod="openshift-controller-manager/controller-manager-988c89bfb-rl6tb" Mar 13 12:37:46.361892 master-0 kubenswrapper[7518]: I0313 12:37:46.361121 7518 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7ac704c6-e2a9-4a53-99d5-5be1db776558-config\") pod \"controller-manager-988c89bfb-rl6tb\" (UID: \"7ac704c6-e2a9-4a53-99d5-5be1db776558\") " pod="openshift-controller-manager/controller-manager-988c89bfb-rl6tb" Mar 13 12:37:46.361892 master-0 kubenswrapper[7518]: I0313 12:37:46.361501 7518 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7ac704c6-e2a9-4a53-99d5-5be1db776558-proxy-ca-bundles\") pod \"controller-manager-988c89bfb-rl6tb\" (UID: \"7ac704c6-e2a9-4a53-99d5-5be1db776558\") " pod="openshift-controller-manager/controller-manager-988c89bfb-rl6tb" Mar 13 12:37:46.382375 master-0 
kubenswrapper[7518]: I0313 12:37:46.382298 7518 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hsf79\" (UniqueName: \"kubernetes.io/projected/7ac704c6-e2a9-4a53-99d5-5be1db776558-kube-api-access-hsf79\") pod \"controller-manager-988c89bfb-rl6tb\" (UID: \"7ac704c6-e2a9-4a53-99d5-5be1db776558\") " pod="openshift-controller-manager/controller-manager-988c89bfb-rl6tb" Mar 13 12:37:46.771187 master-0 kubenswrapper[7518]: I0313 12:37:46.771019 7518 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator/migrator-57ccdf9b5-7pcdp"] Mar 13 12:37:46.772443 master-0 kubenswrapper[7518]: I0313 12:37:46.771990 7518 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-57ccdf9b5-7pcdp" Mar 13 12:37:46.774798 master-0 kubenswrapper[7518]: I0313 12:37:46.774731 7518 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Mar 13 12:37:46.774934 master-0 kubenswrapper[7518]: I0313 12:37:46.774812 7518 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Mar 13 12:37:46.780259 master-0 kubenswrapper[7518]: I0313 12:37:46.780213 7518 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-57ccdf9b5-7pcdp"] Mar 13 12:37:46.864803 master-0 kubenswrapper[7518]: I0313 12:37:46.864712 7518 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7ac704c6-e2a9-4a53-99d5-5be1db776558-client-ca\") pod \"controller-manager-988c89bfb-rl6tb\" (UID: \"7ac704c6-e2a9-4a53-99d5-5be1db776558\") " pod="openshift-controller-manager/controller-manager-988c89bfb-rl6tb" Mar 13 12:37:46.864993 master-0 kubenswrapper[7518]: E0313 12:37:46.864845 7518 configmap.go:193] Couldn't get configMap 
openshift-controller-manager/client-ca: configmap "client-ca" not found Mar 13 12:37:46.864993 master-0 kubenswrapper[7518]: E0313 12:37:46.864927 7518 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/7ac704c6-e2a9-4a53-99d5-5be1db776558-client-ca podName:7ac704c6-e2a9-4a53-99d5-5be1db776558 nodeName:}" failed. No retries permitted until 2026-03-13 12:37:47.864909682 +0000 UTC m=+22.497978869 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/7ac704c6-e2a9-4a53-99d5-5be1db776558-client-ca") pod "controller-manager-988c89bfb-rl6tb" (UID: "7ac704c6-e2a9-4a53-99d5-5be1db776558") : configmap "client-ca" not found Mar 13 12:37:46.864993 master-0 kubenswrapper[7518]: I0313 12:37:46.864928 7518 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7ac704c6-e2a9-4a53-99d5-5be1db776558-serving-cert\") pod \"controller-manager-988c89bfb-rl6tb\" (UID: \"7ac704c6-e2a9-4a53-99d5-5be1db776558\") " pod="openshift-controller-manager/controller-manager-988c89bfb-rl6tb" Mar 13 12:37:46.865321 master-0 kubenswrapper[7518]: E0313 12:37:46.865070 7518 secret.go:189] Couldn't get secret openshift-controller-manager/serving-cert: secret "serving-cert" not found Mar 13 12:37:46.865321 master-0 kubenswrapper[7518]: E0313 12:37:46.865105 7518 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7ac704c6-e2a9-4a53-99d5-5be1db776558-serving-cert podName:7ac704c6-e2a9-4a53-99d5-5be1db776558 nodeName:}" failed. No retries permitted until 2026-03-13 12:37:47.865093474 +0000 UTC m=+22.498162731 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/7ac704c6-e2a9-4a53-99d5-5be1db776558-serving-cert") pod "controller-manager-988c89bfb-rl6tb" (UID: "7ac704c6-e2a9-4a53-99d5-5be1db776558") : secret "serving-cert" not found Mar 13 12:37:46.966821 master-0 kubenswrapper[7518]: I0313 12:37:46.966769 7518 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kwk62\" (UniqueName: \"kubernetes.io/projected/f31565e2-c211-4d28-8bbc-d7a951023a8b-kube-api-access-kwk62\") pod \"migrator-57ccdf9b5-7pcdp\" (UID: \"f31565e2-c211-4d28-8bbc-d7a951023a8b\") " pod="openshift-kube-storage-version-migrator/migrator-57ccdf9b5-7pcdp" Mar 13 12:37:47.039167 master-0 kubenswrapper[7518]: I0313 12:37:47.037959 7518 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-64488f9d78-t8fb4" event={"ID":"f0803181-4e37-43fa-8ddc-9c76d3f61817","Type":"ContainerStarted","Data":"876f570e7bca1677304688ecd8e1a442c714ddc31318f4b0812aca0943ba9d82"} Mar 13 12:37:47.043057 master-0 kubenswrapper[7518]: I0313 12:37:47.042044 7518 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-84bfdbbb7f-4pksg" event={"ID":"c0f3e81c-f61d-430a-98e8-82e3b283fc73","Type":"ContainerStarted","Data":"4db2bc5c40e8683ca741e5bf890d717d8c9fa9c48b7ac41671352e56a94462da"} Mar 13 12:37:47.071929 master-0 kubenswrapper[7518]: I0313 12:37:47.070920 7518 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kwk62\" (UniqueName: \"kubernetes.io/projected/f31565e2-c211-4d28-8bbc-d7a951023a8b-kube-api-access-kwk62\") pod \"migrator-57ccdf9b5-7pcdp\" (UID: \"f31565e2-c211-4d28-8bbc-d7a951023a8b\") " pod="openshift-kube-storage-version-migrator/migrator-57ccdf9b5-7pcdp" Mar 13 12:37:47.094343 master-0 kubenswrapper[7518]: I0313 12:37:47.094271 7518 pod_startup_latency_tracker.go:104] "Observed pod startup 
duration" pod="openshift-service-ca/service-ca-84bfdbbb7f-4pksg" podStartSLOduration=2.09424809 podStartE2EDuration="2.09424809s" podCreationTimestamp="2026-03-13 12:37:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 12:37:47.093371688 +0000 UTC m=+21.726440875" watchObservedRunningTime="2026-03-13 12:37:47.09424809 +0000 UTC m=+21.727317277" Mar 13 12:37:47.109798 master-0 kubenswrapper[7518]: I0313 12:37:47.109743 7518 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kwk62\" (UniqueName: \"kubernetes.io/projected/f31565e2-c211-4d28-8bbc-d7a951023a8b-kube-api-access-kwk62\") pod \"migrator-57ccdf9b5-7pcdp\" (UID: \"f31565e2-c211-4d28-8bbc-d7a951023a8b\") " pod="openshift-kube-storage-version-migrator/migrator-57ccdf9b5-7pcdp" Mar 13 12:37:47.393455 master-0 kubenswrapper[7518]: I0313 12:37:47.393347 7518 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-57ccdf9b5-7pcdp" Mar 13 12:37:47.611363 master-0 kubenswrapper[7518]: I0313 12:37:47.611315 7518 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="854220e0-eb13-42fe-a80e-b87c309f027a" path="/var/lib/kubelet/pods/854220e0-eb13-42fe-a80e-b87c309f027a/volumes" Mar 13 12:37:47.888345 master-0 kubenswrapper[7518]: I0313 12:37:47.888271 7518 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7ac704c6-e2a9-4a53-99d5-5be1db776558-client-ca\") pod \"controller-manager-988c89bfb-rl6tb\" (UID: \"7ac704c6-e2a9-4a53-99d5-5be1db776558\") " pod="openshift-controller-manager/controller-manager-988c89bfb-rl6tb" Mar 13 12:37:47.888345 master-0 kubenswrapper[7518]: I0313 12:37:47.888342 7518 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7ac704c6-e2a9-4a53-99d5-5be1db776558-serving-cert\") pod \"controller-manager-988c89bfb-rl6tb\" (UID: \"7ac704c6-e2a9-4a53-99d5-5be1db776558\") " pod="openshift-controller-manager/controller-manager-988c89bfb-rl6tb" Mar 13 12:37:47.889394 master-0 kubenswrapper[7518]: E0313 12:37:47.888420 7518 configmap.go:193] Couldn't get configMap openshift-controller-manager/client-ca: configmap "client-ca" not found Mar 13 12:37:47.889394 master-0 kubenswrapper[7518]: E0313 12:37:47.888490 7518 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/7ac704c6-e2a9-4a53-99d5-5be1db776558-client-ca podName:7ac704c6-e2a9-4a53-99d5-5be1db776558 nodeName:}" failed. No retries permitted until 2026-03-13 12:37:49.888471725 +0000 UTC m=+24.521540912 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/7ac704c6-e2a9-4a53-99d5-5be1db776558-client-ca") pod "controller-manager-988c89bfb-rl6tb" (UID: "7ac704c6-e2a9-4a53-99d5-5be1db776558") : configmap "client-ca" not found Mar 13 12:37:47.889394 master-0 kubenswrapper[7518]: E0313 12:37:47.888532 7518 secret.go:189] Couldn't get secret openshift-controller-manager/serving-cert: secret "serving-cert" not found Mar 13 12:37:47.889394 master-0 kubenswrapper[7518]: E0313 12:37:47.888592 7518 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7ac704c6-e2a9-4a53-99d5-5be1db776558-serving-cert podName:7ac704c6-e2a9-4a53-99d5-5be1db776558 nodeName:}" failed. No retries permitted until 2026-03-13 12:37:49.888574427 +0000 UTC m=+24.521643634 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/7ac704c6-e2a9-4a53-99d5-5be1db776558-serving-cert") pod "controller-manager-988c89bfb-rl6tb" (UID: "7ac704c6-e2a9-4a53-99d5-5be1db776558") : secret "serving-cert" not found Mar 13 12:37:48.048358 master-0 kubenswrapper[7518]: I0313 12:37:48.048300 7518 generic.go:334] "Generic (PLEG): container finished" podID="f0803181-4e37-43fa-8ddc-9c76d3f61817" containerID="876f570e7bca1677304688ecd8e1a442c714ddc31318f4b0812aca0943ba9d82" exitCode=0 Mar 13 12:37:48.048675 master-0 kubenswrapper[7518]: I0313 12:37:48.048613 7518 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-64488f9d78-t8fb4" event={"ID":"f0803181-4e37-43fa-8ddc-9c76d3f61817","Type":"ContainerDied","Data":"876f570e7bca1677304688ecd8e1a442c714ddc31318f4b0812aca0943ba9d82"} Mar 13 12:37:48.339192 master-0 kubenswrapper[7518]: I0313 12:37:48.334813 7518 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-57ccdf9b5-7pcdp"] Mar 13 12:37:48.344525 master-0 kubenswrapper[7518]: W0313 
12:37:48.342908 7518 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf31565e2_c211_4d28_8bbc_d7a951023a8b.slice/crio-1f2ba041c75397f172b0e8393f3ba52da66efb5011242b7893cceb36ffb01a0a WatchSource:0}: Error finding container 1f2ba041c75397f172b0e8393f3ba52da66efb5011242b7893cceb36ffb01a0a: Status 404 returned error can't find the container with id 1f2ba041c75397f172b0e8393f3ba52da66efb5011242b7893cceb36ffb01a0a Mar 13 12:37:49.053507 master-0 kubenswrapper[7518]: I0313 12:37:49.053116 7518 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-57ccdf9b5-7pcdp" event={"ID":"f31565e2-c211-4d28-8bbc-d7a951023a8b","Type":"ContainerStarted","Data":"1f2ba041c75397f172b0e8393f3ba52da66efb5011242b7893cceb36ffb01a0a"} Mar 13 12:37:49.055229 master-0 kubenswrapper[7518]: I0313 12:37:49.055188 7518 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-qz6pg" event={"ID":"08e2bc8e-ca80-454c-81dc-211d122e32e0","Type":"ContainerStarted","Data":"73b7214eb3b149af407e0de425372761ad1727c33a83f6bee0f77472fbaba7fc"} Mar 13 12:37:49.908310 master-0 kubenswrapper[7518]: I0313 12:37:49.908246 7518 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7ac704c6-e2a9-4a53-99d5-5be1db776558-serving-cert\") pod \"controller-manager-988c89bfb-rl6tb\" (UID: \"7ac704c6-e2a9-4a53-99d5-5be1db776558\") " pod="openshift-controller-manager/controller-manager-988c89bfb-rl6tb" Mar 13 12:37:49.908310 master-0 kubenswrapper[7518]: I0313 12:37:49.908303 7518 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7ac704c6-e2a9-4a53-99d5-5be1db776558-client-ca\") pod \"controller-manager-988c89bfb-rl6tb\" (UID: \"7ac704c6-e2a9-4a53-99d5-5be1db776558\") " 
pod="openshift-controller-manager/controller-manager-988c89bfb-rl6tb"
Mar 13 12:37:49.908582 master-0 kubenswrapper[7518]: E0313 12:37:49.908432 7518 configmap.go:193] Couldn't get configMap openshift-controller-manager/client-ca: configmap "client-ca" not found
Mar 13 12:37:49.908582 master-0 kubenswrapper[7518]: E0313 12:37:49.908484 7518 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/7ac704c6-e2a9-4a53-99d5-5be1db776558-client-ca podName:7ac704c6-e2a9-4a53-99d5-5be1db776558 nodeName:}" failed. No retries permitted until 2026-03-13 12:37:53.908470485 +0000 UTC m=+28.541539672 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/7ac704c6-e2a9-4a53-99d5-5be1db776558-client-ca") pod "controller-manager-988c89bfb-rl6tb" (UID: "7ac704c6-e2a9-4a53-99d5-5be1db776558") : configmap "client-ca" not found
Mar 13 12:37:49.908582 master-0 kubenswrapper[7518]: E0313 12:37:49.908550 7518 secret.go:189] Couldn't get secret openshift-controller-manager/serving-cert: secret "serving-cert" not found
Mar 13 12:37:49.908678 master-0 kubenswrapper[7518]: E0313 12:37:49.908584 7518 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7ac704c6-e2a9-4a53-99d5-5be1db776558-serving-cert podName:7ac704c6-e2a9-4a53-99d5-5be1db776558 nodeName:}" failed. No retries permitted until 2026-03-13 12:37:53.908572506 +0000 UTC m=+28.541641683 (durationBeforeRetry 4s).
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/7ac704c6-e2a9-4a53-99d5-5be1db776558-serving-cert") pod "controller-manager-988c89bfb-rl6tb" (UID: "7ac704c6-e2a9-4a53-99d5-5be1db776558") : secret "serving-cert" not found
Mar 13 12:37:50.110898 master-0 kubenswrapper[7518]: I0313 12:37:50.110814 7518 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/75651bfd-ceaf-4bda-95a3-68ca11ec5abe-serving-cert\") pod \"route-controller-manager-54f4b89bbb-sb5x4\" (UID: \"75651bfd-ceaf-4bda-95a3-68ca11ec5abe\") " pod="openshift-route-controller-manager/route-controller-manager-54f4b89bbb-sb5x4"
Mar 13 12:37:50.111506 master-0 kubenswrapper[7518]: E0313 12:37:50.111043 7518 secret.go:189] Couldn't get secret openshift-route-controller-manager/serving-cert: secret "serving-cert" not found
Mar 13 12:37:50.111506 master-0 kubenswrapper[7518]: E0313 12:37:50.111164 7518 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/75651bfd-ceaf-4bda-95a3-68ca11ec5abe-serving-cert podName:75651bfd-ceaf-4bda-95a3-68ca11ec5abe nodeName:}" failed. No retries permitted until 2026-03-13 12:37:58.111126022 +0000 UTC m=+32.744195279 (durationBeforeRetry 8s).
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/75651bfd-ceaf-4bda-95a3-68ca11ec5abe-serving-cert") pod "route-controller-manager-54f4b89bbb-sb5x4" (UID: "75651bfd-ceaf-4bda-95a3-68ca11ec5abe") : secret "serving-cert" not found
Mar 13 12:37:50.111506 master-0 kubenswrapper[7518]: I0313 12:37:50.111201 7518 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/75651bfd-ceaf-4bda-95a3-68ca11ec5abe-client-ca\") pod \"route-controller-manager-54f4b89bbb-sb5x4\" (UID: \"75651bfd-ceaf-4bda-95a3-68ca11ec5abe\") " pod="openshift-route-controller-manager/route-controller-manager-54f4b89bbb-sb5x4"
Mar 13 12:37:50.111506 master-0 kubenswrapper[7518]: E0313 12:37:50.111378 7518 configmap.go:193] Couldn't get configMap openshift-route-controller-manager/client-ca: configmap "client-ca" not found
Mar 13 12:37:50.111506 master-0 kubenswrapper[7518]: E0313 12:37:50.111447 7518 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/75651bfd-ceaf-4bda-95a3-68ca11ec5abe-client-ca podName:75651bfd-ceaf-4bda-95a3-68ca11ec5abe nodeName:}" failed. No retries permitted until 2026-03-13 12:37:58.111432657 +0000 UTC m=+32.744501844 (durationBeforeRetry 8s).
Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/75651bfd-ceaf-4bda-95a3-68ca11ec5abe-client-ca") pod "route-controller-manager-54f4b89bbb-sb5x4" (UID: "75651bfd-ceaf-4bda-95a3-68ca11ec5abe") : configmap "client-ca" not found
Mar 13 12:37:53.951285 master-0 kubenswrapper[7518]: I0313 12:37:53.951193 7518 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7ac704c6-e2a9-4a53-99d5-5be1db776558-client-ca\") pod \"controller-manager-988c89bfb-rl6tb\" (UID: \"7ac704c6-e2a9-4a53-99d5-5be1db776558\") " pod="openshift-controller-manager/controller-manager-988c89bfb-rl6tb"
Mar 13 12:37:53.951285 master-0 kubenswrapper[7518]: I0313 12:37:53.951271 7518 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7ac704c6-e2a9-4a53-99d5-5be1db776558-serving-cert\") pod \"controller-manager-988c89bfb-rl6tb\" (UID: \"7ac704c6-e2a9-4a53-99d5-5be1db776558\") " pod="openshift-controller-manager/controller-manager-988c89bfb-rl6tb"
Mar 13 12:37:53.952023 master-0 kubenswrapper[7518]: E0313 12:37:53.951329 7518 configmap.go:193] Couldn't get configMap openshift-controller-manager/client-ca: configmap "client-ca" not found
Mar 13 12:37:53.952023 master-0 kubenswrapper[7518]: E0313 12:37:53.951465 7518 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/7ac704c6-e2a9-4a53-99d5-5be1db776558-client-ca podName:7ac704c6-e2a9-4a53-99d5-5be1db776558 nodeName:}" failed. No retries permitted until 2026-03-13 12:38:01.951441437 +0000 UTC m=+36.584510674 (durationBeforeRetry 8s).
Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/7ac704c6-e2a9-4a53-99d5-5be1db776558-client-ca") pod "controller-manager-988c89bfb-rl6tb" (UID: "7ac704c6-e2a9-4a53-99d5-5be1db776558") : configmap "client-ca" not found
Mar 13 12:37:53.957487 master-0 kubenswrapper[7518]: I0313 12:37:53.957434 7518 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7ac704c6-e2a9-4a53-99d5-5be1db776558-serving-cert\") pod \"controller-manager-988c89bfb-rl6tb\" (UID: \"7ac704c6-e2a9-4a53-99d5-5be1db776558\") " pod="openshift-controller-manager/controller-manager-988c89bfb-rl6tb"
Mar 13 12:37:57.240980 master-0 kubenswrapper[7518]: I0313 12:37:57.232395 7518 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-controller/operator-controller-controller-manager-6598bfb6c4-dv8rj"]
Mar 13 12:37:57.240980 master-0 kubenswrapper[7518]: I0313 12:37:57.233901 7518 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openshift-operator-controller/operator-controller-controller-manager-6598bfb6c4-dv8rj"
Mar 13 12:37:57.252226 master-0 kubenswrapper[7518]: I0313 12:37:57.248864 7518 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-controller"/"openshift-service-ca.crt"
Mar 13 12:37:57.252226 master-0 kubenswrapper[7518]: I0313 12:37:57.250087 7518 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-controller"/"kube-root-ca.crt"
Mar 13 12:37:57.252226 master-0 kubenswrapper[7518]: I0313 12:37:57.251900 7518 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-controller"/"operator-controller-trusted-ca-bundle"
Mar 13 12:37:57.280161 master-0 kubenswrapper[7518]: I0313 12:37:57.277658 7518 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-controller/operator-controller-controller-manager-6598bfb6c4-dv8rj"]
Mar 13 12:37:57.409370 master-0 kubenswrapper[7518]: I0313 12:37:57.408469 7518 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/projected/915aabfe-1071-4bfc-b291-424304dfe7d8-ca-certs\") pod \"operator-controller-controller-manager-6598bfb6c4-dv8rj\" (UID: \"915aabfe-1071-4bfc-b291-424304dfe7d8\") " pod="openshift-operator-controller/operator-controller-controller-manager-6598bfb6c4-dv8rj"
Mar 13 12:37:57.409370 master-0 kubenswrapper[7518]: I0313 12:37:57.408563 7518 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/915aabfe-1071-4bfc-b291-424304dfe7d8-cache\") pod \"operator-controller-controller-manager-6598bfb6c4-dv8rj\" (UID: \"915aabfe-1071-4bfc-b291-424304dfe7d8\") " pod="openshift-operator-controller/operator-controller-controller-manager-6598bfb6c4-dv8rj"
Mar 13 12:37:57.409370 master-0 kubenswrapper[7518]: I0313 12:37:57.408642 7518
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-docker\" (UniqueName: \"kubernetes.io/host-path/915aabfe-1071-4bfc-b291-424304dfe7d8-etc-docker\") pod \"operator-controller-controller-manager-6598bfb6c4-dv8rj\" (UID: \"915aabfe-1071-4bfc-b291-424304dfe7d8\") " pod="openshift-operator-controller/operator-controller-controller-manager-6598bfb6c4-dv8rj"
Mar 13 12:37:57.409370 master-0 kubenswrapper[7518]: I0313 12:37:57.408821 7518 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-containers\" (UniqueName: \"kubernetes.io/host-path/915aabfe-1071-4bfc-b291-424304dfe7d8-etc-containers\") pod \"operator-controller-controller-manager-6598bfb6c4-dv8rj\" (UID: \"915aabfe-1071-4bfc-b291-424304dfe7d8\") " pod="openshift-operator-controller/operator-controller-controller-manager-6598bfb6c4-dv8rj"
Mar 13 12:37:57.409370 master-0 kubenswrapper[7518]: I0313 12:37:57.408897 7518 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n85n6\" (UniqueName: \"kubernetes.io/projected/915aabfe-1071-4bfc-b291-424304dfe7d8-kube-api-access-n85n6\") pod \"operator-controller-controller-manager-6598bfb6c4-dv8rj\" (UID: \"915aabfe-1071-4bfc-b291-424304dfe7d8\") " pod="openshift-operator-controller/operator-controller-controller-manager-6598bfb6c4-dv8rj"
Mar 13 12:37:57.409370 master-0 kubenswrapper[7518]: I0313 12:37:57.409170 7518 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-catalogd/catalogd-controller-manager-7f8b8b6f4c-8fjzg"]
Mar 13 12:37:57.409839 master-0 kubenswrapper[7518]: I0313 12:37:57.409766 7518 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/installer-1-master-0"]
Mar 13 12:37:57.411315 master-0 kubenswrapper[7518]: I0313 12:37:57.409928 7518 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openshift-catalogd/catalogd-controller-manager-7f8b8b6f4c-8fjzg"
Mar 13 12:37:57.411315 master-0 kubenswrapper[7518]: I0313 12:37:57.410592 7518 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/installer-1-master-0"
Mar 13 12:37:57.414731 master-0 kubenswrapper[7518]: I0313 12:37:57.414430 7518 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-catalogd"/"openshift-service-ca.crt"
Mar 13 12:37:57.416990 master-0 kubenswrapper[7518]: I0313 12:37:57.415592 7518 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-catalogd"/"catalogserver-cert"
Mar 13 12:37:57.416990 master-0 kubenswrapper[7518]: I0313 12:37:57.415855 7518 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler"/"kube-root-ca.crt"
Mar 13 12:37:57.416990 master-0 kubenswrapper[7518]: I0313 12:37:57.416065 7518 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-catalogd"/"kube-root-ca.crt"
Mar 13 12:37:57.425587 master-0 kubenswrapper[7518]: I0313 12:37:57.422925 7518 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-catalogd"/"catalogd-trusted-ca-bundle"
Mar 13 12:37:57.459782 master-0 kubenswrapper[7518]: I0313 12:37:57.459666 7518 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler/installer-1-master-0"]
Mar 13 12:37:57.464847 master-0 kubenswrapper[7518]: I0313 12:37:57.461331 7518 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-catalogd/catalogd-controller-manager-7f8b8b6f4c-8fjzg"]
Mar 13 12:37:57.569474 master-0 kubenswrapper[7518]: I0313 12:37:57.569410 7518 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/projected/00ebdf06-1f44-40cd-87e5-54195188b6d4-ca-certs\") pod \"catalogd-controller-manager-7f8b8b6f4c-8fjzg\" (UID:
\"00ebdf06-1f44-40cd-87e5-54195188b6d4\") " pod="openshift-catalogd/catalogd-controller-manager-7f8b8b6f4c-8fjzg"
Mar 13 12:37:57.569675 master-0 kubenswrapper[7518]: I0313 12:37:57.569567 7518 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-containers\" (UniqueName: \"kubernetes.io/host-path/915aabfe-1071-4bfc-b291-424304dfe7d8-etc-containers\") pod \"operator-controller-controller-manager-6598bfb6c4-dv8rj\" (UID: \"915aabfe-1071-4bfc-b291-424304dfe7d8\") " pod="openshift-operator-controller/operator-controller-controller-manager-6598bfb6c4-dv8rj"
Mar 13 12:37:57.569675 master-0 kubenswrapper[7518]: I0313 12:37:57.569598 7518 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7rkc4\" (UniqueName: \"kubernetes.io/projected/00ebdf06-1f44-40cd-87e5-54195188b6d4-kube-api-access-7rkc4\") pod \"catalogd-controller-manager-7f8b8b6f4c-8fjzg\" (UID: \"00ebdf06-1f44-40cd-87e5-54195188b6d4\") " pod="openshift-catalogd/catalogd-controller-manager-7f8b8b6f4c-8fjzg"
Mar 13 12:37:57.569675 master-0 kubenswrapper[7518]: I0313 12:37:57.569620 7518 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/f951e49f-91f7-42d3-bc63-8117cff68d7a-kubelet-dir\") pod \"installer-1-master-0\" (UID: \"f951e49f-91f7-42d3-bc63-8117cff68d7a\") " pod="openshift-kube-scheduler/installer-1-master-0"
Mar 13 12:37:57.569675 master-0 kubenswrapper[7518]: I0313 12:37:57.569641 7518 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n85n6\" (UniqueName: \"kubernetes.io/projected/915aabfe-1071-4bfc-b291-424304dfe7d8-kube-api-access-n85n6\") pod \"operator-controller-controller-manager-6598bfb6c4-dv8rj\" (UID: \"915aabfe-1071-4bfc-b291-424304dfe7d8\") " pod="openshift-operator-controller/operator-controller-controller-manager-6598bfb6c4-dv8rj"
Mar 13 12:37:57.569675 master-0
kubenswrapper[7518]: I0313 12:37:57.569656 7518 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-docker\" (UniqueName: \"kubernetes.io/host-path/00ebdf06-1f44-40cd-87e5-54195188b6d4-etc-docker\") pod \"catalogd-controller-manager-7f8b8b6f4c-8fjzg\" (UID: \"00ebdf06-1f44-40cd-87e5-54195188b6d4\") " pod="openshift-catalogd/catalogd-controller-manager-7f8b8b6f4c-8fjzg"
Mar 13 12:37:57.569816 master-0 kubenswrapper[7518]: I0313 12:37:57.569681 7518 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f951e49f-91f7-42d3-bc63-8117cff68d7a-kube-api-access\") pod \"installer-1-master-0\" (UID: \"f951e49f-91f7-42d3-bc63-8117cff68d7a\") " pod="openshift-kube-scheduler/installer-1-master-0"
Mar 13 12:37:57.569816 master-0 kubenswrapper[7518]: I0313 12:37:57.569705 7518 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/00ebdf06-1f44-40cd-87e5-54195188b6d4-cache\") pod \"catalogd-controller-manager-7f8b8b6f4c-8fjzg\" (UID: \"00ebdf06-1f44-40cd-87e5-54195188b6d4\") " pod="openshift-catalogd/catalogd-controller-manager-7f8b8b6f4c-8fjzg"
Mar 13 12:37:57.569816 master-0 kubenswrapper[7518]: I0313 12:37:57.569733 7518 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-containers\" (UniqueName: \"kubernetes.io/host-path/00ebdf06-1f44-40cd-87e5-54195188b6d4-etc-containers\") pod \"catalogd-controller-manager-7f8b8b6f4c-8fjzg\" (UID: \"00ebdf06-1f44-40cd-87e5-54195188b6d4\") " pod="openshift-catalogd/catalogd-controller-manager-7f8b8b6f4c-8fjzg"
Mar 13 12:37:57.569816 master-0 kubenswrapper[7518]: I0313 12:37:57.569755 7518 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName:
\"kubernetes.io/host-path/f951e49f-91f7-42d3-bc63-8117cff68d7a-var-lock\") pod \"installer-1-master-0\" (UID: \"f951e49f-91f7-42d3-bc63-8117cff68d7a\") " pod="openshift-kube-scheduler/installer-1-master-0"
Mar 13 12:37:57.569816 master-0 kubenswrapper[7518]: I0313 12:37:57.569788 7518 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/projected/915aabfe-1071-4bfc-b291-424304dfe7d8-ca-certs\") pod \"operator-controller-controller-manager-6598bfb6c4-dv8rj\" (UID: \"915aabfe-1071-4bfc-b291-424304dfe7d8\") " pod="openshift-operator-controller/operator-controller-controller-manager-6598bfb6c4-dv8rj"
Mar 13 12:37:57.569816 master-0 kubenswrapper[7518]: I0313 12:37:57.569802 7518 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalogserver-certs\" (UniqueName: \"kubernetes.io/secret/00ebdf06-1f44-40cd-87e5-54195188b6d4-catalogserver-certs\") pod \"catalogd-controller-manager-7f8b8b6f4c-8fjzg\" (UID: \"00ebdf06-1f44-40cd-87e5-54195188b6d4\") " pod="openshift-catalogd/catalogd-controller-manager-7f8b8b6f4c-8fjzg"
Mar 13 12:37:57.569978 master-0 kubenswrapper[7518]: I0313 12:37:57.569835 7518 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/915aabfe-1071-4bfc-b291-424304dfe7d8-cache\") pod \"operator-controller-controller-manager-6598bfb6c4-dv8rj\" (UID: \"915aabfe-1071-4bfc-b291-424304dfe7d8\") " pod="openshift-operator-controller/operator-controller-controller-manager-6598bfb6c4-dv8rj"
Mar 13 12:37:57.569978 master-0 kubenswrapper[7518]: I0313 12:37:57.569876 7518 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-docker\" (UniqueName: \"kubernetes.io/host-path/915aabfe-1071-4bfc-b291-424304dfe7d8-etc-docker\") pod \"operator-controller-controller-manager-6598bfb6c4-dv8rj\" (UID: \"915aabfe-1071-4bfc-b291-424304dfe7d8\") "
pod="openshift-operator-controller/operator-controller-controller-manager-6598bfb6c4-dv8rj"
Mar 13 12:37:57.570116 master-0 kubenswrapper[7518]: I0313 12:37:57.570070 7518 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-docker\" (UniqueName: \"kubernetes.io/host-path/915aabfe-1071-4bfc-b291-424304dfe7d8-etc-docker\") pod \"operator-controller-controller-manager-6598bfb6c4-dv8rj\" (UID: \"915aabfe-1071-4bfc-b291-424304dfe7d8\") " pod="openshift-operator-controller/operator-controller-controller-manager-6598bfb6c4-dv8rj"
Mar 13 12:37:57.570206 master-0 kubenswrapper[7518]: I0313 12:37:57.570164 7518 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-containers\" (UniqueName: \"kubernetes.io/host-path/915aabfe-1071-4bfc-b291-424304dfe7d8-etc-containers\") pod \"operator-controller-controller-manager-6598bfb6c4-dv8rj\" (UID: \"915aabfe-1071-4bfc-b291-424304dfe7d8\") " pod="openshift-operator-controller/operator-controller-controller-manager-6598bfb6c4-dv8rj"
Mar 13 12:37:57.573279 master-0 kubenswrapper[7518]: I0313 12:37:57.573254 7518 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/915aabfe-1071-4bfc-b291-424304dfe7d8-cache\") pod \"operator-controller-controller-manager-6598bfb6c4-dv8rj\" (UID: \"915aabfe-1071-4bfc-b291-424304dfe7d8\") " pod="openshift-operator-controller/operator-controller-controller-manager-6598bfb6c4-dv8rj"
Mar 13 12:37:57.585326 master-0 kubenswrapper[7518]: I0313 12:37:57.585193 7518 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-certs\" (UniqueName: \"kubernetes.io/projected/915aabfe-1071-4bfc-b291-424304dfe7d8-ca-certs\") pod \"operator-controller-controller-manager-6598bfb6c4-dv8rj\" (UID: \"915aabfe-1071-4bfc-b291-424304dfe7d8\") " pod="openshift-operator-controller/operator-controller-controller-manager-6598bfb6c4-dv8rj"
Mar 13 12:37:57.680721 master-0 kubenswrapper[7518]: I0313 12:37:57.680682
7518 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f951e49f-91f7-42d3-bc63-8117cff68d7a-var-lock\") pod \"installer-1-master-0\" (UID: \"f951e49f-91f7-42d3-bc63-8117cff68d7a\") " pod="openshift-kube-scheduler/installer-1-master-0"
Mar 13 12:37:57.681031 master-0 kubenswrapper[7518]: I0313 12:37:57.681008 7518 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalogserver-certs\" (UniqueName: \"kubernetes.io/secret/00ebdf06-1f44-40cd-87e5-54195188b6d4-catalogserver-certs\") pod \"catalogd-controller-manager-7f8b8b6f4c-8fjzg\" (UID: \"00ebdf06-1f44-40cd-87e5-54195188b6d4\") " pod="openshift-catalogd/catalogd-controller-manager-7f8b8b6f4c-8fjzg"
Mar 13 12:37:57.681219 master-0 kubenswrapper[7518]: I0313 12:37:57.681195 7518 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/projected/00ebdf06-1f44-40cd-87e5-54195188b6d4-ca-certs\") pod \"catalogd-controller-manager-7f8b8b6f4c-8fjzg\" (UID: \"00ebdf06-1f44-40cd-87e5-54195188b6d4\") " pod="openshift-catalogd/catalogd-controller-manager-7f8b8b6f4c-8fjzg"
Mar 13 12:37:57.681444 master-0 kubenswrapper[7518]: I0313 12:37:57.681425 7518 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7rkc4\" (UniqueName: \"kubernetes.io/projected/00ebdf06-1f44-40cd-87e5-54195188b6d4-kube-api-access-7rkc4\") pod \"catalogd-controller-manager-7f8b8b6f4c-8fjzg\" (UID: \"00ebdf06-1f44-40cd-87e5-54195188b6d4\") " pod="openshift-catalogd/catalogd-controller-manager-7f8b8b6f4c-8fjzg"
Mar 13 12:37:57.681603 master-0 kubenswrapper[7518]: I0313 12:37:57.681585 7518 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/f951e49f-91f7-42d3-bc63-8117cff68d7a-kubelet-dir\") pod \"installer-1-master-0\" (UID: \"f951e49f-91f7-42d3-bc63-8117cff68d7a\") "
pod="openshift-kube-scheduler/installer-1-master-0"
Mar 13 12:37:57.681704 master-0 kubenswrapper[7518]: I0313 12:37:57.681690 7518 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-docker\" (UniqueName: \"kubernetes.io/host-path/00ebdf06-1f44-40cd-87e5-54195188b6d4-etc-docker\") pod \"catalogd-controller-manager-7f8b8b6f4c-8fjzg\" (UID: \"00ebdf06-1f44-40cd-87e5-54195188b6d4\") " pod="openshift-catalogd/catalogd-controller-manager-7f8b8b6f4c-8fjzg"
Mar 13 12:37:57.681805 master-0 kubenswrapper[7518]: I0313 12:37:57.681782 7518 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f951e49f-91f7-42d3-bc63-8117cff68d7a-kube-api-access\") pod \"installer-1-master-0\" (UID: \"f951e49f-91f7-42d3-bc63-8117cff68d7a\") " pod="openshift-kube-scheduler/installer-1-master-0"
Mar 13 12:37:57.681900 master-0 kubenswrapper[7518]: I0313 12:37:57.681887 7518 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/00ebdf06-1f44-40cd-87e5-54195188b6d4-cache\") pod \"catalogd-controller-manager-7f8b8b6f4c-8fjzg\" (UID: \"00ebdf06-1f44-40cd-87e5-54195188b6d4\") " pod="openshift-catalogd/catalogd-controller-manager-7f8b8b6f4c-8fjzg"
Mar 13 12:37:57.682025 master-0 kubenswrapper[7518]: I0313 12:37:57.682011 7518 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-containers\" (UniqueName: \"kubernetes.io/host-path/00ebdf06-1f44-40cd-87e5-54195188b6d4-etc-containers\") pod \"catalogd-controller-manager-7f8b8b6f4c-8fjzg\" (UID: \"00ebdf06-1f44-40cd-87e5-54195188b6d4\") " pod="openshift-catalogd/catalogd-controller-manager-7f8b8b6f4c-8fjzg"
Mar 13 12:37:57.682209 master-0 kubenswrapper[7518]: I0313 12:37:57.682182 7518 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-containers\" (UniqueName:
\"kubernetes.io/host-path/00ebdf06-1f44-40cd-87e5-54195188b6d4-etc-containers\") pod \"catalogd-controller-manager-7f8b8b6f4c-8fjzg\" (UID: \"00ebdf06-1f44-40cd-87e5-54195188b6d4\") " pod="openshift-catalogd/catalogd-controller-manager-7f8b8b6f4c-8fjzg"
Mar 13 12:37:57.682345 master-0 kubenswrapper[7518]: I0313 12:37:57.682328 7518 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f951e49f-91f7-42d3-bc63-8117cff68d7a-var-lock\") pod \"installer-1-master-0\" (UID: \"f951e49f-91f7-42d3-bc63-8117cff68d7a\") " pod="openshift-kube-scheduler/installer-1-master-0"
Mar 13 12:37:57.690598 master-0 kubenswrapper[7518]: I0313 12:37:57.684261 7518 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/f951e49f-91f7-42d3-bc63-8117cff68d7a-kubelet-dir\") pod \"installer-1-master-0\" (UID: \"f951e49f-91f7-42d3-bc63-8117cff68d7a\") " pod="openshift-kube-scheduler/installer-1-master-0"
Mar 13 12:37:57.690598 master-0 kubenswrapper[7518]: I0313 12:37:57.685623 7518 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-docker\" (UniqueName: \"kubernetes.io/host-path/00ebdf06-1f44-40cd-87e5-54195188b6d4-etc-docker\") pod \"catalogd-controller-manager-7f8b8b6f4c-8fjzg\" (UID: \"00ebdf06-1f44-40cd-87e5-54195188b6d4\") " pod="openshift-catalogd/catalogd-controller-manager-7f8b8b6f4c-8fjzg"
Mar 13 12:37:57.690598 master-0 kubenswrapper[7518]: I0313 12:37:57.685873 7518 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/00ebdf06-1f44-40cd-87e5-54195188b6d4-cache\") pod \"catalogd-controller-manager-7f8b8b6f4c-8fjzg\" (UID: \"00ebdf06-1f44-40cd-87e5-54195188b6d4\") " pod="openshift-catalogd/catalogd-controller-manager-7f8b8b6f4c-8fjzg"
Mar 13 12:37:57.690598 master-0 kubenswrapper[7518]: I0313 12:37:57.689705 7518 operation_generator.go:637] "MountVolume.SetUp succeeded
for volume \"ca-certs\" (UniqueName: \"kubernetes.io/projected/00ebdf06-1f44-40cd-87e5-54195188b6d4-ca-certs\") pod \"catalogd-controller-manager-7f8b8b6f4c-8fjzg\" (UID: \"00ebdf06-1f44-40cd-87e5-54195188b6d4\") " pod="openshift-catalogd/catalogd-controller-manager-7f8b8b6f4c-8fjzg"
Mar 13 12:37:57.691387 master-0 kubenswrapper[7518]: I0313 12:37:57.691365 7518 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalogserver-certs\" (UniqueName: \"kubernetes.io/secret/00ebdf06-1f44-40cd-87e5-54195188b6d4-catalogserver-certs\") pod \"catalogd-controller-manager-7f8b8b6f4c-8fjzg\" (UID: \"00ebdf06-1f44-40cd-87e5-54195188b6d4\") " pod="openshift-catalogd/catalogd-controller-manager-7f8b8b6f4c-8fjzg"
Mar 13 12:37:57.708511 master-0 kubenswrapper[7518]: I0313 12:37:57.708478 7518 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n85n6\" (UniqueName: \"kubernetes.io/projected/915aabfe-1071-4bfc-b291-424304dfe7d8-kube-api-access-n85n6\") pod \"operator-controller-controller-manager-6598bfb6c4-dv8rj\" (UID: \"915aabfe-1071-4bfc-b291-424304dfe7d8\") " pod="openshift-operator-controller/operator-controller-controller-manager-6598bfb6c4-dv8rj"
Mar 13 12:37:57.991509 master-0 kubenswrapper[7518]: I0313 12:37:57.991382 7518 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openshift-operator-controller/operator-controller-controller-manager-6598bfb6c4-dv8rj"
Mar 13 12:37:57.999819 master-0 kubenswrapper[7518]: I0313 12:37:57.999772 7518 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7rkc4\" (UniqueName: \"kubernetes.io/projected/00ebdf06-1f44-40cd-87e5-54195188b6d4-kube-api-access-7rkc4\") pod \"catalogd-controller-manager-7f8b8b6f4c-8fjzg\" (UID: \"00ebdf06-1f44-40cd-87e5-54195188b6d4\") " pod="openshift-catalogd/catalogd-controller-manager-7f8b8b6f4c-8fjzg"
Mar 13 12:37:58.050311 master-0 kubenswrapper[7518]: I0313 12:37:58.042337 7518 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f951e49f-91f7-42d3-bc63-8117cff68d7a-kube-api-access\") pod \"installer-1-master-0\" (UID: \"f951e49f-91f7-42d3-bc63-8117cff68d7a\") " pod="openshift-kube-scheduler/installer-1-master-0"
Mar 13 12:37:58.074267 master-0 kubenswrapper[7518]: I0313 12:37:58.071382 7518 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-catalogd/catalogd-controller-manager-7f8b8b6f4c-8fjzg"
Mar 13 12:37:58.260235 master-0 kubenswrapper[7518]: I0313 12:37:58.249896 7518 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openshift-kube-scheduler/installer-1-master-0"
Mar 13 12:37:58.260235 master-0 kubenswrapper[7518]: I0313 12:37:58.255263 7518 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/75651bfd-ceaf-4bda-95a3-68ca11ec5abe-serving-cert\") pod \"route-controller-manager-54f4b89bbb-sb5x4\" (UID: \"75651bfd-ceaf-4bda-95a3-68ca11ec5abe\") " pod="openshift-route-controller-manager/route-controller-manager-54f4b89bbb-sb5x4"
Mar 13 12:37:58.260235 master-0 kubenswrapper[7518]: I0313 12:37:58.255355 7518 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/75651bfd-ceaf-4bda-95a3-68ca11ec5abe-client-ca\") pod \"route-controller-manager-54f4b89bbb-sb5x4\" (UID: \"75651bfd-ceaf-4bda-95a3-68ca11ec5abe\") " pod="openshift-route-controller-manager/route-controller-manager-54f4b89bbb-sb5x4"
Mar 13 12:37:58.260235 master-0 kubenswrapper[7518]: E0313 12:37:58.255503 7518 configmap.go:193] Couldn't get configMap openshift-route-controller-manager/client-ca: configmap "client-ca" not found
Mar 13 12:37:58.260235 master-0 kubenswrapper[7518]: E0313 12:37:58.255577 7518 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/75651bfd-ceaf-4bda-95a3-68ca11ec5abe-client-ca podName:75651bfd-ceaf-4bda-95a3-68ca11ec5abe nodeName:}" failed. No retries permitted until 2026-03-13 12:38:14.255546093 +0000 UTC m=+48.888615280 (durationBeforeRetry 16s).
Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/75651bfd-ceaf-4bda-95a3-68ca11ec5abe-client-ca") pod "route-controller-manager-54f4b89bbb-sb5x4" (UID: "75651bfd-ceaf-4bda-95a3-68ca11ec5abe") : configmap "client-ca" not found
Mar 13 12:37:58.260235 master-0 kubenswrapper[7518]: E0313 12:37:58.255647 7518 secret.go:189] Couldn't get secret openshift-route-controller-manager/serving-cert: secret "serving-cert" not found
Mar 13 12:37:58.260235 master-0 kubenswrapper[7518]: E0313 12:37:58.255667 7518 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/75651bfd-ceaf-4bda-95a3-68ca11ec5abe-serving-cert podName:75651bfd-ceaf-4bda-95a3-68ca11ec5abe nodeName:}" failed. No retries permitted until 2026-03-13 12:38:14.255661385 +0000 UTC m=+48.888730572 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/75651bfd-ceaf-4bda-95a3-68ca11ec5abe-serving-cert") pod "route-controller-manager-54f4b89bbb-sb5x4" (UID: "75651bfd-ceaf-4bda-95a3-68ca11ec5abe") : secret "serving-cert" not found
Mar 13 12:37:58.260235 master-0 kubenswrapper[7518]: I0313 12:37:58.257339 7518 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver/apiserver-7cbf874688-d4wjw"]
Mar 13 12:37:58.260235 master-0 kubenswrapper[7518]: I0313 12:37:58.257927 7518 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-64488f9d78-t8fb4" event={"ID":"f0803181-4e37-43fa-8ddc-9c76d3f61817","Type":"ContainerStarted","Data":"6b3b1b1d996a5cfa81d2f82133cbb61df8d0101269e29c3d8745b628b44289f9"}
Mar 13 12:37:58.260235 master-0 kubenswrapper[7518]: I0313 12:37:58.258674 7518 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-57ccdf9b5-7pcdp"
event={"ID":"f31565e2-c211-4d28-8bbc-d7a951023a8b","Type":"ContainerStarted","Data":"fe833bd4669397a8e6c3abe499bef17665354308187688cb5dffba6b8ae25597"}
Mar 13 12:37:58.260235 master-0 kubenswrapper[7518]: I0313 12:37:58.258724 7518 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-57ccdf9b5-7pcdp" event={"ID":"f31565e2-c211-4d28-8bbc-d7a951023a8b","Type":"ContainerStarted","Data":"638f4d44cbed23892d2fd93a5f69d4f1ae5c078a3571d169c09c2d7976d88a12"}
Mar 13 12:37:58.260235 master-0 kubenswrapper[7518]: I0313 12:37:58.259419 7518 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-7cbf874688-d4wjw"
Mar 13 12:37:58.261092 master-0 kubenswrapper[7518]: I0313 12:37:58.261081 7518 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-64488f9d78-t8fb4"
Mar 13 12:37:58.265112 master-0 kubenswrapper[7518]: I0313 12:37:58.262328 7518 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert"
Mar 13 12:37:58.265112 master-0 kubenswrapper[7518]: I0313 12:37:58.262637 7518 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config"
Mar 13 12:37:58.265112 master-0 kubenswrapper[7518]: I0313 12:37:58.263292 7518 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-0"
Mar 13 12:37:58.265112 master-0 kubenswrapper[7518]: I0313 12:37:58.263433 7518 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca"
Mar 13 12:37:58.265112 master-0 kubenswrapper[7518]: I0313 12:37:58.263530 7518 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt"
Mar 13 12:37:58.265112 master-0 kubenswrapper[7518]: I0313 12:37:58.263769 7518 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt"
Mar 13 12:37:58.265112 master-0 kubenswrapper[7518]: I0313 12:37:58.263882 7518 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-0"
Mar 13 12:37:58.265112 master-0 kubenswrapper[7518]: I0313 12:37:58.264001 7518 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle"
Mar 13 12:37:58.265112 master-0 kubenswrapper[7518]: I0313 12:37:58.264010 7518 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client"
Mar 13 12:37:58.265112 master-0 kubenswrapper[7518]: I0313 12:37:58.264089 7518 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca"
Mar 13 12:37:58.318199 master-0 kubenswrapper[7518]: I0313 12:37:58.308030 7518 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-7cbf874688-d4wjw"]
Mar 13 12:37:58.329413 master-0 kubenswrapper[7518]: I0313 12:37:58.321957 7518 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator/migrator-57ccdf9b5-7pcdp" podStartSLOduration=3.526392802 podStartE2EDuration="12.321924452s" podCreationTimestamp="2026-03-13 12:37:46 +0000 UTC" firstStartedPulling="2026-03-13 12:37:48.34481688 +0000 UTC m=+22.977886067" lastFinishedPulling="2026-03-13 12:37:57.14034853 +0000 UTC m=+31.773417717" observedRunningTime="2026-03-13 12:37:58.321553077 +0000 UTC m=+32.954622264" watchObservedRunningTime="2026-03-13 12:37:58.321924452 +0000 UTC m=+32.954993639"
Mar 13 12:37:58.360159 master-0 kubenswrapper[7518]: I0313 12:37:58.359723 7518 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/06ec7805-846f-4256-894e-7638c7ad85a3-serving-cert\") pod \"apiserver-7cbf874688-d4wjw\" (UID: \"06ec7805-846f-4256-894e-7638c7ad85a3\") " pod="openshift-apiserver/apiserver-7cbf874688-d4wjw"
Mar 13 12:37:58.360159 master-0 kubenswrapper[7518]: I0313 12:37:58.359881 7518 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/06ec7805-846f-4256-894e-7638c7ad85a3-config\") pod \"apiserver-7cbf874688-d4wjw\" (UID: \"06ec7805-846f-4256-894e-7638c7ad85a3\") " pod="openshift-apiserver/apiserver-7cbf874688-d4wjw"
Mar 13 12:37:58.360159 master-0 kubenswrapper[7518]: I0313 12:37:58.359913 7518 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/06ec7805-846f-4256-894e-7638c7ad85a3-audit\") pod \"apiserver-7cbf874688-d4wjw\" (UID: \"06ec7805-846f-4256-894e-7638c7ad85a3\") " pod="openshift-apiserver/apiserver-7cbf874688-d4wjw"
Mar 13 12:37:58.360159 master-0 kubenswrapper[7518]: I0313 12:37:58.359941 7518 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/06ec7805-846f-4256-894e-7638c7ad85a3-etcd-client\") pod \"apiserver-7cbf874688-d4wjw\" (UID: \"06ec7805-846f-4256-894e-7638c7ad85a3\") " pod="openshift-apiserver/apiserver-7cbf874688-d4wjw"
Mar 13 12:37:58.360159 master-0 kubenswrapper[7518]: I0313 12:37:58.359987 7518 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/06ec7805-846f-4256-894e-7638c7ad85a3-image-import-ca\") pod \"apiserver-7cbf874688-d4wjw\" (UID: \"06ec7805-846f-4256-894e-7638c7ad85a3\") " pod="openshift-apiserver/apiserver-7cbf874688-d4wjw"
Mar 13 12:37:58.360159 master-0 kubenswrapper[7518]: I0313 12:37:58.360062 7518 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/06ec7805-846f-4256-894e-7638c7ad85a3-audit-dir\") pod \"apiserver-7cbf874688-d4wjw\" (UID: \"06ec7805-846f-4256-894e-7638c7ad85a3\") " pod="openshift-apiserver/apiserver-7cbf874688-d4wjw"
Mar 13 12:37:58.360159 master-0 kubenswrapper[7518]: I0313 12:37:58.360089 7518 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9qsbn\" (UniqueName: \"kubernetes.io/projected/06ec7805-846f-4256-894e-7638c7ad85a3-kube-api-access-9qsbn\") pod \"apiserver-7cbf874688-d4wjw\" (UID: \"06ec7805-846f-4256-894e-7638c7ad85a3\") " pod="openshift-apiserver/apiserver-7cbf874688-d4wjw"
Mar 13 12:37:58.360159 master-0 kubenswrapper[7518]: I0313 12:37:58.360147 7518 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/06ec7805-846f-4256-894e-7638c7ad85a3-encryption-config\") pod \"apiserver-7cbf874688-d4wjw\" (UID: \"06ec7805-846f-4256-894e-7638c7ad85a3\") " pod="openshift-apiserver/apiserver-7cbf874688-d4wjw"
Mar 13 12:37:58.360651 master-0 kubenswrapper[7518]: I0313 12:37:58.360181 7518 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/06ec7805-846f-4256-894e-7638c7ad85a3-node-pullsecrets\") pod \"apiserver-7cbf874688-d4wjw\" (UID: \"06ec7805-846f-4256-894e-7638c7ad85a3\") " pod="openshift-apiserver/apiserver-7cbf874688-d4wjw"
Mar 13 12:37:58.360651 master-0 kubenswrapper[7518]: I0313 12:37:58.360263 7518 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/06ec7805-846f-4256-894e-7638c7ad85a3-etcd-serving-ca\") pod \"apiserver-7cbf874688-d4wjw\" (UID: \"06ec7805-846f-4256-894e-7638c7ad85a3\") " pod="openshift-apiserver/apiserver-7cbf874688-d4wjw"
Mar 13 12:37:58.360651 master-0 kubenswrapper[7518]: I0313 12:37:58.360304 7518 reconciler_common.go:245]
"operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/06ec7805-846f-4256-894e-7638c7ad85a3-trusted-ca-bundle\") pod \"apiserver-7cbf874688-d4wjw\" (UID: \"06ec7805-846f-4256-894e-7638c7ad85a3\") " pod="openshift-apiserver/apiserver-7cbf874688-d4wjw"
Mar 13 12:37:58.462879 master-0 kubenswrapper[7518]: I0313 12:37:58.462803 7518 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/06ec7805-846f-4256-894e-7638c7ad85a3-trusted-ca-bundle\") pod \"apiserver-7cbf874688-d4wjw\" (UID: \"06ec7805-846f-4256-894e-7638c7ad85a3\") " pod="openshift-apiserver/apiserver-7cbf874688-d4wjw"
Mar 13 12:37:58.462879 master-0 kubenswrapper[7518]: I0313 12:37:58.462890 7518 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/06ec7805-846f-4256-894e-7638c7ad85a3-serving-cert\") pod \"apiserver-7cbf874688-d4wjw\" (UID: \"06ec7805-846f-4256-894e-7638c7ad85a3\") " pod="openshift-apiserver/apiserver-7cbf874688-d4wjw"
Mar 13 12:37:58.463169 master-0 kubenswrapper[7518]: I0313 12:37:58.462953 7518 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/06ec7805-846f-4256-894e-7638c7ad85a3-config\") pod \"apiserver-7cbf874688-d4wjw\" (UID: \"06ec7805-846f-4256-894e-7638c7ad85a3\") " pod="openshift-apiserver/apiserver-7cbf874688-d4wjw"
Mar 13 12:37:58.463169 master-0 kubenswrapper[7518]: I0313 12:37:58.462970 7518 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/06ec7805-846f-4256-894e-7638c7ad85a3-audit\") pod \"apiserver-7cbf874688-d4wjw\" (UID: \"06ec7805-846f-4256-894e-7638c7ad85a3\") " pod="openshift-apiserver/apiserver-7cbf874688-d4wjw"
Mar 13 12:37:58.463169 master-0 kubenswrapper[7518]: I0313 12:37:58.462984 7518 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/06ec7805-846f-4256-894e-7638c7ad85a3-etcd-client\") pod \"apiserver-7cbf874688-d4wjw\" (UID: \"06ec7805-846f-4256-894e-7638c7ad85a3\") " pod="openshift-apiserver/apiserver-7cbf874688-d4wjw"
Mar 13 12:37:58.463169 master-0 kubenswrapper[7518]: I0313 12:37:58.463009 7518 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/06ec7805-846f-4256-894e-7638c7ad85a3-image-import-ca\") pod \"apiserver-7cbf874688-d4wjw\" (UID: \"06ec7805-846f-4256-894e-7638c7ad85a3\") " pod="openshift-apiserver/apiserver-7cbf874688-d4wjw"
Mar 13 12:37:58.463169 master-0 kubenswrapper[7518]: I0313 12:37:58.463027 7518 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/2f79578c-bbfb-4968-893a-730deb4c01f9-metrics-tls\") pod \"ingress-operator-677db989d6-ckl2j\" (UID: \"2f79578c-bbfb-4968-893a-730deb4c01f9\") " pod="openshift-ingress-operator/ingress-operator-677db989d6-ckl2j"
Mar 13 12:37:58.463169 master-0 kubenswrapper[7518]: I0313 12:37:58.463045 7518 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/10944f9c-8ce9-44e6-9c36-a0ea19d8cae3-srv-cert\") pod \"catalog-operator-7d9c49f57b-tlnkd\" (UID: \"10944f9c-8ce9-44e6-9c36-a0ea19d8cae3\") " pod="openshift-operator-lifecycle-manager/catalog-operator-7d9c49f57b-tlnkd"
Mar 13 12:37:58.463169 master-0 kubenswrapper[7518]: I0313 12:37:58.463061 7518 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/06ec7805-846f-4256-894e-7638c7ad85a3-audit-dir\") pod \"apiserver-7cbf874688-d4wjw\" (UID: \"06ec7805-846f-4256-894e-7638c7ad85a3\") " pod="openshift-apiserver/apiserver-7cbf874688-d4wjw"
Mar 13 12:37:58.463169 master-0 kubenswrapper[7518]: I0313 12:37:58.463077 7518 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9qsbn\" (UniqueName: \"kubernetes.io/projected/06ec7805-846f-4256-894e-7638c7ad85a3-kube-api-access-9qsbn\") pod \"apiserver-7cbf874688-d4wjw\" (UID: \"06ec7805-846f-4256-894e-7638c7ad85a3\") " pod="openshift-apiserver/apiserver-7cbf874688-d4wjw"
Mar 13 12:37:58.463169 master-0 kubenswrapper[7518]: I0313 12:37:58.463093 7518 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/06ec7805-846f-4256-894e-7638c7ad85a3-encryption-config\") pod \"apiserver-7cbf874688-d4wjw\" (UID: \"06ec7805-846f-4256-894e-7638c7ad85a3\") " pod="openshift-apiserver/apiserver-7cbf874688-d4wjw"
Mar 13 12:37:58.463169 master-0 kubenswrapper[7518]: I0313 12:37:58.463114 7518 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/06ec7805-846f-4256-894e-7638c7ad85a3-node-pullsecrets\") pod \"apiserver-7cbf874688-d4wjw\" (UID: \"06ec7805-846f-4256-894e-7638c7ad85a3\") " pod="openshift-apiserver/apiserver-7cbf874688-d4wjw"
Mar 13 12:37:58.463169 master-0 kubenswrapper[7518]: I0313 12:37:58.463160 7518 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f39d7f76-0075-44c3-9101-eb2607cb176a-serving-cert\") pod \"cluster-version-operator-745944c6b7-mbjxt\" (UID: \"f39d7f76-0075-44c3-9101-eb2607cb176a\") " pod="openshift-cluster-version/cluster-version-operator-745944c6b7-mbjxt"
Mar 13 12:37:58.463627 master-0 kubenswrapper[7518]: I0313 12:37:58.463196 7518 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/06ec7805-846f-4256-894e-7638c7ad85a3-etcd-serving-ca\") pod \"apiserver-7cbf874688-d4wjw\" (UID: \"06ec7805-846f-4256-894e-7638c7ad85a3\") " pod="openshift-apiserver/apiserver-7cbf874688-d4wjw"
Mar 13 12:37:58.463974 master-0 kubenswrapper[7518]: I0313 12:37:58.463939 7518 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/06ec7805-846f-4256-894e-7638c7ad85a3-etcd-serving-ca\") pod \"apiserver-7cbf874688-d4wjw\" (UID: \"06ec7805-846f-4256-894e-7638c7ad85a3\") " pod="openshift-apiserver/apiserver-7cbf874688-d4wjw"
Mar 13 12:37:58.464300 master-0 kubenswrapper[7518]: I0313 12:37:58.464269 7518 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/06ec7805-846f-4256-894e-7638c7ad85a3-trusted-ca-bundle\") pod \"apiserver-7cbf874688-d4wjw\" (UID: \"06ec7805-846f-4256-894e-7638c7ad85a3\") " pod="openshift-apiserver/apiserver-7cbf874688-d4wjw"
Mar 13 12:37:58.464369 master-0 kubenswrapper[7518]: E0313 12:37:58.464344 7518 secret.go:189] Couldn't get secret openshift-apiserver/serving-cert: secret "serving-cert" not found
Mar 13 12:37:58.464420 master-0 kubenswrapper[7518]: E0313 12:37:58.464382 7518 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/06ec7805-846f-4256-894e-7638c7ad85a3-serving-cert podName:06ec7805-846f-4256-894e-7638c7ad85a3 nodeName:}" failed. No retries permitted until 2026-03-13 12:37:58.964371069 +0000 UTC m=+33.597440256 (durationBeforeRetry 500ms).
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/06ec7805-846f-4256-894e-7638c7ad85a3-serving-cert") pod "apiserver-7cbf874688-d4wjw" (UID: "06ec7805-846f-4256-894e-7638c7ad85a3") : secret "serving-cert" not found
Mar 13 12:37:58.465024 master-0 kubenswrapper[7518]: I0313 12:37:58.464986 7518 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/06ec7805-846f-4256-894e-7638c7ad85a3-config\") pod \"apiserver-7cbf874688-d4wjw\" (UID: \"06ec7805-846f-4256-894e-7638c7ad85a3\") " pod="openshift-apiserver/apiserver-7cbf874688-d4wjw"
Mar 13 12:37:58.465096 master-0 kubenswrapper[7518]: E0313 12:37:58.465038 7518 configmap.go:193] Couldn't get configMap openshift-apiserver/audit-0: configmap "audit-0" not found
Mar 13 12:37:58.465096 master-0 kubenswrapper[7518]: E0313 12:37:58.465061 7518 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/06ec7805-846f-4256-894e-7638c7ad85a3-audit podName:06ec7805-846f-4256-894e-7638c7ad85a3 nodeName:}" failed. No retries permitted until 2026-03-13 12:37:58.965054019 +0000 UTC m=+33.598123206 (durationBeforeRetry 500ms).
Error: MountVolume.SetUp failed for volume "audit" (UniqueName: "kubernetes.io/configmap/06ec7805-846f-4256-894e-7638c7ad85a3-audit") pod "apiserver-7cbf874688-d4wjw" (UID: "06ec7805-846f-4256-894e-7638c7ad85a3") : configmap "audit-0" not found
Mar 13 12:37:58.469474 master-0 kubenswrapper[7518]: I0313 12:37:58.468956 7518 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/06ec7805-846f-4256-894e-7638c7ad85a3-image-import-ca\") pod \"apiserver-7cbf874688-d4wjw\" (UID: \"06ec7805-846f-4256-894e-7638c7ad85a3\") " pod="openshift-apiserver/apiserver-7cbf874688-d4wjw"
Mar 13 12:37:58.472440 master-0 kubenswrapper[7518]: I0313 12:37:58.472395 7518 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/06ec7805-846f-4256-894e-7638c7ad85a3-etcd-client\") pod \"apiserver-7cbf874688-d4wjw\" (UID: \"06ec7805-846f-4256-894e-7638c7ad85a3\") " pod="openshift-apiserver/apiserver-7cbf874688-d4wjw"
Mar 13 12:37:58.474384 master-0 kubenswrapper[7518]: E0313 12:37:58.474348 7518 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/catalog-operator-serving-cert: secret "catalog-operator-serving-cert" not found
Mar 13 12:37:58.474489 master-0 kubenswrapper[7518]: E0313 12:37:58.474434 7518 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/10944f9c-8ce9-44e6-9c36-a0ea19d8cae3-srv-cert podName:10944f9c-8ce9-44e6-9c36-a0ea19d8cae3 nodeName:}" failed. No retries permitted until 2026-03-13 12:38:30.474411742 +0000 UTC m=+65.107480929 (durationBeforeRetry 32s).
Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/10944f9c-8ce9-44e6-9c36-a0ea19d8cae3-srv-cert") pod "catalog-operator-7d9c49f57b-tlnkd" (UID: "10944f9c-8ce9-44e6-9c36-a0ea19d8cae3") : secret "catalog-operator-serving-cert" not found
Mar 13 12:37:58.474547 master-0 kubenswrapper[7518]: I0313 12:37:58.474505 7518 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/06ec7805-846f-4256-894e-7638c7ad85a3-audit-dir\") pod \"apiserver-7cbf874688-d4wjw\" (UID: \"06ec7805-846f-4256-894e-7638c7ad85a3\") " pod="openshift-apiserver/apiserver-7cbf874688-d4wjw"
Mar 13 12:37:58.474593 master-0 kubenswrapper[7518]: I0313 12:37:58.474583 7518 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/06ec7805-846f-4256-894e-7638c7ad85a3-node-pullsecrets\") pod \"apiserver-7cbf874688-d4wjw\" (UID: \"06ec7805-846f-4256-894e-7638c7ad85a3\") " pod="openshift-apiserver/apiserver-7cbf874688-d4wjw"
Mar 13 12:37:58.478742 master-0 kubenswrapper[7518]: I0313 12:37:58.478691 7518 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/06ec7805-846f-4256-894e-7638c7ad85a3-encryption-config\") pod \"apiserver-7cbf874688-d4wjw\" (UID: \"06ec7805-846f-4256-894e-7638c7ad85a3\") " pod="openshift-apiserver/apiserver-7cbf874688-d4wjw"
Mar 13 12:37:58.478742 master-0 kubenswrapper[7518]: I0313 12:37:58.478694 7518 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/2f79578c-bbfb-4968-893a-730deb4c01f9-metrics-tls\") pod \"ingress-operator-677db989d6-ckl2j\" (UID: \"2f79578c-bbfb-4968-893a-730deb4c01f9\") " pod="openshift-ingress-operator/ingress-operator-677db989d6-ckl2j"
Mar 13 12:37:58.479743 master-0 kubenswrapper[7518]: I0313 12:37:58.479695 7518 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f39d7f76-0075-44c3-9101-eb2607cb176a-serving-cert\") pod \"cluster-version-operator-745944c6b7-mbjxt\" (UID: \"f39d7f76-0075-44c3-9101-eb2607cb176a\") " pod="openshift-cluster-version/cluster-version-operator-745944c6b7-mbjxt"
Mar 13 12:37:58.564648 master-0 kubenswrapper[7518]: I0313 12:37:58.564597 7518 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/3020d236-03e0-4916-97dd-f1085632ca43-apiservice-cert\") pod \"cluster-node-tuning-operator-66c7586884-cz8pc\" (UID: \"3020d236-03e0-4916-97dd-f1085632ca43\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-66c7586884-cz8pc"
Mar 13 12:37:58.565586 master-0 kubenswrapper[7518]: I0313 12:37:58.565542 7518 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bcf05594-4c10-4b54-a47c-d55e323f1f87-image-registry-operator-tls\") pod \"cluster-image-registry-operator-86d6d77c7c-q287n\" (UID: \"bcf05594-4c10-4b54-a47c-d55e323f1f87\") " pod="openshift-image-registry/cluster-image-registry-operator-86d6d77c7c-q287n"
Mar 13 12:37:58.565586 master-0 kubenswrapper[7518]: I0313 12:37:58.565573 7518 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/13f32761-b386-4f93-b3c0-b16ea53d338a-metrics-tls\") pod \"dns-operator-589895fbb7-mmwk7\" (UID: \"13f32761-b386-4f93-b3c0-b16ea53d338a\") " pod="openshift-dns-operator/dns-operator-589895fbb7-mmwk7"
Mar 13 12:37:58.565691 master-0 kubenswrapper[7518]: I0313 12:37:58.565598 7518 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/29b6aa89-0416-4595-9deb-10b290521d86-metrics-certs\") pod \"network-metrics-daemon-r9lmb\" (UID:
\"29b6aa89-0416-4595-9deb-10b290521d86\") " pod="openshift-multus/network-metrics-daemon-r9lmb"
Mar 13 12:37:58.565691 master-0 kubenswrapper[7518]: I0313 12:37:58.565621 7518 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/604456a0-4997-43bc-87ef-283a002111fe-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-674cbfbd9d-zwtdz\" (UID: \"604456a0-4997-43bc-87ef-283a002111fe\") " pod="openshift-monitoring/cluster-monitoring-operator-674cbfbd9d-zwtdz"
Mar 13 12:37:58.565691 master-0 kubenswrapper[7518]: I0313 12:37:58.565654 7518 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/d5a19b80-d488-46d3-a4a8-0b80361077e1-srv-cert\") pod \"olm-operator-d64cfc9db-rfqb9\" (UID: \"d5a19b80-d488-46d3-a4a8-0b80361077e1\") " pod="openshift-operator-lifecycle-manager/olm-operator-d64cfc9db-rfqb9"
Mar 13 12:37:58.566062 master-0 kubenswrapper[7518]: I0313 12:37:58.566008 7518 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3d653e1a-5903-4a02-9357-df145f028c0d-package-server-manager-serving-cert\") pod \"package-server-manager-854648ff6d-669qk\" (UID: \"3d653e1a-5903-4a02-9357-df145f028c0d\") " pod="openshift-operator-lifecycle-manager/package-server-manager-854648ff6d-669qk"
Mar 13 12:37:58.566062 master-0 kubenswrapper[7518]: I0313 12:37:58.566040 7518 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/d3d998ee-b26f-4e30-83bc-f94f8c68060a-marketplace-operator-metrics\") pod \"marketplace-operator-64bf9778cb-7qhr4\" (UID: \"d3d998ee-b26f-4e30-83bc-f94f8c68060a\") " pod="openshift-marketplace/marketplace-operator-64bf9778cb-7qhr4"
Mar 13 12:37:58.566062 master-0 kubenswrapper[7518]: I0313 12:37:58.566061 7518 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-tuning-operator-tls\" (UniqueName: \"kubernetes.io/secret/3020d236-03e0-4916-97dd-f1085632ca43-node-tuning-operator-tls\") pod \"cluster-node-tuning-operator-66c7586884-cz8pc\" (UID: \"3020d236-03e0-4916-97dd-f1085632ca43\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-66c7586884-cz8pc"
Mar 13 12:37:58.566340 master-0 kubenswrapper[7518]: I0313 12:37:58.566087 7518 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/4c0b18db-06ad-4d58-a353-f6fd96309dea-webhook-certs\") pod \"multus-admission-controller-8d675b596-96gds\" (UID: \"4c0b18db-06ad-4d58-a353-f6fd96309dea\") " pod="openshift-multus/multus-admission-controller-8d675b596-96gds"
Mar 13 12:37:58.566547 master-0 kubenswrapper[7518]: E0313 12:37:58.566497 7518 secret.go:189] Couldn't get secret openshift-monitoring/cluster-monitoring-operator-tls: secret "cluster-monitoring-operator-tls" not found
Mar 13 12:37:58.566601 master-0 kubenswrapper[7518]: E0313 12:37:58.566587 7518 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/604456a0-4997-43bc-87ef-283a002111fe-cluster-monitoring-operator-tls podName:604456a0-4997-43bc-87ef-283a002111fe nodeName:}" failed. No retries permitted until 2026-03-13 12:38:30.5665686 +0000 UTC m=+65.199637787 (durationBeforeRetry 32s).
Error: MountVolume.SetUp failed for volume "cluster-monitoring-operator-tls" (UniqueName: "kubernetes.io/secret/604456a0-4997-43bc-87ef-283a002111fe-cluster-monitoring-operator-tls") pod "cluster-monitoring-operator-674cbfbd9d-zwtdz" (UID: "604456a0-4997-43bc-87ef-283a002111fe") : secret "cluster-monitoring-operator-tls" not found
Mar 13 12:37:58.567602 master-0 kubenswrapper[7518]: E0313 12:37:58.567573 7518 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: secret "metrics-daemon-secret" not found
Mar 13 12:37:58.567662 master-0 kubenswrapper[7518]: E0313 12:37:58.567622 7518 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/29b6aa89-0416-4595-9deb-10b290521d86-metrics-certs podName:29b6aa89-0416-4595-9deb-10b290521d86 nodeName:}" failed. No retries permitted until 2026-03-13 12:38:30.567609555 +0000 UTC m=+65.200678792 (durationBeforeRetry 32s).
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/29b6aa89-0416-4595-9deb-10b290521d86-metrics-certs") pod "network-metrics-daemon-r9lmb" (UID: "29b6aa89-0416-4595-9deb-10b290521d86") : secret "metrics-daemon-secret" not found
Mar 13 12:37:58.567700 master-0 kubenswrapper[7518]: E0313 12:37:58.567683 7518 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/package-server-manager-serving-cert: secret "package-server-manager-serving-cert" not found
Mar 13 12:37:58.567729 master-0 kubenswrapper[7518]: E0313 12:37:58.567712 7518 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3d653e1a-5903-4a02-9357-df145f028c0d-package-server-manager-serving-cert podName:3d653e1a-5903-4a02-9357-df145f028c0d nodeName:}" failed. No retries permitted until 2026-03-13 12:38:30.567702956 +0000 UTC m=+65.200772143 (durationBeforeRetry 32s).
Error: MountVolume.SetUp failed for volume "package-server-manager-serving-cert" (UniqueName: "kubernetes.io/secret/3d653e1a-5903-4a02-9357-df145f028c0d-package-server-manager-serving-cert") pod "package-server-manager-854648ff6d-669qk" (UID: "3d653e1a-5903-4a02-9357-df145f028c0d") : secret "package-server-manager-serving-cert" not found
Mar 13 12:37:58.567770 master-0 kubenswrapper[7518]: E0313 12:37:58.567764 7518 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/olm-operator-serving-cert: secret "olm-operator-serving-cert" not found
Mar 13 12:37:58.567799 master-0 kubenswrapper[7518]: E0313 12:37:58.567789 7518 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d5a19b80-d488-46d3-a4a8-0b80361077e1-srv-cert podName:d5a19b80-d488-46d3-a4a8-0b80361077e1 nodeName:}" failed. No retries permitted until 2026-03-13 12:38:30.567780927 +0000 UTC m=+65.200850194 (durationBeforeRetry 32s).
Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/d5a19b80-d488-46d3-a4a8-0b80361077e1-srv-cert") pod "olm-operator-d64cfc9db-rfqb9" (UID: "d5a19b80-d488-46d3-a4a8-0b80361077e1") : secret "olm-operator-serving-cert" not found
Mar 13 12:37:58.568533 master-0 kubenswrapper[7518]: I0313 12:37:58.568504 7518 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/3020d236-03e0-4916-97dd-f1085632ca43-apiservice-cert\") pod \"cluster-node-tuning-operator-66c7586884-cz8pc\" (UID: \"3020d236-03e0-4916-97dd-f1085632ca43\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-66c7586884-cz8pc"
Mar 13 12:37:58.571017 master-0 kubenswrapper[7518]: I0313 12:37:58.570976 7518 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bcf05594-4c10-4b54-a47c-d55e323f1f87-image-registry-operator-tls\") pod
\"cluster-image-registry-operator-86d6d77c7c-q287n\" (UID: \"bcf05594-4c10-4b54-a47c-d55e323f1f87\") " pod="openshift-image-registry/cluster-image-registry-operator-86d6d77c7c-q287n"
Mar 13 12:37:58.571299 master-0 kubenswrapper[7518]: I0313 12:37:58.571270 7518 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/4c0b18db-06ad-4d58-a353-f6fd96309dea-webhook-certs\") pod \"multus-admission-controller-8d675b596-96gds\" (UID: \"4c0b18db-06ad-4d58-a353-f6fd96309dea\") " pod="openshift-multus/multus-admission-controller-8d675b596-96gds"
Mar 13 12:37:58.574207 master-0 kubenswrapper[7518]: I0313 12:37:58.572262 7518 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/d3d998ee-b26f-4e30-83bc-f94f8c68060a-marketplace-operator-metrics\") pod \"marketplace-operator-64bf9778cb-7qhr4\" (UID: \"d3d998ee-b26f-4e30-83bc-f94f8c68060a\") " pod="openshift-marketplace/marketplace-operator-64bf9778cb-7qhr4"
Mar 13 12:37:58.574207 master-0 kubenswrapper[7518]: I0313 12:37:58.573441 7518 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/13f32761-b386-4f93-b3c0-b16ea53d338a-metrics-tls\") pod \"dns-operator-589895fbb7-mmwk7\" (UID: \"13f32761-b386-4f93-b3c0-b16ea53d338a\") " pod="openshift-dns-operator/dns-operator-589895fbb7-mmwk7"
Mar 13 12:37:58.575695 master-0 kubenswrapper[7518]: I0313 12:37:58.575659 7518 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-tuning-operator-tls\" (UniqueName: \"kubernetes.io/secret/3020d236-03e0-4916-97dd-f1085632ca43-node-tuning-operator-tls\") pod \"cluster-node-tuning-operator-66c7586884-cz8pc\" (UID: \"3020d236-03e0-4916-97dd-f1085632ca43\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-66c7586884-cz8pc"
Mar 13 12:37:58.630688 master-0 kubenswrapper[7518]: I0313 12:37:58.630605 7518 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9qsbn\" (UniqueName: \"kubernetes.io/projected/06ec7805-846f-4256-894e-7638c7ad85a3-kube-api-access-9qsbn\") pod \"apiserver-7cbf874688-d4wjw\" (UID: \"06ec7805-846f-4256-894e-7638c7ad85a3\") " pod="openshift-apiserver/apiserver-7cbf874688-d4wjw"
Mar 13 12:37:58.720936 master-0 kubenswrapper[7518]: I0313 12:37:58.719457 7518 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-64bf9778cb-7qhr4"
Mar 13 12:37:58.720936 master-0 kubenswrapper[7518]: I0313 12:37:58.720372 7518 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-589895fbb7-mmwk7"
Mar 13 12:37:58.720936 master-0 kubenswrapper[7518]: I0313 12:37:58.720824 7518 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-745944c6b7-mbjxt"
Mar 13 12:37:58.721269 master-0 kubenswrapper[7518]: I0313 12:37:58.721058 7518 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-86d6d77c7c-q287n"
Mar 13 12:37:58.721318 master-0 kubenswrapper[7518]: I0313 12:37:58.721287 7518 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-8d675b596-96gds"
Mar 13 12:37:58.723325 master-0 kubenswrapper[7518]: I0313 12:37:58.721508 7518 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-66c7586884-cz8pc"
Mar 13 12:37:58.723325 master-0 kubenswrapper[7518]: I0313 12:37:58.722392 7518 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-677db989d6-ckl2j"
Mar 13 12:37:58.839325 master-0 kubenswrapper[7518]: I0313 12:37:58.834979 7518 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler/installer-1-master-0"]
Mar 13 12:37:58.845063 master-0 kubenswrapper[7518]: I0313 12:37:58.844380 7518 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-catalogd/catalogd-controller-manager-7f8b8b6f4c-8fjzg"]
Mar 13 12:37:58.856487 master-0 kubenswrapper[7518]: W0313 12:37:58.856446 7518 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod00ebdf06_1f44_40cd_87e5_54195188b6d4.slice/crio-0b83ebe9d6eac21a54c3830c4cd62ad02d28ed6f976f2ea34a3538e434b5beb0 WatchSource:0}: Error finding container 0b83ebe9d6eac21a54c3830c4cd62ad02d28ed6f976f2ea34a3538e434b5beb0: Status 404 returned error can't find the container with id 0b83ebe9d6eac21a54c3830c4cd62ad02d28ed6f976f2ea34a3538e434b5beb0
Mar 13 12:37:58.880009 master-0 kubenswrapper[7518]: I0313 12:37:58.878815 7518 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-controller/operator-controller-controller-manager-6598bfb6c4-dv8rj"]
Mar 13 12:37:59.069588 master-0 kubenswrapper[7518]: I0313 12:37:59.067169 7518 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/06ec7805-846f-4256-894e-7638c7ad85a3-serving-cert\") pod \"apiserver-7cbf874688-d4wjw\" (UID: \"06ec7805-846f-4256-894e-7638c7ad85a3\") " pod="openshift-apiserver/apiserver-7cbf874688-d4wjw"
Mar 13 12:37:59.069588 master-0 kubenswrapper[7518]: I0313 12:37:59.067254 7518 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/06ec7805-846f-4256-894e-7638c7ad85a3-audit\") pod \"apiserver-7cbf874688-d4wjw\" (UID: \"06ec7805-846f-4256-894e-7638c7ad85a3\") " pod="openshift-apiserver/apiserver-7cbf874688-d4wjw"
Mar 13 12:37:59.069588 master-0 kubenswrapper[7518]: E0313 12:37:59.067438 7518 configmap.go:193] Couldn't get configMap openshift-apiserver/audit-0: configmap "audit-0" not found
Mar 13 12:37:59.069588 master-0 kubenswrapper[7518]: E0313 12:37:59.067490 7518 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/06ec7805-846f-4256-894e-7638c7ad85a3-audit podName:06ec7805-846f-4256-894e-7638c7ad85a3 nodeName:}" failed. No retries permitted until 2026-03-13 12:38:00.067474252 +0000 UTC m=+34.700543439 (durationBeforeRetry 1s).
Error: MountVolume.SetUp failed for volume "audit" (UniqueName: "kubernetes.io/configmap/06ec7805-846f-4256-894e-7638c7ad85a3-audit") pod "apiserver-7cbf874688-d4wjw" (UID: "06ec7805-846f-4256-894e-7638c7ad85a3") : configmap "audit-0" not found
Mar 13 12:37:59.069588 master-0 kubenswrapper[7518]: E0313 12:37:59.067559 7518 secret.go:189] Couldn't get secret openshift-apiserver/serving-cert: secret "serving-cert" not found
Mar 13 12:37:59.069588 master-0 kubenswrapper[7518]: E0313 12:37:59.067588 7518 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/06ec7805-846f-4256-894e-7638c7ad85a3-serving-cert podName:06ec7805-846f-4256-894e-7638c7ad85a3 nodeName:}" failed. No retries permitted until 2026-03-13 12:38:00.067579613 +0000 UTC m=+34.700648800 (durationBeforeRetry 1s).
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/06ec7805-846f-4256-894e-7638c7ad85a3-serving-cert") pod "apiserver-7cbf874688-d4wjw" (UID: "06ec7805-846f-4256-894e-7638c7ad85a3") : secret "serving-cert" not found Mar 13 12:37:59.133205 master-0 kubenswrapper[7518]: I0313 12:37:59.133128 7518 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-66c7586884-cz8pc"] Mar 13 12:37:59.139200 master-0 kubenswrapper[7518]: I0313 12:37:59.138601 7518 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-677db989d6-ckl2j"] Mar 13 12:37:59.168266 master-0 kubenswrapper[7518]: W0313 12:37:59.162283 7518 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3020d236_03e0_4916_97dd_f1085632ca43.slice/crio-c7cc0f12daf98f8c149d5ab9799aa0a44614ca17d39dc2c0de31acb11cb8513a WatchSource:0}: Error finding container c7cc0f12daf98f8c149d5ab9799aa0a44614ca17d39dc2c0de31acb11cb8513a: Status 404 returned error can't find the container with id c7cc0f12daf98f8c149d5ab9799aa0a44614ca17d39dc2c0de31acb11cb8513a Mar 13 12:37:59.175601 master-0 kubenswrapper[7518]: W0313 12:37:59.170382 7518 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2f79578c_bbfb_4968_893a_730deb4c01f9.slice/crio-e536f971c5136f6b4bf02b1c06e15888a2ce0d84bff74c72b773c7dfe08129dc WatchSource:0}: Error finding container e536f971c5136f6b4bf02b1c06e15888a2ce0d84bff74c72b773c7dfe08129dc: Status 404 returned error can't find the container with id e536f971c5136f6b4bf02b1c06e15888a2ce0d84bff74c72b773c7dfe08129dc Mar 13 12:37:59.261374 master-0 kubenswrapper[7518]: I0313 12:37:59.260620 7518 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-86d6d77c7c-q287n"] Mar 13 
12:37:59.272220 master-0 kubenswrapper[7518]: I0313 12:37:59.271838 7518 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-589895fbb7-mmwk7"] Mar 13 12:37:59.338885 master-0 kubenswrapper[7518]: I0313 12:37:59.293525 7518 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-config-operator_openshift-config-operator-64488f9d78-t8fb4_f0803181-4e37-43fa-8ddc-9c76d3f61817/openshift-config-operator/0.log" Mar 13 12:37:59.338885 master-0 kubenswrapper[7518]: I0313 12:37:59.294053 7518 generic.go:334] "Generic (PLEG): container finished" podID="f0803181-4e37-43fa-8ddc-9c76d3f61817" containerID="6b3b1b1d996a5cfa81d2f82133cbb61df8d0101269e29c3d8745b628b44289f9" exitCode=255 Mar 13 12:37:59.338885 master-0 kubenswrapper[7518]: I0313 12:37:59.294159 7518 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-64488f9d78-t8fb4" event={"ID":"f0803181-4e37-43fa-8ddc-9c76d3f61817","Type":"ContainerDied","Data":"6b3b1b1d996a5cfa81d2f82133cbb61df8d0101269e29c3d8745b628b44289f9"} Mar 13 12:37:59.338885 master-0 kubenswrapper[7518]: I0313 12:37:59.335464 7518 scope.go:117] "RemoveContainer" containerID="6b3b1b1d996a5cfa81d2f82133cbb61df8d0101269e29c3d8745b628b44289f9" Mar 13 12:37:59.338885 master-0 kubenswrapper[7518]: I0313 12:37:59.338477 7518 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-1-master-0" event={"ID":"f951e49f-91f7-42d3-bc63-8117cff68d7a","Type":"ContainerStarted","Data":"5fc697237b7f9115f1d02bf0edd32bcf0859ce8f5d08322c8dce418ded8e8e34"} Mar 13 12:37:59.347436 master-0 kubenswrapper[7518]: W0313 12:37:59.347064 7518 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod13f32761_b386_4f93_b3c0_b16ea53d338a.slice/crio-e7d5143ee528d1b1b82a3ddf6b2e4a81cfc844b962f0b1dce63b2e1946f0f7b1 WatchSource:0}: Error finding container 
e7d5143ee528d1b1b82a3ddf6b2e4a81cfc844b962f0b1dce63b2e1946f0f7b1: Status 404 returned error can't find the container with id e7d5143ee528d1b1b82a3ddf6b2e4a81cfc844b962f0b1dce63b2e1946f0f7b1 Mar 13 12:37:59.349647 master-0 kubenswrapper[7518]: I0313 12:37:59.349482 7518 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-745944c6b7-mbjxt" event={"ID":"f39d7f76-0075-44c3-9101-eb2607cb176a","Type":"ContainerStarted","Data":"230dca71af5081e9c7ea82712c0cc643f5676c9c599e1fa82c984048ce54a082"} Mar 13 12:37:59.351343 master-0 kubenswrapper[7518]: I0313 12:37:59.351279 7518 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-64bf9778cb-7qhr4"] Mar 13 12:37:59.352485 master-0 kubenswrapper[7518]: I0313 12:37:59.352316 7518 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-catalogd/catalogd-controller-manager-7f8b8b6f4c-8fjzg" event={"ID":"00ebdf06-1f44-40cd-87e5-54195188b6d4","Type":"ContainerStarted","Data":"0b83ebe9d6eac21a54c3830c4cd62ad02d28ed6f976f2ea34a3538e434b5beb0"} Mar 13 12:37:59.358606 master-0 kubenswrapper[7518]: I0313 12:37:59.358558 7518 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-controller/operator-controller-controller-manager-6598bfb6c4-dv8rj" event={"ID":"915aabfe-1071-4bfc-b291-424304dfe7d8","Type":"ContainerStarted","Data":"c2b846fb7ae8217762a980bc271d109131601f29417428a6bf3bd52ed70a5227"} Mar 13 12:37:59.360034 master-0 kubenswrapper[7518]: I0313 12:37:59.360001 7518 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-66c7586884-cz8pc" event={"ID":"3020d236-03e0-4916-97dd-f1085632ca43","Type":"ContainerStarted","Data":"c7cc0f12daf98f8c149d5ab9799aa0a44614ca17d39dc2c0de31acb11cb8513a"} Mar 13 12:37:59.366624 master-0 kubenswrapper[7518]: I0313 12:37:59.366586 7518 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-ingress-operator/ingress-operator-677db989d6-ckl2j" event={"ID":"2f79578c-bbfb-4968-893a-730deb4c01f9","Type":"ContainerStarted","Data":"e536f971c5136f6b4bf02b1c06e15888a2ce0d84bff74c72b773c7dfe08129dc"} Mar 13 12:37:59.435959 master-0 kubenswrapper[7518]: I0313 12:37:59.435906 7518 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-8d675b596-96gds"] Mar 13 12:37:59.633563 master-0 kubenswrapper[7518]: I0313 12:37:59.632974 7518 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-oauth-apiserver/apiserver-6b7d89b46f-h7w7h"] Mar 13 12:37:59.634608 master-0 kubenswrapper[7518]: I0313 12:37:59.634587 7518 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-6b7d89b46f-h7w7h" Mar 13 12:37:59.653051 master-0 kubenswrapper[7518]: I0313 12:37:59.640573 7518 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Mar 13 12:37:59.653051 master-0 kubenswrapper[7518]: I0313 12:37:59.640628 7518 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Mar 13 12:37:59.653051 master-0 kubenswrapper[7518]: I0313 12:37:59.640802 7518 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Mar 13 12:37:59.653051 master-0 kubenswrapper[7518]: I0313 12:37:59.642096 7518 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Mar 13 12:37:59.653051 master-0 kubenswrapper[7518]: I0313 12:37:59.642278 7518 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-6b7d89b46f-h7w7h"] Mar 13 12:37:59.653051 master-0 kubenswrapper[7518]: I0313 12:37:59.642453 7518 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Mar 13 12:37:59.653051 master-0 
kubenswrapper[7518]: I0313 12:37:59.642580 7518 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Mar 13 12:37:59.653051 master-0 kubenswrapper[7518]: I0313 12:37:59.642697 7518 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Mar 13 12:37:59.653051 master-0 kubenswrapper[7518]: I0313 12:37:59.643048 7518 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Mar 13 12:37:59.824311 master-0 kubenswrapper[7518]: I0313 12:37:59.824126 7518 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xmbjg\" (UniqueName: \"kubernetes.io/projected/7e365323-a8e3-4102-8819-23a135e158c7-kube-api-access-xmbjg\") pod \"apiserver-6b7d89b46f-h7w7h\" (UID: \"7e365323-a8e3-4102-8819-23a135e158c7\") " pod="openshift-oauth-apiserver/apiserver-6b7d89b46f-h7w7h" Mar 13 12:37:59.824311 master-0 kubenswrapper[7518]: I0313 12:37:59.824194 7518 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/7e365323-a8e3-4102-8819-23a135e158c7-etcd-client\") pod \"apiserver-6b7d89b46f-h7w7h\" (UID: \"7e365323-a8e3-4102-8819-23a135e158c7\") " pod="openshift-oauth-apiserver/apiserver-6b7d89b46f-h7w7h" Mar 13 12:37:59.824311 master-0 kubenswrapper[7518]: I0313 12:37:59.824254 7518 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/7e365323-a8e3-4102-8819-23a135e158c7-audit-dir\") pod \"apiserver-6b7d89b46f-h7w7h\" (UID: \"7e365323-a8e3-4102-8819-23a135e158c7\") " pod="openshift-oauth-apiserver/apiserver-6b7d89b46f-h7w7h" Mar 13 12:37:59.824311 master-0 kubenswrapper[7518]: I0313 12:37:59.824289 7518 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7e365323-a8e3-4102-8819-23a135e158c7-serving-cert\") pod \"apiserver-6b7d89b46f-h7w7h\" (UID: \"7e365323-a8e3-4102-8819-23a135e158c7\") " pod="openshift-oauth-apiserver/apiserver-6b7d89b46f-h7w7h" Mar 13 12:37:59.824623 master-0 kubenswrapper[7518]: I0313 12:37:59.824340 7518 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/7e365323-a8e3-4102-8819-23a135e158c7-encryption-config\") pod \"apiserver-6b7d89b46f-h7w7h\" (UID: \"7e365323-a8e3-4102-8819-23a135e158c7\") " pod="openshift-oauth-apiserver/apiserver-6b7d89b46f-h7w7h" Mar 13 12:37:59.824623 master-0 kubenswrapper[7518]: I0313 12:37:59.824378 7518 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7e365323-a8e3-4102-8819-23a135e158c7-trusted-ca-bundle\") pod \"apiserver-6b7d89b46f-h7w7h\" (UID: \"7e365323-a8e3-4102-8819-23a135e158c7\") " pod="openshift-oauth-apiserver/apiserver-6b7d89b46f-h7w7h" Mar 13 12:37:59.824623 master-0 kubenswrapper[7518]: I0313 12:37:59.824402 7518 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/7e365323-a8e3-4102-8819-23a135e158c7-etcd-serving-ca\") pod \"apiserver-6b7d89b46f-h7w7h\" (UID: \"7e365323-a8e3-4102-8819-23a135e158c7\") " pod="openshift-oauth-apiserver/apiserver-6b7d89b46f-h7w7h" Mar 13 12:37:59.824623 master-0 kubenswrapper[7518]: I0313 12:37:59.824425 7518 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/7e365323-a8e3-4102-8819-23a135e158c7-audit-policies\") pod \"apiserver-6b7d89b46f-h7w7h\" (UID: \"7e365323-a8e3-4102-8819-23a135e158c7\") " pod="openshift-oauth-apiserver/apiserver-6b7d89b46f-h7w7h" 
Mar 13 12:37:59.925776 master-0 kubenswrapper[7518]: I0313 12:37:59.925683 7518 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xmbjg\" (UniqueName: \"kubernetes.io/projected/7e365323-a8e3-4102-8819-23a135e158c7-kube-api-access-xmbjg\") pod \"apiserver-6b7d89b46f-h7w7h\" (UID: \"7e365323-a8e3-4102-8819-23a135e158c7\") " pod="openshift-oauth-apiserver/apiserver-6b7d89b46f-h7w7h" Mar 13 12:37:59.925776 master-0 kubenswrapper[7518]: I0313 12:37:59.925735 7518 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/7e365323-a8e3-4102-8819-23a135e158c7-etcd-client\") pod \"apiserver-6b7d89b46f-h7w7h\" (UID: \"7e365323-a8e3-4102-8819-23a135e158c7\") " pod="openshift-oauth-apiserver/apiserver-6b7d89b46f-h7w7h" Mar 13 12:37:59.927121 master-0 kubenswrapper[7518]: I0313 12:37:59.926069 7518 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/7e365323-a8e3-4102-8819-23a135e158c7-audit-dir\") pod \"apiserver-6b7d89b46f-h7w7h\" (UID: \"7e365323-a8e3-4102-8819-23a135e158c7\") " pod="openshift-oauth-apiserver/apiserver-6b7d89b46f-h7w7h" Mar 13 12:37:59.927121 master-0 kubenswrapper[7518]: I0313 12:37:59.926116 7518 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7e365323-a8e3-4102-8819-23a135e158c7-serving-cert\") pod \"apiserver-6b7d89b46f-h7w7h\" (UID: \"7e365323-a8e3-4102-8819-23a135e158c7\") " pod="openshift-oauth-apiserver/apiserver-6b7d89b46f-h7w7h" Mar 13 12:37:59.927121 master-0 kubenswrapper[7518]: I0313 12:37:59.926173 7518 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/7e365323-a8e3-4102-8819-23a135e158c7-encryption-config\") pod \"apiserver-6b7d89b46f-h7w7h\" (UID: \"7e365323-a8e3-4102-8819-23a135e158c7\") " 
pod="openshift-oauth-apiserver/apiserver-6b7d89b46f-h7w7h" Mar 13 12:37:59.927121 master-0 kubenswrapper[7518]: I0313 12:37:59.926199 7518 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7e365323-a8e3-4102-8819-23a135e158c7-trusted-ca-bundle\") pod \"apiserver-6b7d89b46f-h7w7h\" (UID: \"7e365323-a8e3-4102-8819-23a135e158c7\") " pod="openshift-oauth-apiserver/apiserver-6b7d89b46f-h7w7h" Mar 13 12:37:59.927121 master-0 kubenswrapper[7518]: I0313 12:37:59.926214 7518 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/7e365323-a8e3-4102-8819-23a135e158c7-etcd-serving-ca\") pod \"apiserver-6b7d89b46f-h7w7h\" (UID: \"7e365323-a8e3-4102-8819-23a135e158c7\") " pod="openshift-oauth-apiserver/apiserver-6b7d89b46f-h7w7h" Mar 13 12:37:59.927121 master-0 kubenswrapper[7518]: I0313 12:37:59.926239 7518 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/7e365323-a8e3-4102-8819-23a135e158c7-audit-policies\") pod \"apiserver-6b7d89b46f-h7w7h\" (UID: \"7e365323-a8e3-4102-8819-23a135e158c7\") " pod="openshift-oauth-apiserver/apiserver-6b7d89b46f-h7w7h" Mar 13 12:37:59.927765 master-0 kubenswrapper[7518]: I0313 12:37:59.927171 7518 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7e365323-a8e3-4102-8819-23a135e158c7-trusted-ca-bundle\") pod \"apiserver-6b7d89b46f-h7w7h\" (UID: \"7e365323-a8e3-4102-8819-23a135e158c7\") " pod="openshift-oauth-apiserver/apiserver-6b7d89b46f-h7w7h" Mar 13 12:37:59.927765 master-0 kubenswrapper[7518]: I0313 12:37:59.927390 7518 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/7e365323-a8e3-4102-8819-23a135e158c7-audit-dir\") pod 
\"apiserver-6b7d89b46f-h7w7h\" (UID: \"7e365323-a8e3-4102-8819-23a135e158c7\") " pod="openshift-oauth-apiserver/apiserver-6b7d89b46f-h7w7h" Mar 13 12:37:59.928259 master-0 kubenswrapper[7518]: I0313 12:37:59.928209 7518 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/7e365323-a8e3-4102-8819-23a135e158c7-audit-policies\") pod \"apiserver-6b7d89b46f-h7w7h\" (UID: \"7e365323-a8e3-4102-8819-23a135e158c7\") " pod="openshift-oauth-apiserver/apiserver-6b7d89b46f-h7w7h" Mar 13 12:37:59.928884 master-0 kubenswrapper[7518]: I0313 12:37:59.928835 7518 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/7e365323-a8e3-4102-8819-23a135e158c7-etcd-serving-ca\") pod \"apiserver-6b7d89b46f-h7w7h\" (UID: \"7e365323-a8e3-4102-8819-23a135e158c7\") " pod="openshift-oauth-apiserver/apiserver-6b7d89b46f-h7w7h" Mar 13 12:37:59.934450 master-0 kubenswrapper[7518]: I0313 12:37:59.934407 7518 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/7e365323-a8e3-4102-8819-23a135e158c7-etcd-client\") pod \"apiserver-6b7d89b46f-h7w7h\" (UID: \"7e365323-a8e3-4102-8819-23a135e158c7\") " pod="openshift-oauth-apiserver/apiserver-6b7d89b46f-h7w7h" Mar 13 12:37:59.935322 master-0 kubenswrapper[7518]: I0313 12:37:59.935274 7518 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/7e365323-a8e3-4102-8819-23a135e158c7-encryption-config\") pod \"apiserver-6b7d89b46f-h7w7h\" (UID: \"7e365323-a8e3-4102-8819-23a135e158c7\") " pod="openshift-oauth-apiserver/apiserver-6b7d89b46f-h7w7h" Mar 13 12:37:59.938805 master-0 kubenswrapper[7518]: I0313 12:37:59.938781 7518 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/7e365323-a8e3-4102-8819-23a135e158c7-serving-cert\") pod \"apiserver-6b7d89b46f-h7w7h\" (UID: \"7e365323-a8e3-4102-8819-23a135e158c7\") " pod="openshift-oauth-apiserver/apiserver-6b7d89b46f-h7w7h" Mar 13 12:37:59.948499 master-0 kubenswrapper[7518]: I0313 12:37:59.948460 7518 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xmbjg\" (UniqueName: \"kubernetes.io/projected/7e365323-a8e3-4102-8819-23a135e158c7-kube-api-access-xmbjg\") pod \"apiserver-6b7d89b46f-h7w7h\" (UID: \"7e365323-a8e3-4102-8819-23a135e158c7\") " pod="openshift-oauth-apiserver/apiserver-6b7d89b46f-h7w7h" Mar 13 12:38:00.044693 master-0 kubenswrapper[7518]: I0313 12:38:00.044643 7518 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-6b7d89b46f-h7w7h" Mar 13 12:38:00.129154 master-0 kubenswrapper[7518]: I0313 12:38:00.129065 7518 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/06ec7805-846f-4256-894e-7638c7ad85a3-serving-cert\") pod \"apiserver-7cbf874688-d4wjw\" (UID: \"06ec7805-846f-4256-894e-7638c7ad85a3\") " pod="openshift-apiserver/apiserver-7cbf874688-d4wjw" Mar 13 12:38:00.129367 master-0 kubenswrapper[7518]: I0313 12:38:00.129190 7518 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/06ec7805-846f-4256-894e-7638c7ad85a3-audit\") pod \"apiserver-7cbf874688-d4wjw\" (UID: \"06ec7805-846f-4256-894e-7638c7ad85a3\") " pod="openshift-apiserver/apiserver-7cbf874688-d4wjw" Mar 13 12:38:00.129417 master-0 kubenswrapper[7518]: E0313 12:38:00.129354 7518 secret.go:189] Couldn't get secret openshift-apiserver/serving-cert: secret "serving-cert" not found Mar 13 12:38:00.129492 master-0 kubenswrapper[7518]: E0313 12:38:00.129448 7518 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/secret/06ec7805-846f-4256-894e-7638c7ad85a3-serving-cert podName:06ec7805-846f-4256-894e-7638c7ad85a3 nodeName:}" failed. No retries permitted until 2026-03-13 12:38:02.129422694 +0000 UTC m=+36.762491881 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/06ec7805-846f-4256-894e-7638c7ad85a3-serving-cert") pod "apiserver-7cbf874688-d4wjw" (UID: "06ec7805-846f-4256-894e-7638c7ad85a3") : secret "serving-cert" not found Mar 13 12:38:00.129570 master-0 kubenswrapper[7518]: E0313 12:38:00.129536 7518 configmap.go:193] Couldn't get configMap openshift-apiserver/audit-0: configmap "audit-0" not found Mar 13 12:38:00.129620 master-0 kubenswrapper[7518]: E0313 12:38:00.129608 7518 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/06ec7805-846f-4256-894e-7638c7ad85a3-audit podName:06ec7805-846f-4256-894e-7638c7ad85a3 nodeName:}" failed. No retries permitted until 2026-03-13 12:38:02.129590387 +0000 UTC m=+36.762659654 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "audit" (UniqueName: "kubernetes.io/configmap/06ec7805-846f-4256-894e-7638c7ad85a3-audit") pod "apiserver-7cbf874688-d4wjw" (UID: "06ec7805-846f-4256-894e-7638c7ad85a3") : configmap "audit-0" not found Mar 13 12:38:00.346797 master-0 kubenswrapper[7518]: I0313 12:38:00.346485 7518 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-6b7d89b46f-h7w7h"] Mar 13 12:38:00.362584 master-0 kubenswrapper[7518]: W0313 12:38:00.362535 7518 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7e365323_a8e3_4102_8819_23a135e158c7.slice/crio-f728b042127fc1c79a9d4fdf48b06a17319eb1d69920d155c5c7b4d2f599383d WatchSource:0}: Error finding container f728b042127fc1c79a9d4fdf48b06a17319eb1d69920d155c5c7b4d2f599383d: Status 404 returned error can't find the container with id f728b042127fc1c79a9d4fdf48b06a17319eb1d69920d155c5c7b4d2f599383d Mar 13 12:38:00.383035 master-0 kubenswrapper[7518]: I0313 12:38:00.382970 7518 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-86d6d77c7c-q287n" event={"ID":"bcf05594-4c10-4b54-a47c-d55e323f1f87","Type":"ContainerStarted","Data":"914d6236fd6885067cb3f7c4a3330427cd513d826dd28ffcdcc4fb60809af1e7"} Mar 13 12:38:00.384115 master-0 kubenswrapper[7518]: I0313 12:38:00.384059 7518 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-6b7d89b46f-h7w7h" event={"ID":"7e365323-a8e3-4102-8819-23a135e158c7","Type":"ContainerStarted","Data":"f728b042127fc1c79a9d4fdf48b06a17319eb1d69920d155c5c7b4d2f599383d"} Mar 13 12:38:00.385488 master-0 kubenswrapper[7518]: I0313 12:38:00.385438 7518 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-589895fbb7-mmwk7" 
event={"ID":"13f32761-b386-4f93-b3c0-b16ea53d338a","Type":"ContainerStarted","Data":"e7d5143ee528d1b1b82a3ddf6b2e4a81cfc844b962f0b1dce63b2e1946f0f7b1"} Mar 13 12:38:00.400225 master-0 kubenswrapper[7518]: I0313 12:38:00.399538 7518 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-catalogd/catalogd-controller-manager-7f8b8b6f4c-8fjzg" event={"ID":"00ebdf06-1f44-40cd-87e5-54195188b6d4","Type":"ContainerStarted","Data":"d48ca44a10dd4d84fe59c37cb0e8c494fdafd60a7b5212ea552414db0868ae46"} Mar 13 12:38:00.400225 master-0 kubenswrapper[7518]: I0313 12:38:00.400233 7518 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-catalogd/catalogd-controller-manager-7f8b8b6f4c-8fjzg" Mar 13 12:38:00.400437 master-0 kubenswrapper[7518]: I0313 12:38:00.400248 7518 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-catalogd/catalogd-controller-manager-7f8b8b6f4c-8fjzg" event={"ID":"00ebdf06-1f44-40cd-87e5-54195188b6d4","Type":"ContainerStarted","Data":"59a089d717b0ad489e3bb46d20afdc325daa4744efbf4f0fba3f417db654c103"} Mar 13 12:38:00.412476 master-0 kubenswrapper[7518]: I0313 12:38:00.412430 7518 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-controller/operator-controller-controller-manager-6598bfb6c4-dv8rj" event={"ID":"915aabfe-1071-4bfc-b291-424304dfe7d8","Type":"ContainerStarted","Data":"5311a39dc81d1d16783435a3ecef9c5c22355991a2475e0b88664206b99f23cd"} Mar 13 12:38:00.412476 master-0 kubenswrapper[7518]: I0313 12:38:00.412481 7518 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-controller/operator-controller-controller-manager-6598bfb6c4-dv8rj" event={"ID":"915aabfe-1071-4bfc-b291-424304dfe7d8","Type":"ContainerStarted","Data":"ac8d5b7e2908dcba283cf9e9752ebfd8422326f0c9542918621c9dc214262a7d"} Mar 13 12:38:00.413125 master-0 kubenswrapper[7518]: I0313 12:38:00.413103 7518 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-operator-controller/operator-controller-controller-manager-6598bfb6c4-dv8rj"
Mar 13 12:38:00.414271 master-0 kubenswrapper[7518]: I0313 12:38:00.414248 7518 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-64bf9778cb-7qhr4" event={"ID":"d3d998ee-b26f-4e30-83bc-f94f8c68060a","Type":"ContainerStarted","Data":"173b3354a692a16e1dac4e0c613765bd4dc76c18f400e62b22fb91f5a2c1aaca"}
Mar 13 12:38:00.417597 master-0 kubenswrapper[7518]: I0313 12:38:00.417571 7518 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-config-operator_openshift-config-operator-64488f9d78-t8fb4_f0803181-4e37-43fa-8ddc-9c76d3f61817/openshift-config-operator/0.log"
Mar 13 12:38:00.417998 master-0 kubenswrapper[7518]: I0313 12:38:00.417972 7518 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-64488f9d78-t8fb4" event={"ID":"f0803181-4e37-43fa-8ddc-9c76d3f61817","Type":"ContainerStarted","Data":"d775030cc9a2d771094d53b9310bcf873da42c7c6da6ec2e4bea962d923e448e"}
Mar 13 12:38:00.418429 master-0 kubenswrapper[7518]: I0313 12:38:00.418403 7518 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-64488f9d78-t8fb4"
Mar 13 12:38:00.429078 master-0 kubenswrapper[7518]: I0313 12:38:00.429028 7518 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-1-master-0" event={"ID":"f951e49f-91f7-42d3-bc63-8117cff68d7a","Type":"ContainerStarted","Data":"c7e76711c5edec7f8a2e0bbd4c766faceb828b179eb650bdec8d3d483da35ea8"}
Mar 13 12:38:00.443331 master-0 kubenswrapper[7518]: I0313 12:38:00.438621 7518 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-8d675b596-96gds" event={"ID":"4c0b18db-06ad-4d58-a353-f6fd96309dea","Type":"ContainerStarted","Data":"0fdc23a018e70f12d64abda9b21166b71dd0a6e62a76a56d6fb711404d01a3e9"}
Mar 13 12:38:00.447487 master-0 kubenswrapper[7518]: I0313 12:38:00.447303 7518 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-catalogd/catalogd-controller-manager-7f8b8b6f4c-8fjzg" podStartSLOduration=3.447282778 podStartE2EDuration="3.447282778s" podCreationTimestamp="2026-03-13 12:37:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 12:38:00.424557644 +0000 UTC m=+35.057626831" watchObservedRunningTime="2026-03-13 12:38:00.447282778 +0000 UTC m=+35.080351985"
Mar 13 12:38:00.474883 master-0 kubenswrapper[7518]: I0313 12:38:00.473907 7518 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-controller/operator-controller-controller-manager-6598bfb6c4-dv8rj" podStartSLOduration=3.473733947 podStartE2EDuration="3.473733947s" podCreationTimestamp="2026-03-13 12:37:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 12:38:00.44873993 +0000 UTC m=+35.081809117" watchObservedRunningTime="2026-03-13 12:38:00.473733947 +0000 UTC m=+35.106803134"
Mar 13 12:38:00.761278 master-0 kubenswrapper[7518]: I0313 12:38:00.759936 7518 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/installer-1-master-0" podStartSLOduration=3.759912359 podStartE2EDuration="3.759912359s" podCreationTimestamp="2026-03-13 12:37:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 12:38:00.491370309 +0000 UTC m=+35.124439496" watchObservedRunningTime="2026-03-13 12:38:00.759912359 +0000 UTC m=+35.392981536"
Mar 13 12:38:00.761278 master-0 kubenswrapper[7518]: I0313 12:38:00.760092 7518 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-988c89bfb-rl6tb"]
Mar 13 12:38:00.761278 master-0 kubenswrapper[7518]: E0313 12:38:00.760429 7518 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[client-ca], unattached volumes=[], failed to process volumes=[]: context canceled" pod="openshift-controller-manager/controller-manager-988c89bfb-rl6tb" podUID="7ac704c6-e2a9-4a53-99d5-5be1db776558"
Mar 13 12:38:00.865156 master-0 kubenswrapper[7518]: I0313 12:38:00.864880 7518 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-54f4b89bbb-sb5x4"]
Mar 13 12:38:00.865346 master-0 kubenswrapper[7518]: E0313 12:38:00.865184 7518 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[client-ca serving-cert], unattached volumes=[], failed to process volumes=[]: context canceled" pod="openshift-route-controller-manager/route-controller-manager-54f4b89bbb-sb5x4" podUID="75651bfd-ceaf-4bda-95a3-68ca11ec5abe"
Mar 13 12:38:01.489330 master-0 kubenswrapper[7518]: I0313 12:38:01.487403 7518 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-54f4b89bbb-sb5x4"
Mar 13 12:38:01.489330 master-0 kubenswrapper[7518]: I0313 12:38:01.488348 7518 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-988c89bfb-rl6tb"
Mar 13 12:38:01.678321 master-0 kubenswrapper[7518]: I0313 12:38:01.674963 7518 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd/installer-1-master-0"]
Mar 13 12:38:01.678321 master-0 kubenswrapper[7518]: I0313 12:38:01.675607 7518 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/installer-1-master-0"
Mar 13 12:38:01.678321 master-0 kubenswrapper[7518]: I0313 12:38:01.676239 7518 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/00d2e134-62bb-4181-aa0a-22c9b9755b10-kube-api-access\") pod \"installer-1-master-0\" (UID: \"00d2e134-62bb-4181-aa0a-22c9b9755b10\") " pod="openshift-etcd/installer-1-master-0"
Mar 13 12:38:01.678321 master-0 kubenswrapper[7518]: I0313 12:38:01.676334 7518 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/00d2e134-62bb-4181-aa0a-22c9b9755b10-kubelet-dir\") pod \"installer-1-master-0\" (UID: \"00d2e134-62bb-4181-aa0a-22c9b9755b10\") " pod="openshift-etcd/installer-1-master-0"
Mar 13 12:38:01.678321 master-0 kubenswrapper[7518]: I0313 12:38:01.676372 7518 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/00d2e134-62bb-4181-aa0a-22c9b9755b10-var-lock\") pod \"installer-1-master-0\" (UID: \"00d2e134-62bb-4181-aa0a-22c9b9755b10\") " pod="openshift-etcd/installer-1-master-0"
Mar 13 12:38:01.679079 master-0 kubenswrapper[7518]: I0313 12:38:01.679061 7518 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd"/"kube-root-ca.crt"
Mar 13 12:38:01.706988 master-0 kubenswrapper[7518]: I0313 12:38:01.688330 7518 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd/installer-1-master-0"]
Mar 13 12:38:01.728603 master-0 kubenswrapper[7518]: I0313 12:38:01.728550 7518 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-54f4b89bbb-sb5x4"
Mar 13 12:38:01.730111 master-0 kubenswrapper[7518]: I0313 12:38:01.730068 7518 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-988c89bfb-rl6tb"
Mar 13 12:38:01.777660 master-0 kubenswrapper[7518]: I0313 12:38:01.777604 7518 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/00d2e134-62bb-4181-aa0a-22c9b9755b10-kube-api-access\") pod \"installer-1-master-0\" (UID: \"00d2e134-62bb-4181-aa0a-22c9b9755b10\") " pod="openshift-etcd/installer-1-master-0"
Mar 13 12:38:01.777660 master-0 kubenswrapper[7518]: I0313 12:38:01.777662 7518 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/00d2e134-62bb-4181-aa0a-22c9b9755b10-kubelet-dir\") pod \"installer-1-master-0\" (UID: \"00d2e134-62bb-4181-aa0a-22c9b9755b10\") " pod="openshift-etcd/installer-1-master-0"
Mar 13 12:38:01.777905 master-0 kubenswrapper[7518]: I0313 12:38:01.777695 7518 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/00d2e134-62bb-4181-aa0a-22c9b9755b10-var-lock\") pod \"installer-1-master-0\" (UID: \"00d2e134-62bb-4181-aa0a-22c9b9755b10\") " pod="openshift-etcd/installer-1-master-0"
Mar 13 12:38:01.777905 master-0 kubenswrapper[7518]: I0313 12:38:01.777760 7518 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/00d2e134-62bb-4181-aa0a-22c9b9755b10-var-lock\") pod \"installer-1-master-0\" (UID: \"00d2e134-62bb-4181-aa0a-22c9b9755b10\") " pod="openshift-etcd/installer-1-master-0"
Mar 13 12:38:01.778279 master-0 kubenswrapper[7518]: I0313 12:38:01.778257 7518 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/00d2e134-62bb-4181-aa0a-22c9b9755b10-kubelet-dir\") pod \"installer-1-master-0\" (UID: \"00d2e134-62bb-4181-aa0a-22c9b9755b10\") " pod="openshift-etcd/installer-1-master-0"
Mar 13 12:38:01.887787 master-0 kubenswrapper[7518]: I0313 12:38:01.881476 7518 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7ac704c6-e2a9-4a53-99d5-5be1db776558-proxy-ca-bundles\") pod \"7ac704c6-e2a9-4a53-99d5-5be1db776558\" (UID: \"7ac704c6-e2a9-4a53-99d5-5be1db776558\") "
Mar 13 12:38:01.887787 master-0 kubenswrapper[7518]: I0313 12:38:01.881539 7518 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/75651bfd-ceaf-4bda-95a3-68ca11ec5abe-config\") pod \"75651bfd-ceaf-4bda-95a3-68ca11ec5abe\" (UID: \"75651bfd-ceaf-4bda-95a3-68ca11ec5abe\") "
Mar 13 12:38:01.887787 master-0 kubenswrapper[7518]: I0313 12:38:01.881590 7518 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7ac704c6-e2a9-4a53-99d5-5be1db776558-serving-cert\") pod \"7ac704c6-e2a9-4a53-99d5-5be1db776558\" (UID: \"7ac704c6-e2a9-4a53-99d5-5be1db776558\") "
Mar 13 12:38:01.887787 master-0 kubenswrapper[7518]: I0313 12:38:01.881672 7518 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hsf79\" (UniqueName: \"kubernetes.io/projected/7ac704c6-e2a9-4a53-99d5-5be1db776558-kube-api-access-hsf79\") pod \"7ac704c6-e2a9-4a53-99d5-5be1db776558\" (UID: \"7ac704c6-e2a9-4a53-99d5-5be1db776558\") "
Mar 13 12:38:01.887787 master-0 kubenswrapper[7518]: I0313 12:38:01.881727 7518 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xb9t6\" (UniqueName: \"kubernetes.io/projected/75651bfd-ceaf-4bda-95a3-68ca11ec5abe-kube-api-access-xb9t6\") pod \"75651bfd-ceaf-4bda-95a3-68ca11ec5abe\" (UID: \"75651bfd-ceaf-4bda-95a3-68ca11ec5abe\") "
Mar 13 12:38:01.887787 master-0 kubenswrapper[7518]: I0313 12:38:01.881755 7518 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7ac704c6-e2a9-4a53-99d5-5be1db776558-config\") pod \"7ac704c6-e2a9-4a53-99d5-5be1db776558\" (UID: \"7ac704c6-e2a9-4a53-99d5-5be1db776558\") "
Mar 13 12:38:01.887787 master-0 kubenswrapper[7518]: I0313 12:38:01.885649 7518 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/00d2e134-62bb-4181-aa0a-22c9b9755b10-kube-api-access\") pod \"installer-1-master-0\" (UID: \"00d2e134-62bb-4181-aa0a-22c9b9755b10\") " pod="openshift-etcd/installer-1-master-0"
Mar 13 12:38:01.887787 master-0 kubenswrapper[7518]: I0313 12:38:01.887510 7518 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7ac704c6-e2a9-4a53-99d5-5be1db776558-config" (OuterVolumeSpecName: "config") pod "7ac704c6-e2a9-4a53-99d5-5be1db776558" (UID: "7ac704c6-e2a9-4a53-99d5-5be1db776558"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 13 12:38:01.887787 master-0 kubenswrapper[7518]: I0313 12:38:01.887728 7518 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7ac704c6-e2a9-4a53-99d5-5be1db776558-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7ac704c6-e2a9-4a53-99d5-5be1db776558" (UID: "7ac704c6-e2a9-4a53-99d5-5be1db776558"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 13 12:38:01.888306 master-0 kubenswrapper[7518]: I0313 12:38:01.888196 7518 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/75651bfd-ceaf-4bda-95a3-68ca11ec5abe-config" (OuterVolumeSpecName: "config") pod "75651bfd-ceaf-4bda-95a3-68ca11ec5abe" (UID: "75651bfd-ceaf-4bda-95a3-68ca11ec5abe"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 13 12:38:01.890923 master-0 kubenswrapper[7518]: I0313 12:38:01.890861 7518 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/75651bfd-ceaf-4bda-95a3-68ca11ec5abe-kube-api-access-xb9t6" (OuterVolumeSpecName: "kube-api-access-xb9t6") pod "75651bfd-ceaf-4bda-95a3-68ca11ec5abe" (UID: "75651bfd-ceaf-4bda-95a3-68ca11ec5abe"). InnerVolumeSpecName "kube-api-access-xb9t6". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 13 12:38:01.899092 master-0 kubenswrapper[7518]: I0313 12:38:01.898552 7518 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7ac704c6-e2a9-4a53-99d5-5be1db776558-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "7ac704c6-e2a9-4a53-99d5-5be1db776558" (UID: "7ac704c6-e2a9-4a53-99d5-5be1db776558"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 13 12:38:01.906736 master-0 kubenswrapper[7518]: I0313 12:38:01.906659 7518 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7ac704c6-e2a9-4a53-99d5-5be1db776558-kube-api-access-hsf79" (OuterVolumeSpecName: "kube-api-access-hsf79") pod "7ac704c6-e2a9-4a53-99d5-5be1db776558" (UID: "7ac704c6-e2a9-4a53-99d5-5be1db776558"). InnerVolumeSpecName "kube-api-access-hsf79". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 13 12:38:01.982604 master-0 kubenswrapper[7518]: I0313 12:38:01.982539 7518 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7ac704c6-e2a9-4a53-99d5-5be1db776558-client-ca\") pod \"controller-manager-988c89bfb-rl6tb\" (UID: \"7ac704c6-e2a9-4a53-99d5-5be1db776558\") " pod="openshift-controller-manager/controller-manager-988c89bfb-rl6tb"
Mar 13 12:38:01.982604 master-0 kubenswrapper[7518]: I0313 12:38:01.982617 7518 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hsf79\" (UniqueName: \"kubernetes.io/projected/7ac704c6-e2a9-4a53-99d5-5be1db776558-kube-api-access-hsf79\") on node \"master-0\" DevicePath \"\""
Mar 13 12:38:01.982845 master-0 kubenswrapper[7518]: I0313 12:38:01.982631 7518 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xb9t6\" (UniqueName: \"kubernetes.io/projected/75651bfd-ceaf-4bda-95a3-68ca11ec5abe-kube-api-access-xb9t6\") on node \"master-0\" DevicePath \"\""
Mar 13 12:38:01.982845 master-0 kubenswrapper[7518]: I0313 12:38:01.982644 7518 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7ac704c6-e2a9-4a53-99d5-5be1db776558-config\") on node \"master-0\" DevicePath \"\""
Mar 13 12:38:01.982845 master-0 kubenswrapper[7518]: I0313 12:38:01.982665 7518 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7ac704c6-e2a9-4a53-99d5-5be1db776558-proxy-ca-bundles\") on node \"master-0\" DevicePath \"\""
Mar 13 12:38:01.982845 master-0 kubenswrapper[7518]: I0313 12:38:01.982678 7518 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/75651bfd-ceaf-4bda-95a3-68ca11ec5abe-config\") on node \"master-0\" DevicePath \"\""
Mar 13 12:38:01.982845 master-0 kubenswrapper[7518]: I0313 12:38:01.982688 7518 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7ac704c6-e2a9-4a53-99d5-5be1db776558-serving-cert\") on node \"master-0\" DevicePath \"\""
Mar 13 12:38:01.983665 master-0 kubenswrapper[7518]: I0313 12:38:01.983574 7518 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7ac704c6-e2a9-4a53-99d5-5be1db776558-client-ca\") pod \"controller-manager-988c89bfb-rl6tb\" (UID: \"7ac704c6-e2a9-4a53-99d5-5be1db776558\") " pod="openshift-controller-manager/controller-manager-988c89bfb-rl6tb"
Mar 13 12:38:02.053061 master-0 kubenswrapper[7518]: I0313 12:38:02.053001 7518 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/installer-1-master-0"
Mar 13 12:38:02.119306 master-0 kubenswrapper[7518]: I0313 12:38:02.117776 7518 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-apiserver/apiserver-7cbf874688-d4wjw"]
Mar 13 12:38:02.119306 master-0 kubenswrapper[7518]: E0313 12:38:02.118268 7518 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[audit serving-cert], unattached volumes=[], failed to process volumes=[]: context canceled" pod="openshift-apiserver/apiserver-7cbf874688-d4wjw" podUID="06ec7805-846f-4256-894e-7638c7ad85a3"
Mar 13 12:38:02.198189 master-0 kubenswrapper[7518]: I0313 12:38:02.189196 7518 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7ac704c6-e2a9-4a53-99d5-5be1db776558-client-ca\") pod \"7ac704c6-e2a9-4a53-99d5-5be1db776558\" (UID: \"7ac704c6-e2a9-4a53-99d5-5be1db776558\") "
Mar 13 12:38:02.198189 master-0 kubenswrapper[7518]: I0313 12:38:02.189402 7518 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/06ec7805-846f-4256-894e-7638c7ad85a3-audit\") pod \"apiserver-7cbf874688-d4wjw\" (UID: \"06ec7805-846f-4256-894e-7638c7ad85a3\") " pod="openshift-apiserver/apiserver-7cbf874688-d4wjw"
Mar 13 12:38:02.198189 master-0 kubenswrapper[7518]: I0313 12:38:02.189488 7518 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/06ec7805-846f-4256-894e-7638c7ad85a3-serving-cert\") pod \"apiserver-7cbf874688-d4wjw\" (UID: \"06ec7805-846f-4256-894e-7638c7ad85a3\") " pod="openshift-apiserver/apiserver-7cbf874688-d4wjw"
Mar 13 12:38:02.198189 master-0 kubenswrapper[7518]: E0313 12:38:02.189595 7518 secret.go:189] Couldn't get secret openshift-apiserver/serving-cert: secret "serving-cert" not found
Mar 13 12:38:02.198189 master-0 kubenswrapper[7518]: E0313 12:38:02.189664 7518 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/06ec7805-846f-4256-894e-7638c7ad85a3-serving-cert podName:06ec7805-846f-4256-894e-7638c7ad85a3 nodeName:}" failed. No retries permitted until 2026-03-13 12:38:06.18965018 +0000 UTC m=+40.822719367 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/06ec7805-846f-4256-894e-7638c7ad85a3-serving-cert") pod "apiserver-7cbf874688-d4wjw" (UID: "06ec7805-846f-4256-894e-7638c7ad85a3") : secret "serving-cert" not found
Mar 13 12:38:02.198189 master-0 kubenswrapper[7518]: E0313 12:38:02.190359 7518 configmap.go:193] Couldn't get configMap openshift-apiserver/audit-0: configmap "audit-0" not found
Mar 13 12:38:02.198189 master-0 kubenswrapper[7518]: E0313 12:38:02.190397 7518 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/06ec7805-846f-4256-894e-7638c7ad85a3-audit podName:06ec7805-846f-4256-894e-7638c7ad85a3 nodeName:}" failed. No retries permitted until 2026-03-13 12:38:06.19038646 +0000 UTC m=+40.823455647 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "audit" (UniqueName: "kubernetes.io/configmap/06ec7805-846f-4256-894e-7638c7ad85a3-audit") pod "apiserver-7cbf874688-d4wjw" (UID: "06ec7805-846f-4256-894e-7638c7ad85a3") : configmap "audit-0" not found
Mar 13 12:38:02.198189 master-0 kubenswrapper[7518]: I0313 12:38:02.190576 7518 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7ac704c6-e2a9-4a53-99d5-5be1db776558-client-ca" (OuterVolumeSpecName: "client-ca") pod "7ac704c6-e2a9-4a53-99d5-5be1db776558" (UID: "7ac704c6-e2a9-4a53-99d5-5be1db776558"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 13 12:38:02.290895 master-0 kubenswrapper[7518]: I0313 12:38:02.290551 7518 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7ac704c6-e2a9-4a53-99d5-5be1db776558-client-ca\") on node \"master-0\" DevicePath \"\""
Mar 13 12:38:02.322621 master-0 kubenswrapper[7518]: I0313 12:38:02.322516 7518 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd/installer-1-master-0"]
Mar 13 12:38:02.490884 master-0 kubenswrapper[7518]: I0313 12:38:02.490831 7518 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/installer-1-master-0" event={"ID":"00d2e134-62bb-4181-aa0a-22c9b9755b10","Type":"ContainerStarted","Data":"dd20eff6c17b5d26b931e6d943bd09e05bef7d7025ee5b4bd9d525e64901dc81"}
Mar 13 12:38:02.490884 master-0 kubenswrapper[7518]: I0313 12:38:02.490894 7518 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-54f4b89bbb-sb5x4"
Mar 13 12:38:02.500073 master-0 kubenswrapper[7518]: I0313 12:38:02.496283 7518 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-988c89bfb-rl6tb"
Mar 13 12:38:02.500073 master-0 kubenswrapper[7518]: I0313 12:38:02.496291 7518 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-7cbf874688-d4wjw"
Mar 13 12:38:02.525528 master-0 kubenswrapper[7518]: I0313 12:38:02.525471 7518 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-7cbf874688-d4wjw"
Mar 13 12:38:02.545670 master-0 kubenswrapper[7518]: I0313 12:38:02.545610 7518 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-b799f66dc-95b9f"]
Mar 13 12:38:02.546097 master-0 kubenswrapper[7518]: I0313 12:38:02.546071 7518 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-54f4b89bbb-sb5x4"]
Mar 13 12:38:02.546223 master-0 kubenswrapper[7518]: I0313 12:38:02.546188 7518 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-b799f66dc-95b9f"
Mar 13 12:38:02.550268 master-0 kubenswrapper[7518]: I0313 12:38:02.549790 7518 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt"
Mar 13 12:38:02.552048 master-0 kubenswrapper[7518]: I0313 12:38:02.551996 7518 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-b799f66dc-95b9f"]
Mar 13 12:38:02.553230 master-0 kubenswrapper[7518]: I0313 12:38:02.553187 7518 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-54f4b89bbb-sb5x4"]
Mar 13 12:38:02.555598 master-0 kubenswrapper[7518]: I0313 12:38:02.555565 7518 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config"
Mar 13 12:38:02.556098 master-0 kubenswrapper[7518]: I0313 12:38:02.556003 7518 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt"
Mar 13 12:38:02.556200 master-0 kubenswrapper[7518]: I0313 12:38:02.556179 7518 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca"
Mar 13 12:38:02.556373 master-0 kubenswrapper[7518]: I0313 12:38:02.556347 7518 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert"
Mar 13 12:38:02.605075 master-0 kubenswrapper[7518]: I0313 12:38:02.605011 7518 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-988c89bfb-rl6tb"]
Mar 13 12:38:02.606021 master-0 kubenswrapper[7518]: I0313 12:38:02.605988 7518 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-988c89bfb-rl6tb"]
Mar 13 12:38:02.698596 master-0 kubenswrapper[7518]: I0313 12:38:02.698558 7518 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/06ec7805-846f-4256-894e-7638c7ad85a3-audit-dir\") pod \"06ec7805-846f-4256-894e-7638c7ad85a3\" (UID: \"06ec7805-846f-4256-894e-7638c7ad85a3\") "
Mar 13 12:38:02.698810 master-0 kubenswrapper[7518]: I0313 12:38:02.698610 7518 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/06ec7805-846f-4256-894e-7638c7ad85a3-config\") pod \"06ec7805-846f-4256-894e-7638c7ad85a3\" (UID: \"06ec7805-846f-4256-894e-7638c7ad85a3\") "
Mar 13 12:38:02.698810 master-0 kubenswrapper[7518]: I0313 12:38:02.698627 7518 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/06ec7805-846f-4256-894e-7638c7ad85a3-node-pullsecrets\") pod \"06ec7805-846f-4256-894e-7638c7ad85a3\" (UID: \"06ec7805-846f-4256-894e-7638c7ad85a3\") "
Mar 13 12:38:02.698810 master-0 kubenswrapper[7518]: I0313 12:38:02.698653 7518 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/06ec7805-846f-4256-894e-7638c7ad85a3-encryption-config\") pod \"06ec7805-846f-4256-894e-7638c7ad85a3\" (UID: \"06ec7805-846f-4256-894e-7638c7ad85a3\") "
Mar 13 12:38:02.698810 master-0 kubenswrapper[7518]: I0313 12:38:02.698683 7518 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/06ec7805-846f-4256-894e-7638c7ad85a3-etcd-serving-ca\") pod \"06ec7805-846f-4256-894e-7638c7ad85a3\" (UID: \"06ec7805-846f-4256-894e-7638c7ad85a3\") "
Mar 13 12:38:02.698810 master-0 kubenswrapper[7518]: I0313 12:38:02.698717 7518 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/06ec7805-846f-4256-894e-7638c7ad85a3-trusted-ca-bundle\") pod \"06ec7805-846f-4256-894e-7638c7ad85a3\" (UID: \"06ec7805-846f-4256-894e-7638c7ad85a3\") "
Mar 13 12:38:02.698810 master-0 kubenswrapper[7518]: I0313 12:38:02.698752 7518 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/06ec7805-846f-4256-894e-7638c7ad85a3-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "06ec7805-846f-4256-894e-7638c7ad85a3" (UID: "06ec7805-846f-4256-894e-7638c7ad85a3"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 13 12:38:02.699049 master-0 kubenswrapper[7518]: I0313 12:38:02.699010 7518 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/06ec7805-846f-4256-894e-7638c7ad85a3-node-pullsecrets" (OuterVolumeSpecName: "node-pullsecrets") pod "06ec7805-846f-4256-894e-7638c7ad85a3" (UID: "06ec7805-846f-4256-894e-7638c7ad85a3"). InnerVolumeSpecName "node-pullsecrets". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 13 12:38:02.699719 master-0 kubenswrapper[7518]: I0313 12:38:02.699129 7518 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/06ec7805-846f-4256-894e-7638c7ad85a3-etcd-client\") pod \"06ec7805-846f-4256-894e-7638c7ad85a3\" (UID: \"06ec7805-846f-4256-894e-7638c7ad85a3\") "
Mar 13 12:38:02.699719 master-0 kubenswrapper[7518]: I0313 12:38:02.699166 7518 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/06ec7805-846f-4256-894e-7638c7ad85a3-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "06ec7805-846f-4256-894e-7638c7ad85a3" (UID: "06ec7805-846f-4256-894e-7638c7ad85a3"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 13 12:38:02.699719 master-0 kubenswrapper[7518]: I0313 12:38:02.699247 7518 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/06ec7805-846f-4256-894e-7638c7ad85a3-image-import-ca\") pod \"06ec7805-846f-4256-894e-7638c7ad85a3\" (UID: \"06ec7805-846f-4256-894e-7638c7ad85a3\") "
Mar 13 12:38:02.699719 master-0 kubenswrapper[7518]: I0313 12:38:02.699233 7518 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/06ec7805-846f-4256-894e-7638c7ad85a3-config" (OuterVolumeSpecName: "config") pod "06ec7805-846f-4256-894e-7638c7ad85a3" (UID: "06ec7805-846f-4256-894e-7638c7ad85a3"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 13 12:38:02.699719 master-0 kubenswrapper[7518]: I0313 12:38:02.699274 7518 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9qsbn\" (UniqueName: \"kubernetes.io/projected/06ec7805-846f-4256-894e-7638c7ad85a3-kube-api-access-9qsbn\") pod \"06ec7805-846f-4256-894e-7638c7ad85a3\" (UID: \"06ec7805-846f-4256-894e-7638c7ad85a3\") "
Mar 13 12:38:02.699719 master-0 kubenswrapper[7518]: I0313 12:38:02.699562 7518 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/654bcdbb-82a6-4927-acb4-cd6f1d6ccc9e-serving-cert\") pod \"route-controller-manager-b799f66dc-95b9f\" (UID: \"654bcdbb-82a6-4927-acb4-cd6f1d6ccc9e\") " pod="openshift-route-controller-manager/route-controller-manager-b799f66dc-95b9f"
Mar 13 12:38:02.699940 master-0 kubenswrapper[7518]: I0313 12:38:02.699826 7518 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/06ec7805-846f-4256-894e-7638c7ad85a3-image-import-ca" (OuterVolumeSpecName: "image-import-ca") pod "06ec7805-846f-4256-894e-7638c7ad85a3" (UID: "06ec7805-846f-4256-894e-7638c7ad85a3"). InnerVolumeSpecName "image-import-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 13 12:38:02.699940 master-0 kubenswrapper[7518]: I0313 12:38:02.699821 7518 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/06ec7805-846f-4256-894e-7638c7ad85a3-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "06ec7805-846f-4256-894e-7638c7ad85a3" (UID: "06ec7805-846f-4256-894e-7638c7ad85a3"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 13 12:38:02.700111 master-0 kubenswrapper[7518]: I0313 12:38:02.700049 7518 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/654bcdbb-82a6-4927-acb4-cd6f1d6ccc9e-config\") pod \"route-controller-manager-b799f66dc-95b9f\" (UID: \"654bcdbb-82a6-4927-acb4-cd6f1d6ccc9e\") " pod="openshift-route-controller-manager/route-controller-manager-b799f66dc-95b9f"
Mar 13 12:38:02.700530 master-0 kubenswrapper[7518]: I0313 12:38:02.700263 7518 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/654bcdbb-82a6-4927-acb4-cd6f1d6ccc9e-client-ca\") pod \"route-controller-manager-b799f66dc-95b9f\" (UID: \"654bcdbb-82a6-4927-acb4-cd6f1d6ccc9e\") " pod="openshift-route-controller-manager/route-controller-manager-b799f66dc-95b9f"
Mar 13 12:38:02.700530 master-0 kubenswrapper[7518]: I0313 12:38:02.700289 7518 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5sbfp\" (UniqueName: \"kubernetes.io/projected/654bcdbb-82a6-4927-acb4-cd6f1d6ccc9e-kube-api-access-5sbfp\") pod \"route-controller-manager-b799f66dc-95b9f\" (UID: \"654bcdbb-82a6-4927-acb4-cd6f1d6ccc9e\") " pod="openshift-route-controller-manager/route-controller-manager-b799f66dc-95b9f"
Mar 13 12:38:02.700530 master-0 kubenswrapper[7518]: I0313 12:38:02.700357 7518 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/06ec7805-846f-4256-894e-7638c7ad85a3-trusted-ca-bundle\") on node \"master-0\" DevicePath \"\""
Mar 13 12:38:02.700530 master-0 kubenswrapper[7518]: I0313 12:38:02.700375 7518 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/75651bfd-ceaf-4bda-95a3-68ca11ec5abe-serving-cert\") on node \"master-0\" DevicePath \"\""
Mar 13 12:38:02.700530 master-0 kubenswrapper[7518]: I0313 12:38:02.700387 7518 reconciler_common.go:293] "Volume detached for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/06ec7805-846f-4256-894e-7638c7ad85a3-image-import-ca\") on node \"master-0\" DevicePath \"\""
Mar 13 12:38:02.700530 master-0 kubenswrapper[7518]: I0313 12:38:02.700399 7518 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/06ec7805-846f-4256-894e-7638c7ad85a3-audit-dir\") on node \"master-0\" DevicePath \"\""
Mar 13 12:38:02.700530 master-0 kubenswrapper[7518]: I0313 12:38:02.700411 7518 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/06ec7805-846f-4256-894e-7638c7ad85a3-config\") on node \"master-0\" DevicePath \"\""
Mar 13 12:38:02.700530 master-0 kubenswrapper[7518]: I0313 12:38:02.700423 7518 reconciler_common.go:293] "Volume detached for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/06ec7805-846f-4256-894e-7638c7ad85a3-node-pullsecrets\") on node \"master-0\" DevicePath \"\""
Mar 13 12:38:02.700530 master-0 kubenswrapper[7518]: I0313 12:38:02.700434 7518 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/75651bfd-ceaf-4bda-95a3-68ca11ec5abe-client-ca\") on node \"master-0\" DevicePath \"\""
Mar 13 12:38:02.700530 master-0 kubenswrapper[7518]: I0313 12:38:02.700445 7518 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/06ec7805-846f-4256-894e-7638c7ad85a3-etcd-serving-ca\") on node \"master-0\" DevicePath \"\""
Mar 13 12:38:02.704250 master-0 kubenswrapper[7518]: I0313 12:38:02.704224 7518 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/06ec7805-846f-4256-894e-7638c7ad85a3-kube-api-access-9qsbn" (OuterVolumeSpecName: "kube-api-access-9qsbn") pod "06ec7805-846f-4256-894e-7638c7ad85a3" (UID: "06ec7805-846f-4256-894e-7638c7ad85a3"). InnerVolumeSpecName "kube-api-access-9qsbn". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 13 12:38:02.705573 master-0 kubenswrapper[7518]: I0313 12:38:02.705390 7518 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/06ec7805-846f-4256-894e-7638c7ad85a3-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "06ec7805-846f-4256-894e-7638c7ad85a3" (UID: "06ec7805-846f-4256-894e-7638c7ad85a3"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 13 12:38:02.713655 master-0 kubenswrapper[7518]: I0313 12:38:02.713617 7518 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/06ec7805-846f-4256-894e-7638c7ad85a3-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "06ec7805-846f-4256-894e-7638c7ad85a3" (UID: "06ec7805-846f-4256-894e-7638c7ad85a3"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 13 12:38:02.801158 master-0 kubenswrapper[7518]: I0313 12:38:02.801085 7518 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/654bcdbb-82a6-4927-acb4-cd6f1d6ccc9e-serving-cert\") pod \"route-controller-manager-b799f66dc-95b9f\" (UID: \"654bcdbb-82a6-4927-acb4-cd6f1d6ccc9e\") " pod="openshift-route-controller-manager/route-controller-manager-b799f66dc-95b9f"
Mar 13 12:38:02.801332 master-0 kubenswrapper[7518]: I0313 12:38:02.801296 7518 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/654bcdbb-82a6-4927-acb4-cd6f1d6ccc9e-config\") pod \"route-controller-manager-b799f66dc-95b9f\" (UID: \"654bcdbb-82a6-4927-acb4-cd6f1d6ccc9e\") " pod="openshift-route-controller-manager/route-controller-manager-b799f66dc-95b9f"
Mar 13 12:38:02.801411 master-0 kubenswrapper[7518]: I0313 12:38:02.801387 7518 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/654bcdbb-82a6-4927-acb4-cd6f1d6ccc9e-client-ca\") pod \"route-controller-manager-b799f66dc-95b9f\" (UID: \"654bcdbb-82a6-4927-acb4-cd6f1d6ccc9e\") " pod="openshift-route-controller-manager/route-controller-manager-b799f66dc-95b9f"
Mar 13 12:38:02.801744 master-0 kubenswrapper[7518]: I0313 12:38:02.801687 7518 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5sbfp\" (UniqueName: \"kubernetes.io/projected/654bcdbb-82a6-4927-acb4-cd6f1d6ccc9e-kube-api-access-5sbfp\") pod \"route-controller-manager-b799f66dc-95b9f\" (UID: \"654bcdbb-82a6-4927-acb4-cd6f1d6ccc9e\") " pod="openshift-route-controller-manager/route-controller-manager-b799f66dc-95b9f"
Mar 13 12:38:02.801844 master-0 kubenswrapper[7518]: I0313 12:38:02.801823 7518 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/06ec7805-846f-4256-894e-7638c7ad85a3-encryption-config\") on node \"master-0\" DevicePath \"\""
Mar 13 12:38:02.801884 master-0 kubenswrapper[7518]: I0313 12:38:02.801846 7518 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/06ec7805-846f-4256-894e-7638c7ad85a3-etcd-client\") on node \"master-0\" DevicePath \"\""
Mar 13 12:38:02.801914 master-0 kubenswrapper[7518]: I0313 12:38:02.801891 7518 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9qsbn\" (UniqueName: \"kubernetes.io/projected/06ec7805-846f-4256-894e-7638c7ad85a3-kube-api-access-9qsbn\") on node \"master-0\" DevicePath \"\""
Mar 13 12:38:02.802340 master-0 kubenswrapper[7518]: I0313 12:38:02.802317 7518 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/654bcdbb-82a6-4927-acb4-cd6f1d6ccc9e-client-ca\") pod \"route-controller-manager-b799f66dc-95b9f\" (UID: \"654bcdbb-82a6-4927-acb4-cd6f1d6ccc9e\") " pod="openshift-route-controller-manager/route-controller-manager-b799f66dc-95b9f"
Mar 13 12:38:02.803788 master-0 kubenswrapper[7518]: I0313 12:38:02.803742 7518 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/654bcdbb-82a6-4927-acb4-cd6f1d6ccc9e-config\") pod \"route-controller-manager-b799f66dc-95b9f\" (UID: \"654bcdbb-82a6-4927-acb4-cd6f1d6ccc9e\") " pod="openshift-route-controller-manager/route-controller-manager-b799f66dc-95b9f"
Mar 13 12:38:02.804351 master-0 kubenswrapper[7518]: I0313 12:38:02.804290 7518 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/654bcdbb-82a6-4927-acb4-cd6f1d6ccc9e-serving-cert\") pod \"route-controller-manager-b799f66dc-95b9f\" (UID: \"654bcdbb-82a6-4927-acb4-cd6f1d6ccc9e\") " pod="openshift-route-controller-manager/route-controller-manager-b799f66dc-95b9f"
Mar 13 12:38:02.819012 master-0 kubenswrapper[7518]: I0313 12:38:02.818967 7518 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-config-operator/openshift-config-operator-64488f9d78-t8fb4" Mar 13 12:38:02.832422 master-0 kubenswrapper[7518]: I0313 12:38:02.832185 7518 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5sbfp\" (UniqueName: \"kubernetes.io/projected/654bcdbb-82a6-4927-acb4-cd6f1d6ccc9e-kube-api-access-5sbfp\") pod \"route-controller-manager-b799f66dc-95b9f\" (UID: \"654bcdbb-82a6-4927-acb4-cd6f1d6ccc9e\") " pod="openshift-route-controller-manager/route-controller-manager-b799f66dc-95b9f" Mar 13 12:38:02.871857 master-0 kubenswrapper[7518]: I0313 12:38:02.871797 7518 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-b799f66dc-95b9f" Mar 13 12:38:03.132839 master-0 kubenswrapper[7518]: I0313 12:38:03.132453 7518 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-b799f66dc-95b9f"] Mar 13 12:38:03.495788 master-0 kubenswrapper[7518]: I0313 12:38:03.495642 7518 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver/apiserver-7cbf874688-d4wjw" Mar 13 12:38:03.495788 master-0 kubenswrapper[7518]: I0313 12:38:03.495646 7518 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/installer-1-master-0" event={"ID":"00d2e134-62bb-4181-aa0a-22c9b9755b10","Type":"ContainerStarted","Data":"1b3f3325d5e04c56ba72e3fc00c285b339f3ca147fcedd9041b736950ddeb5fa"} Mar 13 12:38:03.512011 master-0 kubenswrapper[7518]: I0313 12:38:03.511958 7518 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/installer-1-master-0" podStartSLOduration=2.511938974 podStartE2EDuration="2.511938974s" podCreationTimestamp="2026-03-13 12:38:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 12:38:03.51090398 +0000 UTC m=+38.143973167" watchObservedRunningTime="2026-03-13 12:38:03.511938974 +0000 UTC m=+38.145008161" Mar 13 12:38:03.537237 master-0 kubenswrapper[7518]: I0313 12:38:03.537193 7518 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-apiserver/apiserver-7cbf874688-d4wjw"] Mar 13 12:38:03.596792 master-0 kubenswrapper[7518]: I0313 12:38:03.596757 7518 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-apiserver/apiserver-7cbf874688-d4wjw"] Mar 13 12:38:03.605116 master-0 kubenswrapper[7518]: I0313 12:38:03.602941 7518 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="06ec7805-846f-4256-894e-7638c7ad85a3" path="/var/lib/kubelet/pods/06ec7805-846f-4256-894e-7638c7ad85a3/volumes" Mar 13 12:38:03.605116 master-0 kubenswrapper[7518]: I0313 12:38:03.603284 7518 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="75651bfd-ceaf-4bda-95a3-68ca11ec5abe" path="/var/lib/kubelet/pods/75651bfd-ceaf-4bda-95a3-68ca11ec5abe/volumes" Mar 13 12:38:03.605116 master-0 kubenswrapper[7518]: I0313 12:38:03.603625 7518 kubelet_volumes.go:163] "Cleaned up orphaned pod 
volumes dir" podUID="7ac704c6-e2a9-4a53-99d5-5be1db776558" path="/var/lib/kubelet/pods/7ac704c6-e2a9-4a53-99d5-5be1db776558/volumes" Mar 13 12:38:03.630644 master-0 kubenswrapper[7518]: I0313 12:38:03.630598 7518 reconciler_common.go:293] "Volume detached for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/06ec7805-846f-4256-894e-7638c7ad85a3-audit\") on node \"master-0\" DevicePath \"\"" Mar 13 12:38:03.630644 master-0 kubenswrapper[7518]: I0313 12:38:03.630628 7518 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/06ec7805-846f-4256-894e-7638c7ad85a3-serving-cert\") on node \"master-0\" DevicePath \"\"" Mar 13 12:38:03.651598 master-0 kubenswrapper[7518]: W0313 12:38:03.651549 7518 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod654bcdbb_82a6_4927_acb4_cd6f1d6ccc9e.slice/crio-52319bb4328d419dddcfd98978ce03c54f6355e317c6c887e05cbba64bedbf53 WatchSource:0}: Error finding container 52319bb4328d419dddcfd98978ce03c54f6355e317c6c887e05cbba64bedbf53: Status 404 returned error can't find the container with id 52319bb4328d419dddcfd98978ce03c54f6355e317c6c887e05cbba64bedbf53 Mar 13 12:38:03.798562 master-0 kubenswrapper[7518]: I0313 12:38:03.798528 7518 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-network-diagnostics/network-check-target-pnwsc" Mar 13 12:38:04.501127 master-0 kubenswrapper[7518]: I0313 12:38:04.501006 7518 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-b799f66dc-95b9f" event={"ID":"654bcdbb-82a6-4927-acb4-cd6f1d6ccc9e","Type":"ContainerStarted","Data":"52319bb4328d419dddcfd98978ce03c54f6355e317c6c887e05cbba64bedbf53"} Mar 13 12:38:05.233309 master-0 kubenswrapper[7518]: I0313 12:38:05.232690 7518 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-oauth-apiserver/apiserver-6b7d89b46f-h7w7h"] Mar 
13 12:38:05.460918 master-0 kubenswrapper[7518]: I0313 12:38:05.454352 7518 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-scheduler/installer-1-master-0"] Mar 13 12:38:05.460918 master-0 kubenswrapper[7518]: I0313 12:38:05.454556 7518 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-scheduler/installer-1-master-0" podUID="f951e49f-91f7-42d3-bc63-8117cff68d7a" containerName="installer" containerID="cri-o://c7e76711c5edec7f8a2e0bbd4c766faceb828b179eb650bdec8d3d483da35ea8" gracePeriod=30 Mar 13 12:38:05.478705 master-0 kubenswrapper[7518]: I0313 12:38:05.478646 7518 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-f4cd854d4-4p7j6"] Mar 13 12:38:05.480255 master-0 kubenswrapper[7518]: I0313 12:38:05.479341 7518 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-f4cd854d4-4p7j6" Mar 13 12:38:05.486649 master-0 kubenswrapper[7518]: I0313 12:38:05.484949 7518 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Mar 13 12:38:05.486649 master-0 kubenswrapper[7518]: I0313 12:38:05.485250 7518 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Mar 13 12:38:05.486649 master-0 kubenswrapper[7518]: I0313 12:38:05.486370 7518 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Mar 13 12:38:05.486649 master-0 kubenswrapper[7518]: I0313 12:38:05.486509 7518 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Mar 13 12:38:05.487430 master-0 kubenswrapper[7518]: I0313 12:38:05.487405 7518 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Mar 13 12:38:05.489416 master-0 kubenswrapper[7518]: I0313 
12:38:05.488701 7518 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver/apiserver-844bc54c88-vznst"] Mar 13 12:38:05.492964 master-0 kubenswrapper[7518]: I0313 12:38:05.491269 7518 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-844bc54c88-vznst" Mar 13 12:38:05.492964 master-0 kubenswrapper[7518]: I0313 12:38:05.492351 7518 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-f4cd854d4-4p7j6"] Mar 13 12:38:05.495410 master-0 kubenswrapper[7518]: I0313 12:38:05.494558 7518 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Mar 13 12:38:05.495410 master-0 kubenswrapper[7518]: I0313 12:38:05.494719 7518 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Mar 13 12:38:05.495410 master-0 kubenswrapper[7518]: I0313 12:38:05.494754 7518 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Mar 13 12:38:05.495410 master-0 kubenswrapper[7518]: I0313 12:38:05.494795 7518 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Mar 13 12:38:05.495410 master-0 kubenswrapper[7518]: I0313 12:38:05.494893 7518 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Mar 13 12:38:05.495410 master-0 kubenswrapper[7518]: I0313 12:38:05.494972 7518 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Mar 13 12:38:05.495410 master-0 kubenswrapper[7518]: I0313 12:38:05.495081 7518 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Mar 13 12:38:05.495410 master-0 kubenswrapper[7518]: I0313 12:38:05.495302 7518 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Mar 13 
12:38:05.495410 master-0 kubenswrapper[7518]: I0313 12:38:05.495304 7518 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Mar 13 12:38:05.495857 master-0 kubenswrapper[7518]: I0313 12:38:05.495466 7518 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Mar 13 12:38:05.498261 master-0 kubenswrapper[7518]: I0313 12:38:05.497880 7518 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-844bc54c88-vznst"] Mar 13 12:38:05.509982 master-0 kubenswrapper[7518]: I0313 12:38:05.509929 7518 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Mar 13 12:38:05.655965 master-0 kubenswrapper[7518]: I0313 12:38:05.655910 7518 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/2f48243b-6b05-4efa-8420-58a4419622bf-node-pullsecrets\") pod \"apiserver-844bc54c88-vznst\" (UID: \"2f48243b-6b05-4efa-8420-58a4419622bf\") " pod="openshift-apiserver/apiserver-844bc54c88-vznst" Mar 13 12:38:05.655965 master-0 kubenswrapper[7518]: I0313 12:38:05.655973 7518 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/2f48243b-6b05-4efa-8420-58a4419622bf-etcd-serving-ca\") pod \"apiserver-844bc54c88-vznst\" (UID: \"2f48243b-6b05-4efa-8420-58a4419622bf\") " pod="openshift-apiserver/apiserver-844bc54c88-vznst" Mar 13 12:38:05.656242 master-0 kubenswrapper[7518]: I0313 12:38:05.655996 7518 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/2f48243b-6b05-4efa-8420-58a4419622bf-encryption-config\") pod \"apiserver-844bc54c88-vznst\" (UID: \"2f48243b-6b05-4efa-8420-58a4419622bf\") " 
pod="openshift-apiserver/apiserver-844bc54c88-vznst" Mar 13 12:38:05.656242 master-0 kubenswrapper[7518]: I0313 12:38:05.656034 7518 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qhddd\" (UniqueName: \"kubernetes.io/projected/2f48243b-6b05-4efa-8420-58a4419622bf-kube-api-access-qhddd\") pod \"apiserver-844bc54c88-vznst\" (UID: \"2f48243b-6b05-4efa-8420-58a4419622bf\") " pod="openshift-apiserver/apiserver-844bc54c88-vznst" Mar 13 12:38:05.656242 master-0 kubenswrapper[7518]: I0313 12:38:05.656122 7518 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/6606f89b-0e8d-4b65-8642-ff84d93df419-proxy-ca-bundles\") pod \"controller-manager-f4cd854d4-4p7j6\" (UID: \"6606f89b-0e8d-4b65-8642-ff84d93df419\") " pod="openshift-controller-manager/controller-manager-f4cd854d4-4p7j6" Mar 13 12:38:05.656242 master-0 kubenswrapper[7518]: I0313 12:38:05.656224 7518 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2f48243b-6b05-4efa-8420-58a4419622bf-trusted-ca-bundle\") pod \"apiserver-844bc54c88-vznst\" (UID: \"2f48243b-6b05-4efa-8420-58a4419622bf\") " pod="openshift-apiserver/apiserver-844bc54c88-vznst" Mar 13 12:38:05.656430 master-0 kubenswrapper[7518]: I0313 12:38:05.656253 7518 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/2f48243b-6b05-4efa-8420-58a4419622bf-etcd-client\") pod \"apiserver-844bc54c88-vznst\" (UID: \"2f48243b-6b05-4efa-8420-58a4419622bf\") " pod="openshift-apiserver/apiserver-844bc54c88-vznst" Mar 13 12:38:05.656430 master-0 kubenswrapper[7518]: I0313 12:38:05.656343 7518 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-import-ca\" 
(UniqueName: \"kubernetes.io/configmap/2f48243b-6b05-4efa-8420-58a4419622bf-image-import-ca\") pod \"apiserver-844bc54c88-vznst\" (UID: \"2f48243b-6b05-4efa-8420-58a4419622bf\") " pod="openshift-apiserver/apiserver-844bc54c88-vznst" Mar 13 12:38:05.656430 master-0 kubenswrapper[7518]: I0313 12:38:05.656377 7518 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6606f89b-0e8d-4b65-8642-ff84d93df419-config\") pod \"controller-manager-f4cd854d4-4p7j6\" (UID: \"6606f89b-0e8d-4b65-8642-ff84d93df419\") " pod="openshift-controller-manager/controller-manager-f4cd854d4-4p7j6" Mar 13 12:38:05.656430 master-0 kubenswrapper[7518]: I0313 12:38:05.656398 7518 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-62zx7\" (UniqueName: \"kubernetes.io/projected/6606f89b-0e8d-4b65-8642-ff84d93df419-kube-api-access-62zx7\") pod \"controller-manager-f4cd854d4-4p7j6\" (UID: \"6606f89b-0e8d-4b65-8642-ff84d93df419\") " pod="openshift-controller-manager/controller-manager-f4cd854d4-4p7j6" Mar 13 12:38:05.656548 master-0 kubenswrapper[7518]: I0313 12:38:05.656487 7518 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2f48243b-6b05-4efa-8420-58a4419622bf-serving-cert\") pod \"apiserver-844bc54c88-vznst\" (UID: \"2f48243b-6b05-4efa-8420-58a4419622bf\") " pod="openshift-apiserver/apiserver-844bc54c88-vznst" Mar 13 12:38:05.656548 master-0 kubenswrapper[7518]: I0313 12:38:05.656520 7518 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/2f48243b-6b05-4efa-8420-58a4419622bf-audit\") pod \"apiserver-844bc54c88-vznst\" (UID: \"2f48243b-6b05-4efa-8420-58a4419622bf\") " pod="openshift-apiserver/apiserver-844bc54c88-vznst" Mar 13 12:38:05.656605 master-0 
kubenswrapper[7518]: I0313 12:38:05.656556 7518 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/6606f89b-0e8d-4b65-8642-ff84d93df419-client-ca\") pod \"controller-manager-f4cd854d4-4p7j6\" (UID: \"6606f89b-0e8d-4b65-8642-ff84d93df419\") " pod="openshift-controller-manager/controller-manager-f4cd854d4-4p7j6" Mar 13 12:38:05.656605 master-0 kubenswrapper[7518]: I0313 12:38:05.656591 7518 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/2f48243b-6b05-4efa-8420-58a4419622bf-audit-dir\") pod \"apiserver-844bc54c88-vznst\" (UID: \"2f48243b-6b05-4efa-8420-58a4419622bf\") " pod="openshift-apiserver/apiserver-844bc54c88-vznst" Mar 13 12:38:05.656753 master-0 kubenswrapper[7518]: I0313 12:38:05.656670 7518 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6606f89b-0e8d-4b65-8642-ff84d93df419-serving-cert\") pod \"controller-manager-f4cd854d4-4p7j6\" (UID: \"6606f89b-0e8d-4b65-8642-ff84d93df419\") " pod="openshift-controller-manager/controller-manager-f4cd854d4-4p7j6" Mar 13 12:38:05.656753 master-0 kubenswrapper[7518]: I0313 12:38:05.656746 7518 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2f48243b-6b05-4efa-8420-58a4419622bf-config\") pod \"apiserver-844bc54c88-vznst\" (UID: \"2f48243b-6b05-4efa-8420-58a4419622bf\") " pod="openshift-apiserver/apiserver-844bc54c88-vznst" Mar 13 12:38:05.758207 master-0 kubenswrapper[7518]: I0313 12:38:05.758041 7518 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/2f48243b-6b05-4efa-8420-58a4419622bf-node-pullsecrets\") pod \"apiserver-844bc54c88-vznst\" (UID: 
\"2f48243b-6b05-4efa-8420-58a4419622bf\") " pod="openshift-apiserver/apiserver-844bc54c88-vznst" Mar 13 12:38:05.758207 master-0 kubenswrapper[7518]: I0313 12:38:05.758099 7518 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/2f48243b-6b05-4efa-8420-58a4419622bf-etcd-serving-ca\") pod \"apiserver-844bc54c88-vznst\" (UID: \"2f48243b-6b05-4efa-8420-58a4419622bf\") " pod="openshift-apiserver/apiserver-844bc54c88-vznst" Mar 13 12:38:05.758431 master-0 kubenswrapper[7518]: I0313 12:38:05.758287 7518 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/2f48243b-6b05-4efa-8420-58a4419622bf-encryption-config\") pod \"apiserver-844bc54c88-vznst\" (UID: \"2f48243b-6b05-4efa-8420-58a4419622bf\") " pod="openshift-apiserver/apiserver-844bc54c88-vznst" Mar 13 12:38:05.758431 master-0 kubenswrapper[7518]: I0313 12:38:05.758325 7518 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qhddd\" (UniqueName: \"kubernetes.io/projected/2f48243b-6b05-4efa-8420-58a4419622bf-kube-api-access-qhddd\") pod \"apiserver-844bc54c88-vznst\" (UID: \"2f48243b-6b05-4efa-8420-58a4419622bf\") " pod="openshift-apiserver/apiserver-844bc54c88-vznst" Mar 13 12:38:05.758431 master-0 kubenswrapper[7518]: I0313 12:38:05.758330 7518 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/2f48243b-6b05-4efa-8420-58a4419622bf-node-pullsecrets\") pod \"apiserver-844bc54c88-vznst\" (UID: \"2f48243b-6b05-4efa-8420-58a4419622bf\") " pod="openshift-apiserver/apiserver-844bc54c88-vznst" Mar 13 12:38:05.758431 master-0 kubenswrapper[7518]: I0313 12:38:05.758352 7518 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: 
\"kubernetes.io/configmap/6606f89b-0e8d-4b65-8642-ff84d93df419-proxy-ca-bundles\") pod \"controller-manager-f4cd854d4-4p7j6\" (UID: \"6606f89b-0e8d-4b65-8642-ff84d93df419\") " pod="openshift-controller-manager/controller-manager-f4cd854d4-4p7j6" Mar 13 12:38:05.758845 master-0 kubenswrapper[7518]: I0313 12:38:05.758804 7518 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2f48243b-6b05-4efa-8420-58a4419622bf-trusted-ca-bundle\") pod \"apiserver-844bc54c88-vznst\" (UID: \"2f48243b-6b05-4efa-8420-58a4419622bf\") " pod="openshift-apiserver/apiserver-844bc54c88-vznst" Mar 13 12:38:05.758915 master-0 kubenswrapper[7518]: I0313 12:38:05.758865 7518 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/2f48243b-6b05-4efa-8420-58a4419622bf-etcd-client\") pod \"apiserver-844bc54c88-vznst\" (UID: \"2f48243b-6b05-4efa-8420-58a4419622bf\") " pod="openshift-apiserver/apiserver-844bc54c88-vznst" Mar 13 12:38:05.758915 master-0 kubenswrapper[7518]: I0313 12:38:05.758907 7518 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/2f48243b-6b05-4efa-8420-58a4419622bf-image-import-ca\") pod \"apiserver-844bc54c88-vznst\" (UID: \"2f48243b-6b05-4efa-8420-58a4419622bf\") " pod="openshift-apiserver/apiserver-844bc54c88-vznst" Mar 13 12:38:05.758977 master-0 kubenswrapper[7518]: I0313 12:38:05.758934 7518 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-62zx7\" (UniqueName: \"kubernetes.io/projected/6606f89b-0e8d-4b65-8642-ff84d93df419-kube-api-access-62zx7\") pod \"controller-manager-f4cd854d4-4p7j6\" (UID: \"6606f89b-0e8d-4b65-8642-ff84d93df419\") " pod="openshift-controller-manager/controller-manager-f4cd854d4-4p7j6" Mar 13 12:38:05.758977 master-0 kubenswrapper[7518]: I0313 12:38:05.758959 7518 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6606f89b-0e8d-4b65-8642-ff84d93df419-config\") pod \"controller-manager-f4cd854d4-4p7j6\" (UID: \"6606f89b-0e8d-4b65-8642-ff84d93df419\") " pod="openshift-controller-manager/controller-manager-f4cd854d4-4p7j6" Mar 13 12:38:05.759032 master-0 kubenswrapper[7518]: I0313 12:38:05.758997 7518 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2f48243b-6b05-4efa-8420-58a4419622bf-serving-cert\") pod \"apiserver-844bc54c88-vznst\" (UID: \"2f48243b-6b05-4efa-8420-58a4419622bf\") " pod="openshift-apiserver/apiserver-844bc54c88-vznst" Mar 13 12:38:05.759032 master-0 kubenswrapper[7518]: I0313 12:38:05.759013 7518 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/2f48243b-6b05-4efa-8420-58a4419622bf-etcd-serving-ca\") pod \"apiserver-844bc54c88-vznst\" (UID: \"2f48243b-6b05-4efa-8420-58a4419622bf\") " pod="openshift-apiserver/apiserver-844bc54c88-vznst" Mar 13 12:38:05.759032 master-0 kubenswrapper[7518]: I0313 12:38:05.759023 7518 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/2f48243b-6b05-4efa-8420-58a4419622bf-audit\") pod \"apiserver-844bc54c88-vznst\" (UID: \"2f48243b-6b05-4efa-8420-58a4419622bf\") " pod="openshift-apiserver/apiserver-844bc54c88-vznst" Mar 13 12:38:05.759118 master-0 kubenswrapper[7518]: I0313 12:38:05.759061 7518 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/6606f89b-0e8d-4b65-8642-ff84d93df419-client-ca\") pod \"controller-manager-f4cd854d4-4p7j6\" (UID: \"6606f89b-0e8d-4b65-8642-ff84d93df419\") " pod="openshift-controller-manager/controller-manager-f4cd854d4-4p7j6" Mar 13 12:38:05.759118 master-0 
kubenswrapper[7518]: I0313 12:38:05.759099 7518 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/2f48243b-6b05-4efa-8420-58a4419622bf-audit-dir\") pod \"apiserver-844bc54c88-vznst\" (UID: \"2f48243b-6b05-4efa-8420-58a4419622bf\") " pod="openshift-apiserver/apiserver-844bc54c88-vznst" Mar 13 12:38:05.759505 master-0 kubenswrapper[7518]: I0313 12:38:05.759479 7518 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6606f89b-0e8d-4b65-8642-ff84d93df419-serving-cert\") pod \"controller-manager-f4cd854d4-4p7j6\" (UID: \"6606f89b-0e8d-4b65-8642-ff84d93df419\") " pod="openshift-controller-manager/controller-manager-f4cd854d4-4p7j6" Mar 13 12:38:05.759556 master-0 kubenswrapper[7518]: I0313 12:38:05.759528 7518 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2f48243b-6b05-4efa-8420-58a4419622bf-config\") pod \"apiserver-844bc54c88-vznst\" (UID: \"2f48243b-6b05-4efa-8420-58a4419622bf\") " pod="openshift-apiserver/apiserver-844bc54c88-vznst" Mar 13 12:38:05.759695 master-0 kubenswrapper[7518]: I0313 12:38:05.759654 7518 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2f48243b-6b05-4efa-8420-58a4419622bf-trusted-ca-bundle\") pod \"apiserver-844bc54c88-vznst\" (UID: \"2f48243b-6b05-4efa-8420-58a4419622bf\") " pod="openshift-apiserver/apiserver-844bc54c88-vznst" Mar 13 12:38:05.759738 master-0 kubenswrapper[7518]: I0313 12:38:05.759669 7518 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/2f48243b-6b05-4efa-8420-58a4419622bf-audit-dir\") pod \"apiserver-844bc54c88-vznst\" (UID: \"2f48243b-6b05-4efa-8420-58a4419622bf\") " pod="openshift-apiserver/apiserver-844bc54c88-vznst" Mar 13 12:38:05.760605 
master-0 kubenswrapper[7518]: I0313 12:38:05.760408 7518 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/6606f89b-0e8d-4b65-8642-ff84d93df419-client-ca\") pod \"controller-manager-f4cd854d4-4p7j6\" (UID: \"6606f89b-0e8d-4b65-8642-ff84d93df419\") " pod="openshift-controller-manager/controller-manager-f4cd854d4-4p7j6"
Mar 13 12:38:05.760985 master-0 kubenswrapper[7518]: I0313 12:38:05.760763 7518 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6606f89b-0e8d-4b65-8642-ff84d93df419-config\") pod \"controller-manager-f4cd854d4-4p7j6\" (UID: \"6606f89b-0e8d-4b65-8642-ff84d93df419\") " pod="openshift-controller-manager/controller-manager-f4cd854d4-4p7j6"
Mar 13 12:38:05.761030 master-0 kubenswrapper[7518]: I0313 12:38:05.761001 7518 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/2f48243b-6b05-4efa-8420-58a4419622bf-image-import-ca\") pod \"apiserver-844bc54c88-vznst\" (UID: \"2f48243b-6b05-4efa-8420-58a4419622bf\") " pod="openshift-apiserver/apiserver-844bc54c88-vznst"
Mar 13 12:38:05.761575 master-0 kubenswrapper[7518]: I0313 12:38:05.761469 7518 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/2f48243b-6b05-4efa-8420-58a4419622bf-audit\") pod \"apiserver-844bc54c88-vznst\" (UID: \"2f48243b-6b05-4efa-8420-58a4419622bf\") " pod="openshift-apiserver/apiserver-844bc54c88-vznst"
Mar 13 12:38:05.762014 master-0 kubenswrapper[7518]: I0313 12:38:05.761666 7518 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2f48243b-6b05-4efa-8420-58a4419622bf-config\") pod \"apiserver-844bc54c88-vznst\" (UID: \"2f48243b-6b05-4efa-8420-58a4419622bf\") " pod="openshift-apiserver/apiserver-844bc54c88-vznst"
Mar 13 12:38:05.762563 master-0 kubenswrapper[7518]: I0313 12:38:05.762528 7518 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/6606f89b-0e8d-4b65-8642-ff84d93df419-proxy-ca-bundles\") pod \"controller-manager-f4cd854d4-4p7j6\" (UID: \"6606f89b-0e8d-4b65-8642-ff84d93df419\") " pod="openshift-controller-manager/controller-manager-f4cd854d4-4p7j6"
Mar 13 12:38:05.764089 master-0 kubenswrapper[7518]: I0313 12:38:05.763912 7518 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/2f48243b-6b05-4efa-8420-58a4419622bf-encryption-config\") pod \"apiserver-844bc54c88-vznst\" (UID: \"2f48243b-6b05-4efa-8420-58a4419622bf\") " pod="openshift-apiserver/apiserver-844bc54c88-vznst"
Mar 13 12:38:05.774224 master-0 kubenswrapper[7518]: I0313 12:38:05.765650 7518 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2f48243b-6b05-4efa-8420-58a4419622bf-serving-cert\") pod \"apiserver-844bc54c88-vznst\" (UID: \"2f48243b-6b05-4efa-8420-58a4419622bf\") " pod="openshift-apiserver/apiserver-844bc54c88-vznst"
Mar 13 12:38:05.777599 master-0 kubenswrapper[7518]: I0313 12:38:05.777531 7518 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-62zx7\" (UniqueName: \"kubernetes.io/projected/6606f89b-0e8d-4b65-8642-ff84d93df419-kube-api-access-62zx7\") pod \"controller-manager-f4cd854d4-4p7j6\" (UID: \"6606f89b-0e8d-4b65-8642-ff84d93df419\") " pod="openshift-controller-manager/controller-manager-f4cd854d4-4p7j6"
Mar 13 12:38:05.777824 master-0 kubenswrapper[7518]: I0313 12:38:05.777669 7518 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6606f89b-0e8d-4b65-8642-ff84d93df419-serving-cert\") pod \"controller-manager-f4cd854d4-4p7j6\" (UID: \"6606f89b-0e8d-4b65-8642-ff84d93df419\") " pod="openshift-controller-manager/controller-manager-f4cd854d4-4p7j6"
Mar 13 12:38:05.777824 master-0 kubenswrapper[7518]: I0313 12:38:05.777739 7518 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/2f48243b-6b05-4efa-8420-58a4419622bf-etcd-client\") pod \"apiserver-844bc54c88-vznst\" (UID: \"2f48243b-6b05-4efa-8420-58a4419622bf\") " pod="openshift-apiserver/apiserver-844bc54c88-vznst"
Mar 13 12:38:05.778843 master-0 kubenswrapper[7518]: I0313 12:38:05.778824 7518 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qhddd\" (UniqueName: \"kubernetes.io/projected/2f48243b-6b05-4efa-8420-58a4419622bf-kube-api-access-qhddd\") pod \"apiserver-844bc54c88-vznst\" (UID: \"2f48243b-6b05-4efa-8420-58a4419622bf\") " pod="openshift-apiserver/apiserver-844bc54c88-vznst"
Mar 13 12:38:05.818212 master-0 kubenswrapper[7518]: I0313 12:38:05.818151 7518 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-f4cd854d4-4p7j6"
Mar 13 12:38:05.837450 master-0 kubenswrapper[7518]: I0313 12:38:05.837386 7518 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-844bc54c88-vznst"
Mar 13 12:38:07.995882 master-0 kubenswrapper[7518]: I0313 12:38:07.995792 7518 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-controller/operator-controller-controller-manager-6598bfb6c4-dv8rj"
Mar 13 12:38:08.075969 master-0 kubenswrapper[7518]: I0313 12:38:08.075903 7518 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-catalogd/catalogd-controller-manager-7f8b8b6f4c-8fjzg"
Mar 13 12:38:08.966951 master-0 kubenswrapper[7518]: I0313 12:38:08.966880 7518 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/installer-2-master-0"]
Mar 13 12:38:08.967620 master-0 kubenswrapper[7518]: I0313 12:38:08.967590 7518 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/installer-2-master-0"
Mar 13 12:38:08.990513 master-0 kubenswrapper[7518]: I0313 12:38:08.990114 7518 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler/installer-2-master-0"]
Mar 13 12:38:09.020581 master-0 kubenswrapper[7518]: I0313 12:38:09.020541 7518 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/6166c97c-6620-4e9d-8a48-b4c7a4f16655-var-lock\") pod \"installer-2-master-0\" (UID: \"6166c97c-6620-4e9d-8a48-b4c7a4f16655\") " pod="openshift-kube-scheduler/installer-2-master-0"
Mar 13 12:38:09.021046 master-0 kubenswrapper[7518]: I0313 12:38:09.020607 7518 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/6166c97c-6620-4e9d-8a48-b4c7a4f16655-kube-api-access\") pod \"installer-2-master-0\" (UID: \"6166c97c-6620-4e9d-8a48-b4c7a4f16655\") " pod="openshift-kube-scheduler/installer-2-master-0"
Mar 13 12:38:09.021046 master-0 kubenswrapper[7518]: I0313 12:38:09.020633 7518 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/6166c97c-6620-4e9d-8a48-b4c7a4f16655-kubelet-dir\") pod \"installer-2-master-0\" (UID: \"6166c97c-6620-4e9d-8a48-b4c7a4f16655\") " pod="openshift-kube-scheduler/installer-2-master-0"
Mar 13 12:38:09.121796 master-0 kubenswrapper[7518]: I0313 12:38:09.121730 7518 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/6166c97c-6620-4e9d-8a48-b4c7a4f16655-var-lock\") pod \"installer-2-master-0\" (UID: \"6166c97c-6620-4e9d-8a48-b4c7a4f16655\") " pod="openshift-kube-scheduler/installer-2-master-0"
Mar 13 12:38:09.121796 master-0 kubenswrapper[7518]: I0313 12:38:09.121804 7518 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/6166c97c-6620-4e9d-8a48-b4c7a4f16655-kube-api-access\") pod \"installer-2-master-0\" (UID: \"6166c97c-6620-4e9d-8a48-b4c7a4f16655\") " pod="openshift-kube-scheduler/installer-2-master-0"
Mar 13 12:38:09.122078 master-0 kubenswrapper[7518]: I0313 12:38:09.121824 7518 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/6166c97c-6620-4e9d-8a48-b4c7a4f16655-kubelet-dir\") pod \"installer-2-master-0\" (UID: \"6166c97c-6620-4e9d-8a48-b4c7a4f16655\") " pod="openshift-kube-scheduler/installer-2-master-0"
Mar 13 12:38:09.122078 master-0 kubenswrapper[7518]: I0313 12:38:09.121928 7518 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/6166c97c-6620-4e9d-8a48-b4c7a4f16655-kubelet-dir\") pod \"installer-2-master-0\" (UID: \"6166c97c-6620-4e9d-8a48-b4c7a4f16655\") " pod="openshift-kube-scheduler/installer-2-master-0"
Mar 13 12:38:09.122078 master-0 kubenswrapper[7518]: I0313 12:38:09.121965 7518 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/6166c97c-6620-4e9d-8a48-b4c7a4f16655-var-lock\") pod \"installer-2-master-0\" (UID: \"6166c97c-6620-4e9d-8a48-b4c7a4f16655\") " pod="openshift-kube-scheduler/installer-2-master-0"
Mar 13 12:38:09.158079 master-0 kubenswrapper[7518]: I0313 12:38:09.157943 7518 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/6166c97c-6620-4e9d-8a48-b4c7a4f16655-kube-api-access\") pod \"installer-2-master-0\" (UID: \"6166c97c-6620-4e9d-8a48-b4c7a4f16655\") " pod="openshift-kube-scheduler/installer-2-master-0"
Mar 13 12:38:09.304449 master-0 kubenswrapper[7518]: I0313 12:38:09.303849 7518 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/installer-2-master-0"
Mar 13 12:38:13.245219 master-0 kubenswrapper[7518]: I0313 12:38:13.243444 7518 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-f4cd854d4-4p7j6"]
Mar 13 12:38:13.259762 master-0 kubenswrapper[7518]: I0313 12:38:13.256117 7518 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler/installer-2-master-0"]
Mar 13 12:38:13.290508 master-0 kubenswrapper[7518]: I0313 12:38:13.287700 7518 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-844bc54c88-vznst"]
Mar 13 12:38:13.574647 master-0 kubenswrapper[7518]: I0313 12:38:13.574374 7518 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-2-master-0" event={"ID":"6166c97c-6620-4e9d-8a48-b4c7a4f16655","Type":"ContainerStarted","Data":"5688ba0f04496abbd2c32f535d92f55333a99737f9952fe0e71ea00e1f40f9f4"}
Mar 13 12:38:13.577865 master-0 kubenswrapper[7518]: I0313 12:38:13.577822 7518 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-66c7586884-cz8pc" event={"ID":"3020d236-03e0-4916-97dd-f1085632ca43","Type":"ContainerStarted","Data":"89639adb88716cbb87bdb25b40c5ec231bc4f7820ddcadae78f527661f5a5581"}
Mar 13 12:38:13.605346 master-0 kubenswrapper[7518]: I0313 12:38:13.605234 7518 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-64bf9778cb-7qhr4"
Mar 13 12:38:13.605346 master-0 kubenswrapper[7518]: I0313 12:38:13.605266 7518 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-64bf9778cb-7qhr4" event={"ID":"d3d998ee-b26f-4e30-83bc-f94f8c68060a","Type":"ContainerStarted","Data":"2678ae1f026392d01bc32426edbdfbe31df6907392fe5e29e35b3e44ffb8f896"}
Mar 13 12:38:13.605346 master-0 kubenswrapper[7518]: I0313 12:38:13.605282 7518 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-86d6d77c7c-q287n" event={"ID":"bcf05594-4c10-4b54-a47c-d55e323f1f87","Type":"ContainerStarted","Data":"f4a916875b5dd7f287df508905d5d99ad3dbd91629a2c95a805f4ab66aa7996e"}
Mar 13 12:38:13.606582 master-0 kubenswrapper[7518]: I0313 12:38:13.606559 7518 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-677db989d6-ckl2j" event={"ID":"2f79578c-bbfb-4968-893a-730deb4c01f9","Type":"ContainerStarted","Data":"e4eaaa3731593381d12110c6e2099dda3ebd595d5815de35f934386e31a23ac2"}
Mar 13 12:38:13.606634 master-0 kubenswrapper[7518]: I0313 12:38:13.606584 7518 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-677db989d6-ckl2j" event={"ID":"2f79578c-bbfb-4968-893a-730deb4c01f9","Type":"ContainerStarted","Data":"6a8c75c694096fc8dedc129901064fbff36d84f9daf7b91e5a68c2b191c60f00"}
Mar 13 12:38:13.607350 master-0 kubenswrapper[7518]: I0313 12:38:13.607319 7518 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-64bf9778cb-7qhr4"
Mar 13 12:38:13.608342 master-0 kubenswrapper[7518]: I0313 12:38:13.608306 7518 generic.go:334] "Generic (PLEG): container finished" podID="7e365323-a8e3-4102-8819-23a135e158c7" containerID="269c1b753463142fb826f63479c6b05e9171eca4f44f90cb55c18d2b7e026be9" exitCode=0
Mar 13 12:38:13.608427 master-0 kubenswrapper[7518]: I0313 12:38:13.608362 7518 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-6b7d89b46f-h7w7h" event={"ID":"7e365323-a8e3-4102-8819-23a135e158c7","Type":"ContainerDied","Data":"269c1b753463142fb826f63479c6b05e9171eca4f44f90cb55c18d2b7e026be9"}
Mar 13 12:38:13.610071 master-0 kubenswrapper[7518]: I0313 12:38:13.610046 7518 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-745944c6b7-mbjxt" event={"ID":"f39d7f76-0075-44c3-9101-eb2607cb176a","Type":"ContainerStarted","Data":"596641d0ab55b5854707a6848930e7fad02440d9e89e0be41c608e76df02736c"}
Mar 13 12:38:13.611853 master-0 kubenswrapper[7518]: I0313 12:38:13.611824 7518 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-589895fbb7-mmwk7" event={"ID":"13f32761-b386-4f93-b3c0-b16ea53d338a","Type":"ContainerStarted","Data":"10f573bd76e1ec58325bbbaaa195b6fbd1edfe3a3c1f526554f73ca8edd60be0"}
Mar 13 12:38:13.613229 master-0 kubenswrapper[7518]: I0313 12:38:13.613198 7518 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-8d675b596-96gds" event={"ID":"4c0b18db-06ad-4d58-a353-f6fd96309dea","Type":"ContainerStarted","Data":"c79a1fdbba512b9f4f21a08ea7612d350b0579fc0951d1d8b0ae9fc5bc23fc15"}
Mar 13 12:38:13.614742 master-0 kubenswrapper[7518]: I0313 12:38:13.614710 7518 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-844bc54c88-vznst" event={"ID":"2f48243b-6b05-4efa-8420-58a4419622bf","Type":"ContainerStarted","Data":"aa04b90f16ed80e22ecfe4066cdbfb20ddc6e64977b5d63203a00d19ce4e1333"}
Mar 13 12:38:13.615854 master-0 kubenswrapper[7518]: I0313 12:38:13.615834 7518 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-b799f66dc-95b9f" event={"ID":"654bcdbb-82a6-4927-acb4-cd6f1d6ccc9e","Type":"ContainerStarted","Data":"7bc2f5971ed0e7d425c40799b23b57efccbd201a6b95beb9c1e2d82560d76c5e"}
Mar 13 12:38:13.616356 master-0 kubenswrapper[7518]: I0313 12:38:13.616340 7518 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-b799f66dc-95b9f"
Mar 13 12:38:13.625151 master-0 kubenswrapper[7518]: I0313 12:38:13.625084 7518 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-f4cd854d4-4p7j6" event={"ID":"6606f89b-0e8d-4b65-8642-ff84d93df419","Type":"ContainerStarted","Data":"d9a8b9c70043384d15c1e55e57952d35350012b93752bf32b8085c7fe3d04b51"}
Mar 13 12:38:13.686054 master-0 kubenswrapper[7518]: I0313 12:38:13.682314 7518 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-node-tuning-operator/tuned-6tlzf"]
Mar 13 12:38:13.686872 master-0 kubenswrapper[7518]: I0313 12:38:13.686828 7518 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-node-tuning-operator/tuned-6tlzf"
Mar 13 12:38:13.800174 master-0 kubenswrapper[7518]: I0313 12:38:13.789385 7518 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/f83e0d3e-1f73-4727-8ee3-375cbb9e36f8-sys\") pod \"tuned-6tlzf\" (UID: \"f83e0d3e-1f73-4727-8ee3-375cbb9e36f8\") " pod="openshift-cluster-node-tuning-operator/tuned-6tlzf"
Mar 13 12:38:13.800174 master-0 kubenswrapper[7518]: I0313 12:38:13.789432 7518 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/f83e0d3e-1f73-4727-8ee3-375cbb9e36f8-host\") pod \"tuned-6tlzf\" (UID: \"f83e0d3e-1f73-4727-8ee3-375cbb9e36f8\") " pod="openshift-cluster-node-tuning-operator/tuned-6tlzf"
Mar 13 12:38:13.800174 master-0 kubenswrapper[7518]: I0313 12:38:13.789534 7518 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/f83e0d3e-1f73-4727-8ee3-375cbb9e36f8-run\") pod \"tuned-6tlzf\" (UID: \"f83e0d3e-1f73-4727-8ee3-375cbb9e36f8\") " pod="openshift-cluster-node-tuning-operator/tuned-6tlzf"
Mar 13 12:38:13.800174 master-0 kubenswrapper[7518]: I0313 12:38:13.789560 7518 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p6h9f\" (UniqueName: \"kubernetes.io/projected/f83e0d3e-1f73-4727-8ee3-375cbb9e36f8-kube-api-access-p6h9f\") pod \"tuned-6tlzf\" (UID: \"f83e0d3e-1f73-4727-8ee3-375cbb9e36f8\") " pod="openshift-cluster-node-tuning-operator/tuned-6tlzf"
Mar 13 12:38:13.800174 master-0 kubenswrapper[7518]: I0313 12:38:13.789579 7518 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/f83e0d3e-1f73-4727-8ee3-375cbb9e36f8-var-lib-kubelet\") pod \"tuned-6tlzf\" (UID: \"f83e0d3e-1f73-4727-8ee3-375cbb9e36f8\") " pod="openshift-cluster-node-tuning-operator/tuned-6tlzf"
Mar 13 12:38:13.800174 master-0 kubenswrapper[7518]: I0313 12:38:13.789595 7518 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-sysconfig\" (UniqueName: \"kubernetes.io/host-path/f83e0d3e-1f73-4727-8ee3-375cbb9e36f8-etc-sysconfig\") pod \"tuned-6tlzf\" (UID: \"f83e0d3e-1f73-4727-8ee3-375cbb9e36f8\") " pod="openshift-cluster-node-tuning-operator/tuned-6tlzf"
Mar 13 12:38:13.800174 master-0 kubenswrapper[7518]: I0313 12:38:13.789611 7518 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-tuned\" (UniqueName: \"kubernetes.io/empty-dir/f83e0d3e-1f73-4727-8ee3-375cbb9e36f8-etc-tuned\") pod \"tuned-6tlzf\" (UID: \"f83e0d3e-1f73-4727-8ee3-375cbb9e36f8\") " pod="openshift-cluster-node-tuning-operator/tuned-6tlzf"
Mar 13 12:38:13.800174 master-0 kubenswrapper[7518]: I0313 12:38:13.789629 7518 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-modprobe-d\" (UniqueName: \"kubernetes.io/host-path/f83e0d3e-1f73-4727-8ee3-375cbb9e36f8-etc-modprobe-d\") pod \"tuned-6tlzf\" (UID: \"f83e0d3e-1f73-4727-8ee3-375cbb9e36f8\") " pod="openshift-cluster-node-tuning-operator/tuned-6tlzf"
Mar 13 12:38:13.800174 master-0 kubenswrapper[7518]: I0313 12:38:13.789654 7518 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-sysctl-conf\" (UniqueName: \"kubernetes.io/host-path/f83e0d3e-1f73-4727-8ee3-375cbb9e36f8-etc-sysctl-conf\") pod \"tuned-6tlzf\" (UID: \"f83e0d3e-1f73-4727-8ee3-375cbb9e36f8\") " pod="openshift-cluster-node-tuning-operator/tuned-6tlzf"
Mar 13 12:38:13.800174 master-0 kubenswrapper[7518]: I0313 12:38:13.789671 7518 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f83e0d3e-1f73-4727-8ee3-375cbb9e36f8-lib-modules\") pod \"tuned-6tlzf\" (UID: \"f83e0d3e-1f73-4727-8ee3-375cbb9e36f8\") " pod="openshift-cluster-node-tuning-operator/tuned-6tlzf"
Mar 13 12:38:13.800174 master-0 kubenswrapper[7518]: I0313 12:38:13.789690 7518 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-systemd\" (UniqueName: \"kubernetes.io/host-path/f83e0d3e-1f73-4727-8ee3-375cbb9e36f8-etc-systemd\") pod \"tuned-6tlzf\" (UID: \"f83e0d3e-1f73-4727-8ee3-375cbb9e36f8\") " pod="openshift-cluster-node-tuning-operator/tuned-6tlzf"
Mar 13 12:38:13.800174 master-0 kubenswrapper[7518]: I0313 12:38:13.789711 7518 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/f83e0d3e-1f73-4727-8ee3-375cbb9e36f8-etc-kubernetes\") pod \"tuned-6tlzf\" (UID: \"f83e0d3e-1f73-4727-8ee3-375cbb9e36f8\") " pod="openshift-cluster-node-tuning-operator/tuned-6tlzf"
Mar 13 12:38:13.800174 master-0 kubenswrapper[7518]: I0313 12:38:13.789729 7518 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-sysctl-d\" (UniqueName: \"kubernetes.io/host-path/f83e0d3e-1f73-4727-8ee3-375cbb9e36f8-etc-sysctl-d\") pod \"tuned-6tlzf\" (UID: \"f83e0d3e-1f73-4727-8ee3-375cbb9e36f8\") " pod="openshift-cluster-node-tuning-operator/tuned-6tlzf"
Mar 13 12:38:13.800174 master-0 kubenswrapper[7518]: I0313 12:38:13.789744 7518 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/f83e0d3e-1f73-4727-8ee3-375cbb9e36f8-tmp\") pod \"tuned-6tlzf\" (UID: \"f83e0d3e-1f73-4727-8ee3-375cbb9e36f8\") " pod="openshift-cluster-node-tuning-operator/tuned-6tlzf"
Mar 13 12:38:13.828172 master-0 kubenswrapper[7518]: I0313 12:38:13.823721 7518 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-6b7d89b46f-h7w7h"
Mar 13 12:38:13.891277 master-0 kubenswrapper[7518]: I0313 12:38:13.891084 7518 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7e365323-a8e3-4102-8819-23a135e158c7-trusted-ca-bundle\") pod \"7e365323-a8e3-4102-8819-23a135e158c7\" (UID: \"7e365323-a8e3-4102-8819-23a135e158c7\") "
Mar 13 12:38:13.891277 master-0 kubenswrapper[7518]: I0313 12:38:13.891149 7518 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/7e365323-a8e3-4102-8819-23a135e158c7-audit-dir\") pod \"7e365323-a8e3-4102-8819-23a135e158c7\" (UID: \"7e365323-a8e3-4102-8819-23a135e158c7\") "
Mar 13 12:38:13.891490 master-0 kubenswrapper[7518]: I0313 12:38:13.891346 7518 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7e365323-a8e3-4102-8819-23a135e158c7-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "7e365323-a8e3-4102-8819-23a135e158c7" (UID: "7e365323-a8e3-4102-8819-23a135e158c7"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 13 12:38:13.895168 master-0 kubenswrapper[7518]: I0313 12:38:13.891740 7518 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7e365323-a8e3-4102-8819-23a135e158c7-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "7e365323-a8e3-4102-8819-23a135e158c7" (UID: "7e365323-a8e3-4102-8819-23a135e158c7"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 13 12:38:13.895168 master-0 kubenswrapper[7518]: I0313 12:38:13.891809 7518 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/7e365323-a8e3-4102-8819-23a135e158c7-audit-policies\") pod \"7e365323-a8e3-4102-8819-23a135e158c7\" (UID: \"7e365323-a8e3-4102-8819-23a135e158c7\") "
Mar 13 12:38:13.895168 master-0 kubenswrapper[7518]: I0313 12:38:13.891835 7518 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/7e365323-a8e3-4102-8819-23a135e158c7-encryption-config\") pod \"7e365323-a8e3-4102-8819-23a135e158c7\" (UID: \"7e365323-a8e3-4102-8819-23a135e158c7\") "
Mar 13 12:38:13.895168 master-0 kubenswrapper[7518]: I0313 12:38:13.891877 7518 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/7e365323-a8e3-4102-8819-23a135e158c7-etcd-client\") pod \"7e365323-a8e3-4102-8819-23a135e158c7\" (UID: \"7e365323-a8e3-4102-8819-23a135e158c7\") "
Mar 13 12:38:13.895168 master-0 kubenswrapper[7518]: I0313 12:38:13.891903 7518 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xmbjg\" (UniqueName: \"kubernetes.io/projected/7e365323-a8e3-4102-8819-23a135e158c7-kube-api-access-xmbjg\") pod \"7e365323-a8e3-4102-8819-23a135e158c7\" (UID: \"7e365323-a8e3-4102-8819-23a135e158c7\") "
Mar 13 12:38:13.895168 master-0 kubenswrapper[7518]: I0313 12:38:13.891939 7518 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/7e365323-a8e3-4102-8819-23a135e158c7-etcd-serving-ca\") pod \"7e365323-a8e3-4102-8819-23a135e158c7\" (UID: \"7e365323-a8e3-4102-8819-23a135e158c7\") "
Mar 13 12:38:13.895168 master-0 kubenswrapper[7518]: I0313 12:38:13.891959 7518 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7e365323-a8e3-4102-8819-23a135e158c7-serving-cert\") pod \"7e365323-a8e3-4102-8819-23a135e158c7\" (UID: \"7e365323-a8e3-4102-8819-23a135e158c7\") "
Mar 13 12:38:13.895168 master-0 kubenswrapper[7518]: I0313 12:38:13.892086 7518 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/f83e0d3e-1f73-4727-8ee3-375cbb9e36f8-etc-kubernetes\") pod \"tuned-6tlzf\" (UID: \"f83e0d3e-1f73-4727-8ee3-375cbb9e36f8\") " pod="openshift-cluster-node-tuning-operator/tuned-6tlzf"
Mar 13 12:38:13.895168 master-0 kubenswrapper[7518]: I0313 12:38:13.892104 7518 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-sysctl-d\" (UniqueName: \"kubernetes.io/host-path/f83e0d3e-1f73-4727-8ee3-375cbb9e36f8-etc-sysctl-d\") pod \"tuned-6tlzf\" (UID: \"f83e0d3e-1f73-4727-8ee3-375cbb9e36f8\") " pod="openshift-cluster-node-tuning-operator/tuned-6tlzf"
Mar 13 12:38:13.895168 master-0 kubenswrapper[7518]: I0313 12:38:13.892119 7518 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/f83e0d3e-1f73-4727-8ee3-375cbb9e36f8-tmp\") pod \"tuned-6tlzf\" (UID: \"f83e0d3e-1f73-4727-8ee3-375cbb9e36f8\") " pod="openshift-cluster-node-tuning-operator/tuned-6tlzf"
Mar 13 12:38:13.895168 master-0 kubenswrapper[7518]: I0313 12:38:13.892202 7518 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/f83e0d3e-1f73-4727-8ee3-375cbb9e36f8-sys\") pod \"tuned-6tlzf\" (UID: \"f83e0d3e-1f73-4727-8ee3-375cbb9e36f8\") " pod="openshift-cluster-node-tuning-operator/tuned-6tlzf"
Mar 13 12:38:13.895168 master-0 kubenswrapper[7518]: I0313 12:38:13.892222 7518 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/f83e0d3e-1f73-4727-8ee3-375cbb9e36f8-host\") pod \"tuned-6tlzf\" (UID: \"f83e0d3e-1f73-4727-8ee3-375cbb9e36f8\") " pod="openshift-cluster-node-tuning-operator/tuned-6tlzf"
Mar 13 12:38:13.895168 master-0 kubenswrapper[7518]: I0313 12:38:13.892308 7518 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/f83e0d3e-1f73-4727-8ee3-375cbb9e36f8-run\") pod \"tuned-6tlzf\" (UID: \"f83e0d3e-1f73-4727-8ee3-375cbb9e36f8\") " pod="openshift-cluster-node-tuning-operator/tuned-6tlzf"
Mar 13 12:38:13.895168 master-0 kubenswrapper[7518]: I0313 12:38:13.892330 7518 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p6h9f\" (UniqueName: \"kubernetes.io/projected/f83e0d3e-1f73-4727-8ee3-375cbb9e36f8-kube-api-access-p6h9f\") pod \"tuned-6tlzf\" (UID: \"f83e0d3e-1f73-4727-8ee3-375cbb9e36f8\") " pod="openshift-cluster-node-tuning-operator/tuned-6tlzf"
Mar 13 12:38:13.895168 master-0 kubenswrapper[7518]: I0313 12:38:13.892346 7518 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/f83e0d3e-1f73-4727-8ee3-375cbb9e36f8-var-lib-kubelet\") pod \"tuned-6tlzf\" (UID: \"f83e0d3e-1f73-4727-8ee3-375cbb9e36f8\") " pod="openshift-cluster-node-tuning-operator/tuned-6tlzf"
Mar 13 12:38:13.895168 master-0 kubenswrapper[7518]: I0313 12:38:13.892365 7518 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-sysconfig\" (UniqueName: \"kubernetes.io/host-path/f83e0d3e-1f73-4727-8ee3-375cbb9e36f8-etc-sysconfig\") pod \"tuned-6tlzf\" (UID: \"f83e0d3e-1f73-4727-8ee3-375cbb9e36f8\") " pod="openshift-cluster-node-tuning-operator/tuned-6tlzf"
Mar 13 12:38:13.895168 master-0 kubenswrapper[7518]: I0313 12:38:13.892381 7518 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-tuned\" (UniqueName: \"kubernetes.io/empty-dir/f83e0d3e-1f73-4727-8ee3-375cbb9e36f8-etc-tuned\") pod \"tuned-6tlzf\" (UID: \"f83e0d3e-1f73-4727-8ee3-375cbb9e36f8\") " pod="openshift-cluster-node-tuning-operator/tuned-6tlzf"
Mar 13 12:38:13.895168 master-0 kubenswrapper[7518]: I0313 12:38:13.892397 7518 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-modprobe-d\" (UniqueName: \"kubernetes.io/host-path/f83e0d3e-1f73-4727-8ee3-375cbb9e36f8-etc-modprobe-d\") pod \"tuned-6tlzf\" (UID: \"f83e0d3e-1f73-4727-8ee3-375cbb9e36f8\") " pod="openshift-cluster-node-tuning-operator/tuned-6tlzf"
Mar 13 12:38:13.895168 master-0 kubenswrapper[7518]: I0313 12:38:13.892426 7518 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-sysctl-conf\" (UniqueName: \"kubernetes.io/host-path/f83e0d3e-1f73-4727-8ee3-375cbb9e36f8-etc-sysctl-conf\") pod \"tuned-6tlzf\" (UID: \"f83e0d3e-1f73-4727-8ee3-375cbb9e36f8\") " pod="openshift-cluster-node-tuning-operator/tuned-6tlzf"
Mar 13 12:38:13.895168 master-0 kubenswrapper[7518]: I0313 12:38:13.892461 7518 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f83e0d3e-1f73-4727-8ee3-375cbb9e36f8-lib-modules\") pod \"tuned-6tlzf\" (UID: \"f83e0d3e-1f73-4727-8ee3-375cbb9e36f8\") " pod="openshift-cluster-node-tuning-operator/tuned-6tlzf"
Mar 13 12:38:13.895168 master-0 kubenswrapper[7518]: I0313 12:38:13.892481 7518 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-systemd\" (UniqueName: \"kubernetes.io/host-path/f83e0d3e-1f73-4727-8ee3-375cbb9e36f8-etc-systemd\") pod \"tuned-6tlzf\" (UID: \"f83e0d3e-1f73-4727-8ee3-375cbb9e36f8\") " pod="openshift-cluster-node-tuning-operator/tuned-6tlzf"
Mar 13 12:38:13.895168 master-0 kubenswrapper[7518]: I0313 12:38:13.892519 7518 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7e365323-a8e3-4102-8819-23a135e158c7-trusted-ca-bundle\") on node \"master-0\" DevicePath \"\""
Mar 13 12:38:13.895168 master-0 kubenswrapper[7518]: I0313 12:38:13.892530 7518 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/7e365323-a8e3-4102-8819-23a135e158c7-audit-dir\") on node \"master-0\" DevicePath \"\""
Mar 13 12:38:13.895168 master-0 kubenswrapper[7518]: I0313 12:38:13.892625 7518 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-systemd\" (UniqueName: \"kubernetes.io/host-path/f83e0d3e-1f73-4727-8ee3-375cbb9e36f8-etc-systemd\") pod \"tuned-6tlzf\" (UID: \"f83e0d3e-1f73-4727-8ee3-375cbb9e36f8\") " pod="openshift-cluster-node-tuning-operator/tuned-6tlzf"
Mar 13 12:38:13.895168 master-0 kubenswrapper[7518]: I0313 12:38:13.892864 7518 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/f83e0d3e-1f73-4727-8ee3-375cbb9e36f8-sys\") pod \"tuned-6tlzf\" (UID: \"f83e0d3e-1f73-4727-8ee3-375cbb9e36f8\") " pod="openshift-cluster-node-tuning-operator/tuned-6tlzf"
Mar 13 12:38:13.895168 master-0 kubenswrapper[7518]: I0313 12:38:13.893046 7518 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-sysconfig\" (UniqueName: \"kubernetes.io/host-path/f83e0d3e-1f73-4727-8ee3-375cbb9e36f8-etc-sysconfig\") pod \"tuned-6tlzf\" (UID: \"f83e0d3e-1f73-4727-8ee3-375cbb9e36f8\") " pod="openshift-cluster-node-tuning-operator/tuned-6tlzf"
Mar 13 12:38:13.895168 master-0 kubenswrapper[7518]: I0313 12:38:13.893298 7518 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/f83e0d3e-1f73-4727-8ee3-375cbb9e36f8-host\") pod \"tuned-6tlzf\" (UID: \"f83e0d3e-1f73-4727-8ee3-375cbb9e36f8\") " pod="openshift-cluster-node-tuning-operator/tuned-6tlzf"
Mar 13 12:38:13.895168 master-0 kubenswrapper[7518]: I0313 12:38:13.893330 7518 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run\" (UniqueName: \"kubernetes.io/host-path/f83e0d3e-1f73-4727-8ee3-375cbb9e36f8-run\") pod \"tuned-6tlzf\" (UID: \"f83e0d3e-1f73-4727-8ee3-375cbb9e36f8\") " pod="openshift-cluster-node-tuning-operator/tuned-6tlzf"
Mar 13 12:38:13.895168 master-0 kubenswrapper[7518]: I0313 12:38:13.893502 7518 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-sysctl-conf\" (UniqueName: \"kubernetes.io/host-path/f83e0d3e-1f73-4727-8ee3-375cbb9e36f8-etc-sysctl-conf\") pod \"tuned-6tlzf\" (UID: \"f83e0d3e-1f73-4727-8ee3-375cbb9e36f8\") " pod="openshift-cluster-node-tuning-operator/tuned-6tlzf"
Mar 13 12:38:13.895168 master-0 kubenswrapper[7518]: I0313 12:38:13.893548 7518 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-modprobe-d\" (UniqueName: \"kubernetes.io/host-path/f83e0d3e-1f73-4727-8ee3-375cbb9e36f8-etc-modprobe-d\") pod \"tuned-6tlzf\" (UID: \"f83e0d3e-1f73-4727-8ee3-375cbb9e36f8\") " pod="openshift-cluster-node-tuning-operator/tuned-6tlzf"
Mar 13 12:38:13.895168 master-0 kubenswrapper[7518]: I0313 12:38:13.893803 7518 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/f83e0d3e-1f73-4727-8ee3-375cbb9e36f8-var-lib-kubelet\") pod \"tuned-6tlzf\" (UID: \"f83e0d3e-1f73-4727-8ee3-375cbb9e36f8\") " pod="openshift-cluster-node-tuning-operator/tuned-6tlzf"
Mar 13 12:38:13.895168 master-0 kubenswrapper[7518]: I0313 12:38:13.894050 7518 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f83e0d3e-1f73-4727-8ee3-375cbb9e36f8-lib-modules\") pod \"tuned-6tlzf\" (UID: \"f83e0d3e-1f73-4727-8ee3-375cbb9e36f8\") " pod="openshift-cluster-node-tuning-operator/tuned-6tlzf"
Mar 13 12:38:13.895168 master-0 kubenswrapper[7518]: I0313 12:38:13.894166 7518 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/f83e0d3e-1f73-4727-8ee3-375cbb9e36f8-etc-kubernetes\") pod \"tuned-6tlzf\" (UID: \"f83e0d3e-1f73-4727-8ee3-375cbb9e36f8\") " pod="openshift-cluster-node-tuning-operator/tuned-6tlzf"
Mar 13 12:38:13.895168 master-0 kubenswrapper[7518]: I0313 12:38:13.894363 7518 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7e365323-a8e3-4102-8819-23a135e158c7-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "7e365323-a8e3-4102-8819-23a135e158c7" (UID: "7e365323-a8e3-4102-8819-23a135e158c7"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 13 12:38:13.895168 master-0 kubenswrapper[7518]: I0313 12:38:13.894435 7518 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-sysctl-d\" (UniqueName: \"kubernetes.io/host-path/f83e0d3e-1f73-4727-8ee3-375cbb9e36f8-etc-sysctl-d\") pod \"tuned-6tlzf\" (UID: \"f83e0d3e-1f73-4727-8ee3-375cbb9e36f8\") " pod="openshift-cluster-node-tuning-operator/tuned-6tlzf"
Mar 13 12:38:13.895168 master-0 kubenswrapper[7518]: I0313 12:38:13.895005 7518 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7e365323-a8e3-4102-8819-23a135e158c7-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "7e365323-a8e3-4102-8819-23a135e158c7" (UID: "7e365323-a8e3-4102-8819-23a135e158c7"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 13 12:38:13.902165 master-0 kubenswrapper[7518]: I0313 12:38:13.900397 7518 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7e365323-a8e3-4102-8819-23a135e158c7-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "7e365323-a8e3-4102-8819-23a135e158c7" (UID: "7e365323-a8e3-4102-8819-23a135e158c7"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 13 12:38:13.902165 master-0 kubenswrapper[7518]: I0313 12:38:13.900434 7518 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7e365323-a8e3-4102-8819-23a135e158c7-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7e365323-a8e3-4102-8819-23a135e158c7" (UID: "7e365323-a8e3-4102-8819-23a135e158c7"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 13 12:38:13.902165 master-0 kubenswrapper[7518]: I0313 12:38:13.900486 7518 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7e365323-a8e3-4102-8819-23a135e158c7-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "7e365323-a8e3-4102-8819-23a135e158c7" (UID: "7e365323-a8e3-4102-8819-23a135e158c7"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 13 12:38:13.902165 master-0 kubenswrapper[7518]: I0313 12:38:13.900658 7518 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7e365323-a8e3-4102-8819-23a135e158c7-kube-api-access-xmbjg" (OuterVolumeSpecName: "kube-api-access-xmbjg") pod "7e365323-a8e3-4102-8819-23a135e158c7" (UID: "7e365323-a8e3-4102-8819-23a135e158c7"). InnerVolumeSpecName "kube-api-access-xmbjg". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 13 12:38:13.902165 master-0 kubenswrapper[7518]: I0313 12:38:13.900689 7518 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/f83e0d3e-1f73-4727-8ee3-375cbb9e36f8-tmp\") pod \"tuned-6tlzf\" (UID: \"f83e0d3e-1f73-4727-8ee3-375cbb9e36f8\") " pod="openshift-cluster-node-tuning-operator/tuned-6tlzf"
Mar 13 12:38:13.902455 master-0 kubenswrapper[7518]: I0313 12:38:13.902212 7518 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-tuned\" (UniqueName: \"kubernetes.io/empty-dir/f83e0d3e-1f73-4727-8ee3-375cbb9e36f8-etc-tuned\") pod \"tuned-6tlzf\" (UID: \"f83e0d3e-1f73-4727-8ee3-375cbb9e36f8\") " pod="openshift-cluster-node-tuning-operator/tuned-6tlzf"
Mar 13 12:38:13.930097 master-0 kubenswrapper[7518]: I0313 12:38:13.929984 7518 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p6h9f\" (UniqueName: \"kubernetes.io/projected/f83e0d3e-1f73-4727-8ee3-375cbb9e36f8-kube-api-access-p6h9f\") pod \"tuned-6tlzf\" (UID: \"f83e0d3e-1f73-4727-8ee3-375cbb9e36f8\") " pod="openshift-cluster-node-tuning-operator/tuned-6tlzf"
Mar 13 12:38:13.994770 master-0 kubenswrapper[7518]: I0313 12:38:13.993972 7518 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/7e365323-a8e3-4102-8819-23a135e158c7-etcd-serving-ca\") on node \"master-0\" DevicePath \"\""
Mar 13 12:38:13.994770 master-0 kubenswrapper[7518]: I0313 12:38:13.994012 7518 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7e365323-a8e3-4102-8819-23a135e158c7-serving-cert\") on node \"master-0\" DevicePath \"\""
Mar 13 12:38:13.994770 master-0 kubenswrapper[7518]: I0313 12:38:13.994025 7518 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName:
\"kubernetes.io/configmap/7e365323-a8e3-4102-8819-23a135e158c7-audit-policies\") on node \"master-0\" DevicePath \"\"" Mar 13 12:38:13.994770 master-0 kubenswrapper[7518]: I0313 12:38:13.994036 7518 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/7e365323-a8e3-4102-8819-23a135e158c7-encryption-config\") on node \"master-0\" DevicePath \"\"" Mar 13 12:38:13.994770 master-0 kubenswrapper[7518]: I0313 12:38:13.994048 7518 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/7e365323-a8e3-4102-8819-23a135e158c7-etcd-client\") on node \"master-0\" DevicePath \"\"" Mar 13 12:38:13.994770 master-0 kubenswrapper[7518]: I0313 12:38:13.994060 7518 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xmbjg\" (UniqueName: \"kubernetes.io/projected/7e365323-a8e3-4102-8819-23a135e158c7-kube-api-access-xmbjg\") on node \"master-0\" DevicePath \"\"" Mar 13 12:38:14.030320 master-0 kubenswrapper[7518]: I0313 12:38:14.025256 7518 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-node-tuning-operator/tuned-6tlzf" Mar 13 12:38:14.041183 master-0 kubenswrapper[7518]: W0313 12:38:14.040749 7518 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf83e0d3e_1f73_4727_8ee3_375cbb9e36f8.slice/crio-8745b06e77d15d348da4f427d604b8a7c180026a94da1773da1989c30e90c7db WatchSource:0}: Error finding container 8745b06e77d15d348da4f427d604b8a7c180026a94da1773da1989c30e90c7db: Status 404 returned error can't find the container with id 8745b06e77d15d348da4f427d604b8a7c180026a94da1773da1989c30e90c7db Mar 13 12:38:14.263302 master-0 kubenswrapper[7518]: I0313 12:38:14.253231 7518 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-b799f66dc-95b9f" podStartSLOduration=4.597188439 podStartE2EDuration="14.253211943s" podCreationTimestamp="2026-03-13 12:38:00 +0000 UTC" firstStartedPulling="2026-03-13 12:38:03.654414041 +0000 UTC m=+38.287483228" lastFinishedPulling="2026-03-13 12:38:13.310437545 +0000 UTC m=+47.943506732" observedRunningTime="2026-03-13 12:38:14.098523191 +0000 UTC m=+48.731592378" watchObservedRunningTime="2026-03-13 12:38:14.253211943 +0000 UTC m=+48.886281130" Mar 13 12:38:14.617606 master-0 kubenswrapper[7518]: I0313 12:38:14.617370 7518 patch_prober.go:28] interesting pod/route-controller-manager-b799f66dc-95b9f container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.128.0.40:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 13 12:38:14.617606 master-0 kubenswrapper[7518]: I0313 12:38:14.617453 7518 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-b799f66dc-95b9f" 
podUID="654bcdbb-82a6-4927-acb4-cd6f1d6ccc9e" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.128.0.40:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 13 12:38:14.630467 master-0 kubenswrapper[7518]: I0313 12:38:14.630215 7518 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-2-master-0" event={"ID":"6166c97c-6620-4e9d-8a48-b4c7a4f16655","Type":"ContainerStarted","Data":"ae0e9659169982a5b7c5488429c100c7be383ce8123c3c261a0427ad33ba68ff"} Mar 13 12:38:14.632569 master-0 kubenswrapper[7518]: I0313 12:38:14.632234 7518 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-589895fbb7-mmwk7" event={"ID":"13f32761-b386-4f93-b3c0-b16ea53d338a","Type":"ContainerStarted","Data":"d51fee17dbfb0eec9eaf519f46cc4d5178179416265bf9a12b7bc6a8c53d370d"} Mar 13 12:38:14.637880 master-0 kubenswrapper[7518]: I0313 12:38:14.637204 7518 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-8d675b596-96gds" event={"ID":"4c0b18db-06ad-4d58-a353-f6fd96309dea","Type":"ContainerStarted","Data":"9553cf75735bf17d389fb1088bd8b8e97d7600ab1818e3680fca777b7afeaa50"} Mar 13 12:38:14.640019 master-0 kubenswrapper[7518]: I0313 12:38:14.639622 7518 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-node-tuning-operator/tuned-6tlzf" event={"ID":"f83e0d3e-1f73-4727-8ee3-375cbb9e36f8","Type":"ContainerStarted","Data":"bad2d07bb82330781db372ce3cb023a5ce6517adf12e502e87d5588a03cd62a3"} Mar 13 12:38:14.640019 master-0 kubenswrapper[7518]: I0313 12:38:14.639649 7518 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-node-tuning-operator/tuned-6tlzf" event={"ID":"f83e0d3e-1f73-4727-8ee3-375cbb9e36f8","Type":"ContainerStarted","Data":"8745b06e77d15d348da4f427d604b8a7c180026a94da1773da1989c30e90c7db"} Mar 13 12:38:14.642933 master-0 
kubenswrapper[7518]: I0313 12:38:14.642179 7518 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-6b7d89b46f-h7w7h" event={"ID":"7e365323-a8e3-4102-8819-23a135e158c7","Type":"ContainerDied","Data":"f728b042127fc1c79a9d4fdf48b06a17319eb1d69920d155c5c7b4d2f599383d"} Mar 13 12:38:14.642933 master-0 kubenswrapper[7518]: I0313 12:38:14.642274 7518 scope.go:117] "RemoveContainer" containerID="269c1b753463142fb826f63479c6b05e9171eca4f44f90cb55c18d2b7e026be9" Mar 13 12:38:14.643092 master-0 kubenswrapper[7518]: I0313 12:38:14.643003 7518 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-6b7d89b46f-h7w7h" Mar 13 12:38:15.028166 master-0 kubenswrapper[7518]: I0313 12:38:15.025460 7518 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-b799f66dc-95b9f" Mar 13 12:38:15.085172 master-0 kubenswrapper[7518]: I0313 12:38:15.084011 7518 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-node-tuning-operator/tuned-6tlzf" podStartSLOduration=2.08398923 podStartE2EDuration="2.08398923s" podCreationTimestamp="2026-03-13 12:38:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 12:38:15.024254147 +0000 UTC m=+49.657323334" watchObservedRunningTime="2026-03-13 12:38:15.08398923 +0000 UTC m=+49.717058417" Mar 13 12:38:15.175263 master-0 kubenswrapper[7518]: I0313 12:38:15.175195 7518 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-oauth-apiserver/apiserver-787dbf5bb9-5645n"] Mar 13 12:38:15.175506 master-0 kubenswrapper[7518]: E0313 12:38:15.175395 7518 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7e365323-a8e3-4102-8819-23a135e158c7" containerName="fix-audit-permissions" Mar 13 12:38:15.175506 master-0 kubenswrapper[7518]: I0313 
12:38:15.175429 7518 state_mem.go:107] "Deleted CPUSet assignment" podUID="7e365323-a8e3-4102-8819-23a135e158c7" containerName="fix-audit-permissions" Mar 13 12:38:15.175506 master-0 kubenswrapper[7518]: I0313 12:38:15.175502 7518 memory_manager.go:354] "RemoveStaleState removing state" podUID="7e365323-a8e3-4102-8819-23a135e158c7" containerName="fix-audit-permissions" Mar 13 12:38:15.175959 master-0 kubenswrapper[7518]: I0313 12:38:15.175923 7518 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-787dbf5bb9-5645n" Mar 13 12:38:15.180089 master-0 kubenswrapper[7518]: I0313 12:38:15.179985 7518 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Mar 13 12:38:15.180382 master-0 kubenswrapper[7518]: I0313 12:38:15.180353 7518 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Mar 13 12:38:15.180879 master-0 kubenswrapper[7518]: I0313 12:38:15.180514 7518 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Mar 13 12:38:15.180879 master-0 kubenswrapper[7518]: I0313 12:38:15.180669 7518 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Mar 13 12:38:15.180879 master-0 kubenswrapper[7518]: I0313 12:38:15.180816 7518 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Mar 13 12:38:15.181102 master-0 kubenswrapper[7518]: I0313 12:38:15.181066 7518 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Mar 13 12:38:15.181577 master-0 kubenswrapper[7518]: I0313 12:38:15.181339 7518 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Mar 13 12:38:15.181577 master-0 kubenswrapper[7518]: I0313 12:38:15.181504 7518 reflector.go:368] 
Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Mar 13 12:38:15.199283 master-0 kubenswrapper[7518]: I0313 12:38:15.198753 7518 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-oauth-apiserver/apiserver-6b7d89b46f-h7w7h"] Mar 13 12:38:15.199283 master-0 kubenswrapper[7518]: I0313 12:38:15.198819 7518 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-787dbf5bb9-5645n"] Mar 13 12:38:15.206534 master-0 kubenswrapper[7518]: I0313 12:38:15.206478 7518 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-oauth-apiserver/apiserver-6b7d89b46f-h7w7h"] Mar 13 12:38:15.208436 master-0 kubenswrapper[7518]: I0313 12:38:15.208400 7518 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/c4477be6-bcff-407a-8033-b005e19bf5d6-etcd-client\") pod \"apiserver-787dbf5bb9-5645n\" (UID: \"c4477be6-bcff-407a-8033-b005e19bf5d6\") " pod="openshift-oauth-apiserver/apiserver-787dbf5bb9-5645n" Mar 13 12:38:15.208538 master-0 kubenswrapper[7518]: I0313 12:38:15.208445 7518 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/c4477be6-bcff-407a-8033-b005e19bf5d6-audit-policies\") pod \"apiserver-787dbf5bb9-5645n\" (UID: \"c4477be6-bcff-407a-8033-b005e19bf5d6\") " pod="openshift-oauth-apiserver/apiserver-787dbf5bb9-5645n" Mar 13 12:38:15.208538 master-0 kubenswrapper[7518]: I0313 12:38:15.208474 7518 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/c4477be6-bcff-407a-8033-b005e19bf5d6-encryption-config\") pod \"apiserver-787dbf5bb9-5645n\" (UID: \"c4477be6-bcff-407a-8033-b005e19bf5d6\") " pod="openshift-oauth-apiserver/apiserver-787dbf5bb9-5645n" Mar 13 12:38:15.208538 master-0 
kubenswrapper[7518]: I0313 12:38:15.208522 7518 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/c4477be6-bcff-407a-8033-b005e19bf5d6-etcd-serving-ca\") pod \"apiserver-787dbf5bb9-5645n\" (UID: \"c4477be6-bcff-407a-8033-b005e19bf5d6\") " pod="openshift-oauth-apiserver/apiserver-787dbf5bb9-5645n" Mar 13 12:38:15.208683 master-0 kubenswrapper[7518]: I0313 12:38:15.208586 7518 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c4477be6-bcff-407a-8033-b005e19bf5d6-trusted-ca-bundle\") pod \"apiserver-787dbf5bb9-5645n\" (UID: \"c4477be6-bcff-407a-8033-b005e19bf5d6\") " pod="openshift-oauth-apiserver/apiserver-787dbf5bb9-5645n" Mar 13 12:38:15.208683 master-0 kubenswrapper[7518]: I0313 12:38:15.208634 7518 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d4q4x\" (UniqueName: \"kubernetes.io/projected/c4477be6-bcff-407a-8033-b005e19bf5d6-kube-api-access-d4q4x\") pod \"apiserver-787dbf5bb9-5645n\" (UID: \"c4477be6-bcff-407a-8033-b005e19bf5d6\") " pod="openshift-oauth-apiserver/apiserver-787dbf5bb9-5645n" Mar 13 12:38:15.208767 master-0 kubenswrapper[7518]: I0313 12:38:15.208693 7518 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/c4477be6-bcff-407a-8033-b005e19bf5d6-audit-dir\") pod \"apiserver-787dbf5bb9-5645n\" (UID: \"c4477be6-bcff-407a-8033-b005e19bf5d6\") " pod="openshift-oauth-apiserver/apiserver-787dbf5bb9-5645n" Mar 13 12:38:15.208767 master-0 kubenswrapper[7518]: I0313 12:38:15.208720 7518 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c4477be6-bcff-407a-8033-b005e19bf5d6-serving-cert\") pod 
\"apiserver-787dbf5bb9-5645n\" (UID: \"c4477be6-bcff-407a-8033-b005e19bf5d6\") " pod="openshift-oauth-apiserver/apiserver-787dbf5bb9-5645n" Mar 13 12:38:15.311484 master-0 kubenswrapper[7518]: I0313 12:38:15.310494 7518 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/c4477be6-bcff-407a-8033-b005e19bf5d6-etcd-serving-ca\") pod \"apiserver-787dbf5bb9-5645n\" (UID: \"c4477be6-bcff-407a-8033-b005e19bf5d6\") " pod="openshift-oauth-apiserver/apiserver-787dbf5bb9-5645n" Mar 13 12:38:15.311484 master-0 kubenswrapper[7518]: I0313 12:38:15.310560 7518 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c4477be6-bcff-407a-8033-b005e19bf5d6-trusted-ca-bundle\") pod \"apiserver-787dbf5bb9-5645n\" (UID: \"c4477be6-bcff-407a-8033-b005e19bf5d6\") " pod="openshift-oauth-apiserver/apiserver-787dbf5bb9-5645n" Mar 13 12:38:15.311484 master-0 kubenswrapper[7518]: I0313 12:38:15.310601 7518 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d4q4x\" (UniqueName: \"kubernetes.io/projected/c4477be6-bcff-407a-8033-b005e19bf5d6-kube-api-access-d4q4x\") pod \"apiserver-787dbf5bb9-5645n\" (UID: \"c4477be6-bcff-407a-8033-b005e19bf5d6\") " pod="openshift-oauth-apiserver/apiserver-787dbf5bb9-5645n" Mar 13 12:38:15.311484 master-0 kubenswrapper[7518]: I0313 12:38:15.310671 7518 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/c4477be6-bcff-407a-8033-b005e19bf5d6-audit-dir\") pod \"apiserver-787dbf5bb9-5645n\" (UID: \"c4477be6-bcff-407a-8033-b005e19bf5d6\") " pod="openshift-oauth-apiserver/apiserver-787dbf5bb9-5645n" Mar 13 12:38:15.311484 master-0 kubenswrapper[7518]: I0313 12:38:15.310706 7518 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/c4477be6-bcff-407a-8033-b005e19bf5d6-serving-cert\") pod \"apiserver-787dbf5bb9-5645n\" (UID: \"c4477be6-bcff-407a-8033-b005e19bf5d6\") " pod="openshift-oauth-apiserver/apiserver-787dbf5bb9-5645n" Mar 13 12:38:15.311484 master-0 kubenswrapper[7518]: I0313 12:38:15.310734 7518 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/c4477be6-bcff-407a-8033-b005e19bf5d6-etcd-client\") pod \"apiserver-787dbf5bb9-5645n\" (UID: \"c4477be6-bcff-407a-8033-b005e19bf5d6\") " pod="openshift-oauth-apiserver/apiserver-787dbf5bb9-5645n" Mar 13 12:38:15.311484 master-0 kubenswrapper[7518]: I0313 12:38:15.310776 7518 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/c4477be6-bcff-407a-8033-b005e19bf5d6-audit-policies\") pod \"apiserver-787dbf5bb9-5645n\" (UID: \"c4477be6-bcff-407a-8033-b005e19bf5d6\") " pod="openshift-oauth-apiserver/apiserver-787dbf5bb9-5645n" Mar 13 12:38:15.311484 master-0 kubenswrapper[7518]: I0313 12:38:15.310801 7518 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/c4477be6-bcff-407a-8033-b005e19bf5d6-encryption-config\") pod \"apiserver-787dbf5bb9-5645n\" (UID: \"c4477be6-bcff-407a-8033-b005e19bf5d6\") " pod="openshift-oauth-apiserver/apiserver-787dbf5bb9-5645n" Mar 13 12:38:15.312280 master-0 kubenswrapper[7518]: I0313 12:38:15.311845 7518 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/c4477be6-bcff-407a-8033-b005e19bf5d6-audit-dir\") pod \"apiserver-787dbf5bb9-5645n\" (UID: \"c4477be6-bcff-407a-8033-b005e19bf5d6\") " pod="openshift-oauth-apiserver/apiserver-787dbf5bb9-5645n" Mar 13 12:38:15.312280 master-0 kubenswrapper[7518]: I0313 12:38:15.312204 7518 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c4477be6-bcff-407a-8033-b005e19bf5d6-trusted-ca-bundle\") pod \"apiserver-787dbf5bb9-5645n\" (UID: \"c4477be6-bcff-407a-8033-b005e19bf5d6\") " pod="openshift-oauth-apiserver/apiserver-787dbf5bb9-5645n" Mar 13 12:38:15.314869 master-0 kubenswrapper[7518]: I0313 12:38:15.313022 7518 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/c4477be6-bcff-407a-8033-b005e19bf5d6-etcd-serving-ca\") pod \"apiserver-787dbf5bb9-5645n\" (UID: \"c4477be6-bcff-407a-8033-b005e19bf5d6\") " pod="openshift-oauth-apiserver/apiserver-787dbf5bb9-5645n" Mar 13 12:38:15.316079 master-0 kubenswrapper[7518]: I0313 12:38:15.315642 7518 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/c4477be6-bcff-407a-8033-b005e19bf5d6-audit-policies\") pod \"apiserver-787dbf5bb9-5645n\" (UID: \"c4477be6-bcff-407a-8033-b005e19bf5d6\") " pod="openshift-oauth-apiserver/apiserver-787dbf5bb9-5645n" Mar 13 12:38:15.316079 master-0 kubenswrapper[7518]: I0313 12:38:15.315795 7518 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/c4477be6-bcff-407a-8033-b005e19bf5d6-encryption-config\") pod \"apiserver-787dbf5bb9-5645n\" (UID: \"c4477be6-bcff-407a-8033-b005e19bf5d6\") " pod="openshift-oauth-apiserver/apiserver-787dbf5bb9-5645n" Mar 13 12:38:15.318383 master-0 kubenswrapper[7518]: I0313 12:38:15.316898 7518 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c4477be6-bcff-407a-8033-b005e19bf5d6-serving-cert\") pod \"apiserver-787dbf5bb9-5645n\" (UID: \"c4477be6-bcff-407a-8033-b005e19bf5d6\") " pod="openshift-oauth-apiserver/apiserver-787dbf5bb9-5645n" Mar 13 12:38:15.335182 master-0 kubenswrapper[7518]: I0313 12:38:15.332718 7518 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/c4477be6-bcff-407a-8033-b005e19bf5d6-etcd-client\") pod \"apiserver-787dbf5bb9-5645n\" (UID: \"c4477be6-bcff-407a-8033-b005e19bf5d6\") " pod="openshift-oauth-apiserver/apiserver-787dbf5bb9-5645n" Mar 13 12:38:15.337278 master-0 kubenswrapper[7518]: I0313 12:38:15.337181 7518 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/installer-2-master-0" podStartSLOduration=7.337158067 podStartE2EDuration="7.337158067s" podCreationTimestamp="2026-03-13 12:38:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 12:38:15.318393682 +0000 UTC m=+49.951462879" watchObservedRunningTime="2026-03-13 12:38:15.337158067 +0000 UTC m=+49.970227264" Mar 13 12:38:15.344987 master-0 kubenswrapper[7518]: I0313 12:38:15.343217 7518 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/dns-default-m7k6m"] Mar 13 12:38:15.344987 master-0 kubenswrapper[7518]: I0313 12:38:15.344027 7518 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/dns-default-m7k6m" Mar 13 12:38:15.347376 master-0 kubenswrapper[7518]: I0313 12:38:15.347353 7518 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Mar 13 12:38:15.347983 master-0 kubenswrapper[7518]: I0313 12:38:15.347713 7518 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Mar 13 12:38:15.347983 master-0 kubenswrapper[7518]: I0313 12:38:15.347823 7518 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Mar 13 12:38:15.348901 master-0 kubenswrapper[7518]: I0313 12:38:15.348881 7518 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Mar 13 12:38:15.356922 master-0 kubenswrapper[7518]: I0313 12:38:15.356876 7518 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d4q4x\" (UniqueName: \"kubernetes.io/projected/c4477be6-bcff-407a-8033-b005e19bf5d6-kube-api-access-d4q4x\") pod \"apiserver-787dbf5bb9-5645n\" (UID: \"c4477be6-bcff-407a-8033-b005e19bf5d6\") " pod="openshift-oauth-apiserver/apiserver-787dbf5bb9-5645n" Mar 13 12:38:15.364314 master-0 kubenswrapper[7518]: I0313 12:38:15.364167 7518 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-m7k6m"] Mar 13 12:38:15.411862 master-0 kubenswrapper[7518]: I0313 12:38:15.411697 7518 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xxjbd\" (UniqueName: \"kubernetes.io/projected/ef42b65e-2d92-46ac-baaf-30e213787781-kube-api-access-xxjbd\") pod \"dns-default-m7k6m\" (UID: \"ef42b65e-2d92-46ac-baaf-30e213787781\") " pod="openshift-dns/dns-default-m7k6m" Mar 13 12:38:15.411862 master-0 kubenswrapper[7518]: I0313 12:38:15.411787 7518 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: 
\"kubernetes.io/secret/ef42b65e-2d92-46ac-baaf-30e213787781-metrics-tls\") pod \"dns-default-m7k6m\" (UID: \"ef42b65e-2d92-46ac-baaf-30e213787781\") " pod="openshift-dns/dns-default-m7k6m" Mar 13 12:38:15.411862 master-0 kubenswrapper[7518]: I0313 12:38:15.411821 7518 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ef42b65e-2d92-46ac-baaf-30e213787781-config-volume\") pod \"dns-default-m7k6m\" (UID: \"ef42b65e-2d92-46ac-baaf-30e213787781\") " pod="openshift-dns/dns-default-m7k6m" Mar 13 12:38:15.459323 master-0 kubenswrapper[7518]: I0313 12:38:15.459084 7518 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-scheduler/installer-2-master-0"] Mar 13 12:38:15.513556 master-0 kubenswrapper[7518]: I0313 12:38:15.513506 7518 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xxjbd\" (UniqueName: \"kubernetes.io/projected/ef42b65e-2d92-46ac-baaf-30e213787781-kube-api-access-xxjbd\") pod \"dns-default-m7k6m\" (UID: \"ef42b65e-2d92-46ac-baaf-30e213787781\") " pod="openshift-dns/dns-default-m7k6m" Mar 13 12:38:15.513740 master-0 kubenswrapper[7518]: I0313 12:38:15.513589 7518 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/ef42b65e-2d92-46ac-baaf-30e213787781-metrics-tls\") pod \"dns-default-m7k6m\" (UID: \"ef42b65e-2d92-46ac-baaf-30e213787781\") " pod="openshift-dns/dns-default-m7k6m" Mar 13 12:38:15.513740 master-0 kubenswrapper[7518]: E0313 12:38:15.513721 7518 secret.go:189] Couldn't get secret openshift-dns/dns-default-metrics-tls: secret "dns-default-metrics-tls" not found Mar 13 12:38:15.513805 master-0 kubenswrapper[7518]: E0313 12:38:15.513775 7518 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ef42b65e-2d92-46ac-baaf-30e213787781-metrics-tls podName:ef42b65e-2d92-46ac-baaf-30e213787781 
nodeName:}" failed. No retries permitted until 2026-03-13 12:38:16.013758265 +0000 UTC m=+50.646827452 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/ef42b65e-2d92-46ac-baaf-30e213787781-metrics-tls") pod "dns-default-m7k6m" (UID: "ef42b65e-2d92-46ac-baaf-30e213787781") : secret "dns-default-metrics-tls" not found Mar 13 12:38:15.513937 master-0 kubenswrapper[7518]: I0313 12:38:15.513905 7518 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ef42b65e-2d92-46ac-baaf-30e213787781-config-volume\") pod \"dns-default-m7k6m\" (UID: \"ef42b65e-2d92-46ac-baaf-30e213787781\") " pod="openshift-dns/dns-default-m7k6m" Mar 13 12:38:15.514685 master-0 kubenswrapper[7518]: I0313 12:38:15.514658 7518 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ef42b65e-2d92-46ac-baaf-30e213787781-config-volume\") pod \"dns-default-m7k6m\" (UID: \"ef42b65e-2d92-46ac-baaf-30e213787781\") " pod="openshift-dns/dns-default-m7k6m" Mar 13 12:38:15.515434 master-0 kubenswrapper[7518]: I0313 12:38:15.515410 7518 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-oauth-apiserver/apiserver-787dbf5bb9-5645n" Mar 13 12:38:15.531861 master-0 kubenswrapper[7518]: I0313 12:38:15.531823 7518 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xxjbd\" (UniqueName: \"kubernetes.io/projected/ef42b65e-2d92-46ac-baaf-30e213787781-kube-api-access-xxjbd\") pod \"dns-default-m7k6m\" (UID: \"ef42b65e-2d92-46ac-baaf-30e213787781\") " pod="openshift-dns/dns-default-m7k6m" Mar 13 12:38:15.621222 master-0 kubenswrapper[7518]: I0313 12:38:15.606985 7518 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7e365323-a8e3-4102-8819-23a135e158c7" path="/var/lib/kubelet/pods/7e365323-a8e3-4102-8819-23a135e158c7/volumes" Mar 13 12:38:15.776856 master-0 kubenswrapper[7518]: I0313 12:38:15.774880 7518 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-787dbf5bb9-5645n"] Mar 13 12:38:15.786480 master-0 kubenswrapper[7518]: W0313 12:38:15.786319 7518 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc4477be6_bcff_407a_8033_b005e19bf5d6.slice/crio-5c9d522ae739e2277c0296ac70334b7f1898acab312dd9c5c15576df36650d2b WatchSource:0}: Error finding container 5c9d522ae739e2277c0296ac70334b7f1898acab312dd9c5c15576df36650d2b: Status 404 returned error can't find the container with id 5c9d522ae739e2277c0296ac70334b7f1898acab312dd9c5c15576df36650d2b Mar 13 12:38:15.838686 master-0 kubenswrapper[7518]: I0313 12:38:15.838458 7518 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/node-resolver-xpz47"] Mar 13 12:38:15.839341 master-0 kubenswrapper[7518]: I0313 12:38:15.839315 7518 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/node-resolver-xpz47"
Mar 13 12:38:15.924899 master-0 kubenswrapper[7518]: I0313 12:38:15.924656 7518 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/13710582-eac3-42e5-b28a-8b4fd3030af2-hosts-file\") pod \"node-resolver-xpz47\" (UID: \"13710582-eac3-42e5-b28a-8b4fd3030af2\") " pod="openshift-dns/node-resolver-xpz47"
Mar 13 12:38:15.924899 master-0 kubenswrapper[7518]: I0313 12:38:15.924795 7518 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vpfv9\" (UniqueName: \"kubernetes.io/projected/13710582-eac3-42e5-b28a-8b4fd3030af2-kube-api-access-vpfv9\") pod \"node-resolver-xpz47\" (UID: \"13710582-eac3-42e5-b28a-8b4fd3030af2\") " pod="openshift-dns/node-resolver-xpz47"
Mar 13 12:38:16.029203 master-0 kubenswrapper[7518]: I0313 12:38:16.026523 7518 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/13710582-eac3-42e5-b28a-8b4fd3030af2-hosts-file\") pod \"node-resolver-xpz47\" (UID: \"13710582-eac3-42e5-b28a-8b4fd3030af2\") " pod="openshift-dns/node-resolver-xpz47"
Mar 13 12:38:16.029203 master-0 kubenswrapper[7518]: I0313 12:38:16.026641 7518 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vpfv9\" (UniqueName: \"kubernetes.io/projected/13710582-eac3-42e5-b28a-8b4fd3030af2-kube-api-access-vpfv9\") pod \"node-resolver-xpz47\" (UID: \"13710582-eac3-42e5-b28a-8b4fd3030af2\") " pod="openshift-dns/node-resolver-xpz47"
Mar 13 12:38:16.029203 master-0 kubenswrapper[7518]: I0313 12:38:16.026672 7518 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/ef42b65e-2d92-46ac-baaf-30e213787781-metrics-tls\") pod \"dns-default-m7k6m\" (UID: \"ef42b65e-2d92-46ac-baaf-30e213787781\") " pod="openshift-dns/dns-default-m7k6m"
Mar 13 12:38:16.029203 master-0 kubenswrapper[7518]: I0313 12:38:16.027418 7518 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/13710582-eac3-42e5-b28a-8b4fd3030af2-hosts-file\") pod \"node-resolver-xpz47\" (UID: \"13710582-eac3-42e5-b28a-8b4fd3030af2\") " pod="openshift-dns/node-resolver-xpz47"
Mar 13 12:38:16.035688 master-0 kubenswrapper[7518]: I0313 12:38:16.035646 7518 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/ef42b65e-2d92-46ac-baaf-30e213787781-metrics-tls\") pod \"dns-default-m7k6m\" (UID: \"ef42b65e-2d92-46ac-baaf-30e213787781\") " pod="openshift-dns/dns-default-m7k6m"
Mar 13 12:38:16.044756 master-0 kubenswrapper[7518]: I0313 12:38:16.044684 7518 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vpfv9\" (UniqueName: \"kubernetes.io/projected/13710582-eac3-42e5-b28a-8b4fd3030af2-kube-api-access-vpfv9\") pod \"node-resolver-xpz47\" (UID: \"13710582-eac3-42e5-b28a-8b4fd3030af2\") " pod="openshift-dns/node-resolver-xpz47"
Mar 13 12:38:16.166437 master-0 kubenswrapper[7518]: I0313 12:38:16.166311 7518 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-xpz47"
Mar 13 12:38:16.181745 master-0 kubenswrapper[7518]: W0313 12:38:16.181702 7518 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod13710582_eac3_42e5_b28a_8b4fd3030af2.slice/crio-0efd5eb82a3bcc3e1df342102496e59fd5b2f395bc25671cea43a0422444ad1d WatchSource:0}: Error finding container 0efd5eb82a3bcc3e1df342102496e59fd5b2f395bc25671cea43a0422444ad1d: Status 404 returned error can't find the container with id 0efd5eb82a3bcc3e1df342102496e59fd5b2f395bc25671cea43a0422444ad1d
Mar 13 12:38:16.287595 master-0 kubenswrapper[7518]: I0313 12:38:16.287308 7518 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-m7k6m"
Mar 13 12:38:16.528176 master-0 kubenswrapper[7518]: I0313 12:38:16.528117 7518 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-m7k6m"]
Mar 13 12:38:16.682094 master-0 kubenswrapper[7518]: I0313 12:38:16.681980 7518 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-xpz47" event={"ID":"13710582-eac3-42e5-b28a-8b4fd3030af2","Type":"ContainerStarted","Data":"e9c3c633dc4a84b66c2199f60a12db1d162beea3c490a022dd30a5db1e44b646"}
Mar 13 12:38:16.682094 master-0 kubenswrapper[7518]: I0313 12:38:16.682044 7518 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-xpz47" event={"ID":"13710582-eac3-42e5-b28a-8b4fd3030af2","Type":"ContainerStarted","Data":"0efd5eb82a3bcc3e1df342102496e59fd5b2f395bc25671cea43a0422444ad1d"}
Mar 13 12:38:16.683378 master-0 kubenswrapper[7518]: I0313 12:38:16.683338 7518 generic.go:334] "Generic (PLEG): container finished" podID="c4477be6-bcff-407a-8033-b005e19bf5d6" containerID="19df84242808542fbc20d31e0f31a46482d39271b53107a4006c786dc0871be1" exitCode=0
Mar 13 12:38:16.683449 master-0 kubenswrapper[7518]: I0313 12:38:16.683389 7518 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-787dbf5bb9-5645n" event={"ID":"c4477be6-bcff-407a-8033-b005e19bf5d6","Type":"ContainerDied","Data":"19df84242808542fbc20d31e0f31a46482d39271b53107a4006c786dc0871be1"}
Mar 13 12:38:16.683449 master-0 kubenswrapper[7518]: I0313 12:38:16.683415 7518 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-787dbf5bb9-5645n" event={"ID":"c4477be6-bcff-407a-8033-b005e19bf5d6","Type":"ContainerStarted","Data":"5c9d522ae739e2277c0296ac70334b7f1898acab312dd9c5c15576df36650d2b"}
Mar 13 12:38:16.683776 master-0 kubenswrapper[7518]: I0313 12:38:16.683741 7518 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-scheduler/installer-2-master-0" podUID="6166c97c-6620-4e9d-8a48-b4c7a4f16655" containerName="installer" containerID="cri-o://ae0e9659169982a5b7c5488429c100c7be383ce8123c3c261a0427ad33ba68ff" gracePeriod=30
Mar 13 12:38:16.760285 master-0 kubenswrapper[7518]: I0313 12:38:16.760212 7518 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/node-resolver-xpz47" podStartSLOduration=1.7601938320000001 podStartE2EDuration="1.760193832s" podCreationTimestamp="2026-03-13 12:38:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 12:38:16.758300553 +0000 UTC m=+51.391369760" watchObservedRunningTime="2026-03-13 12:38:16.760193832 +0000 UTC m=+51.393263169"
Mar 13 12:38:17.863916 master-0 kubenswrapper[7518]: I0313 12:38:17.863825 7518 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/installer-3-master-0"]
Mar 13 12:38:17.865158 master-0 kubenswrapper[7518]: I0313 12:38:17.864435 7518 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/installer-3-master-0"
Mar 13 12:38:17.882667 master-0 kubenswrapper[7518]: I0313 12:38:17.882622 7518 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler/installer-3-master-0"]
Mar 13 12:38:17.954402 master-0 kubenswrapper[7518]: I0313 12:38:17.954348 7518 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/bfabb495-1707-4c3d-b00e-2f3b2976fb92-kube-api-access\") pod \"installer-3-master-0\" (UID: \"bfabb495-1707-4c3d-b00e-2f3b2976fb92\") " pod="openshift-kube-scheduler/installer-3-master-0"
Mar 13 12:38:17.954402 master-0 kubenswrapper[7518]: I0313 12:38:17.954415 7518 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/bfabb495-1707-4c3d-b00e-2f3b2976fb92-var-lock\") pod \"installer-3-master-0\" (UID: \"bfabb495-1707-4c3d-b00e-2f3b2976fb92\") " pod="openshift-kube-scheduler/installer-3-master-0"
Mar 13 12:38:17.954933 master-0 kubenswrapper[7518]: I0313 12:38:17.954435 7518 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/bfabb495-1707-4c3d-b00e-2f3b2976fb92-kubelet-dir\") pod \"installer-3-master-0\" (UID: \"bfabb495-1707-4c3d-b00e-2f3b2976fb92\") " pod="openshift-kube-scheduler/installer-3-master-0"
Mar 13 12:38:18.057960 master-0 kubenswrapper[7518]: I0313 12:38:18.056500 7518 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/bfabb495-1707-4c3d-b00e-2f3b2976fb92-kube-api-access\") pod \"installer-3-master-0\" (UID: \"bfabb495-1707-4c3d-b00e-2f3b2976fb92\") " pod="openshift-kube-scheduler/installer-3-master-0"
Mar 13 12:38:18.057960 master-0 kubenswrapper[7518]: I0313 12:38:18.057389 7518 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/bfabb495-1707-4c3d-b00e-2f3b2976fb92-var-lock\") pod \"installer-3-master-0\" (UID: \"bfabb495-1707-4c3d-b00e-2f3b2976fb92\") " pod="openshift-kube-scheduler/installer-3-master-0"
Mar 13 12:38:18.057960 master-0 kubenswrapper[7518]: I0313 12:38:18.057428 7518 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/bfabb495-1707-4c3d-b00e-2f3b2976fb92-kubelet-dir\") pod \"installer-3-master-0\" (UID: \"bfabb495-1707-4c3d-b00e-2f3b2976fb92\") " pod="openshift-kube-scheduler/installer-3-master-0"
Mar 13 12:38:18.057960 master-0 kubenswrapper[7518]: I0313 12:38:18.057568 7518 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/bfabb495-1707-4c3d-b00e-2f3b2976fb92-kubelet-dir\") pod \"installer-3-master-0\" (UID: \"bfabb495-1707-4c3d-b00e-2f3b2976fb92\") " pod="openshift-kube-scheduler/installer-3-master-0"
Mar 13 12:38:18.057960 master-0 kubenswrapper[7518]: I0313 12:38:18.057607 7518 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/bfabb495-1707-4c3d-b00e-2f3b2976fb92-var-lock\") pod \"installer-3-master-0\" (UID: \"bfabb495-1707-4c3d-b00e-2f3b2976fb92\") " pod="openshift-kube-scheduler/installer-3-master-0"
Mar 13 12:38:18.076842 master-0 kubenswrapper[7518]: I0313 12:38:18.076770 7518 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/bfabb495-1707-4c3d-b00e-2f3b2976fb92-kube-api-access\") pod \"installer-3-master-0\" (UID: \"bfabb495-1707-4c3d-b00e-2f3b2976fb92\") " pod="openshift-kube-scheduler/installer-3-master-0"
Mar 13 12:38:18.188856 master-0 kubenswrapper[7518]: I0313 12:38:18.188799 7518 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/installer-3-master-0"
Mar 13 12:38:18.647949 master-0 kubenswrapper[7518]: W0313 12:38:18.647892 7518 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podef42b65e_2d92_46ac_baaf_30e213787781.slice/crio-caf607baad46071737a7ad295cff2dc8569126a9cada0edb3e0461efe66c6a52 WatchSource:0}: Error finding container caf607baad46071737a7ad295cff2dc8569126a9cada0edb3e0461efe66c6a52: Status 404 returned error can't find the container with id caf607baad46071737a7ad295cff2dc8569126a9cada0edb3e0461efe66c6a52
Mar 13 12:38:18.695052 master-0 kubenswrapper[7518]: I0313 12:38:18.695027 7518 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler_installer-2-master-0_6166c97c-6620-4e9d-8a48-b4c7a4f16655/installer/0.log"
Mar 13 12:38:18.695233 master-0 kubenswrapper[7518]: I0313 12:38:18.695096 7518 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/installer-2-master-0"
Mar 13 12:38:18.696669 master-0 kubenswrapper[7518]: I0313 12:38:18.696384 7518 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-m7k6m" event={"ID":"ef42b65e-2d92-46ac-baaf-30e213787781","Type":"ContainerStarted","Data":"caf607baad46071737a7ad295cff2dc8569126a9cada0edb3e0461efe66c6a52"}
Mar 13 12:38:18.703829 master-0 kubenswrapper[7518]: I0313 12:38:18.702901 7518 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler_installer-2-master-0_6166c97c-6620-4e9d-8a48-b4c7a4f16655/installer/0.log"
Mar 13 12:38:18.703829 master-0 kubenswrapper[7518]: I0313 12:38:18.702972 7518 generic.go:334] "Generic (PLEG): container finished" podID="6166c97c-6620-4e9d-8a48-b4c7a4f16655" containerID="ae0e9659169982a5b7c5488429c100c7be383ce8123c3c261a0427ad33ba68ff" exitCode=1
Mar 13 12:38:18.703829 master-0 kubenswrapper[7518]: I0313 12:38:18.703012 7518 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-2-master-0" event={"ID":"6166c97c-6620-4e9d-8a48-b4c7a4f16655","Type":"ContainerDied","Data":"ae0e9659169982a5b7c5488429c100c7be383ce8123c3c261a0427ad33ba68ff"}
Mar 13 12:38:18.703829 master-0 kubenswrapper[7518]: I0313 12:38:18.703046 7518 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-2-master-0" event={"ID":"6166c97c-6620-4e9d-8a48-b4c7a4f16655","Type":"ContainerDied","Data":"5688ba0f04496abbd2c32f535d92f55333a99737f9952fe0e71ea00e1f40f9f4"}
Mar 13 12:38:18.703829 master-0 kubenswrapper[7518]: I0313 12:38:18.703073 7518 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/installer-2-master-0"
Mar 13 12:38:18.703829 master-0 kubenswrapper[7518]: I0313 12:38:18.703078 7518 scope.go:117] "RemoveContainer" containerID="ae0e9659169982a5b7c5488429c100c7be383ce8123c3c261a0427ad33ba68ff"
Mar 13 12:38:18.727553 master-0 kubenswrapper[7518]: I0313 12:38:18.727502 7518 scope.go:117] "RemoveContainer" containerID="ae0e9659169982a5b7c5488429c100c7be383ce8123c3c261a0427ad33ba68ff"
Mar 13 12:38:18.728052 master-0 kubenswrapper[7518]: E0313 12:38:18.727958 7518 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ae0e9659169982a5b7c5488429c100c7be383ce8123c3c261a0427ad33ba68ff\": container with ID starting with ae0e9659169982a5b7c5488429c100c7be383ce8123c3c261a0427ad33ba68ff not found: ID does not exist" containerID="ae0e9659169982a5b7c5488429c100c7be383ce8123c3c261a0427ad33ba68ff"
Mar 13 12:38:18.728052 master-0 kubenswrapper[7518]: I0313 12:38:18.728013 7518 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ae0e9659169982a5b7c5488429c100c7be383ce8123c3c261a0427ad33ba68ff"} err="failed to get container status \"ae0e9659169982a5b7c5488429c100c7be383ce8123c3c261a0427ad33ba68ff\": rpc error: code = NotFound desc = could not find container \"ae0e9659169982a5b7c5488429c100c7be383ce8123c3c261a0427ad33ba68ff\": container with ID starting with ae0e9659169982a5b7c5488429c100c7be383ce8123c3c261a0427ad33ba68ff not found: ID does not exist"
Mar 13 12:38:18.765308 master-0 kubenswrapper[7518]: I0313 12:38:18.765266 7518 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/6166c97c-6620-4e9d-8a48-b4c7a4f16655-var-lock\") pod \"6166c97c-6620-4e9d-8a48-b4c7a4f16655\" (UID: \"6166c97c-6620-4e9d-8a48-b4c7a4f16655\") "
Mar 13 12:38:18.765475 master-0 kubenswrapper[7518]: I0313 12:38:18.765362 7518 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/6166c97c-6620-4e9d-8a48-b4c7a4f16655-kubelet-dir\") pod \"6166c97c-6620-4e9d-8a48-b4c7a4f16655\" (UID: \"6166c97c-6620-4e9d-8a48-b4c7a4f16655\") "
Mar 13 12:38:18.765475 master-0 kubenswrapper[7518]: I0313 12:38:18.765395 7518 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/6166c97c-6620-4e9d-8a48-b4c7a4f16655-kube-api-access\") pod \"6166c97c-6620-4e9d-8a48-b4c7a4f16655\" (UID: \"6166c97c-6620-4e9d-8a48-b4c7a4f16655\") "
Mar 13 12:38:18.766613 master-0 kubenswrapper[7518]: I0313 12:38:18.766556 7518 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6166c97c-6620-4e9d-8a48-b4c7a4f16655-var-lock" (OuterVolumeSpecName: "var-lock") pod "6166c97c-6620-4e9d-8a48-b4c7a4f16655" (UID: "6166c97c-6620-4e9d-8a48-b4c7a4f16655"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 13 12:38:18.766742 master-0 kubenswrapper[7518]: I0313 12:38:18.766723 7518 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6166c97c-6620-4e9d-8a48-b4c7a4f16655-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "6166c97c-6620-4e9d-8a48-b4c7a4f16655" (UID: "6166c97c-6620-4e9d-8a48-b4c7a4f16655"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 13 12:38:18.776667 master-0 kubenswrapper[7518]: I0313 12:38:18.773420 7518 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6166c97c-6620-4e9d-8a48-b4c7a4f16655-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "6166c97c-6620-4e9d-8a48-b4c7a4f16655" (UID: "6166c97c-6620-4e9d-8a48-b4c7a4f16655"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 13 12:38:18.866925 master-0 kubenswrapper[7518]: I0313 12:38:18.866866 7518 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/6166c97c-6620-4e9d-8a48-b4c7a4f16655-var-lock\") on node \"master-0\" DevicePath \"\""
Mar 13 12:38:18.866925 master-0 kubenswrapper[7518]: I0313 12:38:18.866901 7518 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/6166c97c-6620-4e9d-8a48-b4c7a4f16655-kubelet-dir\") on node \"master-0\" DevicePath \"\""
Mar 13 12:38:18.866925 master-0 kubenswrapper[7518]: I0313 12:38:18.866913 7518 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/6166c97c-6620-4e9d-8a48-b4c7a4f16655-kube-api-access\") on node \"master-0\" DevicePath \"\""
Mar 13 12:38:19.041910 master-0 kubenswrapper[7518]: I0313 12:38:19.041861 7518 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-scheduler/installer-2-master-0"]
Mar 13 12:38:19.047371 master-0 kubenswrapper[7518]: I0313 12:38:19.047332 7518 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-scheduler/installer-2-master-0"]
Mar 13 12:38:19.076519 master-0 kubenswrapper[7518]: I0313 12:38:19.076472 7518 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler/installer-3-master-0"]
Mar 13 12:38:19.078876 master-0 kubenswrapper[7518]: W0313 12:38:19.078846 7518 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-podbfabb495_1707_4c3d_b00e_2f3b2976fb92.slice/crio-8de98c25946553f78d0d15d3d39442b1f1f340c231f6a8d5c64835e897795dde WatchSource:0}: Error finding container 8de98c25946553f78d0d15d3d39442b1f1f340c231f6a8d5c64835e897795dde: Status 404 returned error can't find the container with id 8de98c25946553f78d0d15d3d39442b1f1f340c231f6a8d5c64835e897795dde
Mar 13 12:38:19.469445 master-0 kubenswrapper[7518]: I0313 12:38:19.466590 7518 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/installer-1-master-0"]
Mar 13 12:38:19.469445 master-0 kubenswrapper[7518]: E0313 12:38:19.467008 7518 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6166c97c-6620-4e9d-8a48-b4c7a4f16655" containerName="installer"
Mar 13 12:38:19.469445 master-0 kubenswrapper[7518]: I0313 12:38:19.467023 7518 state_mem.go:107] "Deleted CPUSet assignment" podUID="6166c97c-6620-4e9d-8a48-b4c7a4f16655" containerName="installer"
Mar 13 12:38:19.469445 master-0 kubenswrapper[7518]: I0313 12:38:19.467236 7518 memory_manager.go:354] "RemoveStaleState removing state" podUID="6166c97c-6620-4e9d-8a48-b4c7a4f16655" containerName="installer"
Mar 13 12:38:19.469445 master-0 kubenswrapper[7518]: I0313 12:38:19.467831 7518 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-1-master-0"
Mar 13 12:38:19.477178 master-0 kubenswrapper[7518]: I0313 12:38:19.470256 7518 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt"
Mar 13 12:38:19.477178 master-0 kubenswrapper[7518]: I0313 12:38:19.470725 7518 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-1-master-0"]
Mar 13 12:38:19.580390 master-0 kubenswrapper[7518]: I0313 12:38:19.580345 7518 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/88bf0bf8-c0ee-454e-8d8b-592a6e796cfc-kubelet-dir\") pod \"installer-1-master-0\" (UID: \"88bf0bf8-c0ee-454e-8d8b-592a6e796cfc\") " pod="openshift-kube-apiserver/installer-1-master-0"
Mar 13 12:38:19.580639 master-0 kubenswrapper[7518]: I0313 12:38:19.580449 7518 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/88bf0bf8-c0ee-454e-8d8b-592a6e796cfc-var-lock\") pod \"installer-1-master-0\" (UID: \"88bf0bf8-c0ee-454e-8d8b-592a6e796cfc\") " pod="openshift-kube-apiserver/installer-1-master-0"
Mar 13 12:38:19.580639 master-0 kubenswrapper[7518]: I0313 12:38:19.580498 7518 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/88bf0bf8-c0ee-454e-8d8b-592a6e796cfc-kube-api-access\") pod \"installer-1-master-0\" (UID: \"88bf0bf8-c0ee-454e-8d8b-592a6e796cfc\") " pod="openshift-kube-apiserver/installer-1-master-0"
Mar 13 12:38:19.607061 master-0 kubenswrapper[7518]: I0313 12:38:19.606848 7518 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6166c97c-6620-4e9d-8a48-b4c7a4f16655" path="/var/lib/kubelet/pods/6166c97c-6620-4e9d-8a48-b4c7a4f16655/volumes"
Mar 13 12:38:19.681232 master-0 kubenswrapper[7518]: I0313 12:38:19.681188 7518 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/88bf0bf8-c0ee-454e-8d8b-592a6e796cfc-kubelet-dir\") pod \"installer-1-master-0\" (UID: \"88bf0bf8-c0ee-454e-8d8b-592a6e796cfc\") " pod="openshift-kube-apiserver/installer-1-master-0"
Mar 13 12:38:19.681437 master-0 kubenswrapper[7518]: I0313 12:38:19.681325 7518 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/88bf0bf8-c0ee-454e-8d8b-592a6e796cfc-kubelet-dir\") pod \"installer-1-master-0\" (UID: \"88bf0bf8-c0ee-454e-8d8b-592a6e796cfc\") " pod="openshift-kube-apiserver/installer-1-master-0"
Mar 13 12:38:19.681529 master-0 kubenswrapper[7518]: I0313 12:38:19.681468 7518 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/88bf0bf8-c0ee-454e-8d8b-592a6e796cfc-var-lock\") pod \"installer-1-master-0\" (UID: \"88bf0bf8-c0ee-454e-8d8b-592a6e796cfc\") " pod="openshift-kube-apiserver/installer-1-master-0"
Mar 13 12:38:19.681583 master-0 kubenswrapper[7518]: I0313 12:38:19.681565 7518 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/88bf0bf8-c0ee-454e-8d8b-592a6e796cfc-var-lock\") pod \"installer-1-master-0\" (UID: \"88bf0bf8-c0ee-454e-8d8b-592a6e796cfc\") " pod="openshift-kube-apiserver/installer-1-master-0"
Mar 13 12:38:19.681669 master-0 kubenswrapper[7518]: I0313 12:38:19.681630 7518 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/88bf0bf8-c0ee-454e-8d8b-592a6e796cfc-kube-api-access\") pod \"installer-1-master-0\" (UID: \"88bf0bf8-c0ee-454e-8d8b-592a6e796cfc\") " pod="openshift-kube-apiserver/installer-1-master-0"
Mar 13 12:38:19.708503 master-0 kubenswrapper[7518]: I0313 12:38:19.708434 7518 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-f4cd854d4-4p7j6" event={"ID":"6606f89b-0e8d-4b65-8642-ff84d93df419","Type":"ContainerStarted","Data":"e8d36799f0c79108a16b192e4f6a4d554eca6d351e64db29850ca2175ca14614"}
Mar 13 12:38:19.708871 master-0 kubenswrapper[7518]: I0313 12:38:19.708814 7518 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-f4cd854d4-4p7j6"
Mar 13 12:38:19.712503 master-0 kubenswrapper[7518]: I0313 12:38:19.712444 7518 generic.go:334] "Generic (PLEG): container finished" podID="2f48243b-6b05-4efa-8420-58a4419622bf" containerID="98d5a0f3b11d1d941da412c009ee3c69f4e8ca0aa4267f8ef0b2168cee85df9d" exitCode=0
Mar 13 12:38:19.712665 master-0 kubenswrapper[7518]: I0313 12:38:19.712614 7518 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-844bc54c88-vznst" event={"ID":"2f48243b-6b05-4efa-8420-58a4419622bf","Type":"ContainerDied","Data":"98d5a0f3b11d1d941da412c009ee3c69f4e8ca0aa4267f8ef0b2168cee85df9d"}
Mar 13 12:38:19.712716 master-0 kubenswrapper[7518]: I0313 12:38:19.712678 7518 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-f4cd854d4-4p7j6"
Mar 13 12:38:19.713929 master-0 kubenswrapper[7518]: I0313 12:38:19.713905 7518 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-787dbf5bb9-5645n" event={"ID":"c4477be6-bcff-407a-8033-b005e19bf5d6","Type":"ContainerStarted","Data":"1a76bbfe309fede6616b36594caf492bb37131c61ced34f22958a33ab95544a9"}
Mar 13 12:38:19.715921 master-0 kubenswrapper[7518]: I0313 12:38:19.715889 7518 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-3-master-0" event={"ID":"bfabb495-1707-4c3d-b00e-2f3b2976fb92","Type":"ContainerStarted","Data":"d8cf37e4c8a527d04eff5203f40779f993e328715e0f8f8ef7b2ff90bad966cf"}
Mar 13 12:38:19.715992 master-0 kubenswrapper[7518]: I0313 12:38:19.715924 7518 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-3-master-0" event={"ID":"bfabb495-1707-4c3d-b00e-2f3b2976fb92","Type":"ContainerStarted","Data":"8de98c25946553f78d0d15d3d39442b1f1f340c231f6a8d5c64835e897795dde"}
Mar 13 12:38:19.760458 master-0 kubenswrapper[7518]: I0313 12:38:19.759256 7518 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/88bf0bf8-c0ee-454e-8d8b-592a6e796cfc-kube-api-access\") pod \"installer-1-master-0\" (UID: \"88bf0bf8-c0ee-454e-8d8b-592a6e796cfc\") " pod="openshift-kube-apiserver/installer-1-master-0"
Mar 13 12:38:19.761104 master-0 kubenswrapper[7518]: I0313 12:38:19.761048 7518 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-f4cd854d4-4p7j6" podStartSLOduration=14.295967988 podStartE2EDuration="19.761031536s" podCreationTimestamp="2026-03-13 12:38:00 +0000 UTC" firstStartedPulling="2026-03-13 12:38:13.26466103 +0000 UTC m=+47.897730217" lastFinishedPulling="2026-03-13 12:38:18.729724578 +0000 UTC m=+53.362793765" observedRunningTime="2026-03-13 12:38:19.759524981 +0000 UTC m=+54.392594188" watchObservedRunningTime="2026-03-13 12:38:19.761031536 +0000 UTC m=+54.394100713"
Mar 13 12:38:19.800832 master-0 kubenswrapper[7518]: I0313 12:38:19.800787 7518 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-1-master-0"
Mar 13 12:38:19.962954 master-0 kubenswrapper[7518]: I0313 12:38:19.962856 7518 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-oauth-apiserver/apiserver-787dbf5bb9-5645n" podStartSLOduration=14.962832901 podStartE2EDuration="14.962832901s" podCreationTimestamp="2026-03-13 12:38:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 12:38:19.907809088 +0000 UTC m=+54.540878265" watchObservedRunningTime="2026-03-13 12:38:19.962832901 +0000 UTC m=+54.595902088"
Mar 13 12:38:20.384393 master-0 kubenswrapper[7518]: I0313 12:38:20.384325 7518 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/installer-3-master-0" podStartSLOduration=3.384303253 podStartE2EDuration="3.384303253s" podCreationTimestamp="2026-03-13 12:38:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 12:38:20.334536181 +0000 UTC m=+54.967605368" watchObservedRunningTime="2026-03-13 12:38:20.384303253 +0000 UTC m=+55.017372450"
Mar 13 12:38:20.387377 master-0 kubenswrapper[7518]: I0313 12:38:20.386024 7518 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-1-master-0"]
Mar 13 12:38:20.510608 master-0 kubenswrapper[7518]: I0313 12:38:20.510556 7518 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-f4cd854d4-4p7j6"]
Mar 13 12:38:20.516020 master-0 kubenswrapper[7518]: I0313 12:38:20.515906 7518 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-oauth-apiserver/apiserver-787dbf5bb9-5645n"
Mar 13 12:38:20.516491 master-0 kubenswrapper[7518]: I0313 12:38:20.516465 7518 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-oauth-apiserver/apiserver-787dbf5bb9-5645n"
Mar 13 12:38:20.528332 master-0 kubenswrapper[7518]: I0313 12:38:20.528101 7518 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-oauth-apiserver/apiserver-787dbf5bb9-5645n"
Mar 13 12:38:20.559251 master-0 kubenswrapper[7518]: I0313 12:38:20.556881 7518 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-b799f66dc-95b9f"]
Mar 13 12:38:20.559251 master-0 kubenswrapper[7518]: I0313 12:38:20.557113 7518 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-b799f66dc-95b9f" podUID="654bcdbb-82a6-4927-acb4-cd6f1d6ccc9e" containerName="route-controller-manager" containerID="cri-o://7bc2f5971ed0e7d425c40799b23b57efccbd201a6b95beb9c1e2d82560d76c5e" gracePeriod=30
Mar 13 12:38:20.742170 master-0 kubenswrapper[7518]: I0313 12:38:20.740453 7518 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-844bc54c88-vznst" event={"ID":"2f48243b-6b05-4efa-8420-58a4419622bf","Type":"ContainerStarted","Data":"bf52f324bcbbbafaab6ac42d9cb6796a983d00914eda52e908260a7381b17256"}
Mar 13 12:38:20.742170 master-0 kubenswrapper[7518]: I0313 12:38:20.740514 7518 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-844bc54c88-vznst" event={"ID":"2f48243b-6b05-4efa-8420-58a4419622bf","Type":"ContainerStarted","Data":"77cc3560fd5ccb6bccdf2197cedb6e97b9eff955661f0f2f09092219522cf119"}
Mar 13 12:38:20.748713 master-0 kubenswrapper[7518]: I0313 12:38:20.747818 7518 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-oauth-apiserver/apiserver-787dbf5bb9-5645n"
Mar 13 12:38:20.766160 master-0 kubenswrapper[7518]: I0313 12:38:20.764782 7518 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver/apiserver-844bc54c88-vznst" podStartSLOduration=13.401418674 podStartE2EDuration="18.764760481s" podCreationTimestamp="2026-03-13 12:38:02 +0000 UTC" firstStartedPulling="2026-03-13 12:38:13.348416347 +0000 UTC m=+47.981485534" lastFinishedPulling="2026-03-13 12:38:18.711758154 +0000 UTC m=+53.344827341" observedRunningTime="2026-03-13 12:38:20.76447488 +0000 UTC m=+55.397544057" watchObservedRunningTime="2026-03-13 12:38:20.764760481 +0000 UTC m=+55.397829668"
Mar 13 12:38:20.838843 master-0 kubenswrapper[7518]: I0313 12:38:20.838796 7518 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-apiserver/apiserver-844bc54c88-vznst"
Mar 13 12:38:20.840119 master-0 kubenswrapper[7518]: I0313 12:38:20.840068 7518 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-apiserver/apiserver-844bc54c88-vznst"
Mar 13 12:38:21.050169 master-0 kubenswrapper[7518]: I0313 12:38:21.048133 7518 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/installer-1-master-0"]
Mar 13 12:38:21.050169 master-0 kubenswrapper[7518]: I0313 12:38:21.049641 7518 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/installer-1-master-0"
Mar 13 12:38:21.056383 master-0 kubenswrapper[7518]: I0313 12:38:21.051600 7518 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager"/"kube-root-ca.crt"
Mar 13 12:38:21.111657 master-0 kubenswrapper[7518]: I0313 12:38:21.111348 7518 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3828446d-a3e3-412f-a0e7-7347b5de523a-kube-api-access\") pod \"installer-1-master-0\" (UID: \"3828446d-a3e3-412f-a0e7-7347b5de523a\") " pod="openshift-kube-controller-manager/installer-1-master-0"
Mar 13 12:38:21.111657 master-0 kubenswrapper[7518]: I0313 12:38:21.111487 7518 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/3828446d-a3e3-412f-a0e7-7347b5de523a-kubelet-dir\") pod \"installer-1-master-0\" (UID: \"3828446d-a3e3-412f-a0e7-7347b5de523a\") " pod="openshift-kube-controller-manager/installer-1-master-0"
Mar 13 12:38:21.111657 master-0 kubenswrapper[7518]: I0313 12:38:21.111598 7518 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/3828446d-a3e3-412f-a0e7-7347b5de523a-var-lock\") pod \"installer-1-master-0\" (UID: \"3828446d-a3e3-412f-a0e7-7347b5de523a\") " pod="openshift-kube-controller-manager/installer-1-master-0"
Mar 13 12:38:21.117150 master-0 kubenswrapper[7518]: I0313 12:38:21.117080 7518 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/installer-1-master-0"]
Mar 13 12:38:21.171165 master-0 kubenswrapper[7518]: I0313 12:38:21.163714 7518 patch_prober.go:28] interesting pod/apiserver-844bc54c88-vznst container/openshift-apiserver namespace/openshift-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok
Mar 13 12:38:21.171165 master-0 kubenswrapper[7518]: [+]log ok
Mar 13 12:38:21.171165 master-0 kubenswrapper[7518]: [+]etcd ok
Mar 13 12:38:21.171165 master-0 kubenswrapper[7518]: [+]poststarthook/start-apiserver-admission-initializer ok
Mar 13 12:38:21.171165 master-0 kubenswrapper[7518]: [+]poststarthook/generic-apiserver-start-informers ok
Mar 13 12:38:21.171165 master-0 kubenswrapper[7518]: [+]poststarthook/max-in-flight-filter ok
Mar 13 12:38:21.171165 master-0 kubenswrapper[7518]: [+]poststarthook/storage-object-count-tracker-hook ok
Mar 13 12:38:21.171165 master-0 kubenswrapper[7518]: [+]poststarthook/image.openshift.io-apiserver-caches ok
Mar 13 12:38:21.171165 master-0 kubenswrapper[7518]: [-]poststarthook/authorization.openshift.io-bootstrapclusterroles failed: reason withheld
Mar 13 12:38:21.171165 master-0 kubenswrapper[7518]: [-]poststarthook/authorization.openshift.io-ensurenodebootstrap-sa failed: reason withheld
Mar 13 12:38:21.171165 master-0 kubenswrapper[7518]: [+]poststarthook/project.openshift.io-projectcache ok
Mar 13 12:38:21.171165 master-0 kubenswrapper[7518]: [+]poststarthook/project.openshift.io-projectauthorizationcache ok
Mar 13 12:38:21.171165 master-0 kubenswrapper[7518]: [+]poststarthook/openshift.io-startinformers ok
Mar 13 12:38:21.171165 master-0 kubenswrapper[7518]: [+]poststarthook/openshift.io-restmapperupdater ok
Mar 13 12:38:21.171165 master-0 kubenswrapper[7518]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok
Mar 13 12:38:21.171165 master-0 kubenswrapper[7518]: livez check failed
Mar 13 12:38:21.171165 master-0 kubenswrapper[7518]: I0313 12:38:21.163780 7518 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-apiserver/apiserver-844bc54c88-vznst" podUID="2f48243b-6b05-4efa-8420-58a4419622bf" containerName="openshift-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 13 12:38:21.213537 master-0 kubenswrapper[7518]:
I0313 12:38:21.213462 7518 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/3828446d-a3e3-412f-a0e7-7347b5de523a-kubelet-dir\") pod \"installer-1-master-0\" (UID: \"3828446d-a3e3-412f-a0e7-7347b5de523a\") " pod="openshift-kube-controller-manager/installer-1-master-0" Mar 13 12:38:21.213708 master-0 kubenswrapper[7518]: I0313 12:38:21.213578 7518 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/3828446d-a3e3-412f-a0e7-7347b5de523a-var-lock\") pod \"installer-1-master-0\" (UID: \"3828446d-a3e3-412f-a0e7-7347b5de523a\") " pod="openshift-kube-controller-manager/installer-1-master-0" Mar 13 12:38:21.213708 master-0 kubenswrapper[7518]: I0313 12:38:21.213667 7518 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/3828446d-a3e3-412f-a0e7-7347b5de523a-kubelet-dir\") pod \"installer-1-master-0\" (UID: \"3828446d-a3e3-412f-a0e7-7347b5de523a\") " pod="openshift-kube-controller-manager/installer-1-master-0" Mar 13 12:38:21.213768 master-0 kubenswrapper[7518]: I0313 12:38:21.213755 7518 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3828446d-a3e3-412f-a0e7-7347b5de523a-kube-api-access\") pod \"installer-1-master-0\" (UID: \"3828446d-a3e3-412f-a0e7-7347b5de523a\") " pod="openshift-kube-controller-manager/installer-1-master-0" Mar 13 12:38:21.214354 master-0 kubenswrapper[7518]: I0313 12:38:21.214301 7518 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/3828446d-a3e3-412f-a0e7-7347b5de523a-var-lock\") pod \"installer-1-master-0\" (UID: \"3828446d-a3e3-412f-a0e7-7347b5de523a\") " pod="openshift-kube-controller-manager/installer-1-master-0" Mar 13 12:38:21.232992 master-0 kubenswrapper[7518]: 
I0313 12:38:21.232780 7518 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3828446d-a3e3-412f-a0e7-7347b5de523a-kube-api-access\") pod \"installer-1-master-0\" (UID: \"3828446d-a3e3-412f-a0e7-7347b5de523a\") " pod="openshift-kube-controller-manager/installer-1-master-0" Mar 13 12:38:21.373573 master-0 kubenswrapper[7518]: I0313 12:38:21.373379 7518 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/installer-1-master-0" Mar 13 12:38:21.746883 master-0 kubenswrapper[7518]: I0313 12:38:21.746829 7518 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-1-master-0" event={"ID":"88bf0bf8-c0ee-454e-8d8b-592a6e796cfc","Type":"ContainerStarted","Data":"cbb3fd1b1972cab7aabe9a34a316fc6619100acdef1d341abf069e3ac4eab0ff"} Mar 13 12:38:21.747967 master-0 kubenswrapper[7518]: I0313 12:38:21.747932 7518 generic.go:334] "Generic (PLEG): container finished" podID="654bcdbb-82a6-4927-acb4-cd6f1d6ccc9e" containerID="7bc2f5971ed0e7d425c40799b23b57efccbd201a6b95beb9c1e2d82560d76c5e" exitCode=0 Mar 13 12:38:21.748648 master-0 kubenswrapper[7518]: I0313 12:38:21.748510 7518 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-b799f66dc-95b9f" event={"ID":"654bcdbb-82a6-4927-acb4-cd6f1d6ccc9e","Type":"ContainerDied","Data":"7bc2f5971ed0e7d425c40799b23b57efccbd201a6b95beb9c1e2d82560d76c5e"} Mar 13 12:38:21.748793 master-0 kubenswrapper[7518]: I0313 12:38:21.748664 7518 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-f4cd854d4-4p7j6" podUID="6606f89b-0e8d-4b65-8642-ff84d93df419" containerName="controller-manager" containerID="cri-o://e8d36799f0c79108a16b192e4f6a4d554eca6d351e64db29850ca2175ca14614" gracePeriod=30 Mar 13 12:38:22.069204 master-0 kubenswrapper[7518]: I0313 
12:38:22.067637 7518 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/installer-1-master-0"] Mar 13 12:38:22.344214 master-0 kubenswrapper[7518]: I0313 12:38:22.343804 7518 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-b799f66dc-95b9f" Mar 13 12:38:22.399332 master-0 kubenswrapper[7518]: I0313 12:38:22.398661 7518 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-784b8dc7f8-4czh5"] Mar 13 12:38:22.399332 master-0 kubenswrapper[7518]: E0313 12:38:22.398893 7518 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="654bcdbb-82a6-4927-acb4-cd6f1d6ccc9e" containerName="route-controller-manager" Mar 13 12:38:22.399332 master-0 kubenswrapper[7518]: I0313 12:38:22.398914 7518 state_mem.go:107] "Deleted CPUSet assignment" podUID="654bcdbb-82a6-4927-acb4-cd6f1d6ccc9e" containerName="route-controller-manager" Mar 13 12:38:22.399332 master-0 kubenswrapper[7518]: I0313 12:38:22.399024 7518 memory_manager.go:354] "RemoveStaleState removing state" podUID="654bcdbb-82a6-4927-acb4-cd6f1d6ccc9e" containerName="route-controller-manager" Mar 13 12:38:22.403637 master-0 kubenswrapper[7518]: I0313 12:38:22.403608 7518 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-784b8dc7f8-4czh5" Mar 13 12:38:22.602494 master-0 kubenswrapper[7518]: I0313 12:38:22.595906 7518 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-f4cd854d4-4p7j6" Mar 13 12:38:22.604827 master-0 kubenswrapper[7518]: I0313 12:38:22.604769 7518 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-784b8dc7f8-4czh5"] Mar 13 12:38:22.698791 master-0 kubenswrapper[7518]: I0313 12:38:22.698756 7518 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/654bcdbb-82a6-4927-acb4-cd6f1d6ccc9e-client-ca\") pod \"654bcdbb-82a6-4927-acb4-cd6f1d6ccc9e\" (UID: \"654bcdbb-82a6-4927-acb4-cd6f1d6ccc9e\") " Mar 13 12:38:22.699078 master-0 kubenswrapper[7518]: I0313 12:38:22.699037 7518 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/654bcdbb-82a6-4927-acb4-cd6f1d6ccc9e-config\") pod \"654bcdbb-82a6-4927-acb4-cd6f1d6ccc9e\" (UID: \"654bcdbb-82a6-4927-acb4-cd6f1d6ccc9e\") " Mar 13 12:38:22.699148 master-0 kubenswrapper[7518]: I0313 12:38:22.699076 7518 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-62zx7\" (UniqueName: \"kubernetes.io/projected/6606f89b-0e8d-4b65-8642-ff84d93df419-kube-api-access-62zx7\") pod \"6606f89b-0e8d-4b65-8642-ff84d93df419\" (UID: \"6606f89b-0e8d-4b65-8642-ff84d93df419\") " Mar 13 12:38:22.699148 master-0 kubenswrapper[7518]: I0313 12:38:22.699099 7518 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6606f89b-0e8d-4b65-8642-ff84d93df419-config\") pod \"6606f89b-0e8d-4b65-8642-ff84d93df419\" (UID: \"6606f89b-0e8d-4b65-8642-ff84d93df419\") " Mar 13 12:38:22.699219 master-0 kubenswrapper[7518]: I0313 12:38:22.699146 7518 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/654bcdbb-82a6-4927-acb4-cd6f1d6ccc9e-serving-cert\") pod \"654bcdbb-82a6-4927-acb4-cd6f1d6ccc9e\" (UID: \"654bcdbb-82a6-4927-acb4-cd6f1d6ccc9e\") " Mar 13 12:38:22.699219 master-0 kubenswrapper[7518]: I0313 12:38:22.699173 7518 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6606f89b-0e8d-4b65-8642-ff84d93df419-serving-cert\") pod \"6606f89b-0e8d-4b65-8642-ff84d93df419\" (UID: \"6606f89b-0e8d-4b65-8642-ff84d93df419\") " Mar 13 12:38:22.699219 master-0 kubenswrapper[7518]: I0313 12:38:22.699192 7518 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/6606f89b-0e8d-4b65-8642-ff84d93df419-proxy-ca-bundles\") pod \"6606f89b-0e8d-4b65-8642-ff84d93df419\" (UID: \"6606f89b-0e8d-4b65-8642-ff84d93df419\") " Mar 13 12:38:22.699382 master-0 kubenswrapper[7518]: I0313 12:38:22.699358 7518 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/6606f89b-0e8d-4b65-8642-ff84d93df419-client-ca\") pod \"6606f89b-0e8d-4b65-8642-ff84d93df419\" (UID: \"6606f89b-0e8d-4b65-8642-ff84d93df419\") " Mar 13 12:38:22.699433 master-0 kubenswrapper[7518]: I0313 12:38:22.699385 7518 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5sbfp\" (UniqueName: \"kubernetes.io/projected/654bcdbb-82a6-4927-acb4-cd6f1d6ccc9e-kube-api-access-5sbfp\") pod \"654bcdbb-82a6-4927-acb4-cd6f1d6ccc9e\" (UID: \"654bcdbb-82a6-4927-acb4-cd6f1d6ccc9e\") " Mar 13 12:38:22.699512 master-0 kubenswrapper[7518]: I0313 12:38:22.699481 7518 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3ffc8a7e-5a23-4600-bfbe-c723501fa8cd-serving-cert\") pod \"route-controller-manager-784b8dc7f8-4czh5\" (UID: 
\"3ffc8a7e-5a23-4600-bfbe-c723501fa8cd\") " pod="openshift-route-controller-manager/route-controller-manager-784b8dc7f8-4czh5" Mar 13 12:38:22.699565 master-0 kubenswrapper[7518]: I0313 12:38:22.699515 7518 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t5zs6\" (UniqueName: \"kubernetes.io/projected/3ffc8a7e-5a23-4600-bfbe-c723501fa8cd-kube-api-access-t5zs6\") pod \"route-controller-manager-784b8dc7f8-4czh5\" (UID: \"3ffc8a7e-5a23-4600-bfbe-c723501fa8cd\") " pod="openshift-route-controller-manager/route-controller-manager-784b8dc7f8-4czh5" Mar 13 12:38:22.699565 master-0 kubenswrapper[7518]: I0313 12:38:22.699550 7518 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3ffc8a7e-5a23-4600-bfbe-c723501fa8cd-config\") pod \"route-controller-manager-784b8dc7f8-4czh5\" (UID: \"3ffc8a7e-5a23-4600-bfbe-c723501fa8cd\") " pod="openshift-route-controller-manager/route-controller-manager-784b8dc7f8-4czh5" Mar 13 12:38:22.699674 master-0 kubenswrapper[7518]: I0313 12:38:22.699569 7518 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/3ffc8a7e-5a23-4600-bfbe-c723501fa8cd-client-ca\") pod \"route-controller-manager-784b8dc7f8-4czh5\" (UID: \"3ffc8a7e-5a23-4600-bfbe-c723501fa8cd\") " pod="openshift-route-controller-manager/route-controller-manager-784b8dc7f8-4czh5" Mar 13 12:38:22.699674 master-0 kubenswrapper[7518]: I0313 12:38:22.699593 7518 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/654bcdbb-82a6-4927-acb4-cd6f1d6ccc9e-client-ca" (OuterVolumeSpecName: "client-ca") pod "654bcdbb-82a6-4927-acb4-cd6f1d6ccc9e" (UID: "654bcdbb-82a6-4927-acb4-cd6f1d6ccc9e"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 13 12:38:22.700032 master-0 kubenswrapper[7518]: I0313 12:38:22.700012 7518 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6606f89b-0e8d-4b65-8642-ff84d93df419-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "6606f89b-0e8d-4b65-8642-ff84d93df419" (UID: "6606f89b-0e8d-4b65-8642-ff84d93df419"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 13 12:38:22.700121 master-0 kubenswrapper[7518]: I0313 12:38:22.700097 7518 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6606f89b-0e8d-4b65-8642-ff84d93df419-client-ca" (OuterVolumeSpecName: "client-ca") pod "6606f89b-0e8d-4b65-8642-ff84d93df419" (UID: "6606f89b-0e8d-4b65-8642-ff84d93df419"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 13 12:38:22.700423 master-0 kubenswrapper[7518]: I0313 12:38:22.700400 7518 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/654bcdbb-82a6-4927-acb4-cd6f1d6ccc9e-config" (OuterVolumeSpecName: "config") pod "654bcdbb-82a6-4927-acb4-cd6f1d6ccc9e" (UID: "654bcdbb-82a6-4927-acb4-cd6f1d6ccc9e"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 13 12:38:22.709809 master-0 kubenswrapper[7518]: I0313 12:38:22.701971 7518 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6606f89b-0e8d-4b65-8642-ff84d93df419-config" (OuterVolumeSpecName: "config") pod "6606f89b-0e8d-4b65-8642-ff84d93df419" (UID: "6606f89b-0e8d-4b65-8642-ff84d93df419"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 13 12:38:22.709809 master-0 kubenswrapper[7518]: I0313 12:38:22.704176 7518 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6606f89b-0e8d-4b65-8642-ff84d93df419-kube-api-access-62zx7" (OuterVolumeSpecName: "kube-api-access-62zx7") pod "6606f89b-0e8d-4b65-8642-ff84d93df419" (UID: "6606f89b-0e8d-4b65-8642-ff84d93df419"). InnerVolumeSpecName "kube-api-access-62zx7". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 12:38:22.709809 master-0 kubenswrapper[7518]: I0313 12:38:22.705188 7518 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/654bcdbb-82a6-4927-acb4-cd6f1d6ccc9e-kube-api-access-5sbfp" (OuterVolumeSpecName: "kube-api-access-5sbfp") pod "654bcdbb-82a6-4927-acb4-cd6f1d6ccc9e" (UID: "654bcdbb-82a6-4927-acb4-cd6f1d6ccc9e"). InnerVolumeSpecName "kube-api-access-5sbfp". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 12:38:22.713633 master-0 kubenswrapper[7518]: I0313 12:38:22.710183 7518 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/654bcdbb-82a6-4927-acb4-cd6f1d6ccc9e-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "654bcdbb-82a6-4927-acb4-cd6f1d6ccc9e" (UID: "654bcdbb-82a6-4927-acb4-cd6f1d6ccc9e"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 12:38:22.713633 master-0 kubenswrapper[7518]: I0313 12:38:22.712621 7518 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6606f89b-0e8d-4b65-8642-ff84d93df419-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "6606f89b-0e8d-4b65-8642-ff84d93df419" (UID: "6606f89b-0e8d-4b65-8642-ff84d93df419"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 12:38:22.767458 master-0 kubenswrapper[7518]: I0313 12:38:22.767382 7518 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-1-master-0" event={"ID":"88bf0bf8-c0ee-454e-8d8b-592a6e796cfc","Type":"ContainerStarted","Data":"f528e329070374fe2c7b4c96e9e572f6132a46e0533c48dae8a60425fcb61903"} Mar 13 12:38:22.771495 master-0 kubenswrapper[7518]: I0313 12:38:22.771239 7518 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-m7k6m" event={"ID":"ef42b65e-2d92-46ac-baaf-30e213787781","Type":"ContainerStarted","Data":"681ed74e510e131396c305ef9f6d9943e5a9960698421f969dfb518ea05cb31d"} Mar 13 12:38:22.775116 master-0 kubenswrapper[7518]: I0313 12:38:22.774967 7518 generic.go:334] "Generic (PLEG): container finished" podID="6606f89b-0e8d-4b65-8642-ff84d93df419" containerID="e8d36799f0c79108a16b192e4f6a4d554eca6d351e64db29850ca2175ca14614" exitCode=0 Mar 13 12:38:22.775116 master-0 kubenswrapper[7518]: I0313 12:38:22.775025 7518 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-f4cd854d4-4p7j6" event={"ID":"6606f89b-0e8d-4b65-8642-ff84d93df419","Type":"ContainerDied","Data":"e8d36799f0c79108a16b192e4f6a4d554eca6d351e64db29850ca2175ca14614"} Mar 13 12:38:22.775116 master-0 kubenswrapper[7518]: I0313 12:38:22.775051 7518 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-f4cd854d4-4p7j6" event={"ID":"6606f89b-0e8d-4b65-8642-ff84d93df419","Type":"ContainerDied","Data":"d9a8b9c70043384d15c1e55e57952d35350012b93752bf32b8085c7fe3d04b51"} Mar 13 12:38:22.775116 master-0 kubenswrapper[7518]: I0313 12:38:22.775087 7518 scope.go:117] "RemoveContainer" containerID="e8d36799f0c79108a16b192e4f6a4d554eca6d351e64db29850ca2175ca14614" Mar 13 12:38:22.775586 master-0 kubenswrapper[7518]: I0313 12:38:22.775204 7518 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-f4cd854d4-4p7j6" Mar 13 12:38:22.792184 master-0 kubenswrapper[7518]: I0313 12:38:22.792073 7518 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/installer-1-master-0" podStartSLOduration=3.792044148 podStartE2EDuration="3.792044148s" podCreationTimestamp="2026-03-13 12:38:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 12:38:22.789950562 +0000 UTC m=+57.423019749" watchObservedRunningTime="2026-03-13 12:38:22.792044148 +0000 UTC m=+57.425113335" Mar 13 12:38:22.797427 master-0 kubenswrapper[7518]: I0313 12:38:22.795652 7518 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-1-master-0" event={"ID":"3828446d-a3e3-412f-a0e7-7347b5de523a","Type":"ContainerStarted","Data":"504639ecf4788ce4c267fd64fb378348d1c51285c4c07623bf66e15e61133a68"} Mar 13 12:38:22.797937 master-0 kubenswrapper[7518]: I0313 12:38:22.797881 7518 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-b799f66dc-95b9f" event={"ID":"654bcdbb-82a6-4927-acb4-cd6f1d6ccc9e","Type":"ContainerDied","Data":"52319bb4328d419dddcfd98978ce03c54f6355e317c6c887e05cbba64bedbf53"} Mar 13 12:38:22.798506 master-0 kubenswrapper[7518]: I0313 12:38:22.798473 7518 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-b799f66dc-95b9f" Mar 13 12:38:22.803159 master-0 kubenswrapper[7518]: I0313 12:38:22.803087 7518 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/3ffc8a7e-5a23-4600-bfbe-c723501fa8cd-client-ca\") pod \"route-controller-manager-784b8dc7f8-4czh5\" (UID: \"3ffc8a7e-5a23-4600-bfbe-c723501fa8cd\") " pod="openshift-route-controller-manager/route-controller-manager-784b8dc7f8-4czh5" Mar 13 12:38:22.803552 master-0 kubenswrapper[7518]: I0313 12:38:22.803529 7518 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3ffc8a7e-5a23-4600-bfbe-c723501fa8cd-serving-cert\") pod \"route-controller-manager-784b8dc7f8-4czh5\" (UID: \"3ffc8a7e-5a23-4600-bfbe-c723501fa8cd\") " pod="openshift-route-controller-manager/route-controller-manager-784b8dc7f8-4czh5" Mar 13 12:38:22.803613 master-0 kubenswrapper[7518]: I0313 12:38:22.803590 7518 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t5zs6\" (UniqueName: \"kubernetes.io/projected/3ffc8a7e-5a23-4600-bfbe-c723501fa8cd-kube-api-access-t5zs6\") pod \"route-controller-manager-784b8dc7f8-4czh5\" (UID: \"3ffc8a7e-5a23-4600-bfbe-c723501fa8cd\") " pod="openshift-route-controller-manager/route-controller-manager-784b8dc7f8-4czh5" Mar 13 12:38:22.803784 master-0 kubenswrapper[7518]: I0313 12:38:22.803754 7518 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3ffc8a7e-5a23-4600-bfbe-c723501fa8cd-config\") pod \"route-controller-manager-784b8dc7f8-4czh5\" (UID: \"3ffc8a7e-5a23-4600-bfbe-c723501fa8cd\") " pod="openshift-route-controller-manager/route-controller-manager-784b8dc7f8-4czh5" Mar 13 12:38:22.804399 master-0 kubenswrapper[7518]: I0313 12:38:22.804378 7518 
reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/654bcdbb-82a6-4927-acb4-cd6f1d6ccc9e-config\") on node \"master-0\" DevicePath \"\"" Mar 13 12:38:22.804497 master-0 kubenswrapper[7518]: I0313 12:38:22.804408 7518 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-62zx7\" (UniqueName: \"kubernetes.io/projected/6606f89b-0e8d-4b65-8642-ff84d93df419-kube-api-access-62zx7\") on node \"master-0\" DevicePath \"\"" Mar 13 12:38:22.804909 master-0 kubenswrapper[7518]: I0313 12:38:22.804691 7518 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6606f89b-0e8d-4b65-8642-ff84d93df419-config\") on node \"master-0\" DevicePath \"\"" Mar 13 12:38:22.804909 master-0 kubenswrapper[7518]: I0313 12:38:22.804906 7518 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/654bcdbb-82a6-4927-acb4-cd6f1d6ccc9e-serving-cert\") on node \"master-0\" DevicePath \"\"" Mar 13 12:38:22.805008 master-0 kubenswrapper[7518]: I0313 12:38:22.804921 7518 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6606f89b-0e8d-4b65-8642-ff84d93df419-serving-cert\") on node \"master-0\" DevicePath \"\"" Mar 13 12:38:22.805008 master-0 kubenswrapper[7518]: I0313 12:38:22.804945 7518 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/6606f89b-0e8d-4b65-8642-ff84d93df419-proxy-ca-bundles\") on node \"master-0\" DevicePath \"\"" Mar 13 12:38:22.805008 master-0 kubenswrapper[7518]: I0313 12:38:22.804963 7518 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/6606f89b-0e8d-4b65-8642-ff84d93df419-client-ca\") on node \"master-0\" DevicePath \"\"" Mar 13 12:38:22.805008 master-0 kubenswrapper[7518]: I0313 12:38:22.804976 7518 reconciler_common.go:293] "Volume detached 
for volume \"kube-api-access-5sbfp\" (UniqueName: \"kubernetes.io/projected/654bcdbb-82a6-4927-acb4-cd6f1d6ccc9e-kube-api-access-5sbfp\") on node \"master-0\" DevicePath \"\"" Mar 13 12:38:22.805008 master-0 kubenswrapper[7518]: I0313 12:38:22.804989 7518 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/654bcdbb-82a6-4927-acb4-cd6f1d6ccc9e-client-ca\") on node \"master-0\" DevicePath \"\"" Mar 13 12:38:22.840501 master-0 kubenswrapper[7518]: I0313 12:38:22.824474 7518 scope.go:117] "RemoveContainer" containerID="e8d36799f0c79108a16b192e4f6a4d554eca6d351e64db29850ca2175ca14614" Mar 13 12:38:22.840501 master-0 kubenswrapper[7518]: I0313 12:38:22.827302 7518 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/3ffc8a7e-5a23-4600-bfbe-c723501fa8cd-client-ca\") pod \"route-controller-manager-784b8dc7f8-4czh5\" (UID: \"3ffc8a7e-5a23-4600-bfbe-c723501fa8cd\") " pod="openshift-route-controller-manager/route-controller-manager-784b8dc7f8-4czh5" Mar 13 12:38:22.840501 master-0 kubenswrapper[7518]: I0313 12:38:22.827820 7518 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3ffc8a7e-5a23-4600-bfbe-c723501fa8cd-config\") pod \"route-controller-manager-784b8dc7f8-4czh5\" (UID: \"3ffc8a7e-5a23-4600-bfbe-c723501fa8cd\") " pod="openshift-route-controller-manager/route-controller-manager-784b8dc7f8-4czh5" Mar 13 12:38:22.840501 master-0 kubenswrapper[7518]: I0313 12:38:22.829337 7518 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3ffc8a7e-5a23-4600-bfbe-c723501fa8cd-serving-cert\") pod \"route-controller-manager-784b8dc7f8-4czh5\" (UID: \"3ffc8a7e-5a23-4600-bfbe-c723501fa8cd\") " pod="openshift-route-controller-manager/route-controller-manager-784b8dc7f8-4czh5" Mar 13 12:38:22.840501 master-0 kubenswrapper[7518]: E0313 
12:38:22.839055 7518 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e8d36799f0c79108a16b192e4f6a4d554eca6d351e64db29850ca2175ca14614\": container with ID starting with e8d36799f0c79108a16b192e4f6a4d554eca6d351e64db29850ca2175ca14614 not found: ID does not exist" containerID="e8d36799f0c79108a16b192e4f6a4d554eca6d351e64db29850ca2175ca14614" Mar 13 12:38:22.840501 master-0 kubenswrapper[7518]: I0313 12:38:22.839182 7518 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e8d36799f0c79108a16b192e4f6a4d554eca6d351e64db29850ca2175ca14614"} err="failed to get container status \"e8d36799f0c79108a16b192e4f6a4d554eca6d351e64db29850ca2175ca14614\": rpc error: code = NotFound desc = could not find container \"e8d36799f0c79108a16b192e4f6a4d554eca6d351e64db29850ca2175ca14614\": container with ID starting with e8d36799f0c79108a16b192e4f6a4d554eca6d351e64db29850ca2175ca14614 not found: ID does not exist" Mar 13 12:38:22.840501 master-0 kubenswrapper[7518]: I0313 12:38:22.839218 7518 scope.go:117] "RemoveContainer" containerID="7bc2f5971ed0e7d425c40799b23b57efccbd201a6b95beb9c1e2d82560d76c5e" Mar 13 12:38:22.857755 master-0 kubenswrapper[7518]: I0313 12:38:22.857494 7518 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t5zs6\" (UniqueName: \"kubernetes.io/projected/3ffc8a7e-5a23-4600-bfbe-c723501fa8cd-kube-api-access-t5zs6\") pod \"route-controller-manager-784b8dc7f8-4czh5\" (UID: \"3ffc8a7e-5a23-4600-bfbe-c723501fa8cd\") " pod="openshift-route-controller-manager/route-controller-manager-784b8dc7f8-4czh5" Mar 13 12:38:22.859240 master-0 kubenswrapper[7518]: I0313 12:38:22.858973 7518 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-f4cd854d4-4p7j6"] Mar 13 12:38:22.865414 master-0 kubenswrapper[7518]: I0313 12:38:22.865364 7518 kubelet.go:2431] "SyncLoop REMOVE" 
source="api" pods=["openshift-controller-manager/controller-manager-f4cd854d4-4p7j6"] Mar 13 12:38:22.900165 master-0 kubenswrapper[7518]: I0313 12:38:22.900072 7518 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-b799f66dc-95b9f"] Mar 13 12:38:22.910437 master-0 kubenswrapper[7518]: I0313 12:38:22.910385 7518 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-b799f66dc-95b9f"] Mar 13 12:38:22.914231 master-0 kubenswrapper[7518]: I0313 12:38:22.914186 7518 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-784b8dc7f8-4czh5" Mar 13 12:38:23.573409 master-0 kubenswrapper[7518]: I0313 12:38:23.572906 7518 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-784b8dc7f8-4czh5"] Mar 13 12:38:23.607987 master-0 kubenswrapper[7518]: I0313 12:38:23.607055 7518 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="654bcdbb-82a6-4927-acb4-cd6f1d6ccc9e" path="/var/lib/kubelet/pods/654bcdbb-82a6-4927-acb4-cd6f1d6ccc9e/volumes" Mar 13 12:38:23.607987 master-0 kubenswrapper[7518]: I0313 12:38:23.607892 7518 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6606f89b-0e8d-4b65-8642-ff84d93df419" path="/var/lib/kubelet/pods/6606f89b-0e8d-4b65-8642-ff84d93df419/volumes" Mar 13 12:38:23.803682 master-0 kubenswrapper[7518]: I0313 12:38:23.803632 7518 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-784b8dc7f8-4czh5" event={"ID":"3ffc8a7e-5a23-4600-bfbe-c723501fa8cd","Type":"ContainerStarted","Data":"235ac172fba643a3622b4550bc85bdd02f44fcf97f67a04413c514187d6799f5"} Mar 13 12:38:23.804899 master-0 kubenswrapper[7518]: I0313 12:38:23.804854 7518 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-dns/dns-default-m7k6m" event={"ID":"ef42b65e-2d92-46ac-baaf-30e213787781","Type":"ContainerStarted","Data":"3d7efe2ba2d011246f9bc9dbf76ac20b28ccd8d17640993d5908d371a8ebfe74"} Mar 13 12:38:23.805758 master-0 kubenswrapper[7518]: I0313 12:38:23.805285 7518 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-dns/dns-default-m7k6m" Mar 13 12:38:23.814162 master-0 kubenswrapper[7518]: I0313 12:38:23.814101 7518 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-1-master-0" event={"ID":"3828446d-a3e3-412f-a0e7-7347b5de523a","Type":"ContainerStarted","Data":"aa6c714b8707274c998afed0944ecc8600d7bc24a6b08415b6cbef112b436b47"} Mar 13 12:38:23.833225 master-0 kubenswrapper[7518]: I0313 12:38:23.831014 7518 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/dns-default-m7k6m" podStartSLOduration=5.684621042 podStartE2EDuration="8.830990973s" podCreationTimestamp="2026-03-13 12:38:15 +0000 UTC" firstStartedPulling="2026-03-13 12:38:18.668979618 +0000 UTC m=+53.302048795" lastFinishedPulling="2026-03-13 12:38:21.815349529 +0000 UTC m=+56.448418726" observedRunningTime="2026-03-13 12:38:23.830427623 +0000 UTC m=+58.463496820" watchObservedRunningTime="2026-03-13 12:38:23.830990973 +0000 UTC m=+58.464060160" Mar 13 12:38:23.870582 master-0 kubenswrapper[7518]: I0313 12:38:23.870477 7518 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/installer-1-master-0" podStartSLOduration=2.87045893 podStartE2EDuration="2.87045893s" podCreationTimestamp="2026-03-13 12:38:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 12:38:23.869470554 +0000 UTC m=+58.502539741" watchObservedRunningTime="2026-03-13 12:38:23.87045893 +0000 UTC m=+58.503528137" Mar 13 12:38:24.660692 master-0 kubenswrapper[7518]: I0313 
12:38:24.659678 7518 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-8b998ff89-g8rgp"] Mar 13 12:38:24.660692 master-0 kubenswrapper[7518]: E0313 12:38:24.659865 7518 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6606f89b-0e8d-4b65-8642-ff84d93df419" containerName="controller-manager" Mar 13 12:38:24.660692 master-0 kubenswrapper[7518]: I0313 12:38:24.659879 7518 state_mem.go:107] "Deleted CPUSet assignment" podUID="6606f89b-0e8d-4b65-8642-ff84d93df419" containerName="controller-manager" Mar 13 12:38:24.660692 master-0 kubenswrapper[7518]: I0313 12:38:24.659980 7518 memory_manager.go:354] "RemoveStaleState removing state" podUID="6606f89b-0e8d-4b65-8642-ff84d93df419" containerName="controller-manager" Mar 13 12:38:24.660692 master-0 kubenswrapper[7518]: I0313 12:38:24.660348 7518 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-8b998ff89-g8rgp" Mar 13 12:38:24.664186 master-0 kubenswrapper[7518]: I0313 12:38:24.664122 7518 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Mar 13 12:38:24.666342 master-0 kubenswrapper[7518]: I0313 12:38:24.665881 7518 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Mar 13 12:38:24.671190 master-0 kubenswrapper[7518]: I0313 12:38:24.671055 7518 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Mar 13 12:38:24.671190 master-0 kubenswrapper[7518]: I0313 12:38:24.671095 7518 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Mar 13 12:38:24.677631 master-0 kubenswrapper[7518]: I0313 12:38:24.677446 7518 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Mar 13 12:38:24.678643 master-0 
kubenswrapper[7518]: I0313 12:38:24.677838 7518 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Mar 13 12:38:24.686378 master-0 kubenswrapper[7518]: I0313 12:38:24.686316 7518 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-8b998ff89-g8rgp"] Mar 13 12:38:24.824695 master-0 kubenswrapper[7518]: I0313 12:38:24.824646 7518 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-784b8dc7f8-4czh5" event={"ID":"3ffc8a7e-5a23-4600-bfbe-c723501fa8cd","Type":"ContainerStarted","Data":"1bcda9a5c5fb5278bd2cbc52137bb8493c2a066cfbf0a4cfef7a0c96ad56b754"} Mar 13 12:38:24.848029 master-0 kubenswrapper[7518]: I0313 12:38:24.847941 7518 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b21ecc52-8c8f-43de-84bb-13bd8eb305b6-config\") pod \"controller-manager-8b998ff89-g8rgp\" (UID: \"b21ecc52-8c8f-43de-84bb-13bd8eb305b6\") " pod="openshift-controller-manager/controller-manager-8b998ff89-g8rgp" Mar 13 12:38:24.848328 master-0 kubenswrapper[7518]: I0313 12:38:24.848047 7518 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b21ecc52-8c8f-43de-84bb-13bd8eb305b6-client-ca\") pod \"controller-manager-8b998ff89-g8rgp\" (UID: \"b21ecc52-8c8f-43de-84bb-13bd8eb305b6\") " pod="openshift-controller-manager/controller-manager-8b998ff89-g8rgp" Mar 13 12:38:24.848328 master-0 kubenswrapper[7518]: I0313 12:38:24.848076 7518 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b21ecc52-8c8f-43de-84bb-13bd8eb305b6-serving-cert\") pod \"controller-manager-8b998ff89-g8rgp\" (UID: \"b21ecc52-8c8f-43de-84bb-13bd8eb305b6\") " 
pod="openshift-controller-manager/controller-manager-8b998ff89-g8rgp" Mar 13 12:38:24.848632 master-0 kubenswrapper[7518]: I0313 12:38:24.848588 7518 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tgzfd\" (UniqueName: \"kubernetes.io/projected/b21ecc52-8c8f-43de-84bb-13bd8eb305b6-kube-api-access-tgzfd\") pod \"controller-manager-8b998ff89-g8rgp\" (UID: \"b21ecc52-8c8f-43de-84bb-13bd8eb305b6\") " pod="openshift-controller-manager/controller-manager-8b998ff89-g8rgp" Mar 13 12:38:24.848706 master-0 kubenswrapper[7518]: I0313 12:38:24.848639 7518 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/b21ecc52-8c8f-43de-84bb-13bd8eb305b6-proxy-ca-bundles\") pod \"controller-manager-8b998ff89-g8rgp\" (UID: \"b21ecc52-8c8f-43de-84bb-13bd8eb305b6\") " pod="openshift-controller-manager/controller-manager-8b998ff89-g8rgp" Mar 13 12:38:24.860463 master-0 kubenswrapper[7518]: I0313 12:38:24.860391 7518 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-784b8dc7f8-4czh5" podStartSLOduration=4.860374821 podStartE2EDuration="4.860374821s" podCreationTimestamp="2026-03-13 12:38:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 12:38:24.859512919 +0000 UTC m=+59.492582126" watchObservedRunningTime="2026-03-13 12:38:24.860374821 +0000 UTC m=+59.493444008" Mar 13 12:38:24.949430 master-0 kubenswrapper[7518]: I0313 12:38:24.949302 7518 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b21ecc52-8c8f-43de-84bb-13bd8eb305b6-config\") pod \"controller-manager-8b998ff89-g8rgp\" (UID: \"b21ecc52-8c8f-43de-84bb-13bd8eb305b6\") " 
pod="openshift-controller-manager/controller-manager-8b998ff89-g8rgp" Mar 13 12:38:24.949430 master-0 kubenswrapper[7518]: I0313 12:38:24.949390 7518 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b21ecc52-8c8f-43de-84bb-13bd8eb305b6-client-ca\") pod \"controller-manager-8b998ff89-g8rgp\" (UID: \"b21ecc52-8c8f-43de-84bb-13bd8eb305b6\") " pod="openshift-controller-manager/controller-manager-8b998ff89-g8rgp" Mar 13 12:38:24.949430 master-0 kubenswrapper[7518]: I0313 12:38:24.949428 7518 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b21ecc52-8c8f-43de-84bb-13bd8eb305b6-serving-cert\") pod \"controller-manager-8b998ff89-g8rgp\" (UID: \"b21ecc52-8c8f-43de-84bb-13bd8eb305b6\") " pod="openshift-controller-manager/controller-manager-8b998ff89-g8rgp" Mar 13 12:38:24.949679 master-0 kubenswrapper[7518]: I0313 12:38:24.949468 7518 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tgzfd\" (UniqueName: \"kubernetes.io/projected/b21ecc52-8c8f-43de-84bb-13bd8eb305b6-kube-api-access-tgzfd\") pod \"controller-manager-8b998ff89-g8rgp\" (UID: \"b21ecc52-8c8f-43de-84bb-13bd8eb305b6\") " pod="openshift-controller-manager/controller-manager-8b998ff89-g8rgp" Mar 13 12:38:24.949986 master-0 kubenswrapper[7518]: I0313 12:38:24.949933 7518 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/b21ecc52-8c8f-43de-84bb-13bd8eb305b6-proxy-ca-bundles\") pod \"controller-manager-8b998ff89-g8rgp\" (UID: \"b21ecc52-8c8f-43de-84bb-13bd8eb305b6\") " pod="openshift-controller-manager/controller-manager-8b998ff89-g8rgp" Mar 13 12:38:24.950450 master-0 kubenswrapper[7518]: I0313 12:38:24.950421 7518 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: 
\"kubernetes.io/configmap/b21ecc52-8c8f-43de-84bb-13bd8eb305b6-client-ca\") pod \"controller-manager-8b998ff89-g8rgp\" (UID: \"b21ecc52-8c8f-43de-84bb-13bd8eb305b6\") " pod="openshift-controller-manager/controller-manager-8b998ff89-g8rgp" Mar 13 12:38:24.950858 master-0 kubenswrapper[7518]: I0313 12:38:24.950830 7518 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b21ecc52-8c8f-43de-84bb-13bd8eb305b6-config\") pod \"controller-manager-8b998ff89-g8rgp\" (UID: \"b21ecc52-8c8f-43de-84bb-13bd8eb305b6\") " pod="openshift-controller-manager/controller-manager-8b998ff89-g8rgp" Mar 13 12:38:24.951380 master-0 kubenswrapper[7518]: I0313 12:38:24.951346 7518 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/b21ecc52-8c8f-43de-84bb-13bd8eb305b6-proxy-ca-bundles\") pod \"controller-manager-8b998ff89-g8rgp\" (UID: \"b21ecc52-8c8f-43de-84bb-13bd8eb305b6\") " pod="openshift-controller-manager/controller-manager-8b998ff89-g8rgp" Mar 13 12:38:24.953207 master-0 kubenswrapper[7518]: I0313 12:38:24.953185 7518 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b21ecc52-8c8f-43de-84bb-13bd8eb305b6-serving-cert\") pod \"controller-manager-8b998ff89-g8rgp\" (UID: \"b21ecc52-8c8f-43de-84bb-13bd8eb305b6\") " pod="openshift-controller-manager/controller-manager-8b998ff89-g8rgp" Mar 13 12:38:24.965400 master-0 kubenswrapper[7518]: I0313 12:38:24.965374 7518 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tgzfd\" (UniqueName: \"kubernetes.io/projected/b21ecc52-8c8f-43de-84bb-13bd8eb305b6-kube-api-access-tgzfd\") pod \"controller-manager-8b998ff89-g8rgp\" (UID: \"b21ecc52-8c8f-43de-84bb-13bd8eb305b6\") " pod="openshift-controller-manager/controller-manager-8b998ff89-g8rgp" Mar 13 12:38:24.986252 master-0 kubenswrapper[7518]: I0313 
12:38:24.986194 7518 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-8b998ff89-g8rgp" Mar 13 12:38:25.389323 master-0 kubenswrapper[7518]: I0313 12:38:25.389284 7518 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-8b998ff89-g8rgp"] Mar 13 12:38:25.832161 master-0 kubenswrapper[7518]: I0313 12:38:25.832057 7518 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-8b998ff89-g8rgp" event={"ID":"b21ecc52-8c8f-43de-84bb-13bd8eb305b6","Type":"ContainerStarted","Data":"5fed8a223c8bd85462011864e93e488b62bf27c2022fb6a3984d126a69212081"} Mar 13 12:38:25.832161 master-0 kubenswrapper[7518]: I0313 12:38:25.832120 7518 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-8b998ff89-g8rgp" event={"ID":"b21ecc52-8c8f-43de-84bb-13bd8eb305b6","Type":"ContainerStarted","Data":"bb145fda395c4e8c32ad5949ac0c69d58287605de6532ecc4ce10c6d8c224e53"} Mar 13 12:38:25.832666 master-0 kubenswrapper[7518]: I0313 12:38:25.832467 7518 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-784b8dc7f8-4czh5" Mar 13 12:38:25.838955 master-0 kubenswrapper[7518]: I0313 12:38:25.838916 7518 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-784b8dc7f8-4czh5" Mar 13 12:38:25.843701 master-0 kubenswrapper[7518]: I0313 12:38:25.843674 7518 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-apiserver/apiserver-844bc54c88-vznst" Mar 13 12:38:25.850306 master-0 kubenswrapper[7518]: I0313 12:38:25.850243 7518 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-apiserver/apiserver-844bc54c88-vznst" Mar 13 12:38:25.891064 master-0 kubenswrapper[7518]: I0313 
12:38:25.890977 7518 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-8b998ff89-g8rgp" podStartSLOduration=5.8909556720000005 podStartE2EDuration="5.890955672s" podCreationTimestamp="2026-03-13 12:38:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 12:38:25.887243936 +0000 UTC m=+60.520313143" watchObservedRunningTime="2026-03-13 12:38:25.890955672 +0000 UTC m=+60.524024859" Mar 13 12:38:26.837613 master-0 kubenswrapper[7518]: I0313 12:38:26.837544 7518 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-8b998ff89-g8rgp" Mar 13 12:38:26.851173 master-0 kubenswrapper[7518]: I0313 12:38:26.851105 7518 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-8b998ff89-g8rgp" Mar 13 12:38:27.351007 master-0 kubenswrapper[7518]: I0313 12:38:27.350960 7518 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-cluster-version/cluster-version-operator-745944c6b7-mbjxt"] Mar 13 12:38:27.351271 master-0 kubenswrapper[7518]: I0313 12:38:27.351243 7518 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-cluster-version/cluster-version-operator-745944c6b7-mbjxt" podUID="f39d7f76-0075-44c3-9101-eb2607cb176a" containerName="cluster-version-operator" containerID="cri-o://596641d0ab55b5854707a6848930e7fad02440d9e89e0be41c608e76df02736c" gracePeriod=130 Mar 13 12:38:27.970256 master-0 kubenswrapper[7518]: I0313 12:38:27.969646 7518 generic.go:334] "Generic (PLEG): container finished" podID="f39d7f76-0075-44c3-9101-eb2607cb176a" containerID="596641d0ab55b5854707a6848930e7fad02440d9e89e0be41c608e76df02736c" exitCode=0 Mar 13 12:38:27.970256 master-0 kubenswrapper[7518]: I0313 12:38:27.969972 7518 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-cluster-version/cluster-version-operator-745944c6b7-mbjxt" event={"ID":"f39d7f76-0075-44c3-9101-eb2607cb176a","Type":"ContainerDied","Data":"596641d0ab55b5854707a6848930e7fad02440d9e89e0be41c608e76df02736c"} Mar 13 12:38:28.284846 master-0 kubenswrapper[7518]: I0313 12:38:28.284801 7518 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-745944c6b7-mbjxt" Mar 13 12:38:28.336220 master-0 kubenswrapper[7518]: I0313 12:38:28.336169 7518 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/f39d7f76-0075-44c3-9101-eb2607cb176a-etc-ssl-certs\") pod \"f39d7f76-0075-44c3-9101-eb2607cb176a\" (UID: \"f39d7f76-0075-44c3-9101-eb2607cb176a\") " Mar 13 12:38:28.336440 master-0 kubenswrapper[7518]: I0313 12:38:28.336238 7518 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f39d7f76-0075-44c3-9101-eb2607cb176a-serving-cert\") pod \"f39d7f76-0075-44c3-9101-eb2607cb176a\" (UID: \"f39d7f76-0075-44c3-9101-eb2607cb176a\") " Mar 13 12:38:28.336440 master-0 kubenswrapper[7518]: I0313 12:38:28.336271 7518 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f39d7f76-0075-44c3-9101-eb2607cb176a-kube-api-access\") pod \"f39d7f76-0075-44c3-9101-eb2607cb176a\" (UID: \"f39d7f76-0075-44c3-9101-eb2607cb176a\") " Mar 13 12:38:28.336440 master-0 kubenswrapper[7518]: I0313 12:38:28.336273 7518 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f39d7f76-0075-44c3-9101-eb2607cb176a-etc-ssl-certs" (OuterVolumeSpecName: "etc-ssl-certs") pod "f39d7f76-0075-44c3-9101-eb2607cb176a" (UID: "f39d7f76-0075-44c3-9101-eb2607cb176a"). InnerVolumeSpecName "etc-ssl-certs". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 13 12:38:28.336440 master-0 kubenswrapper[7518]: I0313 12:38:28.336307 7518 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/f39d7f76-0075-44c3-9101-eb2607cb176a-etc-cvo-updatepayloads\") pod \"f39d7f76-0075-44c3-9101-eb2607cb176a\" (UID: \"f39d7f76-0075-44c3-9101-eb2607cb176a\") " Mar 13 12:38:28.336440 master-0 kubenswrapper[7518]: I0313 12:38:28.336370 7518 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/f39d7f76-0075-44c3-9101-eb2607cb176a-service-ca\") pod \"f39d7f76-0075-44c3-9101-eb2607cb176a\" (UID: \"f39d7f76-0075-44c3-9101-eb2607cb176a\") " Mar 13 12:38:28.336611 master-0 kubenswrapper[7518]: I0313 12:38:28.336592 7518 reconciler_common.go:293] "Volume detached for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/f39d7f76-0075-44c3-9101-eb2607cb176a-etc-ssl-certs\") on node \"master-0\" DevicePath \"\"" Mar 13 12:38:28.336827 master-0 kubenswrapper[7518]: I0313 12:38:28.336793 7518 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f39d7f76-0075-44c3-9101-eb2607cb176a-service-ca" (OuterVolumeSpecName: "service-ca") pod "f39d7f76-0075-44c3-9101-eb2607cb176a" (UID: "f39d7f76-0075-44c3-9101-eb2607cb176a"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 13 12:38:28.336884 master-0 kubenswrapper[7518]: I0313 12:38:28.336426 7518 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f39d7f76-0075-44c3-9101-eb2607cb176a-etc-cvo-updatepayloads" (OuterVolumeSpecName: "etc-cvo-updatepayloads") pod "f39d7f76-0075-44c3-9101-eb2607cb176a" (UID: "f39d7f76-0075-44c3-9101-eb2607cb176a"). InnerVolumeSpecName "etc-cvo-updatepayloads". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 13 12:38:28.340516 master-0 kubenswrapper[7518]: I0313 12:38:28.340462 7518 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f39d7f76-0075-44c3-9101-eb2607cb176a-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "f39d7f76-0075-44c3-9101-eb2607cb176a" (UID: "f39d7f76-0075-44c3-9101-eb2607cb176a"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 12:38:28.340822 master-0 kubenswrapper[7518]: I0313 12:38:28.340780 7518 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f39d7f76-0075-44c3-9101-eb2607cb176a-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "f39d7f76-0075-44c3-9101-eb2607cb176a" (UID: "f39d7f76-0075-44c3-9101-eb2607cb176a"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 12:38:28.437761 master-0 kubenswrapper[7518]: I0313 12:38:28.437692 7518 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f39d7f76-0075-44c3-9101-eb2607cb176a-serving-cert\") on node \"master-0\" DevicePath \"\"" Mar 13 12:38:28.437761 master-0 kubenswrapper[7518]: I0313 12:38:28.437742 7518 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f39d7f76-0075-44c3-9101-eb2607cb176a-kube-api-access\") on node \"master-0\" DevicePath \"\"" Mar 13 12:38:28.437761 master-0 kubenswrapper[7518]: I0313 12:38:28.437761 7518 reconciler_common.go:293] "Volume detached for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/f39d7f76-0075-44c3-9101-eb2607cb176a-etc-cvo-updatepayloads\") on node \"master-0\" DevicePath \"\"" Mar 13 12:38:28.438121 master-0 kubenswrapper[7518]: I0313 12:38:28.437793 7518 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: 
\"kubernetes.io/configmap/f39d7f76-0075-44c3-9101-eb2607cb176a-service-ca\") on node \"master-0\" DevicePath \"\"" Mar 13 12:38:28.978566 master-0 kubenswrapper[7518]: I0313 12:38:28.978536 7518 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-745944c6b7-mbjxt" Mar 13 12:38:28.979210 master-0 kubenswrapper[7518]: I0313 12:38:28.979158 7518 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-745944c6b7-mbjxt" event={"ID":"f39d7f76-0075-44c3-9101-eb2607cb176a","Type":"ContainerDied","Data":"230dca71af5081e9c7ea82712c0cc643f5676c9c599e1fa82c984048ce54a082"} Mar 13 12:38:28.979261 master-0 kubenswrapper[7518]: I0313 12:38:28.979236 7518 scope.go:117] "RemoveContainer" containerID="596641d0ab55b5854707a6848930e7fad02440d9e89e0be41c608e76df02736c" Mar 13 12:38:29.042885 master-0 kubenswrapper[7518]: I0313 12:38:29.042845 7518 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-cluster-version/cluster-version-operator-745944c6b7-mbjxt"] Mar 13 12:38:29.050747 master-0 kubenswrapper[7518]: I0313 12:38:29.050687 7518 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-cluster-version/cluster-version-operator-745944c6b7-mbjxt"] Mar 13 12:38:29.108231 master-0 kubenswrapper[7518]: I0313 12:38:29.108189 7518 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-version/cluster-version-operator-8c9c967c7-98tv2"] Mar 13 12:38:29.108711 master-0 kubenswrapper[7518]: E0313 12:38:29.108689 7518 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f39d7f76-0075-44c3-9101-eb2607cb176a" containerName="cluster-version-operator" Mar 13 12:38:29.108851 master-0 kubenswrapper[7518]: I0313 12:38:29.108837 7518 state_mem.go:107] "Deleted CPUSet assignment" podUID="f39d7f76-0075-44c3-9101-eb2607cb176a" containerName="cluster-version-operator" Mar 13 12:38:29.109036 master-0 kubenswrapper[7518]: I0313 
12:38:29.109021 7518 memory_manager.go:354] "RemoveStaleState removing state" podUID="f39d7f76-0075-44c3-9101-eb2607cb176a" containerName="cluster-version-operator" Mar 13 12:38:29.109559 master-0 kubenswrapper[7518]: I0313 12:38:29.109541 7518 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-8c9c967c7-98tv2" Mar 13 12:38:29.112957 master-0 kubenswrapper[7518]: I0313 12:38:29.112924 7518 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Mar 13 12:38:29.113640 master-0 kubenswrapper[7518]: I0313 12:38:29.113116 7518 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Mar 13 12:38:29.116599 master-0 kubenswrapper[7518]: I0313 12:38:29.116558 7518 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Mar 13 12:38:29.250909 master-0 kubenswrapper[7518]: I0313 12:38:29.250802 7518 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/676b054a-e76f-425d-a6ff-3f1bea8b523e-serving-cert\") pod \"cluster-version-operator-8c9c967c7-98tv2\" (UID: \"676b054a-e76f-425d-a6ff-3f1bea8b523e\") " pod="openshift-cluster-version/cluster-version-operator-8c9c967c7-98tv2" Mar 13 12:38:29.251422 master-0 kubenswrapper[7518]: I0313 12:38:29.251399 7518 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/676b054a-e76f-425d-a6ff-3f1bea8b523e-service-ca\") pod \"cluster-version-operator-8c9c967c7-98tv2\" (UID: \"676b054a-e76f-425d-a6ff-3f1bea8b523e\") " pod="openshift-cluster-version/cluster-version-operator-8c9c967c7-98tv2" Mar 13 12:38:29.251596 master-0 kubenswrapper[7518]: I0313 12:38:29.251569 7518 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/676b054a-e76f-425d-a6ff-3f1bea8b523e-etc-cvo-updatepayloads\") pod \"cluster-version-operator-8c9c967c7-98tv2\" (UID: \"676b054a-e76f-425d-a6ff-3f1bea8b523e\") " pod="openshift-cluster-version/cluster-version-operator-8c9c967c7-98tv2" Mar 13 12:38:29.251801 master-0 kubenswrapper[7518]: I0313 12:38:29.251728 7518 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/676b054a-e76f-425d-a6ff-3f1bea8b523e-etc-ssl-certs\") pod \"cluster-version-operator-8c9c967c7-98tv2\" (UID: \"676b054a-e76f-425d-a6ff-3f1bea8b523e\") " pod="openshift-cluster-version/cluster-version-operator-8c9c967c7-98tv2" Mar 13 12:38:29.251918 master-0 kubenswrapper[7518]: I0313 12:38:29.251901 7518 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/676b054a-e76f-425d-a6ff-3f1bea8b523e-kube-api-access\") pod \"cluster-version-operator-8c9c967c7-98tv2\" (UID: \"676b054a-e76f-425d-a6ff-3f1bea8b523e\") " pod="openshift-cluster-version/cluster-version-operator-8c9c967c7-98tv2" Mar 13 12:38:29.353176 master-0 kubenswrapper[7518]: I0313 12:38:29.353089 7518 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/676b054a-e76f-425d-a6ff-3f1bea8b523e-service-ca\") pod \"cluster-version-operator-8c9c967c7-98tv2\" (UID: \"676b054a-e76f-425d-a6ff-3f1bea8b523e\") " pod="openshift-cluster-version/cluster-version-operator-8c9c967c7-98tv2" Mar 13 12:38:29.354200 master-0 kubenswrapper[7518]: I0313 12:38:29.354176 7518 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: 
\"kubernetes.io/configmap/676b054a-e76f-425d-a6ff-3f1bea8b523e-service-ca\") pod \"cluster-version-operator-8c9c967c7-98tv2\" (UID: \"676b054a-e76f-425d-a6ff-3f1bea8b523e\") " pod="openshift-cluster-version/cluster-version-operator-8c9c967c7-98tv2" Mar 13 12:38:29.354388 master-0 kubenswrapper[7518]: I0313 12:38:29.354321 7518 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/676b054a-e76f-425d-a6ff-3f1bea8b523e-etc-cvo-updatepayloads\") pod \"cluster-version-operator-8c9c967c7-98tv2\" (UID: \"676b054a-e76f-425d-a6ff-3f1bea8b523e\") " pod="openshift-cluster-version/cluster-version-operator-8c9c967c7-98tv2" Mar 13 12:38:29.354513 master-0 kubenswrapper[7518]: I0313 12:38:29.354498 7518 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/676b054a-e76f-425d-a6ff-3f1bea8b523e-etc-ssl-certs\") pod \"cluster-version-operator-8c9c967c7-98tv2\" (UID: \"676b054a-e76f-425d-a6ff-3f1bea8b523e\") " pod="openshift-cluster-version/cluster-version-operator-8c9c967c7-98tv2" Mar 13 12:38:29.354595 master-0 kubenswrapper[7518]: I0313 12:38:29.354348 7518 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/676b054a-e76f-425d-a6ff-3f1bea8b523e-etc-cvo-updatepayloads\") pod \"cluster-version-operator-8c9c967c7-98tv2\" (UID: \"676b054a-e76f-425d-a6ff-3f1bea8b523e\") " pod="openshift-cluster-version/cluster-version-operator-8c9c967c7-98tv2" Mar 13 12:38:29.354679 master-0 kubenswrapper[7518]: I0313 12:38:29.354655 7518 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/676b054a-e76f-425d-a6ff-3f1bea8b523e-etc-ssl-certs\") pod \"cluster-version-operator-8c9c967c7-98tv2\" (UID: \"676b054a-e76f-425d-a6ff-3f1bea8b523e\") " 
pod="openshift-cluster-version/cluster-version-operator-8c9c967c7-98tv2" Mar 13 12:38:29.354679 master-0 kubenswrapper[7518]: I0313 12:38:29.354662 7518 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/676b054a-e76f-425d-a6ff-3f1bea8b523e-kube-api-access\") pod \"cluster-version-operator-8c9c967c7-98tv2\" (UID: \"676b054a-e76f-425d-a6ff-3f1bea8b523e\") " pod="openshift-cluster-version/cluster-version-operator-8c9c967c7-98tv2" Mar 13 12:38:29.355034 master-0 kubenswrapper[7518]: I0313 12:38:29.355017 7518 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/676b054a-e76f-425d-a6ff-3f1bea8b523e-serving-cert\") pod \"cluster-version-operator-8c9c967c7-98tv2\" (UID: \"676b054a-e76f-425d-a6ff-3f1bea8b523e\") " pod="openshift-cluster-version/cluster-version-operator-8c9c967c7-98tv2" Mar 13 12:38:29.359673 master-0 kubenswrapper[7518]: I0313 12:38:29.359648 7518 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/676b054a-e76f-425d-a6ff-3f1bea8b523e-serving-cert\") pod \"cluster-version-operator-8c9c967c7-98tv2\" (UID: \"676b054a-e76f-425d-a6ff-3f1bea8b523e\") " pod="openshift-cluster-version/cluster-version-operator-8c9c967c7-98tv2" Mar 13 12:38:29.382439 master-0 kubenswrapper[7518]: I0313 12:38:29.382403 7518 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/676b054a-e76f-425d-a6ff-3f1bea8b523e-kube-api-access\") pod \"cluster-version-operator-8c9c967c7-98tv2\" (UID: \"676b054a-e76f-425d-a6ff-3f1bea8b523e\") " pod="openshift-cluster-version/cluster-version-operator-8c9c967c7-98tv2" Mar 13 12:38:29.428820 master-0 kubenswrapper[7518]: I0313 12:38:29.428760 7518 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-8c9c967c7-98tv2" Mar 13 12:38:29.445107 master-0 kubenswrapper[7518]: W0313 12:38:29.445060 7518 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod676b054a_e76f_425d_a6ff_3f1bea8b523e.slice/crio-d841a86b661f54cc41ca6d7f060def7405c52e9adcc79d02bb6a1a6bb94e4f40 WatchSource:0}: Error finding container d841a86b661f54cc41ca6d7f060def7405c52e9adcc79d02bb6a1a6bb94e4f40: Status 404 returned error can't find the container with id d841a86b661f54cc41ca6d7f060def7405c52e9adcc79d02bb6a1a6bb94e4f40 Mar 13 12:38:29.608636 master-0 kubenswrapper[7518]: I0313 12:38:29.608575 7518 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f39d7f76-0075-44c3-9101-eb2607cb176a" path="/var/lib/kubelet/pods/f39d7f76-0075-44c3-9101-eb2607cb176a/volumes" Mar 13 12:38:29.984211 master-0 kubenswrapper[7518]: I0313 12:38:29.984057 7518 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-8c9c967c7-98tv2" event={"ID":"676b054a-e76f-425d-a6ff-3f1bea8b523e","Type":"ContainerStarted","Data":"01758a85bcc236e4926066681b9aa0286d195458c1cddadcb630f791db70a4ff"} Mar 13 12:38:29.984211 master-0 kubenswrapper[7518]: I0313 12:38:29.984160 7518 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-8c9c967c7-98tv2" event={"ID":"676b054a-e76f-425d-a6ff-3f1bea8b523e","Type":"ContainerStarted","Data":"d841a86b661f54cc41ca6d7f060def7405c52e9adcc79d02bb6a1a6bb94e4f40"} Mar 13 12:38:30.157691 master-0 kubenswrapper[7518]: I0313 12:38:30.157521 7518 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-version/cluster-version-operator-8c9c967c7-98tv2" podStartSLOduration=1.157503584 podStartE2EDuration="1.157503584s" podCreationTimestamp="2026-03-13 12:38:29 +0000 UTC" firstStartedPulling="0001-01-01 
00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 12:38:30.156614401 +0000 UTC m=+64.789683578" watchObservedRunningTime="2026-03-13 12:38:30.157503584 +0000 UTC m=+64.790572771" Mar 13 12:38:30.482829 master-0 kubenswrapper[7518]: I0313 12:38:30.482769 7518 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler_installer-1-master-0_f951e49f-91f7-42d3-bc63-8117cff68d7a/installer/0.log" Mar 13 12:38:30.483051 master-0 kubenswrapper[7518]: I0313 12:38:30.482876 7518 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/installer-1-master-0" Mar 13 12:38:30.571679 master-0 kubenswrapper[7518]: I0313 12:38:30.571599 7518 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/d5a19b80-d488-46d3-a4a8-0b80361077e1-srv-cert\") pod \"olm-operator-d64cfc9db-rfqb9\" (UID: \"d5a19b80-d488-46d3-a4a8-0b80361077e1\") " pod="openshift-operator-lifecycle-manager/olm-operator-d64cfc9db-rfqb9" Mar 13 12:38:30.571679 master-0 kubenswrapper[7518]: I0313 12:38:30.571663 7518 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/10944f9c-8ce9-44e6-9c36-a0ea19d8cae3-srv-cert\") pod \"catalog-operator-7d9c49f57b-tlnkd\" (UID: \"10944f9c-8ce9-44e6-9c36-a0ea19d8cae3\") " pod="openshift-operator-lifecycle-manager/catalog-operator-7d9c49f57b-tlnkd" Mar 13 12:38:30.571902 master-0 kubenswrapper[7518]: I0313 12:38:30.571703 7518 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3d653e1a-5903-4a02-9357-df145f028c0d-package-server-manager-serving-cert\") pod \"package-server-manager-854648ff6d-669qk\" (UID: \"3d653e1a-5903-4a02-9357-df145f028c0d\") " 
pod="openshift-operator-lifecycle-manager/package-server-manager-854648ff6d-669qk" Mar 13 12:38:30.571902 master-0 kubenswrapper[7518]: I0313 12:38:30.571781 7518 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/29b6aa89-0416-4595-9deb-10b290521d86-metrics-certs\") pod \"network-metrics-daemon-r9lmb\" (UID: \"29b6aa89-0416-4595-9deb-10b290521d86\") " pod="openshift-multus/network-metrics-daemon-r9lmb" Mar 13 12:38:30.571902 master-0 kubenswrapper[7518]: I0313 12:38:30.571807 7518 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/604456a0-4997-43bc-87ef-283a002111fe-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-674cbfbd9d-zwtdz\" (UID: \"604456a0-4997-43bc-87ef-283a002111fe\") " pod="openshift-monitoring/cluster-monitoring-operator-674cbfbd9d-zwtdz" Mar 13 12:38:30.575120 master-0 kubenswrapper[7518]: I0313 12:38:30.575077 7518 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/604456a0-4997-43bc-87ef-283a002111fe-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-674cbfbd9d-zwtdz\" (UID: \"604456a0-4997-43bc-87ef-283a002111fe\") " pod="openshift-monitoring/cluster-monitoring-operator-674cbfbd9d-zwtdz" Mar 13 12:38:30.575205 master-0 kubenswrapper[7518]: I0313 12:38:30.575073 7518 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3d653e1a-5903-4a02-9357-df145f028c0d-package-server-manager-serving-cert\") pod \"package-server-manager-854648ff6d-669qk\" (UID: \"3d653e1a-5903-4a02-9357-df145f028c0d\") " pod="openshift-operator-lifecycle-manager/package-server-manager-854648ff6d-669qk" Mar 13 12:38:30.575445 master-0 kubenswrapper[7518]: I0313 12:38:30.575406 7518 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/d5a19b80-d488-46d3-a4a8-0b80361077e1-srv-cert\") pod \"olm-operator-d64cfc9db-rfqb9\" (UID: \"d5a19b80-d488-46d3-a4a8-0b80361077e1\") " pod="openshift-operator-lifecycle-manager/olm-operator-d64cfc9db-rfqb9" Mar 13 12:38:30.576315 master-0 kubenswrapper[7518]: I0313 12:38:30.576277 7518 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/29b6aa89-0416-4595-9deb-10b290521d86-metrics-certs\") pod \"network-metrics-daemon-r9lmb\" (UID: \"29b6aa89-0416-4595-9deb-10b290521d86\") " pod="openshift-multus/network-metrics-daemon-r9lmb" Mar 13 12:38:30.576456 master-0 kubenswrapper[7518]: I0313 12:38:30.576426 7518 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/10944f9c-8ce9-44e6-9c36-a0ea19d8cae3-srv-cert\") pod \"catalog-operator-7d9c49f57b-tlnkd\" (UID: \"10944f9c-8ce9-44e6-9c36-a0ea19d8cae3\") " pod="openshift-operator-lifecycle-manager/catalog-operator-7d9c49f57b-tlnkd" Mar 13 12:38:30.672576 master-0 kubenswrapper[7518]: I0313 12:38:30.672454 7518 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f951e49f-91f7-42d3-bc63-8117cff68d7a-kube-api-access\") pod \"f951e49f-91f7-42d3-bc63-8117cff68d7a\" (UID: \"f951e49f-91f7-42d3-bc63-8117cff68d7a\") " Mar 13 12:38:30.672766 master-0 kubenswrapper[7518]: I0313 12:38:30.672584 7518 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/f951e49f-91f7-42d3-bc63-8117cff68d7a-kubelet-dir\") pod \"f951e49f-91f7-42d3-bc63-8117cff68d7a\" (UID: \"f951e49f-91f7-42d3-bc63-8117cff68d7a\") " Mar 13 12:38:30.672766 master-0 kubenswrapper[7518]: I0313 12:38:30.672663 7518 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f951e49f-91f7-42d3-bc63-8117cff68d7a-var-lock\") pod \"f951e49f-91f7-42d3-bc63-8117cff68d7a\" (UID: \"f951e49f-91f7-42d3-bc63-8117cff68d7a\") " Mar 13 12:38:30.672766 master-0 kubenswrapper[7518]: I0313 12:38:30.672703 7518 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f951e49f-91f7-42d3-bc63-8117cff68d7a-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "f951e49f-91f7-42d3-bc63-8117cff68d7a" (UID: "f951e49f-91f7-42d3-bc63-8117cff68d7a"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 13 12:38:30.672922 master-0 kubenswrapper[7518]: I0313 12:38:30.672823 7518 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/f951e49f-91f7-42d3-bc63-8117cff68d7a-kubelet-dir\") on node \"master-0\" DevicePath \"\"" Mar 13 12:38:30.672922 master-0 kubenswrapper[7518]: I0313 12:38:30.672821 7518 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f951e49f-91f7-42d3-bc63-8117cff68d7a-var-lock" (OuterVolumeSpecName: "var-lock") pod "f951e49f-91f7-42d3-bc63-8117cff68d7a" (UID: "f951e49f-91f7-42d3-bc63-8117cff68d7a"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 13 12:38:30.675442 master-0 kubenswrapper[7518]: I0313 12:38:30.675390 7518 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f951e49f-91f7-42d3-bc63-8117cff68d7a-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "f951e49f-91f7-42d3-bc63-8117cff68d7a" (UID: "f951e49f-91f7-42d3-bc63-8117cff68d7a"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 12:38:30.772967 master-0 kubenswrapper[7518]: I0313 12:38:30.772895 7518 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-7d9c49f57b-tlnkd" Mar 13 12:38:30.773840 master-0 kubenswrapper[7518]: I0313 12:38:30.773724 7518 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f951e49f-91f7-42d3-bc63-8117cff68d7a-kube-api-access\") on node \"master-0\" DevicePath \"\"" Mar 13 12:38:30.773840 master-0 kubenswrapper[7518]: I0313 12:38:30.773775 7518 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f951e49f-91f7-42d3-bc63-8117cff68d7a-var-lock\") on node \"master-0\" DevicePath \"\"" Mar 13 12:38:30.776684 master-0 kubenswrapper[7518]: I0313 12:38:30.776642 7518 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-854648ff6d-669qk" Mar 13 12:38:30.777366 master-0 kubenswrapper[7518]: I0313 12:38:30.777032 7518 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-d64cfc9db-rfqb9" Mar 13 12:38:30.779620 master-0 kubenswrapper[7518]: I0313 12:38:30.779574 7518 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-r9lmb" Mar 13 12:38:30.817628 master-0 kubenswrapper[7518]: I0313 12:38:30.817573 7518 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/cluster-monitoring-operator-674cbfbd9d-zwtdz" Mar 13 12:38:31.005021 master-0 kubenswrapper[7518]: I0313 12:38:31.004975 7518 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler_installer-1-master-0_f951e49f-91f7-42d3-bc63-8117cff68d7a/installer/0.log" Mar 13 12:38:31.005516 master-0 kubenswrapper[7518]: I0313 12:38:31.005027 7518 generic.go:334] "Generic (PLEG): container finished" podID="f951e49f-91f7-42d3-bc63-8117cff68d7a" containerID="c7e76711c5edec7f8a2e0bbd4c766faceb828b179eb650bdec8d3d483da35ea8" exitCode=1 Mar 13 12:38:31.006409 master-0 kubenswrapper[7518]: I0313 12:38:31.005679 7518 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/installer-1-master-0" Mar 13 12:38:31.006409 master-0 kubenswrapper[7518]: I0313 12:38:31.005730 7518 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-1-master-0" event={"ID":"f951e49f-91f7-42d3-bc63-8117cff68d7a","Type":"ContainerDied","Data":"c7e76711c5edec7f8a2e0bbd4c766faceb828b179eb650bdec8d3d483da35ea8"} Mar 13 12:38:31.006409 master-0 kubenswrapper[7518]: I0313 12:38:31.005789 7518 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-1-master-0" event={"ID":"f951e49f-91f7-42d3-bc63-8117cff68d7a","Type":"ContainerDied","Data":"5fc697237b7f9115f1d02bf0edd32bcf0859ce8f5d08322c8dce418ded8e8e34"} Mar 13 12:38:31.006409 master-0 kubenswrapper[7518]: I0313 12:38:31.005812 7518 scope.go:117] "RemoveContainer" containerID="c7e76711c5edec7f8a2e0bbd4c766faceb828b179eb650bdec8d3d483da35ea8" Mar 13 12:38:31.037756 master-0 kubenswrapper[7518]: I0313 12:38:31.037673 7518 scope.go:117] "RemoveContainer" containerID="c7e76711c5edec7f8a2e0bbd4c766faceb828b179eb650bdec8d3d483da35ea8" Mar 13 12:38:31.040342 master-0 kubenswrapper[7518]: E0313 12:38:31.040299 7518 log.go:32] "ContainerStatus from runtime service failed" 
err="rpc error: code = NotFound desc = could not find container \"c7e76711c5edec7f8a2e0bbd4c766faceb828b179eb650bdec8d3d483da35ea8\": container with ID starting with c7e76711c5edec7f8a2e0bbd4c766faceb828b179eb650bdec8d3d483da35ea8 not found: ID does not exist" containerID="c7e76711c5edec7f8a2e0bbd4c766faceb828b179eb650bdec8d3d483da35ea8" Mar 13 12:38:31.040407 master-0 kubenswrapper[7518]: I0313 12:38:31.040357 7518 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c7e76711c5edec7f8a2e0bbd4c766faceb828b179eb650bdec8d3d483da35ea8"} err="failed to get container status \"c7e76711c5edec7f8a2e0bbd4c766faceb828b179eb650bdec8d3d483da35ea8\": rpc error: code = NotFound desc = could not find container \"c7e76711c5edec7f8a2e0bbd4c766faceb828b179eb650bdec8d3d483da35ea8\": container with ID starting with c7e76711c5edec7f8a2e0bbd4c766faceb828b179eb650bdec8d3d483da35ea8 not found: ID does not exist" Mar 13 12:38:31.149095 master-0 kubenswrapper[7518]: I0313 12:38:31.149038 7518 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-scheduler/installer-1-master-0"] Mar 13 12:38:31.168768 master-0 kubenswrapper[7518]: I0313 12:38:31.168594 7518 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-scheduler/installer-1-master-0"] Mar 13 12:38:31.335221 master-0 kubenswrapper[7518]: I0313 12:38:31.334415 7518 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-7d9c49f57b-tlnkd"] Mar 13 12:38:31.371222 master-0 kubenswrapper[7518]: I0313 12:38:31.371188 7518 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-d64cfc9db-rfqb9"] Mar 13 12:38:31.372052 master-0 kubenswrapper[7518]: I0313 12:38:31.371989 7518 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-r9lmb"] Mar 13 12:38:31.374059 master-0 kubenswrapper[7518]: I0313 12:38:31.374038 7518 
kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-854648ff6d-669qk"] Mar 13 12:38:31.480554 master-0 kubenswrapper[7518]: I0313 12:38:31.480222 7518 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/cluster-monitoring-operator-674cbfbd9d-zwtdz"] Mar 13 12:38:31.610631 master-0 kubenswrapper[7518]: W0313 12:38:31.609720 7518 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod604456a0_4997_43bc_87ef_283a002111fe.slice/crio-2e65250ae5f98234b34351e57ed90215912c9eb2d91f1f748ce0046b50854a52 WatchSource:0}: Error finding container 2e65250ae5f98234b34351e57ed90215912c9eb2d91f1f748ce0046b50854a52: Status 404 returned error can't find the container with id 2e65250ae5f98234b34351e57ed90215912c9eb2d91f1f748ce0046b50854a52 Mar 13 12:38:31.613906 master-0 kubenswrapper[7518]: I0313 12:38:31.613854 7518 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f951e49f-91f7-42d3-bc63-8117cff68d7a" path="/var/lib/kubelet/pods/f951e49f-91f7-42d3-bc63-8117cff68d7a/volumes" Mar 13 12:38:32.021156 master-0 kubenswrapper[7518]: I0313 12:38:32.021093 7518 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-854648ff6d-669qk" event={"ID":"3d653e1a-5903-4a02-9357-df145f028c0d","Type":"ContainerStarted","Data":"44396582a6f16f5943699470e51e4eeaf72ae2a09dcc99ed6476f749daebf00b"} Mar 13 12:38:32.021590 master-0 kubenswrapper[7518]: I0313 12:38:32.021165 7518 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-854648ff6d-669qk" event={"ID":"3d653e1a-5903-4a02-9357-df145f028c0d","Type":"ContainerStarted","Data":"970eeb7c4ac93691f1016454e092dba89eb2fcc2d1e0d15b1982b71ff313707c"} Mar 13 12:38:32.022423 master-0 kubenswrapper[7518]: I0313 12:38:32.022398 7518 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openshift-monitoring/cluster-monitoring-operator-674cbfbd9d-zwtdz" event={"ID":"604456a0-4997-43bc-87ef-283a002111fe","Type":"ContainerStarted","Data":"2e65250ae5f98234b34351e57ed90215912c9eb2d91f1f748ce0046b50854a52"} Mar 13 12:38:32.023572 master-0 kubenswrapper[7518]: I0313 12:38:32.023354 7518 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-r9lmb" event={"ID":"29b6aa89-0416-4595-9deb-10b290521d86","Type":"ContainerStarted","Data":"37d33fead87bedc9ebd143b0294923b633e8d9e7d47a848ec4d50fbd02e27628"} Mar 13 12:38:32.024723 master-0 kubenswrapper[7518]: I0313 12:38:32.024682 7518 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-7d9c49f57b-tlnkd" event={"ID":"10944f9c-8ce9-44e6-9c36-a0ea19d8cae3","Type":"ContainerStarted","Data":"69fae5f2ef7c0575f1ee9aa46fd22ae7b8ff711dadd59b1c832eda467b9991cd"} Mar 13 12:38:32.026858 master-0 kubenswrapper[7518]: I0313 12:38:32.026823 7518 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-d64cfc9db-rfqb9" event={"ID":"d5a19b80-d488-46d3-a4a8-0b80361077e1","Type":"ContainerStarted","Data":"8bb2d1af6db83f391d6e2aae6571d80b39fa6657f68665d4c9aa939bfcdacfe3"} Mar 13 12:38:33.648039 master-0 kubenswrapper[7518]: I0313 12:38:33.647967 7518 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-etcd/etcd-master-0-master-0"] Mar 13 12:38:33.648632 master-0 kubenswrapper[7518]: I0313 12:38:33.648288 7518 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-etcd/etcd-master-0-master-0" podUID="354f29997baa583b6238f7de9108ee10" containerName="etcdctl" containerID="cri-o://e408fc0e8cb4ee12255385245e6376d6aaefa9c98b225370a726fb0b9f89662c" gracePeriod=30 Mar 13 12:38:33.648632 master-0 kubenswrapper[7518]: I0313 12:38:33.648463 7518 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-etcd/etcd-master-0-master-0" podUID="354f29997baa583b6238f7de9108ee10" containerName="etcd" containerID="cri-o://c8e034500e686ef70dacdb42d92b730454c21d98abd545c3173a8492bf764cbb" gracePeriod=30 Mar 13 12:38:33.672082 master-0 kubenswrapper[7518]: I0313 12:38:33.669799 7518 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-etcd/etcd-master-0"] Mar 13 12:38:33.672082 master-0 kubenswrapper[7518]: E0313 12:38:33.670122 7518 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="354f29997baa583b6238f7de9108ee10" containerName="etcd" Mar 13 12:38:33.672082 master-0 kubenswrapper[7518]: I0313 12:38:33.670150 7518 state_mem.go:107] "Deleted CPUSet assignment" podUID="354f29997baa583b6238f7de9108ee10" containerName="etcd" Mar 13 12:38:33.672082 master-0 kubenswrapper[7518]: E0313 12:38:33.670166 7518 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="354f29997baa583b6238f7de9108ee10" containerName="etcdctl" Mar 13 12:38:33.672082 master-0 kubenswrapper[7518]: I0313 12:38:33.670173 7518 state_mem.go:107] "Deleted CPUSet assignment" podUID="354f29997baa583b6238f7de9108ee10" containerName="etcdctl" Mar 13 12:38:33.672082 master-0 kubenswrapper[7518]: E0313 12:38:33.670186 7518 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f951e49f-91f7-42d3-bc63-8117cff68d7a" containerName="installer" Mar 13 12:38:33.672082 master-0 kubenswrapper[7518]: I0313 12:38:33.670193 7518 state_mem.go:107] "Deleted CPUSet assignment" podUID="f951e49f-91f7-42d3-bc63-8117cff68d7a" containerName="installer" Mar 13 12:38:33.672082 master-0 kubenswrapper[7518]: I0313 12:38:33.670319 7518 memory_manager.go:354] "RemoveStaleState removing state" podUID="f951e49f-91f7-42d3-bc63-8117cff68d7a" containerName="installer" Mar 13 12:38:33.672082 master-0 kubenswrapper[7518]: I0313 12:38:33.670329 7518 memory_manager.go:354] "RemoveStaleState removing state" podUID="354f29997baa583b6238f7de9108ee10" containerName="etcd" Mar 13 
12:38:33.672082 master-0 kubenswrapper[7518]: I0313 12:38:33.670338 7518 memory_manager.go:354] "RemoveStaleState removing state" podUID="354f29997baa583b6238f7de9108ee10" containerName="etcdctl" Mar 13 12:38:33.684778 master-0 kubenswrapper[7518]: I0313 12:38:33.684721 7518 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-master-0" Mar 13 12:38:33.820868 master-0 kubenswrapper[7518]: I0313 12:38:33.820787 7518 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/8e52bef89f4b50e4590a1719bcc5d7e5-usr-local-bin\") pod \"etcd-master-0\" (UID: \"8e52bef89f4b50e4590a1719bcc5d7e5\") " pod="openshift-etcd/etcd-master-0" Mar 13 12:38:33.820868 master-0 kubenswrapper[7518]: I0313 12:38:33.820853 7518 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/8e52bef89f4b50e4590a1719bcc5d7e5-data-dir\") pod \"etcd-master-0\" (UID: \"8e52bef89f4b50e4590a1719bcc5d7e5\") " pod="openshift-etcd/etcd-master-0" Mar 13 12:38:33.821226 master-0 kubenswrapper[7518]: I0313 12:38:33.820894 7518 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/8e52bef89f4b50e4590a1719bcc5d7e5-static-pod-dir\") pod \"etcd-master-0\" (UID: \"8e52bef89f4b50e4590a1719bcc5d7e5\") " pod="openshift-etcd/etcd-master-0" Mar 13 12:38:33.821226 master-0 kubenswrapper[7518]: I0313 12:38:33.820915 7518 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/8e52bef89f4b50e4590a1719bcc5d7e5-resource-dir\") pod \"etcd-master-0\" (UID: \"8e52bef89f4b50e4590a1719bcc5d7e5\") " pod="openshift-etcd/etcd-master-0" Mar 13 12:38:33.821226 master-0 kubenswrapper[7518]: I0313 12:38:33.820970 7518 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/8e52bef89f4b50e4590a1719bcc5d7e5-cert-dir\") pod \"etcd-master-0\" (UID: \"8e52bef89f4b50e4590a1719bcc5d7e5\") " pod="openshift-etcd/etcd-master-0" Mar 13 12:38:33.821226 master-0 kubenswrapper[7518]: I0313 12:38:33.821003 7518 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/8e52bef89f4b50e4590a1719bcc5d7e5-log-dir\") pod \"etcd-master-0\" (UID: \"8e52bef89f4b50e4590a1719bcc5d7e5\") " pod="openshift-etcd/etcd-master-0" Mar 13 12:38:33.922082 master-0 kubenswrapper[7518]: I0313 12:38:33.921943 7518 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/8e52bef89f4b50e4590a1719bcc5d7e5-usr-local-bin\") pod \"etcd-master-0\" (UID: \"8e52bef89f4b50e4590a1719bcc5d7e5\") " pod="openshift-etcd/etcd-master-0" Mar 13 12:38:33.922082 master-0 kubenswrapper[7518]: I0313 12:38:33.922002 7518 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/8e52bef89f4b50e4590a1719bcc5d7e5-data-dir\") pod \"etcd-master-0\" (UID: \"8e52bef89f4b50e4590a1719bcc5d7e5\") " pod="openshift-etcd/etcd-master-0" Mar 13 12:38:33.922082 master-0 kubenswrapper[7518]: I0313 12:38:33.922022 7518 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/8e52bef89f4b50e4590a1719bcc5d7e5-static-pod-dir\") pod \"etcd-master-0\" (UID: \"8e52bef89f4b50e4590a1719bcc5d7e5\") " pod="openshift-etcd/etcd-master-0" Mar 13 12:38:33.922372 master-0 kubenswrapper[7518]: I0313 12:38:33.922100 7518 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"data-dir\" (UniqueName: 
\"kubernetes.io/host-path/8e52bef89f4b50e4590a1719bcc5d7e5-data-dir\") pod \"etcd-master-0\" (UID: \"8e52bef89f4b50e4590a1719bcc5d7e5\") " pod="openshift-etcd/etcd-master-0" Mar 13 12:38:33.922372 master-0 kubenswrapper[7518]: I0313 12:38:33.922167 7518 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/8e52bef89f4b50e4590a1719bcc5d7e5-resource-dir\") pod \"etcd-master-0\" (UID: \"8e52bef89f4b50e4590a1719bcc5d7e5\") " pod="openshift-etcd/etcd-master-0" Mar 13 12:38:33.922372 master-0 kubenswrapper[7518]: I0313 12:38:33.922206 7518 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/8e52bef89f4b50e4590a1719bcc5d7e5-cert-dir\") pod \"etcd-master-0\" (UID: \"8e52bef89f4b50e4590a1719bcc5d7e5\") " pod="openshift-etcd/etcd-master-0" Mar 13 12:38:33.922372 master-0 kubenswrapper[7518]: I0313 12:38:33.922229 7518 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/8e52bef89f4b50e4590a1719bcc5d7e5-log-dir\") pod \"etcd-master-0\" (UID: \"8e52bef89f4b50e4590a1719bcc5d7e5\") " pod="openshift-etcd/etcd-master-0" Mar 13 12:38:33.922372 master-0 kubenswrapper[7518]: I0313 12:38:33.922293 7518 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/8e52bef89f4b50e4590a1719bcc5d7e5-log-dir\") pod \"etcd-master-0\" (UID: \"8e52bef89f4b50e4590a1719bcc5d7e5\") " pod="openshift-etcd/etcd-master-0" Mar 13 12:38:33.922372 master-0 kubenswrapper[7518]: I0313 12:38:33.922326 7518 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/8e52bef89f4b50e4590a1719bcc5d7e5-static-pod-dir\") pod \"etcd-master-0\" (UID: \"8e52bef89f4b50e4590a1719bcc5d7e5\") " pod="openshift-etcd/etcd-master-0" Mar 13 12:38:33.922372 master-0 
kubenswrapper[7518]: I0313 12:38:33.922351 7518 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/8e52bef89f4b50e4590a1719bcc5d7e5-usr-local-bin\") pod \"etcd-master-0\" (UID: \"8e52bef89f4b50e4590a1719bcc5d7e5\") " pod="openshift-etcd/etcd-master-0" Mar 13 12:38:33.922602 master-0 kubenswrapper[7518]: I0313 12:38:33.922384 7518 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/8e52bef89f4b50e4590a1719bcc5d7e5-resource-dir\") pod \"etcd-master-0\" (UID: \"8e52bef89f4b50e4590a1719bcc5d7e5\") " pod="openshift-etcd/etcd-master-0" Mar 13 12:38:33.922602 master-0 kubenswrapper[7518]: I0313 12:38:33.922413 7518 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/8e52bef89f4b50e4590a1719bcc5d7e5-cert-dir\") pod \"etcd-master-0\" (UID: \"8e52bef89f4b50e4590a1719bcc5d7e5\") " pod="openshift-etcd/etcd-master-0" Mar 13 12:38:34.291390 master-0 kubenswrapper[7518]: I0313 12:38:34.291347 7518 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-dns/dns-default-m7k6m" Mar 13 12:38:36.050842 master-0 kubenswrapper[7518]: I0313 12:38:36.050792 7518 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/cluster-monitoring-operator-674cbfbd9d-zwtdz" event={"ID":"604456a0-4997-43bc-87ef-283a002111fe","Type":"ContainerStarted","Data":"9aa1f145ab48777d89ac9dfc81bf096e64f9b0cd2ac874112d02e3968695fd07"} Mar 13 12:38:36.053360 master-0 kubenswrapper[7518]: I0313 12:38:36.053332 7518 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-r9lmb" event={"ID":"29b6aa89-0416-4595-9deb-10b290521d86","Type":"ContainerStarted","Data":"f5f546bf8bc521093435c3a37d1935913b054104ff18f66c3350a586cd1d543b"} Mar 13 12:38:36.053436 master-0 kubenswrapper[7518]: I0313 12:38:36.053367 7518 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-r9lmb" event={"ID":"29b6aa89-0416-4595-9deb-10b290521d86","Type":"ContainerStarted","Data":"3ba77838f198baa166eefe13343c56a2e5909dbc283e549c98debc0878afd8e2"} Mar 13 12:38:39.067093 master-0 kubenswrapper[7518]: I0313 12:38:39.067023 7518 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-854648ff6d-669qk" event={"ID":"3d653e1a-5903-4a02-9357-df145f028c0d","Type":"ContainerStarted","Data":"baf23d87752ea57aa0879a0f3cabb3d54da65ab6c1d69c34a044b8dc1883ed70"} Mar 13 12:38:39.067665 master-0 kubenswrapper[7518]: I0313 12:38:39.067130 7518 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/package-server-manager-854648ff6d-669qk" Mar 13 12:38:39.068197 master-0 kubenswrapper[7518]: I0313 12:38:39.068131 7518 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-7d9c49f57b-tlnkd" event={"ID":"10944f9c-8ce9-44e6-9c36-a0ea19d8cae3","Type":"ContainerStarted","Data":"9d05f0d44d2a573355b6b4eea02a702f641e31e420669a5e155b6a442793e880"} Mar 13 12:38:39.068421 master-0 kubenswrapper[7518]: I0313 12:38:39.068391 7518 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/catalog-operator-7d9c49f57b-tlnkd" Mar 13 12:38:39.069290 master-0 kubenswrapper[7518]: I0313 12:38:39.069260 7518 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-d64cfc9db-rfqb9" event={"ID":"d5a19b80-d488-46d3-a4a8-0b80361077e1","Type":"ContainerStarted","Data":"47e1707cfebdcd64e29e4d18bf48d4efe18567479faf12290a7bcd51f3b4d7e2"} Mar 13 12:38:39.069712 master-0 kubenswrapper[7518]: I0313 12:38:39.069687 7518 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/olm-operator-d64cfc9db-rfqb9" Mar 
13 12:38:39.072742 master-0 kubenswrapper[7518]: I0313 12:38:39.072715 7518 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/catalog-operator-7d9c49f57b-tlnkd" Mar 13 12:38:39.073288 master-0 kubenswrapper[7518]: I0313 12:38:39.073271 7518 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/olm-operator-d64cfc9db-rfqb9" Mar 13 12:38:43.574886 master-0 kubenswrapper[7518]: I0313 12:38:43.574789 7518 patch_prober.go:28] interesting pod/authentication-operator-7c6989d6c4-tc4ht container/authentication-operator namespace/openshift-authentication-operator: Liveness probe status=failure output="Get \"https://10.128.0.22:8443/healthz\": dial tcp 10.128.0.22:8443: connect: connection refused" start-of-body= Mar 13 12:38:43.574886 master-0 kubenswrapper[7518]: I0313 12:38:43.574880 7518 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-authentication-operator/authentication-operator-7c6989d6c4-tc4ht" podUID="d11f8baa-6e8e-4ac0-9b23-1c44efd0ab2a" containerName="authentication-operator" probeResult="failure" output="Get \"https://10.128.0.22:8443/healthz\": dial tcp 10.128.0.22:8443: connect: connection refused" Mar 13 12:38:46.712792 master-0 kubenswrapper[7518]: E0313 12:38:46.712683 7518 kubelet.go:1929] "Failed creating a mirror pod for" err="Internal error occurred: admission plugin \"LimitRanger\" failed to complete mutation in 13s" pod="openshift-etcd/etcd-master-0" Mar 13 12:38:46.713931 master-0 kubenswrapper[7518]: I0313 12:38:46.713193 7518 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd/etcd-master-0" Mar 13 12:38:46.741445 master-0 kubenswrapper[7518]: W0313 12:38:46.741402 7518 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8e52bef89f4b50e4590a1719bcc5d7e5.slice/crio-c7a57c7991ded4305409d230ea7c2ca2ff3ee3cfa777170090aebea70df9b3bc WatchSource:0}: Error finding container c7a57c7991ded4305409d230ea7c2ca2ff3ee3cfa777170090aebea70df9b3bc: Status 404 returned error can't find the container with id c7a57c7991ded4305409d230ea7c2ca2ff3ee3cfa777170090aebea70df9b3bc Mar 13 12:38:47.111268 master-0 kubenswrapper[7518]: I0313 12:38:47.111180 7518 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"8e52bef89f4b50e4590a1719bcc5d7e5","Type":"ContainerStarted","Data":"19074dc73968560b828f5c5335186658b83f7db1641c16ec73e2170c5bea574e"} Mar 13 12:38:47.111498 master-0 kubenswrapper[7518]: I0313 12:38:47.111286 7518 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"8e52bef89f4b50e4590a1719bcc5d7e5","Type":"ContainerStarted","Data":"c7a57c7991ded4305409d230ea7c2ca2ff3ee3cfa777170090aebea70df9b3bc"} Mar 13 12:38:48.117312 master-0 kubenswrapper[7518]: I0313 12:38:48.117186 7518 generic.go:334] "Generic (PLEG): container finished" podID="8e52bef89f4b50e4590a1719bcc5d7e5" containerID="19074dc73968560b828f5c5335186658b83f7db1641c16ec73e2170c5bea574e" exitCode=0 Mar 13 12:38:48.117312 master-0 kubenswrapper[7518]: I0313 12:38:48.117284 7518 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"8e52bef89f4b50e4590a1719bcc5d7e5","Type":"ContainerDied","Data":"19074dc73968560b828f5c5335186658b83f7db1641c16ec73e2170c5bea574e"} Mar 13 12:38:48.121232 master-0 kubenswrapper[7518]: I0313 12:38:48.118709 7518 generic.go:334] "Generic (PLEG): container finished" podID="00d2e134-62bb-4181-aa0a-22c9b9755b10" 
containerID="1b3f3325d5e04c56ba72e3fc00c285b339f3ca147fcedd9041b736950ddeb5fa" exitCode=0 Mar 13 12:38:48.121232 master-0 kubenswrapper[7518]: I0313 12:38:48.118761 7518 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/installer-1-master-0" event={"ID":"00d2e134-62bb-4181-aa0a-22c9b9755b10","Type":"ContainerDied","Data":"1b3f3325d5e04c56ba72e3fc00c285b339f3ca147fcedd9041b736950ddeb5fa"} Mar 13 12:38:48.350554 master-0 kubenswrapper[7518]: E0313 12:38:48.350426 7518 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-13T12:38:38Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-13T12:38:38Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-13T12:38:38Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-13T12:38:38Z\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:82f121f9d021a9843b9458f9f222c40f292f2c21dcfcf00f05daacaca8a949c0\\\"],\\\"sizeBytes\\\":1637445817},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:381e96959e3c3b08a3e2715e6024697ae14af31bd0378b49f583e984b3b9a192\\\"],\\\"sizeBytes\\\":1238047254},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c9330c756dd6ab107e9a4b671bc52742c90d5be11a8380d8b710e2bd4e0ed43c\\\"],\\\"sizeBytes\\\":992610645},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fca00eb71b1f03e5b5180a66f3871f5626d337b56196622f5842cfc165523b4\\\"],\\\"sizeBytes\\\":943837171},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e207c762b7802ee0e54507d21ed1f25b19eddc511a4b824934c16c
163193be6a\\\"],\\\"sizeBytes\\\":876146500},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9bfcd8017eede3fb66fa3f5b47c27508b787d38455689154461f0e6a5dc303ff\\\"],\\\"sizeBytes\\\":772939850},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9c946fdc5a4cd16ff998c17844780e7efc38f7f38b97a8a40d75cd77b318ddef\\\"],\\\"sizeBytes\\\":687947017},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0c03cb25dc6f6a865529ebc979e8d7d08492b28fd3fb93beddf30e1cb06f1245\\\"],\\\"sizeBytes\\\":683169303},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3f34dc492c80a3dee4643cc2291044750ac51e6e919b973de8723fa8b70bde70\\\"],\\\"sizeBytes\\\":677929075},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a149ed17b20a7577fceacfc5198f8b7b3edf314ee22f77bd6ab87f06a3aa17f3\\\"],\\\"sizeBytes\\\":621647686},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1ec9d3dbcc6f9817c0f6d09f64c0d98c91b03afbb1fcb3c1e1718aca900754b\\\"],\\\"sizeBytes\\\":589379637},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1575be013a898f153cbf012aeaf28ce720022f934dc05bdffbe479e30999d460\\\"],\\\"sizeBytes\\\":582153879},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:eb82e437a701ce83b70e56be8477d987da67578714dda3d9fa6628804b1b56f5\\\"],\\\"sizeBytes\\\":558210153},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:28f33d62fd0b94c5ea0ebcd7a4216848c8dd671a38d901ce98f4c399b700e1c7\\\"],\\\"sizeBytes\\\":548751793},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cc20748723f55f960cfb6328d1591880bbd1b3452155633996d4f41fc7c5f46b\\\"],\\\"sizeBytes\\\":529324693},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ac6f0695d3386e6d601f4ae507940981352fa3ad884b0fed6fb25698c5e6f916\\\"],\\\"sizeBytes\\\":528946249},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/oc
p-v4.0-art-dev@sha256:6088910bdc1583b275fab261e3234c0b63b4cc16d01bcea697b6a7f6db13bdf3\\\"],\\\"sizeBytes\\\":518384455},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-release@sha256:14bd3c04daa885009785d48f4973e2890751a7ec116cc14d17627245cda54d7b\\\"],\\\"sizeBytes\\\":517997625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5500329ab50804678fb8a90b96bf2a469bca16b620fb6dd2f5f5a17106e94898\\\"],\\\"sizeBytes\\\":514980169},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9bd818e37e1f9dbe5393c557b89e81010d68171408e0e4157a3d92ae0ca1c953\\\"],\\\"sizeBytes\\\":513220825},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d601c8437b4d8bbe2da0f3b08f1bd8693f5a4ef6d835377ec029c79d9dca5dab\\\"],\\\"sizeBytes\\\":512273539},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b47d2b146e833bc1612a652136f43afcf1ba30f32cbd0a2f06ca9fc80d969f0\\\"],\\\"sizeBytes\\\":511226810},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:834063dd26fb3d2489e193489198a0d5fbe9c775a0e30173e5fcef6994fbf0f6\\\"],\\\"sizeBytes\\\":511164376},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee46e13e26156c904e5784e2d64511021ed0974a169ccd6476b05bff1c44ec56\\\"],\\\"sizeBytes\\\":508888174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7220d16ea511c0f0410cf45db45aaafcc64847c9cb5732ad1eff39ceb482cdba\\\"],\\\"sizeBytes\\\":508544235},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:526c5c02a8fa86a2fa83a7087d4a5c4b1c4072c0f3906163494cc3b3c1295e9b\\\"],\\\"sizeBytes\\\":507967997},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4010a8f9d932615336227e2fd43325d4fa9025dca4bebe032106efea733fcfc3\\\"],\\\"sizeBytes\\\":506479655},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:76b719f5bd541eb1a8bae124d650896b533e7bc3107be536e598b3ab4e135282\\\"],\\\"sizeBy
tes\\\":506394574},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5de69354d08184ecd6144facc1461777674674e8304971216d4cf1a5025472b9\\\"],\\\"sizeBytes\\\":505344964},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a324f47cf789c0480fa4bcb0812152abc3cd844318bab193108fe4349eed609\\\"],\\\"sizeBytes\\\":505242594},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d11f13e867f4df046ca6789bb7273da5d0c08895b3dea00949c8a5458f9e22f9\\\"],\\\"sizeBytes\\\":504623546},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:76bdc35338c4d0f5e5b9448fb73e3578656f908a962286692e12a0372ec721d5\\\"],\\\"sizeBytes\\\":495994161},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ff2db11ce277288befab25ddb86177e832842d2edb5607a2da8f252a030e1cfc\\\"],\\\"sizeBytes\\\":495064829},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fe5144b1f72bdcf5d5a52130f02ed86fbec3875cc4ac108ead00eaac1659e06\\\"],\\\"sizeBytes\\\":487090672},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c54c3f7cffe057ae0bdf26163d5e46744685083ae16fc97112e32beacd2d8955\\\"],\\\"sizeBytes\\\":484175664},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9b8bc43bac294be3c7669cde049e388ad9d8751242051ba40f83e1c401eceda\\\"],\\\"sizeBytes\\\":468263999},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8677f7a973553c25d282bc249fc8bc0f5aa42fb144ea0956d1f04c5a6cd80501\\\"],\\\"sizeBytes\\\":465086330},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a85dab5856916220df6f05ce9d6aa10cd4fa0234093b55355246690bba05ad1\\\"],\\\"sizeBytes\\\":463700811},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b714a7ada1e295b599b432f32e1fd5b74c8cdbe6fe51e95306322b25cb873914\\\"],\\\"sizeBytes\\\":458126424},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5230
462066ab36e3025524e948dd33fa6f51ee29a4f91fa469bfc268568b5fd9\\\"],\\\"sizeBytes\\\":456575686},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c680fcc9fd6b66099ca4c0f512521b6f8e0bc29273ddb9405730bc54bacb6783\\\"],\\\"sizeBytes\\\":448041621},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cf9670d0f269f8d49fd9ef4981999be195f6624a4146aa93d9201eb8acc81053\\\"],\\\"sizeBytes\\\":443271011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8ceca1efee55b9fd5089428476bbc401fe73db7c0b0f5e16d4ad28ed0f0f9d43\\\"],\\\"sizeBytes\\\":438654375},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ace4dcd008420277d915fe983b07bbb50fb3ab0673f28d0166424a75bc2137e7\\\"],\\\"sizeBytes\\\":411585608},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c8f0fda36e9a2040dbe0537361dcd73658df4e669d846f8101a8f9f29f0be9a7\\\"],\\\"sizeBytes\\\":407347126},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1d605384f31a8085f78a96145c2c3dc51afe22721144196140a2699b7c07ebe3\\\"],\\\"sizeBytes\\\":396521759}]}}\" for node \"master-0\": Patch \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0/status?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 13 12:38:48.761728 master-0 kubenswrapper[7518]: E0313 12:38:48.761665 7518 controller.go:195] "Failed to update lease" err="Put \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 13 12:38:49.127963 master-0 kubenswrapper[7518]: I0313 12:38:49.127885 7518 generic.go:334] "Generic (PLEG): container finished" podID="f78c05e1499b533b83f091333d61f045" containerID="5f035fb00c2f1c52dbc78fa55ac7bc8d27c14c42f3da11b968e1fb6e88e80856" exitCode=1 Mar 13 12:38:49.129675 master-0 kubenswrapper[7518]: I0313 
12:38:49.128074 7518 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-controller-manager-master-0" event={"ID":"f78c05e1499b533b83f091333d61f045","Type":"ContainerDied","Data":"5f035fb00c2f1c52dbc78fa55ac7bc8d27c14c42f3da11b968e1fb6e88e80856"} Mar 13 12:38:49.129675 master-0 kubenswrapper[7518]: I0313 12:38:49.128115 7518 scope.go:117] "RemoveContainer" containerID="9976faf535c3de998191b8eb2224b47994a3c8d30cd6f57ea4e1d4aff13da677" Mar 13 12:38:49.129675 master-0 kubenswrapper[7518]: I0313 12:38:49.128637 7518 scope.go:117] "RemoveContainer" containerID="5f035fb00c2f1c52dbc78fa55ac7bc8d27c14c42f3da11b968e1fb6e88e80856" Mar 13 12:38:49.445381 master-0 kubenswrapper[7518]: I0313 12:38:49.445329 7518 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/installer-1-master-0" Mar 13 12:38:49.582879 master-0 kubenswrapper[7518]: I0313 12:38:49.582815 7518 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/00d2e134-62bb-4181-aa0a-22c9b9755b10-var-lock\") pod \"00d2e134-62bb-4181-aa0a-22c9b9755b10\" (UID: \"00d2e134-62bb-4181-aa0a-22c9b9755b10\") " Mar 13 12:38:49.583042 master-0 kubenswrapper[7518]: I0313 12:38:49.582891 7518 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/00d2e134-62bb-4181-aa0a-22c9b9755b10-kube-api-access\") pod \"00d2e134-62bb-4181-aa0a-22c9b9755b10\" (UID: \"00d2e134-62bb-4181-aa0a-22c9b9755b10\") " Mar 13 12:38:49.583042 master-0 kubenswrapper[7518]: I0313 12:38:49.582959 7518 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/00d2e134-62bb-4181-aa0a-22c9b9755b10-var-lock" (OuterVolumeSpecName: "var-lock") pod "00d2e134-62bb-4181-aa0a-22c9b9755b10" (UID: "00d2e134-62bb-4181-aa0a-22c9b9755b10"). InnerVolumeSpecName "var-lock". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 13 12:38:49.583042 master-0 kubenswrapper[7518]: I0313 12:38:49.583001 7518 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/00d2e134-62bb-4181-aa0a-22c9b9755b10-kubelet-dir\") pod \"00d2e134-62bb-4181-aa0a-22c9b9755b10\" (UID: \"00d2e134-62bb-4181-aa0a-22c9b9755b10\") " Mar 13 12:38:49.583302 master-0 kubenswrapper[7518]: I0313 12:38:49.583282 7518 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/00d2e134-62bb-4181-aa0a-22c9b9755b10-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "00d2e134-62bb-4181-aa0a-22c9b9755b10" (UID: "00d2e134-62bb-4181-aa0a-22c9b9755b10"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 13 12:38:49.583396 master-0 kubenswrapper[7518]: I0313 12:38:49.583304 7518 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/00d2e134-62bb-4181-aa0a-22c9b9755b10-var-lock\") on node \"master-0\" DevicePath \"\"" Mar 13 12:38:49.587700 master-0 kubenswrapper[7518]: I0313 12:38:49.587663 7518 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/00d2e134-62bb-4181-aa0a-22c9b9755b10-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "00d2e134-62bb-4181-aa0a-22c9b9755b10" (UID: "00d2e134-62bb-4181-aa0a-22c9b9755b10"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 12:38:49.684963 master-0 kubenswrapper[7518]: I0313 12:38:49.684839 7518 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/00d2e134-62bb-4181-aa0a-22c9b9755b10-kube-api-access\") on node \"master-0\" DevicePath \"\"" Mar 13 12:38:49.684963 master-0 kubenswrapper[7518]: I0313 12:38:49.684883 7518 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/00d2e134-62bb-4181-aa0a-22c9b9755b10-kubelet-dir\") on node \"master-0\" DevicePath \"\"" Mar 13 12:38:50.137388 master-0 kubenswrapper[7518]: I0313 12:38:50.137259 7518 generic.go:334] "Generic (PLEG): container finished" podID="a1a56802af72ce1aac6b5077f1695ac0" containerID="23aef1d459d801451207b22b103d82e16b0fb29eac9febd8e8918cd59b44679c" exitCode=1 Mar 13 12:38:50.138215 master-0 kubenswrapper[7518]: I0313 12:38:50.137404 7518 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-scheduler-master-0" event={"ID":"a1a56802af72ce1aac6b5077f1695ac0","Type":"ContainerDied","Data":"23aef1d459d801451207b22b103d82e16b0fb29eac9febd8e8918cd59b44679c"} Mar 13 12:38:50.138977 master-0 kubenswrapper[7518]: I0313 12:38:50.138919 7518 scope.go:117] "RemoveContainer" containerID="23aef1d459d801451207b22b103d82e16b0fb29eac9febd8e8918cd59b44679c" Mar 13 12:38:50.142085 master-0 kubenswrapper[7518]: I0313 12:38:50.141946 7518 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-controller-manager-master-0" event={"ID":"f78c05e1499b533b83f091333d61f045","Type":"ContainerStarted","Data":"696192325e102818ab8863a16ab52b3671d6dc3f225d1e0faf06a32633060bda"} Mar 13 12:38:50.144081 master-0 kubenswrapper[7518]: I0313 12:38:50.144043 7518 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/installer-1-master-0" 
event={"ID":"00d2e134-62bb-4181-aa0a-22c9b9755b10","Type":"ContainerDied","Data":"dd20eff6c17b5d26b931e6d943bd09e05bef7d7025ee5b4bd9d525e64901dc81"} Mar 13 12:38:50.144225 master-0 kubenswrapper[7518]: I0313 12:38:50.144088 7518 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="dd20eff6c17b5d26b931e6d943bd09e05bef7d7025ee5b4bd9d525e64901dc81" Mar 13 12:38:50.144391 master-0 kubenswrapper[7518]: I0313 12:38:50.144370 7518 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/installer-1-master-0" Mar 13 12:38:51.150854 master-0 kubenswrapper[7518]: I0313 12:38:51.150802 7518 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-scheduler-master-0" event={"ID":"a1a56802af72ce1aac6b5077f1695ac0","Type":"ContainerStarted","Data":"74509294773fbb5f73a8dd8c9003ceebee4b1e194cad14d7465b52eca3b8eaab"} Mar 13 12:38:53.574597 master-0 kubenswrapper[7518]: I0313 12:38:53.574476 7518 patch_prober.go:28] interesting pod/authentication-operator-7c6989d6c4-tc4ht container/authentication-operator namespace/openshift-authentication-operator: Liveness probe status=failure output="Get \"https://10.128.0.22:8443/healthz\": dial tcp 10.128.0.22:8443: connect: connection refused" start-of-body= Mar 13 12:38:53.574597 master-0 kubenswrapper[7518]: I0313 12:38:53.574564 7518 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-authentication-operator/authentication-operator-7c6989d6c4-tc4ht" podUID="d11f8baa-6e8e-4ac0-9b23-1c44efd0ab2a" containerName="authentication-operator" probeResult="failure" output="Get \"https://10.128.0.22:8443/healthz\": dial tcp 10.128.0.22:8443: connect: connection refused" Mar 13 12:38:54.147352 master-0 kubenswrapper[7518]: I0313 12:38:54.147293 7518 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 13 12:38:55.216673 master-0 kubenswrapper[7518]: I0313 
12:38:55.216613 7518 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 13 12:38:57.147836 master-0 kubenswrapper[7518]: I0313 12:38:57.147761 7518 prober.go:107] "Probe failed" probeType="Startup" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="f78c05e1499b533b83f091333d61f045" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.32.10:10257/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 13 12:38:58.351665 master-0 kubenswrapper[7518]: E0313 12:38:58.351575 7518 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 13 12:38:58.762933 master-0 kubenswrapper[7518]: E0313 12:38:58.762800 7518 controller.go:195] "Failed to update lease" err="Put \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 13 12:39:00.200955 master-0 kubenswrapper[7518]: I0313 12:39:00.200903 7518 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-controller-manager-operator_openshift-controller-manager-operator-8565d84698-hj2wk_8c62b15f-001a-4b64-b85f-348aefde5d1b/openshift-controller-manager-operator/0.log" Mar 13 12:39:00.201457 master-0 kubenswrapper[7518]: I0313 12:39:00.200976 7518 generic.go:334] "Generic (PLEG): container finished" podID="8c62b15f-001a-4b64-b85f-348aefde5d1b" containerID="50a86534e82c318c07e40c2eda167d8236002efbe5ace1ee2b94525f4f64c25b" exitCode=1 Mar 13 12:39:00.201457 master-0 kubenswrapper[7518]: I0313 12:39:00.201017 7518 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-controller-manager-operator/openshift-controller-manager-operator-8565d84698-hj2wk" event={"ID":"8c62b15f-001a-4b64-b85f-348aefde5d1b","Type":"ContainerDied","Data":"50a86534e82c318c07e40c2eda167d8236002efbe5ace1ee2b94525f4f64c25b"} Mar 13 12:39:00.201561 master-0 kubenswrapper[7518]: I0313 12:39:00.201541 7518 scope.go:117] "RemoveContainer" containerID="50a86534e82c318c07e40c2eda167d8236002efbe5ace1ee2b94525f4f64c25b" Mar 13 12:39:01.123814 master-0 kubenswrapper[7518]: E0313 12:39:01.123749 7518 kubelet.go:1929] "Failed creating a mirror pod for" err="Internal error occurred: admission plugin \"LimitRanger\" failed to complete mutation in 13s" pod="openshift-etcd/etcd-master-0" Mar 13 12:39:01.208231 master-0 kubenswrapper[7518]: I0313 12:39:01.208180 7518 generic.go:334] "Generic (PLEG): container finished" podID="354f29997baa583b6238f7de9108ee10" containerID="c8e034500e686ef70dacdb42d92b730454c21d98abd545c3173a8492bf764cbb" exitCode=0 Mar 13 12:39:01.210683 master-0 kubenswrapper[7518]: I0313 12:39:01.210647 7518 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-controller-manager-operator_openshift-controller-manager-operator-8565d84698-hj2wk_8c62b15f-001a-4b64-b85f-348aefde5d1b/openshift-controller-manager-operator/0.log" Mar 13 12:39:01.210762 master-0 kubenswrapper[7518]: I0313 12:39:01.210696 7518 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-8565d84698-hj2wk" event={"ID":"8c62b15f-001a-4b64-b85f-348aefde5d1b","Type":"ContainerStarted","Data":"0c1cf11fba8779c80d0da5e273c773daa5eb397179aa4efedaa5ea11988b99ed"} Mar 13 12:39:02.218632 master-0 kubenswrapper[7518]: I0313 12:39:02.218535 7518 generic.go:334] "Generic (PLEG): container finished" podID="8e52bef89f4b50e4590a1719bcc5d7e5" containerID="51708dbfd880bb781044065864d488ed11f7e85098ff14393855c88e1ae496df" exitCode=0 Mar 13 12:39:02.219389 master-0 kubenswrapper[7518]: I0313 
12:39:02.218638 7518 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"8e52bef89f4b50e4590a1719bcc5d7e5","Type":"ContainerDied","Data":"51708dbfd880bb781044065864d488ed11f7e85098ff14393855c88e1ae496df"} Mar 13 12:39:03.574505 master-0 kubenswrapper[7518]: I0313 12:39:03.574436 7518 patch_prober.go:28] interesting pod/authentication-operator-7c6989d6c4-tc4ht container/authentication-operator namespace/openshift-authentication-operator: Liveness probe status=failure output="Get \"https://10.128.0.22:8443/healthz\": dial tcp 10.128.0.22:8443: connect: connection refused" start-of-body= Mar 13 12:39:03.575555 master-0 kubenswrapper[7518]: I0313 12:39:03.575446 7518 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-authentication-operator/authentication-operator-7c6989d6c4-tc4ht" podUID="d11f8baa-6e8e-4ac0-9b23-1c44efd0ab2a" containerName="authentication-operator" probeResult="failure" output="Get \"https://10.128.0.22:8443/healthz\": dial tcp 10.128.0.22:8443: connect: connection refused" Mar 13 12:39:03.575705 master-0 kubenswrapper[7518]: I0313 12:39:03.575616 7518 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-authentication-operator/authentication-operator-7c6989d6c4-tc4ht" Mar 13 12:39:03.576903 master-0 kubenswrapper[7518]: I0313 12:39:03.576832 7518 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="authentication-operator" containerStatusID={"Type":"cri-o","ID":"13a298fff8d915caaf89a785573e9b3488b88852d2c326a75e61c523b3cd60a0"} pod="openshift-authentication-operator/authentication-operator-7c6989d6c4-tc4ht" containerMessage="Container authentication-operator failed liveness probe, will be restarted" Mar 13 12:39:03.576984 master-0 kubenswrapper[7518]: I0313 12:39:03.576951 7518 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-authentication-operator/authentication-operator-7c6989d6c4-tc4ht" 
podUID="d11f8baa-6e8e-4ac0-9b23-1c44efd0ab2a" containerName="authentication-operator" containerID="cri-o://13a298fff8d915caaf89a785573e9b3488b88852d2c326a75e61c523b3cd60a0" gracePeriod=30 Mar 13 12:39:03.850675 master-0 kubenswrapper[7518]: I0313 12:39:03.850631 7518 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0-master-0_354f29997baa583b6238f7de9108ee10/etcdctl/0.log" Mar 13 12:39:03.850861 master-0 kubenswrapper[7518]: I0313 12:39:03.850735 7518 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-master-0-master-0" Mar 13 12:39:04.026818 master-0 kubenswrapper[7518]: I0313 12:39:04.026735 7518 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/host-path/354f29997baa583b6238f7de9108ee10-certs\") pod \"354f29997baa583b6238f7de9108ee10\" (UID: \"354f29997baa583b6238f7de9108ee10\") " Mar 13 12:39:04.027035 master-0 kubenswrapper[7518]: I0313 12:39:04.026906 7518 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/354f29997baa583b6238f7de9108ee10-certs" (OuterVolumeSpecName: "certs") pod "354f29997baa583b6238f7de9108ee10" (UID: "354f29997baa583b6238f7de9108ee10"). InnerVolumeSpecName "certs". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 13 12:39:04.027035 master-0 kubenswrapper[7518]: I0313 12:39:04.026926 7518 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/354f29997baa583b6238f7de9108ee10-data-dir\") pod \"354f29997baa583b6238f7de9108ee10\" (UID: \"354f29997baa583b6238f7de9108ee10\") " Mar 13 12:39:04.027035 master-0 kubenswrapper[7518]: I0313 12:39:04.026959 7518 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/354f29997baa583b6238f7de9108ee10-data-dir" (OuterVolumeSpecName: "data-dir") pod "354f29997baa583b6238f7de9108ee10" (UID: "354f29997baa583b6238f7de9108ee10"). InnerVolumeSpecName "data-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 13 12:39:04.027203 master-0 kubenswrapper[7518]: I0313 12:39:04.027171 7518 reconciler_common.go:293] "Volume detached for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/354f29997baa583b6238f7de9108ee10-data-dir\") on node \"master-0\" DevicePath \"\"" Mar 13 12:39:04.027203 master-0 kubenswrapper[7518]: I0313 12:39:04.027194 7518 reconciler_common.go:293] "Volume detached for volume \"certs\" (UniqueName: \"kubernetes.io/host-path/354f29997baa583b6238f7de9108ee10-certs\") on node \"master-0\" DevicePath \"\"" Mar 13 12:39:04.232302 master-0 kubenswrapper[7518]: I0313 12:39:04.232202 7518 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0-master-0_354f29997baa583b6238f7de9108ee10/etcdctl/0.log" Mar 13 12:39:04.232302 master-0 kubenswrapper[7518]: I0313 12:39:04.232247 7518 generic.go:334] "Generic (PLEG): container finished" podID="354f29997baa583b6238f7de9108ee10" containerID="e408fc0e8cb4ee12255385245e6376d6aaefa9c98b225370a726fb0b9f89662c" exitCode=137 Mar 13 12:39:04.232302 master-0 kubenswrapper[7518]: I0313 12:39:04.232297 7518 scope.go:117] "RemoveContainer" 
containerID="c8e034500e686ef70dacdb42d92b730454c21d98abd545c3173a8492bf764cbb" Mar 13 12:39:04.232701 master-0 kubenswrapper[7518]: I0313 12:39:04.232411 7518 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-master-0-master-0" Mar 13 12:39:04.267789 master-0 kubenswrapper[7518]: I0313 12:39:04.267750 7518 scope.go:117] "RemoveContainer" containerID="e408fc0e8cb4ee12255385245e6376d6aaefa9c98b225370a726fb0b9f89662c" Mar 13 12:39:04.281535 master-0 kubenswrapper[7518]: I0313 12:39:04.281497 7518 scope.go:117] "RemoveContainer" containerID="c8e034500e686ef70dacdb42d92b730454c21d98abd545c3173a8492bf764cbb" Mar 13 12:39:04.282217 master-0 kubenswrapper[7518]: E0313 12:39:04.282166 7518 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c8e034500e686ef70dacdb42d92b730454c21d98abd545c3173a8492bf764cbb\": container with ID starting with c8e034500e686ef70dacdb42d92b730454c21d98abd545c3173a8492bf764cbb not found: ID does not exist" containerID="c8e034500e686ef70dacdb42d92b730454c21d98abd545c3173a8492bf764cbb" Mar 13 12:39:04.282217 master-0 kubenswrapper[7518]: I0313 12:39:04.282198 7518 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c8e034500e686ef70dacdb42d92b730454c21d98abd545c3173a8492bf764cbb"} err="failed to get container status \"c8e034500e686ef70dacdb42d92b730454c21d98abd545c3173a8492bf764cbb\": rpc error: code = NotFound desc = could not find container \"c8e034500e686ef70dacdb42d92b730454c21d98abd545c3173a8492bf764cbb\": container with ID starting with c8e034500e686ef70dacdb42d92b730454c21d98abd545c3173a8492bf764cbb not found: ID does not exist" Mar 13 12:39:04.282323 master-0 kubenswrapper[7518]: I0313 12:39:04.282227 7518 scope.go:117] "RemoveContainer" containerID="e408fc0e8cb4ee12255385245e6376d6aaefa9c98b225370a726fb0b9f89662c" Mar 13 12:39:04.282688 master-0 kubenswrapper[7518]: E0313 
12:39:04.282659 7518 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e408fc0e8cb4ee12255385245e6376d6aaefa9c98b225370a726fb0b9f89662c\": container with ID starting with e408fc0e8cb4ee12255385245e6376d6aaefa9c98b225370a726fb0b9f89662c not found: ID does not exist" containerID="e408fc0e8cb4ee12255385245e6376d6aaefa9c98b225370a726fb0b9f89662c" Mar 13 12:39:04.282736 master-0 kubenswrapper[7518]: I0313 12:39:04.282686 7518 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e408fc0e8cb4ee12255385245e6376d6aaefa9c98b225370a726fb0b9f89662c"} err="failed to get container status \"e408fc0e8cb4ee12255385245e6376d6aaefa9c98b225370a726fb0b9f89662c\": rpc error: code = NotFound desc = could not find container \"e408fc0e8cb4ee12255385245e6376d6aaefa9c98b225370a726fb0b9f89662c\": container with ID starting with e408fc0e8cb4ee12255385245e6376d6aaefa9c98b225370a726fb0b9f89662c not found: ID does not exist" Mar 13 12:39:05.605914 master-0 kubenswrapper[7518]: I0313 12:39:05.605844 7518 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="354f29997baa583b6238f7de9108ee10" path="/var/lib/kubelet/pods/354f29997baa583b6238f7de9108ee10/volumes" Mar 13 12:39:05.607667 master-0 kubenswrapper[7518]: I0313 12:39:05.606316 7518 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-etcd/etcd-master-0-master-0" podUID="" Mar 13 12:39:07.146842 master-0 kubenswrapper[7518]: I0313 12:39:07.146746 7518 prober.go:107] "Probe failed" probeType="Startup" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="f78c05e1499b533b83f091333d61f045" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.32.10:10257/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 13 12:39:07.668891 master-0 kubenswrapper[7518]: E0313 12:39:07.668717 7518 
event.go:359] "Server rejected event (will not retry!)" err="Timeout: request did not complete within requested timeout - context deadline exceeded" event="&Event{ObjectMeta:{etcd-master-0-master-0.189c66eb33bc4100 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-master-0-master-0,UID:354f29997baa583b6238f7de9108ee10,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd},},Reason:Killing,Message:Stopping container etcd,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-13 12:38:33.648455936 +0000 UTC m=+68.281525123,LastTimestamp:2026-03-13 12:38:33.648455936 +0000 UTC m=+68.281525123,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 13 12:39:08.353097 master-0 kubenswrapper[7518]: E0313 12:39:08.353027 7518 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 13 12:39:08.764358 master-0 kubenswrapper[7518]: E0313 12:39:08.764285 7518 controller.go:195] "Failed to update lease" err="Put \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 13 12:39:09.267967 master-0 kubenswrapper[7518]: I0313 12:39:09.267884 7518 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_installer-1-master-0_88bf0bf8-c0ee-454e-8d8b-592a6e796cfc/installer/0.log" Mar 13 12:39:09.267967 master-0 kubenswrapper[7518]: I0313 12:39:09.267943 7518 generic.go:334] "Generic (PLEG): container finished" podID="88bf0bf8-c0ee-454e-8d8b-592a6e796cfc" 
containerID="f528e329070374fe2c7b4c96e9e572f6132a46e0533c48dae8a60425fcb61903" exitCode=1 Mar 13 12:39:09.279727 master-0 kubenswrapper[7518]: I0313 12:39:09.279619 7518 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_installer-1-master-0_3828446d-a3e3-412f-a0e7-7347b5de523a/installer/0.log" Mar 13 12:39:09.279951 master-0 kubenswrapper[7518]: I0313 12:39:09.279754 7518 generic.go:334] "Generic (PLEG): container finished" podID="3828446d-a3e3-412f-a0e7-7347b5de523a" containerID="aa6c714b8707274c998afed0944ecc8600d7bc24a6b08415b6cbef112b436b47" exitCode=1 Mar 13 12:39:12.297400 master-0 kubenswrapper[7518]: I0313 12:39:12.297351 7518 generic.go:334] "Generic (PLEG): container finished" podID="d11f8baa-6e8e-4ac0-9b23-1c44efd0ab2a" containerID="13a298fff8d915caaf89a785573e9b3488b88852d2c326a75e61c523b3cd60a0" exitCode=0 Mar 13 12:39:15.226942 master-0 kubenswrapper[7518]: E0313 12:39:15.226872 7518 kubelet.go:1929] "Failed creating a mirror pod for" err="Internal error occurred: admission plugin \"LimitRanger\" failed to complete mutation in 13s" pod="openshift-etcd/etcd-master-0" Mar 13 12:39:16.280533 master-0 kubenswrapper[7518]: I0313 12:39:16.280484 7518 patch_prober.go:28] interesting pod/etcd-operator-5884b9cd56-hjzms container/etcd-operator namespace/openshift-etcd-operator: Liveness probe status=failure output="Get \"https://10.128.0.5:8443/healthz\": dial tcp 10.128.0.5:8443: connect: connection refused" start-of-body= Mar 13 12:39:16.281225 master-0 kubenswrapper[7518]: I0313 12:39:16.281188 7518 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-etcd-operator/etcd-operator-5884b9cd56-hjzms" podUID="15b592d6-3c48-45d4-9172-d28632ae8995" containerName="etcd-operator" probeResult="failure" output="Get \"https://10.128.0.5:8443/healthz\": dial tcp 10.128.0.5:8443: connect: connection refused" Mar 13 12:39:16.322078 master-0 kubenswrapper[7518]: I0313 12:39:16.322020 7518 generic.go:334] "Generic 
(PLEG): container finished" podID="8e52bef89f4b50e4590a1719bcc5d7e5" containerID="a24502cdbf57f3af530c16b279d90e04b37d8116542797b27db1c42bb0ece279" exitCode=0 Mar 13 12:39:17.147265 master-0 kubenswrapper[7518]: I0313 12:39:17.147095 7518 prober.go:107] "Probe failed" probeType="Startup" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="f78c05e1499b533b83f091333d61f045" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.32.10:10257/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 13 12:39:17.328690 master-0 kubenswrapper[7518]: I0313 12:39:17.328612 7518 generic.go:334] "Generic (PLEG): container finished" podID="0da84bb7-e936-49a0-96b5-614a1305d6a4" containerID="7049109a836522af070e6bb63ef4a03a6cf57954c7a7d1ea2471e59144150127" exitCode=0 Mar 13 12:39:18.354557 master-0 kubenswrapper[7518]: E0313 12:39:18.354416 7518 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 13 12:39:18.765645 master-0 kubenswrapper[7518]: E0313 12:39:18.765431 7518 controller.go:195] "Failed to update lease" err="Put \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 13 12:39:19.341638 master-0 kubenswrapper[7518]: I0313 12:39:19.341578 7518 generic.go:334] "Generic (PLEG): container finished" podID="77ef7e49-eb85-4f5e-94d3-a6a8619a6243" containerID="3add725e66228351c75651bb4a7357a39de488d2f8d517621841a317712aba3a" exitCode=0 Mar 13 12:39:25.376994 master-0 kubenswrapper[7518]: I0313 12:39:25.376927 7518 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-network-operator_network-operator-7c649bf6d4-kh6n9_4dd0fc2f-f2ee-4447-a747-04a178288cf0/network-operator/0.log" Mar 13 12:39:25.377622 master-0 kubenswrapper[7518]: I0313 12:39:25.377002 7518 generic.go:334] "Generic (PLEG): container finished" podID="4dd0fc2f-f2ee-4447-a747-04a178288cf0" containerID="638f7edbf4d5a7bd9c1277ff74b0deabee140db71794ce849e8ed2fe8e2bdb95" exitCode=255 Mar 13 12:39:28.355527 master-0 kubenswrapper[7518]: E0313 12:39:28.355423 7518 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 13 12:39:28.355527 master-0 kubenswrapper[7518]: E0313 12:39:28.355498 7518 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Mar 13 12:39:28.766390 master-0 kubenswrapper[7518]: E0313 12:39:28.766203 7518 controller.go:195] "Failed to update lease" err="Put \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 13 12:39:28.766390 master-0 kubenswrapper[7518]: I0313 12:39:28.766304 7518 controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease" Mar 13 12:39:29.330252 master-0 kubenswrapper[7518]: E0313 12:39:29.329970 7518 kubelet.go:1929] "Failed creating a mirror pod for" err="Internal error occurred: admission plugin \"LimitRanger\" failed to complete mutation in 13s" pod="openshift-etcd/etcd-master-0" Mar 13 12:39:29.400345 master-0 kubenswrapper[7518]: I0313 12:39:29.400278 7518 generic.go:334] "Generic (PLEG): container finished" podID="f5775266-5e58-44ed-81cb-dfe3faf38add" 
containerID="b93548b4b4252ac17adfb04acbab06411e860b90fed7b1160d6dcde46321cd0a" exitCode=0 Mar 13 12:39:32.426303 master-0 kubenswrapper[7518]: I0313 12:39:32.426215 7518 generic.go:334] "Generic (PLEG): container finished" podID="ec5ec2e2-f7b3-43a1-87da-fbbe0ee5b118" containerID="69f6736e401004be8e5844a5f9b7891b28a4228a05eb13fc36ff3b64b8740138" exitCode=0 Mar 13 12:39:32.428438 master-0 kubenswrapper[7518]: I0313 12:39:32.428402 7518 generic.go:334] "Generic (PLEG): container finished" podID="034aaf8e-95df-4171-bae4-e7abe58d15f7" containerID="c27448fad258056de304ba3c30b9268468cc1c542046d6c37c21797efa146b54" exitCode=0 Mar 13 12:39:33.439479 master-0 kubenswrapper[7518]: I0313 12:39:33.439376 7518 generic.go:334] "Generic (PLEG): container finished" podID="887d261f-d07f-4ef0-a230-6568f47acf4d" containerID="ac30e49a3ae0e3ef59ed9c3728ae1c26bf004ec3b0fe4cf00ec315598faa9cf4" exitCode=0 Mar 13 12:39:34.293274 master-0 kubenswrapper[7518]: I0313 12:39:34.293192 7518 status_manager.go:851] "Failed to get status for pod" podUID="ef42b65e-2d92-46ac-baaf-30e213787781" pod="openshift-dns/dns-default-m7k6m" err="the server was unable to return a response in the time allotted, but may still be processing the request (get pods dns-default-m7k6m)" Mar 13 12:39:34.448520 master-0 kubenswrapper[7518]: I0313 12:39:34.448459 7518 generic.go:334] "Generic (PLEG): container finished" podID="15b592d6-3c48-45d4-9172-d28632ae8995" containerID="c3cc4d20a3385510f2813df129cea65d1b836444e4586b47995a2d6b48933eba" exitCode=0 Mar 13 12:39:37.465899 master-0 kubenswrapper[7518]: I0313 12:39:37.465784 7518 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-network-node-identity_network-node-identity-qg8q5_1f43b4e7-5cd1-46d2-a02e-0d846b2e5182/approver/0.log" Mar 13 12:39:37.466406 master-0 kubenswrapper[7518]: I0313 12:39:37.466054 7518 generic.go:334] "Generic (PLEG): container finished" podID="1f43b4e7-5cd1-46d2-a02e-0d846b2e5182" 
containerID="8c3d9fdbcfd0987b6eb3f7869d1d1d034470ad27e956a473bf9fb468daecb5e8" exitCode=1 Mar 13 12:39:38.766893 master-0 kubenswrapper[7518]: E0313 12:39:38.766769 7518 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="200ms" Mar 13 12:39:39.479106 master-0 kubenswrapper[7518]: I0313 12:39:39.478975 7518 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operator-lifecycle-manager_olm-operator-d64cfc9db-rfqb9_d5a19b80-d488-46d3-a4a8-0b80361077e1/olm-operator/0.log" Mar 13 12:39:39.479106 master-0 kubenswrapper[7518]: I0313 12:39:39.479040 7518 generic.go:334] "Generic (PLEG): container finished" podID="d5a19b80-d488-46d3-a4a8-0b80361077e1" containerID="47e1707cfebdcd64e29e4d18bf48d4efe18567479faf12290a7bcd51f3b4d7e2" exitCode=1 Mar 13 12:39:39.480579 master-0 kubenswrapper[7518]: I0313 12:39:39.480540 7518 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operator-lifecycle-manager_catalog-operator-7d9c49f57b-tlnkd_10944f9c-8ce9-44e6-9c36-a0ea19d8cae3/catalog-operator/0.log" Mar 13 12:39:39.480657 master-0 kubenswrapper[7518]: I0313 12:39:39.480584 7518 generic.go:334] "Generic (PLEG): container finished" podID="10944f9c-8ce9-44e6-9c36-a0ea19d8cae3" containerID="9d05f0d44d2a573355b6b4eea02a702f641e31e420669a5e155b6a442793e880" exitCode=1 Mar 13 12:39:39.610637 master-0 kubenswrapper[7518]: E0313 12:39:39.610573 7518 mirror_client.go:138] "Failed deleting a mirror pod" err="Timeout: request did not complete within requested timeout - context deadline exceeded" pod="openshift-etcd/etcd-master-0-master-0" Mar 13 12:39:39.610914 master-0 kubenswrapper[7518]: E0313 12:39:39.610742 7518 kubelet.go:2526] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" 
actual="34.013s" Mar 13 12:39:39.610914 master-0 kubenswrapper[7518]: I0313 12:39:39.610788 7518 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/package-server-manager-854648ff6d-669qk" Mar 13 12:39:39.617680 master-0 kubenswrapper[7518]: I0313 12:39:39.617609 7518 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-etcd/etcd-master-0-master-0" podUID="" Mar 13 12:39:40.773824 master-0 kubenswrapper[7518]: I0313 12:39:40.773751 7518 patch_prober.go:28] interesting pod/catalog-operator-7d9c49f57b-tlnkd container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.128.0.20:8443/healthz\": dial tcp 10.128.0.20:8443: connect: connection refused" start-of-body= Mar 13 12:39:40.774619 master-0 kubenswrapper[7518]: I0313 12:39:40.773842 7518 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-7d9c49f57b-tlnkd" podUID="10944f9c-8ce9-44e6-9c36-a0ea19d8cae3" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.128.0.20:8443/healthz\": dial tcp 10.128.0.20:8443: connect: connection refused" Mar 13 12:39:40.774619 master-0 kubenswrapper[7518]: I0313 12:39:40.773751 7518 patch_prober.go:28] interesting pod/catalog-operator-7d9c49f57b-tlnkd container/catalog-operator namespace/openshift-operator-lifecycle-manager: Liveness probe status=failure output="Get \"https://10.128.0.20:8443/healthz\": dial tcp 10.128.0.20:8443: connect: connection refused" start-of-body= Mar 13 12:39:40.774619 master-0 kubenswrapper[7518]: I0313 12:39:40.773961 7518 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operator-lifecycle-manager/catalog-operator-7d9c49f57b-tlnkd" podUID="10944f9c-8ce9-44e6-9c36-a0ea19d8cae3" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.128.0.20:8443/healthz\": dial tcp 10.128.0.20:8443: connect: 
connection refused" Mar 13 12:39:40.778472 master-0 kubenswrapper[7518]: I0313 12:39:40.778400 7518 patch_prober.go:28] interesting pod/olm-operator-d64cfc9db-rfqb9 container/olm-operator namespace/openshift-operator-lifecycle-manager: Liveness probe status=failure output="Get \"https://10.128.0.23:8443/healthz\": dial tcp 10.128.0.23:8443: connect: connection refused" start-of-body= Mar 13 12:39:40.778472 master-0 kubenswrapper[7518]: I0313 12:39:40.778455 7518 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operator-lifecycle-manager/olm-operator-d64cfc9db-rfqb9" podUID="d5a19b80-d488-46d3-a4a8-0b80361077e1" containerName="olm-operator" probeResult="failure" output="Get \"https://10.128.0.23:8443/healthz\": dial tcp 10.128.0.23:8443: connect: connection refused" Mar 13 12:39:40.779048 master-0 kubenswrapper[7518]: I0313 12:39:40.778526 7518 patch_prober.go:28] interesting pod/olm-operator-d64cfc9db-rfqb9 container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.128.0.23:8443/healthz\": dial tcp 10.128.0.23:8443: connect: connection refused" start-of-body= Mar 13 12:39:40.779048 master-0 kubenswrapper[7518]: I0313 12:39:40.778572 7518 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-d64cfc9db-rfqb9" podUID="d5a19b80-d488-46d3-a4a8-0b80361077e1" containerName="olm-operator" probeResult="failure" output="Get \"https://10.128.0.23:8443/healthz\": dial tcp 10.128.0.23:8443: connect: connection refused" Mar 13 12:39:41.670847 master-0 kubenswrapper[7518]: E0313 12:39:41.670705 7518 event.go:359] "Server rejected event (will not retry!)" err="Timeout: request did not complete within requested timeout - context deadline exceeded" event="&Event{ObjectMeta:{network-metrics-daemon-r9lmb.189c66eb94af50ee openshift-multus 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-multus,Name:network-metrics-daemon-r9lmb,UID:29b6aa89-0416-4595-9deb-10b290521d86,APIVersion:v1,ResourceVersion:3460,FieldPath:spec.containers{network-metrics-daemon},},Reason:Pulled,Message:Successfully pulled image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:89cb093f319eaa04acfe9431b8697bffbc71ab670546f7ed257daa332165c626\" in 3.877s (3.877s including waiting). Image size: 448828105 bytes.,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-13 12:38:35.274997998 +0000 UTC m=+69.908067195,LastTimestamp:2026-03-13 12:38:35.274997998 +0000 UTC m=+69.908067195,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 13 12:39:48.611016 master-0 kubenswrapper[7518]: E0313 12:39:48.610814 7518 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-13T12:39:38Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-13T12:39:38Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-13T12:39:38Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-13T12:39:38Z\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:82f121f9d021a9843b9458f9f222c40f292f2c21dcfcf00f05daacaca8a949c0\\\"],\\\"sizeBytes\\\":1637445817},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:381e96959e3c3b08a3e2715e6024697ae14af31bd0378b49f583e984b3b9a192\\\"],\\\"sizeBytes\\\":1238047254},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha2
56:c9330c756dd6ab107e9a4b671bc52742c90d5be11a8380d8b710e2bd4e0ed43c\\\"],\\\"sizeBytes\\\":992610645},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fca00eb71b1f03e5b5180a66f3871f5626d337b56196622f5842cfc165523b4\\\"],\\\"sizeBytes\\\":943837171},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e207c762b7802ee0e54507d21ed1f25b19eddc511a4b824934c16c163193be6a\\\"],\\\"sizeBytes\\\":876146500},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:41dbd66e9a886c1fd7a99752f358c6125a209e83c0dd37b35730baae58d82ee8\\\"],\\\"sizeBytes\\\":862633255},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9bfcd8017eede3fb66fa3f5b47c27508b787d38455689154461f0e6a5dc303ff\\\"],\\\"sizeBytes\\\":772939850},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9c946fdc5a4cd16ff998c17844780e7efc38f7f38b97a8a40d75cd77b318ddef\\\"],\\\"sizeBytes\\\":687947017},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0c03cb25dc6f6a865529ebc979e8d7d08492b28fd3fb93beddf30e1cb06f1245\\\"],\\\"sizeBytes\\\":683169303},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3f34dc492c80a3dee4643cc2291044750ac51e6e919b973de8723fa8b70bde70\\\"],\\\"sizeBytes\\\":677929075},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a149ed17b20a7577fceacfc5198f8b7b3edf314ee22f77bd6ab87f06a3aa17f3\\\"],\\\"sizeBytes\\\":621647686},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1ec9d3dbcc6f9817c0f6d09f64c0d98c91b03afbb1fcb3c1e1718aca900754b\\\"],\\\"sizeBytes\\\":589379637},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1575be013a898f153cbf012aeaf28ce720022f934dc05bdffbe479e30999d460\\\"],\\\"sizeBytes\\\":582153879},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:eb82e437a701ce83b70e56be8477d987da67578714dda3d9fa6628804b1b56f5\\\"],\\\"sizeBytes\\\":558210
153},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:28f33d62fd0b94c5ea0ebcd7a4216848c8dd671a38d901ce98f4c399b700e1c7\\\"],\\\"sizeBytes\\\":548751793},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cc20748723f55f960cfb6328d1591880bbd1b3452155633996d4f41fc7c5f46b\\\"],\\\"sizeBytes\\\":529324693},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ac6f0695d3386e6d601f4ae507940981352fa3ad884b0fed6fb25698c5e6f916\\\"],\\\"sizeBytes\\\":528946249},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6088910bdc1583b275fab261e3234c0b63b4cc16d01bcea697b6a7f6db13bdf3\\\"],\\\"sizeBytes\\\":518384455},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-release@sha256:14bd3c04daa885009785d48f4973e2890751a7ec116cc14d17627245cda54d7b\\\"],\\\"sizeBytes\\\":517997625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5500329ab50804678fb8a90b96bf2a469bca16b620fb6dd2f5f5a17106e94898\\\"],\\\"sizeBytes\\\":514980169},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9bd818e37e1f9dbe5393c557b89e81010d68171408e0e4157a3d92ae0ca1c953\\\"],\\\"sizeBytes\\\":513220825},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d601c8437b4d8bbe2da0f3b08f1bd8693f5a4ef6d835377ec029c79d9dca5dab\\\"],\\\"sizeBytes\\\":512273539},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b47d2b146e833bc1612a652136f43afcf1ba30f32cbd0a2f06ca9fc80d969f0\\\"],\\\"sizeBytes\\\":511226810},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:834063dd26fb3d2489e193489198a0d5fbe9c775a0e30173e5fcef6994fbf0f6\\\"],\\\"sizeBytes\\\":511164376},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee46e13e26156c904e5784e2d64511021ed0974a169ccd6476b05bff1c44ec56\\\"],\\\"sizeBytes\\\":508888174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7220d16ea511c0f0410cf45
db45aaafcc64847c9cb5732ad1eff39ceb482cdba\\\"],\\\"sizeBytes\\\":508544235},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:526c5c02a8fa86a2fa83a7087d4a5c4b1c4072c0f3906163494cc3b3c1295e9b\\\"],\\\"sizeBytes\\\":507967997},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4010a8f9d932615336227e2fd43325d4fa9025dca4bebe032106efea733fcfc3\\\"],\\\"sizeBytes\\\":506479655},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:76b719f5bd541eb1a8bae124d650896b533e7bc3107be536e598b3ab4e135282\\\"],\\\"sizeBytes\\\":506394574},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5de69354d08184ecd6144facc1461777674674e8304971216d4cf1a5025472b9\\\"],\\\"sizeBytes\\\":505344964},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a324f47cf789c0480fa4bcb0812152abc3cd844318bab193108fe4349eed609\\\"],\\\"sizeBytes\\\":505242594},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d11f13e867f4df046ca6789bb7273da5d0c08895b3dea00949c8a5458f9e22f9\\\"],\\\"sizeBytes\\\":504623546},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:76bdc35338c4d0f5e5b9448fb73e3578656f908a962286692e12a0372ec721d5\\\"],\\\"sizeBytes\\\":495994161},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ff2db11ce277288befab25ddb86177e832842d2edb5607a2da8f252a030e1cfc\\\"],\\\"sizeBytes\\\":495064829},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fe5144b1f72bdcf5d5a52130f02ed86fbec3875cc4ac108ead00eaac1659e06\\\"],\\\"sizeBytes\\\":487090672},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4a4c3e6ca0cd26f7eb5270cfafbcf423cf2986d152bf5b9fc6469d40599e104e\\\"],\\\"sizeBytes\\\":484450382},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c54c3f7cffe057ae0bdf26163d5e46744685083ae16fc97112e32beacd2d8955\\\"],\\\"sizeBytes\\\":484175664},{\\\"names\\\":[\\\"q
uay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9b8bc43bac294be3c7669cde049e388ad9d8751242051ba40f83e1c401eceda\\\"],\\\"sizeBytes\\\":468263999},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8677f7a973553c25d282bc249fc8bc0f5aa42fb144ea0956d1f04c5a6cd80501\\\"],\\\"sizeBytes\\\":465086330},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a85dab5856916220df6f05ce9d6aa10cd4fa0234093b55355246690bba05ad1\\\"],\\\"sizeBytes\\\":463700811},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b714a7ada1e295b599b432f32e1fd5b74c8cdbe6fe51e95306322b25cb873914\\\"],\\\"sizeBytes\\\":458126424},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5230462066ab36e3025524e948dd33fa6f51ee29a4f91fa469bfc268568b5fd9\\\"],\\\"sizeBytes\\\":456575686},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:89cb093f319eaa04acfe9431b8697bffbc71ab670546f7ed257daa332165c626\\\"],\\\"sizeBytes\\\":448828105},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c680fcc9fd6b66099ca4c0f512521b6f8e0bc29273ddb9405730bc54bacb6783\\\"],\\\"sizeBytes\\\":448041621},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cf9670d0f269f8d49fd9ef4981999be195f6624a4146aa93d9201eb8acc81053\\\"],\\\"sizeBytes\\\":443271011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8ceca1efee55b9fd5089428476bbc401fe73db7c0b0f5e16d4ad28ed0f0f9d43\\\"],\\\"sizeBytes\\\":438654375},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ace4dcd008420277d915fe983b07bbb50fb3ab0673f28d0166424a75bc2137e7\\\"],\\\"sizeBytes\\\":411585608},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c8f0fda36e9a2040dbe0537361dcd73658df4e669d846f8101a8f9f29f0be9a7\\\"],\\\"sizeBytes\\\":407347126},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1d605384f31a8085f78a96145c2c3dc51afe22721144
196140a2699b7c07ebe3\\\"],\\\"sizeBytes\\\":396521759}]}}\" for node \"master-0\": Patch \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0/status?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 13 12:39:48.612617 master-0 kubenswrapper[7518]: I0313 12:39:48.612570 7518 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler_installer-3-master-0_bfabb495-1707-4c3d-b00e-2f3b2976fb92/installer/0.log" Mar 13 12:39:48.612923 master-0 kubenswrapper[7518]: I0313 12:39:48.612641 7518 generic.go:334] "Generic (PLEG): container finished" podID="bfabb495-1707-4c3d-b00e-2f3b2976fb92" containerID="d8cf37e4c8a527d04eff5203f40779f993e328715e0f8f8ef7b2ff90bad966cf" exitCode=1 Mar 13 12:39:48.969663 master-0 kubenswrapper[7518]: E0313 12:39:48.969481 7518 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="400ms" Mar 13 12:39:50.624089 master-0 kubenswrapper[7518]: I0313 12:39:50.624023 7518 generic.go:334] "Generic (PLEG): container finished" podID="f78c05e1499b533b83f091333d61f045" containerID="696192325e102818ab8863a16ab52b3671d6dc3f225d1e0faf06a32633060bda" exitCode=1 Mar 13 12:39:50.774549 master-0 kubenswrapper[7518]: I0313 12:39:50.774434 7518 patch_prober.go:28] interesting pod/catalog-operator-7d9c49f57b-tlnkd container/catalog-operator namespace/openshift-operator-lifecycle-manager: Liveness probe status=failure output="Get \"https://10.128.0.20:8443/healthz\": dial tcp 10.128.0.20:8443: connect: connection refused" start-of-body= Mar 13 12:39:50.774843 master-0 kubenswrapper[7518]: I0313 12:39:50.774439 7518 patch_prober.go:28] interesting pod/catalog-operator-7d9c49f57b-tlnkd container/catalog-operator namespace/openshift-operator-lifecycle-manager: 
Readiness probe status=failure output="Get \"https://10.128.0.20:8443/healthz\": dial tcp 10.128.0.20:8443: connect: connection refused" start-of-body= Mar 13 12:39:50.774843 master-0 kubenswrapper[7518]: I0313 12:39:50.774591 7518 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operator-lifecycle-manager/catalog-operator-7d9c49f57b-tlnkd" podUID="10944f9c-8ce9-44e6-9c36-a0ea19d8cae3" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.128.0.20:8443/healthz\": dial tcp 10.128.0.20:8443: connect: connection refused" Mar 13 12:39:50.774843 master-0 kubenswrapper[7518]: I0313 12:39:50.774668 7518 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-7d9c49f57b-tlnkd" podUID="10944f9c-8ce9-44e6-9c36-a0ea19d8cae3" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.128.0.20:8443/healthz\": dial tcp 10.128.0.20:8443: connect: connection refused" Mar 13 12:39:50.778299 master-0 kubenswrapper[7518]: I0313 12:39:50.778246 7518 patch_prober.go:28] interesting pod/olm-operator-d64cfc9db-rfqb9 container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.128.0.23:8443/healthz\": dial tcp 10.128.0.23:8443: connect: connection refused" start-of-body= Mar 13 12:39:50.778585 master-0 kubenswrapper[7518]: I0313 12:39:50.778526 7518 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-d64cfc9db-rfqb9" podUID="d5a19b80-d488-46d3-a4a8-0b80361077e1" containerName="olm-operator" probeResult="failure" output="Get \"https://10.128.0.23:8443/healthz\": dial tcp 10.128.0.23:8443: connect: connection refused" Mar 13 12:39:50.778919 master-0 kubenswrapper[7518]: I0313 12:39:50.778879 7518 patch_prober.go:28] interesting pod/olm-operator-d64cfc9db-rfqb9 container/olm-operator namespace/openshift-operator-lifecycle-manager: Liveness probe 
status=failure output="Get \"https://10.128.0.23:8443/healthz\": dial tcp 10.128.0.23:8443: connect: connection refused" start-of-body= Mar 13 12:39:50.779212 master-0 kubenswrapper[7518]: I0313 12:39:50.779125 7518 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operator-lifecycle-manager/olm-operator-d64cfc9db-rfqb9" podUID="d5a19b80-d488-46d3-a4a8-0b80361077e1" containerName="olm-operator" probeResult="failure" output="Get \"https://10.128.0.23:8443/healthz\": dial tcp 10.128.0.23:8443: connect: connection refused" Mar 13 12:39:58.613399 master-0 kubenswrapper[7518]: E0313 12:39:58.613281 7518 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 13 12:39:59.371722 master-0 kubenswrapper[7518]: E0313 12:39:59.371614 7518 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="800ms" Mar 13 12:40:00.774560 master-0 kubenswrapper[7518]: I0313 12:40:00.774491 7518 patch_prober.go:28] interesting pod/catalog-operator-7d9c49f57b-tlnkd container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.128.0.20:8443/healthz\": dial tcp 10.128.0.20:8443: connect: connection refused" start-of-body= Mar 13 12:40:00.775105 master-0 kubenswrapper[7518]: I0313 12:40:00.774581 7518 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-7d9c49f57b-tlnkd" podUID="10944f9c-8ce9-44e6-9c36-a0ea19d8cae3" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.128.0.20:8443/healthz\": 
dial tcp 10.128.0.20:8443: connect: connection refused" Mar 13 12:40:00.775105 master-0 kubenswrapper[7518]: I0313 12:40:00.774483 7518 patch_prober.go:28] interesting pod/catalog-operator-7d9c49f57b-tlnkd container/catalog-operator namespace/openshift-operator-lifecycle-manager: Liveness probe status=failure output="Get \"https://10.128.0.20:8443/healthz\": dial tcp 10.128.0.20:8443: connect: connection refused" start-of-body= Mar 13 12:40:00.775105 master-0 kubenswrapper[7518]: I0313 12:40:00.774659 7518 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operator-lifecycle-manager/catalog-operator-7d9c49f57b-tlnkd" podUID="10944f9c-8ce9-44e6-9c36-a0ea19d8cae3" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.128.0.20:8443/healthz\": dial tcp 10.128.0.20:8443: connect: connection refused" Mar 13 12:40:00.778272 master-0 kubenswrapper[7518]: I0313 12:40:00.778229 7518 patch_prober.go:28] interesting pod/olm-operator-d64cfc9db-rfqb9 container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.128.0.23:8443/healthz\": dial tcp 10.128.0.23:8443: connect: connection refused" start-of-body= Mar 13 12:40:00.778354 master-0 kubenswrapper[7518]: I0313 12:40:00.778287 7518 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-d64cfc9db-rfqb9" podUID="d5a19b80-d488-46d3-a4a8-0b80361077e1" containerName="olm-operator" probeResult="failure" output="Get \"https://10.128.0.23:8443/healthz\": dial tcp 10.128.0.23:8443: connect: connection refused" Mar 13 12:40:00.778354 master-0 kubenswrapper[7518]: I0313 12:40:00.778298 7518 patch_prober.go:28] interesting pod/olm-operator-d64cfc9db-rfqb9 container/olm-operator namespace/openshift-operator-lifecycle-manager: Liveness probe status=failure output="Get \"https://10.128.0.23:8443/healthz\": dial tcp 10.128.0.23:8443: connect: connection refused" start-of-body= Mar 13 
12:40:00.778436 master-0 kubenswrapper[7518]: I0313 12:40:00.778341 7518 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operator-lifecycle-manager/olm-operator-d64cfc9db-rfqb9" podUID="d5a19b80-d488-46d3-a4a8-0b80361077e1" containerName="olm-operator" probeResult="failure" output="Get \"https://10.128.0.23:8443/healthz\": dial tcp 10.128.0.23:8443: connect: connection refused"
Mar 13 12:40:08.614752 master-0 kubenswrapper[7518]: E0313 12:40:08.614645 7518 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Mar 13 12:40:10.172621 master-0 kubenswrapper[7518]: E0313 12:40:10.172533 7518 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="1.6s"
Mar 13 12:40:10.774012 master-0 kubenswrapper[7518]: I0313 12:40:10.773883 7518 patch_prober.go:28] interesting pod/catalog-operator-7d9c49f57b-tlnkd container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.128.0.20:8443/healthz\": dial tcp 10.128.0.20:8443: connect: connection refused" start-of-body=
Mar 13 12:40:10.774387 master-0 kubenswrapper[7518]: I0313 12:40:10.774009 7518 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-7d9c49f57b-tlnkd" podUID="10944f9c-8ce9-44e6-9c36-a0ea19d8cae3" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.128.0.20:8443/healthz\": dial tcp 10.128.0.20:8443: connect: connection refused"
Mar 13 12:40:10.779194 master-0 kubenswrapper[7518]: I0313 12:40:10.779051 7518 patch_prober.go:28] interesting pod/olm-operator-d64cfc9db-rfqb9 container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.128.0.23:8443/healthz\": dial tcp 10.128.0.23:8443: connect: connection refused" start-of-body=
Mar 13 12:40:10.779419 master-0 kubenswrapper[7518]: I0313 12:40:10.779195 7518 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-d64cfc9db-rfqb9" podUID="d5a19b80-d488-46d3-a4a8-0b80361077e1" containerName="olm-operator" probeResult="failure" output="Get \"https://10.128.0.23:8443/healthz\": dial tcp 10.128.0.23:8443: connect: connection refused"
Mar 13 12:40:13.620796 master-0 kubenswrapper[7518]: E0313 12:40:13.620630 7518 mirror_client.go:138] "Failed deleting a mirror pod" err="Timeout: request did not complete within requested timeout - context deadline exceeded" pod="openshift-etcd/etcd-master-0-master-0"
Mar 13 12:40:13.621825 master-0 kubenswrapper[7518]: E0313 12:40:13.621300 7518 kubelet.go:2526] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="34.01s"
Mar 13 12:40:13.621825 master-0 kubenswrapper[7518]: I0313 12:40:13.621521 7518 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/catalog-operator-7d9c49f57b-tlnkd"
Mar 13 12:40:13.621825 master-0 kubenswrapper[7518]: I0313 12:40:13.621624 7518 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="kube-system/bootstrap-kube-controller-manager-master-0"
Mar 13 12:40:13.621825 master-0 kubenswrapper[7518]: I0313 12:40:13.621656 7518 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-1-master-0" event={"ID":"88bf0bf8-c0ee-454e-8d8b-592a6e796cfc","Type":"ContainerDied","Data":"f528e329070374fe2c7b4c96e9e572f6132a46e0533c48dae8a60425fcb61903"}
Mar 13 12:40:13.623092 master-0 kubenswrapper[7518]: I0313 12:40:13.622406 7518 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-operator-lifecycle-manager/catalog-operator-7d9c49f57b-tlnkd"
Mar 13 12:40:13.623312 master-0 kubenswrapper[7518]: I0313 12:40:13.622871 7518 scope.go:117] "RemoveContainer" containerID="696192325e102818ab8863a16ab52b3671d6dc3f225d1e0faf06a32633060bda"
Mar 13 12:40:13.624045 master-0 kubenswrapper[7518]: I0313 12:40:13.623978 7518 scope.go:117] "RemoveContainer" containerID="9d05f0d44d2a573355b6b4eea02a702f641e31e420669a5e155b6a442793e880"
Mar 13 12:40:13.630653 master-0 kubenswrapper[7518]: I0313 12:40:13.630319 7518 scope.go:117] "RemoveContainer" containerID="638f7edbf4d5a7bd9c1277ff74b0deabee140db71794ce849e8ed2fe8e2bdb95"
Mar 13 12:40:13.630653 master-0 kubenswrapper[7518]: I0313 12:40:13.630568 7518 scope.go:117] "RemoveContainer" containerID="b93548b4b4252ac17adfb04acbab06411e860b90fed7b1160d6dcde46321cd0a"
Mar 13 12:40:13.632702 master-0 kubenswrapper[7518]: I0313 12:40:13.632625 7518 scope.go:117] "RemoveContainer" containerID="c27448fad258056de304ba3c30b9268468cc1c542046d6c37c21797efa146b54"
Mar 13 12:40:13.633824 master-0 kubenswrapper[7518]: I0313 12:40:13.633764 7518 scope.go:117] "RemoveContainer" containerID="7049109a836522af070e6bb63ef4a03a6cf57954c7a7d1ea2471e59144150127"
Mar 13 12:40:13.634956 master-0 kubenswrapper[7518]: I0313 12:40:13.634869 7518 scope.go:117] "RemoveContainer" containerID="3add725e66228351c75651bb4a7357a39de488d2f8d517621841a317712aba3a"
Mar 13 12:40:13.635394 master-0 kubenswrapper[7518]: I0313 12:40:13.635343 7518 scope.go:117] "RemoveContainer" containerID="69f6736e401004be8e5844a5f9b7891b28a4228a05eb13fc36ff3b64b8740138"
Mar 13 12:40:13.637631 master-0 kubenswrapper[7518]: I0313 12:40:13.637556 7518 scope.go:117] "RemoveContainer" containerID="c3cc4d20a3385510f2813df129cea65d1b836444e4586b47995a2d6b48933eba"
Mar 13 12:40:13.698329 master-0 kubenswrapper[7518]: I0313 12:40:13.698289 7518 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-etcd/etcd-master-0-master-0" podUID=""
Mar 13 12:40:13.700425 master-0 kubenswrapper[7518]: I0313 12:40:13.700366 7518 scope.go:117] "RemoveContainer" containerID="47e1707cfebdcd64e29e4d18bf48d4efe18567479faf12290a7bcd51f3b4d7e2"
Mar 13 12:40:13.700966 master-0 kubenswrapper[7518]: I0313 12:40:13.700849 7518 scope.go:117] "RemoveContainer" containerID="8c3d9fdbcfd0987b6eb3f7869d1d1d034470ad27e956a473bf9fb468daecb5e8"
Mar 13 12:40:14.090815 master-0 kubenswrapper[7518]: I0313 12:40:14.090524 7518 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_installer-1-master-0_88bf0bf8-c0ee-454e-8d8b-592a6e796cfc/installer/0.log"
Mar 13 12:40:14.090815 master-0 kubenswrapper[7518]: I0313 12:40:14.090626 7518 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-1-master-0"
Mar 13 12:40:14.217194 master-0 kubenswrapper[7518]: I0313 12:40:14.216497 7518 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/88bf0bf8-c0ee-454e-8d8b-592a6e796cfc-kube-api-access\") pod \"88bf0bf8-c0ee-454e-8d8b-592a6e796cfc\" (UID: \"88bf0bf8-c0ee-454e-8d8b-592a6e796cfc\") "
Mar 13 12:40:14.217194 master-0 kubenswrapper[7518]: I0313 12:40:14.216533 7518 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/88bf0bf8-c0ee-454e-8d8b-592a6e796cfc-kubelet-dir\") pod \"88bf0bf8-c0ee-454e-8d8b-592a6e796cfc\" (UID: \"88bf0bf8-c0ee-454e-8d8b-592a6e796cfc\") "
Mar 13 12:40:14.217194 master-0 kubenswrapper[7518]: I0313 12:40:14.216611 7518 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/88bf0bf8-c0ee-454e-8d8b-592a6e796cfc-var-lock\") pod \"88bf0bf8-c0ee-454e-8d8b-592a6e796cfc\" (UID: \"88bf0bf8-c0ee-454e-8d8b-592a6e796cfc\") "
Mar 13 12:40:14.217194 master-0 kubenswrapper[7518]: I0313 12:40:14.216845 7518 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/88bf0bf8-c0ee-454e-8d8b-592a6e796cfc-var-lock" (OuterVolumeSpecName: "var-lock") pod "88bf0bf8-c0ee-454e-8d8b-592a6e796cfc" (UID: "88bf0bf8-c0ee-454e-8d8b-592a6e796cfc"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 13 12:40:14.217194 master-0 kubenswrapper[7518]: I0313 12:40:14.216830 7518 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/88bf0bf8-c0ee-454e-8d8b-592a6e796cfc-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "88bf0bf8-c0ee-454e-8d8b-592a6e796cfc" (UID: "88bf0bf8-c0ee-454e-8d8b-592a6e796cfc"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 13 12:40:14.225595 master-0 kubenswrapper[7518]: I0313 12:40:14.223634 7518 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/88bf0bf8-c0ee-454e-8d8b-592a6e796cfc-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "88bf0bf8-c0ee-454e-8d8b-592a6e796cfc" (UID: "88bf0bf8-c0ee-454e-8d8b-592a6e796cfc"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 13 12:40:14.253452 master-0 kubenswrapper[7518]: I0313 12:40:14.253400 7518 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler_installer-3-master-0_bfabb495-1707-4c3d-b00e-2f3b2976fb92/installer/0.log"
Mar 13 12:40:14.253656 master-0 kubenswrapper[7518]: I0313 12:40:14.253484 7518 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/installer-3-master-0"
Mar 13 12:40:14.256471 master-0 kubenswrapper[7518]: I0313 12:40:14.256415 7518 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_installer-1-master-0_3828446d-a3e3-412f-a0e7-7347b5de523a/installer/0.log"
Mar 13 12:40:14.256471 master-0 kubenswrapper[7518]: I0313 12:40:14.256470 7518 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/installer-1-master-0"
Mar 13 12:40:14.318373 master-0 kubenswrapper[7518]: I0313 12:40:14.318321 7518 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/88bf0bf8-c0ee-454e-8d8b-592a6e796cfc-var-lock\") on node \"master-0\" DevicePath \"\""
Mar 13 12:40:14.318373 master-0 kubenswrapper[7518]: I0313 12:40:14.318356 7518 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/88bf0bf8-c0ee-454e-8d8b-592a6e796cfc-kube-api-access\") on node \"master-0\" DevicePath \"\""
Mar 13 12:40:14.318373 master-0 kubenswrapper[7518]: I0313 12:40:14.318371 7518 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/88bf0bf8-c0ee-454e-8d8b-592a6e796cfc-kubelet-dir\") on node \"master-0\" DevicePath \"\""
Mar 13 12:40:14.419289 master-0 kubenswrapper[7518]: I0313 12:40:14.419000 7518 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/3828446d-a3e3-412f-a0e7-7347b5de523a-kubelet-dir\") pod \"3828446d-a3e3-412f-a0e7-7347b5de523a\" (UID: \"3828446d-a3e3-412f-a0e7-7347b5de523a\") "
Mar 13 12:40:14.419289 master-0 kubenswrapper[7518]: I0313 12:40:14.419121 7518 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/bfabb495-1707-4c3d-b00e-2f3b2976fb92-var-lock\") pod \"bfabb495-1707-4c3d-b00e-2f3b2976fb92\" (UID: \"bfabb495-1707-4c3d-b00e-2f3b2976fb92\") "
Mar 13 12:40:14.419665 master-0 kubenswrapper[7518]: I0313 12:40:14.419454 7518 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/3828446d-a3e3-412f-a0e7-7347b5de523a-var-lock\") pod \"3828446d-a3e3-412f-a0e7-7347b5de523a\" (UID: \"3828446d-a3e3-412f-a0e7-7347b5de523a\") "
Mar 13 12:40:14.419665 master-0 kubenswrapper[7518]: I0313 12:40:14.419504 7518 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/bfabb495-1707-4c3d-b00e-2f3b2976fb92-kubelet-dir\") pod \"bfabb495-1707-4c3d-b00e-2f3b2976fb92\" (UID: \"bfabb495-1707-4c3d-b00e-2f3b2976fb92\") "
Mar 13 12:40:14.419665 master-0 kubenswrapper[7518]: I0313 12:40:14.419636 7518 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/bfabb495-1707-4c3d-b00e-2f3b2976fb92-kube-api-access\") pod \"bfabb495-1707-4c3d-b00e-2f3b2976fb92\" (UID: \"bfabb495-1707-4c3d-b00e-2f3b2976fb92\") "
Mar 13 12:40:14.419913 master-0 kubenswrapper[7518]: I0313 12:40:14.419690 7518 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3828446d-a3e3-412f-a0e7-7347b5de523a-kube-api-access\") pod \"3828446d-a3e3-412f-a0e7-7347b5de523a\" (UID: \"3828446d-a3e3-412f-a0e7-7347b5de523a\") "
Mar 13 12:40:14.420887 master-0 kubenswrapper[7518]: I0313 12:40:14.420795 7518 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3828446d-a3e3-412f-a0e7-7347b5de523a-var-lock" (OuterVolumeSpecName: "var-lock") pod "3828446d-a3e3-412f-a0e7-7347b5de523a" (UID: "3828446d-a3e3-412f-a0e7-7347b5de523a"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 13 12:40:14.421052 master-0 kubenswrapper[7518]: I0313 12:40:14.420906 7518 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bfabb495-1707-4c3d-b00e-2f3b2976fb92-var-lock" (OuterVolumeSpecName: "var-lock") pod "bfabb495-1707-4c3d-b00e-2f3b2976fb92" (UID: "bfabb495-1707-4c3d-b00e-2f3b2976fb92"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 13 12:40:14.421052 master-0 kubenswrapper[7518]: I0313 12:40:14.420945 7518 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bfabb495-1707-4c3d-b00e-2f3b2976fb92-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "bfabb495-1707-4c3d-b00e-2f3b2976fb92" (UID: "bfabb495-1707-4c3d-b00e-2f3b2976fb92"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 13 12:40:14.421052 master-0 kubenswrapper[7518]: I0313 12:40:14.420964 7518 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3828446d-a3e3-412f-a0e7-7347b5de523a-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "3828446d-a3e3-412f-a0e7-7347b5de523a" (UID: "3828446d-a3e3-412f-a0e7-7347b5de523a"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 13 12:40:14.424531 master-0 kubenswrapper[7518]: I0313 12:40:14.424463 7518 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3828446d-a3e3-412f-a0e7-7347b5de523a-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "3828446d-a3e3-412f-a0e7-7347b5de523a" (UID: "3828446d-a3e3-412f-a0e7-7347b5de523a"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 13 12:40:14.426735 master-0 kubenswrapper[7518]: I0313 12:40:14.426663 7518 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bfabb495-1707-4c3d-b00e-2f3b2976fb92-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "bfabb495-1707-4c3d-b00e-2f3b2976fb92" (UID: "bfabb495-1707-4c3d-b00e-2f3b2976fb92"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 13 12:40:14.521040 master-0 kubenswrapper[7518]: I0313 12:40:14.520978 7518 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/bfabb495-1707-4c3d-b00e-2f3b2976fb92-kube-api-access\") on node \"master-0\" DevicePath \"\""
Mar 13 12:40:14.521040 master-0 kubenswrapper[7518]: I0313 12:40:14.521032 7518 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3828446d-a3e3-412f-a0e7-7347b5de523a-kube-api-access\") on node \"master-0\" DevicePath \"\""
Mar 13 12:40:14.521040 master-0 kubenswrapper[7518]: I0313 12:40:14.521050 7518 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/3828446d-a3e3-412f-a0e7-7347b5de523a-kubelet-dir\") on node \"master-0\" DevicePath \"\""
Mar 13 12:40:14.521389 master-0 kubenswrapper[7518]: I0313 12:40:14.521065 7518 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/bfabb495-1707-4c3d-b00e-2f3b2976fb92-var-lock\") on node \"master-0\" DevicePath \"\""
Mar 13 12:40:14.521389 master-0 kubenswrapper[7518]: I0313 12:40:14.521080 7518 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/3828446d-a3e3-412f-a0e7-7347b5de523a-var-lock\") on node \"master-0\" DevicePath \"\""
Mar 13 12:40:14.521389 master-0 kubenswrapper[7518]: I0313 12:40:14.521094 7518 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/bfabb495-1707-4c3d-b00e-2f3b2976fb92-kubelet-dir\") on node \"master-0\" DevicePath \"\""
Mar 13 12:40:14.783080 master-0 kubenswrapper[7518]: I0313 12:40:14.782993 7518 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operator-lifecycle-manager_olm-operator-d64cfc9db-rfqb9_d5a19b80-d488-46d3-a4a8-0b80361077e1/olm-operator/0.log"
Mar 13 12:40:14.797434 master-0 kubenswrapper[7518]: I0313 12:40:14.797366 7518 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_installer-1-master-0_88bf0bf8-c0ee-454e-8d8b-592a6e796cfc/installer/0.log"
Mar 13 12:40:14.797776 master-0 kubenswrapper[7518]: I0313 12:40:14.797739 7518 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-1-master-0"
Mar 13 12:40:14.801074 master-0 kubenswrapper[7518]: I0313 12:40:14.801028 7518 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operator-lifecycle-manager_catalog-operator-7d9c49f57b-tlnkd_10944f9c-8ce9-44e6-9c36-a0ea19d8cae3/catalog-operator/0.log"
Mar 13 12:40:14.805903 master-0 kubenswrapper[7518]: I0313 12:40:14.805870 7518 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-network-operator_network-operator-7c649bf6d4-kh6n9_4dd0fc2f-f2ee-4447-a747-04a178288cf0/network-operator/0.log"
Mar 13 12:40:14.810077 master-0 kubenswrapper[7518]: I0313 12:40:14.810038 7518 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-network-node-identity_network-node-identity-qg8q5_1f43b4e7-5cd1-46d2-a02e-0d846b2e5182/approver/0.log"
Mar 13 12:40:14.811875 master-0 kubenswrapper[7518]: I0313 12:40:14.811839 7518 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_installer-1-master-0_3828446d-a3e3-412f-a0e7-7347b5de523a/installer/0.log"
Mar 13 12:40:14.811985 master-0 kubenswrapper[7518]: I0313 12:40:14.811960 7518 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/installer-1-master-0"
Mar 13 12:40:14.813909 master-0 kubenswrapper[7518]: I0313 12:40:14.813877 7518 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler_installer-3-master-0_bfabb495-1707-4c3d-b00e-2f3b2976fb92/installer/0.log"
Mar 13 12:40:14.813987 master-0 kubenswrapper[7518]: I0313 12:40:14.813968 7518 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/installer-3-master-0"
Mar 13 12:40:15.673557 master-0 kubenswrapper[7518]: E0313 12:40:15.673418 7518 event.go:359] "Server rejected event (will not retry!)" err="Timeout: request did not complete within requested timeout - context deadline exceeded" event="&Event{ObjectMeta:{cluster-monitoring-operator-674cbfbd9d-zwtdz.189c66eb95269ce8 openshift-monitoring 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-monitoring,Name:cluster-monitoring-operator-674cbfbd9d-zwtdz,UID:604456a0-4997-43bc-87ef-283a002111fe,APIVersion:v1,ResourceVersion:3566,FieldPath:spec.containers{cluster-monitoring-operator},},Reason:Pulled,Message:Successfully pulled image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4a4c3e6ca0cd26f7eb5270cfafbcf423cf2986d152bf5b9fc6469d40599e104e\" in 3.67s (3.67s including waiting). Image size: 484450382 bytes.,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-13 12:38:35.282816232 +0000 UTC m=+69.915885409,LastTimestamp:2026-03-13 12:38:35.282816232 +0000 UTC m=+69.915885409,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Mar 13 12:40:17.831851 master-0 kubenswrapper[7518]: I0313 12:40:17.831764 7518 generic.go:334] "Generic (PLEG): container finished" podID="d3d998ee-b26f-4e30-83bc-f94f8c68060a" containerID="2678ae1f026392d01bc32426edbdfbe31df6907392fe5e29e35b3e44ffb8f896" exitCode=0
Mar 13 12:40:18.616012 master-0 kubenswrapper[7518]: E0313 12:40:18.615893 7518 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Mar 13 12:40:18.721477 master-0 kubenswrapper[7518]: I0313 12:40:18.721388 7518 patch_prober.go:28] interesting pod/marketplace-operator-64bf9778cb-7qhr4 container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.128.0.26:8080/healthz\": dial tcp 10.128.0.26:8080: connect: connection refused" start-of-body=
Mar 13 12:40:18.721477 master-0 kubenswrapper[7518]: I0313 12:40:18.721415 7518 patch_prober.go:28] interesting pod/marketplace-operator-64bf9778cb-7qhr4 container/marketplace-operator namespace/openshift-marketplace: Liveness probe status=failure output="Get \"http://10.128.0.26:8080/healthz\": dial tcp 10.128.0.26:8080: connect: connection refused" start-of-body=
Mar 13 12:40:18.721477 master-0 kubenswrapper[7518]: I0313 12:40:18.721463 7518 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-64bf9778cb-7qhr4" podUID="d3d998ee-b26f-4e30-83bc-f94f8c68060a" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.128.0.26:8080/healthz\": dial tcp 10.128.0.26:8080: connect: connection refused"
Mar 13 12:40:18.721917 master-0 kubenswrapper[7518]: I0313 12:40:18.721480 7518 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-marketplace/marketplace-operator-64bf9778cb-7qhr4" podUID="d3d998ee-b26f-4e30-83bc-f94f8c68060a" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.128.0.26:8080/healthz\": dial tcp 10.128.0.26:8080: connect: connection refused"
Mar 13 12:40:19.846807 master-0 kubenswrapper[7518]: I0313 12:40:19.846665 7518 generic.go:334] "Generic (PLEG): container finished" podID="089cfabc-9d3d-4260-bb16-8b5eaf73b3fa" containerID="814a1adb650838a7837cee0a591e9eba8984a73367ffe7b1b579ae47de6fda2a" exitCode=0
Mar 13 12:40:27.163390 master-0 kubenswrapper[7518]: E0313 12:40:27.157193 7518 kubelet.go:2526] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="13.531s"
Mar 13 12:40:27.163390 master-0 kubenswrapper[7518]: I0313 12:40:27.157286 7518 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/olm-operator-d64cfc9db-rfqb9"
Mar 13 12:40:27.163390 master-0 kubenswrapper[7518]: I0313 12:40:27.157303 7518 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-operator-lifecycle-manager/olm-operator-d64cfc9db-rfqb9"
Mar 13 12:40:27.163390 master-0 kubenswrapper[7518]: I0313 12:40:27.157346 7518 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-etcd-operator/etcd-operator-5884b9cd56-hjzms"
Mar 13 12:40:27.163390 master-0 kubenswrapper[7518]: I0313 12:40:27.157355 7518 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-1-master-0" event={"ID":"3828446d-a3e3-412f-a0e7-7347b5de523a","Type":"ContainerDied","Data":"aa6c714b8707274c998afed0944ecc8600d7bc24a6b08415b6cbef112b436b47"}
Mar 13 12:40:27.163390 master-0 kubenswrapper[7518]: I0313 12:40:27.157379 7518 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-7c6989d6c4-tc4ht" event={"ID":"d11f8baa-6e8e-4ac0-9b23-1c44efd0ab2a","Type":"ContainerDied","Data":"13a298fff8d915caaf89a785573e9b3488b88852d2c326a75e61c523b3cd60a0"}
Mar 13 12:40:27.163390 master-0 kubenswrapper[7518]: I0313 12:40:27.157399 7518 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="kube-system/bootstrap-kube-controller-manager-master-0"
Mar 13 12:40:27.163390 master-0 kubenswrapper[7518]: I0313 12:40:27.157479 7518 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="kube-system/bootstrap-kube-controller-manager-master-0"
Mar 13 12:40:27.163390 master-0 kubenswrapper[7518]: I0313 12:40:27.157488 7518 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-7c6989d6c4-tc4ht" event={"ID":"d11f8baa-6e8e-4ac0-9b23-1c44efd0ab2a","Type":"ContainerStarted","Data":"dc8ec1aed61fa783f1383f45771cb4136de885100e0460aa1df476073926f5af"}
Mar 13 12:40:27.163390 master-0 kubenswrapper[7518]: I0313 12:40:27.157498 7518 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/catalog-operator-7d9c49f57b-tlnkd"
Mar 13 12:40:27.163390 master-0 kubenswrapper[7518]: I0313 12:40:27.157510 7518 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"8e52bef89f4b50e4590a1719bcc5d7e5","Type":"ContainerDied","Data":"a24502cdbf57f3af530c16b279d90e04b37d8116542797b27db1c42bb0ece279"}
Mar 13 12:40:27.163390 master-0 kubenswrapper[7518]: I0313 12:40:27.157650 7518 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/catalog-operator-7d9c49f57b-tlnkd"
Mar 13 12:40:27.163390 master-0 kubenswrapper[7518]: I0313 12:40:27.157701 7518 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5c74bfc494-m8mqj" event={"ID":"0da84bb7-e936-49a0-96b5-614a1305d6a4","Type":"ContainerDied","Data":"7049109a836522af070e6bb63ef4a03a6cf57954c7a7d1ea2471e59144150127"}
Mar 13 12:40:27.163390 master-0 kubenswrapper[7518]: I0313 12:40:27.159857 7518 scope.go:117] "RemoveContainer" containerID="ac30e49a3ae0e3ef59ed9c3728ae1c26bf004ec3b0fe4cf00ec315598faa9cf4"
Mar 13 12:40:27.171673 master-0 kubenswrapper[7518]: I0313 12:40:27.171432 7518 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-etcd/etcd-master-0-master-0" podUID=""
Mar 13 12:40:27.179922 master-0 kubenswrapper[7518]: I0313 12:40:27.179837 7518 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-etcd/etcd-master-0-master-0"]
Mar 13 12:40:27.179922 master-0 kubenswrapper[7518]: I0313 12:40:27.179929 7518 kubelet.go:2649] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-etcd/etcd-master-0-master-0" mirrorPodUID="32270fd7-570b-4204-95d5-2a6c512c39a7"
Mar 13 12:40:27.180108 master-0 kubenswrapper[7518]: I0313 12:40:27.179969 7518 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-etcd/etcd-master-0-master-0"]
Mar 13 12:40:27.180108 master-0 kubenswrapper[7518]: I0313 12:40:27.179987 7518 kubelet.go:2673] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-etcd/etcd-master-0-master-0" mirrorPodUID="32270fd7-570b-4204-95d5-2a6c512c39a7"
Mar 13 12:40:27.180108 master-0 kubenswrapper[7518]: I0313 12:40:27.180005 7518 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-86d7cdfdfb-br96g" event={"ID":"77ef7e49-eb85-4f5e-94d3-a6a8619a6243","Type":"ContainerDied","Data":"3add725e66228351c75651bb4a7357a39de488d2f8d517621841a317712aba3a"}
Mar 13 12:40:27.180108 master-0 kubenswrapper[7518]: I0313 12:40:27.180032 7518 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="kube-system/bootstrap-kube-controller-manager-master-0"
Mar 13 12:40:27.180108 master-0 kubenswrapper[7518]: I0313 12:40:27.180075 7518 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="kube-system/bootstrap-kube-controller-manager-master-0"
Mar 13 12:40:27.180108 master-0 kubenswrapper[7518]: I0313 12:40:27.180090 7518 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-7c649bf6d4-kh6n9" event={"ID":"4dd0fc2f-f2ee-4447-a747-04a178288cf0","Type":"ContainerDied","Data":"638f7edbf4d5a7bd9c1277ff74b0deabee140db71794ce849e8ed2fe8e2bdb95"}
Mar 13 12:40:27.180339 master-0 kubenswrapper[7518]: I0313 12:40:27.180114 7518 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-7f65c457f5-hrm82" event={"ID":"f5775266-5e58-44ed-81cb-dfe3faf38add","Type":"ContainerDied","Data":"b93548b4b4252ac17adfb04acbab06411e860b90fed7b1160d6dcde46321cd0a"}
Mar 13 12:40:27.180339 master-0 kubenswrapper[7518]: I0313 12:40:27.180155 7518 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"8e52bef89f4b50e4590a1719bcc5d7e5","Type":"ContainerStarted","Data":"dd5289c2e065c63e076ef785f5c91f68426de016a332635418487df625eabea4"}
Mar 13 12:40:27.180339 master-0 kubenswrapper[7518]: I0313 12:40:27.180171 7518 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"8e52bef89f4b50e4590a1719bcc5d7e5","Type":"ContainerStarted","Data":"440a66755130132d56907a45f85ff201a5b883c75b1e482675b4125de5018dda"}
Mar 13 12:40:27.180339 master-0 kubenswrapper[7518]: I0313 12:40:27.180185 7518 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"8e52bef89f4b50e4590a1719bcc5d7e5","Type":"ContainerStarted","Data":"acda021c5f7e7aff55c971e32cec50e25aa40113e66a45d15899959c993261a0"}
Mar 13 12:40:27.180339 master-0 kubenswrapper[7518]: I0313 12:40:27.180197 7518 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"8e52bef89f4b50e4590a1719bcc5d7e5","Type":"ContainerStarted","Data":"38c2e3b0f262510b515ee410dfc31f716307a3e0807eb4b5d3d5cc8d3c3c5ced"}
Mar 13 12:40:27.180339 master-0 kubenswrapper[7518]: I0313 12:40:27.180208 7518 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"8e52bef89f4b50e4590a1719bcc5d7e5","Type":"ContainerStarted","Data":"1ca6e8ace45a17d5da03ca182f6c2cf352582d5bc5b5d835a63329d11f8a8397"}
Mar 13 12:40:27.180339 master-0 kubenswrapper[7518]: I0313 12:40:27.180218 7518 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-68bd585b-qxmnf" event={"ID":"ec5ec2e2-f7b3-43a1-87da-fbbe0ee5b118","Type":"ContainerDied","Data":"69f6736e401004be8e5844a5f9b7891b28a4228a05eb13fc36ff3b64b8740138"}
Mar 13 12:40:27.180339 master-0 kubenswrapper[7518]: I0313 12:40:27.180232 7518 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-69b6fc6b88-vmscz" event={"ID":"034aaf8e-95df-4171-bae4-e7abe58d15f7","Type":"ContainerDied","Data":"c27448fad258056de304ba3c30b9268468cc1c542046d6c37c21797efa146b54"}
Mar 13 12:40:27.180339 master-0 kubenswrapper[7518]: I0313 12:40:27.180249 7518 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-olm-operator/cluster-olm-operator-77899cf6d-7nvbn" event={"ID":"887d261f-d07f-4ef0-a230-6568f47acf4d","Type":"ContainerDied","Data":"ac30e49a3ae0e3ef59ed9c3728ae1c26bf004ec3b0fe4cf00ec315598faa9cf4"}
Mar 13 12:40:27.180339 master-0 kubenswrapper[7518]: I0313 12:40:27.180262 7518 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-5884b9cd56-hjzms" event={"ID":"15b592d6-3c48-45d4-9172-d28632ae8995","Type":"ContainerDied","Data":"c3cc4d20a3385510f2813df129cea65d1b836444e4586b47995a2d6b48933eba"}
Mar 13 12:40:27.180339 master-0 kubenswrapper[7518]: I0313 12:40:27.180275 7518 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-qg8q5" event={"ID":"1f43b4e7-5cd1-46d2-a02e-0d846b2e5182","Type":"ContainerDied","Data":"8c3d9fdbcfd0987b6eb3f7869d1d1d034470ad27e956a473bf9fb468daecb5e8"}
Mar 13 12:40:27.180339 master-0 kubenswrapper[7518]: I0313 12:40:27.180287 7518 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-d64cfc9db-rfqb9" event={"ID":"d5a19b80-d488-46d3-a4a8-0b80361077e1","Type":"ContainerDied","Data":"47e1707cfebdcd64e29e4d18bf48d4efe18567479faf12290a7bcd51f3b4d7e2"}
Mar 13 12:40:27.180339 master-0 kubenswrapper[7518]: I0313 12:40:27.180308 7518 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-7d9c49f57b-tlnkd" event={"ID":"10944f9c-8ce9-44e6-9c36-a0ea19d8cae3","Type":"ContainerDied","Data":"9d05f0d44d2a573355b6b4eea02a702f641e31e420669a5e155b6a442793e880"}
Mar 13 12:40:27.180339 master-0 kubenswrapper[7518]: I0313 12:40:27.180321 7518 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-3-master-0" event={"ID":"bfabb495-1707-4c3d-b00e-2f3b2976fb92","Type":"ContainerDied","Data":"d8cf37e4c8a527d04eff5203f40779f993e328715e0f8f8ef7b2ff90bad966cf"}
Mar 13 12:40:27.180339 master-0 kubenswrapper[7518]: I0313 12:40:27.180333 7518 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-controller-manager-master-0" event={"ID":"f78c05e1499b533b83f091333d61f045","Type":"ContainerDied","Data":"696192325e102818ab8863a16ab52b3671d6dc3f225d1e0faf06a32633060bda"}
Mar 13 12:40:27.180339 master-0 kubenswrapper[7518]: I0313 12:40:27.180352 7518 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-7f65c457f5-hrm82" event={"ID":"f5775266-5e58-44ed-81cb-dfe3faf38add","Type":"ContainerStarted","Data":"e24974d7562637f30c354afb27ef4179bd234226ab89ce7552570f69e7ee23e6"}
Mar 13 12:40:27.180873 master-0 kubenswrapper[7518]: I0313 12:40:27.180366 7518 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-controller-manager-master-0" event={"ID":"f78c05e1499b533b83f091333d61f045","Type":"ContainerStarted","Data":"c81c73f91c343cb4e577286cc40722c4e25a55cd8b94b5a421c1eff5fabb3c61"}
Mar 13 12:40:27.180873 master-0 kubenswrapper[7518]: I0313 12:40:27.180381 7518 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-69b6fc6b88-vmscz" event={"ID":"034aaf8e-95df-4171-bae4-e7abe58d15f7","Type":"ContainerStarted","Data":"6a3d66ed3fc6a1fb717a2b2977fa5c6231d315f07c1d90d364eea56e7a5d7c86"}
Mar 13 12:40:27.180873 master-0 kubenswrapper[7518]: I0313 12:40:27.180393 7518 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-5884b9cd56-hjzms" event={"ID":"15b592d6-3c48-45d4-9172-d28632ae8995","Type":"ContainerStarted","Data":"5d11669c933e022e2eb1221b72c8dfc83094667fb6b7c0cba300ddb5b306a9d7"}
Mar 13 12:40:27.180873 master-0 kubenswrapper[7518]: I0313 12:40:27.180519 7518 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-d64cfc9db-rfqb9" event={"ID":"d5a19b80-d488-46d3-a4a8-0b80361077e1","Type":"ContainerStarted","Data":"f0c5b3dc40b8343911cfdf5960a0b2222b935ca610668092d6c826c6950bc761"}
Mar 13 12:40:27.180873 master-0 kubenswrapper[7518]: I0313 12:40:27.180539 7518 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-86d7cdfdfb-br96g" event={"ID":"77ef7e49-eb85-4f5e-94d3-a6a8619a6243","Type":"ContainerStarted","Data":"7889aaca2c1700074c08311c90db5e3d58c4df89e1b20e40fd33b13131a20557"}
Mar 13 12:40:27.180873 master-0 kubenswrapper[7518]: I0313 12:40:27.180556 7518 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-1-master-0" event={"ID":"88bf0bf8-c0ee-454e-8d8b-592a6e796cfc","Type":"ContainerDied","Data":"cbb3fd1b1972cab7aabe9a34a316fc6619100acdef1d341abf069e3ac4eab0ff"}
Mar 13 12:40:27.180873 master-0 kubenswrapper[7518]: I0313 12:40:27.180571 7518 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cbb3fd1b1972cab7aabe9a34a316fc6619100acdef1d341abf069e3ac4eab0ff"
Mar 13 12:40:27.180873 master-0 kubenswrapper[7518]: I0313 12:40:27.180660 7518 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-7d9c49f57b-tlnkd" event={"ID":"10944f9c-8ce9-44e6-9c36-a0ea19d8cae3","Type":"ContainerStarted","Data":"6828090d49f9735dd2442fd56a0db3f01a6e5cb451ce737e8263267c628730cb"}
Mar 13 12:40:27.180873 master-0 kubenswrapper[7518]: I0313 12:40:27.180677 7518 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-68bd585b-qxmnf" event={"ID":"ec5ec2e2-f7b3-43a1-87da-fbbe0ee5b118","Type":"ContainerStarted","Data":"1cb4a8f4af6ffda12180066b17581cdbec2094bd2b21740cda6857348f632207"}
Mar 13 12:40:27.180873 master-0 kubenswrapper[7518]: I0313 12:40:27.180691 7518 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-7c649bf6d4-kh6n9" event={"ID":"4dd0fc2f-f2ee-4447-a747-04a178288cf0","Type":"ContainerStarted","Data":"bc5551e07868e81855eed958b9e358bd0715e00cec588a7af2b93942471edb38"}
Mar 13 12:40:27.180873 master-0 kubenswrapper[7518]: I0313 12:40:27.180702 7518 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5c74bfc494-m8mqj" event={"ID":"0da84bb7-e936-49a0-96b5-614a1305d6a4","Type":"ContainerStarted","Data":"e0b901efadc576656657aa4dea0a09b5c987c11cdc88e24aaeef0848d60cd3b7"}
Mar 13 12:40:27.180873 master-0 kubenswrapper[7518]: I0313 12:40:27.180713 7518 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-qg8q5" event={"ID":"1f43b4e7-5cd1-46d2-a02e-0d846b2e5182","Type":"ContainerStarted","Data":"b91c079b382f32d02d029d00309dfc5b4425807a136542a6d176792b503d743b"}
Mar 13 12:40:27.180873 master-0 kubenswrapper[7518]: I0313 12:40:27.180723 7518 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-1-master-0" event={"ID":"3828446d-a3e3-412f-a0e7-7347b5de523a","Type":"ContainerDied","Data":"504639ecf4788ce4c267fd64fb378348d1c51285c4c07623bf66e15e61133a68"}
Mar 13 12:40:27.180873 master-0 kubenswrapper[7518]: I0313 12:40:27.180736 7518 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="504639ecf4788ce4c267fd64fb378348d1c51285c4c07623bf66e15e61133a68"
Mar 13 12:40:27.180873 master-0 kubenswrapper[7518]: I0313 12:40:27.180744 7518 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-3-master-0" event={"ID":"bfabb495-1707-4c3d-b00e-2f3b2976fb92","Type":"ContainerDied","Data":"8de98c25946553f78d0d15d3d39442b1f1f340c231f6a8d5c64835e897795dde"}
Mar 13 12:40:27.180873 master-0 kubenswrapper[7518]: I0313 12:40:27.180757 7518 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8de98c25946553f78d0d15d3d39442b1f1f340c231f6a8d5c64835e897795dde"
Mar 13 12:40:27.180873 master-0 kubenswrapper[7518]: I0313 12:40:27.180764 7518 kubelet.go:2453] "SyncLoop (PLEG): event for pod"
pod="openshift-marketplace/marketplace-operator-64bf9778cb-7qhr4" event={"ID":"d3d998ee-b26f-4e30-83bc-f94f8c68060a","Type":"ContainerDied","Data":"2678ae1f026392d01bc32426edbdfbe31df6907392fe5e29e35b3e44ffb8f896"} Mar 13 12:40:27.180873 master-0 kubenswrapper[7518]: I0313 12:40:27.180779 7518 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-799b6db4d7-xchrj" event={"ID":"089cfabc-9d3d-4260-bb16-8b5eaf73b3fa","Type":"ContainerDied","Data":"814a1adb650838a7837cee0a591e9eba8984a73367ffe7b1b579ae47de6fda2a"} Mar 13 12:40:27.181507 master-0 kubenswrapper[7518]: I0313 12:40:27.181203 7518 scope.go:117] "RemoveContainer" containerID="814a1adb650838a7837cee0a591e9eba8984a73367ffe7b1b579ae47de6fda2a" Mar 13 12:40:27.184258 master-0 kubenswrapper[7518]: I0313 12:40:27.184191 7518 scope.go:117] "RemoveContainer" containerID="5f035fb00c2f1c52dbc78fa55ac7bc8d27c14c42f3da11b968e1fb6e88e80856" Mar 13 12:40:27.185438 master-0 kubenswrapper[7518]: I0313 12:40:27.185177 7518 scope.go:117] "RemoveContainer" containerID="2678ae1f026392d01bc32426edbdfbe31df6907392fe5e29e35b3e44ffb8f896" Mar 13 12:40:27.187860 master-0 kubenswrapper[7518]: I0313 12:40:27.187823 7518 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd/etcd-master-0"] Mar 13 12:40:27.713751 master-0 kubenswrapper[7518]: I0313 12:40:27.713617 7518 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/etcd-master-0" podStartSLOduration=0.713587555 podStartE2EDuration="713.587555ms" podCreationTimestamp="2026-03-13 12:40:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 12:40:27.712009406 +0000 UTC m=+182.345078613" watchObservedRunningTime="2026-03-13 12:40:27.713587555 +0000 UTC m=+182.346656742" Mar 13 12:40:27.891523 master-0 kubenswrapper[7518]: I0313 12:40:27.891485 7518 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openshift-marketplace/marketplace-operator-64bf9778cb-7qhr4" event={"ID":"d3d998ee-b26f-4e30-83bc-f94f8c68060a","Type":"ContainerStarted","Data":"de5f0e7cf4aa65e15644e5e3e9b797e70ca19a364211733911306a2f1e0bcffe"} Mar 13 12:40:27.892517 master-0 kubenswrapper[7518]: I0313 12:40:27.892481 7518 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-64bf9778cb-7qhr4" Mar 13 12:40:27.892822 master-0 kubenswrapper[7518]: I0313 12:40:27.892695 7518 patch_prober.go:28] interesting pod/marketplace-operator-64bf9778cb-7qhr4 container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.128.0.26:8080/healthz\": dial tcp 10.128.0.26:8080: connect: connection refused" start-of-body= Mar 13 12:40:27.892822 master-0 kubenswrapper[7518]: I0313 12:40:27.892732 7518 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-64bf9778cb-7qhr4" podUID="d3d998ee-b26f-4e30-83bc-f94f8c68060a" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.128.0.26:8080/healthz\": dial tcp 10.128.0.26:8080: connect: connection refused" Mar 13 12:40:27.896667 master-0 kubenswrapper[7518]: I0313 12:40:27.896473 7518 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress-operator_ingress-operator-677db989d6-ckl2j_2f79578c-bbfb-4968-893a-730deb4c01f9/ingress-operator/0.log" Mar 13 12:40:27.896667 master-0 kubenswrapper[7518]: I0313 12:40:27.896514 7518 generic.go:334] "Generic (PLEG): container finished" podID="2f79578c-bbfb-4968-893a-730deb4c01f9" containerID="6a8c75c694096fc8dedc129901064fbff36d84f9daf7b91e5a68c2b191c60f00" exitCode=1 Mar 13 12:40:27.896667 master-0 kubenswrapper[7518]: I0313 12:40:27.896556 7518 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-677db989d6-ckl2j" 
event={"ID":"2f79578c-bbfb-4968-893a-730deb4c01f9","Type":"ContainerDied","Data":"6a8c75c694096fc8dedc129901064fbff36d84f9daf7b91e5a68c2b191c60f00"} Mar 13 12:40:27.896974 master-0 kubenswrapper[7518]: I0313 12:40:27.896947 7518 scope.go:117] "RemoveContainer" containerID="6a8c75c694096fc8dedc129901064fbff36d84f9daf7b91e5a68c2b191c60f00" Mar 13 12:40:27.905334 master-0 kubenswrapper[7518]: I0313 12:40:27.905293 7518 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-799b6db4d7-xchrj" event={"ID":"089cfabc-9d3d-4260-bb16-8b5eaf73b3fa","Type":"ContainerStarted","Data":"13abf0479b13298ab465c691e26a5f91f167723c1dfd38a5ddfba43b7407cce4"} Mar 13 12:40:27.907792 master-0 kubenswrapper[7518]: I0313 12:40:27.907751 7518 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-olm-operator/cluster-olm-operator-77899cf6d-7nvbn" event={"ID":"887d261f-d07f-4ef0-a230-6568f47acf4d","Type":"ContainerStarted","Data":"00465e73a43059afd9fa8253cf1516e43f3dc83a0fb68f4a65f1a8f78f218e43"} Mar 13 12:40:27.909182 master-0 kubenswrapper[7518]: I0313 12:40:27.909163 7518 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/olm-operator-d64cfc9db-rfqb9" Mar 13 12:40:27.923867 master-0 kubenswrapper[7518]: E0313 12:40:27.923827 7518 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"etcd-master-0\" already exists" pod="openshift-etcd/etcd-master-0" Mar 13 12:40:27.927261 master-0 kubenswrapper[7518]: I0313 12:40:27.927204 7518 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/olm-operator-d64cfc9db-rfqb9" Mar 13 12:40:28.616930 master-0 kubenswrapper[7518]: E0313 12:40:28.616871 7518 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": net/http: request 
canceled (Client.Timeout exceeded while awaiting headers)" Mar 13 12:40:28.616930 master-0 kubenswrapper[7518]: E0313 12:40:28.616914 7518 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Mar 13 12:40:28.723405 master-0 kubenswrapper[7518]: I0313 12:40:28.722769 7518 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-64bf9778cb-7qhr4" Mar 13 12:40:28.917880 master-0 kubenswrapper[7518]: I0313 12:40:28.917763 7518 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress-operator_ingress-operator-677db989d6-ckl2j_2f79578c-bbfb-4968-893a-730deb4c01f9/ingress-operator/0.log" Mar 13 12:40:28.918401 master-0 kubenswrapper[7518]: I0313 12:40:28.918368 7518 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-677db989d6-ckl2j" event={"ID":"2f79578c-bbfb-4968-893a-730deb4c01f9","Type":"ContainerStarted","Data":"c36bf45a4804fa4ca98a882a198414395cc18ce172e9fe0b2eeeacf2ec4ae9ef"} Mar 13 12:40:31.713590 master-0 kubenswrapper[7518]: I0313 12:40:31.713455 7518 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-etcd/etcd-master-0" Mar 13 12:40:36.716224 master-0 kubenswrapper[7518]: I0313 12:40:36.715969 7518 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-etcd/etcd-master-0" Mar 13 12:40:36.745557 master-0 kubenswrapper[7518]: I0313 12:40:36.745513 7518 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-etcd/etcd-master-0" Mar 13 12:40:37.043208 master-0 kubenswrapper[7518]: I0313 12:40:37.042506 7518 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-etcd/etcd-master-0" Mar 13 12:40:37.498915 master-0 kubenswrapper[7518]: I0313 12:40:37.498789 7518 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-9x9vk"] Mar 13 
12:40:37.499369 master-0 kubenswrapper[7518]: E0313 12:40:37.499340 7518 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="88bf0bf8-c0ee-454e-8d8b-592a6e796cfc" containerName="installer" Mar 13 12:40:37.499369 master-0 kubenswrapper[7518]: I0313 12:40:37.499369 7518 state_mem.go:107] "Deleted CPUSet assignment" podUID="88bf0bf8-c0ee-454e-8d8b-592a6e796cfc" containerName="installer" Mar 13 12:40:37.499441 master-0 kubenswrapper[7518]: E0313 12:40:37.499381 7518 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3828446d-a3e3-412f-a0e7-7347b5de523a" containerName="installer" Mar 13 12:40:37.499441 master-0 kubenswrapper[7518]: I0313 12:40:37.499388 7518 state_mem.go:107] "Deleted CPUSet assignment" podUID="3828446d-a3e3-412f-a0e7-7347b5de523a" containerName="installer" Mar 13 12:40:37.499441 master-0 kubenswrapper[7518]: E0313 12:40:37.499401 7518 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="00d2e134-62bb-4181-aa0a-22c9b9755b10" containerName="installer" Mar 13 12:40:37.499441 master-0 kubenswrapper[7518]: I0313 12:40:37.499408 7518 state_mem.go:107] "Deleted CPUSet assignment" podUID="00d2e134-62bb-4181-aa0a-22c9b9755b10" containerName="installer" Mar 13 12:40:37.499441 master-0 kubenswrapper[7518]: E0313 12:40:37.499420 7518 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bfabb495-1707-4c3d-b00e-2f3b2976fb92" containerName="installer" Mar 13 12:40:37.499441 master-0 kubenswrapper[7518]: I0313 12:40:37.499426 7518 state_mem.go:107] "Deleted CPUSet assignment" podUID="bfabb495-1707-4c3d-b00e-2f3b2976fb92" containerName="installer" Mar 13 12:40:37.499628 master-0 kubenswrapper[7518]: I0313 12:40:37.499556 7518 memory_manager.go:354] "RemoveStaleState removing state" podUID="3828446d-a3e3-412f-a0e7-7347b5de523a" containerName="installer" Mar 13 12:40:37.499628 master-0 kubenswrapper[7518]: I0313 12:40:37.499571 7518 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="bfabb495-1707-4c3d-b00e-2f3b2976fb92" containerName="installer" Mar 13 12:40:37.499628 master-0 kubenswrapper[7518]: I0313 12:40:37.499582 7518 memory_manager.go:354] "RemoveStaleState removing state" podUID="88bf0bf8-c0ee-454e-8d8b-592a6e796cfc" containerName="installer" Mar 13 12:40:37.499628 master-0 kubenswrapper[7518]: I0313 12:40:37.499593 7518 memory_manager.go:354] "RemoveStaleState removing state" podUID="00d2e134-62bb-4181-aa0a-22c9b9755b10" containerName="installer" Mar 13 12:40:37.502198 master-0 kubenswrapper[7518]: I0313 12:40:37.500369 7518 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-9x9vk" Mar 13 12:40:37.502198 master-0 kubenswrapper[7518]: I0313 12:40:37.501808 7518 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-p9csk"] Mar 13 12:40:37.504025 master-0 kubenswrapper[7518]: I0313 12:40:37.503982 7518 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-p9csk" Mar 13 12:40:37.507052 master-0 kubenswrapper[7518]: I0313 12:40:37.507021 7518 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-5ddms" Mar 13 12:40:37.507306 master-0 kubenswrapper[7518]: I0313 12:40:37.507287 7518 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-m9n95" Mar 13 12:40:37.516641 master-0 kubenswrapper[7518]: I0313 12:40:37.516597 7518 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-p9csk"] Mar 13 12:40:37.518539 master-0 kubenswrapper[7518]: I0313 12:40:37.518474 7518 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-9x9vk"] Mar 13 12:40:37.619701 master-0 kubenswrapper[7518]: I0313 12:40:37.619655 7518 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tr9gm\" (UniqueName: \"kubernetes.io/projected/4f9e6618-62b5-4181-b545-211461811140-kube-api-access-tr9gm\") pod \"community-operators-9x9vk\" (UID: \"4f9e6618-62b5-4181-b545-211461811140\") " pod="openshift-marketplace/community-operators-9x9vk" Mar 13 12:40:37.619943 master-0 kubenswrapper[7518]: I0313 12:40:37.619720 7518 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9kxx9\" (UniqueName: \"kubernetes.io/projected/1cf388b6-e4a7-41db-a350-1b503214efd3-kube-api-access-9kxx9\") pod \"certified-operators-p9csk\" (UID: \"1cf388b6-e4a7-41db-a350-1b503214efd3\") " pod="openshift-marketplace/certified-operators-p9csk" Mar 13 12:40:37.619943 master-0 kubenswrapper[7518]: I0313 12:40:37.619762 7518 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/1cf388b6-e4a7-41db-a350-1b503214efd3-utilities\") pod \"certified-operators-p9csk\" (UID: \"1cf388b6-e4a7-41db-a350-1b503214efd3\") " pod="openshift-marketplace/certified-operators-p9csk" Mar 13 12:40:37.619943 master-0 kubenswrapper[7518]: I0313 12:40:37.619806 7518 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4f9e6618-62b5-4181-b545-211461811140-utilities\") pod \"community-operators-9x9vk\" (UID: \"4f9e6618-62b5-4181-b545-211461811140\") " pod="openshift-marketplace/community-operators-9x9vk" Mar 13 12:40:37.619943 master-0 kubenswrapper[7518]: I0313 12:40:37.619834 7518 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1cf388b6-e4a7-41db-a350-1b503214efd3-catalog-content\") pod \"certified-operators-p9csk\" (UID: \"1cf388b6-e4a7-41db-a350-1b503214efd3\") " pod="openshift-marketplace/certified-operators-p9csk" Mar 13 12:40:37.619943 master-0 kubenswrapper[7518]: I0313 12:40:37.619862 7518 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4f9e6618-62b5-4181-b545-211461811140-catalog-content\") pod \"community-operators-9x9vk\" (UID: \"4f9e6618-62b5-4181-b545-211461811140\") " pod="openshift-marketplace/community-operators-9x9vk" Mar 13 12:40:37.665849 master-0 kubenswrapper[7518]: I0313 12:40:37.665776 7518 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-zh888"] Mar 13 12:40:37.667254 master-0 kubenswrapper[7518]: I0313 12:40:37.667223 7518 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-zh888" Mar 13 12:40:37.672666 master-0 kubenswrapper[7518]: I0313 12:40:37.672627 7518 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-bb7kx" Mar 13 12:40:37.677161 master-0 kubenswrapper[7518]: I0313 12:40:37.677065 7518 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-zh888"] Mar 13 12:40:37.720660 master-0 kubenswrapper[7518]: I0313 12:40:37.720605 7518 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tr9gm\" (UniqueName: \"kubernetes.io/projected/4f9e6618-62b5-4181-b545-211461811140-kube-api-access-tr9gm\") pod \"community-operators-9x9vk\" (UID: \"4f9e6618-62b5-4181-b545-211461811140\") " pod="openshift-marketplace/community-operators-9x9vk" Mar 13 12:40:37.721099 master-0 kubenswrapper[7518]: I0313 12:40:37.720662 7518 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9kxx9\" (UniqueName: \"kubernetes.io/projected/1cf388b6-e4a7-41db-a350-1b503214efd3-kube-api-access-9kxx9\") pod \"certified-operators-p9csk\" (UID: \"1cf388b6-e4a7-41db-a350-1b503214efd3\") " pod="openshift-marketplace/certified-operators-p9csk" Mar 13 12:40:37.721099 master-0 kubenswrapper[7518]: I0313 12:40:37.720849 7518 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1cf388b6-e4a7-41db-a350-1b503214efd3-utilities\") pod \"certified-operators-p9csk\" (UID: \"1cf388b6-e4a7-41db-a350-1b503214efd3\") " pod="openshift-marketplace/certified-operators-p9csk" Mar 13 12:40:37.721099 master-0 kubenswrapper[7518]: I0313 12:40:37.720924 7518 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4f9e6618-62b5-4181-b545-211461811140-utilities\") pod \"community-operators-9x9vk\" 
(UID: \"4f9e6618-62b5-4181-b545-211461811140\") " pod="openshift-marketplace/community-operators-9x9vk" Mar 13 12:40:37.721099 master-0 kubenswrapper[7518]: I0313 12:40:37.720996 7518 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1cf388b6-e4a7-41db-a350-1b503214efd3-catalog-content\") pod \"certified-operators-p9csk\" (UID: \"1cf388b6-e4a7-41db-a350-1b503214efd3\") " pod="openshift-marketplace/certified-operators-p9csk" Mar 13 12:40:37.721248 master-0 kubenswrapper[7518]: I0313 12:40:37.721162 7518 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4f9e6618-62b5-4181-b545-211461811140-catalog-content\") pod \"community-operators-9x9vk\" (UID: \"4f9e6618-62b5-4181-b545-211461811140\") " pod="openshift-marketplace/community-operators-9x9vk" Mar 13 12:40:37.721470 master-0 kubenswrapper[7518]: I0313 12:40:37.721442 7518 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1cf388b6-e4a7-41db-a350-1b503214efd3-utilities\") pod \"certified-operators-p9csk\" (UID: \"1cf388b6-e4a7-41db-a350-1b503214efd3\") " pod="openshift-marketplace/certified-operators-p9csk" Mar 13 12:40:37.721560 master-0 kubenswrapper[7518]: I0313 12:40:37.721528 7518 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1cf388b6-e4a7-41db-a350-1b503214efd3-catalog-content\") pod \"certified-operators-p9csk\" (UID: \"1cf388b6-e4a7-41db-a350-1b503214efd3\") " pod="openshift-marketplace/certified-operators-p9csk" Mar 13 12:40:37.721618 master-0 kubenswrapper[7518]: I0313 12:40:37.721595 7518 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4f9e6618-62b5-4181-b545-211461811140-utilities\") pod \"community-operators-9x9vk\" 
(UID: \"4f9e6618-62b5-4181-b545-211461811140\") " pod="openshift-marketplace/community-operators-9x9vk" Mar 13 12:40:37.721813 master-0 kubenswrapper[7518]: I0313 12:40:37.721788 7518 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4f9e6618-62b5-4181-b545-211461811140-catalog-content\") pod \"community-operators-9x9vk\" (UID: \"4f9e6618-62b5-4181-b545-211461811140\") " pod="openshift-marketplace/community-operators-9x9vk" Mar 13 12:40:37.736343 master-0 kubenswrapper[7518]: I0313 12:40:37.736283 7518 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tr9gm\" (UniqueName: \"kubernetes.io/projected/4f9e6618-62b5-4181-b545-211461811140-kube-api-access-tr9gm\") pod \"community-operators-9x9vk\" (UID: \"4f9e6618-62b5-4181-b545-211461811140\") " pod="openshift-marketplace/community-operators-9x9vk" Mar 13 12:40:37.736343 master-0 kubenswrapper[7518]: I0313 12:40:37.736316 7518 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9kxx9\" (UniqueName: \"kubernetes.io/projected/1cf388b6-e4a7-41db-a350-1b503214efd3-kube-api-access-9kxx9\") pod \"certified-operators-p9csk\" (UID: \"1cf388b6-e4a7-41db-a350-1b503214efd3\") " pod="openshift-marketplace/certified-operators-p9csk" Mar 13 12:40:37.821899 master-0 kubenswrapper[7518]: I0313 12:40:37.821834 7518 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cscql\" (UniqueName: \"kubernetes.io/projected/e0ce4c51-2b9f-410f-93e5-9c2ff718dd71-kube-api-access-cscql\") pod \"redhat-marketplace-zh888\" (UID: \"e0ce4c51-2b9f-410f-93e5-9c2ff718dd71\") " pod="openshift-marketplace/redhat-marketplace-zh888" Mar 13 12:40:37.821899 master-0 kubenswrapper[7518]: I0313 12:40:37.821903 7518 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/e0ce4c51-2b9f-410f-93e5-9c2ff718dd71-utilities\") pod \"redhat-marketplace-zh888\" (UID: \"e0ce4c51-2b9f-410f-93e5-9c2ff718dd71\") " pod="openshift-marketplace/redhat-marketplace-zh888" Mar 13 12:40:37.822491 master-0 kubenswrapper[7518]: I0313 12:40:37.821924 7518 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e0ce4c51-2b9f-410f-93e5-9c2ff718dd71-catalog-content\") pod \"redhat-marketplace-zh888\" (UID: \"e0ce4c51-2b9f-410f-93e5-9c2ff718dd71\") " pod="openshift-marketplace/redhat-marketplace-zh888" Mar 13 12:40:37.824962 master-0 kubenswrapper[7518]: I0313 12:40:37.824911 7518 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-9x9vk" Mar 13 12:40:37.852499 master-0 kubenswrapper[7518]: I0313 12:40:37.852438 7518 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-p9csk" Mar 13 12:40:37.924936 master-0 kubenswrapper[7518]: I0313 12:40:37.924529 7518 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cscql\" (UniqueName: \"kubernetes.io/projected/e0ce4c51-2b9f-410f-93e5-9c2ff718dd71-kube-api-access-cscql\") pod \"redhat-marketplace-zh888\" (UID: \"e0ce4c51-2b9f-410f-93e5-9c2ff718dd71\") " pod="openshift-marketplace/redhat-marketplace-zh888" Mar 13 12:40:37.924936 master-0 kubenswrapper[7518]: I0313 12:40:37.924595 7518 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e0ce4c51-2b9f-410f-93e5-9c2ff718dd71-utilities\") pod \"redhat-marketplace-zh888\" (UID: \"e0ce4c51-2b9f-410f-93e5-9c2ff718dd71\") " pod="openshift-marketplace/redhat-marketplace-zh888" Mar 13 12:40:37.924936 master-0 kubenswrapper[7518]: I0313 12:40:37.924617 7518 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e0ce4c51-2b9f-410f-93e5-9c2ff718dd71-catalog-content\") pod \"redhat-marketplace-zh888\" (UID: \"e0ce4c51-2b9f-410f-93e5-9c2ff718dd71\") " pod="openshift-marketplace/redhat-marketplace-zh888" Mar 13 12:40:37.925310 master-0 kubenswrapper[7518]: I0313 12:40:37.925196 7518 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e0ce4c51-2b9f-410f-93e5-9c2ff718dd71-utilities\") pod \"redhat-marketplace-zh888\" (UID: \"e0ce4c51-2b9f-410f-93e5-9c2ff718dd71\") " pod="openshift-marketplace/redhat-marketplace-zh888" Mar 13 12:40:37.927259 master-0 kubenswrapper[7518]: I0313 12:40:37.925356 7518 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e0ce4c51-2b9f-410f-93e5-9c2ff718dd71-catalog-content\") pod \"redhat-marketplace-zh888\" (UID: \"e0ce4c51-2b9f-410f-93e5-9c2ff718dd71\") " pod="openshift-marketplace/redhat-marketplace-zh888" Mar 13 12:40:37.953779 master-0 kubenswrapper[7518]: I0313 12:40:37.953723 7518 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cscql\" (UniqueName: \"kubernetes.io/projected/e0ce4c51-2b9f-410f-93e5-9c2ff718dd71-kube-api-access-cscql\") pod \"redhat-marketplace-zh888\" (UID: \"e0ce4c51-2b9f-410f-93e5-9c2ff718dd71\") " pod="openshift-marketplace/redhat-marketplace-zh888" Mar 13 12:40:37.987821 master-0 kubenswrapper[7518]: I0313 12:40:37.986859 7518 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-zh888" Mar 13 12:40:38.284055 master-0 kubenswrapper[7518]: I0313 12:40:38.284010 7518 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-9x9vk"] Mar 13 12:40:38.290385 master-0 kubenswrapper[7518]: W0313 12:40:38.290331 7518 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4f9e6618_62b5_4181_b545_211461811140.slice/crio-95a0596df1becc3efa730840acdf49174a4f5a349b4eb826cfe7185b3ca3bcfa WatchSource:0}: Error finding container 95a0596df1becc3efa730840acdf49174a4f5a349b4eb826cfe7185b3ca3bcfa: Status 404 returned error can't find the container with id 95a0596df1becc3efa730840acdf49174a4f5a349b4eb826cfe7185b3ca3bcfa Mar 13 12:40:38.419953 master-0 kubenswrapper[7518]: I0313 12:40:38.419893 7518 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-p9csk"] Mar 13 12:40:38.445395 master-0 kubenswrapper[7518]: I0313 12:40:38.445026 7518 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-zh888"] Mar 13 12:40:38.455970 master-0 kubenswrapper[7518]: W0313 12:40:38.455926 7518 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode0ce4c51_2b9f_410f_93e5_9c2ff718dd71.slice/crio-31c103b44a6104346bc94bbde90d17a3c1f1dc78c81990683bc98b314baa42f3 WatchSource:0}: Error finding container 31c103b44a6104346bc94bbde90d17a3c1f1dc78c81990683bc98b314baa42f3: Status 404 returned error can't find the container with id 31c103b44a6104346bc94bbde90d17a3c1f1dc78c81990683bc98b314baa42f3 Mar 13 12:40:38.863157 master-0 kubenswrapper[7518]: I0313 12:40:38.863082 7518 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-5czx2"] Mar 13 12:40:38.865274 master-0 kubenswrapper[7518]: I0313 12:40:38.865241 7518 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-5czx2" Mar 13 12:40:38.867465 master-0 kubenswrapper[7518]: I0313 12:40:38.867349 7518 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-qvxhm" Mar 13 12:40:38.887530 master-0 kubenswrapper[7518]: I0313 12:40:38.887477 7518 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-5czx2"] Mar 13 12:40:39.037081 master-0 kubenswrapper[7518]: I0313 12:40:39.037032 7518 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6x492\" (UniqueName: \"kubernetes.io/projected/32fe77f9-082d-491c-b3d0-9c10feaf4a8e-kube-api-access-6x492\") pod \"redhat-operators-5czx2\" (UID: \"32fe77f9-082d-491c-b3d0-9c10feaf4a8e\") " pod="openshift-marketplace/redhat-operators-5czx2" Mar 13 12:40:39.037398 master-0 kubenswrapper[7518]: I0313 12:40:39.037105 7518 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/32fe77f9-082d-491c-b3d0-9c10feaf4a8e-utilities\") pod \"redhat-operators-5czx2\" (UID: \"32fe77f9-082d-491c-b3d0-9c10feaf4a8e\") " pod="openshift-marketplace/redhat-operators-5czx2" Mar 13 12:40:39.037398 master-0 kubenswrapper[7518]: I0313 12:40:39.037200 7518 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/32fe77f9-082d-491c-b3d0-9c10feaf4a8e-catalog-content\") pod \"redhat-operators-5czx2\" (UID: \"32fe77f9-082d-491c-b3d0-9c10feaf4a8e\") " pod="openshift-marketplace/redhat-operators-5czx2" Mar 13 12:40:39.043905 master-0 kubenswrapper[7518]: I0313 12:40:39.043857 7518 generic.go:334] "Generic (PLEG): container finished" podID="4f9e6618-62b5-4181-b545-211461811140" 
containerID="3e929dd0246b5ba2e1233ca2d7cf4594e87b4dbf9604555efeef3c1d42856882" exitCode=0 Mar 13 12:40:39.044044 master-0 kubenswrapper[7518]: I0313 12:40:39.043955 7518 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9x9vk" event={"ID":"4f9e6618-62b5-4181-b545-211461811140","Type":"ContainerDied","Data":"3e929dd0246b5ba2e1233ca2d7cf4594e87b4dbf9604555efeef3c1d42856882"} Mar 13 12:40:39.044044 master-0 kubenswrapper[7518]: I0313 12:40:39.043991 7518 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9x9vk" event={"ID":"4f9e6618-62b5-4181-b545-211461811140","Type":"ContainerStarted","Data":"95a0596df1becc3efa730840acdf49174a4f5a349b4eb826cfe7185b3ca3bcfa"} Mar 13 12:40:39.047626 master-0 kubenswrapper[7518]: I0313 12:40:39.047578 7518 generic.go:334] "Generic (PLEG): container finished" podID="e0ce4c51-2b9f-410f-93e5-9c2ff718dd71" containerID="c69c6a03bb52efddcf3f1318571834c27a8923b0db98ff09b8b80e6975cede5a" exitCode=0 Mar 13 12:40:39.047709 master-0 kubenswrapper[7518]: I0313 12:40:39.047671 7518 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zh888" event={"ID":"e0ce4c51-2b9f-410f-93e5-9c2ff718dd71","Type":"ContainerDied","Data":"c69c6a03bb52efddcf3f1318571834c27a8923b0db98ff09b8b80e6975cede5a"} Mar 13 12:40:39.047709 master-0 kubenswrapper[7518]: I0313 12:40:39.047701 7518 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zh888" event={"ID":"e0ce4c51-2b9f-410f-93e5-9c2ff718dd71","Type":"ContainerStarted","Data":"31c103b44a6104346bc94bbde90d17a3c1f1dc78c81990683bc98b314baa42f3"} Mar 13 12:40:39.051959 master-0 kubenswrapper[7518]: I0313 12:40:39.050917 7518 generic.go:334] "Generic (PLEG): container finished" podID="1cf388b6-e4a7-41db-a350-1b503214efd3" containerID="2588acad0fdaa8971f9072ba2c71ab6cb4dcef118394ee3f0eafb7916282bbdf" exitCode=0 Mar 13 12:40:39.051959 master-0 
kubenswrapper[7518]: I0313 12:40:39.050962 7518 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-p9csk" event={"ID":"1cf388b6-e4a7-41db-a350-1b503214efd3","Type":"ContainerDied","Data":"2588acad0fdaa8971f9072ba2c71ab6cb4dcef118394ee3f0eafb7916282bbdf"} Mar 13 12:40:39.051959 master-0 kubenswrapper[7518]: I0313 12:40:39.050987 7518 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-p9csk" event={"ID":"1cf388b6-e4a7-41db-a350-1b503214efd3","Type":"ContainerStarted","Data":"a0efa1bf3eba5a2ca6c57d7440e21de8f77ce06cd058d6cbb24dd5784e78863f"} Mar 13 12:40:39.145647 master-0 kubenswrapper[7518]: I0313 12:40:39.145576 7518 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6x492\" (UniqueName: \"kubernetes.io/projected/32fe77f9-082d-491c-b3d0-9c10feaf4a8e-kube-api-access-6x492\") pod \"redhat-operators-5czx2\" (UID: \"32fe77f9-082d-491c-b3d0-9c10feaf4a8e\") " pod="openshift-marketplace/redhat-operators-5czx2" Mar 13 12:40:39.145647 master-0 kubenswrapper[7518]: I0313 12:40:39.145636 7518 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/32fe77f9-082d-491c-b3d0-9c10feaf4a8e-utilities\") pod \"redhat-operators-5czx2\" (UID: \"32fe77f9-082d-491c-b3d0-9c10feaf4a8e\") " pod="openshift-marketplace/redhat-operators-5czx2" Mar 13 12:40:39.145960 master-0 kubenswrapper[7518]: I0313 12:40:39.145674 7518 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/32fe77f9-082d-491c-b3d0-9c10feaf4a8e-catalog-content\") pod \"redhat-operators-5czx2\" (UID: \"32fe77f9-082d-491c-b3d0-9c10feaf4a8e\") " pod="openshift-marketplace/redhat-operators-5czx2" Mar 13 12:40:39.146467 master-0 kubenswrapper[7518]: I0313 12:40:39.146160 7518 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/32fe77f9-082d-491c-b3d0-9c10feaf4a8e-catalog-content\") pod \"redhat-operators-5czx2\" (UID: \"32fe77f9-082d-491c-b3d0-9c10feaf4a8e\") " pod="openshift-marketplace/redhat-operators-5czx2" Mar 13 12:40:39.146668 master-0 kubenswrapper[7518]: I0313 12:40:39.146636 7518 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/32fe77f9-082d-491c-b3d0-9c10feaf4a8e-utilities\") pod \"redhat-operators-5czx2\" (UID: \"32fe77f9-082d-491c-b3d0-9c10feaf4a8e\") " pod="openshift-marketplace/redhat-operators-5czx2" Mar 13 12:40:39.165595 master-0 kubenswrapper[7518]: I0313 12:40:39.165543 7518 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6x492\" (UniqueName: \"kubernetes.io/projected/32fe77f9-082d-491c-b3d0-9c10feaf4a8e-kube-api-access-6x492\") pod \"redhat-operators-5czx2\" (UID: \"32fe77f9-082d-491c-b3d0-9c10feaf4a8e\") " pod="openshift-marketplace/redhat-operators-5czx2" Mar 13 12:40:39.190842 master-0 kubenswrapper[7518]: I0313 12:40:39.190787 7518 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-5czx2" Mar 13 12:40:39.718729 master-0 kubenswrapper[7518]: I0313 12:40:39.718678 7518 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-5czx2"] Mar 13 12:40:39.721448 master-0 kubenswrapper[7518]: W0313 12:40:39.721408 7518 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod32fe77f9_082d_491c_b3d0_9c10feaf4a8e.slice/crio-06b5a40ca00c0683426a1707f8de8aa68ed5666ea8cb726727703876312ec6d0 WatchSource:0}: Error finding container 06b5a40ca00c0683426a1707f8de8aa68ed5666ea8cb726727703876312ec6d0: Status 404 returned error can't find the container with id 06b5a40ca00c0683426a1707f8de8aa68ed5666ea8cb726727703876312ec6d0 Mar 13 12:40:40.064162 master-0 kubenswrapper[7518]: I0313 12:40:40.059811 7518 generic.go:334] "Generic (PLEG): container finished" podID="32fe77f9-082d-491c-b3d0-9c10feaf4a8e" containerID="1ba7fe014f4219ce7bf848e51ed5c249f92fdeb9d65b7c7dc9ad928634e63414" exitCode=0 Mar 13 12:40:40.064162 master-0 kubenswrapper[7518]: I0313 12:40:40.059872 7518 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5czx2" event={"ID":"32fe77f9-082d-491c-b3d0-9c10feaf4a8e","Type":"ContainerDied","Data":"1ba7fe014f4219ce7bf848e51ed5c249f92fdeb9d65b7c7dc9ad928634e63414"} Mar 13 12:40:40.064162 master-0 kubenswrapper[7518]: I0313 12:40:40.059895 7518 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5czx2" event={"ID":"32fe77f9-082d-491c-b3d0-9c10feaf4a8e","Type":"ContainerStarted","Data":"06b5a40ca00c0683426a1707f8de8aa68ed5666ea8cb726727703876312ec6d0"} Mar 13 12:40:45.416828 master-0 kubenswrapper[7518]: I0313 12:40:45.416343 7518 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-machine-approver/machine-approver-955fcfb87-bffng"] Mar 13 12:40:45.417563 master-0 kubenswrapper[7518]: 
I0313 12:40:45.417217 7518 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-955fcfb87-bffng" Mar 13 12:40:45.427082 master-0 kubenswrapper[7518]: I0313 12:40:45.421001 7518 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Mar 13 12:40:45.427082 master-0 kubenswrapper[7518]: I0313 12:40:45.421301 7518 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Mar 13 12:40:45.427082 master-0 kubenswrapper[7518]: I0313 12:40:45.421427 7518 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Mar 13 12:40:45.427082 master-0 kubenswrapper[7518]: I0313 12:40:45.421566 7518 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-lpcnm" Mar 13 12:40:45.427082 master-0 kubenswrapper[7518]: I0313 12:40:45.421694 7518 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Mar 13 12:40:45.427082 master-0 kubenswrapper[7518]: I0313 12:40:45.421817 7518 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Mar 13 12:40:45.507286 master-0 kubenswrapper[7518]: I0313 12:40:45.507224 7518 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-insights/insights-operator-8f89dfddd-vxk8z"] Mar 13 12:40:45.508194 master-0 kubenswrapper[7518]: I0313 12:40:45.508149 7518 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-insights/insights-operator-8f89dfddd-vxk8z" Mar 13 12:40:45.509278 master-0 kubenswrapper[7518]: I0313 12:40:45.508633 7518 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-559568b945-9ccr9"] Mar 13 12:40:45.511917 master-0 kubenswrapper[7518]: I0313 12:40:45.511791 7518 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-insights"/"trusted-ca-bundle" Mar 13 12:40:45.512384 master-0 kubenswrapper[7518]: I0313 12:40:45.512237 7518 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-insights"/"openshift-service-ca.crt" Mar 13 12:40:45.512678 master-0 kubenswrapper[7518]: I0313 12:40:45.512582 7518 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-insights"/"service-ca-bundle" Mar 13 12:40:45.514061 master-0 kubenswrapper[7518]: I0313 12:40:45.513910 7518 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-insights"/"operator-dockercfg-tw2tq" Mar 13 12:40:45.515089 master-0 kubenswrapper[7518]: I0313 12:40:45.514253 7518 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-insights"/"openshift-insights-serving-cert" Mar 13 12:40:45.515324 master-0 kubenswrapper[7518]: I0313 12:40:45.515285 7518 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-insights"/"kube-root-ca.crt" Mar 13 12:40:45.520983 master-0 kubenswrapper[7518]: I0313 12:40:45.519294 7518 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-559568b945-9ccr9" Mar 13 12:40:45.526887 master-0 kubenswrapper[7518]: I0313 12:40:45.526844 7518 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-664cb58b85-m5499"] Mar 13 12:40:45.527716 master-0 kubenswrapper[7518]: I0313 12:40:45.527674 7518 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cloud-controller-manager-operator"/"cluster-cloud-controller-manager-dockercfg-zpmf6" Mar 13 12:40:45.527919 master-0 kubenswrapper[7518]: I0313 12:40:45.527897 7518 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cloud-controller-manager-operator"/"cloud-controller-manager-operator-tls" Mar 13 12:40:45.529853 master-0 kubenswrapper[7518]: I0313 12:40:45.528058 7518 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-controller-manager-operator"/"openshift-service-ca.crt" Mar 13 12:40:45.529853 master-0 kubenswrapper[7518]: I0313 12:40:45.528286 7518 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-controller-manager-operator"/"kube-root-ca.crt" Mar 13 12:40:45.529853 master-0 kubenswrapper[7518]: I0313 12:40:45.528625 7518 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-controller-manager-operator"/"kube-rbac-proxy" Mar 13 12:40:45.530113 master-0 kubenswrapper[7518]: I0313 12:40:45.530089 7518 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-storage-operator/cluster-storage-operator-6fbfc8dc8f-jhtsp"] Mar 13 12:40:45.530451 master-0 kubenswrapper[7518]: I0313 12:40:45.530315 7518 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-664cb58b85-m5499" Mar 13 12:40:45.531399 master-0 kubenswrapper[7518]: I0313 12:40:45.531359 7518 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-storage-operator/cluster-storage-operator-6fbfc8dc8f-jhtsp" Mar 13 12:40:45.546171 master-0 kubenswrapper[7518]: I0313 12:40:45.546121 7518 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Mar 13 12:40:45.546357 master-0 kubenswrapper[7518]: I0313 12:40:45.546315 7518 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-8mlcv" Mar 13 12:40:45.546426 master-0 kubenswrapper[7518]: I0313 12:40:45.546414 7518 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-storage-operator"/"cluster-storage-operator-dockercfg-m2z2f" Mar 13 12:40:45.546645 master-0 kubenswrapper[7518]: I0313 12:40:45.546581 7518 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Mar 13 12:40:45.546738 master-0 kubenswrapper[7518]: I0313 12:40:45.546722 7518 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Mar 13 12:40:45.546894 master-0 kubenswrapper[7518]: I0313 12:40:45.546864 7518 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-storage-operator"/"cluster-storage-operator-serving-cert" Mar 13 12:40:45.547027 master-0 kubenswrapper[7518]: I0313 12:40:45.547013 7518 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-controller-manager-operator"/"cloud-controller-manager-images" Mar 13 12:40:45.548012 master-0 kubenswrapper[7518]: I0313 12:40:45.547976 7518 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-insights/insights-operator-8f89dfddd-vxk8z"] Mar 13 12:40:45.584419 master-0 kubenswrapper[7518]: I0313 12:40:45.578753 7518 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cloud-credential-operator/cloud-credential-operator-55d85b7b47-rvp8c"] Mar 13 12:40:45.604426 master-0 kubenswrapper[7518]: I0313 12:40:45.603176 7518 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cloud-credential-operator/cloud-credential-operator-55d85b7b47-rvp8c" Mar 13 12:40:45.606974 master-0 kubenswrapper[7518]: I0313 12:40:45.605693 7518 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cloud-credential-operator"/"cloud-credential-operator-dockercfg-fs9mz" Mar 13 12:40:45.606974 master-0 kubenswrapper[7518]: I0313 12:40:45.605926 7518 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cloud-credential-operator"/"cloud-credential-operator-serving-cert" Mar 13 12:40:45.606974 master-0 kubenswrapper[7518]: I0313 12:40:45.606040 7518 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-credential-operator"/"openshift-service-ca.crt" Mar 13 12:40:45.606974 master-0 kubenswrapper[7518]: I0313 12:40:45.606269 7518 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-credential-operator"/"cco-trusted-ca" Mar 13 12:40:45.606974 master-0 kubenswrapper[7518]: I0313 12:40:45.606321 7518 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-credential-operator"/"kube-root-ca.crt" Mar 13 12:40:45.655577 master-0 kubenswrapper[7518]: I0313 12:40:45.655480 7518 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9c04ee08-4018-4cf3-b257-10aff84fa933-config\") pod \"machine-approver-955fcfb87-bffng\" (UID: \"9c04ee08-4018-4cf3-b257-10aff84fa933\") " 
pod="openshift-cluster-machine-approver/machine-approver-955fcfb87-bffng" Mar 13 12:40:45.655895 master-0 kubenswrapper[7518]: I0313 12:40:45.655801 7518 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/50a2046b-092b-434c-92a2-579f4462c4fb-serving-cert\") pod \"insights-operator-8f89dfddd-vxk8z\" (UID: \"50a2046b-092b-434c-92a2-579f4462c4fb\") " pod="openshift-insights/insights-operator-8f89dfddd-vxk8z" Mar 13 12:40:45.658840 master-0 kubenswrapper[7518]: I0313 12:40:45.655964 7518 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rvrc7\" (UniqueName: \"kubernetes.io/projected/d39ee5d7-840e-4481-b0b9-baf34da2c7b1-kube-api-access-rvrc7\") pod \"cluster-samples-operator-664cb58b85-m5499\" (UID: \"d39ee5d7-840e-4481-b0b9-baf34da2c7b1\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-664cb58b85-m5499" Mar 13 12:40:45.658840 master-0 kubenswrapper[7518]: I0313 12:40:45.656002 7518 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/1c9d5dee-f689-4813-8715-39a6d8ef1a7a-auth-proxy-config\") pod \"cluster-cloud-controller-manager-operator-559568b945-9ccr9\" (UID: \"1c9d5dee-f689-4813-8715-39a6d8ef1a7a\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-559568b945-9ccr9" Mar 13 12:40:45.658840 master-0 kubenswrapper[7518]: I0313 12:40:45.656023 7518 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloud-controller-manager-operator-tls\" (UniqueName: \"kubernetes.io/secret/1c9d5dee-f689-4813-8715-39a6d8ef1a7a-cloud-controller-manager-operator-tls\") pod \"cluster-cloud-controller-manager-operator-559568b945-9ccr9\" (UID: \"1c9d5dee-f689-4813-8715-39a6d8ef1a7a\") " 
pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-559568b945-9ccr9" Mar 13 12:40:45.658840 master-0 kubenswrapper[7518]: I0313 12:40:45.656042 7518 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/50a2046b-092b-434c-92a2-579f4462c4fb-service-ca-bundle\") pod \"insights-operator-8f89dfddd-vxk8z\" (UID: \"50a2046b-092b-434c-92a2-579f4462c4fb\") " pod="openshift-insights/insights-operator-8f89dfddd-vxk8z" Mar 13 12:40:45.658840 master-0 kubenswrapper[7518]: I0313 12:40:45.656061 7518 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/d39ee5d7-840e-4481-b0b9-baf34da2c7b1-samples-operator-tls\") pod \"cluster-samples-operator-664cb58b85-m5499\" (UID: \"d39ee5d7-840e-4481-b0b9-baf34da2c7b1\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-664cb58b85-m5499" Mar 13 12:40:45.658840 master-0 kubenswrapper[7518]: I0313 12:40:45.656100 7518 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/1c9d5dee-f689-4813-8715-39a6d8ef1a7a-host-etc-kube\") pod \"cluster-cloud-controller-manager-operator-559568b945-9ccr9\" (UID: \"1c9d5dee-f689-4813-8715-39a6d8ef1a7a\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-559568b945-9ccr9" Mar 13 12:40:45.658840 master-0 kubenswrapper[7518]: I0313 12:40:45.656121 7518 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/1c9d5dee-f689-4813-8715-39a6d8ef1a7a-images\") pod \"cluster-cloud-controller-manager-operator-559568b945-9ccr9\" (UID: \"1c9d5dee-f689-4813-8715-39a6d8ef1a7a\") " 
pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-559568b945-9ccr9" Mar 13 12:40:45.658840 master-0 kubenswrapper[7518]: I0313 12:40:45.656154 7518 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"snapshots\" (UniqueName: \"kubernetes.io/empty-dir/50a2046b-092b-434c-92a2-579f4462c4fb-snapshots\") pod \"insights-operator-8f89dfddd-vxk8z\" (UID: \"50a2046b-092b-434c-92a2-579f4462c4fb\") " pod="openshift-insights/insights-operator-8f89dfddd-vxk8z" Mar 13 12:40:45.658840 master-0 kubenswrapper[7518]: I0313 12:40:45.656177 7518 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mnpds\" (UniqueName: \"kubernetes.io/projected/50a2046b-092b-434c-92a2-579f4462c4fb-kube-api-access-mnpds\") pod \"insights-operator-8f89dfddd-vxk8z\" (UID: \"50a2046b-092b-434c-92a2-579f4462c4fb\") " pod="openshift-insights/insights-operator-8f89dfddd-vxk8z" Mar 13 12:40:45.658840 master-0 kubenswrapper[7518]: I0313 12:40:45.656197 7518 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lkvzr\" (UniqueName: \"kubernetes.io/projected/9c04ee08-4018-4cf3-b257-10aff84fa933-kube-api-access-lkvzr\") pod \"machine-approver-955fcfb87-bffng\" (UID: \"9c04ee08-4018-4cf3-b257-10aff84fa933\") " pod="openshift-cluster-machine-approver/machine-approver-955fcfb87-bffng" Mar 13 12:40:45.658840 master-0 kubenswrapper[7518]: I0313 12:40:45.656222 7518 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/50a2046b-092b-434c-92a2-579f4462c4fb-trusted-ca-bundle\") pod \"insights-operator-8f89dfddd-vxk8z\" (UID: \"50a2046b-092b-434c-92a2-579f4462c4fb\") " pod="openshift-insights/insights-operator-8f89dfddd-vxk8z" Mar 13 12:40:45.658840 master-0 kubenswrapper[7518]: I0313 12:40:45.656241 7518 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/9c04ee08-4018-4cf3-b257-10aff84fa933-machine-approver-tls\") pod \"machine-approver-955fcfb87-bffng\" (UID: \"9c04ee08-4018-4cf3-b257-10aff84fa933\") " pod="openshift-cluster-machine-approver/machine-approver-955fcfb87-bffng" Mar 13 12:40:45.658840 master-0 kubenswrapper[7518]: I0313 12:40:45.656261 7518 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/9c04ee08-4018-4cf3-b257-10aff84fa933-auth-proxy-config\") pod \"machine-approver-955fcfb87-bffng\" (UID: \"9c04ee08-4018-4cf3-b257-10aff84fa933\") " pod="openshift-cluster-machine-approver/machine-approver-955fcfb87-bffng" Mar 13 12:40:45.675899 master-0 kubenswrapper[7518]: I0313 12:40:45.675790 7518 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-664cb58b85-m5499"] Mar 13 12:40:45.676088 master-0 kubenswrapper[7518]: I0313 12:40:45.676072 7518 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-storage-operator/cluster-storage-operator-6fbfc8dc8f-jhtsp"] Mar 13 12:40:45.676548 master-0 kubenswrapper[7518]: I0313 12:40:45.676535 7518 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cloud-credential-operator/cloud-credential-operator-55d85b7b47-rvp8c"] Mar 13 12:40:45.698397 master-0 kubenswrapper[7518]: I0313 12:40:45.696872 7518 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-6686554ddc-btz8w"] Mar 13 12:40:45.698397 master-0 kubenswrapper[7518]: I0313 12:40:45.697584 7518 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-6686554ddc-btz8w" Mar 13 12:40:45.698397 master-0 kubenswrapper[7518]: I0313 12:40:45.698228 7518 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/cluster-baremetal-operator-5cdb4c5598-l6jp5"] Mar 13 12:40:45.709480 master-0 kubenswrapper[7518]: I0313 12:40:45.698891 7518 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/cluster-baremetal-operator-5cdb4c5598-l6jp5" Mar 13 12:40:45.709480 master-0 kubenswrapper[7518]: I0313 12:40:45.700497 7518 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Mar 13 12:40:45.709480 master-0 kubenswrapper[7518]: I0313 12:40:45.701829 7518 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"cluster-baremetal-operator-dockercfg-99fzl" Mar 13 12:40:45.709480 master-0 kubenswrapper[7518]: I0313 12:40:45.701970 7518 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/machine-api-operator-84bf6db4f9-mjxcz"] Mar 13 12:40:45.709480 master-0 kubenswrapper[7518]: I0313 12:40:45.702175 7518 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"baremetal-kube-rbac-proxy" Mar 13 12:40:45.709480 master-0 kubenswrapper[7518]: I0313 12:40:45.702415 7518 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"cluster-baremetal-operator-images" Mar 13 12:40:45.709480 master-0 kubenswrapper[7518]: I0313 12:40:45.702673 7518 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"cluster-baremetal-webhook-server-cert" Mar 13 12:40:45.709480 master-0 kubenswrapper[7518]: I0313 12:40:45.702836 7518 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-r2lqd" Mar 13 12:40:45.709480 master-0 
kubenswrapper[7518]: I0313 12:40:45.702996 7518 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Mar 13 12:40:45.709480 master-0 kubenswrapper[7518]: I0313 12:40:45.703162 7518 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"cluster-baremetal-operator-tls" Mar 13 12:40:45.709480 master-0 kubenswrapper[7518]: I0313 12:40:45.708763 7518 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-84bf6db4f9-mjxcz" Mar 13 12:40:45.710288 master-0 kubenswrapper[7518]: I0313 12:40:45.710263 7518 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Mar 13 12:40:45.716560 master-0 kubenswrapper[7518]: I0313 12:40:45.713935 7518 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-6686554ddc-btz8w"] Mar 13 12:40:45.722004 master-0 kubenswrapper[7518]: I0313 12:40:45.720173 7518 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-76p65" Mar 13 12:40:45.722004 master-0 kubenswrapper[7518]: I0313 12:40:45.720303 7518 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Mar 13 12:40:45.722004 master-0 kubenswrapper[7518]: I0313 12:40:45.720452 7518 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Mar 13 12:40:45.722004 master-0 kubenswrapper[7518]: I0313 12:40:45.720596 7518 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Mar 13 12:40:45.729922 master-0 kubenswrapper[7518]: I0313 12:40:45.729858 7518 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/cluster-autoscaler-operator-69576476f7-sqndx"] Mar 13 12:40:45.731852 
master-0 kubenswrapper[7518]: I0313 12:40:45.730856 7518 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/cluster-autoscaler-operator-69576476f7-sqndx" Mar 13 12:40:45.737422 master-0 kubenswrapper[7518]: I0313 12:40:45.736554 7518 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy-cluster-autoscaler-operator" Mar 13 12:40:45.737422 master-0 kubenswrapper[7518]: I0313 12:40:45.736841 7518 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"cluster-autoscaler-operator-dockercfg-4jl9c" Mar 13 12:40:45.737422 master-0 kubenswrapper[7518]: I0313 12:40:45.736984 7518 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"cluster-autoscaler-operator-cert" Mar 13 12:40:45.742349 master-0 kubenswrapper[7518]: I0313 12:40:45.742312 7518 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/cluster-baremetal-operator-5cdb4c5598-l6jp5"] Mar 13 12:40:45.747551 master-0 kubenswrapper[7518]: I0313 12:40:45.747503 7518 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/cluster-autoscaler-operator-69576476f7-sqndx"] Mar 13 12:40:45.753849 master-0 kubenswrapper[7518]: I0313 12:40:45.753481 7518 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-84bf6db4f9-mjxcz"] Mar 13 12:40:45.760360 master-0 kubenswrapper[7518]: I0313 12:40:45.759299 7518 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lkvzr\" (UniqueName: \"kubernetes.io/projected/9c04ee08-4018-4cf3-b257-10aff84fa933-kube-api-access-lkvzr\") pod \"machine-approver-955fcfb87-bffng\" (UID: \"9c04ee08-4018-4cf3-b257-10aff84fa933\") " pod="openshift-cluster-machine-approver/machine-approver-955fcfb87-bffng" Mar 13 12:40:45.760360 master-0 kubenswrapper[7518]: I0313 12:40:45.759345 7518 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/50a2046b-092b-434c-92a2-579f4462c4fb-trusted-ca-bundle\") pod \"insights-operator-8f89dfddd-vxk8z\" (UID: \"50a2046b-092b-434c-92a2-579f4462c4fb\") " pod="openshift-insights/insights-operator-8f89dfddd-vxk8z" Mar 13 12:40:45.760360 master-0 kubenswrapper[7518]: I0313 12:40:45.759363 7518 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/9c04ee08-4018-4cf3-b257-10aff84fa933-machine-approver-tls\") pod \"machine-approver-955fcfb87-bffng\" (UID: \"9c04ee08-4018-4cf3-b257-10aff84fa933\") " pod="openshift-cluster-machine-approver/machine-approver-955fcfb87-bffng" Mar 13 12:40:45.760360 master-0 kubenswrapper[7518]: I0313 12:40:45.759435 7518 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-crc29\" (UniqueName: \"kubernetes.io/projected/1c9d5dee-f689-4813-8715-39a6d8ef1a7a-kube-api-access-crc29\") pod \"cluster-cloud-controller-manager-operator-559568b945-9ccr9\" (UID: \"1c9d5dee-f689-4813-8715-39a6d8ef1a7a\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-559568b945-9ccr9" Mar 13 12:40:45.760360 master-0 kubenswrapper[7518]: I0313 12:40:45.759455 7518 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/d44112d1-b2a5-4b8d-b74d-1e91638508d5-cert\") pod \"cluster-autoscaler-operator-69576476f7-sqndx\" (UID: \"d44112d1-b2a5-4b8d-b74d-1e91638508d5\") " pod="openshift-machine-api/cluster-autoscaler-operator-69576476f7-sqndx" Mar 13 12:40:45.760360 master-0 kubenswrapper[7518]: I0313 12:40:45.759473 7518 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: 
\"kubernetes.io/configmap/9c04ee08-4018-4cf3-b257-10aff84fa933-auth-proxy-config\") pod \"machine-approver-955fcfb87-bffng\" (UID: \"9c04ee08-4018-4cf3-b257-10aff84fa933\") " pod="openshift-cluster-machine-approver/machine-approver-955fcfb87-bffng" Mar 13 12:40:45.760360 master-0 kubenswrapper[7518]: I0313 12:40:45.759494 7518 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tdlrq\" (UniqueName: \"kubernetes.io/projected/d44112d1-b2a5-4b8d-b74d-1e91638508d5-kube-api-access-tdlrq\") pod \"cluster-autoscaler-operator-69576476f7-sqndx\" (UID: \"d44112d1-b2a5-4b8d-b74d-1e91638508d5\") " pod="openshift-machine-api/cluster-autoscaler-operator-69576476f7-sqndx" Mar 13 12:40:45.760360 master-0 kubenswrapper[7518]: I0313 12:40:45.759513 7518 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9c04ee08-4018-4cf3-b257-10aff84fa933-config\") pod \"machine-approver-955fcfb87-bffng\" (UID: \"9c04ee08-4018-4cf3-b257-10aff84fa933\") " pod="openshift-cluster-machine-approver/machine-approver-955fcfb87-bffng" Mar 13 12:40:45.760360 master-0 kubenswrapper[7518]: I0313 12:40:45.759537 7518 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/50a2046b-092b-434c-92a2-579f4462c4fb-serving-cert\") pod \"insights-operator-8f89dfddd-vxk8z\" (UID: \"50a2046b-092b-434c-92a2-579f4462c4fb\") " pod="openshift-insights/insights-operator-8f89dfddd-vxk8z" Mar 13 12:40:45.760360 master-0 kubenswrapper[7518]: I0313 12:40:45.759555 7518 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/d44112d1-b2a5-4b8d-b74d-1e91638508d5-auth-proxy-config\") pod \"cluster-autoscaler-operator-69576476f7-sqndx\" (UID: \"d44112d1-b2a5-4b8d-b74d-1e91638508d5\") " 
pod="openshift-machine-api/cluster-autoscaler-operator-69576476f7-sqndx" Mar 13 12:40:45.760360 master-0 kubenswrapper[7518]: I0313 12:40:45.759578 7518 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cluster-storage-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/d7d67915-d31e-46dc-bb2e-1a6f689dd875-cluster-storage-operator-serving-cert\") pod \"cluster-storage-operator-6fbfc8dc8f-jhtsp\" (UID: \"d7d67915-d31e-46dc-bb2e-1a6f689dd875\") " pod="openshift-cluster-storage-operator/cluster-storage-operator-6fbfc8dc8f-jhtsp" Mar 13 12:40:45.760360 master-0 kubenswrapper[7518]: I0313 12:40:45.759600 7518 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rvrc7\" (UniqueName: \"kubernetes.io/projected/d39ee5d7-840e-4481-b0b9-baf34da2c7b1-kube-api-access-rvrc7\") pod \"cluster-samples-operator-664cb58b85-m5499\" (UID: \"d39ee5d7-840e-4481-b0b9-baf34da2c7b1\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-664cb58b85-m5499" Mar 13 12:40:45.760360 master-0 kubenswrapper[7518]: I0313 12:40:45.759620 7518 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/1c9d5dee-f689-4813-8715-39a6d8ef1a7a-auth-proxy-config\") pod \"cluster-cloud-controller-manager-operator-559568b945-9ccr9\" (UID: \"1c9d5dee-f689-4813-8715-39a6d8ef1a7a\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-559568b945-9ccr9" Mar 13 12:40:45.760360 master-0 kubenswrapper[7518]: I0313 12:40:45.759637 7518 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloud-controller-manager-operator-tls\" (UniqueName: \"kubernetes.io/secret/1c9d5dee-f689-4813-8715-39a6d8ef1a7a-cloud-controller-manager-operator-tls\") pod \"cluster-cloud-controller-manager-operator-559568b945-9ccr9\" (UID: \"1c9d5dee-f689-4813-8715-39a6d8ef1a7a\") " 
pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-559568b945-9ccr9" Mar 13 12:40:45.760360 master-0 kubenswrapper[7518]: I0313 12:40:45.759709 7518 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/50a2046b-092b-434c-92a2-579f4462c4fb-service-ca-bundle\") pod \"insights-operator-8f89dfddd-vxk8z\" (UID: \"50a2046b-092b-434c-92a2-579f4462c4fb\") " pod="openshift-insights/insights-operator-8f89dfddd-vxk8z" Mar 13 12:40:45.760360 master-0 kubenswrapper[7518]: I0313 12:40:45.759732 7518 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/d39ee5d7-840e-4481-b0b9-baf34da2c7b1-samples-operator-tls\") pod \"cluster-samples-operator-664cb58b85-m5499\" (UID: \"d39ee5d7-840e-4481-b0b9-baf34da2c7b1\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-664cb58b85-m5499" Mar 13 12:40:45.767531 master-0 kubenswrapper[7518]: I0313 12:40:45.762314 7518 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9c04ee08-4018-4cf3-b257-10aff84fa933-config\") pod \"machine-approver-955fcfb87-bffng\" (UID: \"9c04ee08-4018-4cf3-b257-10aff84fa933\") " pod="openshift-cluster-machine-approver/machine-approver-955fcfb87-bffng" Mar 13 12:40:45.767909 master-0 kubenswrapper[7518]: I0313 12:40:45.762778 7518 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/9c04ee08-4018-4cf3-b257-10aff84fa933-auth-proxy-config\") pod \"machine-approver-955fcfb87-bffng\" (UID: \"9c04ee08-4018-4cf3-b257-10aff84fa933\") " pod="openshift-cluster-machine-approver/machine-approver-955fcfb87-bffng" Mar 13 12:40:45.768015 master-0 kubenswrapper[7518]: I0313 12:40:45.763512 7518 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/50a2046b-092b-434c-92a2-579f4462c4fb-service-ca-bundle\") pod \"insights-operator-8f89dfddd-vxk8z\" (UID: \"50a2046b-092b-434c-92a2-579f4462c4fb\") " pod="openshift-insights/insights-operator-8f89dfddd-vxk8z" Mar 13 12:40:45.771451 master-0 kubenswrapper[7518]: I0313 12:40:45.763539 7518 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-5c5f6764b5-96ktp"] Mar 13 12:40:45.778340 master-0 kubenswrapper[7518]: I0313 12:40:45.765366 7518 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/50a2046b-092b-434c-92a2-579f4462c4fb-trusted-ca-bundle\") pod \"insights-operator-8f89dfddd-vxk8z\" (UID: \"50a2046b-092b-434c-92a2-579f4462c4fb\") " pod="openshift-insights/insights-operator-8f89dfddd-vxk8z" Mar 13 12:40:45.778598 master-0 kubenswrapper[7518]: I0313 12:40:45.766405 7518 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/9c04ee08-4018-4cf3-b257-10aff84fa933-machine-approver-tls\") pod \"machine-approver-955fcfb87-bffng\" (UID: \"9c04ee08-4018-4cf3-b257-10aff84fa933\") " pod="openshift-cluster-machine-approver/machine-approver-955fcfb87-bffng" Mar 13 12:40:45.778598 master-0 kubenswrapper[7518]: I0313 12:40:45.767275 7518 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/d39ee5d7-840e-4481-b0b9-baf34da2c7b1-samples-operator-tls\") pod \"cluster-samples-operator-664cb58b85-m5499\" (UID: \"d39ee5d7-840e-4481-b0b9-baf34da2c7b1\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-664cb58b85-m5499" Mar 13 12:40:45.778598 master-0 kubenswrapper[7518]: I0313 12:40:45.767484 7518 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mddhv\" (UniqueName: 
\"kubernetes.io/projected/87a5904a-55ca-416f-8aec-57a2b5194c5a-kube-api-access-mddhv\") pod \"cloud-credential-operator-55d85b7b47-rvp8c\" (UID: \"87a5904a-55ca-416f-8aec-57a2b5194c5a\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-55d85b7b47-rvp8c" Mar 13 12:40:45.778598 master-0 kubenswrapper[7518]: I0313 12:40:45.778472 7518 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cco-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/87a5904a-55ca-416f-8aec-57a2b5194c5a-cco-trusted-ca\") pod \"cloud-credential-operator-55d85b7b47-rvp8c\" (UID: \"87a5904a-55ca-416f-8aec-57a2b5194c5a\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-55d85b7b47-rvp8c" Mar 13 12:40:45.778598 master-0 kubenswrapper[7518]: I0313 12:40:45.778563 7518 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/1c9d5dee-f689-4813-8715-39a6d8ef1a7a-host-etc-kube\") pod \"cluster-cloud-controller-manager-operator-559568b945-9ccr9\" (UID: \"1c9d5dee-f689-4813-8715-39a6d8ef1a7a\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-559568b945-9ccr9" Mar 13 12:40:45.778918 master-0 kubenswrapper[7518]: I0313 12:40:45.778594 7518 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-69hws\" (UniqueName: \"kubernetes.io/projected/d7d67915-d31e-46dc-bb2e-1a6f689dd875-kube-api-access-69hws\") pod \"cluster-storage-operator-6fbfc8dc8f-jhtsp\" (UID: \"d7d67915-d31e-46dc-bb2e-1a6f689dd875\") " pod="openshift-cluster-storage-operator/cluster-storage-operator-6fbfc8dc8f-jhtsp" Mar 13 12:40:45.778918 master-0 kubenswrapper[7518]: I0313 12:40:45.778628 7518 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/1c9d5dee-f689-4813-8715-39a6d8ef1a7a-images\") pod 
\"cluster-cloud-controller-manager-operator-559568b945-9ccr9\" (UID: \"1c9d5dee-f689-4813-8715-39a6d8ef1a7a\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-559568b945-9ccr9" Mar 13 12:40:45.778918 master-0 kubenswrapper[7518]: I0313 12:40:45.778684 7518 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloud-credential-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/87a5904a-55ca-416f-8aec-57a2b5194c5a-cloud-credential-operator-serving-cert\") pod \"cloud-credential-operator-55d85b7b47-rvp8c\" (UID: \"87a5904a-55ca-416f-8aec-57a2b5194c5a\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-55d85b7b47-rvp8c" Mar 13 12:40:45.778918 master-0 kubenswrapper[7518]: I0313 12:40:45.778718 7518 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"snapshots\" (UniqueName: \"kubernetes.io/empty-dir/50a2046b-092b-434c-92a2-579f4462c4fb-snapshots\") pod \"insights-operator-8f89dfddd-vxk8z\" (UID: \"50a2046b-092b-434c-92a2-579f4462c4fb\") " pod="openshift-insights/insights-operator-8f89dfddd-vxk8z" Mar 13 12:40:45.778918 master-0 kubenswrapper[7518]: I0313 12:40:45.778743 7518 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mnpds\" (UniqueName: \"kubernetes.io/projected/50a2046b-092b-434c-92a2-579f4462c4fb-kube-api-access-mnpds\") pod \"insights-operator-8f89dfddd-vxk8z\" (UID: \"50a2046b-092b-434c-92a2-579f4462c4fb\") " pod="openshift-insights/insights-operator-8f89dfddd-vxk8z" Mar 13 12:40:45.779355 master-0 kubenswrapper[7518]: I0313 12:40:45.779319 7518 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-operator-fdb5c78b5-6g8qj"] Mar 13 12:40:45.779989 master-0 kubenswrapper[7518]: I0313 12:40:45.779957 7518 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-fdb5c78b5-6g8qj" Mar 13 12:40:45.780161 master-0 kubenswrapper[7518]: I0313 12:40:45.767660 7518 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/50a2046b-092b-434c-92a2-579f4462c4fb-serving-cert\") pod \"insights-operator-8f89dfddd-vxk8z\" (UID: \"50a2046b-092b-434c-92a2-579f4462c4fb\") " pod="openshift-insights/insights-operator-8f89dfddd-vxk8z" Mar 13 12:40:45.781199 master-0 kubenswrapper[7518]: I0313 12:40:45.780273 7518 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-5c5f6764b5-96ktp" Mar 13 12:40:45.781697 master-0 kubenswrapper[7518]: I0313 12:40:45.781663 7518 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/1c9d5dee-f689-4813-8715-39a6d8ef1a7a-images\") pod \"cluster-cloud-controller-manager-operator-559568b945-9ccr9\" (UID: \"1c9d5dee-f689-4813-8715-39a6d8ef1a7a\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-559568b945-9ccr9" Mar 13 12:40:45.781773 master-0 kubenswrapper[7518]: I0313 12:40:45.781707 7518 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"snapshots\" (UniqueName: \"kubernetes.io/empty-dir/50a2046b-092b-434c-92a2-579f4462c4fb-snapshots\") pod \"insights-operator-8f89dfddd-vxk8z\" (UID: \"50a2046b-092b-434c-92a2-579f4462c4fb\") " pod="openshift-insights/insights-operator-8f89dfddd-vxk8z" Mar 13 12:40:45.781773 master-0 kubenswrapper[7518]: I0313 12:40:45.780947 7518 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/1c9d5dee-f689-4813-8715-39a6d8ef1a7a-host-etc-kube\") pod \"cluster-cloud-controller-manager-operator-559568b945-9ccr9\" (UID: \"1c9d5dee-f689-4813-8715-39a6d8ef1a7a\") " 
pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-559568b945-9ccr9" Mar 13 12:40:45.781773 master-0 kubenswrapper[7518]: I0313 12:40:45.769368 7518 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloud-controller-manager-operator-tls\" (UniqueName: \"kubernetes.io/secret/1c9d5dee-f689-4813-8715-39a6d8ef1a7a-cloud-controller-manager-operator-tls\") pod \"cluster-cloud-controller-manager-operator-559568b945-9ccr9\" (UID: \"1c9d5dee-f689-4813-8715-39a6d8ef1a7a\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-559568b945-9ccr9" Mar 13 12:40:45.781773 master-0 kubenswrapper[7518]: I0313 12:40:45.771537 7518 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/1c9d5dee-f689-4813-8715-39a6d8ef1a7a-auth-proxy-config\") pod \"cluster-cloud-controller-manager-operator-559568b945-9ccr9\" (UID: \"1c9d5dee-f689-4813-8715-39a6d8ef1a7a\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-559568b945-9ccr9" Mar 13 12:40:45.785911 master-0 kubenswrapper[7518]: I0313 12:40:45.785820 7518 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-5c5f6764b5-96ktp"] Mar 13 12:40:45.792749 master-0 kubenswrapper[7518]: I0313 12:40:45.792689 7518 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-fdb5c78b5-6g8qj"] Mar 13 12:40:45.800715 master-0 kubenswrapper[7518]: I0313 12:40:45.800657 7518 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Mar 13 12:40:45.800990 master-0 kubenswrapper[7518]: I0313 12:40:45.800948 7518 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Mar 13 12:40:45.803357 master-0 
kubenswrapper[7518]: I0313 12:40:45.803310 7518 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-59mr8" Mar 13 12:40:45.807415 master-0 kubenswrapper[7518]: I0313 12:40:45.807310 7518 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rvrc7\" (UniqueName: \"kubernetes.io/projected/d39ee5d7-840e-4481-b0b9-baf34da2c7b1-kube-api-access-rvrc7\") pod \"cluster-samples-operator-664cb58b85-m5499\" (UID: \"d39ee5d7-840e-4481-b0b9-baf34da2c7b1\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-664cb58b85-m5499" Mar 13 12:40:45.809358 master-0 kubenswrapper[7518]: I0313 12:40:45.809280 7518 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Mar 13 12:40:45.809721 master-0 kubenswrapper[7518]: I0313 12:40:45.809698 7518 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-58w8f" Mar 13 12:40:45.809799 master-0 kubenswrapper[7518]: I0313 12:40:45.809697 7518 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Mar 13 12:40:45.810083 master-0 kubenswrapper[7518]: I0313 12:40:45.810059 7518 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Mar 13 12:40:45.810592 master-0 kubenswrapper[7518]: I0313 12:40:45.810542 7518 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Mar 13 12:40:45.819925 master-0 kubenswrapper[7518]: I0313 12:40:45.817447 7518 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lkvzr\" (UniqueName: \"kubernetes.io/projected/9c04ee08-4018-4cf3-b257-10aff84fa933-kube-api-access-lkvzr\") pod 
\"machine-approver-955fcfb87-bffng\" (UID: \"9c04ee08-4018-4cf3-b257-10aff84fa933\") " pod="openshift-cluster-machine-approver/machine-approver-955fcfb87-bffng" Mar 13 12:40:45.859494 master-0 kubenswrapper[7518]: I0313 12:40:45.859424 7518 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mnpds\" (UniqueName: \"kubernetes.io/projected/50a2046b-092b-434c-92a2-579f4462c4fb-kube-api-access-mnpds\") pod \"insights-operator-8f89dfddd-vxk8z\" (UID: \"50a2046b-092b-434c-92a2-579f4462c4fb\") " pod="openshift-insights/insights-operator-8f89dfddd-vxk8z" Mar 13 12:40:45.878635 master-0 kubenswrapper[7518]: I0313 12:40:45.878604 7518 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-insights/insights-operator-8f89dfddd-vxk8z" Mar 13 12:40:45.884949 master-0 kubenswrapper[7518]: I0313 12:40:45.884876 7518 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/f6992fed-b472-4a2d-a376-c5d72aa846d4-tmpfs\") pod \"packageserver-5c5f6764b5-96ktp\" (UID: \"f6992fed-b472-4a2d-a376-c5d72aa846d4\") " pod="openshift-operator-lifecycle-manager/packageserver-5c5f6764b5-96ktp" Mar 13 12:40:45.884949 master-0 kubenswrapper[7518]: I0313 12:40:45.884940 7518 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/317af639-269e-4163-8e24-fcea468b9352-config\") pod \"cluster-baremetal-operator-5cdb4c5598-l6jp5\" (UID: \"317af639-269e-4163-8e24-fcea468b9352\") " pod="openshift-machine-api/cluster-baremetal-operator-5cdb4c5598-l6jp5" Mar 13 12:40:45.885464 master-0 kubenswrapper[7518]: I0313 12:40:45.884975 7518 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/f6992fed-b472-4a2d-a376-c5d72aa846d4-webhook-cert\") pod \"packageserver-5c5f6764b5-96ktp\" 
(UID: \"f6992fed-b472-4a2d-a376-c5d72aa846d4\") " pod="openshift-operator-lifecycle-manager/packageserver-5c5f6764b5-96ktp" Mar 13 12:40:45.885464 master-0 kubenswrapper[7518]: I0313 12:40:45.885396 7518 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/317af639-269e-4163-8e24-fcea468b9352-images\") pod \"cluster-baremetal-operator-5cdb4c5598-l6jp5\" (UID: \"317af639-269e-4163-8e24-fcea468b9352\") " pod="openshift-machine-api/cluster-baremetal-operator-5cdb4c5598-l6jp5" Mar 13 12:40:45.885673 master-0 kubenswrapper[7518]: I0313 12:40:45.885442 7518 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/d44112d1-b2a5-4b8d-b74d-1e91638508d5-auth-proxy-config\") pod \"cluster-autoscaler-operator-69576476f7-sqndx\" (UID: \"d44112d1-b2a5-4b8d-b74d-1e91638508d5\") " pod="openshift-machine-api/cluster-autoscaler-operator-69576476f7-sqndx" Mar 13 12:40:45.885916 master-0 kubenswrapper[7518]: I0313 12:40:45.885694 7518 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mkvfp\" (UniqueName: \"kubernetes.io/projected/d47a1118-c12f-4234-8c0f-1a2a47fa8a4f-kube-api-access-mkvfp\") pod \"machine-config-operator-fdb5c78b5-6g8qj\" (UID: \"d47a1118-c12f-4234-8c0f-1a2a47fa8a4f\") " pod="openshift-machine-config-operator/machine-config-operator-fdb5c78b5-6g8qj" Mar 13 12:40:45.886453 master-0 kubenswrapper[7518]: I0313 12:40:45.886392 7518 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/d5f63b6b-990a-444b-a954-d718036f2f6c-machine-api-operator-tls\") pod \"machine-api-operator-84bf6db4f9-mjxcz\" (UID: \"d5f63b6b-990a-444b-a954-d718036f2f6c\") " pod="openshift-machine-api/machine-api-operator-84bf6db4f9-mjxcz" Mar 13 12:40:45.886453 master-0 
kubenswrapper[7518]: I0313 12:40:45.886446 7518 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-storage-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/d7d67915-d31e-46dc-bb2e-1a6f689dd875-cluster-storage-operator-serving-cert\") pod \"cluster-storage-operator-6fbfc8dc8f-jhtsp\" (UID: \"d7d67915-d31e-46dc-bb2e-1a6f689dd875\") " pod="openshift-cluster-storage-operator/cluster-storage-operator-6fbfc8dc8f-jhtsp" Mar 13 12:40:45.886579 master-0 kubenswrapper[7518]: I0313 12:40:45.886476 7518 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/d5f63b6b-990a-444b-a954-d718036f2f6c-images\") pod \"machine-api-operator-84bf6db4f9-mjxcz\" (UID: \"d5f63b6b-990a-444b-a954-d718036f2f6c\") " pod="openshift-machine-api/machine-api-operator-84bf6db4f9-mjxcz" Mar 13 12:40:45.886579 master-0 kubenswrapper[7518]: I0313 12:40:45.886525 7518 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mddhv\" (UniqueName: \"kubernetes.io/projected/87a5904a-55ca-416f-8aec-57a2b5194c5a-kube-api-access-mddhv\") pod \"cloud-credential-operator-55d85b7b47-rvp8c\" (UID: \"87a5904a-55ca-416f-8aec-57a2b5194c5a\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-55d85b7b47-rvp8c" Mar 13 12:40:45.886579 master-0 kubenswrapper[7518]: I0313 12:40:45.886553 7518 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cco-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/87a5904a-55ca-416f-8aec-57a2b5194c5a-cco-trusted-ca\") pod \"cloud-credential-operator-55d85b7b47-rvp8c\" (UID: \"87a5904a-55ca-416f-8aec-57a2b5194c5a\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-55d85b7b47-rvp8c" Mar 13 12:40:45.886579 master-0 kubenswrapper[7518]: I0313 12:40:45.886577 7518 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"cluster-baremetal-operator-tls\" (UniqueName: \"kubernetes.io/secret/317af639-269e-4163-8e24-fcea468b9352-cluster-baremetal-operator-tls\") pod \"cluster-baremetal-operator-5cdb4c5598-l6jp5\" (UID: \"317af639-269e-4163-8e24-fcea468b9352\") " pod="openshift-machine-api/cluster-baremetal-operator-5cdb4c5598-l6jp5" Mar 13 12:40:45.886783 master-0 kubenswrapper[7518]: I0313 12:40:45.886622 7518 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-69hws\" (UniqueName: \"kubernetes.io/projected/d7d67915-d31e-46dc-bb2e-1a6f689dd875-kube-api-access-69hws\") pod \"cluster-storage-operator-6fbfc8dc8f-jhtsp\" (UID: \"d7d67915-d31e-46dc-bb2e-1a6f689dd875\") " pod="openshift-cluster-storage-operator/cluster-storage-operator-6fbfc8dc8f-jhtsp" Mar 13 12:40:45.886783 master-0 kubenswrapper[7518]: I0313 12:40:45.886650 7518 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/d47a1118-c12f-4234-8c0f-1a2a47fa8a4f-images\") pod \"machine-config-operator-fdb5c78b5-6g8qj\" (UID: \"d47a1118-c12f-4234-8c0f-1a2a47fa8a4f\") " pod="openshift-machine-config-operator/machine-config-operator-fdb5c78b5-6g8qj" Mar 13 12:40:45.886783 master-0 kubenswrapper[7518]: I0313 12:40:45.886674 7518 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloud-credential-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/87a5904a-55ca-416f-8aec-57a2b5194c5a-cloud-credential-operator-serving-cert\") pod \"cloud-credential-operator-55d85b7b47-rvp8c\" (UID: \"87a5904a-55ca-416f-8aec-57a2b5194c5a\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-55d85b7b47-rvp8c" Mar 13 12:40:45.886783 master-0 kubenswrapper[7518]: I0313 12:40:45.886699 7518 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/317af639-269e-4163-8e24-fcea468b9352-cert\") 
pod \"cluster-baremetal-operator-5cdb4c5598-l6jp5\" (UID: \"317af639-269e-4163-8e24-fcea468b9352\") " pod="openshift-machine-api/cluster-baremetal-operator-5cdb4c5598-l6jp5" Mar 13 12:40:45.886783 master-0 kubenswrapper[7518]: I0313 12:40:45.886723 7518 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4n75n\" (UniqueName: \"kubernetes.io/projected/f6992fed-b472-4a2d-a376-c5d72aa846d4-kube-api-access-4n75n\") pod \"packageserver-5c5f6764b5-96ktp\" (UID: \"f6992fed-b472-4a2d-a376-c5d72aa846d4\") " pod="openshift-operator-lifecycle-manager/packageserver-5c5f6764b5-96ktp" Mar 13 12:40:45.886783 master-0 kubenswrapper[7518]: I0313 12:40:45.886779 7518 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4v66x\" (UniqueName: \"kubernetes.io/projected/317af639-269e-4163-8e24-fcea468b9352-kube-api-access-4v66x\") pod \"cluster-baremetal-operator-5cdb4c5598-l6jp5\" (UID: \"317af639-269e-4163-8e24-fcea468b9352\") " pod="openshift-machine-api/cluster-baremetal-operator-5cdb4c5598-l6jp5" Mar 13 12:40:45.887023 master-0 kubenswrapper[7518]: I0313 12:40:45.886803 7518 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/f6992fed-b472-4a2d-a376-c5d72aa846d4-apiservice-cert\") pod \"packageserver-5c5f6764b5-96ktp\" (UID: \"f6992fed-b472-4a2d-a376-c5d72aa846d4\") " pod="openshift-operator-lifecycle-manager/packageserver-5c5f6764b5-96ktp" Mar 13 12:40:45.888607 master-0 kubenswrapper[7518]: I0313 12:40:45.888163 7518 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/d47a1118-c12f-4234-8c0f-1a2a47fa8a4f-proxy-tls\") pod \"machine-config-operator-fdb5c78b5-6g8qj\" (UID: \"d47a1118-c12f-4234-8c0f-1a2a47fa8a4f\") " 
pod="openshift-machine-config-operator/machine-config-operator-fdb5c78b5-6g8qj" Mar 13 12:40:45.888607 master-0 kubenswrapper[7518]: I0313 12:40:45.888196 7518 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/747659a6-4a1e-43ed-bb8e-36da6e63b5a1-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-6686554ddc-btz8w\" (UID: \"747659a6-4a1e-43ed-bb8e-36da6e63b5a1\") " pod="openshift-machine-api/control-plane-machine-set-operator-6686554ddc-btz8w" Mar 13 12:40:45.888607 master-0 kubenswrapper[7518]: I0313 12:40:45.888225 7518 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qxcvd\" (UniqueName: \"kubernetes.io/projected/747659a6-4a1e-43ed-bb8e-36da6e63b5a1-kube-api-access-qxcvd\") pod \"control-plane-machine-set-operator-6686554ddc-btz8w\" (UID: \"747659a6-4a1e-43ed-bb8e-36da6e63b5a1\") " pod="openshift-machine-api/control-plane-machine-set-operator-6686554ddc-btz8w" Mar 13 12:40:45.888607 master-0 kubenswrapper[7518]: I0313 12:40:45.888270 7518 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-crc29\" (UniqueName: \"kubernetes.io/projected/1c9d5dee-f689-4813-8715-39a6d8ef1a7a-kube-api-access-crc29\") pod \"cluster-cloud-controller-manager-operator-559568b945-9ccr9\" (UID: \"1c9d5dee-f689-4813-8715-39a6d8ef1a7a\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-559568b945-9ccr9" Mar 13 12:40:45.888607 master-0 kubenswrapper[7518]: I0313 12:40:45.888275 7518 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/d44112d1-b2a5-4b8d-b74d-1e91638508d5-auth-proxy-config\") pod \"cluster-autoscaler-operator-69576476f7-sqndx\" (UID: \"d44112d1-b2a5-4b8d-b74d-1e91638508d5\") " 
pod="openshift-machine-api/cluster-autoscaler-operator-69576476f7-sqndx" Mar 13 12:40:45.888607 master-0 kubenswrapper[7518]: I0313 12:40:45.888287 7518 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rw27v\" (UniqueName: \"kubernetes.io/projected/d5f63b6b-990a-444b-a954-d718036f2f6c-kube-api-access-rw27v\") pod \"machine-api-operator-84bf6db4f9-mjxcz\" (UID: \"d5f63b6b-990a-444b-a954-d718036f2f6c\") " pod="openshift-machine-api/machine-api-operator-84bf6db4f9-mjxcz" Mar 13 12:40:45.888607 master-0 kubenswrapper[7518]: I0313 12:40:45.888371 7518 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/d44112d1-b2a5-4b8d-b74d-1e91638508d5-cert\") pod \"cluster-autoscaler-operator-69576476f7-sqndx\" (UID: \"d44112d1-b2a5-4b8d-b74d-1e91638508d5\") " pod="openshift-machine-api/cluster-autoscaler-operator-69576476f7-sqndx" Mar 13 12:40:45.888607 master-0 kubenswrapper[7518]: I0313 12:40:45.888432 7518 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d5f63b6b-990a-444b-a954-d718036f2f6c-config\") pod \"machine-api-operator-84bf6db4f9-mjxcz\" (UID: \"d5f63b6b-990a-444b-a954-d718036f2f6c\") " pod="openshift-machine-api/machine-api-operator-84bf6db4f9-mjxcz" Mar 13 12:40:45.889667 master-0 kubenswrapper[7518]: I0313 12:40:45.888642 7518 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tdlrq\" (UniqueName: \"kubernetes.io/projected/d44112d1-b2a5-4b8d-b74d-1e91638508d5-kube-api-access-tdlrq\") pod \"cluster-autoscaler-operator-69576476f7-sqndx\" (UID: \"d44112d1-b2a5-4b8d-b74d-1e91638508d5\") " pod="openshift-machine-api/cluster-autoscaler-operator-69576476f7-sqndx" Mar 13 12:40:45.889667 master-0 kubenswrapper[7518]: I0313 12:40:45.888664 7518 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/d47a1118-c12f-4234-8c0f-1a2a47fa8a4f-auth-proxy-config\") pod \"machine-config-operator-fdb5c78b5-6g8qj\" (UID: \"d47a1118-c12f-4234-8c0f-1a2a47fa8a4f\") " pod="openshift-machine-config-operator/machine-config-operator-fdb5c78b5-6g8qj" Mar 13 12:40:45.891715 master-0 kubenswrapper[7518]: I0313 12:40:45.891622 7518 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cco-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/87a5904a-55ca-416f-8aec-57a2b5194c5a-cco-trusted-ca\") pod \"cloud-credential-operator-55d85b7b47-rvp8c\" (UID: \"87a5904a-55ca-416f-8aec-57a2b5194c5a\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-55d85b7b47-rvp8c" Mar 13 12:40:45.894664 master-0 kubenswrapper[7518]: I0313 12:40:45.893381 7518 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cluster-storage-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/d7d67915-d31e-46dc-bb2e-1a6f689dd875-cluster-storage-operator-serving-cert\") pod \"cluster-storage-operator-6fbfc8dc8f-jhtsp\" (UID: \"d7d67915-d31e-46dc-bb2e-1a6f689dd875\") " pod="openshift-cluster-storage-operator/cluster-storage-operator-6fbfc8dc8f-jhtsp" Mar 13 12:40:45.899033 master-0 kubenswrapper[7518]: I0313 12:40:45.898986 7518 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/d44112d1-b2a5-4b8d-b74d-1e91638508d5-cert\") pod \"cluster-autoscaler-operator-69576476f7-sqndx\" (UID: \"d44112d1-b2a5-4b8d-b74d-1e91638508d5\") " pod="openshift-machine-api/cluster-autoscaler-operator-69576476f7-sqndx" Mar 13 12:40:45.899570 master-0 kubenswrapper[7518]: I0313 12:40:45.899529 7518 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloud-credential-operator-serving-cert\" (UniqueName: 
\"kubernetes.io/secret/87a5904a-55ca-416f-8aec-57a2b5194c5a-cloud-credential-operator-serving-cert\") pod \"cloud-credential-operator-55d85b7b47-rvp8c\" (UID: \"87a5904a-55ca-416f-8aec-57a2b5194c5a\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-55d85b7b47-rvp8c" Mar 13 12:40:45.907521 master-0 kubenswrapper[7518]: I0313 12:40:45.907070 7518 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mddhv\" (UniqueName: \"kubernetes.io/projected/87a5904a-55ca-416f-8aec-57a2b5194c5a-kube-api-access-mddhv\") pod \"cloud-credential-operator-55d85b7b47-rvp8c\" (UID: \"87a5904a-55ca-416f-8aec-57a2b5194c5a\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-55d85b7b47-rvp8c" Mar 13 12:40:45.908252 master-0 kubenswrapper[7518]: I0313 12:40:45.908216 7518 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-crc29\" (UniqueName: \"kubernetes.io/projected/1c9d5dee-f689-4813-8715-39a6d8ef1a7a-kube-api-access-crc29\") pod \"cluster-cloud-controller-manager-operator-559568b945-9ccr9\" (UID: \"1c9d5dee-f689-4813-8715-39a6d8ef1a7a\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-559568b945-9ccr9" Mar 13 12:40:45.910966 master-0 kubenswrapper[7518]: I0313 12:40:45.910903 7518 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tdlrq\" (UniqueName: \"kubernetes.io/projected/d44112d1-b2a5-4b8d-b74d-1e91638508d5-kube-api-access-tdlrq\") pod \"cluster-autoscaler-operator-69576476f7-sqndx\" (UID: \"d44112d1-b2a5-4b8d-b74d-1e91638508d5\") " pod="openshift-machine-api/cluster-autoscaler-operator-69576476f7-sqndx" Mar 13 12:40:45.928778 master-0 kubenswrapper[7518]: I0313 12:40:45.928656 7518 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-69hws\" (UniqueName: \"kubernetes.io/projected/d7d67915-d31e-46dc-bb2e-1a6f689dd875-kube-api-access-69hws\") pod 
\"cluster-storage-operator-6fbfc8dc8f-jhtsp\" (UID: \"d7d67915-d31e-46dc-bb2e-1a6f689dd875\") " pod="openshift-cluster-storage-operator/cluster-storage-operator-6fbfc8dc8f-jhtsp" Mar 13 12:40:45.932561 master-0 kubenswrapper[7518]: I0313 12:40:45.932527 7518 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-559568b945-9ccr9" Mar 13 12:40:45.969527 master-0 kubenswrapper[7518]: I0313 12:40:45.969082 7518 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/cluster-autoscaler-operator-69576476f7-sqndx" Mar 13 12:40:45.982875 master-0 kubenswrapper[7518]: W0313 12:40:45.982750 7518 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1c9d5dee_f689_4813_8715_39a6d8ef1a7a.slice/crio-fcaed5aa41ed958d76ede9d948f637b929080d2798f592c0ea92d4ab32d2fb01 WatchSource:0}: Error finding container fcaed5aa41ed958d76ede9d948f637b929080d2798f592c0ea92d4ab32d2fb01: Status 404 returned error can't find the container with id fcaed5aa41ed958d76ede9d948f637b929080d2798f592c0ea92d4ab32d2fb01 Mar 13 12:40:45.986735 master-0 kubenswrapper[7518]: I0313 12:40:45.986713 7518 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-664cb58b85-m5499" Mar 13 12:40:45.990936 master-0 kubenswrapper[7518]: I0313 12:40:45.990424 7518 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/317af639-269e-4163-8e24-fcea468b9352-config\") pod \"cluster-baremetal-operator-5cdb4c5598-l6jp5\" (UID: \"317af639-269e-4163-8e24-fcea468b9352\") " pod="openshift-machine-api/cluster-baremetal-operator-5cdb4c5598-l6jp5" Mar 13 12:40:45.990936 master-0 kubenswrapper[7518]: I0313 12:40:45.990470 7518 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/f6992fed-b472-4a2d-a376-c5d72aa846d4-webhook-cert\") pod \"packageserver-5c5f6764b5-96ktp\" (UID: \"f6992fed-b472-4a2d-a376-c5d72aa846d4\") " pod="openshift-operator-lifecycle-manager/packageserver-5c5f6764b5-96ktp" Mar 13 12:40:45.990936 master-0 kubenswrapper[7518]: I0313 12:40:45.990490 7518 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/317af639-269e-4163-8e24-fcea468b9352-images\") pod \"cluster-baremetal-operator-5cdb4c5598-l6jp5\" (UID: \"317af639-269e-4163-8e24-fcea468b9352\") " pod="openshift-machine-api/cluster-baremetal-operator-5cdb4c5598-l6jp5" Mar 13 12:40:45.990936 master-0 kubenswrapper[7518]: I0313 12:40:45.990513 7518 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mkvfp\" (UniqueName: \"kubernetes.io/projected/d47a1118-c12f-4234-8c0f-1a2a47fa8a4f-kube-api-access-mkvfp\") pod \"machine-config-operator-fdb5c78b5-6g8qj\" (UID: \"d47a1118-c12f-4234-8c0f-1a2a47fa8a4f\") " pod="openshift-machine-config-operator/machine-config-operator-fdb5c78b5-6g8qj" Mar 13 12:40:45.990936 master-0 kubenswrapper[7518]: I0313 12:40:45.990532 7518 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/d5f63b6b-990a-444b-a954-d718036f2f6c-machine-api-operator-tls\") pod \"machine-api-operator-84bf6db4f9-mjxcz\" (UID: \"d5f63b6b-990a-444b-a954-d718036f2f6c\") " pod="openshift-machine-api/machine-api-operator-84bf6db4f9-mjxcz" Mar 13 12:40:45.990936 master-0 kubenswrapper[7518]: I0313 12:40:45.990549 7518 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/d5f63b6b-990a-444b-a954-d718036f2f6c-images\") pod \"machine-api-operator-84bf6db4f9-mjxcz\" (UID: \"d5f63b6b-990a-444b-a954-d718036f2f6c\") " pod="openshift-machine-api/machine-api-operator-84bf6db4f9-mjxcz" Mar 13 12:40:45.990936 master-0 kubenswrapper[7518]: I0313 12:40:45.990637 7518 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-baremetal-operator-tls\" (UniqueName: \"kubernetes.io/secret/317af639-269e-4163-8e24-fcea468b9352-cluster-baremetal-operator-tls\") pod \"cluster-baremetal-operator-5cdb4c5598-l6jp5\" (UID: \"317af639-269e-4163-8e24-fcea468b9352\") " pod="openshift-machine-api/cluster-baremetal-operator-5cdb4c5598-l6jp5" Mar 13 12:40:45.990936 master-0 kubenswrapper[7518]: I0313 12:40:45.990661 7518 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/d47a1118-c12f-4234-8c0f-1a2a47fa8a4f-images\") pod \"machine-config-operator-fdb5c78b5-6g8qj\" (UID: \"d47a1118-c12f-4234-8c0f-1a2a47fa8a4f\") " pod="openshift-machine-config-operator/machine-config-operator-fdb5c78b5-6g8qj" Mar 13 12:40:45.990936 master-0 kubenswrapper[7518]: I0313 12:40:45.990686 7518 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/317af639-269e-4163-8e24-fcea468b9352-cert\") pod \"cluster-baremetal-operator-5cdb4c5598-l6jp5\" (UID: \"317af639-269e-4163-8e24-fcea468b9352\") " 
pod="openshift-machine-api/cluster-baremetal-operator-5cdb4c5598-l6jp5" Mar 13 12:40:45.990936 master-0 kubenswrapper[7518]: I0313 12:40:45.990701 7518 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4n75n\" (UniqueName: \"kubernetes.io/projected/f6992fed-b472-4a2d-a376-c5d72aa846d4-kube-api-access-4n75n\") pod \"packageserver-5c5f6764b5-96ktp\" (UID: \"f6992fed-b472-4a2d-a376-c5d72aa846d4\") " pod="openshift-operator-lifecycle-manager/packageserver-5c5f6764b5-96ktp" Mar 13 12:40:45.992340 master-0 kubenswrapper[7518]: I0313 12:40:45.992309 7518 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4v66x\" (UniqueName: \"kubernetes.io/projected/317af639-269e-4163-8e24-fcea468b9352-kube-api-access-4v66x\") pod \"cluster-baremetal-operator-5cdb4c5598-l6jp5\" (UID: \"317af639-269e-4163-8e24-fcea468b9352\") " pod="openshift-machine-api/cluster-baremetal-operator-5cdb4c5598-l6jp5" Mar 13 12:40:45.992416 master-0 kubenswrapper[7518]: I0313 12:40:45.992351 7518 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/f6992fed-b472-4a2d-a376-c5d72aa846d4-apiservice-cert\") pod \"packageserver-5c5f6764b5-96ktp\" (UID: \"f6992fed-b472-4a2d-a376-c5d72aa846d4\") " pod="openshift-operator-lifecycle-manager/packageserver-5c5f6764b5-96ktp" Mar 13 12:40:45.994163 master-0 kubenswrapper[7518]: I0313 12:40:45.994077 7518 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/d5f63b6b-990a-444b-a954-d718036f2f6c-images\") pod \"machine-api-operator-84bf6db4f9-mjxcz\" (UID: \"d5f63b6b-990a-444b-a954-d718036f2f6c\") " pod="openshift-machine-api/machine-api-operator-84bf6db4f9-mjxcz" Mar 13 12:40:45.994294 master-0 kubenswrapper[7518]: I0313 12:40:45.994248 7518 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/317af639-269e-4163-8e24-fcea468b9352-config\") pod \"cluster-baremetal-operator-5cdb4c5598-l6jp5\" (UID: \"317af639-269e-4163-8e24-fcea468b9352\") " pod="openshift-machine-api/cluster-baremetal-operator-5cdb4c5598-l6jp5" Mar 13 12:40:45.994294 master-0 kubenswrapper[7518]: I0313 12:40:45.994276 7518 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/d47a1118-c12f-4234-8c0f-1a2a47fa8a4f-proxy-tls\") pod \"machine-config-operator-fdb5c78b5-6g8qj\" (UID: \"d47a1118-c12f-4234-8c0f-1a2a47fa8a4f\") " pod="openshift-machine-config-operator/machine-config-operator-fdb5c78b5-6g8qj" Mar 13 12:40:45.994372 master-0 kubenswrapper[7518]: I0313 12:40:45.994314 7518 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/747659a6-4a1e-43ed-bb8e-36da6e63b5a1-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-6686554ddc-btz8w\" (UID: \"747659a6-4a1e-43ed-bb8e-36da6e63b5a1\") " pod="openshift-machine-api/control-plane-machine-set-operator-6686554ddc-btz8w" Mar 13 12:40:45.994410 master-0 kubenswrapper[7518]: I0313 12:40:45.994372 7518 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qxcvd\" (UniqueName: \"kubernetes.io/projected/747659a6-4a1e-43ed-bb8e-36da6e63b5a1-kube-api-access-qxcvd\") pod \"control-plane-machine-set-operator-6686554ddc-btz8w\" (UID: \"747659a6-4a1e-43ed-bb8e-36da6e63b5a1\") " pod="openshift-machine-api/control-plane-machine-set-operator-6686554ddc-btz8w" Mar 13 12:40:45.994486 master-0 kubenswrapper[7518]: I0313 12:40:45.994451 7518 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rw27v\" (UniqueName: \"kubernetes.io/projected/d5f63b6b-990a-444b-a954-d718036f2f6c-kube-api-access-rw27v\") pod \"machine-api-operator-84bf6db4f9-mjxcz\" (UID: 
\"d5f63b6b-990a-444b-a954-d718036f2f6c\") " pod="openshift-machine-api/machine-api-operator-84bf6db4f9-mjxcz" Mar 13 12:40:45.994486 master-0 kubenswrapper[7518]: I0313 12:40:45.994483 7518 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d5f63b6b-990a-444b-a954-d718036f2f6c-config\") pod \"machine-api-operator-84bf6db4f9-mjxcz\" (UID: \"d5f63b6b-990a-444b-a954-d718036f2f6c\") " pod="openshift-machine-api/machine-api-operator-84bf6db4f9-mjxcz" Mar 13 12:40:45.994564 master-0 kubenswrapper[7518]: I0313 12:40:45.994517 7518 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/d47a1118-c12f-4234-8c0f-1a2a47fa8a4f-auth-proxy-config\") pod \"machine-config-operator-fdb5c78b5-6g8qj\" (UID: \"d47a1118-c12f-4234-8c0f-1a2a47fa8a4f\") " pod="openshift-machine-config-operator/machine-config-operator-fdb5c78b5-6g8qj" Mar 13 12:40:45.994564 master-0 kubenswrapper[7518]: I0313 12:40:45.994542 7518 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/f6992fed-b472-4a2d-a376-c5d72aa846d4-tmpfs\") pod \"packageserver-5c5f6764b5-96ktp\" (UID: \"f6992fed-b472-4a2d-a376-c5d72aa846d4\") " pod="openshift-operator-lifecycle-manager/packageserver-5c5f6764b5-96ktp" Mar 13 12:40:46.007898 master-0 kubenswrapper[7518]: I0313 12:40:46.007840 7518 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/d47a1118-c12f-4234-8c0f-1a2a47fa8a4f-images\") pod \"machine-config-operator-fdb5c78b5-6g8qj\" (UID: \"d47a1118-c12f-4234-8c0f-1a2a47fa8a4f\") " pod="openshift-machine-config-operator/machine-config-operator-fdb5c78b5-6g8qj" Mar 13 12:40:46.008889 master-0 kubenswrapper[7518]: I0313 12:40:46.008848 7518 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: 
\"kubernetes.io/secret/f6992fed-b472-4a2d-a376-c5d72aa846d4-webhook-cert\") pod \"packageserver-5c5f6764b5-96ktp\" (UID: \"f6992fed-b472-4a2d-a376-c5d72aa846d4\") " pod="openshift-operator-lifecycle-manager/packageserver-5c5f6764b5-96ktp" Mar 13 12:40:46.009000 master-0 kubenswrapper[7518]: I0313 12:40:46.008976 7518 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/d5f63b6b-990a-444b-a954-d718036f2f6c-machine-api-operator-tls\") pod \"machine-api-operator-84bf6db4f9-mjxcz\" (UID: \"d5f63b6b-990a-444b-a954-d718036f2f6c\") " pod="openshift-machine-api/machine-api-operator-84bf6db4f9-mjxcz" Mar 13 12:40:46.011032 master-0 kubenswrapper[7518]: I0313 12:40:45.994245 7518 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/317af639-269e-4163-8e24-fcea468b9352-images\") pod \"cluster-baremetal-operator-5cdb4c5598-l6jp5\" (UID: \"317af639-269e-4163-8e24-fcea468b9352\") " pod="openshift-machine-api/cluster-baremetal-operator-5cdb4c5598-l6jp5" Mar 13 12:40:46.011032 master-0 kubenswrapper[7518]: I0313 12:40:46.009296 7518 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cluster-baremetal-operator-tls\" (UniqueName: \"kubernetes.io/secret/317af639-269e-4163-8e24-fcea468b9352-cluster-baremetal-operator-tls\") pod \"cluster-baremetal-operator-5cdb4c5598-l6jp5\" (UID: \"317af639-269e-4163-8e24-fcea468b9352\") " pod="openshift-machine-api/cluster-baremetal-operator-5cdb4c5598-l6jp5" Mar 13 12:40:46.012782 master-0 kubenswrapper[7518]: I0313 12:40:46.012027 7518 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/f6992fed-b472-4a2d-a376-c5d72aa846d4-tmpfs\") pod \"packageserver-5c5f6764b5-96ktp\" (UID: \"f6992fed-b472-4a2d-a376-c5d72aa846d4\") " pod="openshift-operator-lifecycle-manager/packageserver-5c5f6764b5-96ktp" Mar 13 12:40:46.012782 master-0 
kubenswrapper[7518]: I0313 12:40:46.012592 7518 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/d47a1118-c12f-4234-8c0f-1a2a47fa8a4f-auth-proxy-config\") pod \"machine-config-operator-fdb5c78b5-6g8qj\" (UID: \"d47a1118-c12f-4234-8c0f-1a2a47fa8a4f\") " pod="openshift-machine-config-operator/machine-config-operator-fdb5c78b5-6g8qj" Mar 13 12:40:46.012966 master-0 kubenswrapper[7518]: I0313 12:40:46.012935 7518 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d5f63b6b-990a-444b-a954-d718036f2f6c-config\") pod \"machine-api-operator-84bf6db4f9-mjxcz\" (UID: \"d5f63b6b-990a-444b-a954-d718036f2f6c\") " pod="openshift-machine-api/machine-api-operator-84bf6db4f9-mjxcz" Mar 13 12:40:46.013024 master-0 kubenswrapper[7518]: I0313 12:40:46.012981 7518 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mkvfp\" (UniqueName: \"kubernetes.io/projected/d47a1118-c12f-4234-8c0f-1a2a47fa8a4f-kube-api-access-mkvfp\") pod \"machine-config-operator-fdb5c78b5-6g8qj\" (UID: \"d47a1118-c12f-4234-8c0f-1a2a47fa8a4f\") " pod="openshift-machine-config-operator/machine-config-operator-fdb5c78b5-6g8qj" Mar 13 12:40:46.013253 master-0 kubenswrapper[7518]: I0313 12:40:46.013122 7518 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4n75n\" (UniqueName: \"kubernetes.io/projected/f6992fed-b472-4a2d-a376-c5d72aa846d4-kube-api-access-4n75n\") pod \"packageserver-5c5f6764b5-96ktp\" (UID: \"f6992fed-b472-4a2d-a376-c5d72aa846d4\") " pod="openshift-operator-lifecycle-manager/packageserver-5c5f6764b5-96ktp" Mar 13 12:40:46.013338 master-0 kubenswrapper[7518]: I0313 12:40:46.010839 7518 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/317af639-269e-4163-8e24-fcea468b9352-cert\") pod 
\"cluster-baremetal-operator-5cdb4c5598-l6jp5\" (UID: \"317af639-269e-4163-8e24-fcea468b9352\") " pod="openshift-machine-api/cluster-baremetal-operator-5cdb4c5598-l6jp5" Mar 13 12:40:46.028239 master-0 kubenswrapper[7518]: I0313 12:40:46.028175 7518 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/f6992fed-b472-4a2d-a376-c5d72aa846d4-apiservice-cert\") pod \"packageserver-5c5f6764b5-96ktp\" (UID: \"f6992fed-b472-4a2d-a376-c5d72aa846d4\") " pod="openshift-operator-lifecycle-manager/packageserver-5c5f6764b5-96ktp" Mar 13 12:40:46.028974 master-0 kubenswrapper[7518]: I0313 12:40:46.028804 7518 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4v66x\" (UniqueName: \"kubernetes.io/projected/317af639-269e-4163-8e24-fcea468b9352-kube-api-access-4v66x\") pod \"cluster-baremetal-operator-5cdb4c5598-l6jp5\" (UID: \"317af639-269e-4163-8e24-fcea468b9352\") " pod="openshift-machine-api/cluster-baremetal-operator-5cdb4c5598-l6jp5" Mar 13 12:40:46.031772 master-0 kubenswrapper[7518]: I0313 12:40:46.031697 7518 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rw27v\" (UniqueName: \"kubernetes.io/projected/d5f63b6b-990a-444b-a954-d718036f2f6c-kube-api-access-rw27v\") pod \"machine-api-operator-84bf6db4f9-mjxcz\" (UID: \"d5f63b6b-990a-444b-a954-d718036f2f6c\") " pod="openshift-machine-api/machine-api-operator-84bf6db4f9-mjxcz" Mar 13 12:40:46.038556 master-0 kubenswrapper[7518]: I0313 12:40:46.038502 7518 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/747659a6-4a1e-43ed-bb8e-36da6e63b5a1-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-6686554ddc-btz8w\" (UID: \"747659a6-4a1e-43ed-bb8e-36da6e63b5a1\") " pod="openshift-machine-api/control-plane-machine-set-operator-6686554ddc-btz8w" Mar 13 12:40:46.039726 
master-0 kubenswrapper[7518]: I0313 12:40:46.038578 7518 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/d47a1118-c12f-4234-8c0f-1a2a47fa8a4f-proxy-tls\") pod \"machine-config-operator-fdb5c78b5-6g8qj\" (UID: \"d47a1118-c12f-4234-8c0f-1a2a47fa8a4f\") " pod="openshift-machine-config-operator/machine-config-operator-fdb5c78b5-6g8qj" Mar 13 12:40:46.039726 master-0 kubenswrapper[7518]: I0313 12:40:46.039439 7518 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qxcvd\" (UniqueName: \"kubernetes.io/projected/747659a6-4a1e-43ed-bb8e-36da6e63b5a1-kube-api-access-qxcvd\") pod \"control-plane-machine-set-operator-6686554ddc-btz8w\" (UID: \"747659a6-4a1e-43ed-bb8e-36da6e63b5a1\") " pod="openshift-machine-api/control-plane-machine-set-operator-6686554ddc-btz8w" Mar 13 12:40:46.055988 master-0 kubenswrapper[7518]: I0313 12:40:46.055920 7518 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-955fcfb87-bffng" Mar 13 12:40:46.072275 master-0 kubenswrapper[7518]: I0313 12:40:46.071554 7518 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-storage-operator/cluster-storage-operator-6fbfc8dc8f-jhtsp" Mar 13 12:40:46.125350 master-0 kubenswrapper[7518]: W0313 12:40:46.125264 7518 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9c04ee08_4018_4cf3_b257_10aff84fa933.slice/crio-23393a7b4a00bfe83c72caa5d705a971138b778ff4160559e2fc7fe8054bd78a WatchSource:0}: Error finding container 23393a7b4a00bfe83c72caa5d705a971138b778ff4160559e2fc7fe8054bd78a: Status 404 returned error can't find the container with id 23393a7b4a00bfe83c72caa5d705a971138b778ff4160559e2fc7fe8054bd78a Mar 13 12:40:46.145665 master-0 kubenswrapper[7518]: I0313 12:40:46.145428 7518 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cloud-credential-operator/cloud-credential-operator-55d85b7b47-rvp8c" Mar 13 12:40:46.206944 master-0 kubenswrapper[7518]: I0313 12:40:46.206810 7518 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-6686554ddc-btz8w" Mar 13 12:40:46.231100 master-0 kubenswrapper[7518]: I0313 12:40:46.230485 7518 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/cluster-baremetal-operator-5cdb4c5598-l6jp5" Mar 13 12:40:46.256621 master-0 kubenswrapper[7518]: I0313 12:40:46.255694 7518 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/machine-api-operator-84bf6db4f9-mjxcz" Mar 13 12:40:46.282529 master-0 kubenswrapper[7518]: I0313 12:40:46.278396 7518 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-559568b945-9ccr9" event={"ID":"1c9d5dee-f689-4813-8715-39a6d8ef1a7a","Type":"ContainerStarted","Data":"fcaed5aa41ed958d76ede9d948f637b929080d2798f592c0ea92d4ab32d2fb01"} Mar 13 12:40:46.285806 master-0 kubenswrapper[7518]: I0313 12:40:46.285169 7518 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-955fcfb87-bffng" event={"ID":"9c04ee08-4018-4cf3-b257-10aff84fa933","Type":"ContainerStarted","Data":"23393a7b4a00bfe83c72caa5d705a971138b778ff4160559e2fc7fe8054bd78a"} Mar 13 12:40:46.293210 master-0 kubenswrapper[7518]: I0313 12:40:46.292154 7518 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-fdb5c78b5-6g8qj" Mar 13 12:40:46.329483 master-0 kubenswrapper[7518]: I0313 12:40:46.309786 7518 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-5c5f6764b5-96ktp" Mar 13 12:40:46.393610 master-0 kubenswrapper[7518]: I0313 12:40:46.393562 7518 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-insights/insights-operator-8f89dfddd-vxk8z"] Mar 13 12:40:46.576759 master-0 kubenswrapper[7518]: I0313 12:40:46.576706 7518 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/cluster-autoscaler-operator-69576476f7-sqndx"] Mar 13 12:40:46.600095 master-0 kubenswrapper[7518]: W0313 12:40:46.600050 7518 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd44112d1_b2a5_4b8d_b74d_1e91638508d5.slice/crio-36730131d5c09051d26cf3e4a543df7abc5397cb1ce5ef8363c603313b0f97b0 WatchSource:0}: Error finding container 36730131d5c09051d26cf3e4a543df7abc5397cb1ce5ef8363c603313b0f97b0: Status 404 returned error can't find the container with id 36730131d5c09051d26cf3e4a543df7abc5397cb1ce5ef8363c603313b0f97b0 Mar 13 12:40:46.675782 master-0 kubenswrapper[7518]: I0313 12:40:46.675721 7518 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-664cb58b85-m5499"] Mar 13 12:40:46.736323 master-0 kubenswrapper[7518]: I0313 12:40:46.735400 7518 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-storage-operator/cluster-storage-operator-6fbfc8dc8f-jhtsp"] Mar 13 12:40:46.763602 master-0 kubenswrapper[7518]: I0313 12:40:46.763551 7518 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/cluster-baremetal-operator-5cdb4c5598-l6jp5"] Mar 13 12:40:47.012402 master-0 kubenswrapper[7518]: I0313 12:40:47.010079 7518 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-84bf6db4f9-mjxcz"] Mar 13 12:40:47.013898 master-0 kubenswrapper[7518]: I0313 12:40:47.013832 7518 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-cloud-credential-operator/cloud-credential-operator-55d85b7b47-rvp8c"] Mar 13 12:40:47.018709 master-0 kubenswrapper[7518]: W0313 12:40:47.018662 7518 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd5f63b6b_990a_444b_a954_d718036f2f6c.slice/crio-02065d5b43e51a34d865fcf740815dfc300cc50dd65b4465588c2f46e47c4755 WatchSource:0}: Error finding container 02065d5b43e51a34d865fcf740815dfc300cc50dd65b4465588c2f46e47c4755: Status 404 returned error can't find the container with id 02065d5b43e51a34d865fcf740815dfc300cc50dd65b4465588c2f46e47c4755 Mar 13 12:40:47.024282 master-0 kubenswrapper[7518]: W0313 12:40:47.022489 7518 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod87a5904a_55ca_416f_8aec_57a2b5194c5a.slice/crio-b699b1831b9f5250a8ce5ada14edbc693482d02c81ce7cd3de76c7bdd381af20 WatchSource:0}: Error finding container b699b1831b9f5250a8ce5ada14edbc693482d02c81ce7cd3de76c7bdd381af20: Status 404 returned error can't find the container with id b699b1831b9f5250a8ce5ada14edbc693482d02c81ce7cd3de76c7bdd381af20 Mar 13 12:40:47.032712 master-0 kubenswrapper[7518]: I0313 12:40:47.032644 7518 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-fdb5c78b5-6g8qj"] Mar 13 12:40:47.064684 master-0 kubenswrapper[7518]: I0313 12:40:47.064633 7518 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-6686554ddc-btz8w"] Mar 13 12:40:47.088211 master-0 kubenswrapper[7518]: W0313 12:40:47.088086 7518 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod747659a6_4a1e_43ed_bb8e_36da6e63b5a1.slice/crio-d6d4028b66b05354ce39cae63e764e8ed5f2304a82f8cd6cbd59c6a8537a5bed WatchSource:0}: Error finding container 
d6d4028b66b05354ce39cae63e764e8ed5f2304a82f8cd6cbd59c6a8537a5bed: Status 404 returned error can't find the container with id d6d4028b66b05354ce39cae63e764e8ed5f2304a82f8cd6cbd59c6a8537a5bed Mar 13 12:40:47.227894 master-0 kubenswrapper[7518]: I0313 12:40:47.227694 7518 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-5c5f6764b5-96ktp"] Mar 13 12:40:47.295526 master-0 kubenswrapper[7518]: I0313 12:40:47.295439 7518 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/cluster-storage-operator-6fbfc8dc8f-jhtsp" event={"ID":"d7d67915-d31e-46dc-bb2e-1a6f689dd875","Type":"ContainerStarted","Data":"33eb1753d1610b81e5a24f93d9249c8e3e11614421397b68063a0f4b3b803691"} Mar 13 12:40:47.299192 master-0 kubenswrapper[7518]: I0313 12:40:47.299150 7518 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-credential-operator/cloud-credential-operator-55d85b7b47-rvp8c" event={"ID":"87a5904a-55ca-416f-8aec-57a2b5194c5a","Type":"ContainerStarted","Data":"65480d3584b099697952045362d3d1cf161923ed94d0502786dd56fbe17232b2"} Mar 13 12:40:47.299335 master-0 kubenswrapper[7518]: I0313 12:40:47.299202 7518 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-credential-operator/cloud-credential-operator-55d85b7b47-rvp8c" event={"ID":"87a5904a-55ca-416f-8aec-57a2b5194c5a","Type":"ContainerStarted","Data":"b699b1831b9f5250a8ce5ada14edbc693482d02c81ce7cd3de76c7bdd381af20"} Mar 13 12:40:47.301509 master-0 kubenswrapper[7518]: I0313 12:40:47.301310 7518 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-6686554ddc-btz8w" event={"ID":"747659a6-4a1e-43ed-bb8e-36da6e63b5a1","Type":"ContainerStarted","Data":"d6d4028b66b05354ce39cae63e764e8ed5f2304a82f8cd6cbd59c6a8537a5bed"} Mar 13 12:40:47.302858 master-0 kubenswrapper[7518]: I0313 12:40:47.302207 7518 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-insights/insights-operator-8f89dfddd-vxk8z" event={"ID":"50a2046b-092b-434c-92a2-579f4462c4fb","Type":"ContainerStarted","Data":"4a2e539e0bcc34335d49c02d69347bd6d8232a1bb972540a7de9aececb6d671f"} Mar 13 12:40:47.305390 master-0 kubenswrapper[7518]: I0313 12:40:47.305358 7518 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/cluster-autoscaler-operator-69576476f7-sqndx" event={"ID":"d44112d1-b2a5-4b8d-b74d-1e91638508d5","Type":"ContainerStarted","Data":"d88e6617ab5b759a1d911587006afbeecc26b6d8ebb1d323aec1c5d9ddc1ccf4"} Mar 13 12:40:47.305514 master-0 kubenswrapper[7518]: I0313 12:40:47.305397 7518 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/cluster-autoscaler-operator-69576476f7-sqndx" event={"ID":"d44112d1-b2a5-4b8d-b74d-1e91638508d5","Type":"ContainerStarted","Data":"36730131d5c09051d26cf3e4a543df7abc5397cb1ce5ef8363c603313b0f97b0"} Mar 13 12:40:47.308126 master-0 kubenswrapper[7518]: I0313 12:40:47.308047 7518 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-84bf6db4f9-mjxcz" event={"ID":"d5f63b6b-990a-444b-a954-d718036f2f6c","Type":"ContainerStarted","Data":"1328aeccef745bce72e1ab5770dd72814ca894eae59fa8b24512c232156a9140"} Mar 13 12:40:47.308389 master-0 kubenswrapper[7518]: I0313 12:40:47.308369 7518 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-84bf6db4f9-mjxcz" event={"ID":"d5f63b6b-990a-444b-a954-d718036f2f6c","Type":"ContainerStarted","Data":"02065d5b43e51a34d865fcf740815dfc300cc50dd65b4465588c2f46e47c4755"} Mar 13 12:40:47.310422 master-0 kubenswrapper[7518]: I0313 12:40:47.310392 7518 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-955fcfb87-bffng" event={"ID":"9c04ee08-4018-4cf3-b257-10aff84fa933","Type":"ContainerStarted","Data":"5b7cd563f3784e45e59cf37c881b5b8f9b7e5cf2039e3c23634bce0b52425d70"} Mar 13 12:40:47.311739 
master-0 kubenswrapper[7518]: I0313 12:40:47.311701 7518 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-664cb58b85-m5499" event={"ID":"d39ee5d7-840e-4481-b0b9-baf34da2c7b1","Type":"ContainerStarted","Data":"5d54ffc470f89711bfd74406a6ddbacbe1dd4ef841888f957b998a6253057999"} Mar 13 12:40:47.312743 master-0 kubenswrapper[7518]: I0313 12:40:47.312720 7518 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-fdb5c78b5-6g8qj" event={"ID":"d47a1118-c12f-4234-8c0f-1a2a47fa8a4f","Type":"ContainerStarted","Data":"6b92704fbc97116df7b90609a695c48539a6c6401fd9288883ce4ea92059b841"} Mar 13 12:40:47.314231 master-0 kubenswrapper[7518]: I0313 12:40:47.314101 7518 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/cluster-baremetal-operator-5cdb4c5598-l6jp5" event={"ID":"317af639-269e-4163-8e24-fcea468b9352","Type":"ContainerStarted","Data":"5392be7ab4e8fd67e380477649b224dee24aa1e239336e87f916d5fb0198c7d5"} Mar 13 12:40:51.920360 master-0 kubenswrapper[7518]: W0313 12:40:51.920284 7518 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf6992fed_b472_4a2d_a376_c5d72aa846d4.slice/crio-c877a6a31ad16c9b3f6d1a10e247940a86d22f389ab82d4b655a52c5c8ebc0a4 WatchSource:0}: Error finding container c877a6a31ad16c9b3f6d1a10e247940a86d22f389ab82d4b655a52c5c8ebc0a4: Status 404 returned error can't find the container with id c877a6a31ad16c9b3f6d1a10e247940a86d22f389ab82d4b655a52c5c8ebc0a4 Mar 13 12:40:52.514781 master-0 kubenswrapper[7518]: I0313 12:40:52.514733 7518 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-5c5f6764b5-96ktp" event={"ID":"f6992fed-b472-4a2d-a376-c5d72aa846d4","Type":"ContainerStarted","Data":"c877a6a31ad16c9b3f6d1a10e247940a86d22f389ab82d4b655a52c5c8ebc0a4"} Mar 13 12:40:52.516340 master-0 
kubenswrapper[7518]: I0313 12:40:52.516293 7518 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-fdb5c78b5-6g8qj" event={"ID":"d47a1118-c12f-4234-8c0f-1a2a47fa8a4f","Type":"ContainerStarted","Data":"f651f87ff531c82cf300379fcb01d86f8ea9306940ee3ed2300a4c0ed8856e65"}
Mar 13 12:40:58.330148 master-0 kubenswrapper[7518]: I0313 12:40:58.330017 7518 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-cluster-machine-approver/machine-approver-955fcfb87-bffng"]
Mar 13 12:40:58.408160 master-0 kubenswrapper[7518]: I0313 12:40:58.404979 7518 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-daemon-5h8rc"]
Mar 13 12:40:58.408160 master-0 kubenswrapper[7518]: I0313 12:40:58.405918 7518 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-5h8rc"
Mar 13 12:40:58.412281 master-0 kubenswrapper[7518]: I0313 12:40:58.410009 7518 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-6hdw2"
Mar 13 12:40:58.412281 master-0 kubenswrapper[7518]: I0313 12:40:58.410049 7518 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls"
Mar 13 12:40:58.588850 master-0 kubenswrapper[7518]: I0313 12:40:58.588626 7518 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jbwwp\" (UniqueName: \"kubernetes.io/projected/50be3c2b-284b-4f60-b4ed-2cc7b4e528fa-kube-api-access-jbwwp\") pod \"machine-config-daemon-5h8rc\" (UID: \"50be3c2b-284b-4f60-b4ed-2cc7b4e528fa\") " pod="openshift-machine-config-operator/machine-config-daemon-5h8rc"
Mar 13 12:40:58.588850 master-0 kubenswrapper[7518]: I0313 12:40:58.588750 7518 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/50be3c2b-284b-4f60-b4ed-2cc7b4e528fa-rootfs\") pod \"machine-config-daemon-5h8rc\" (UID: \"50be3c2b-284b-4f60-b4ed-2cc7b4e528fa\") " pod="openshift-machine-config-operator/machine-config-daemon-5h8rc"
Mar 13 12:40:58.588850 master-0 kubenswrapper[7518]: I0313 12:40:58.588774 7518 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/50be3c2b-284b-4f60-b4ed-2cc7b4e528fa-proxy-tls\") pod \"machine-config-daemon-5h8rc\" (UID: \"50be3c2b-284b-4f60-b4ed-2cc7b4e528fa\") " pod="openshift-machine-config-operator/machine-config-daemon-5h8rc"
Mar 13 12:40:58.588850 master-0 kubenswrapper[7518]: I0313 12:40:58.588804 7518 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/50be3c2b-284b-4f60-b4ed-2cc7b4e528fa-mcd-auth-proxy-config\") pod \"machine-config-daemon-5h8rc\" (UID: \"50be3c2b-284b-4f60-b4ed-2cc7b4e528fa\") " pod="openshift-machine-config-operator/machine-config-daemon-5h8rc"
Mar 13 12:40:58.691751 master-0 kubenswrapper[7518]: I0313 12:40:58.691684 7518 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jbwwp\" (UniqueName: \"kubernetes.io/projected/50be3c2b-284b-4f60-b4ed-2cc7b4e528fa-kube-api-access-jbwwp\") pod \"machine-config-daemon-5h8rc\" (UID: \"50be3c2b-284b-4f60-b4ed-2cc7b4e528fa\") " pod="openshift-machine-config-operator/machine-config-daemon-5h8rc"
Mar 13 12:40:58.691982 master-0 kubenswrapper[7518]: I0313 12:40:58.691895 7518 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/50be3c2b-284b-4f60-b4ed-2cc7b4e528fa-rootfs\") pod \"machine-config-daemon-5h8rc\" (UID: \"50be3c2b-284b-4f60-b4ed-2cc7b4e528fa\") " pod="openshift-machine-config-operator/machine-config-daemon-5h8rc"
Mar 13 12:40:58.691982 master-0 kubenswrapper[7518]: I0313 12:40:58.691916 7518 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/50be3c2b-284b-4f60-b4ed-2cc7b4e528fa-proxy-tls\") pod \"machine-config-daemon-5h8rc\" (UID: \"50be3c2b-284b-4f60-b4ed-2cc7b4e528fa\") " pod="openshift-machine-config-operator/machine-config-daemon-5h8rc"
Mar 13 12:40:58.691982 master-0 kubenswrapper[7518]: I0313 12:40:58.691931 7518 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/50be3c2b-284b-4f60-b4ed-2cc7b4e528fa-mcd-auth-proxy-config\") pod \"machine-config-daemon-5h8rc\" (UID: \"50be3c2b-284b-4f60-b4ed-2cc7b4e528fa\") " pod="openshift-machine-config-operator/machine-config-daemon-5h8rc"
Mar 13 12:40:58.692272 master-0 kubenswrapper[7518]: I0313 12:40:58.692245 7518 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/50be3c2b-284b-4f60-b4ed-2cc7b4e528fa-rootfs\") pod \"machine-config-daemon-5h8rc\" (UID: \"50be3c2b-284b-4f60-b4ed-2cc7b4e528fa\") " pod="openshift-machine-config-operator/machine-config-daemon-5h8rc"
Mar 13 12:40:58.693122 master-0 kubenswrapper[7518]: I0313 12:40:58.693094 7518 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/50be3c2b-284b-4f60-b4ed-2cc7b4e528fa-mcd-auth-proxy-config\") pod \"machine-config-daemon-5h8rc\" (UID: \"50be3c2b-284b-4f60-b4ed-2cc7b4e528fa\") " pod="openshift-machine-config-operator/machine-config-daemon-5h8rc"
Mar 13 12:40:58.695699 master-0 kubenswrapper[7518]: I0313 12:40:58.695670 7518 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/50be3c2b-284b-4f60-b4ed-2cc7b4e528fa-proxy-tls\") pod \"machine-config-daemon-5h8rc\" (UID: \"50be3c2b-284b-4f60-b4ed-2cc7b4e528fa\") " pod="openshift-machine-config-operator/machine-config-daemon-5h8rc"
Mar 13 12:40:58.707759 master-0 kubenswrapper[7518]: I0313 12:40:58.707608 7518 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jbwwp\" (UniqueName: \"kubernetes.io/projected/50be3c2b-284b-4f60-b4ed-2cc7b4e528fa-kube-api-access-jbwwp\") pod \"machine-config-daemon-5h8rc\" (UID: \"50be3c2b-284b-4f60-b4ed-2cc7b4e528fa\") " pod="openshift-machine-config-operator/machine-config-daemon-5h8rc"
Mar 13 12:40:58.745771 master-0 kubenswrapper[7518]: I0313 12:40:58.745722 7518 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-5h8rc"
Mar 13 12:41:04.134401 master-0 kubenswrapper[7518]: I0313 12:41:04.134338 7518 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-559568b945-9ccr9"]
Mar 13 12:41:22.189254 master-0 kubenswrapper[7518]: W0313 12:41:22.188492 7518 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod50be3c2b_284b_4f60_b4ed_2cc7b4e528fa.slice/crio-6096081d86dfbfa09ca1bdec91da24d4ddf5b823468c93d6e9e22822357294bc WatchSource:0}: Error finding container 6096081d86dfbfa09ca1bdec91da24d4ddf5b823468c93d6e9e22822357294bc: Status 404 returned error can't find the container with id 6096081d86dfbfa09ca1bdec91da24d4ddf5b823468c93d6e9e22822357294bc
Mar 13 12:41:23.038164 master-0 kubenswrapper[7518]: I0313 12:41:23.037935 7518 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/cluster-baremetal-operator-5cdb4c5598-l6jp5" event={"ID":"317af639-269e-4163-8e24-fcea468b9352","Type":"ContainerStarted","Data":"31592103bc0b8de889024ea6d6f7d7d81a7a97c8aa34c21b276d7003e983eaa5"}
Mar 13 12:41:23.048551 master-0 kubenswrapper[7518]: I0313 12:41:23.047834 7518 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-credential-operator/cloud-credential-operator-55d85b7b47-rvp8c" event={"ID":"87a5904a-55ca-416f-8aec-57a2b5194c5a","Type":"ContainerStarted","Data":"9a9b0252ef673526758abf2228a24687d536ba64c6690917aef73fbbdd412cdb"}
Mar 13 12:41:23.058842 master-0 kubenswrapper[7518]: I0313 12:41:23.058751 7518 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9x9vk" event={"ID":"4f9e6618-62b5-4181-b545-211461811140","Type":"ContainerStarted","Data":"da77080b839f8955665824806fb0d5eb5b65bd0dc7a075af96258d22af1ed733"}
Mar 13 12:41:23.072490 master-0 kubenswrapper[7518]: I0313 12:41:23.072435 7518 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-84bf6db4f9-mjxcz" event={"ID":"d5f63b6b-990a-444b-a954-d718036f2f6c","Type":"ContainerStarted","Data":"a1bfd1c6ad70388a89e3729992c8e63cc9ebf64d39d05c00f30ae59118fb80de"}
Mar 13 12:41:23.078819 master-0 kubenswrapper[7518]: I0313 12:41:23.078629 7518 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-5h8rc" event={"ID":"50be3c2b-284b-4f60-b4ed-2cc7b4e528fa","Type":"ContainerStarted","Data":"f65b5bf64c23567612743ebd11b2bfaf8de81b0cebfdc629df35893768ea2671"}
Mar 13 12:41:23.078819 master-0 kubenswrapper[7518]: I0313 12:41:23.078705 7518 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-5h8rc" event={"ID":"50be3c2b-284b-4f60-b4ed-2cc7b4e528fa","Type":"ContainerStarted","Data":"6096081d86dfbfa09ca1bdec91da24d4ddf5b823468c93d6e9e22822357294bc"}
Mar 13 12:41:23.092934 master-0 kubenswrapper[7518]: I0313 12:41:23.092809 7518 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-fdb5c78b5-6g8qj" event={"ID":"d47a1118-c12f-4234-8c0f-1a2a47fa8a4f","Type":"ContainerStarted","Data":"8d9c37d5f89837146534526d23a90563a1224fca67daef17c48c7ff9590271c2"}
Mar 13 12:41:23.108794 master-0 kubenswrapper[7518]: I0313 12:41:23.106498 7518 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-5c5f6764b5-96ktp" event={"ID":"f6992fed-b472-4a2d-a376-c5d72aa846d4","Type":"ContainerStarted","Data":"84f9a0df4791a747c81cd7dded52ad06480e91c19880fa998b8cedff4f1590e8"}
Mar 13 12:41:23.108794 master-0 kubenswrapper[7518]: I0313 12:41:23.107578 7518 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/packageserver-5c5f6764b5-96ktp"
Mar 13 12:41:23.129190 master-0 kubenswrapper[7518]: I0313 12:41:23.128421 7518 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/packageserver-5c5f6764b5-96ktp"
Mar 13 12:41:23.138186 master-0 kubenswrapper[7518]: I0313 12:41:23.132472 7518 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-insights/insights-operator-8f89dfddd-vxk8z" event={"ID":"50a2046b-092b-434c-92a2-579f4462c4fb","Type":"ContainerStarted","Data":"56dae0ba4c088a315b25b8f29fd307e026f7b9dab3dc91255527ff6680c31464"}
Mar 13 12:41:23.164238 master-0 kubenswrapper[7518]: I0313 12:41:23.163509 7518 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-p9csk" event={"ID":"1cf388b6-e4a7-41db-a350-1b503214efd3","Type":"ContainerStarted","Data":"ad67ae9abd7e29e1c8108cc236bfa4a285963e407827b35369107a92e21b73f3"}
Mar 13 12:41:23.172168 master-0 kubenswrapper[7518]: I0313 12:41:23.168276 7518 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-559568b945-9ccr9" event={"ID":"1c9d5dee-f689-4813-8715-39a6d8ef1a7a","Type":"ContainerStarted","Data":"f9f98dd4cd04cd7f35948ae5f10dbafff6e82a33296e03a3ea9e60186fe7b8c3"}
Mar 13 12:41:23.172168 master-0 kubenswrapper[7518]: I0313 12:41:23.171579 7518 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/cluster-storage-operator-6fbfc8dc8f-jhtsp" event={"ID":"d7d67915-d31e-46dc-bb2e-1a6f689dd875","Type":"ContainerStarted","Data":"39a04612253a7a25dd9ded024c4c70cc0d933a3064b287c0c85c828db13d75e3"}
Mar 13 12:41:23.175493 master-0 kubenswrapper[7518]: I0313 12:41:23.173591 7518 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-6686554ddc-btz8w" event={"ID":"747659a6-4a1e-43ed-bb8e-36da6e63b5a1","Type":"ContainerStarted","Data":"fa510582aea2f9e7beb06130b537cab1524760c3e6ed427ab1be5150bea793b0"}
Mar 13 12:41:23.176872 master-0 kubenswrapper[7518]: I0313 12:41:23.176376 7518 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5czx2" event={"ID":"32fe77f9-082d-491c-b3d0-9c10feaf4a8e","Type":"ContainerStarted","Data":"9868ebc7add2931fb8b9f0e690fb3b5b7d50ca28093f5dd4662eaa27a2ef163c"}
Mar 13 12:41:23.183157 master-0 kubenswrapper[7518]: I0313 12:41:23.180867 7518 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/cluster-autoscaler-operator-69576476f7-sqndx" event={"ID":"d44112d1-b2a5-4b8d-b74d-1e91638508d5","Type":"ContainerStarted","Data":"aeb8cd6b223367e97ad7707f8724ad7c61808803218a16a895fbd3c7f77d6e4e"}
Mar 13 12:41:23.187161 master-0 kubenswrapper[7518]: I0313 12:41:23.183283 7518 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zh888" event={"ID":"e0ce4c51-2b9f-410f-93e5-9c2ff718dd71","Type":"ContainerStarted","Data":"dd56e097741179afb3ac4701cd79d5bbed72130ac8652ed79bed32f03419cdcf"}
Mar 13 12:41:23.187161 master-0 kubenswrapper[7518]: I0313 12:41:23.185656 7518 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-955fcfb87-bffng" event={"ID":"9c04ee08-4018-4cf3-b257-10aff84fa933","Type":"ContainerStarted","Data":"c267741c163ec9d357d92a52798c9665cb3546da02abc9076214ab299818c7b0"}
Mar 13 12:41:23.187161 master-0 kubenswrapper[7518]: I0313 12:41:23.185854 7518 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-cluster-machine-approver/machine-approver-955fcfb87-bffng" podUID="9c04ee08-4018-4cf3-b257-10aff84fa933" containerName="kube-rbac-proxy" containerID="cri-o://5b7cd563f3784e45e59cf37c881b5b8f9b7e5cf2039e3c23634bce0b52425d70" gracePeriod=30
Mar 13 12:41:23.187161 master-0 kubenswrapper[7518]: I0313 12:41:23.185942 7518 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-cluster-machine-approver/machine-approver-955fcfb87-bffng" podUID="9c04ee08-4018-4cf3-b257-10aff84fa933" containerName="machine-approver-controller" containerID="cri-o://c267741c163ec9d357d92a52798c9665cb3546da02abc9076214ab299818c7b0" gracePeriod=30
Mar 13 12:41:23.194621 master-0 kubenswrapper[7518]: I0313 12:41:23.190090 7518 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-664cb58b85-m5499" event={"ID":"d39ee5d7-840e-4481-b0b9-baf34da2c7b1","Type":"ContainerStarted","Data":"6b7e8dcbc3a8b95e83e20431ee6e54b36e764437af0e46aed32da6bad0bfa463"}
Mar 13 12:41:23.194621 master-0 kubenswrapper[7518]: I0313 12:41:23.190151 7518 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-664cb58b85-m5499" event={"ID":"d39ee5d7-840e-4481-b0b9-baf34da2c7b1","Type":"ContainerStarted","Data":"23bab37f4e309627c482e8ca053728ed897ba78843ed2ea826207abd535ad881"}
Mar 13 12:41:24.413689 master-0 kubenswrapper[7518]: I0313 12:41:24.413597 7518 generic.go:334] "Generic (PLEG): container finished" podID="1cf388b6-e4a7-41db-a350-1b503214efd3" containerID="ad67ae9abd7e29e1c8108cc236bfa4a285963e407827b35369107a92e21b73f3" exitCode=0
Mar 13 12:41:24.413689 master-0 kubenswrapper[7518]: I0313 12:41:24.413701 7518 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-p9csk" event={"ID":"1cf388b6-e4a7-41db-a350-1b503214efd3","Type":"ContainerDied","Data":"ad67ae9abd7e29e1c8108cc236bfa4a285963e407827b35369107a92e21b73f3"}
Mar 13 12:41:24.418586 master-0 kubenswrapper[7518]: I0313 12:41:24.418514 7518 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/cluster-baremetal-operator-5cdb4c5598-l6jp5" event={"ID":"317af639-269e-4163-8e24-fcea468b9352","Type":"ContainerStarted","Data":"792dbc57825b071e883735d4700716179a4c76a2ebcd54391766e492a98aa193"}
Mar 13 12:41:24.420977 master-0 kubenswrapper[7518]: I0313 12:41:24.420909 7518 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-559568b945-9ccr9" event={"ID":"1c9d5dee-f689-4813-8715-39a6d8ef1a7a","Type":"ContainerStarted","Data":"164370376ca5952da595f8cfee688809a7d8675f8a63a4fdcd74b5b88ea48161"}
Mar 13 12:41:24.422332 master-0 kubenswrapper[7518]: I0313 12:41:24.422290 7518 generic.go:334] "Generic (PLEG): container finished" podID="4f9e6618-62b5-4181-b545-211461811140" containerID="da77080b839f8955665824806fb0d5eb5b65bd0dc7a075af96258d22af1ed733" exitCode=0
Mar 13 12:41:24.422426 master-0 kubenswrapper[7518]: I0313 12:41:24.422343 7518 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9x9vk" event={"ID":"4f9e6618-62b5-4181-b545-211461811140","Type":"ContainerDied","Data":"da77080b839f8955665824806fb0d5eb5b65bd0dc7a075af96258d22af1ed733"}
Mar 13 12:41:24.424740 master-0 kubenswrapper[7518]: I0313 12:41:24.424687 7518 generic.go:334] "Generic (PLEG): container finished" podID="e0ce4c51-2b9f-410f-93e5-9c2ff718dd71" containerID="dd56e097741179afb3ac4701cd79d5bbed72130ac8652ed79bed32f03419cdcf" exitCode=0
Mar 13 12:41:24.424821 master-0 kubenswrapper[7518]: I0313 12:41:24.424742 7518 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zh888" event={"ID":"e0ce4c51-2b9f-410f-93e5-9c2ff718dd71","Type":"ContainerDied","Data":"dd56e097741179afb3ac4701cd79d5bbed72130ac8652ed79bed32f03419cdcf"}
Mar 13 12:41:24.426991 master-0 kubenswrapper[7518]: I0313 12:41:24.426942 7518 generic.go:334] "Generic (PLEG): container finished" podID="9c04ee08-4018-4cf3-b257-10aff84fa933" containerID="5b7cd563f3784e45e59cf37c881b5b8f9b7e5cf2039e3c23634bce0b52425d70" exitCode=0
Mar 13 12:41:24.427066 master-0 kubenswrapper[7518]: I0313 12:41:24.426999 7518 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-955fcfb87-bffng" event={"ID":"9c04ee08-4018-4cf3-b257-10aff84fa933","Type":"ContainerDied","Data":"5b7cd563f3784e45e59cf37c881b5b8f9b7e5cf2039e3c23634bce0b52425d70"}
Mar 13 12:41:24.429445 master-0 kubenswrapper[7518]: I0313 12:41:24.429407 7518 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-5h8rc" event={"ID":"50be3c2b-284b-4f60-b4ed-2cc7b4e528fa","Type":"ContainerStarted","Data":"b935e64397b9d8847f7706c763c2366545eab5807bc53d766a9b89eb54b65188"}
Mar 13 12:41:25.514789 master-0 kubenswrapper[7518]: I0313 12:41:25.513677 7518 generic.go:334] "Generic (PLEG): container finished" podID="9c04ee08-4018-4cf3-b257-10aff84fa933" containerID="c267741c163ec9d357d92a52798c9665cb3546da02abc9076214ab299818c7b0" exitCode=0
Mar 13 12:41:25.514789 master-0 kubenswrapper[7518]: I0313 12:41:25.513761 7518 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-955fcfb87-bffng" event={"ID":"9c04ee08-4018-4cf3-b257-10aff84fa933","Type":"ContainerDied","Data":"c267741c163ec9d357d92a52798c9665cb3546da02abc9076214ab299818c7b0"}
Mar 13 12:41:25.522972 master-0 kubenswrapper[7518]: I0313 12:41:25.522914 7518 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-559568b945-9ccr9" event={"ID":"1c9d5dee-f689-4813-8715-39a6d8ef1a7a","Type":"ContainerStarted","Data":"5d6961fccd2091fd5ad1619324348eba577ce46136307968013df83f8fedbf57"}
Mar 13 12:41:25.523205 master-0 kubenswrapper[7518]: I0313 12:41:25.523168 7518 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-559568b945-9ccr9" podUID="1c9d5dee-f689-4813-8715-39a6d8ef1a7a" containerName="config-sync-controllers" containerID="cri-o://164370376ca5952da595f8cfee688809a7d8675f8a63a4fdcd74b5b88ea48161" gracePeriod=30
Mar 13 12:41:25.523318 master-0 kubenswrapper[7518]: I0313 12:41:25.523297 7518 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-559568b945-9ccr9" podUID="1c9d5dee-f689-4813-8715-39a6d8ef1a7a" containerName="kube-rbac-proxy" containerID="cri-o://5d6961fccd2091fd5ad1619324348eba577ce46136307968013df83f8fedbf57" gracePeriod=30
Mar 13 12:41:25.523385 master-0 kubenswrapper[7518]: I0313 12:41:25.523369 7518 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-559568b945-9ccr9" podUID="1c9d5dee-f689-4813-8715-39a6d8ef1a7a" containerName="cluster-cloud-controller-manager" containerID="cri-o://f9f98dd4cd04cd7f35948ae5f10dbafff6e82a33296e03a3ea9e60186fe7b8c3" gracePeriod=30
Mar 13 12:41:25.973979 master-0 kubenswrapper[7518]: I0313 12:41:25.973934 7518 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-955fcfb87-bffng"
Mar 13 12:41:26.022644 master-0 kubenswrapper[7518]: I0313 12:41:26.001882 7518 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cloud-credential-operator/cloud-credential-operator-55d85b7b47-rvp8c" podStartSLOduration=6.031190161 podStartE2EDuration="41.001844618s" podCreationTimestamp="2026-03-13 12:40:45 +0000 UTC" firstStartedPulling="2026-03-13 12:40:47.242117996 +0000 UTC m=+201.875187173" lastFinishedPulling="2026-03-13 12:41:22.212772443 +0000 UTC m=+236.845841630" observedRunningTime="2026-03-13 12:41:24.575847829 +0000 UTC m=+239.208917036" watchObservedRunningTime="2026-03-13 12:41:26.001844618 +0000 UTC m=+240.634913805"
Mar 13 12:41:26.047239 master-0 kubenswrapper[7518]: I0313 12:41:26.041706 7518 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lkvzr\" (UniqueName: \"kubernetes.io/projected/9c04ee08-4018-4cf3-b257-10aff84fa933-kube-api-access-lkvzr\") pod \"9c04ee08-4018-4cf3-b257-10aff84fa933\" (UID: \"9c04ee08-4018-4cf3-b257-10aff84fa933\") "
Mar 13 12:41:26.047239 master-0 kubenswrapper[7518]: I0313 12:41:26.041764 7518 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/9c04ee08-4018-4cf3-b257-10aff84fa933-machine-approver-tls\") pod \"9c04ee08-4018-4cf3-b257-10aff84fa933\" (UID: \"9c04ee08-4018-4cf3-b257-10aff84fa933\") "
Mar 13 12:41:26.047239 master-0 kubenswrapper[7518]: I0313 12:41:26.041803 7518 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/9c04ee08-4018-4cf3-b257-10aff84fa933-auth-proxy-config\") pod \"9c04ee08-4018-4cf3-b257-10aff84fa933\" (UID: \"9c04ee08-4018-4cf3-b257-10aff84fa933\") "
Mar 13 12:41:26.047239 master-0 kubenswrapper[7518]: I0313 12:41:26.041926 7518 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9c04ee08-4018-4cf3-b257-10aff84fa933-config\") pod \"9c04ee08-4018-4cf3-b257-10aff84fa933\" (UID: \"9c04ee08-4018-4cf3-b257-10aff84fa933\") "
Mar 13 12:41:26.047239 master-0 kubenswrapper[7518]: I0313 12:41:26.042939 7518 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9c04ee08-4018-4cf3-b257-10aff84fa933-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "9c04ee08-4018-4cf3-b257-10aff84fa933" (UID: "9c04ee08-4018-4cf3-b257-10aff84fa933"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 13 12:41:26.052230 master-0 kubenswrapper[7518]: I0313 12:41:26.048397 7518 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9c04ee08-4018-4cf3-b257-10aff84fa933-machine-approver-tls" (OuterVolumeSpecName: "machine-approver-tls") pod "9c04ee08-4018-4cf3-b257-10aff84fa933" (UID: "9c04ee08-4018-4cf3-b257-10aff84fa933"). InnerVolumeSpecName "machine-approver-tls". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 13 12:41:26.054192 master-0 kubenswrapper[7518]: I0313 12:41:26.052479 7518 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9c04ee08-4018-4cf3-b257-10aff84fa933-kube-api-access-lkvzr" (OuterVolumeSpecName: "kube-api-access-lkvzr") pod "9c04ee08-4018-4cf3-b257-10aff84fa933" (UID: "9c04ee08-4018-4cf3-b257-10aff84fa933"). InnerVolumeSpecName "kube-api-access-lkvzr". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 13 12:41:26.054192 master-0 kubenswrapper[7518]: I0313 12:41:26.053415 7518 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9c04ee08-4018-4cf3-b257-10aff84fa933-config" (OuterVolumeSpecName: "config") pod "9c04ee08-4018-4cf3-b257-10aff84fa933" (UID: "9c04ee08-4018-4cf3-b257-10aff84fa933"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 13 12:41:26.080699 master-0 kubenswrapper[7518]: I0313 12:41:26.077595 7518 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-samples-operator/cluster-samples-operator-664cb58b85-m5499" podStartSLOduration=5.740795617 podStartE2EDuration="41.077574488s" podCreationTimestamp="2026-03-13 12:40:45 +0000 UTC" firstStartedPulling="2026-03-13 12:40:46.835510646 +0000 UTC m=+201.468579833" lastFinishedPulling="2026-03-13 12:41:22.172289517 +0000 UTC m=+236.805358704" observedRunningTime="2026-03-13 12:41:26.077073325 +0000 UTC m=+240.710142512" watchObservedRunningTime="2026-03-13 12:41:26.077574488 +0000 UTC m=+240.710643675"
Mar 13 12:41:26.116218 master-0 kubenswrapper[7518]: I0313 12:41:26.113848 7518 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-operator-fdb5c78b5-6g8qj" podStartSLOduration=41.113826698 podStartE2EDuration="41.113826698s" podCreationTimestamp="2026-03-13 12:40:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 12:41:26.112087425 +0000 UTC m=+240.745156622" watchObservedRunningTime="2026-03-13 12:41:26.113826698 +0000 UTC m=+240.746895885"
Mar 13 12:41:26.146281 master-0 kubenswrapper[7518]: I0313 12:41:26.146232 7518 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9c04ee08-4018-4cf3-b257-10aff84fa933-config\") on node \"master-0\" DevicePath \"\""
Mar 13 12:41:26.146281 master-0 kubenswrapper[7518]: I0313 12:41:26.146273 7518 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lkvzr\" (UniqueName: \"kubernetes.io/projected/9c04ee08-4018-4cf3-b257-10aff84fa933-kube-api-access-lkvzr\") on node \"master-0\" DevicePath \"\""
Mar 13 12:41:26.146281 master-0 kubenswrapper[7518]: I0313 12:41:26.146287 7518 reconciler_common.go:293] "Volume detached for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/9c04ee08-4018-4cf3-b257-10aff84fa933-machine-approver-tls\") on node \"master-0\" DevicePath \"\""
Mar 13 12:41:26.146561 master-0 kubenswrapper[7518]: I0313 12:41:26.146300 7518 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/9c04ee08-4018-4cf3-b257-10aff84fa933-auth-proxy-config\") on node \"master-0\" DevicePath \"\""
Mar 13 12:41:26.528121 master-0 kubenswrapper[7518]: I0313 12:41:26.528028 7518 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-machine-approver/machine-approver-955fcfb87-bffng" podStartSLOduration=9.249873851 podStartE2EDuration="41.528001272s" podCreationTimestamp="2026-03-13 12:40:45 +0000 UTC" firstStartedPulling="2026-03-13 12:40:46.618187413 +0000 UTC m=+201.251256600" lastFinishedPulling="2026-03-13 12:41:18.896314834 +0000 UTC m=+233.529384021" observedRunningTime="2026-03-13 12:41:26.513503302 +0000 UTC m=+241.146572499" watchObservedRunningTime="2026-03-13 12:41:26.528001272 +0000 UTC m=+241.161070469"
Mar 13 12:41:26.541164 master-0 kubenswrapper[7518]: I0313 12:41:26.540749 7518 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-955fcfb87-bffng" event={"ID":"9c04ee08-4018-4cf3-b257-10aff84fa933","Type":"ContainerDied","Data":"23393a7b4a00bfe83c72caa5d705a971138b778ff4160559e2fc7fe8054bd78a"}
Mar 13 12:41:26.541164 master-0 kubenswrapper[7518]: I0313 12:41:26.540816 7518 scope.go:117] "RemoveContainer" containerID="c267741c163ec9d357d92a52798c9665cb3546da02abc9076214ab299818c7b0"
Mar 13 12:41:26.541164 master-0 kubenswrapper[7518]: I0313 12:41:26.540918 7518 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-955fcfb87-bffng"
Mar 13 12:41:26.542350 master-0 kubenswrapper[7518]: I0313 12:41:26.542170 7518 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-559568b945-9ccr9"
Mar 13 12:41:26.543047 master-0 kubenswrapper[7518]: I0313 12:41:26.542624 7518 generic.go:334] "Generic (PLEG): container finished" podID="32fe77f9-082d-491c-b3d0-9c10feaf4a8e" containerID="9868ebc7add2931fb8b9f0e690fb3b5b7d50ca28093f5dd4662eaa27a2ef163c" exitCode=0
Mar 13 12:41:26.543047 master-0 kubenswrapper[7518]: I0313 12:41:26.542680 7518 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5czx2" event={"ID":"32fe77f9-082d-491c-b3d0-9c10feaf4a8e","Type":"ContainerDied","Data":"9868ebc7add2931fb8b9f0e690fb3b5b7d50ca28093f5dd4662eaa27a2ef163c"}
Mar 13 12:41:26.545572 master-0 kubenswrapper[7518]: I0313 12:41:26.545507 7518 generic.go:334] "Generic (PLEG): container finished" podID="1c9d5dee-f689-4813-8715-39a6d8ef1a7a" containerID="5d6961fccd2091fd5ad1619324348eba577ce46136307968013df83f8fedbf57" exitCode=0
Mar 13 12:41:26.545572 master-0 kubenswrapper[7518]: I0313 12:41:26.545543 7518 generic.go:334] "Generic (PLEG): container finished" podID="1c9d5dee-f689-4813-8715-39a6d8ef1a7a" containerID="164370376ca5952da595f8cfee688809a7d8675f8a63a4fdcd74b5b88ea48161" exitCode=0
Mar 13 12:41:26.545572 master-0 kubenswrapper[7518]: I0313 12:41:26.545561 7518 generic.go:334] "Generic (PLEG): container finished" podID="1c9d5dee-f689-4813-8715-39a6d8ef1a7a" containerID="f9f98dd4cd04cd7f35948ae5f10dbafff6e82a33296e03a3ea9e60186fe7b8c3" exitCode=0
Mar 13 12:41:26.545775 master-0 kubenswrapper[7518]: I0313 12:41:26.545583 7518 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-559568b945-9ccr9" event={"ID":"1c9d5dee-f689-4813-8715-39a6d8ef1a7a","Type":"ContainerDied","Data":"5d6961fccd2091fd5ad1619324348eba577ce46136307968013df83f8fedbf57"}
Mar 13 12:41:26.545775 master-0 kubenswrapper[7518]: I0313 12:41:26.545615 7518 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-559568b945-9ccr9" event={"ID":"1c9d5dee-f689-4813-8715-39a6d8ef1a7a","Type":"ContainerDied","Data":"164370376ca5952da595f8cfee688809a7d8675f8a63a4fdcd74b5b88ea48161"}
Mar 13 12:41:26.545775 master-0 kubenswrapper[7518]: I0313 12:41:26.545627 7518 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-559568b945-9ccr9" event={"ID":"1c9d5dee-f689-4813-8715-39a6d8ef1a7a","Type":"ContainerDied","Data":"f9f98dd4cd04cd7f35948ae5f10dbafff6e82a33296e03a3ea9e60186fe7b8c3"}
Mar 13 12:41:26.545775 master-0 kubenswrapper[7518]: I0313 12:41:26.545681 7518 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-559568b945-9ccr9" event={"ID":"1c9d5dee-f689-4813-8715-39a6d8ef1a7a","Type":"ContainerDied","Data":"fcaed5aa41ed958d76ede9d948f637b929080d2798f592c0ea92d4ab32d2fb01"}
Mar 13 12:41:26.545775 master-0 kubenswrapper[7518]: I0313 12:41:26.545745 7518 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-559568b945-9ccr9"
Mar 13 12:41:26.564787 master-0 kubenswrapper[7518]: I0313 12:41:26.564751 7518 scope.go:117] "RemoveContainer" containerID="5b7cd563f3784e45e59cf37c881b5b8f9b7e5cf2039e3c23634bce0b52425d70"
Mar 13 12:41:26.589919 master-0 kubenswrapper[7518]: I0313 12:41:26.589865 7518 scope.go:117] "RemoveContainer" containerID="5d6961fccd2091fd5ad1619324348eba577ce46136307968013df83f8fedbf57"
Mar 13 12:41:26.605723 master-0 kubenswrapper[7518]: I0313 12:41:26.605664 7518 scope.go:117] "RemoveContainer" containerID="164370376ca5952da595f8cfee688809a7d8675f8a63a4fdcd74b5b88ea48161"
Mar 13 12:41:26.624766 master-0 kubenswrapper[7518]: I0313 12:41:26.624715 7518 scope.go:117] "RemoveContainer" containerID="f9f98dd4cd04cd7f35948ae5f10dbafff6e82a33296e03a3ea9e60186fe7b8c3"
Mar 13 12:41:26.651256 master-0 kubenswrapper[7518]: I0313 12:41:26.651221 7518 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cloud-controller-manager-operator-tls\" (UniqueName: \"kubernetes.io/secret/1c9d5dee-f689-4813-8715-39a6d8ef1a7a-cloud-controller-manager-operator-tls\") pod \"1c9d5dee-f689-4813-8715-39a6d8ef1a7a\" (UID: \"1c9d5dee-f689-4813-8715-39a6d8ef1a7a\") "
Mar 13 12:41:26.651364 master-0 kubenswrapper[7518]: I0313 12:41:26.651273 7518 scope.go:117] "RemoveContainer" containerID="5d6961fccd2091fd5ad1619324348eba577ce46136307968013df83f8fedbf57"
Mar 13 12:41:26.651364 master-0 kubenswrapper[7518]: I0313 12:41:26.651299 7518 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/1c9d5dee-f689-4813-8715-39a6d8ef1a7a-host-etc-kube\") pod \"1c9d5dee-f689-4813-8715-39a6d8ef1a7a\" (UID: \"1c9d5dee-f689-4813-8715-39a6d8ef1a7a\") "
Mar 13 12:41:26.651448 master-0 kubenswrapper[7518]: I0313 12:41:26.651382 7518 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-crc29\" (UniqueName: \"kubernetes.io/projected/1c9d5dee-f689-4813-8715-39a6d8ef1a7a-kube-api-access-crc29\") pod \"1c9d5dee-f689-4813-8715-39a6d8ef1a7a\" (UID: \"1c9d5dee-f689-4813-8715-39a6d8ef1a7a\") "
Mar 13 12:41:26.651448 master-0 kubenswrapper[7518]: I0313 12:41:26.651388 7518 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1c9d5dee-f689-4813-8715-39a6d8ef1a7a-host-etc-kube" (OuterVolumeSpecName: "host-etc-kube") pod "1c9d5dee-f689-4813-8715-39a6d8ef1a7a" (UID: "1c9d5dee-f689-4813-8715-39a6d8ef1a7a"). InnerVolumeSpecName "host-etc-kube". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 13 12:41:26.651448 master-0 kubenswrapper[7518]: I0313 12:41:26.651436 7518 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/1c9d5dee-f689-4813-8715-39a6d8ef1a7a-auth-proxy-config\") pod \"1c9d5dee-f689-4813-8715-39a6d8ef1a7a\" (UID: \"1c9d5dee-f689-4813-8715-39a6d8ef1a7a\") "
Mar 13 12:41:26.651551 master-0 kubenswrapper[7518]: I0313 12:41:26.651462 7518 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/1c9d5dee-f689-4813-8715-39a6d8ef1a7a-images\") pod \"1c9d5dee-f689-4813-8715-39a6d8ef1a7a\" (UID: \"1c9d5dee-f689-4813-8715-39a6d8ef1a7a\") "
Mar 13 12:41:26.651802 master-0 kubenswrapper[7518]: I0313 12:41:26.651782 7518 reconciler_common.go:293] "Volume detached for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/1c9d5dee-f689-4813-8715-39a6d8ef1a7a-host-etc-kube\") on node \"master-0\" DevicePath \"\""
Mar 13 12:41:26.651880 master-0 kubenswrapper[7518]: E0313 12:41:26.651789 7518 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5d6961fccd2091fd5ad1619324348eba577ce46136307968013df83f8fedbf57\": container with ID starting with 5d6961fccd2091fd5ad1619324348eba577ce46136307968013df83f8fedbf57 not found: ID does not exist" containerID="5d6961fccd2091fd5ad1619324348eba577ce46136307968013df83f8fedbf57"
Mar 13 12:41:26.651880 master-0 kubenswrapper[7518]: I0313 12:41:26.651848 7518 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5d6961fccd2091fd5ad1619324348eba577ce46136307968013df83f8fedbf57"} err="failed to get container status \"5d6961fccd2091fd5ad1619324348eba577ce46136307968013df83f8fedbf57\": rpc error: code = NotFound desc = could not find container \"5d6961fccd2091fd5ad1619324348eba577ce46136307968013df83f8fedbf57\": container with ID starting with 5d6961fccd2091fd5ad1619324348eba577ce46136307968013df83f8fedbf57 not found: ID does not exist"
Mar 13 12:41:26.651984 master-0 kubenswrapper[7518]: I0313 12:41:26.651883 7518 scope.go:117] "RemoveContainer" containerID="164370376ca5952da595f8cfee688809a7d8675f8a63a4fdcd74b5b88ea48161"
Mar 13 12:41:26.651984 master-0 kubenswrapper[7518]: I0313 12:41:26.651916 7518 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1c9d5dee-f689-4813-8715-39a6d8ef1a7a-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "1c9d5dee-f689-4813-8715-39a6d8ef1a7a" (UID: "1c9d5dee-f689-4813-8715-39a6d8ef1a7a"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 13 12:41:26.651984 master-0 kubenswrapper[7518]: I0313 12:41:26.651944 7518 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1c9d5dee-f689-4813-8715-39a6d8ef1a7a-images" (OuterVolumeSpecName: "images") pod "1c9d5dee-f689-4813-8715-39a6d8ef1a7a" (UID: "1c9d5dee-f689-4813-8715-39a6d8ef1a7a"). InnerVolumeSpecName "images".
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 13 12:41:26.652538 master-0 kubenswrapper[7518]: E0313 12:41:26.652508 7518 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"164370376ca5952da595f8cfee688809a7d8675f8a63a4fdcd74b5b88ea48161\": container with ID starting with 164370376ca5952da595f8cfee688809a7d8675f8a63a4fdcd74b5b88ea48161 not found: ID does not exist" containerID="164370376ca5952da595f8cfee688809a7d8675f8a63a4fdcd74b5b88ea48161" Mar 13 12:41:26.652622 master-0 kubenswrapper[7518]: I0313 12:41:26.652544 7518 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"164370376ca5952da595f8cfee688809a7d8675f8a63a4fdcd74b5b88ea48161"} err="failed to get container status \"164370376ca5952da595f8cfee688809a7d8675f8a63a4fdcd74b5b88ea48161\": rpc error: code = NotFound desc = could not find container \"164370376ca5952da595f8cfee688809a7d8675f8a63a4fdcd74b5b88ea48161\": container with ID starting with 164370376ca5952da595f8cfee688809a7d8675f8a63a4fdcd74b5b88ea48161 not found: ID does not exist" Mar 13 12:41:26.652622 master-0 kubenswrapper[7518]: I0313 12:41:26.652571 7518 scope.go:117] "RemoveContainer" containerID="f9f98dd4cd04cd7f35948ae5f10dbafff6e82a33296e03a3ea9e60186fe7b8c3" Mar 13 12:41:26.652827 master-0 kubenswrapper[7518]: E0313 12:41:26.652796 7518 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f9f98dd4cd04cd7f35948ae5f10dbafff6e82a33296e03a3ea9e60186fe7b8c3\": container with ID starting with f9f98dd4cd04cd7f35948ae5f10dbafff6e82a33296e03a3ea9e60186fe7b8c3 not found: ID does not exist" containerID="f9f98dd4cd04cd7f35948ae5f10dbafff6e82a33296e03a3ea9e60186fe7b8c3" Mar 13 12:41:26.652827 master-0 kubenswrapper[7518]: I0313 12:41:26.652816 7518 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"f9f98dd4cd04cd7f35948ae5f10dbafff6e82a33296e03a3ea9e60186fe7b8c3"} err="failed to get container status \"f9f98dd4cd04cd7f35948ae5f10dbafff6e82a33296e03a3ea9e60186fe7b8c3\": rpc error: code = NotFound desc = could not find container \"f9f98dd4cd04cd7f35948ae5f10dbafff6e82a33296e03a3ea9e60186fe7b8c3\": container with ID starting with f9f98dd4cd04cd7f35948ae5f10dbafff6e82a33296e03a3ea9e60186fe7b8c3 not found: ID does not exist" Mar 13 12:41:26.652932 master-0 kubenswrapper[7518]: I0313 12:41:26.652829 7518 scope.go:117] "RemoveContainer" containerID="5d6961fccd2091fd5ad1619324348eba577ce46136307968013df83f8fedbf57" Mar 13 12:41:26.653103 master-0 kubenswrapper[7518]: I0313 12:41:26.653068 7518 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5d6961fccd2091fd5ad1619324348eba577ce46136307968013df83f8fedbf57"} err="failed to get container status \"5d6961fccd2091fd5ad1619324348eba577ce46136307968013df83f8fedbf57\": rpc error: code = NotFound desc = could not find container \"5d6961fccd2091fd5ad1619324348eba577ce46136307968013df83f8fedbf57\": container with ID starting with 5d6961fccd2091fd5ad1619324348eba577ce46136307968013df83f8fedbf57 not found: ID does not exist" Mar 13 12:41:26.653103 master-0 kubenswrapper[7518]: I0313 12:41:26.653097 7518 scope.go:117] "RemoveContainer" containerID="164370376ca5952da595f8cfee688809a7d8675f8a63a4fdcd74b5b88ea48161" Mar 13 12:41:26.653437 master-0 kubenswrapper[7518]: I0313 12:41:26.653408 7518 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"164370376ca5952da595f8cfee688809a7d8675f8a63a4fdcd74b5b88ea48161"} err="failed to get container status \"164370376ca5952da595f8cfee688809a7d8675f8a63a4fdcd74b5b88ea48161\": rpc error: code = NotFound desc = could not find container \"164370376ca5952da595f8cfee688809a7d8675f8a63a4fdcd74b5b88ea48161\": container with ID starting with 
164370376ca5952da595f8cfee688809a7d8675f8a63a4fdcd74b5b88ea48161 not found: ID does not exist" Mar 13 12:41:26.653528 master-0 kubenswrapper[7518]: I0313 12:41:26.653437 7518 scope.go:117] "RemoveContainer" containerID="f9f98dd4cd04cd7f35948ae5f10dbafff6e82a33296e03a3ea9e60186fe7b8c3" Mar 13 12:41:26.653672 master-0 kubenswrapper[7518]: I0313 12:41:26.653634 7518 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f9f98dd4cd04cd7f35948ae5f10dbafff6e82a33296e03a3ea9e60186fe7b8c3"} err="failed to get container status \"f9f98dd4cd04cd7f35948ae5f10dbafff6e82a33296e03a3ea9e60186fe7b8c3\": rpc error: code = NotFound desc = could not find container \"f9f98dd4cd04cd7f35948ae5f10dbafff6e82a33296e03a3ea9e60186fe7b8c3\": container with ID starting with f9f98dd4cd04cd7f35948ae5f10dbafff6e82a33296e03a3ea9e60186fe7b8c3 not found: ID does not exist" Mar 13 12:41:26.653672 master-0 kubenswrapper[7518]: I0313 12:41:26.653662 7518 scope.go:117] "RemoveContainer" containerID="5d6961fccd2091fd5ad1619324348eba577ce46136307968013df83f8fedbf57" Mar 13 12:41:26.653929 master-0 kubenswrapper[7518]: I0313 12:41:26.653902 7518 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5d6961fccd2091fd5ad1619324348eba577ce46136307968013df83f8fedbf57"} err="failed to get container status \"5d6961fccd2091fd5ad1619324348eba577ce46136307968013df83f8fedbf57\": rpc error: code = NotFound desc = could not find container \"5d6961fccd2091fd5ad1619324348eba577ce46136307968013df83f8fedbf57\": container with ID starting with 5d6961fccd2091fd5ad1619324348eba577ce46136307968013df83f8fedbf57 not found: ID does not exist" Mar 13 12:41:26.653999 master-0 kubenswrapper[7518]: I0313 12:41:26.653930 7518 scope.go:117] "RemoveContainer" containerID="164370376ca5952da595f8cfee688809a7d8675f8a63a4fdcd74b5b88ea48161" Mar 13 12:41:26.654233 master-0 kubenswrapper[7518]: I0313 12:41:26.654184 7518 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"164370376ca5952da595f8cfee688809a7d8675f8a63a4fdcd74b5b88ea48161"} err="failed to get container status \"164370376ca5952da595f8cfee688809a7d8675f8a63a4fdcd74b5b88ea48161\": rpc error: code = NotFound desc = could not find container \"164370376ca5952da595f8cfee688809a7d8675f8a63a4fdcd74b5b88ea48161\": container with ID starting with 164370376ca5952da595f8cfee688809a7d8675f8a63a4fdcd74b5b88ea48161 not found: ID does not exist" Mar 13 12:41:26.654233 master-0 kubenswrapper[7518]: I0313 12:41:26.654208 7518 scope.go:117] "RemoveContainer" containerID="f9f98dd4cd04cd7f35948ae5f10dbafff6e82a33296e03a3ea9e60186fe7b8c3" Mar 13 12:41:26.654500 master-0 kubenswrapper[7518]: I0313 12:41:26.654439 7518 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f9f98dd4cd04cd7f35948ae5f10dbafff6e82a33296e03a3ea9e60186fe7b8c3"} err="failed to get container status \"f9f98dd4cd04cd7f35948ae5f10dbafff6e82a33296e03a3ea9e60186fe7b8c3\": rpc error: code = NotFound desc = could not find container \"f9f98dd4cd04cd7f35948ae5f10dbafff6e82a33296e03a3ea9e60186fe7b8c3\": container with ID starting with f9f98dd4cd04cd7f35948ae5f10dbafff6e82a33296e03a3ea9e60186fe7b8c3 not found: ID does not exist" Mar 13 12:41:26.657350 master-0 kubenswrapper[7518]: I0313 12:41:26.657322 7518 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1c9d5dee-f689-4813-8715-39a6d8ef1a7a-cloud-controller-manager-operator-tls" (OuterVolumeSpecName: "cloud-controller-manager-operator-tls") pod "1c9d5dee-f689-4813-8715-39a6d8ef1a7a" (UID: "1c9d5dee-f689-4813-8715-39a6d8ef1a7a"). InnerVolumeSpecName "cloud-controller-manager-operator-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 12:41:26.658062 master-0 kubenswrapper[7518]: I0313 12:41:26.658014 7518 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1c9d5dee-f689-4813-8715-39a6d8ef1a7a-kube-api-access-crc29" (OuterVolumeSpecName: "kube-api-access-crc29") pod "1c9d5dee-f689-4813-8715-39a6d8ef1a7a" (UID: "1c9d5dee-f689-4813-8715-39a6d8ef1a7a"). InnerVolumeSpecName "kube-api-access-crc29". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 12:41:26.801260 master-0 kubenswrapper[7518]: I0313 12:41:26.801195 7518 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/1c9d5dee-f689-4813-8715-39a6d8ef1a7a-images\") on node \"master-0\" DevicePath \"\"" Mar 13 12:41:26.801260 master-0 kubenswrapper[7518]: I0313 12:41:26.801230 7518 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/1c9d5dee-f689-4813-8715-39a6d8ef1a7a-auth-proxy-config\") on node \"master-0\" DevicePath \"\"" Mar 13 12:41:26.801260 master-0 kubenswrapper[7518]: I0313 12:41:26.801243 7518 reconciler_common.go:293] "Volume detached for volume \"cloud-controller-manager-operator-tls\" (UniqueName: \"kubernetes.io/secret/1c9d5dee-f689-4813-8715-39a6d8ef1a7a-cloud-controller-manager-operator-tls\") on node \"master-0\" DevicePath \"\"" Mar 13 12:41:26.801260 master-0 kubenswrapper[7518]: I0313 12:41:26.801255 7518 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-crc29\" (UniqueName: \"kubernetes.io/projected/1c9d5dee-f689-4813-8715-39a6d8ef1a7a-kube-api-access-crc29\") on node \"master-0\" DevicePath \"\"" Mar 13 12:41:27.471981 master-0 kubenswrapper[7518]: I0313 12:41:27.471890 7518 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/machine-api-operator-84bf6db4f9-mjxcz" podStartSLOduration=7.442483061 podStartE2EDuration="42.471849678s" 
podCreationTimestamp="2026-03-13 12:40:45 +0000 UTC" firstStartedPulling="2026-03-13 12:40:47.205498224 +0000 UTC m=+201.838567411" lastFinishedPulling="2026-03-13 12:41:22.234864841 +0000 UTC m=+236.867934028" observedRunningTime="2026-03-13 12:41:27.467643634 +0000 UTC m=+242.100712831" watchObservedRunningTime="2026-03-13 12:41:27.471849678 +0000 UTC m=+242.104918865" Mar 13 12:41:29.307162 master-0 kubenswrapper[7518]: I0313 12:41:29.291705 7518 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/cluster-baremetal-operator-5cdb4c5598-l6jp5" podStartSLOduration=8.898273964 podStartE2EDuration="44.291680366s" podCreationTimestamp="2026-03-13 12:40:45 +0000 UTC" firstStartedPulling="2026-03-13 12:40:46.795281953 +0000 UTC m=+201.428351140" lastFinishedPulling="2026-03-13 12:41:22.188688355 +0000 UTC m=+236.821757542" observedRunningTime="2026-03-13 12:41:28.214060889 +0000 UTC m=+242.847130086" watchObservedRunningTime="2026-03-13 12:41:29.291680366 +0000 UTC m=+243.924749563" Mar 13 12:41:29.340389 master-0 kubenswrapper[7518]: I0313 12:41:29.340294 7518 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-insights/insights-operator-8f89dfddd-vxk8z" podStartSLOduration=9.173936638 podStartE2EDuration="44.340275283s" podCreationTimestamp="2026-03-13 12:40:45 +0000 UTC" firstStartedPulling="2026-03-13 12:40:46.418192579 +0000 UTC m=+201.051261766" lastFinishedPulling="2026-03-13 12:41:21.584531224 +0000 UTC m=+236.217600411" observedRunningTime="2026-03-13 12:41:29.338744525 +0000 UTC m=+243.971813732" watchObservedRunningTime="2026-03-13 12:41:29.340275283 +0000 UTC m=+243.973344470" Mar 13 12:41:29.398551 master-0 kubenswrapper[7518]: I0313 12:41:29.396691 7518 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-daemon-5h8rc" podStartSLOduration=31.396672633 podStartE2EDuration="31.396672633s" 
podCreationTimestamp="2026-03-13 12:40:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 12:41:29.367019807 +0000 UTC m=+244.000089014" watchObservedRunningTime="2026-03-13 12:41:29.396672633 +0000 UTC m=+244.029741820" Mar 13 12:41:29.398551 master-0 kubenswrapper[7518]: I0313 12:41:29.397566 7518 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/cluster-autoscaler-operator-69576476f7-sqndx" podStartSLOduration=9.007626551 podStartE2EDuration="44.397558835s" podCreationTimestamp="2026-03-13 12:40:45 +0000 UTC" firstStartedPulling="2026-03-13 12:40:46.755265371 +0000 UTC m=+201.388334558" lastFinishedPulling="2026-03-13 12:41:22.145197655 +0000 UTC m=+236.778266842" observedRunningTime="2026-03-13 12:41:29.395730389 +0000 UTC m=+244.028799576" watchObservedRunningTime="2026-03-13 12:41:29.397558835 +0000 UTC m=+244.030628022" Mar 13 12:41:29.436830 master-0 kubenswrapper[7518]: I0313 12:41:29.436747 7518 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/packageserver-5c5f6764b5-96ktp" podStartSLOduration=44.436726318 podStartE2EDuration="44.436726318s" podCreationTimestamp="2026-03-13 12:40:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 12:41:29.429349494 +0000 UTC m=+244.062418681" watchObservedRunningTime="2026-03-13 12:41:29.436726318 +0000 UTC m=+244.069795505" Mar 13 12:41:29.494450 master-0 kubenswrapper[7518]: I0313 12:41:29.492059 7518 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-storage-operator/cluster-storage-operator-6fbfc8dc8f-jhtsp" podStartSLOduration=9.115109907 podStartE2EDuration="44.492037801s" podCreationTimestamp="2026-03-13 12:40:45 +0000 UTC" firstStartedPulling="2026-03-13 12:40:46.796045961 
+0000 UTC m=+201.429115148" lastFinishedPulling="2026-03-13 12:41:22.172973845 +0000 UTC m=+236.806043042" observedRunningTime="2026-03-13 12:41:29.488800241 +0000 UTC m=+244.121869448" watchObservedRunningTime="2026-03-13 12:41:29.492037801 +0000 UTC m=+244.125106988" Mar 13 12:41:30.230826 master-0 kubenswrapper[7518]: I0313 12:41:30.230068 7518 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/control-plane-machine-set-operator-6686554ddc-btz8w" podStartSLOduration=10.142860856 podStartE2EDuration="45.23004055s" podCreationTimestamp="2026-03-13 12:40:45 +0000 UTC" firstStartedPulling="2026-03-13 12:40:47.125267001 +0000 UTC m=+201.758336188" lastFinishedPulling="2026-03-13 12:41:22.212446695 +0000 UTC m=+236.845515882" observedRunningTime="2026-03-13 12:41:29.989183247 +0000 UTC m=+244.622252444" watchObservedRunningTime="2026-03-13 12:41:30.23004055 +0000 UTC m=+244.863109757" Mar 13 12:41:30.287037 master-0 kubenswrapper[7518]: I0313 12:41:30.286953 7518 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-559568b945-9ccr9"] Mar 13 12:41:30.287037 master-0 kubenswrapper[7518]: I0313 12:41:30.287045 7518 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-559568b945-9ccr9"] Mar 13 12:41:30.406161 master-0 kubenswrapper[7518]: I0313 12:41:30.401960 7518 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7c8df9b496-x2wlg"] Mar 13 12:41:30.406161 master-0 kubenswrapper[7518]: E0313 12:41:30.405414 7518 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1c9d5dee-f689-4813-8715-39a6d8ef1a7a" containerName="config-sync-controllers" Mar 13 12:41:30.406161 master-0 kubenswrapper[7518]: I0313 12:41:30.405439 7518 state_mem.go:107] "Deleted CPUSet 
assignment" podUID="1c9d5dee-f689-4813-8715-39a6d8ef1a7a" containerName="config-sync-controllers" Mar 13 12:41:30.406161 master-0 kubenswrapper[7518]: E0313 12:41:30.405449 7518 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9c04ee08-4018-4cf3-b257-10aff84fa933" containerName="machine-approver-controller" Mar 13 12:41:30.406161 master-0 kubenswrapper[7518]: I0313 12:41:30.405456 7518 state_mem.go:107] "Deleted CPUSet assignment" podUID="9c04ee08-4018-4cf3-b257-10aff84fa933" containerName="machine-approver-controller" Mar 13 12:41:30.406161 master-0 kubenswrapper[7518]: E0313 12:41:30.405466 7518 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1c9d5dee-f689-4813-8715-39a6d8ef1a7a" containerName="kube-rbac-proxy" Mar 13 12:41:30.406161 master-0 kubenswrapper[7518]: I0313 12:41:30.405472 7518 state_mem.go:107] "Deleted CPUSet assignment" podUID="1c9d5dee-f689-4813-8715-39a6d8ef1a7a" containerName="kube-rbac-proxy" Mar 13 12:41:30.406161 master-0 kubenswrapper[7518]: E0313 12:41:30.405488 7518 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1c9d5dee-f689-4813-8715-39a6d8ef1a7a" containerName="cluster-cloud-controller-manager" Mar 13 12:41:30.406161 master-0 kubenswrapper[7518]: I0313 12:41:30.405495 7518 state_mem.go:107] "Deleted CPUSet assignment" podUID="1c9d5dee-f689-4813-8715-39a6d8ef1a7a" containerName="cluster-cloud-controller-manager" Mar 13 12:41:30.406161 master-0 kubenswrapper[7518]: E0313 12:41:30.405505 7518 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9c04ee08-4018-4cf3-b257-10aff84fa933" containerName="kube-rbac-proxy" Mar 13 12:41:30.406161 master-0 kubenswrapper[7518]: I0313 12:41:30.405511 7518 state_mem.go:107] "Deleted CPUSet assignment" podUID="9c04ee08-4018-4cf3-b257-10aff84fa933" containerName="kube-rbac-proxy" Mar 13 12:41:30.406161 master-0 kubenswrapper[7518]: I0313 12:41:30.405613 7518 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="9c04ee08-4018-4cf3-b257-10aff84fa933" containerName="kube-rbac-proxy" Mar 13 12:41:30.406161 master-0 kubenswrapper[7518]: I0313 12:41:30.405628 7518 memory_manager.go:354] "RemoveStaleState removing state" podUID="1c9d5dee-f689-4813-8715-39a6d8ef1a7a" containerName="config-sync-controllers" Mar 13 12:41:30.406161 master-0 kubenswrapper[7518]: I0313 12:41:30.405635 7518 memory_manager.go:354] "RemoveStaleState removing state" podUID="1c9d5dee-f689-4813-8715-39a6d8ef1a7a" containerName="kube-rbac-proxy" Mar 13 12:41:30.406161 master-0 kubenswrapper[7518]: I0313 12:41:30.405642 7518 memory_manager.go:354] "RemoveStaleState removing state" podUID="1c9d5dee-f689-4813-8715-39a6d8ef1a7a" containerName="cluster-cloud-controller-manager" Mar 13 12:41:30.406161 master-0 kubenswrapper[7518]: I0313 12:41:30.405651 7518 memory_manager.go:354] "RemoveStaleState removing state" podUID="9c04ee08-4018-4cf3-b257-10aff84fa933" containerName="machine-approver-controller" Mar 13 12:41:30.413827 master-0 kubenswrapper[7518]: I0313 12:41:30.411702 7518 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7c8df9b496-x2wlg" Mar 13 12:41:30.543838 master-0 kubenswrapper[7518]: I0313 12:41:30.542289 7518 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bdxqb\" (UniqueName: \"kubernetes.io/projected/00d8a21b-701c-4334-9dda-34c28b417f42-kube-api-access-bdxqb\") pod \"cluster-cloud-controller-manager-operator-7c8df9b496-x2wlg\" (UID: \"00d8a21b-701c-4334-9dda-34c28b417f42\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7c8df9b496-x2wlg" Mar 13 12:41:30.543838 master-0 kubenswrapper[7518]: I0313 12:41:30.542331 7518 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cloud-controller-manager-operator"/"cluster-cloud-controller-manager-dockercfg-zpmf6" Mar 13 12:41:30.543838 master-0 kubenswrapper[7518]: I0313 12:41:30.542346 7518 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/00d8a21b-701c-4334-9dda-34c28b417f42-images\") pod \"cluster-cloud-controller-manager-operator-7c8df9b496-x2wlg\" (UID: \"00d8a21b-701c-4334-9dda-34c28b417f42\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7c8df9b496-x2wlg" Mar 13 12:41:30.543838 master-0 kubenswrapper[7518]: I0313 12:41:30.542697 7518 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/00d8a21b-701c-4334-9dda-34c28b417f42-auth-proxy-config\") pod \"cluster-cloud-controller-manager-operator-7c8df9b496-x2wlg\" (UID: \"00d8a21b-701c-4334-9dda-34c28b417f42\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7c8df9b496-x2wlg" Mar 13 12:41:30.543838 master-0 kubenswrapper[7518]: I0313 12:41:30.542749 7518 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/00d8a21b-701c-4334-9dda-34c28b417f42-host-etc-kube\") pod \"cluster-cloud-controller-manager-operator-7c8df9b496-x2wlg\" (UID: \"00d8a21b-701c-4334-9dda-34c28b417f42\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7c8df9b496-x2wlg" Mar 13 12:41:30.543838 master-0 kubenswrapper[7518]: I0313 12:41:30.542704 7518 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-controller-manager-operator"/"openshift-service-ca.crt" Mar 13 12:41:30.543838 master-0 kubenswrapper[7518]: I0313 12:41:30.542854 7518 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloud-controller-manager-operator-tls\" (UniqueName: \"kubernetes.io/secret/00d8a21b-701c-4334-9dda-34c28b417f42-cloud-controller-manager-operator-tls\") pod \"cluster-cloud-controller-manager-operator-7c8df9b496-x2wlg\" (UID: \"00d8a21b-701c-4334-9dda-34c28b417f42\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7c8df9b496-x2wlg" Mar 13 12:41:30.543838 master-0 kubenswrapper[7518]: I0313 12:41:30.542736 7518 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-controller-manager-operator"/"cloud-controller-manager-images" Mar 13 12:41:30.543838 master-0 kubenswrapper[7518]: I0313 12:41:30.542941 7518 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-controller-manager-operator"/"kube-root-ca.crt" Mar 13 12:41:30.543838 master-0 kubenswrapper[7518]: I0313 12:41:30.543042 7518 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-controller-manager-operator"/"kube-rbac-proxy" Mar 13 12:41:30.543838 master-0 kubenswrapper[7518]: I0313 12:41:30.543086 7518 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-cloud-controller-manager-operator"/"cloud-controller-manager-operator-tls" Mar 13 12:41:30.554699 master-0 kubenswrapper[7518]: I0313 12:41:30.552683 7518 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-cluster-machine-approver/machine-approver-955fcfb87-bffng"] Mar 13 12:41:30.559155 master-0 kubenswrapper[7518]: I0313 12:41:30.555713 7518 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-cluster-machine-approver/machine-approver-955fcfb87-bffng"] Mar 13 12:41:30.564792 master-0 kubenswrapper[7518]: I0313 12:41:30.564752 7518 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-machine-approver/machine-approver-754bdc9f9d-cwl2p"] Mar 13 12:41:30.568276 master-0 kubenswrapper[7518]: I0313 12:41:30.568229 7518 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-754bdc9f9d-cwl2p" Mar 13 12:41:30.574920 master-0 kubenswrapper[7518]: I0313 12:41:30.574565 7518 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Mar 13 12:41:30.574920 master-0 kubenswrapper[7518]: I0313 12:41:30.574604 7518 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Mar 13 12:41:30.574920 master-0 kubenswrapper[7518]: I0313 12:41:30.574565 7518 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-lpcnm" Mar 13 12:41:30.583923 master-0 kubenswrapper[7518]: I0313 12:41:30.583817 7518 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Mar 13 12:41:30.584183 master-0 kubenswrapper[7518]: I0313 12:41:30.584018 7518 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Mar 13 12:41:30.584229 master-0 
kubenswrapper[7518]: I0313 12:41:30.584203 7518 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config"
Mar 13 12:41:30.696451 master-0 kubenswrapper[7518]: I0313 12:41:30.696369 7518 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/00d8a21b-701c-4334-9dda-34c28b417f42-host-etc-kube\") pod \"cluster-cloud-controller-manager-operator-7c8df9b496-x2wlg\" (UID: \"00d8a21b-701c-4334-9dda-34c28b417f42\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7c8df9b496-x2wlg"
Mar 13 12:41:30.696680 master-0 kubenswrapper[7518]: I0313 12:41:30.696622 7518 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/00d8a21b-701c-4334-9dda-34c28b417f42-host-etc-kube\") pod \"cluster-cloud-controller-manager-operator-7c8df9b496-x2wlg\" (UID: \"00d8a21b-701c-4334-9dda-34c28b417f42\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7c8df9b496-x2wlg"
Mar 13 12:41:30.698279 master-0 kubenswrapper[7518]: I0313 12:41:30.696765 7518 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b12a6f33-70df-4832-ac3b-0d2b94125fbf-config\") pod \"machine-approver-754bdc9f9d-cwl2p\" (UID: \"b12a6f33-70df-4832-ac3b-0d2b94125fbf\") " pod="openshift-cluster-machine-approver/machine-approver-754bdc9f9d-cwl2p"
Mar 13 12:41:30.698279 master-0 kubenswrapper[7518]: I0313 12:41:30.696850 7518 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/b12a6f33-70df-4832-ac3b-0d2b94125fbf-machine-approver-tls\") pod \"machine-approver-754bdc9f9d-cwl2p\" (UID: \"b12a6f33-70df-4832-ac3b-0d2b94125fbf\") " pod="openshift-cluster-machine-approver/machine-approver-754bdc9f9d-cwl2p"
Mar 13 12:41:30.698279 master-0 kubenswrapper[7518]: I0313 12:41:30.697012 7518 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9p9dz\" (UniqueName: \"kubernetes.io/projected/b12a6f33-70df-4832-ac3b-0d2b94125fbf-kube-api-access-9p9dz\") pod \"machine-approver-754bdc9f9d-cwl2p\" (UID: \"b12a6f33-70df-4832-ac3b-0d2b94125fbf\") " pod="openshift-cluster-machine-approver/machine-approver-754bdc9f9d-cwl2p"
Mar 13 12:41:30.698279 master-0 kubenswrapper[7518]: I0313 12:41:30.697114 7518 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloud-controller-manager-operator-tls\" (UniqueName: \"kubernetes.io/secret/00d8a21b-701c-4334-9dda-34c28b417f42-cloud-controller-manager-operator-tls\") pod \"cluster-cloud-controller-manager-operator-7c8df9b496-x2wlg\" (UID: \"00d8a21b-701c-4334-9dda-34c28b417f42\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7c8df9b496-x2wlg"
Mar 13 12:41:30.698279 master-0 kubenswrapper[7518]: I0313 12:41:30.697159 7518 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bdxqb\" (UniqueName: \"kubernetes.io/projected/00d8a21b-701c-4334-9dda-34c28b417f42-kube-api-access-bdxqb\") pod \"cluster-cloud-controller-manager-operator-7c8df9b496-x2wlg\" (UID: \"00d8a21b-701c-4334-9dda-34c28b417f42\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7c8df9b496-x2wlg"
Mar 13 12:41:30.698279 master-0 kubenswrapper[7518]: I0313 12:41:30.697191 7518 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/b12a6f33-70df-4832-ac3b-0d2b94125fbf-auth-proxy-config\") pod \"machine-approver-754bdc9f9d-cwl2p\" (UID: \"b12a6f33-70df-4832-ac3b-0d2b94125fbf\") " pod="openshift-cluster-machine-approver/machine-approver-754bdc9f9d-cwl2p"
Mar 13 12:41:30.698279 master-0 kubenswrapper[7518]: I0313 12:41:30.697231 7518 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/00d8a21b-701c-4334-9dda-34c28b417f42-images\") pod \"cluster-cloud-controller-manager-operator-7c8df9b496-x2wlg\" (UID: \"00d8a21b-701c-4334-9dda-34c28b417f42\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7c8df9b496-x2wlg"
Mar 13 12:41:30.698279 master-0 kubenswrapper[7518]: I0313 12:41:30.697394 7518 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/00d8a21b-701c-4334-9dda-34c28b417f42-auth-proxy-config\") pod \"cluster-cloud-controller-manager-operator-7c8df9b496-x2wlg\" (UID: \"00d8a21b-701c-4334-9dda-34c28b417f42\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7c8df9b496-x2wlg"
Mar 13 12:41:30.698279 master-0 kubenswrapper[7518]: I0313 12:41:30.698088 7518 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/00d8a21b-701c-4334-9dda-34c28b417f42-auth-proxy-config\") pod \"cluster-cloud-controller-manager-operator-7c8df9b496-x2wlg\" (UID: \"00d8a21b-701c-4334-9dda-34c28b417f42\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7c8df9b496-x2wlg"
Mar 13 12:41:30.699834 master-0 kubenswrapper[7518]: I0313 12:41:30.699806 7518 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/00d8a21b-701c-4334-9dda-34c28b417f42-images\") pod \"cluster-cloud-controller-manager-operator-7c8df9b496-x2wlg\" (UID: \"00d8a21b-701c-4334-9dda-34c28b417f42\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7c8df9b496-x2wlg"
Mar 13 12:41:30.703242 master-0 kubenswrapper[7518]: I0313 12:41:30.702754 7518 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloud-controller-manager-operator-tls\" (UniqueName: \"kubernetes.io/secret/00d8a21b-701c-4334-9dda-34c28b417f42-cloud-controller-manager-operator-tls\") pod \"cluster-cloud-controller-manager-operator-7c8df9b496-x2wlg\" (UID: \"00d8a21b-701c-4334-9dda-34c28b417f42\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7c8df9b496-x2wlg"
Mar 13 12:41:30.721304 master-0 kubenswrapper[7518]: I0313 12:41:30.721228 7518 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bdxqb\" (UniqueName: \"kubernetes.io/projected/00d8a21b-701c-4334-9dda-34c28b417f42-kube-api-access-bdxqb\") pod \"cluster-cloud-controller-manager-operator-7c8df9b496-x2wlg\" (UID: \"00d8a21b-701c-4334-9dda-34c28b417f42\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7c8df9b496-x2wlg"
Mar 13 12:41:30.807739 master-0 kubenswrapper[7518]: I0313 12:41:30.806842 7518 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b12a6f33-70df-4832-ac3b-0d2b94125fbf-config\") pod \"machine-approver-754bdc9f9d-cwl2p\" (UID: \"b12a6f33-70df-4832-ac3b-0d2b94125fbf\") " pod="openshift-cluster-machine-approver/machine-approver-754bdc9f9d-cwl2p"
Mar 13 12:41:30.807739 master-0 kubenswrapper[7518]: I0313 12:41:30.806909 7518 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/b12a6f33-70df-4832-ac3b-0d2b94125fbf-machine-approver-tls\") pod \"machine-approver-754bdc9f9d-cwl2p\" (UID: \"b12a6f33-70df-4832-ac3b-0d2b94125fbf\") " pod="openshift-cluster-machine-approver/machine-approver-754bdc9f9d-cwl2p"
Mar 13 12:41:30.807739 master-0 kubenswrapper[7518]: I0313 12:41:30.807084 7518 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9p9dz\" (UniqueName: \"kubernetes.io/projected/b12a6f33-70df-4832-ac3b-0d2b94125fbf-kube-api-access-9p9dz\") pod \"machine-approver-754bdc9f9d-cwl2p\" (UID: \"b12a6f33-70df-4832-ac3b-0d2b94125fbf\") " pod="openshift-cluster-machine-approver/machine-approver-754bdc9f9d-cwl2p"
Mar 13 12:41:30.807739 master-0 kubenswrapper[7518]: I0313 12:41:30.807232 7518 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/b12a6f33-70df-4832-ac3b-0d2b94125fbf-auth-proxy-config\") pod \"machine-approver-754bdc9f9d-cwl2p\" (UID: \"b12a6f33-70df-4832-ac3b-0d2b94125fbf\") " pod="openshift-cluster-machine-approver/machine-approver-754bdc9f9d-cwl2p"
Mar 13 12:41:30.807739 master-0 kubenswrapper[7518]: I0313 12:41:30.807417 7518 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b12a6f33-70df-4832-ac3b-0d2b94125fbf-config\") pod \"machine-approver-754bdc9f9d-cwl2p\" (UID: \"b12a6f33-70df-4832-ac3b-0d2b94125fbf\") " pod="openshift-cluster-machine-approver/machine-approver-754bdc9f9d-cwl2p"
Mar 13 12:41:30.808105 master-0 kubenswrapper[7518]: I0313 12:41:30.807921 7518 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/b12a6f33-70df-4832-ac3b-0d2b94125fbf-auth-proxy-config\") pod \"machine-approver-754bdc9f9d-cwl2p\" (UID: \"b12a6f33-70df-4832-ac3b-0d2b94125fbf\") " pod="openshift-cluster-machine-approver/machine-approver-754bdc9f9d-cwl2p"
Mar 13 12:41:30.810882 master-0 kubenswrapper[7518]: I0313 12:41:30.810793 7518 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/b12a6f33-70df-4832-ac3b-0d2b94125fbf-machine-approver-tls\") pod \"machine-approver-754bdc9f9d-cwl2p\" (UID: \"b12a6f33-70df-4832-ac3b-0d2b94125fbf\") " pod="openshift-cluster-machine-approver/machine-approver-754bdc9f9d-cwl2p"
Mar 13 12:41:30.868450 master-0 kubenswrapper[7518]: I0313 12:41:30.868397 7518 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7c8df9b496-x2wlg"
Mar 13 12:41:31.034474 master-0 kubenswrapper[7518]: W0313 12:41:31.034412 7518 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod00d8a21b_701c_4334_9dda_34c28b417f42.slice/crio-0d4645e0a294cbcc940fcfffa42d733be306f63d83bb6e85a675a05c4f244808 WatchSource:0}: Error finding container 0d4645e0a294cbcc940fcfffa42d733be306f63d83bb6e85a675a05c4f244808: Status 404 returned error can't find the container with id 0d4645e0a294cbcc940fcfffa42d733be306f63d83bb6e85a675a05c4f244808
Mar 13 12:41:31.632212 master-0 kubenswrapper[7518]: I0313 12:41:31.632045 7518 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1c9d5dee-f689-4813-8715-39a6d8ef1a7a" path="/var/lib/kubelet/pods/1c9d5dee-f689-4813-8715-39a6d8ef1a7a/volumes"
Mar 13 12:41:31.633087 master-0 kubenswrapper[7518]: I0313 12:41:31.632939 7518 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9c04ee08-4018-4cf3-b257-10aff84fa933" path="/var/lib/kubelet/pods/9c04ee08-4018-4cf3-b257-10aff84fa933/volumes"
Mar 13 12:41:31.865035 master-0 kubenswrapper[7518]: I0313 12:41:31.863066 7518 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7c8df9b496-x2wlg" event={"ID":"00d8a21b-701c-4334-9dda-34c28b417f42","Type":"ContainerStarted","Data":"fb3e994e087a482374a8017dea545f1ddec09a849b0d0cb7b635b7b86e084f9a"}
Mar 13 12:41:31.865035 master-0 kubenswrapper[7518]: I0313 12:41:31.863129 7518 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7c8df9b496-x2wlg" event={"ID":"00d8a21b-701c-4334-9dda-34c28b417f42","Type":"ContainerStarted","Data":"0d4645e0a294cbcc940fcfffa42d733be306f63d83bb6e85a675a05c4f244808"}
Mar 13 12:41:32.538684 master-0 kubenswrapper[7518]: I0313 12:41:32.538636 7518 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9p9dz\" (UniqueName: \"kubernetes.io/projected/b12a6f33-70df-4832-ac3b-0d2b94125fbf-kube-api-access-9p9dz\") pod \"machine-approver-754bdc9f9d-cwl2p\" (UID: \"b12a6f33-70df-4832-ac3b-0d2b94125fbf\") " pod="openshift-cluster-machine-approver/machine-approver-754bdc9f9d-cwl2p"
Mar 13 12:41:32.731572 master-0 kubenswrapper[7518]: I0313 12:41:32.731437 7518 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-754bdc9f9d-cwl2p"
Mar 13 12:41:32.751163 master-0 kubenswrapper[7518]: W0313 12:41:32.751097 7518 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb12a6f33_70df_4832_ac3b_0d2b94125fbf.slice/crio-f02f7e100e251060c54156f4f1beac07154b4cae59d3669639dcb3b98dca6124 WatchSource:0}: Error finding container f02f7e100e251060c54156f4f1beac07154b4cae59d3669639dcb3b98dca6124: Status 404 returned error can't find the container with id f02f7e100e251060c54156f4f1beac07154b4cae59d3669639dcb3b98dca6124
Mar 13 12:41:33.004980 master-0 kubenswrapper[7518]: I0313 12:41:33.000785 7518 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-754bdc9f9d-cwl2p" event={"ID":"b12a6f33-70df-4832-ac3b-0d2b94125fbf","Type":"ContainerStarted","Data":"f02f7e100e251060c54156f4f1beac07154b4cae59d3669639dcb3b98dca6124"}
Mar 13 12:41:33.004980 master-0 kubenswrapper[7518]: I0313 12:41:33.003195 7518 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7c8df9b496-x2wlg" event={"ID":"00d8a21b-701c-4334-9dda-34c28b417f42","Type":"ContainerStarted","Data":"f7bdd6f14cd7d876f03cc0e565ef27ecd2cd6f1309a345b7b4c1e4b2f6e38eb4"}
Mar 13 12:41:34.063207 master-0 kubenswrapper[7518]: I0313 12:41:34.063132 7518 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-754bdc9f9d-cwl2p" event={"ID":"b12a6f33-70df-4832-ac3b-0d2b94125fbf","Type":"ContainerStarted","Data":"dc0b4d1c3abe817c78af7fe9e0403d40d151ed1545952784963a88e4b8739dc8"}
Mar 13 12:41:35.080865 master-0 kubenswrapper[7518]: I0313 12:41:35.080778 7518 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-754bdc9f9d-cwl2p" event={"ID":"b12a6f33-70df-4832-ac3b-0d2b94125fbf","Type":"ContainerStarted","Data":"bf350ea0de070f0fd26919325b63ec00154a2596f691d915b23dc9183ce79b89"}
Mar 13 12:41:35.083725 master-0 kubenswrapper[7518]: I0313 12:41:35.083677 7518 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7c8df9b496-x2wlg" event={"ID":"00d8a21b-701c-4334-9dda-34c28b417f42","Type":"ContainerStarted","Data":"6f707b67cc62d4814af6503938c996dcae8befee02aba45c7a6ddc49cca77492"}
Mar 13 12:41:36.855322 master-0 kubenswrapper[7518]: I0313 12:41:36.855226 7518 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7c8df9b496-x2wlg" podStartSLOduration=6.85518058 podStartE2EDuration="6.85518058s" podCreationTimestamp="2026-03-13 12:41:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 12:41:36.851470456 +0000 UTC m=+251.484539653" watchObservedRunningTime="2026-03-13 12:41:36.85518058 +0000 UTC m=+251.488249777"
Mar 13 12:41:38.052293 master-0 kubenswrapper[7518]: I0313 12:41:38.051123 7518 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-machine-approver/machine-approver-754bdc9f9d-cwl2p" podStartSLOduration=8.051102935 podStartE2EDuration="8.051102935s" podCreationTimestamp="2026-03-13 12:41:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 12:41:38.043731329 +0000 UTC m=+252.676800526" watchObservedRunningTime="2026-03-13 12:41:38.051102935 +0000 UTC m=+252.684172122"
Mar 13 12:41:46.695322 master-0 kubenswrapper[7518]: I0313 12:41:46.695286 7518 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-authentication-operator_authentication-operator-7c6989d6c4-tc4ht_d11f8baa-6e8e-4ac0-9b23-1c44efd0ab2a/authentication-operator/0.log"
Mar 13 12:41:46.742420 master-0 kubenswrapper[7518]: I0313 12:41:46.740960 7518 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-controller-ff46b7bdf-kmnlv"]
Mar 13 12:41:46.742420 master-0 kubenswrapper[7518]: I0313 12:41:46.741903 7518 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-ff46b7bdf-kmnlv"
Mar 13 12:41:46.745428 master-0 kubenswrapper[7518]: I0313 12:41:46.744041 7518 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-vj5mr"
Mar 13 12:41:46.746052 master-0 kubenswrapper[7518]: I0313 12:41:46.746020 7518 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls"
Mar 13 12:41:46.764773 master-0 kubenswrapper[7518]: I0313 12:41:46.764667 7518 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-ff46b7bdf-kmnlv"]
Mar 13 12:41:46.836568 master-0 kubenswrapper[7518]: I0313 12:41:46.836504 7518 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-authentication-operator_authentication-operator-7c6989d6c4-tc4ht_d11f8baa-6e8e-4ac0-9b23-1c44efd0ab2a/authentication-operator/1.log"
Mar 13 12:41:46.841292 master-0 kubenswrapper[7518]: I0313 12:41:46.841250 7518 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r8gcb\" (UniqueName: \"kubernetes.io/projected/e25bef76-7020-4f86-8dee-a58ebed537d2-kube-api-access-r8gcb\") pod \"machine-config-controller-ff46b7bdf-kmnlv\" (UID: \"e25bef76-7020-4f86-8dee-a58ebed537d2\") " pod="openshift-machine-config-operator/machine-config-controller-ff46b7bdf-kmnlv"
Mar 13 12:41:46.841395 master-0 kubenswrapper[7518]: I0313 12:41:46.841310 7518 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/e25bef76-7020-4f86-8dee-a58ebed537d2-proxy-tls\") pod \"machine-config-controller-ff46b7bdf-kmnlv\" (UID: \"e25bef76-7020-4f86-8dee-a58ebed537d2\") " pod="openshift-machine-config-operator/machine-config-controller-ff46b7bdf-kmnlv"
Mar 13 12:41:46.841395 master-0 kubenswrapper[7518]: I0313 12:41:46.841366 7518 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/e25bef76-7020-4f86-8dee-a58ebed537d2-mcc-auth-proxy-config\") pod \"machine-config-controller-ff46b7bdf-kmnlv\" (UID: \"e25bef76-7020-4f86-8dee-a58ebed537d2\") " pod="openshift-machine-config-operator/machine-config-controller-ff46b7bdf-kmnlv"
Mar 13 12:41:46.942462 master-0 kubenswrapper[7518]: I0313 12:41:46.942404 7518 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r8gcb\" (UniqueName: \"kubernetes.io/projected/e25bef76-7020-4f86-8dee-a58ebed537d2-kube-api-access-r8gcb\") pod \"machine-config-controller-ff46b7bdf-kmnlv\" (UID: \"e25bef76-7020-4f86-8dee-a58ebed537d2\") " pod="openshift-machine-config-operator/machine-config-controller-ff46b7bdf-kmnlv"
Mar 13 12:41:46.942462 master-0 kubenswrapper[7518]: I0313 12:41:46.942465 7518 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/e25bef76-7020-4f86-8dee-a58ebed537d2-proxy-tls\") pod \"machine-config-controller-ff46b7bdf-kmnlv\" (UID: \"e25bef76-7020-4f86-8dee-a58ebed537d2\") " pod="openshift-machine-config-operator/machine-config-controller-ff46b7bdf-kmnlv"
Mar 13 12:41:46.942694 master-0 kubenswrapper[7518]: I0313 12:41:46.942492 7518 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/e25bef76-7020-4f86-8dee-a58ebed537d2-mcc-auth-proxy-config\") pod \"machine-config-controller-ff46b7bdf-kmnlv\" (UID: \"e25bef76-7020-4f86-8dee-a58ebed537d2\") " pod="openshift-machine-config-operator/machine-config-controller-ff46b7bdf-kmnlv"
Mar 13 12:41:46.943501 master-0 kubenswrapper[7518]: I0313 12:41:46.943457 7518 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/e25bef76-7020-4f86-8dee-a58ebed537d2-mcc-auth-proxy-config\") pod \"machine-config-controller-ff46b7bdf-kmnlv\" (UID: \"e25bef76-7020-4f86-8dee-a58ebed537d2\") " pod="openshift-machine-config-operator/machine-config-controller-ff46b7bdf-kmnlv"
Mar 13 12:41:46.949701 master-0 kubenswrapper[7518]: I0313 12:41:46.949665 7518 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/e25bef76-7020-4f86-8dee-a58ebed537d2-proxy-tls\") pod \"machine-config-controller-ff46b7bdf-kmnlv\" (UID: \"e25bef76-7020-4f86-8dee-a58ebed537d2\") " pod="openshift-machine-config-operator/machine-config-controller-ff46b7bdf-kmnlv"
Mar 13 12:41:46.962857 master-0 kubenswrapper[7518]: I0313 12:41:46.962218 7518 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r8gcb\" (UniqueName: \"kubernetes.io/projected/e25bef76-7020-4f86-8dee-a58ebed537d2-kube-api-access-r8gcb\") pod \"machine-config-controller-ff46b7bdf-kmnlv\" (UID: \"e25bef76-7020-4f86-8dee-a58ebed537d2\") " pod="openshift-machine-config-operator/machine-config-controller-ff46b7bdf-kmnlv"
Mar 13 12:41:47.167702 master-0 kubenswrapper[7518]: I0313 12:41:47.167248 7518 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-ff46b7bdf-kmnlv"
Mar 13 12:41:47.219768 master-0 kubenswrapper[7518]: I0313 12:41:47.219721 7518 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-oauth-apiserver_apiserver-787dbf5bb9-5645n_c4477be6-bcff-407a-8033-b005e19bf5d6/fix-audit-permissions/0.log"
Mar 13 12:41:47.354245 master-0 kubenswrapper[7518]: I0313 12:41:47.353753 7518 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9x9vk" event={"ID":"4f9e6618-62b5-4181-b545-211461811140","Type":"ContainerStarted","Data":"9062806f1fd7e57e5ebf97f5b666c4fc035d390e813f9b35e04e9053adfc3f6f"}
Mar 13 12:41:47.368203 master-0 kubenswrapper[7518]: I0313 12:41:47.368126 7518 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zh888" event={"ID":"e0ce4c51-2b9f-410f-93e5-9c2ff718dd71","Type":"ContainerStarted","Data":"00809c4ee5d726c34bf2d02efc2a030a2872a784609c5eff8c026449ec31a4e3"}
Mar 13 12:41:47.376167 master-0 kubenswrapper[7518]: I0313 12:41:47.375489 7518 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5czx2" event={"ID":"32fe77f9-082d-491c-b3d0-9c10feaf4a8e","Type":"ContainerStarted","Data":"80d53224cdb07f7df330de943b1b7ce728c8bcbc75bc0ba2ba3eafd3ec3222e4"}
Mar 13 12:41:47.380164 master-0 kubenswrapper[7518]: I0313 12:41:47.379195 7518 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-p9csk" event={"ID":"1cf388b6-e4a7-41db-a350-1b503214efd3","Type":"ContainerStarted","Data":"e21721716954cdfb41ff707242680d7299b897aceb6a1df7497618d1ec86ac9f"}
Mar 13 12:41:47.465290 master-0 kubenswrapper[7518]: I0313 12:41:47.464604 7518 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-9x9vk" podStartSLOduration=4.089821384 podStartE2EDuration="1m11.464583887s" podCreationTimestamp="2026-03-13 12:40:36 +0000 UTC" firstStartedPulling="2026-03-13 12:40:39.04505396 +0000 UTC m=+193.678123137" lastFinishedPulling="2026-03-13 12:41:46.419816453 +0000 UTC m=+261.052885640" observedRunningTime="2026-03-13 12:41:47.463941781 +0000 UTC m=+262.097010968" watchObservedRunningTime="2026-03-13 12:41:47.464583887 +0000 UTC m=+262.097653084"
Mar 13 12:41:47.813175 master-0 kubenswrapper[7518]: I0313 12:41:47.808029 7518 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-zh888" podStartSLOduration=3.488679035 podStartE2EDuration="1m10.80800596s" podCreationTimestamp="2026-03-13 12:40:37 +0000 UTC" firstStartedPulling="2026-03-13 12:40:39.04887451 +0000 UTC m=+193.681943697" lastFinishedPulling="2026-03-13 12:41:46.368201445 +0000 UTC m=+261.001270622" observedRunningTime="2026-03-13 12:41:47.807926768 +0000 UTC m=+262.440995975" watchObservedRunningTime="2026-03-13 12:41:47.80800596 +0000 UTC m=+262.441075147"
Mar 13 12:41:47.829188 master-0 kubenswrapper[7518]: I0313 12:41:47.825872 7518 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-oauth-apiserver_apiserver-787dbf5bb9-5645n_c4477be6-bcff-407a-8033-b005e19bf5d6/oauth-apiserver/0.log"
Mar 13 12:41:48.105181 master-0 kubenswrapper[7518]: I0313 12:41:48.100866 7518 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_cluster-autoscaler-operator-69576476f7-sqndx_d44112d1-b2a5-4b8d-b74d-1e91638508d5/kube-rbac-proxy/0.log"
Mar 13 12:41:48.113165 master-0 kubenswrapper[7518]: I0313 12:41:48.112865 7518 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_cluster-autoscaler-operator-69576476f7-sqndx_d44112d1-b2a5-4b8d-b74d-1e91638508d5/cluster-autoscaler-operator/0.log"
Mar 13 12:41:48.117159 master-0 kubenswrapper[7518]: I0313 12:41:48.115763 7518 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-p9csk" podStartSLOduration=4.469887517 podStartE2EDuration="1m12.115744368s" podCreationTimestamp="2026-03-13 12:40:36 +0000 UTC" firstStartedPulling="2026-03-13 12:40:39.051999997 +0000 UTC m=+193.685069184" lastFinishedPulling="2026-03-13 12:41:46.697856848 +0000 UTC m=+261.330926035" observedRunningTime="2026-03-13 12:41:48.088754653 +0000 UTC m=+262.721823840" watchObservedRunningTime="2026-03-13 12:41:48.115744368 +0000 UTC m=+262.748813565"
Mar 13 12:41:48.123004 master-0 kubenswrapper[7518]: I0313 12:41:48.117598 7518 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-9x9vk"
Mar 13 12:41:48.123004 master-0 kubenswrapper[7518]: I0313 12:41:48.117658 7518 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-p9csk"
Mar 13 12:41:48.123004 master-0 kubenswrapper[7518]: I0313 12:41:48.117682 7518 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-zh888"
Mar 13 12:41:48.123004 master-0 kubenswrapper[7518]: I0313 12:41:48.117698 7518 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-9x9vk"
Mar 13 12:41:48.123004 master-0 kubenswrapper[7518]: I0313 12:41:48.117717 7518 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-p9csk"
Mar 13 12:41:48.123004 master-0 kubenswrapper[7518]: I0313 12:41:48.117730 7518 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-zh888"
Mar 13 12:41:48.123004 master-0 kubenswrapper[7518]: I0313 12:41:48.122515 7518 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-5czx2" podStartSLOduration=3.797858588 podStartE2EDuration="1m10.122489968s" podCreationTimestamp="2026-03-13 12:40:38 +0000 UTC" firstStartedPulling="2026-03-13 12:40:40.062043384 +0000 UTC m=+194.695112581" lastFinishedPulling="2026-03-13 12:41:46.386674784 +0000 UTC m=+261.019743961" observedRunningTime="2026-03-13 12:41:48.117725428 +0000 UTC m=+262.750794645" watchObservedRunningTime="2026-03-13 12:41:48.122489968 +0000 UTC m=+262.755559155"
Mar 13 12:41:48.128173 master-0 kubenswrapper[7518]: I0313 12:41:48.128077 7518 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_cluster-baremetal-operator-5cdb4c5598-l6jp5_317af639-269e-4163-8e24-fcea468b9352/cluster-baremetal-operator/0.log"
Mar 13 12:41:48.155164 master-0 kubenswrapper[7518]: I0313 12:41:48.149242 7518 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-ff46b7bdf-kmnlv"]
Mar 13 12:41:48.165357 master-0 kubenswrapper[7518]: W0313 12:41:48.165292 7518 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode25bef76_7020_4f86_8dee_a58ebed537d2.slice/crio-e631d83a1a86fd29ec9a08d7d593e19783f91c18b20dce846f07ab60e82a0c6e WatchSource:0}: Error finding container e631d83a1a86fd29ec9a08d7d593e19783f91c18b20dce846f07ab60e82a0c6e: Status 404 returned error can't find the container with id e631d83a1a86fd29ec9a08d7d593e19783f91c18b20dce846f07ab60e82a0c6e
Mar 13 12:41:48.220059 master-0 kubenswrapper[7518]: I0313 12:41:48.219998 7518 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_cluster-baremetal-operator-5cdb4c5598-l6jp5_317af639-269e-4163-8e24-fcea468b9352/baremetal-kube-rbac-proxy/0.log"
Mar 13 12:41:48.448170 master-0 kubenswrapper[7518]: I0313 12:41:48.444689 7518 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-6686554ddc-btz8w_747659a6-4a1e-43ed-bb8e-36da6e63b5a1/control-plane-machine-set-operator/0.log"
Mar 13 12:41:48.448170 master-0 kubenswrapper[7518]: I0313 12:41:48.444777 7518 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-ff46b7bdf-kmnlv" event={"ID":"e25bef76-7020-4f86-8dee-a58ebed537d2","Type":"ContainerStarted","Data":"e631d83a1a86fd29ec9a08d7d593e19783f91c18b20dce846f07ab60e82a0c6e"}
Mar 13 12:41:48.616546 master-0 kubenswrapper[7518]: I0313 12:41:48.616503 7518 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-84bf6db4f9-mjxcz_d5f63b6b-990a-444b-a954-d718036f2f6c/kube-rbac-proxy/0.log"
Mar 13 12:41:48.831365 master-0 kubenswrapper[7518]: I0313 12:41:48.831300 7518 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress/router-default-79f8cd6fdd-wtf6j"]
Mar 13 12:41:48.832200 master-0 kubenswrapper[7518]: I0313 12:41:48.832177 7518 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-79f8cd6fdd-wtf6j"
Mar 13 12:41:48.834876 master-0 kubenswrapper[7518]: I0313 12:41:48.834833 7518 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/prometheus-operator-admission-webhook-8464df8497-pmzkf"]
Mar 13 12:41:48.835585 master-0 kubenswrapper[7518]: I0313 12:41:48.835563 7518 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/prometheus-operator-admission-webhook-8464df8497-pmzkf"
Mar 13 12:41:48.836504 master-0 kubenswrapper[7518]: I0313 12:41:48.836459 7518 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-network-diagnostics/network-check-source-7c67b67d47-5bb88"]
Mar 13 12:41:48.836916 master-0 kubenswrapper[7518]: I0313 12:41:48.836895 7518 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle"
Mar 13 12:41:48.837090 master-0 kubenswrapper[7518]: I0313 12:41:48.837072 7518 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default"
Mar 13 12:41:48.837308 master-0 kubenswrapper[7518]: I0313 12:41:48.837289 7518 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt"
Mar 13 12:41:48.837749 master-0 kubenswrapper[7518]: I0313 12:41:48.837727 7518 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default"
Mar 13 12:41:48.838400 master-0 kubenswrapper[7518]: I0313 12:41:48.838380 7518 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-84bf6db4f9-mjxcz_d5f63b6b-990a-444b-a954-d718036f2f6c/machine-api-operator/0.log"
Mar 13 12:41:48.838690 master-0 kubenswrapper[7518]: I0313 12:41:48.838666 7518 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt"
Mar 13 12:41:48.838813 master-0 kubenswrapper[7518]: I0313 12:41:48.838794 7518 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default"
Mar 13 12:41:48.838863 master-0 kubenswrapper[7518]: I0313 12:41:48.838815 7518 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-admission-webhook-tls"
Mar 13 12:41:48.839165 master-0 kubenswrapper[7518]: I0313 12:41:48.839146 7518 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-admission-webhook-dockercfg-lkhsh"
Mar 13 12:41:48.843204 master-0 kubenswrapper[7518]: I0313 12:41:48.843101 7518 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-7c67b67d47-5bb88"
Mar 13 12:41:48.860850 master-0 kubenswrapper[7518]: I0313 12:41:48.860817 7518 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-network-diagnostics/network-check-source-7c67b67d47-5bb88"]
Mar 13 12:41:48.869256 master-0 kubenswrapper[7518]: I0313 12:41:48.869214 7518 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/prometheus-operator-admission-webhook-8464df8497-pmzkf"]
Mar 13 12:41:48.879733 master-0 kubenswrapper[7518]: I0313 12:41:48.879684 7518 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-canary/ingress-canary-h8skx"]
Mar 13 12:41:48.884754 master-0 kubenswrapper[7518]: I0313 12:41:48.881207 7518 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-h8skx"
Mar 13 12:41:48.884754 master-0 kubenswrapper[7518]: I0313 12:41:48.884478 7518 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert"
Mar 13 12:41:48.884754 master-0 kubenswrapper[7518]: I0313 12:41:48.884478 7518 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt"
Mar 13 12:41:48.884754 master-0 kubenswrapper[7518]: I0313 12:41:48.884545 7518 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-lz982"
Mar 13 12:41:48.884754 master-0 kubenswrapper[7518]: I0313 12:41:48.884631 7518 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt"
Mar 13 12:41:48.895520 master-0 kubenswrapper[7518]: I0313 12:41:48.895486 7518 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-h8skx"]
Mar 13 12:41:49.016166 master-0 kubenswrapper[7518]: I0313 12:41:49.016049 7518 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-certificates\" (UniqueName: \"kubernetes.io/secret/866b0545-e232-4c80-9fb6-549d313ac3fc-tls-certificates\") pod \"prometheus-operator-admission-webhook-8464df8497-pmzkf\" (UID: \"866b0545-e232-4c80-9fb6-549d313ac3fc\") " pod="openshift-monitoring/prometheus-operator-admission-webhook-8464df8497-pmzkf"
Mar 13 12:41:49.016166 master-0 kubenswrapper[7518]: I0313 12:41:49.016088 7518 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/45925a5e-41ae-4c19-b586-3151c7677612-default-certificate\") pod \"router-default-79f8cd6fdd-wtf6j\" (UID: \"45925a5e-41ae-4c19-b586-3151c7677612\") " pod="openshift-ingress/router-default-79f8cd6fdd-wtf6j"
Mar 13 12:41:49.016166 master-0 kubenswrapper[7518]: I0313 12:41:49.016116 7518 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/45925a5e-41ae-4c19-b586-3151c7677612-metrics-certs\") pod \"router-default-79f8cd6fdd-wtf6j\" (UID: \"45925a5e-41ae-4c19-b586-3151c7677612\") " pod="openshift-ingress/router-default-79f8cd6fdd-wtf6j"
Mar 13 12:41:49.016166 master-0 kubenswrapper[7518]: I0313 12:41:49.016131 7518 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tll9d\" (UniqueName: \"kubernetes.io/projected/45925a5e-41ae-4c19-b586-3151c7677612-kube-api-access-tll9d\") pod \"router-default-79f8cd6fdd-wtf6j\" (UID: \"45925a5e-41ae-4c19-b586-3151c7677612\") " pod="openshift-ingress/router-default-79f8cd6fdd-wtf6j"
Mar 13 12:41:49.016166 master-0 kubenswrapper[7518]: I0313 12:41:49.016171 7518 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gd6q6\" (UniqueName: \"kubernetes.io/projected/81f8a7d8-b6a2-4522-91d3-bb524997ed0a-kube-api-access-gd6q6\") pod \"ingress-canary-h8skx\" (UID: \"81f8a7d8-b6a2-4522-91d3-bb524997ed0a\") " pod="openshift-ingress-canary/ingress-canary-h8skx"
Mar 13 12:41:49.016556 master-0 kubenswrapper[7518]: I0313 12:41:49.016228 7518 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/45925a5e-41ae-4c19-b586-3151c7677612-service-ca-bundle\") pod \"router-default-79f8cd6fdd-wtf6j\" (UID: \"45925a5e-41ae-4c19-b586-3151c7677612\") " pod="openshift-ingress/router-default-79f8cd6fdd-wtf6j"
Mar 13 12:41:49.016556 master-0 kubenswrapper[7518]: I0313 12:41:49.016249 7518 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/45925a5e-41ae-4c19-b586-3151c7677612-stats-auth\") pod
\"router-default-79f8cd6fdd-wtf6j\" (UID: \"45925a5e-41ae-4c19-b586-3151c7677612\") " pod="openshift-ingress/router-default-79f8cd6fdd-wtf6j" Mar 13 12:41:49.016556 master-0 kubenswrapper[7518]: I0313 12:41:49.016268 7518 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7mmbc\" (UniqueName: \"kubernetes.io/projected/6a42098e-4633-456f-ace7-bd3ee3bb6707-kube-api-access-7mmbc\") pod \"network-check-source-7c67b67d47-5bb88\" (UID: \"6a42098e-4633-456f-ace7-bd3ee3bb6707\") " pod="openshift-network-diagnostics/network-check-source-7c67b67d47-5bb88" Mar 13 12:41:49.016556 master-0 kubenswrapper[7518]: I0313 12:41:49.016314 7518 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/81f8a7d8-b6a2-4522-91d3-bb524997ed0a-cert\") pod \"ingress-canary-h8skx\" (UID: \"81f8a7d8-b6a2-4522-91d3-bb524997ed0a\") " pod="openshift-ingress-canary/ingress-canary-h8skx" Mar 13 12:41:49.023010 master-0 kubenswrapper[7518]: I0313 12:41:49.022969 7518 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd-operator_etcd-operator-5884b9cd56-hjzms_15b592d6-3c48-45d4-9172-d28632ae8995/etcd-operator/0.log" Mar 13 12:41:49.106283 master-0 kubenswrapper[7518]: I0313 12:41:49.105699 7518 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-9x9vk" podUID="4f9e6618-62b5-4181-b545-211461811140" containerName="registry-server" probeResult="failure" output=< Mar 13 12:41:49.106283 master-0 kubenswrapper[7518]: timeout: failed to connect service ":50051" within 1s Mar 13 12:41:49.106283 master-0 kubenswrapper[7518]: > Mar 13 12:41:49.117077 master-0 kubenswrapper[7518]: I0313 12:41:49.117009 7518 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/81f8a7d8-b6a2-4522-91d3-bb524997ed0a-cert\") pod \"ingress-canary-h8skx\" (UID: 
\"81f8a7d8-b6a2-4522-91d3-bb524997ed0a\") " pod="openshift-ingress-canary/ingress-canary-h8skx" Mar 13 12:41:49.117285 master-0 kubenswrapper[7518]: I0313 12:41:49.117105 7518 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-certificates\" (UniqueName: \"kubernetes.io/secret/866b0545-e232-4c80-9fb6-549d313ac3fc-tls-certificates\") pod \"prometheus-operator-admission-webhook-8464df8497-pmzkf\" (UID: \"866b0545-e232-4c80-9fb6-549d313ac3fc\") " pod="openshift-monitoring/prometheus-operator-admission-webhook-8464df8497-pmzkf" Mar 13 12:41:49.117285 master-0 kubenswrapper[7518]: I0313 12:41:49.117129 7518 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/45925a5e-41ae-4c19-b586-3151c7677612-default-certificate\") pod \"router-default-79f8cd6fdd-wtf6j\" (UID: \"45925a5e-41ae-4c19-b586-3151c7677612\") " pod="openshift-ingress/router-default-79f8cd6fdd-wtf6j" Mar 13 12:41:49.117285 master-0 kubenswrapper[7518]: I0313 12:41:49.117191 7518 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/45925a5e-41ae-4c19-b586-3151c7677612-metrics-certs\") pod \"router-default-79f8cd6fdd-wtf6j\" (UID: \"45925a5e-41ae-4c19-b586-3151c7677612\") " pod="openshift-ingress/router-default-79f8cd6fdd-wtf6j" Mar 13 12:41:49.117285 master-0 kubenswrapper[7518]: I0313 12:41:49.117210 7518 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tll9d\" (UniqueName: \"kubernetes.io/projected/45925a5e-41ae-4c19-b586-3151c7677612-kube-api-access-tll9d\") pod \"router-default-79f8cd6fdd-wtf6j\" (UID: \"45925a5e-41ae-4c19-b586-3151c7677612\") " pod="openshift-ingress/router-default-79f8cd6fdd-wtf6j" Mar 13 12:41:49.117285 master-0 kubenswrapper[7518]: I0313 12:41:49.117234 7518 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-gd6q6\" (UniqueName: \"kubernetes.io/projected/81f8a7d8-b6a2-4522-91d3-bb524997ed0a-kube-api-access-gd6q6\") pod \"ingress-canary-h8skx\" (UID: \"81f8a7d8-b6a2-4522-91d3-bb524997ed0a\") " pod="openshift-ingress-canary/ingress-canary-h8skx" Mar 13 12:41:49.117285 master-0 kubenswrapper[7518]: I0313 12:41:49.117264 7518 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/45925a5e-41ae-4c19-b586-3151c7677612-service-ca-bundle\") pod \"router-default-79f8cd6fdd-wtf6j\" (UID: \"45925a5e-41ae-4c19-b586-3151c7677612\") " pod="openshift-ingress/router-default-79f8cd6fdd-wtf6j" Mar 13 12:41:49.117474 master-0 kubenswrapper[7518]: I0313 12:41:49.117294 7518 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/45925a5e-41ae-4c19-b586-3151c7677612-stats-auth\") pod \"router-default-79f8cd6fdd-wtf6j\" (UID: \"45925a5e-41ae-4c19-b586-3151c7677612\") " pod="openshift-ingress/router-default-79f8cd6fdd-wtf6j" Mar 13 12:41:49.117474 master-0 kubenswrapper[7518]: I0313 12:41:49.117314 7518 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7mmbc\" (UniqueName: \"kubernetes.io/projected/6a42098e-4633-456f-ace7-bd3ee3bb6707-kube-api-access-7mmbc\") pod \"network-check-source-7c67b67d47-5bb88\" (UID: \"6a42098e-4633-456f-ace7-bd3ee3bb6707\") " pod="openshift-network-diagnostics/network-check-source-7c67b67d47-5bb88" Mar 13 12:41:49.118705 master-0 kubenswrapper[7518]: I0313 12:41:49.118664 7518 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/45925a5e-41ae-4c19-b586-3151c7677612-service-ca-bundle\") pod \"router-default-79f8cd6fdd-wtf6j\" (UID: \"45925a5e-41ae-4c19-b586-3151c7677612\") " pod="openshift-ingress/router-default-79f8cd6fdd-wtf6j" Mar 13 12:41:49.120856 master-0 
kubenswrapper[7518]: I0313 12:41:49.120809 7518 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/81f8a7d8-b6a2-4522-91d3-bb524997ed0a-cert\") pod \"ingress-canary-h8skx\" (UID: \"81f8a7d8-b6a2-4522-91d3-bb524997ed0a\") " pod="openshift-ingress-canary/ingress-canary-h8skx" Mar 13 12:41:49.124520 master-0 kubenswrapper[7518]: I0313 12:41:49.124464 7518 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-certificates\" (UniqueName: \"kubernetes.io/secret/866b0545-e232-4c80-9fb6-549d313ac3fc-tls-certificates\") pod \"prometheus-operator-admission-webhook-8464df8497-pmzkf\" (UID: \"866b0545-e232-4c80-9fb6-549d313ac3fc\") " pod="openshift-monitoring/prometheus-operator-admission-webhook-8464df8497-pmzkf" Mar 13 12:41:49.125029 master-0 kubenswrapper[7518]: I0313 12:41:49.124999 7518 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/45925a5e-41ae-4c19-b586-3151c7677612-stats-auth\") pod \"router-default-79f8cd6fdd-wtf6j\" (UID: \"45925a5e-41ae-4c19-b586-3151c7677612\") " pod="openshift-ingress/router-default-79f8cd6fdd-wtf6j" Mar 13 12:41:49.125091 master-0 kubenswrapper[7518]: I0313 12:41:49.125073 7518 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/45925a5e-41ae-4c19-b586-3151c7677612-default-certificate\") pod \"router-default-79f8cd6fdd-wtf6j\" (UID: \"45925a5e-41ae-4c19-b586-3151c7677612\") " pod="openshift-ingress/router-default-79f8cd6fdd-wtf6j" Mar 13 12:41:49.125586 master-0 kubenswrapper[7518]: I0313 12:41:49.125551 7518 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/45925a5e-41ae-4c19-b586-3151c7677612-metrics-certs\") pod \"router-default-79f8cd6fdd-wtf6j\" (UID: \"45925a5e-41ae-4c19-b586-3151c7677612\") " 
pod="openshift-ingress/router-default-79f8cd6fdd-wtf6j" Mar 13 12:41:49.137850 master-0 kubenswrapper[7518]: I0313 12:41:49.137787 7518 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-marketplace-zh888" podUID="e0ce4c51-2b9f-410f-93e5-9c2ff718dd71" containerName="registry-server" probeResult="failure" output=< Mar 13 12:41:49.137850 master-0 kubenswrapper[7518]: timeout: failed to connect service ":50051" within 1s Mar 13 12:41:49.137850 master-0 kubenswrapper[7518]: > Mar 13 12:41:49.169319 master-0 kubenswrapper[7518]: I0313 12:41:49.168453 7518 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-p9csk" podUID="1cf388b6-e4a7-41db-a350-1b503214efd3" containerName="registry-server" probeResult="failure" output=< Mar 13 12:41:49.169319 master-0 kubenswrapper[7518]: timeout: failed to connect service ":50051" within 1s Mar 13 12:41:49.169319 master-0 kubenswrapper[7518]: > Mar 13 12:41:49.169588 master-0 kubenswrapper[7518]: I0313 12:41:49.169392 7518 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/prometheus-operator-admission-webhook-8464df8497-pmzkf" Mar 13 12:41:49.173897 master-0 kubenswrapper[7518]: I0313 12:41:49.171379 7518 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tll9d\" (UniqueName: \"kubernetes.io/projected/45925a5e-41ae-4c19-b586-3151c7677612-kube-api-access-tll9d\") pod \"router-default-79f8cd6fdd-wtf6j\" (UID: \"45925a5e-41ae-4c19-b586-3151c7677612\") " pod="openshift-ingress/router-default-79f8cd6fdd-wtf6j" Mar 13 12:41:49.173897 master-0 kubenswrapper[7518]: I0313 12:41:49.172126 7518 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gd6q6\" (UniqueName: \"kubernetes.io/projected/81f8a7d8-b6a2-4522-91d3-bb524997ed0a-kube-api-access-gd6q6\") pod \"ingress-canary-h8skx\" (UID: \"81f8a7d8-b6a2-4522-91d3-bb524997ed0a\") " pod="openshift-ingress-canary/ingress-canary-h8skx" Mar 13 12:41:49.176094 master-0 kubenswrapper[7518]: I0313 12:41:49.174679 7518 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7mmbc\" (UniqueName: \"kubernetes.io/projected/6a42098e-4633-456f-ace7-bd3ee3bb6707-kube-api-access-7mmbc\") pod \"network-check-source-7c67b67d47-5bb88\" (UID: \"6a42098e-4633-456f-ace7-bd3ee3bb6707\") " pod="openshift-network-diagnostics/network-check-source-7c67b67d47-5bb88" Mar 13 12:41:49.181449 master-0 kubenswrapper[7518]: I0313 12:41:49.181397 7518 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-7c67b67d47-5bb88" Mar 13 12:41:49.193467 master-0 kubenswrapper[7518]: I0313 12:41:49.191549 7518 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-5czx2" Mar 13 12:41:49.193467 master-0 kubenswrapper[7518]: I0313 12:41:49.191591 7518 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-5czx2" Mar 13 12:41:49.290122 master-0 kubenswrapper[7518]: I0313 12:41:49.290065 7518 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-h8skx" Mar 13 12:41:49.538442 master-0 kubenswrapper[7518]: I0313 12:41:49.538366 7518 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-79f8cd6fdd-wtf6j" Mar 13 12:41:49.549576 master-0 kubenswrapper[7518]: I0313 12:41:49.549530 7518 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd-operator_etcd-operator-5884b9cd56-hjzms_15b592d6-3c48-45d4-9172-d28632ae8995/etcd-operator/1.log" Mar 13 12:41:49.556986 master-0 kubenswrapper[7518]: I0313 12:41:49.556681 7518 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-ff46b7bdf-kmnlv" event={"ID":"e25bef76-7020-4f86-8dee-a58ebed537d2","Type":"ContainerStarted","Data":"fefc52314f557d7c60fa165574ebac10c9ccc912b863ad03ae108b2ab17e6e90"} Mar 13 12:41:49.556986 master-0 kubenswrapper[7518]: I0313 12:41:49.556949 7518 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-ff46b7bdf-kmnlv" event={"ID":"e25bef76-7020-4f86-8dee-a58ebed537d2","Type":"ContainerStarted","Data":"04955d1b4bcf270108cdf9a4283cc8df8e00452f3d53e4292977405c84470cbe"} Mar 13 12:41:49.565818 master-0 kubenswrapper[7518]: I0313 12:41:49.565774 7518 log.go:25] "Finished parsing log 
file" path="/var/log/pods/openshift-etcd_etcd-master-0_8e52bef89f4b50e4590a1719bcc5d7e5/setup/0.log" Mar 13 12:41:49.700218 master-0 kubenswrapper[7518]: I0313 12:41:49.700172 7518 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-controller-ff46b7bdf-kmnlv" podStartSLOduration=3.700157506 podStartE2EDuration="3.700157506s" podCreationTimestamp="2026-03-13 12:41:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 12:41:49.695428766 +0000 UTC m=+264.328497953" watchObservedRunningTime="2026-03-13 12:41:49.700157506 +0000 UTC m=+264.333226693" Mar 13 12:41:49.721970 master-0 kubenswrapper[7518]: I0313 12:41:49.715922 7518 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_8e52bef89f4b50e4590a1719bcc5d7e5/etcd-ensure-env-vars/0.log" Mar 13 12:41:49.821164 master-0 kubenswrapper[7518]: I0313 12:41:49.821027 7518 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-network-diagnostics/network-check-source-7c67b67d47-5bb88"] Mar 13 12:41:49.823495 master-0 kubenswrapper[7518]: I0313 12:41:49.823464 7518 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_8e52bef89f4b50e4590a1719bcc5d7e5/etcd-resources-copy/0.log" Mar 13 12:41:49.849131 master-0 kubenswrapper[7518]: I0313 12:41:49.849083 7518 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/prometheus-operator-admission-webhook-8464df8497-pmzkf"] Mar 13 12:41:49.862503 master-0 kubenswrapper[7518]: W0313 12:41:49.856972 7518 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod866b0545_e232_4c80_9fb6_549d313ac3fc.slice/crio-f7b194f18885cd869cc30349fb7d97bcdda7984dea9fb20d14a3e9436a39dc13 WatchSource:0}: Error finding container 
f7b194f18885cd869cc30349fb7d97bcdda7984dea9fb20d14a3e9436a39dc13: Status 404 returned error can't find the container with id f7b194f18885cd869cc30349fb7d97bcdda7984dea9fb20d14a3e9436a39dc13 Mar 13 12:41:50.009960 master-0 kubenswrapper[7518]: I0313 12:41:50.009913 7518 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-h8skx"] Mar 13 12:41:50.015670 master-0 kubenswrapper[7518]: W0313 12:41:50.015473 7518 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod81f8a7d8_b6a2_4522_91d3_bb524997ed0a.slice/crio-11e56b22c0ca61c66515f175bbe9f8fe67513a2c89d80968a1d368bbdad873da WatchSource:0}: Error finding container 11e56b22c0ca61c66515f175bbe9f8fe67513a2c89d80968a1d368bbdad873da: Status 404 returned error can't find the container with id 11e56b22c0ca61c66515f175bbe9f8fe67513a2c89d80968a1d368bbdad873da Mar 13 12:41:50.018957 master-0 kubenswrapper[7518]: I0313 12:41:50.018918 7518 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_8e52bef89f4b50e4590a1719bcc5d7e5/etcdctl/0.log" Mar 13 12:41:50.221712 master-0 kubenswrapper[7518]: I0313 12:41:50.221678 7518 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_8e52bef89f4b50e4590a1719bcc5d7e5/etcd/0.log" Mar 13 12:41:50.248858 master-0 kubenswrapper[7518]: I0313 12:41:50.248762 7518 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-5czx2" podUID="32fe77f9-082d-491c-b3d0-9c10feaf4a8e" containerName="registry-server" probeResult="failure" output=< Mar 13 12:41:50.248858 master-0 kubenswrapper[7518]: timeout: failed to connect service ":50051" within 1s Mar 13 12:41:50.248858 master-0 kubenswrapper[7518]: > Mar 13 12:41:50.420218 master-0 kubenswrapper[7518]: I0313 12:41:50.419172 7518 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-etcd_etcd-master-0_8e52bef89f4b50e4590a1719bcc5d7e5/etcd-metrics/0.log" Mar 13 12:41:50.574168 master-0 kubenswrapper[7518]: I0313 12:41:50.568560 7518 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-7c67b67d47-5bb88" event={"ID":"6a42098e-4633-456f-ace7-bd3ee3bb6707","Type":"ContainerStarted","Data":"130ce28e7a1c6f97badf983a5541aff1537d141d0820ead1e9ae47d0a285bae6"} Mar 13 12:41:50.574168 master-0 kubenswrapper[7518]: I0313 12:41:50.568604 7518 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-7c67b67d47-5bb88" event={"ID":"6a42098e-4633-456f-ace7-bd3ee3bb6707","Type":"ContainerStarted","Data":"a3ffdbf0e263655894f67c3d77b8923c8263311f04a159ccc83606c42c70fddb"} Mar 13 12:41:50.574168 master-0 kubenswrapper[7518]: I0313 12:41:50.571289 7518 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-h8skx" event={"ID":"81f8a7d8-b6a2-4522-91d3-bb524997ed0a","Type":"ContainerStarted","Data":"65c5d50a5217b0dc2d6720c21dbff39790aa6ad53d024f5943cc42d9494d61a1"} Mar 13 12:41:50.574168 master-0 kubenswrapper[7518]: I0313 12:41:50.571317 7518 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-h8skx" event={"ID":"81f8a7d8-b6a2-4522-91d3-bb524997ed0a","Type":"ContainerStarted","Data":"11e56b22c0ca61c66515f175bbe9f8fe67513a2c89d80968a1d368bbdad873da"} Mar 13 12:41:50.582209 master-0 kubenswrapper[7518]: I0313 12:41:50.574742 7518 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-79f8cd6fdd-wtf6j" event={"ID":"45925a5e-41ae-4c19-b586-3151c7677612","Type":"ContainerStarted","Data":"259b8c4f70e310b1a2310215be2034d29d1f6b96a9b3aac30e2098e024daf661"} Mar 13 12:41:50.582209 master-0 kubenswrapper[7518]: I0313 12:41:50.576288 7518 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-monitoring/prometheus-operator-admission-webhook-8464df8497-pmzkf" event={"ID":"866b0545-e232-4c80-9fb6-549d313ac3fc","Type":"ContainerStarted","Data":"f7b194f18885cd869cc30349fb7d97bcdda7984dea9fb20d14a3e9436a39dc13"} Mar 13 12:41:50.651083 master-0 kubenswrapper[7518]: I0313 12:41:50.650993 7518 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_8e52bef89f4b50e4590a1719bcc5d7e5/etcd-readyz/0.log" Mar 13 12:41:50.716807 master-0 kubenswrapper[7518]: I0313 12:41:50.716635 7518 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-canary/ingress-canary-h8skx" podStartSLOduration=2.716613413 podStartE2EDuration="2.716613413s" podCreationTimestamp="2026-03-13 12:41:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 12:41:50.71451185 +0000 UTC m=+265.347581047" watchObservedRunningTime="2026-03-13 12:41:50.716613413 +0000 UTC m=+265.349682610" Mar 13 12:41:50.717252 master-0 kubenswrapper[7518]: I0313 12:41:50.716957 7518 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-network-diagnostics/network-check-source-7c67b67d47-5bb88" podStartSLOduration=319.716951371 podStartE2EDuration="5m19.716951371s" podCreationTimestamp="2026-03-13 12:36:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 12:41:50.630734367 +0000 UTC m=+265.263803554" watchObservedRunningTime="2026-03-13 12:41:50.716951371 +0000 UTC m=+265.350020558" Mar 13 12:41:50.824481 master-0 kubenswrapper[7518]: I0313 12:41:50.824346 7518 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_8e52bef89f4b50e4590a1719bcc5d7e5/etcd-rev/0.log" Mar 13 12:41:50.844965 master-0 kubenswrapper[7518]: I0313 12:41:50.844913 7518 dynamic_cafile_content.go:123] "Loaded a new CA 
Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Mar 13 12:41:51.014107 master-0 kubenswrapper[7518]: I0313 12:41:51.014000 7518 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_installer-1-master-0_00d2e134-62bb-4181-aa0a-22c9b9755b10/installer/0.log" Mar 13 12:41:51.216394 master-0 kubenswrapper[7518]: I0313 12:41:51.216340 7518 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver-operator_kube-apiserver-operator-68bd585b-qxmnf_ec5ec2e2-f7b3-43a1-87da-fbbe0ee5b118/kube-apiserver-operator/0.log" Mar 13 12:41:51.650462 master-0 kubenswrapper[7518]: I0313 12:41:51.649059 7518 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver-operator_kube-apiserver-operator-68bd585b-qxmnf_ec5ec2e2-f7b3-43a1-87da-fbbe0ee5b118/kube-apiserver-operator/1.log" Mar 13 12:41:51.657636 master-0 kubenswrapper[7518]: I0313 12:41:51.657586 7518 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_bootstrap-kube-apiserver-master-0_5f77c8e18b751d90bc0dfe2d4e304050/setup/0.log" Mar 13 12:41:51.818220 master-0 kubenswrapper[7518]: I0313 12:41:51.817739 7518 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_bootstrap-kube-apiserver-master-0_5f77c8e18b751d90bc0dfe2d4e304050/kube-apiserver/0.log" Mar 13 12:41:52.021584 master-0 kubenswrapper[7518]: I0313 12:41:52.021165 7518 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_bootstrap-kube-apiserver-master-0_5f77c8e18b751d90bc0dfe2d4e304050/kube-apiserver-insecure-readyz/0.log" Mar 13 12:41:52.219868 master-0 kubenswrapper[7518]: I0313 12:41:52.219827 7518 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_installer-1-master-0_88bf0bf8-c0ee-454e-8d8b-592a6e796cfc/installer/0.log" Mar 13 12:41:52.416382 master-0 kubenswrapper[7518]: I0313 12:41:52.416277 7518 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-kube-controller-manager_installer-1-master-0_3828446d-a3e3-412f-a0e7-7347b5de523a/installer/0.log" Mar 13 12:41:52.621446 master-0 kubenswrapper[7518]: I0313 12:41:52.621391 7518 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager-operator_kube-controller-manager-operator-86d7cdfdfb-br96g_77ef7e49-eb85-4f5e-94d3-a6a8619a6243/kube-controller-manager-operator/0.log" Mar 13 12:41:52.816594 master-0 kubenswrapper[7518]: I0313 12:41:52.816542 7518 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager-operator_kube-controller-manager-operator-86d7cdfdfb-br96g_77ef7e49-eb85-4f5e-94d3-a6a8619a6243/kube-controller-manager-operator/1.log" Mar 13 12:41:53.018772 master-0 kubenswrapper[7518]: I0313 12:41:53.017612 7518 log.go:25] "Finished parsing log file" path="/var/log/pods/kube-system_bootstrap-kube-controller-manager-master-0_f78c05e1499b533b83f091333d61f045/kube-controller-manager/2.log" Mar 13 12:41:53.313288 master-0 kubenswrapper[7518]: I0313 12:41:53.311498 7518 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-server-6crtf"] Mar 13 12:41:53.314910 master-0 kubenswrapper[7518]: I0313 12:41:53.314884 7518 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-6crtf" Mar 13 12:41:53.317521 master-0 kubenswrapper[7518]: I0313 12:41:53.317494 7518 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Mar 13 12:41:53.317851 master-0 kubenswrapper[7518]: I0313 12:41:53.317831 7518 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-tqljd" Mar 13 12:41:53.318091 master-0 kubenswrapper[7518]: I0313 12:41:53.317863 7518 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Mar 13 12:41:53.415748 master-0 kubenswrapper[7518]: I0313 12:41:53.415663 7518 log.go:25] "Finished parsing log file" path="/var/log/pods/kube-system_bootstrap-kube-controller-manager-master-0_f78c05e1499b533b83f091333d61f045/kube-controller-manager/3.log" Mar 13 12:41:53.485529 master-0 kubenswrapper[7518]: I0313 12:41:53.485455 7518 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mz927\" (UniqueName: \"kubernetes.io/projected/081a08d6-a4fd-412c-81c3-1364c36f0f15-kube-api-access-mz927\") pod \"machine-config-server-6crtf\" (UID: \"081a08d6-a4fd-412c-81c3-1364c36f0f15\") " pod="openshift-machine-config-operator/machine-config-server-6crtf" Mar 13 12:41:53.485824 master-0 kubenswrapper[7518]: I0313 12:41:53.485587 7518 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/081a08d6-a4fd-412c-81c3-1364c36f0f15-certs\") pod \"machine-config-server-6crtf\" (UID: \"081a08d6-a4fd-412c-81c3-1364c36f0f15\") " pod="openshift-machine-config-operator/machine-config-server-6crtf" Mar 13 12:41:53.485824 master-0 kubenswrapper[7518]: I0313 12:41:53.485634 7518 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/081a08d6-a4fd-412c-81c3-1364c36f0f15-node-bootstrap-token\") pod \"machine-config-server-6crtf\" (UID: \"081a08d6-a4fd-412c-81c3-1364c36f0f15\") " pod="openshift-machine-config-operator/machine-config-server-6crtf" Mar 13 12:41:53.586958 master-0 kubenswrapper[7518]: I0313 12:41:53.586888 7518 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/081a08d6-a4fd-412c-81c3-1364c36f0f15-node-bootstrap-token\") pod \"machine-config-server-6crtf\" (UID: \"081a08d6-a4fd-412c-81c3-1364c36f0f15\") " pod="openshift-machine-config-operator/machine-config-server-6crtf" Mar 13 12:41:53.587317 master-0 kubenswrapper[7518]: I0313 12:41:53.586999 7518 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mz927\" (UniqueName: \"kubernetes.io/projected/081a08d6-a4fd-412c-81c3-1364c36f0f15-kube-api-access-mz927\") pod \"machine-config-server-6crtf\" (UID: \"081a08d6-a4fd-412c-81c3-1364c36f0f15\") " pod="openshift-machine-config-operator/machine-config-server-6crtf" Mar 13 12:41:53.587317 master-0 kubenswrapper[7518]: I0313 12:41:53.587043 7518 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/081a08d6-a4fd-412c-81c3-1364c36f0f15-certs\") pod \"machine-config-server-6crtf\" (UID: \"081a08d6-a4fd-412c-81c3-1364c36f0f15\") " pod="openshift-machine-config-operator/machine-config-server-6crtf" Mar 13 12:41:53.593398 master-0 kubenswrapper[7518]: I0313 12:41:53.593338 7518 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/secret/081a08d6-a4fd-412c-81c3-1364c36f0f15-certs\") pod \"machine-config-server-6crtf\" (UID: \"081a08d6-a4fd-412c-81c3-1364c36f0f15\") " 
pod="openshift-machine-config-operator/machine-config-server-6crtf" Mar 13 12:41:53.593717 master-0 kubenswrapper[7518]: I0313 12:41:53.593610 7518 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/081a08d6-a4fd-412c-81c3-1364c36f0f15-node-bootstrap-token\") pod \"machine-config-server-6crtf\" (UID: \"081a08d6-a4fd-412c-81c3-1364c36f0f15\") " pod="openshift-machine-config-operator/machine-config-server-6crtf" Mar 13 12:41:53.607071 master-0 kubenswrapper[7518]: I0313 12:41:53.607000 7518 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mz927\" (UniqueName: \"kubernetes.io/projected/081a08d6-a4fd-412c-81c3-1364c36f0f15-kube-api-access-mz927\") pod \"machine-config-server-6crtf\" (UID: \"081a08d6-a4fd-412c-81c3-1364c36f0f15\") " pod="openshift-machine-config-operator/machine-config-server-6crtf" Mar 13 12:41:53.607315 master-0 kubenswrapper[7518]: I0313 12:41:53.607229 7518 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-79f8cd6fdd-wtf6j" event={"ID":"45925a5e-41ae-4c19-b586-3151c7677612","Type":"ContainerStarted","Data":"825d71b79346e6c336f0a44e80a86fbf2296a449b4aa734881eff9c8477a662b"} Mar 13 12:41:53.608944 master-0 kubenswrapper[7518]: I0313 12:41:53.608901 7518 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-operator-admission-webhook-8464df8497-pmzkf" event={"ID":"866b0545-e232-4c80-9fb6-549d313ac3fc","Type":"ContainerStarted","Data":"b4b3fff6dfcc52b6ac148952fb8ed83ecbfbec2ffb070ee2baadd95fed7e0191"} Mar 13 12:41:53.609844 master-0 kubenswrapper[7518]: I0313 12:41:53.609813 7518 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-monitoring/prometheus-operator-admission-webhook-8464df8497-pmzkf" Mar 13 12:41:53.618823 master-0 kubenswrapper[7518]: I0313 12:41:53.618749 7518 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-monitoring/prometheus-operator-admission-webhook-8464df8497-pmzkf" Mar 13 12:41:53.619749 master-0 kubenswrapper[7518]: I0313 12:41:53.619703 7518 log.go:25] "Finished parsing log file" path="/var/log/pods/kube-system_bootstrap-kube-controller-manager-master-0_f78c05e1499b533b83f091333d61f045/cluster-policy-controller/0.log" Mar 13 12:41:53.639250 master-0 kubenswrapper[7518]: I0313 12:41:53.639167 7518 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-6crtf" Mar 13 12:41:53.705340 master-0 kubenswrapper[7518]: I0313 12:41:53.703803 7518 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress/router-default-79f8cd6fdd-wtf6j" podStartSLOduration=215.125601777 podStartE2EDuration="3m38.703783357s" podCreationTimestamp="2026-03-13 12:38:15 +0000 UTC" firstStartedPulling="2026-03-13 12:41:49.569715111 +0000 UTC m=+264.202784308" lastFinishedPulling="2026-03-13 12:41:53.147896701 +0000 UTC m=+267.780965888" observedRunningTime="2026-03-13 12:41:53.701869678 +0000 UTC m=+268.334938885" watchObservedRunningTime="2026-03-13 12:41:53.703783357 +0000 UTC m=+268.336852544" Mar 13 12:41:53.728206 master-0 kubenswrapper[7518]: I0313 12:41:53.728116 7518 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/prometheus-operator-admission-webhook-8464df8497-pmzkf" podStartSLOduration=65.447888214 podStartE2EDuration="1m8.728098213s" podCreationTimestamp="2026-03-13 12:40:45 +0000 UTC" firstStartedPulling="2026-03-13 12:41:49.86603342 +0000 UTC m=+264.499102607" lastFinishedPulling="2026-03-13 12:41:53.146243419 +0000 UTC m=+267.779312606" observedRunningTime="2026-03-13 12:41:53.724365548 +0000 UTC m=+268.357434735" watchObservedRunningTime="2026-03-13 12:41:53.728098213 +0000 UTC m=+268.361167390" Mar 13 12:41:53.819301 master-0 kubenswrapper[7518]: I0313 12:41:53.819259 7518 log.go:25] "Finished parsing log 
file" path="/var/log/pods/kube-system_bootstrap-kube-scheduler-master-0_a1a56802af72ce1aac6b5077f1695ac0/kube-scheduler/0.log" Mar 13 12:41:54.017733 master-0 kubenswrapper[7518]: I0313 12:41:54.017677 7518 log.go:25] "Finished parsing log file" path="/var/log/pods/kube-system_bootstrap-kube-scheduler-master-0_a1a56802af72ce1aac6b5077f1695ac0/kube-scheduler/1.log" Mar 13 12:41:54.216030 master-0 kubenswrapper[7518]: I0313 12:41:54.215988 7518 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler_installer-3-master-0_bfabb495-1707-4c3d-b00e-2f3b2976fb92/installer/0.log" Mar 13 12:41:54.725968 master-0 kubenswrapper[7518]: I0313 12:41:54.725904 7518 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-79f8cd6fdd-wtf6j" Mar 13 12:41:54.731914 master-0 kubenswrapper[7518]: I0313 12:41:54.731852 7518 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-wtf6j container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 12:41:54.731914 master-0 kubenswrapper[7518]: [-]has-synced failed: reason withheld Mar 13 12:41:54.731914 master-0 kubenswrapper[7518]: [+]process-running ok Mar 13 12:41:54.731914 master-0 kubenswrapper[7518]: healthz check failed Mar 13 12:41:54.732372 master-0 kubenswrapper[7518]: I0313 12:41:54.731927 7518 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-wtf6j" podUID="45925a5e-41ae-4c19-b586-3151c7677612" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 12:41:54.741971 master-0 kubenswrapper[7518]: I0313 12:41:54.741902 7518 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-6crtf" 
event={"ID":"081a08d6-a4fd-412c-81c3-1364c36f0f15","Type":"ContainerStarted","Data":"6d7203e8d70e365610b72a46e10cfbb876ae6cf1abaf08c99aa1be23946633d8"} Mar 13 12:41:54.741971 master-0 kubenswrapper[7518]: I0313 12:41:54.741980 7518 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-6crtf" event={"ID":"081a08d6-a4fd-412c-81c3-1364c36f0f15","Type":"ContainerStarted","Data":"3f6f1ed4b9428b71641a87701412cc5bbb34559ce861fd12caebd021e4bfc58b"} Mar 13 12:41:54.783115 master-0 kubenswrapper[7518]: I0313 12:41:54.783053 7518 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler-operator_openshift-kube-scheduler-operator-5c74bfc494-m8mqj_0da84bb7-e936-49a0-96b5-614a1305d6a4/kube-scheduler-operator-container/0.log" Mar 13 12:41:54.843317 master-0 kubenswrapper[7518]: I0313 12:41:54.843237 7518 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-server-6crtf" podStartSLOduration=1.8432172100000002 podStartE2EDuration="1.84321721s" podCreationTimestamp="2026-03-13 12:41:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 12:41:54.84201605 +0000 UTC m=+269.475085237" watchObservedRunningTime="2026-03-13 12:41:54.84321721 +0000 UTC m=+269.476286397" Mar 13 12:41:54.855026 master-0 kubenswrapper[7518]: I0313 12:41:54.854971 7518 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler-operator_openshift-kube-scheduler-operator-5c74bfc494-m8mqj_0da84bb7-e936-49a0-96b5-614a1305d6a4/kube-scheduler-operator-container/1.log" Mar 13 12:41:54.858169 master-0 kubenswrapper[7518]: I0313 12:41:54.858079 7518 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/prometheus-operator-5ff8674d55-bvmsj"] Mar 13 12:41:54.858948 master-0 kubenswrapper[7518]: I0313 12:41:54.858927 7518 util.go:30] 
"No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/prometheus-operator-5ff8674d55-bvmsj" Mar 13 12:41:54.860913 master-0 kubenswrapper[7518]: I0313 12:41:54.860870 7518 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-dockercfg-gsftw" Mar 13 12:41:54.861266 master-0 kubenswrapper[7518]: I0313 12:41:54.861243 7518 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-kube-rbac-proxy-config" Mar 13 12:41:54.861621 master-0 kubenswrapper[7518]: I0313 12:41:54.861599 7518 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-tls" Mar 13 12:41:54.861851 master-0 kubenswrapper[7518]: I0313 12:41:54.861820 7518 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"metrics-client-ca" Mar 13 12:41:54.900650 master-0 kubenswrapper[7518]: I0313 12:41:54.900604 7518 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/prometheus-operator-5ff8674d55-bvmsj"] Mar 13 12:41:54.905182 master-0 kubenswrapper[7518]: I0313 12:41:54.905128 7518 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-apiserver-operator_openshift-apiserver-operator-799b6db4d7-xchrj_089cfabc-9d3d-4260-bb16-8b5eaf73b3fa/openshift-apiserver-operator/0.log" Mar 13 12:41:55.005472 master-0 kubenswrapper[7518]: I0313 12:41:55.005017 7518 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-operator-tls\" (UniqueName: \"kubernetes.io/secret/be89c006-0c82-4728-9c79-210303e623dc-prometheus-operator-tls\") pod \"prometheus-operator-5ff8674d55-bvmsj\" (UID: \"be89c006-0c82-4728-9c79-210303e623dc\") " pod="openshift-monitoring/prometheus-operator-5ff8674d55-bvmsj" Mar 13 12:41:55.005472 master-0 kubenswrapper[7518]: I0313 12:41:55.005110 7518 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-operator-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/be89c006-0c82-4728-9c79-210303e623dc-prometheus-operator-kube-rbac-proxy-config\") pod \"prometheus-operator-5ff8674d55-bvmsj\" (UID: \"be89c006-0c82-4728-9c79-210303e623dc\") " pod="openshift-monitoring/prometheus-operator-5ff8674d55-bvmsj" Mar 13 12:41:55.005472 master-0 kubenswrapper[7518]: I0313 12:41:55.005287 7518 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dd4m8\" (UniqueName: \"kubernetes.io/projected/be89c006-0c82-4728-9c79-210303e623dc-kube-api-access-dd4m8\") pod \"prometheus-operator-5ff8674d55-bvmsj\" (UID: \"be89c006-0c82-4728-9c79-210303e623dc\") " pod="openshift-monitoring/prometheus-operator-5ff8674d55-bvmsj" Mar 13 12:41:55.005865 master-0 kubenswrapper[7518]: I0313 12:41:55.005488 7518 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/be89c006-0c82-4728-9c79-210303e623dc-metrics-client-ca\") pod \"prometheus-operator-5ff8674d55-bvmsj\" (UID: \"be89c006-0c82-4728-9c79-210303e623dc\") " pod="openshift-monitoring/prometheus-operator-5ff8674d55-bvmsj" Mar 13 12:41:55.015022 master-0 kubenswrapper[7518]: I0313 12:41:55.014976 7518 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-apiserver-operator_openshift-apiserver-operator-799b6db4d7-xchrj_089cfabc-9d3d-4260-bb16-8b5eaf73b3fa/openshift-apiserver-operator/1.log" Mar 13 12:41:55.107039 master-0 kubenswrapper[7518]: I0313 12:41:55.106957 7518 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/be89c006-0c82-4728-9c79-210303e623dc-metrics-client-ca\") pod \"prometheus-operator-5ff8674d55-bvmsj\" (UID: \"be89c006-0c82-4728-9c79-210303e623dc\") " 
pod="openshift-monitoring/prometheus-operator-5ff8674d55-bvmsj" Mar 13 12:41:55.107335 master-0 kubenswrapper[7518]: I0313 12:41:55.107210 7518 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-operator-tls\" (UniqueName: \"kubernetes.io/secret/be89c006-0c82-4728-9c79-210303e623dc-prometheus-operator-tls\") pod \"prometheus-operator-5ff8674d55-bvmsj\" (UID: \"be89c006-0c82-4728-9c79-210303e623dc\") " pod="openshift-monitoring/prometheus-operator-5ff8674d55-bvmsj" Mar 13 12:41:55.107335 master-0 kubenswrapper[7518]: I0313 12:41:55.107289 7518 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-operator-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/be89c006-0c82-4728-9c79-210303e623dc-prometheus-operator-kube-rbac-proxy-config\") pod \"prometheus-operator-5ff8674d55-bvmsj\" (UID: \"be89c006-0c82-4728-9c79-210303e623dc\") " pod="openshift-monitoring/prometheus-operator-5ff8674d55-bvmsj" Mar 13 12:41:55.107335 master-0 kubenswrapper[7518]: I0313 12:41:55.107328 7518 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dd4m8\" (UniqueName: \"kubernetes.io/projected/be89c006-0c82-4728-9c79-210303e623dc-kube-api-access-dd4m8\") pod \"prometheus-operator-5ff8674d55-bvmsj\" (UID: \"be89c006-0c82-4728-9c79-210303e623dc\") " pod="openshift-monitoring/prometheus-operator-5ff8674d55-bvmsj" Mar 13 12:41:55.107984 master-0 kubenswrapper[7518]: I0313 12:41:55.107949 7518 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/be89c006-0c82-4728-9c79-210303e623dc-metrics-client-ca\") pod \"prometheus-operator-5ff8674d55-bvmsj\" (UID: \"be89c006-0c82-4728-9c79-210303e623dc\") " pod="openshift-monitoring/prometheus-operator-5ff8674d55-bvmsj" Mar 13 12:41:55.110673 master-0 kubenswrapper[7518]: I0313 12:41:55.110620 7518 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"prometheus-operator-tls\" (UniqueName: \"kubernetes.io/secret/be89c006-0c82-4728-9c79-210303e623dc-prometheus-operator-tls\") pod \"prometheus-operator-5ff8674d55-bvmsj\" (UID: \"be89c006-0c82-4728-9c79-210303e623dc\") " pod="openshift-monitoring/prometheus-operator-5ff8674d55-bvmsj" Mar 13 12:41:55.110845 master-0 kubenswrapper[7518]: I0313 12:41:55.110818 7518 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-operator-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/be89c006-0c82-4728-9c79-210303e623dc-prometheus-operator-kube-rbac-proxy-config\") pod \"prometheus-operator-5ff8674d55-bvmsj\" (UID: \"be89c006-0c82-4728-9c79-210303e623dc\") " pod="openshift-monitoring/prometheus-operator-5ff8674d55-bvmsj" Mar 13 12:41:55.125951 master-0 kubenswrapper[7518]: I0313 12:41:55.125894 7518 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dd4m8\" (UniqueName: \"kubernetes.io/projected/be89c006-0c82-4728-9c79-210303e623dc-kube-api-access-dd4m8\") pod \"prometheus-operator-5ff8674d55-bvmsj\" (UID: \"be89c006-0c82-4728-9c79-210303e623dc\") " pod="openshift-monitoring/prometheus-operator-5ff8674d55-bvmsj" Mar 13 12:41:55.178406 master-0 kubenswrapper[7518]: I0313 12:41:55.178313 7518 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/prometheus-operator-5ff8674d55-bvmsj" Mar 13 12:41:55.212352 master-0 kubenswrapper[7518]: I0313 12:41:55.211610 7518 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-apiserver_apiserver-844bc54c88-vznst_2f48243b-6b05-4efa-8420-58a4419622bf/fix-audit-permissions/0.log" Mar 13 12:41:55.421162 master-0 kubenswrapper[7518]: I0313 12:41:55.420583 7518 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-apiserver_apiserver-844bc54c88-vznst_2f48243b-6b05-4efa-8420-58a4419622bf/openshift-apiserver/0.log" Mar 13 12:41:55.541208 master-0 kubenswrapper[7518]: I0313 12:41:55.541160 7518 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-wtf6j container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 12:41:55.541208 master-0 kubenswrapper[7518]: [-]has-synced failed: reason withheld Mar 13 12:41:55.541208 master-0 kubenswrapper[7518]: [+]process-running ok Mar 13 12:41:55.541208 master-0 kubenswrapper[7518]: healthz check failed Mar 13 12:41:55.541528 master-0 kubenswrapper[7518]: I0313 12:41:55.541223 7518 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-wtf6j" podUID="45925a5e-41ae-4c19-b586-3151c7677612" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 12:41:55.615637 master-0 kubenswrapper[7518]: I0313 12:41:55.615514 7518 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-apiserver_apiserver-844bc54c88-vznst_2f48243b-6b05-4efa-8420-58a4419622bf/openshift-apiserver-check-endpoints/0.log" Mar 13 12:41:55.640711 master-0 kubenswrapper[7518]: I0313 12:41:55.640648 7518 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/prometheus-operator-5ff8674d55-bvmsj"] Mar 13 12:41:55.652208 master-0 kubenswrapper[7518]: 
W0313 12:41:55.651313 7518 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbe89c006_0c82_4728_9c79_210303e623dc.slice/crio-327b75ff7d2f2b23c89b69896efc61025e5eb89aca44a3ec0a496ee1ba0617ea WatchSource:0}: Error finding container 327b75ff7d2f2b23c89b69896efc61025e5eb89aca44a3ec0a496ee1ba0617ea: Status 404 returned error can't find the container with id 327b75ff7d2f2b23c89b69896efc61025e5eb89aca44a3ec0a496ee1ba0617ea Mar 13 12:41:55.746801 master-0 kubenswrapper[7518]: I0313 12:41:55.746744 7518 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-operator-5ff8674d55-bvmsj" event={"ID":"be89c006-0c82-4728-9c79-210303e623dc","Type":"ContainerStarted","Data":"327b75ff7d2f2b23c89b69896efc61025e5eb89aca44a3ec0a496ee1ba0617ea"} Mar 13 12:41:55.814072 master-0 kubenswrapper[7518]: I0313 12:41:55.814029 7518 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd-operator_etcd-operator-5884b9cd56-hjzms_15b592d6-3c48-45d4-9172-d28632ae8995/etcd-operator/0.log" Mar 13 12:41:56.014788 master-0 kubenswrapper[7518]: I0313 12:41:56.014614 7518 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd-operator_etcd-operator-5884b9cd56-hjzms_15b592d6-3c48-45d4-9172-d28632ae8995/etcd-operator/1.log" Mar 13 12:41:56.213950 master-0 kubenswrapper[7518]: I0313 12:41:56.213901 7518 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operator-lifecycle-manager_catalog-operator-7d9c49f57b-tlnkd_10944f9c-8ce9-44e6-9c36-a0ea19d8cae3/catalog-operator/0.log" Mar 13 12:41:56.416815 master-0 kubenswrapper[7518]: I0313 12:41:56.416777 7518 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operator-lifecycle-manager_catalog-operator-7d9c49f57b-tlnkd_10944f9c-8ce9-44e6-9c36-a0ea19d8cae3/catalog-operator/1.log" Mar 13 12:41:56.541502 master-0 kubenswrapper[7518]: I0313 12:41:56.541410 7518 patch_prober.go:28] 
interesting pod/router-default-79f8cd6fdd-wtf6j container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 12:41:56.541502 master-0 kubenswrapper[7518]: [-]has-synced failed: reason withheld Mar 13 12:41:56.541502 master-0 kubenswrapper[7518]: [+]process-running ok Mar 13 12:41:56.541502 master-0 kubenswrapper[7518]: healthz check failed Mar 13 12:41:56.541502 master-0 kubenswrapper[7518]: I0313 12:41:56.541477 7518 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-wtf6j" podUID="45925a5e-41ae-4c19-b586-3151c7677612" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 12:41:56.613646 master-0 kubenswrapper[7518]: I0313 12:41:56.613608 7518 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operator-lifecycle-manager_olm-operator-d64cfc9db-rfqb9_d5a19b80-d488-46d3-a4a8-0b80361077e1/olm-operator/0.log" Mar 13 12:41:56.816985 master-0 kubenswrapper[7518]: I0313 12:41:56.816928 7518 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operator-lifecycle-manager_olm-operator-d64cfc9db-rfqb9_d5a19b80-d488-46d3-a4a8-0b80361077e1/olm-operator/1.log" Mar 13 12:41:57.013468 master-0 kubenswrapper[7518]: I0313 12:41:57.013426 7518 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operator-lifecycle-manager_package-server-manager-854648ff6d-669qk_3d653e1a-5903-4a02-9357-df145f028c0d/kube-rbac-proxy/0.log" Mar 13 12:41:57.219491 master-0 kubenswrapper[7518]: I0313 12:41:57.219298 7518 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operator-lifecycle-manager_package-server-manager-854648ff6d-669qk_3d653e1a-5903-4a02-9357-df145f028c0d/package-server-manager/0.log" Mar 13 12:41:57.414070 master-0 kubenswrapper[7518]: I0313 12:41:57.414000 7518 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-operator-lifecycle-manager_packageserver-5c5f6764b5-96ktp_f6992fed-b472-4a2d-a376-c5d72aa846d4/packageserver/0.log" Mar 13 12:41:57.541556 master-0 kubenswrapper[7518]: I0313 12:41:57.541486 7518 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-wtf6j container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 12:41:57.541556 master-0 kubenswrapper[7518]: [-]has-synced failed: reason withheld Mar 13 12:41:57.541556 master-0 kubenswrapper[7518]: [+]process-running ok Mar 13 12:41:57.541556 master-0 kubenswrapper[7518]: healthz check failed Mar 13 12:41:57.541911 master-0 kubenswrapper[7518]: I0313 12:41:57.541574 7518 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-wtf6j" podUID="45925a5e-41ae-4c19-b586-3151c7677612" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 12:41:57.762783 master-0 kubenswrapper[7518]: I0313 12:41:57.762593 7518 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-operator-5ff8674d55-bvmsj" event={"ID":"be89c006-0c82-4728-9c79-210303e623dc","Type":"ContainerStarted","Data":"65193e181a97b8629912eea735de8143335689a5d7611a657ef2c73620ad420a"} Mar 13 12:41:57.762783 master-0 kubenswrapper[7518]: I0313 12:41:57.762664 7518 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-operator-5ff8674d55-bvmsj" event={"ID":"be89c006-0c82-4728-9c79-210303e623dc","Type":"ContainerStarted","Data":"5b388825774fe94baa5ef2d5d44f3f7c4a8a4b87f9ffcf4a8d3f6166adc6eac0"} Mar 13 12:41:57.795811 master-0 kubenswrapper[7518]: I0313 12:41:57.794550 7518 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/prometheus-operator-5ff8674d55-bvmsj" podStartSLOduration=2.399677681 
podStartE2EDuration="3.794530946s" podCreationTimestamp="2026-03-13 12:41:54 +0000 UTC" firstStartedPulling="2026-03-13 12:41:55.654039697 +0000 UTC m=+270.287108884" lastFinishedPulling="2026-03-13 12:41:57.048892962 +0000 UTC m=+271.681962149" observedRunningTime="2026-03-13 12:41:57.790197526 +0000 UTC m=+272.423266723" watchObservedRunningTime="2026-03-13 12:41:57.794530946 +0000 UTC m=+272.427600143" Mar 13 12:41:58.038657 master-0 kubenswrapper[7518]: I0313 12:41:58.038439 7518 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-9x9vk" Mar 13 12:41:58.039519 master-0 kubenswrapper[7518]: I0313 12:41:58.039483 7518 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-p9csk" Mar 13 12:41:58.072184 master-0 kubenswrapper[7518]: I0313 12:41:58.070837 7518 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-zh888" Mar 13 12:41:58.091122 master-0 kubenswrapper[7518]: I0313 12:41:58.090546 7518 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-9x9vk" Mar 13 12:41:58.103916 master-0 kubenswrapper[7518]: I0313 12:41:58.103855 7518 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-p9csk" Mar 13 12:41:58.123411 master-0 kubenswrapper[7518]: I0313 12:41:58.123378 7518 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-zh888" Mar 13 12:41:58.541896 master-0 kubenswrapper[7518]: I0313 12:41:58.541853 7518 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-wtf6j container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 12:41:58.541896 master-0 
kubenswrapper[7518]: [-]has-synced failed: reason withheld Mar 13 12:41:58.541896 master-0 kubenswrapper[7518]: [+]process-running ok Mar 13 12:41:58.541896 master-0 kubenswrapper[7518]: healthz check failed Mar 13 12:41:58.542509 master-0 kubenswrapper[7518]: I0313 12:41:58.542467 7518 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-wtf6j" podUID="45925a5e-41ae-4c19-b586-3151c7677612" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 12:41:59.238919 master-0 kubenswrapper[7518]: I0313 12:41:59.238888 7518 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-5czx2" Mar 13 12:41:59.287702 master-0 kubenswrapper[7518]: I0313 12:41:59.287643 7518 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-5czx2" Mar 13 12:41:59.541155 master-0 kubenswrapper[7518]: I0313 12:41:59.539197 7518 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-79f8cd6fdd-wtf6j" Mar 13 12:41:59.549982 master-0 kubenswrapper[7518]: I0313 12:41:59.549331 7518 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-wtf6j container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 12:41:59.549982 master-0 kubenswrapper[7518]: [-]has-synced failed: reason withheld Mar 13 12:41:59.549982 master-0 kubenswrapper[7518]: [+]process-running ok Mar 13 12:41:59.549982 master-0 kubenswrapper[7518]: healthz check failed Mar 13 12:41:59.549982 master-0 kubenswrapper[7518]: I0313 12:41:59.549430 7518 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-wtf6j" podUID="45925a5e-41ae-4c19-b586-3151c7677612" containerName="router" probeResult="failure" output="HTTP probe 
failed with statuscode: 500" Mar 13 12:41:59.868261 master-0 kubenswrapper[7518]: I0313 12:41:59.868113 7518 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/openshift-state-metrics-74cc79fd76-clrbz"] Mar 13 12:41:59.869359 master-0 kubenswrapper[7518]: I0313 12:41:59.869336 7518 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/openshift-state-metrics-74cc79fd76-clrbz" Mar 13 12:41:59.872797 master-0 kubenswrapper[7518]: I0313 12:41:59.872762 7518 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"openshift-state-metrics-dockercfg-g2ksc" Mar 13 12:41:59.873016 master-0 kubenswrapper[7518]: I0313 12:41:59.872981 7518 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"openshift-state-metrics-tls" Mar 13 12:41:59.873091 master-0 kubenswrapper[7518]: I0313 12:41:59.873069 7518 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"openshift-state-metrics-kube-rbac-proxy-config" Mar 13 12:41:59.880899 master-0 kubenswrapper[7518]: I0313 12:41:59.880801 7518 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/node-exporter-v4hdh"] Mar 13 12:41:59.881949 master-0 kubenswrapper[7518]: I0313 12:41:59.881919 7518 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/node-exporter-v4hdh" Mar 13 12:41:59.885558 master-0 kubenswrapper[7518]: I0313 12:41:59.885520 7518 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"node-exporter-tls" Mar 13 12:41:59.885558 master-0 kubenswrapper[7518]: I0313 12:41:59.885545 7518 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"node-exporter-dockercfg-7ntw6" Mar 13 12:41:59.885751 master-0 kubenswrapper[7518]: I0313 12:41:59.885559 7518 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"node-exporter-kube-rbac-proxy-config" Mar 13 12:41:59.910644 master-0 kubenswrapper[7518]: I0313 12:41:59.910597 7518 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/openshift-state-metrics-74cc79fd76-clrbz"] Mar 13 12:41:59.922829 master-0 kubenswrapper[7518]: I0313 12:41:59.922771 7518 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/kube-state-metrics-68b88f8cb5-blvhm"] Mar 13 12:41:59.924794 master-0 kubenswrapper[7518]: I0313 12:41:59.924744 7518 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/kube-state-metrics-68b88f8cb5-blvhm" Mar 13 12:41:59.960699 master-0 kubenswrapper[7518]: I0313 12:41:59.960649 7518 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"kube-state-metrics-tls" Mar 13 12:41:59.960699 master-0 kubenswrapper[7518]: I0313 12:41:59.960663 7518 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"kube-state-metrics-custom-resource-state-configmap" Mar 13 12:41:59.963473 master-0 kubenswrapper[7518]: I0313 12:41:59.963439 7518 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"kube-state-metrics-dockercfg-zhlhv" Mar 13 12:41:59.963582 master-0 kubenswrapper[7518]: I0313 12:41:59.963514 7518 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"kube-state-metrics-kube-rbac-proxy-config" Mar 13 12:41:59.977775 master-0 kubenswrapper[7518]: I0313 12:41:59.977725 7518 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/kube-state-metrics-68b88f8cb5-blvhm"] Mar 13 12:42:00.063780 master-0 kubenswrapper[7518]: I0313 12:42:00.063734 7518 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openshift-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/1081e565-b7d8-4b6e-9d41-5db36cfe094c-openshift-state-metrics-tls\") pod \"openshift-state-metrics-74cc79fd76-clrbz\" (UID: \"1081e565-b7d8-4b6e-9d41-5db36cfe094c\") " pod="openshift-monitoring/openshift-state-metrics-74cc79fd76-clrbz" Mar 13 12:42:00.063971 master-0 kubenswrapper[7518]: I0313 12:42:00.063790 7518 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-exporter-textfile\" (UniqueName: \"kubernetes.io/empty-dir/842251bd-238a-44ba-99fc-a356503f5d16-node-exporter-textfile\") pod \"node-exporter-v4hdh\" (UID: \"842251bd-238a-44ba-99fc-a356503f5d16\") " 
pod="openshift-monitoring/node-exporter-v4hdh" Mar 13 12:42:00.063971 master-0 kubenswrapper[7518]: I0313 12:42:00.063821 7518 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/1081e565-b7d8-4b6e-9d41-5db36cfe094c-metrics-client-ca\") pod \"openshift-state-metrics-74cc79fd76-clrbz\" (UID: \"1081e565-b7d8-4b6e-9d41-5db36cfe094c\") " pod="openshift-monitoring/openshift-state-metrics-74cc79fd76-clrbz" Mar 13 12:42:00.063971 master-0 kubenswrapper[7518]: I0313 12:42:00.063844 7518 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/5e4f10ca-6466-4ac0-aeb7-325e40473e04-kube-state-metrics-tls\") pod \"kube-state-metrics-68b88f8cb5-blvhm\" (UID: \"5e4f10ca-6466-4ac0-aeb7-325e40473e04\") " pod="openshift-monitoring/kube-state-metrics-68b88f8cb5-blvhm" Mar 13 12:42:00.063971 master-0 kubenswrapper[7518]: I0313 12:42:00.063869 7518 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4xbrx\" (UniqueName: \"kubernetes.io/projected/5e4f10ca-6466-4ac0-aeb7-325e40473e04-kube-api-access-4xbrx\") pod \"kube-state-metrics-68b88f8cb5-blvhm\" (UID: \"5e4f10ca-6466-4ac0-aeb7-325e40473e04\") " pod="openshift-monitoring/kube-state-metrics-68b88f8cb5-blvhm" Mar 13 12:42:00.063971 master-0 kubenswrapper[7518]: I0313 12:42:00.063892 7518 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9v2jm\" (UniqueName: \"kubernetes.io/projected/842251bd-238a-44ba-99fc-a356503f5d16-kube-api-access-9v2jm\") pod \"node-exporter-v4hdh\" (UID: \"842251bd-238a-44ba-99fc-a356503f5d16\") " pod="openshift-monitoring/node-exporter-v4hdh" Mar 13 12:42:00.063971 master-0 kubenswrapper[7518]: I0313 12:42:00.063908 7518 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"openshift-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/1081e565-b7d8-4b6e-9d41-5db36cfe094c-openshift-state-metrics-kube-rbac-proxy-config\") pod \"openshift-state-metrics-74cc79fd76-clrbz\" (UID: \"1081e565-b7d8-4b6e-9d41-5db36cfe094c\") " pod="openshift-monitoring/openshift-state-metrics-74cc79fd76-clrbz" Mar 13 12:42:00.063971 master-0 kubenswrapper[7518]: I0313 12:42:00.063931 7518 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/842251bd-238a-44ba-99fc-a356503f5d16-metrics-client-ca\") pod \"node-exporter-v4hdh\" (UID: \"842251bd-238a-44ba-99fc-a356503f5d16\") " pod="openshift-monitoring/node-exporter-v4hdh" Mar 13 12:42:00.063971 master-0 kubenswrapper[7518]: I0313 12:42:00.063952 7518 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b726x\" (UniqueName: \"kubernetes.io/projected/1081e565-b7d8-4b6e-9d41-5db36cfe094c-kube-api-access-b726x\") pod \"openshift-state-metrics-74cc79fd76-clrbz\" (UID: \"1081e565-b7d8-4b6e-9d41-5db36cfe094c\") " pod="openshift-monitoring/openshift-state-metrics-74cc79fd76-clrbz" Mar 13 12:42:00.063971 master-0 kubenswrapper[7518]: I0313 12:42:00.063967 7518 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/5e4f10ca-6466-4ac0-aeb7-325e40473e04-kube-state-metrics-kube-rbac-proxy-config\") pod \"kube-state-metrics-68b88f8cb5-blvhm\" (UID: \"5e4f10ca-6466-4ac0-aeb7-325e40473e04\") " pod="openshift-monitoring/kube-state-metrics-68b88f8cb5-blvhm" Mar 13 12:42:00.064400 master-0 kubenswrapper[7518]: I0313 12:42:00.063986 7518 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"node-exporter-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/842251bd-238a-44ba-99fc-a356503f5d16-node-exporter-kube-rbac-proxy-config\") pod \"node-exporter-v4hdh\" (UID: \"842251bd-238a-44ba-99fc-a356503f5d16\") " pod="openshift-monitoring/node-exporter-v4hdh" Mar 13 12:42:00.064400 master-0 kubenswrapper[7518]: I0313 12:42:00.064004 7518 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-exporter-tls\" (UniqueName: \"kubernetes.io/secret/842251bd-238a-44ba-99fc-a356503f5d16-node-exporter-tls\") pod \"node-exporter-v4hdh\" (UID: \"842251bd-238a-44ba-99fc-a356503f5d16\") " pod="openshift-monitoring/node-exporter-v4hdh" Mar 13 12:42:00.064400 master-0 kubenswrapper[7518]: I0313 12:42:00.064028 7518 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/5e4f10ca-6466-4ac0-aeb7-325e40473e04-metrics-client-ca\") pod \"kube-state-metrics-68b88f8cb5-blvhm\" (UID: \"5e4f10ca-6466-4ac0-aeb7-325e40473e04\") " pod="openshift-monitoring/kube-state-metrics-68b88f8cb5-blvhm" Mar 13 12:42:00.064400 master-0 kubenswrapper[7518]: I0313 12:42:00.064044 7518 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-custom-resource-state-configmap\" (UniqueName: \"kubernetes.io/configmap/5e4f10ca-6466-4ac0-aeb7-325e40473e04-kube-state-metrics-custom-resource-state-configmap\") pod \"kube-state-metrics-68b88f8cb5-blvhm\" (UID: \"5e4f10ca-6466-4ac0-aeb7-325e40473e04\") " pod="openshift-monitoring/kube-state-metrics-68b88f8cb5-blvhm" Mar 13 12:42:00.064400 master-0 kubenswrapper[7518]: I0313 12:42:00.064081 7518 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-exporter-wtmp\" (UniqueName: \"kubernetes.io/host-path/842251bd-238a-44ba-99fc-a356503f5d16-node-exporter-wtmp\") pod \"node-exporter-v4hdh\" (UID: 
\"842251bd-238a-44ba-99fc-a356503f5d16\") " pod="openshift-monitoring/node-exporter-v4hdh" Mar 13 12:42:00.064400 master-0 kubenswrapper[7518]: I0313 12:42:00.064106 7518 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"root\" (UniqueName: \"kubernetes.io/host-path/842251bd-238a-44ba-99fc-a356503f5d16-root\") pod \"node-exporter-v4hdh\" (UID: \"842251bd-238a-44ba-99fc-a356503f5d16\") " pod="openshift-monitoring/node-exporter-v4hdh" Mar 13 12:42:00.064400 master-0 kubenswrapper[7518]: I0313 12:42:00.064122 7518 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"volume-directive-shadow\" (UniqueName: \"kubernetes.io/empty-dir/5e4f10ca-6466-4ac0-aeb7-325e40473e04-volume-directive-shadow\") pod \"kube-state-metrics-68b88f8cb5-blvhm\" (UID: \"5e4f10ca-6466-4ac0-aeb7-325e40473e04\") " pod="openshift-monitoring/kube-state-metrics-68b88f8cb5-blvhm" Mar 13 12:42:00.064400 master-0 kubenswrapper[7518]: I0313 12:42:00.064191 7518 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/842251bd-238a-44ba-99fc-a356503f5d16-sys\") pod \"node-exporter-v4hdh\" (UID: \"842251bd-238a-44ba-99fc-a356503f5d16\") " pod="openshift-monitoring/node-exporter-v4hdh" Mar 13 12:42:00.166045 master-0 kubenswrapper[7518]: I0313 12:42:00.165886 7518 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/842251bd-238a-44ba-99fc-a356503f5d16-sys\") pod \"node-exporter-v4hdh\" (UID: \"842251bd-238a-44ba-99fc-a356503f5d16\") " pod="openshift-monitoring/node-exporter-v4hdh" Mar 13 12:42:00.166270 master-0 kubenswrapper[7518]: I0313 12:42:00.166071 7518 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/842251bd-238a-44ba-99fc-a356503f5d16-sys\") pod \"node-exporter-v4hdh\" (UID: 
\"842251bd-238a-44ba-99fc-a356503f5d16\") " pod="openshift-monitoring/node-exporter-v4hdh" Mar 13 12:42:00.166270 master-0 kubenswrapper[7518]: I0313 12:42:00.166233 7518 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openshift-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/1081e565-b7d8-4b6e-9d41-5db36cfe094c-openshift-state-metrics-tls\") pod \"openshift-state-metrics-74cc79fd76-clrbz\" (UID: \"1081e565-b7d8-4b6e-9d41-5db36cfe094c\") " pod="openshift-monitoring/openshift-state-metrics-74cc79fd76-clrbz" Mar 13 12:42:00.166641 master-0 kubenswrapper[7518]: I0313 12:42:00.166599 7518 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-exporter-textfile\" (UniqueName: \"kubernetes.io/empty-dir/842251bd-238a-44ba-99fc-a356503f5d16-node-exporter-textfile\") pod \"node-exporter-v4hdh\" (UID: \"842251bd-238a-44ba-99fc-a356503f5d16\") " pod="openshift-monitoring/node-exporter-v4hdh" Mar 13 12:42:00.166703 master-0 kubenswrapper[7518]: I0313 12:42:00.166655 7518 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/1081e565-b7d8-4b6e-9d41-5db36cfe094c-metrics-client-ca\") pod \"openshift-state-metrics-74cc79fd76-clrbz\" (UID: \"1081e565-b7d8-4b6e-9d41-5db36cfe094c\") " pod="openshift-monitoring/openshift-state-metrics-74cc79fd76-clrbz" Mar 13 12:42:00.166748 master-0 kubenswrapper[7518]: I0313 12:42:00.166700 7518 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/5e4f10ca-6466-4ac0-aeb7-325e40473e04-kube-state-metrics-tls\") pod \"kube-state-metrics-68b88f8cb5-blvhm\" (UID: \"5e4f10ca-6466-4ac0-aeb7-325e40473e04\") " pod="openshift-monitoring/kube-state-metrics-68b88f8cb5-blvhm" Mar 13 12:42:00.166824 master-0 kubenswrapper[7518]: I0313 12:42:00.166787 7518 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-4xbrx\" (UniqueName: \"kubernetes.io/projected/5e4f10ca-6466-4ac0-aeb7-325e40473e04-kube-api-access-4xbrx\") pod \"kube-state-metrics-68b88f8cb5-blvhm\" (UID: \"5e4f10ca-6466-4ac0-aeb7-325e40473e04\") " pod="openshift-monitoring/kube-state-metrics-68b88f8cb5-blvhm" Mar 13 12:42:00.166984 master-0 kubenswrapper[7518]: E0313 12:42:00.166955 7518 secret.go:189] Couldn't get secret openshift-monitoring/kube-state-metrics-tls: secret "kube-state-metrics-tls" not found Mar 13 12:42:00.167100 master-0 kubenswrapper[7518]: E0313 12:42:00.167068 7518 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5e4f10ca-6466-4ac0-aeb7-325e40473e04-kube-state-metrics-tls podName:5e4f10ca-6466-4ac0-aeb7-325e40473e04 nodeName:}" failed. No retries permitted until 2026-03-13 12:42:00.667009064 +0000 UTC m=+275.300078251 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-state-metrics-tls" (UniqueName: "kubernetes.io/secret/5e4f10ca-6466-4ac0-aeb7-325e40473e04-kube-state-metrics-tls") pod "kube-state-metrics-68b88f8cb5-blvhm" (UID: "5e4f10ca-6466-4ac0-aeb7-325e40473e04") : secret "kube-state-metrics-tls" not found Mar 13 12:42:00.167862 master-0 kubenswrapper[7518]: I0313 12:42:00.167257 7518 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9v2jm\" (UniqueName: \"kubernetes.io/projected/842251bd-238a-44ba-99fc-a356503f5d16-kube-api-access-9v2jm\") pod \"node-exporter-v4hdh\" (UID: \"842251bd-238a-44ba-99fc-a356503f5d16\") " pod="openshift-monitoring/node-exporter-v4hdh" Mar 13 12:42:00.168112 master-0 kubenswrapper[7518]: I0313 12:42:00.167923 7518 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openshift-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/1081e565-b7d8-4b6e-9d41-5db36cfe094c-openshift-state-metrics-kube-rbac-proxy-config\") pod \"openshift-state-metrics-74cc79fd76-clrbz\" (UID: 
\"1081e565-b7d8-4b6e-9d41-5db36cfe094c\") " pod="openshift-monitoring/openshift-state-metrics-74cc79fd76-clrbz" Mar 13 12:42:00.168112 master-0 kubenswrapper[7518]: I0313 12:42:00.167971 7518 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/842251bd-238a-44ba-99fc-a356503f5d16-metrics-client-ca\") pod \"node-exporter-v4hdh\" (UID: \"842251bd-238a-44ba-99fc-a356503f5d16\") " pod="openshift-monitoring/node-exporter-v4hdh" Mar 13 12:42:00.168112 master-0 kubenswrapper[7518]: I0313 12:42:00.167810 7518 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/1081e565-b7d8-4b6e-9d41-5db36cfe094c-metrics-client-ca\") pod \"openshift-state-metrics-74cc79fd76-clrbz\" (UID: \"1081e565-b7d8-4b6e-9d41-5db36cfe094c\") " pod="openshift-monitoring/openshift-state-metrics-74cc79fd76-clrbz" Mar 13 12:42:00.168112 master-0 kubenswrapper[7518]: I0313 12:42:00.168004 7518 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b726x\" (UniqueName: \"kubernetes.io/projected/1081e565-b7d8-4b6e-9d41-5db36cfe094c-kube-api-access-b726x\") pod \"openshift-state-metrics-74cc79fd76-clrbz\" (UID: \"1081e565-b7d8-4b6e-9d41-5db36cfe094c\") " pod="openshift-monitoring/openshift-state-metrics-74cc79fd76-clrbz" Mar 13 12:42:00.168112 master-0 kubenswrapper[7518]: I0313 12:42:00.168030 7518 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/5e4f10ca-6466-4ac0-aeb7-325e40473e04-kube-state-metrics-kube-rbac-proxy-config\") pod \"kube-state-metrics-68b88f8cb5-blvhm\" (UID: \"5e4f10ca-6466-4ac0-aeb7-325e40473e04\") " pod="openshift-monitoring/kube-state-metrics-68b88f8cb5-blvhm" Mar 13 12:42:00.168112 master-0 kubenswrapper[7518]: I0313 12:42:00.167306 7518 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"node-exporter-textfile\" (UniqueName: \"kubernetes.io/empty-dir/842251bd-238a-44ba-99fc-a356503f5d16-node-exporter-textfile\") pod \"node-exporter-v4hdh\" (UID: \"842251bd-238a-44ba-99fc-a356503f5d16\") " pod="openshift-monitoring/node-exporter-v4hdh" Mar 13 12:42:00.168410 master-0 kubenswrapper[7518]: I0313 12:42:00.168307 7518 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-exporter-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/842251bd-238a-44ba-99fc-a356503f5d16-node-exporter-kube-rbac-proxy-config\") pod \"node-exporter-v4hdh\" (UID: \"842251bd-238a-44ba-99fc-a356503f5d16\") " pod="openshift-monitoring/node-exporter-v4hdh" Mar 13 12:42:00.168410 master-0 kubenswrapper[7518]: I0313 12:42:00.168356 7518 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-exporter-tls\" (UniqueName: \"kubernetes.io/secret/842251bd-238a-44ba-99fc-a356503f5d16-node-exporter-tls\") pod \"node-exporter-v4hdh\" (UID: \"842251bd-238a-44ba-99fc-a356503f5d16\") " pod="openshift-monitoring/node-exporter-v4hdh" Mar 13 12:42:00.168522 master-0 kubenswrapper[7518]: E0313 12:42:00.168479 7518 secret.go:189] Couldn't get secret openshift-monitoring/node-exporter-tls: secret "node-exporter-tls" not found Mar 13 12:42:00.168572 master-0 kubenswrapper[7518]: E0313 12:42:00.168528 7518 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/842251bd-238a-44ba-99fc-a356503f5d16-node-exporter-tls podName:842251bd-238a-44ba-99fc-a356503f5d16 nodeName:}" failed. No retries permitted until 2026-03-13 12:42:00.668511782 +0000 UTC m=+275.301580969 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "node-exporter-tls" (UniqueName: "kubernetes.io/secret/842251bd-238a-44ba-99fc-a356503f5d16-node-exporter-tls") pod "node-exporter-v4hdh" (UID: "842251bd-238a-44ba-99fc-a356503f5d16") : secret "node-exporter-tls" not found Mar 13 12:42:00.168637 master-0 kubenswrapper[7518]: I0313 12:42:00.168613 7518 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/5e4f10ca-6466-4ac0-aeb7-325e40473e04-metrics-client-ca\") pod \"kube-state-metrics-68b88f8cb5-blvhm\" (UID: \"5e4f10ca-6466-4ac0-aeb7-325e40473e04\") " pod="openshift-monitoring/kube-state-metrics-68b88f8cb5-blvhm" Mar 13 12:42:00.168678 master-0 kubenswrapper[7518]: I0313 12:42:00.168655 7518 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-custom-resource-state-configmap\" (UniqueName: \"kubernetes.io/configmap/5e4f10ca-6466-4ac0-aeb7-325e40473e04-kube-state-metrics-custom-resource-state-configmap\") pod \"kube-state-metrics-68b88f8cb5-blvhm\" (UID: \"5e4f10ca-6466-4ac0-aeb7-325e40473e04\") " pod="openshift-monitoring/kube-state-metrics-68b88f8cb5-blvhm" Mar 13 12:42:00.168727 master-0 kubenswrapper[7518]: I0313 12:42:00.168699 7518 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-exporter-wtmp\" (UniqueName: \"kubernetes.io/host-path/842251bd-238a-44ba-99fc-a356503f5d16-node-exporter-wtmp\") pod \"node-exporter-v4hdh\" (UID: \"842251bd-238a-44ba-99fc-a356503f5d16\") " pod="openshift-monitoring/node-exporter-v4hdh" Mar 13 12:42:00.168791 master-0 kubenswrapper[7518]: I0313 12:42:00.168744 7518 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"root\" (UniqueName: \"kubernetes.io/host-path/842251bd-238a-44ba-99fc-a356503f5d16-root\") pod \"node-exporter-v4hdh\" (UID: \"842251bd-238a-44ba-99fc-a356503f5d16\") " pod="openshift-monitoring/node-exporter-v4hdh" Mar 13 12:42:00.168791 
master-0 kubenswrapper[7518]: I0313 12:42:00.168781 7518 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"volume-directive-shadow\" (UniqueName: \"kubernetes.io/empty-dir/5e4f10ca-6466-4ac0-aeb7-325e40473e04-volume-directive-shadow\") pod \"kube-state-metrics-68b88f8cb5-blvhm\" (UID: \"5e4f10ca-6466-4ac0-aeb7-325e40473e04\") " pod="openshift-monitoring/kube-state-metrics-68b88f8cb5-blvhm" Mar 13 12:42:00.169037 master-0 kubenswrapper[7518]: I0313 12:42:00.169009 7518 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-exporter-wtmp\" (UniqueName: \"kubernetes.io/host-path/842251bd-238a-44ba-99fc-a356503f5d16-node-exporter-wtmp\") pod \"node-exporter-v4hdh\" (UID: \"842251bd-238a-44ba-99fc-a356503f5d16\") " pod="openshift-monitoring/node-exporter-v4hdh" Mar 13 12:42:00.169037 master-0 kubenswrapper[7518]: I0313 12:42:00.169007 7518 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/842251bd-238a-44ba-99fc-a356503f5d16-metrics-client-ca\") pod \"node-exporter-v4hdh\" (UID: \"842251bd-238a-44ba-99fc-a356503f5d16\") " pod="openshift-monitoring/node-exporter-v4hdh" Mar 13 12:42:00.169156 master-0 kubenswrapper[7518]: I0313 12:42:00.169056 7518 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"root\" (UniqueName: \"kubernetes.io/host-path/842251bd-238a-44ba-99fc-a356503f5d16-root\") pod \"node-exporter-v4hdh\" (UID: \"842251bd-238a-44ba-99fc-a356503f5d16\") " pod="openshift-monitoring/node-exporter-v4hdh" Mar 13 12:42:00.169748 master-0 kubenswrapper[7518]: I0313 12:42:00.169716 7518 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"volume-directive-shadow\" (UniqueName: \"kubernetes.io/empty-dir/5e4f10ca-6466-4ac0-aeb7-325e40473e04-volume-directive-shadow\") pod \"kube-state-metrics-68b88f8cb5-blvhm\" (UID: \"5e4f10ca-6466-4ac0-aeb7-325e40473e04\") " 
pod="openshift-monitoring/kube-state-metrics-68b88f8cb5-blvhm" Mar 13 12:42:00.170064 master-0 kubenswrapper[7518]: I0313 12:42:00.170024 7518 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-custom-resource-state-configmap\" (UniqueName: \"kubernetes.io/configmap/5e4f10ca-6466-4ac0-aeb7-325e40473e04-kube-state-metrics-custom-resource-state-configmap\") pod \"kube-state-metrics-68b88f8cb5-blvhm\" (UID: \"5e4f10ca-6466-4ac0-aeb7-325e40473e04\") " pod="openshift-monitoring/kube-state-metrics-68b88f8cb5-blvhm" Mar 13 12:42:00.170064 master-0 kubenswrapper[7518]: I0313 12:42:00.170036 7518 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/5e4f10ca-6466-4ac0-aeb7-325e40473e04-metrics-client-ca\") pod \"kube-state-metrics-68b88f8cb5-blvhm\" (UID: \"5e4f10ca-6466-4ac0-aeb7-325e40473e04\") " pod="openshift-monitoring/kube-state-metrics-68b88f8cb5-blvhm" Mar 13 12:42:00.172338 master-0 kubenswrapper[7518]: I0313 12:42:00.172231 7518 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-exporter-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/842251bd-238a-44ba-99fc-a356503f5d16-node-exporter-kube-rbac-proxy-config\") pod \"node-exporter-v4hdh\" (UID: \"842251bd-238a-44ba-99fc-a356503f5d16\") " pod="openshift-monitoring/node-exporter-v4hdh" Mar 13 12:42:00.172598 master-0 kubenswrapper[7518]: I0313 12:42:00.172541 7518 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/5e4f10ca-6466-4ac0-aeb7-325e40473e04-kube-state-metrics-kube-rbac-proxy-config\") pod \"kube-state-metrics-68b88f8cb5-blvhm\" (UID: \"5e4f10ca-6466-4ac0-aeb7-325e40473e04\") " pod="openshift-monitoring/kube-state-metrics-68b88f8cb5-blvhm" Mar 13 12:42:00.172722 master-0 kubenswrapper[7518]: I0313 12:42:00.172697 7518 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"openshift-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/1081e565-b7d8-4b6e-9d41-5db36cfe094c-openshift-state-metrics-tls\") pod \"openshift-state-metrics-74cc79fd76-clrbz\" (UID: \"1081e565-b7d8-4b6e-9d41-5db36cfe094c\") " pod="openshift-monitoring/openshift-state-metrics-74cc79fd76-clrbz" Mar 13 12:42:00.183545 master-0 kubenswrapper[7518]: I0313 12:42:00.183491 7518 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openshift-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/1081e565-b7d8-4b6e-9d41-5db36cfe094c-openshift-state-metrics-kube-rbac-proxy-config\") pod \"openshift-state-metrics-74cc79fd76-clrbz\" (UID: \"1081e565-b7d8-4b6e-9d41-5db36cfe094c\") " pod="openshift-monitoring/openshift-state-metrics-74cc79fd76-clrbz" Mar 13 12:42:00.254525 master-0 kubenswrapper[7518]: I0313 12:42:00.253456 7518 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4xbrx\" (UniqueName: \"kubernetes.io/projected/5e4f10ca-6466-4ac0-aeb7-325e40473e04-kube-api-access-4xbrx\") pod \"kube-state-metrics-68b88f8cb5-blvhm\" (UID: \"5e4f10ca-6466-4ac0-aeb7-325e40473e04\") " pod="openshift-monitoring/kube-state-metrics-68b88f8cb5-blvhm" Mar 13 12:42:00.262095 master-0 kubenswrapper[7518]: I0313 12:42:00.262044 7518 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9v2jm\" (UniqueName: \"kubernetes.io/projected/842251bd-238a-44ba-99fc-a356503f5d16-kube-api-access-9v2jm\") pod \"node-exporter-v4hdh\" (UID: \"842251bd-238a-44ba-99fc-a356503f5d16\") " pod="openshift-monitoring/node-exporter-v4hdh" Mar 13 12:42:00.270049 master-0 kubenswrapper[7518]: I0313 12:42:00.270001 7518 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b726x\" (UniqueName: \"kubernetes.io/projected/1081e565-b7d8-4b6e-9d41-5db36cfe094c-kube-api-access-b726x\") pod \"openshift-state-metrics-74cc79fd76-clrbz\" (UID: 
\"1081e565-b7d8-4b6e-9d41-5db36cfe094c\") " pod="openshift-monitoring/openshift-state-metrics-74cc79fd76-clrbz" Mar 13 12:42:00.483619 master-0 kubenswrapper[7518]: I0313 12:42:00.483469 7518 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/openshift-state-metrics-74cc79fd76-clrbz" Mar 13 12:42:00.540895 master-0 kubenswrapper[7518]: I0313 12:42:00.540850 7518 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-wtf6j container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 12:42:00.540895 master-0 kubenswrapper[7518]: [-]has-synced failed: reason withheld Mar 13 12:42:00.540895 master-0 kubenswrapper[7518]: [+]process-running ok Mar 13 12:42:00.540895 master-0 kubenswrapper[7518]: healthz check failed Mar 13 12:42:00.541169 master-0 kubenswrapper[7518]: I0313 12:42:00.540900 7518 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-wtf6j" podUID="45925a5e-41ae-4c19-b586-3151c7677612" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 12:42:00.764913 master-0 kubenswrapper[7518]: I0313 12:42:00.764848 7518 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-exporter-tls\" (UniqueName: \"kubernetes.io/secret/842251bd-238a-44ba-99fc-a356503f5d16-node-exporter-tls\") pod \"node-exporter-v4hdh\" (UID: \"842251bd-238a-44ba-99fc-a356503f5d16\") " pod="openshift-monitoring/node-exporter-v4hdh" Mar 13 12:42:00.766012 master-0 kubenswrapper[7518]: I0313 12:42:00.765282 7518 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/5e4f10ca-6466-4ac0-aeb7-325e40473e04-kube-state-metrics-tls\") pod \"kube-state-metrics-68b88f8cb5-blvhm\" (UID: \"5e4f10ca-6466-4ac0-aeb7-325e40473e04\") " 
pod="openshift-monitoring/kube-state-metrics-68b88f8cb5-blvhm" Mar 13 12:42:00.768429 master-0 kubenswrapper[7518]: I0313 12:42:00.768382 7518 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-exporter-tls\" (UniqueName: \"kubernetes.io/secret/842251bd-238a-44ba-99fc-a356503f5d16-node-exporter-tls\") pod \"node-exporter-v4hdh\" (UID: \"842251bd-238a-44ba-99fc-a356503f5d16\") " pod="openshift-monitoring/node-exporter-v4hdh" Mar 13 12:42:00.769311 master-0 kubenswrapper[7518]: I0313 12:42:00.769272 7518 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/5e4f10ca-6466-4ac0-aeb7-325e40473e04-kube-state-metrics-tls\") pod \"kube-state-metrics-68b88f8cb5-blvhm\" (UID: \"5e4f10ca-6466-4ac0-aeb7-325e40473e04\") " pod="openshift-monitoring/kube-state-metrics-68b88f8cb5-blvhm" Mar 13 12:42:00.975371 master-0 kubenswrapper[7518]: I0313 12:42:00.972127 7518 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/kube-state-metrics-68b88f8cb5-blvhm" Mar 13 12:42:00.975371 master-0 kubenswrapper[7518]: I0313 12:42:00.972694 7518 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/node-exporter-v4hdh" Mar 13 12:42:01.078379 master-0 kubenswrapper[7518]: I0313 12:42:01.075381 7518 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/openshift-state-metrics-74cc79fd76-clrbz"] Mar 13 12:42:01.542016 master-0 kubenswrapper[7518]: I0313 12:42:01.541959 7518 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-wtf6j container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 12:42:01.542016 master-0 kubenswrapper[7518]: [-]has-synced failed: reason withheld Mar 13 12:42:01.542016 master-0 kubenswrapper[7518]: [+]process-running ok Mar 13 12:42:01.542016 master-0 kubenswrapper[7518]: healthz check failed Mar 13 12:42:01.542627 master-0 kubenswrapper[7518]: I0313 12:42:01.542024 7518 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-wtf6j" podUID="45925a5e-41ae-4c19-b586-3151c7677612" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 12:42:01.639901 master-0 kubenswrapper[7518]: I0313 12:42:01.639827 7518 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/kube-state-metrics-68b88f8cb5-blvhm"] Mar 13 12:42:01.992231 master-0 kubenswrapper[7518]: I0313 12:42:01.992030 7518 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/openshift-state-metrics-74cc79fd76-clrbz" event={"ID":"1081e565-b7d8-4b6e-9d41-5db36cfe094c","Type":"ContainerStarted","Data":"20328d149349b2cf3f63f90012ee59e52368c8992f6fbf7644320801d151306f"} Mar 13 12:42:01.992231 master-0 kubenswrapper[7518]: I0313 12:42:01.992084 7518 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/openshift-state-metrics-74cc79fd76-clrbz" 
event={"ID":"1081e565-b7d8-4b6e-9d41-5db36cfe094c","Type":"ContainerStarted","Data":"852b9b7c6098e3f4e2a825d2f5ddd269fe45c2d2afb81cc522152774bf43dddf"} Mar 13 12:42:01.992231 master-0 kubenswrapper[7518]: I0313 12:42:01.992093 7518 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/openshift-state-metrics-74cc79fd76-clrbz" event={"ID":"1081e565-b7d8-4b6e-9d41-5db36cfe094c","Type":"ContainerStarted","Data":"aff2f4bdb8410e55f89c70c290b0ee60c11f3e12de8945726a3ee53766f5711f"} Mar 13 12:42:02.000156 master-0 kubenswrapper[7518]: I0313 12:42:01.997715 7518 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/kube-state-metrics-68b88f8cb5-blvhm" event={"ID":"5e4f10ca-6466-4ac0-aeb7-325e40473e04","Type":"ContainerStarted","Data":"9f0c754e60ef175d41e372a61f68bf008bd4fa86f313ae1ab6dd7da87027e47f"} Mar 13 12:42:02.000156 master-0 kubenswrapper[7518]: I0313 12:42:02.000115 7518 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/node-exporter-v4hdh" event={"ID":"842251bd-238a-44ba-99fc-a356503f5d16","Type":"ContainerStarted","Data":"d211f6630b0e510a98b862295b3b4e01e3b8d0f319a2b5a7fbad71f4b348ebd3"} Mar 13 12:42:02.540899 master-0 kubenswrapper[7518]: I0313 12:42:02.540859 7518 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-wtf6j container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 12:42:02.540899 master-0 kubenswrapper[7518]: [-]has-synced failed: reason withheld Mar 13 12:42:02.540899 master-0 kubenswrapper[7518]: [+]process-running ok Mar 13 12:42:02.540899 master-0 kubenswrapper[7518]: healthz check failed Mar 13 12:42:02.541152 master-0 kubenswrapper[7518]: I0313 12:42:02.540928 7518 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-wtf6j" podUID="45925a5e-41ae-4c19-b586-3151c7677612" containerName="router" 
probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 12:42:03.541989 master-0 kubenswrapper[7518]: I0313 12:42:03.541928 7518 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-wtf6j container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 12:42:03.541989 master-0 kubenswrapper[7518]: [-]has-synced failed: reason withheld Mar 13 12:42:03.541989 master-0 kubenswrapper[7518]: [+]process-running ok Mar 13 12:42:03.541989 master-0 kubenswrapper[7518]: healthz check failed Mar 13 12:42:03.542725 master-0 kubenswrapper[7518]: I0313 12:42:03.542021 7518 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-wtf6j" podUID="45925a5e-41ae-4c19-b586-3151c7677612" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 12:42:04.088988 master-0 kubenswrapper[7518]: I0313 12:42:04.088936 7518 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/kube-state-metrics-68b88f8cb5-blvhm" event={"ID":"5e4f10ca-6466-4ac0-aeb7-325e40473e04","Type":"ContainerStarted","Data":"c0dfce4c711e7c949af1bdf93d077b00c3f743a74723e0d9d5779c35d03ad129"} Mar 13 12:42:04.088988 master-0 kubenswrapper[7518]: I0313 12:42:04.088993 7518 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/kube-state-metrics-68b88f8cb5-blvhm" event={"ID":"5e4f10ca-6466-4ac0-aeb7-325e40473e04","Type":"ContainerStarted","Data":"7c7532daccf5b1f0a549dcf67c304b69a97155f01b3f70bb4951e0bf4be838bc"} Mar 13 12:42:04.092153 master-0 kubenswrapper[7518]: I0313 12:42:04.092094 7518 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/openshift-state-metrics-74cc79fd76-clrbz" event={"ID":"1081e565-b7d8-4b6e-9d41-5db36cfe094c","Type":"ContainerStarted","Data":"5b62b8c66924991594f5eba69a24acf5a5f1bb6d85a7748d181360050cf836e5"} 
Mar 13 12:42:04.093933 master-0 kubenswrapper[7518]: I0313 12:42:04.093897 7518 generic.go:334] "Generic (PLEG): container finished" podID="842251bd-238a-44ba-99fc-a356503f5d16" containerID="255845d3d1399076602401b1b6c6d6b0266b45fda7e7b34498aafae3e13d0822" exitCode=0 Mar 13 12:42:04.094010 master-0 kubenswrapper[7518]: I0313 12:42:04.093943 7518 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/node-exporter-v4hdh" event={"ID":"842251bd-238a-44ba-99fc-a356503f5d16","Type":"ContainerDied","Data":"255845d3d1399076602401b1b6c6d6b0266b45fda7e7b34498aafae3e13d0822"} Mar 13 12:42:04.171041 master-0 kubenswrapper[7518]: I0313 12:42:04.170755 7518 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/openshift-state-metrics-74cc79fd76-clrbz" podStartSLOduration=3.7378963179999998 podStartE2EDuration="5.170727587s" podCreationTimestamp="2026-03-13 12:41:59 +0000 UTC" firstStartedPulling="2026-03-13 12:42:01.521775303 +0000 UTC m=+276.154844490" lastFinishedPulling="2026-03-13 12:42:02.954606572 +0000 UTC m=+277.587675759" observedRunningTime="2026-03-13 12:42:04.130366784 +0000 UTC m=+278.763435991" watchObservedRunningTime="2026-03-13 12:42:04.170727587 +0000 UTC m=+278.803796774" Mar 13 12:42:04.543066 master-0 kubenswrapper[7518]: I0313 12:42:04.542995 7518 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-wtf6j container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 12:42:04.543066 master-0 kubenswrapper[7518]: [-]has-synced failed: reason withheld Mar 13 12:42:04.543066 master-0 kubenswrapper[7518]: [+]process-running ok Mar 13 12:42:04.543066 master-0 kubenswrapper[7518]: healthz check failed Mar 13 12:42:04.543646 master-0 kubenswrapper[7518]: I0313 12:42:04.543085 7518 prober.go:107] "Probe failed" probeType="Startup" 
pod="openshift-ingress/router-default-79f8cd6fdd-wtf6j" podUID="45925a5e-41ae-4c19-b586-3151c7677612" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 12:42:05.108020 master-0 kubenswrapper[7518]: I0313 12:42:05.107970 7518 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/kube-state-metrics-68b88f8cb5-blvhm" event={"ID":"5e4f10ca-6466-4ac0-aeb7-325e40473e04","Type":"ContainerStarted","Data":"4919c1dbcaee1bb9db6e6418598cdeb76b2f6b91bb45fba651ec4132e0888a15"} Mar 13 12:42:05.116607 master-0 kubenswrapper[7518]: I0313 12:42:05.116551 7518 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/node-exporter-v4hdh" event={"ID":"842251bd-238a-44ba-99fc-a356503f5d16","Type":"ContainerStarted","Data":"02a46f48db04c46b5cf26cac47a8866c3ec99c7e1f84d89a408b1c8bafcb55f0"} Mar 13 12:42:05.116835 master-0 kubenswrapper[7518]: I0313 12:42:05.116618 7518 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/node-exporter-v4hdh" event={"ID":"842251bd-238a-44ba-99fc-a356503f5d16","Type":"ContainerStarted","Data":"8e3929e94707b1a9ec0a8fe6cdaf26dbd54d9320e814be8d46ceddbe27ec6603"} Mar 13 12:42:05.137416 master-0 kubenswrapper[7518]: I0313 12:42:05.137319 7518 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/kube-state-metrics-68b88f8cb5-blvhm" podStartSLOduration=4.150227829 podStartE2EDuration="6.137294271s" podCreationTimestamp="2026-03-13 12:41:59 +0000 UTC" firstStartedPulling="2026-03-13 12:42:01.645897459 +0000 UTC m=+276.278966646" lastFinishedPulling="2026-03-13 12:42:03.632963901 +0000 UTC m=+278.266033088" observedRunningTime="2026-03-13 12:42:05.133871533 +0000 UTC m=+279.766940740" watchObservedRunningTime="2026-03-13 12:42:05.137294271 +0000 UTC m=+279.770363458" Mar 13 12:42:05.229643 master-0 kubenswrapper[7518]: I0313 12:42:05.229468 7518 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-monitoring/node-exporter-v4hdh" podStartSLOduration=4.3543804139999995 podStartE2EDuration="6.229434495s" podCreationTimestamp="2026-03-13 12:41:59 +0000 UTC" firstStartedPulling="2026-03-13 12:42:01.033660928 +0000 UTC m=+275.666730125" lastFinishedPulling="2026-03-13 12:42:02.908715019 +0000 UTC m=+277.541784206" observedRunningTime="2026-03-13 12:42:05.169956398 +0000 UTC m=+279.803025585" watchObservedRunningTime="2026-03-13 12:42:05.229434495 +0000 UTC m=+279.862503692" Mar 13 12:42:05.723890 master-0 kubenswrapper[7518]: I0313 12:42:05.544036 7518 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-wtf6j container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 12:42:05.723890 master-0 kubenswrapper[7518]: [-]has-synced failed: reason withheld Mar 13 12:42:05.723890 master-0 kubenswrapper[7518]: [+]process-running ok Mar 13 12:42:05.723890 master-0 kubenswrapper[7518]: healthz check failed Mar 13 12:42:05.723890 master-0 kubenswrapper[7518]: I0313 12:42:05.544128 7518 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-wtf6j" podUID="45925a5e-41ae-4c19-b586-3151c7677612" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 12:42:05.843645 master-0 kubenswrapper[7518]: I0313 12:42:05.843603 7518 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/metrics-server-567b9cf7f-cxnj2"] Mar 13 12:42:05.844404 master-0 kubenswrapper[7518]: I0313 12:42:05.844383 7518 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/metrics-server-567b9cf7f-cxnj2" Mar 13 12:42:05.844756 master-0 kubenswrapper[7518]: I0313 12:42:05.844727 7518 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/fc192c03-5aec-4507-a702-56bf98c96e9c-secret-metrics-client-certs\") pod \"metrics-server-567b9cf7f-cxnj2\" (UID: \"fc192c03-5aec-4507-a702-56bf98c96e9c\") " pod="openshift-monitoring/metrics-server-567b9cf7f-cxnj2" Mar 13 12:42:05.844819 master-0 kubenswrapper[7518]: I0313 12:42:05.844767 7518 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c69h2\" (UniqueName: \"kubernetes.io/projected/fc192c03-5aec-4507-a702-56bf98c96e9c-kube-api-access-c69h2\") pod \"metrics-server-567b9cf7f-cxnj2\" (UID: \"fc192c03-5aec-4507-a702-56bf98c96e9c\") " pod="openshift-monitoring/metrics-server-567b9cf7f-cxnj2" Mar 13 12:42:05.844819 master-0 kubenswrapper[7518]: I0313 12:42:05.844792 7518 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-metrics-server-tls\" (UniqueName: \"kubernetes.io/secret/fc192c03-5aec-4507-a702-56bf98c96e9c-secret-metrics-server-tls\") pod \"metrics-server-567b9cf7f-cxnj2\" (UID: \"fc192c03-5aec-4507-a702-56bf98c96e9c\") " pod="openshift-monitoring/metrics-server-567b9cf7f-cxnj2" Mar 13 12:42:05.844819 master-0 kubenswrapper[7518]: I0313 12:42:05.844816 7518 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fc192c03-5aec-4507-a702-56bf98c96e9c-client-ca-bundle\") pod \"metrics-server-567b9cf7f-cxnj2\" (UID: \"fc192c03-5aec-4507-a702-56bf98c96e9c\") " pod="openshift-monitoring/metrics-server-567b9cf7f-cxnj2" Mar 13 12:42:05.844927 master-0 kubenswrapper[7518]: I0313 12:42:05.844840 7518 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-server-audit-profiles\" (UniqueName: \"kubernetes.io/configmap/fc192c03-5aec-4507-a702-56bf98c96e9c-metrics-server-audit-profiles\") pod \"metrics-server-567b9cf7f-cxnj2\" (UID: \"fc192c03-5aec-4507-a702-56bf98c96e9c\") " pod="openshift-monitoring/metrics-server-567b9cf7f-cxnj2" Mar 13 12:42:05.844927 master-0 kubenswrapper[7518]: I0313 12:42:05.844861 7518 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fc192c03-5aec-4507-a702-56bf98c96e9c-configmap-kubelet-serving-ca-bundle\") pod \"metrics-server-567b9cf7f-cxnj2\" (UID: \"fc192c03-5aec-4507-a702-56bf98c96e9c\") " pod="openshift-monitoring/metrics-server-567b9cf7f-cxnj2" Mar 13 12:42:05.844927 master-0 kubenswrapper[7518]: I0313 12:42:05.844915 7518 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-log\" (UniqueName: \"kubernetes.io/empty-dir/fc192c03-5aec-4507-a702-56bf98c96e9c-audit-log\") pod \"metrics-server-567b9cf7f-cxnj2\" (UID: \"fc192c03-5aec-4507-a702-56bf98c96e9c\") " pod="openshift-monitoring/metrics-server-567b9cf7f-cxnj2" Mar 13 12:42:05.846070 master-0 kubenswrapper[7518]: I0313 12:42:05.846049 7518 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"metrics-server-tls" Mar 13 12:42:05.849276 master-0 kubenswrapper[7518]: I0313 12:42:05.849237 7518 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"metrics-server-a1r15je3eljsi" Mar 13 12:42:05.849276 master-0 kubenswrapper[7518]: I0313 12:42:05.849255 7518 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"metrics-server-audit-profiles" Mar 13 12:42:05.849366 master-0 kubenswrapper[7518]: I0313 12:42:05.849326 7518 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-monitoring"/"kubelet-serving-ca-bundle" Mar 13 12:42:05.849366 master-0 kubenswrapper[7518]: I0313 12:42:05.849319 7518 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"metrics-client-certs" Mar 13 12:42:05.849468 master-0 kubenswrapper[7518]: I0313 12:42:05.849355 7518 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"metrics-server-dockercfg-lt6vb" Mar 13 12:42:05.851190 master-0 kubenswrapper[7518]: I0313 12:42:05.851128 7518 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/metrics-server-567b9cf7f-cxnj2"] Mar 13 12:42:05.945854 master-0 kubenswrapper[7518]: I0313 12:42:05.945801 7518 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/fc192c03-5aec-4507-a702-56bf98c96e9c-secret-metrics-client-certs\") pod \"metrics-server-567b9cf7f-cxnj2\" (UID: \"fc192c03-5aec-4507-a702-56bf98c96e9c\") " pod="openshift-monitoring/metrics-server-567b9cf7f-cxnj2" Mar 13 12:42:05.945854 master-0 kubenswrapper[7518]: I0313 12:42:05.945847 7518 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c69h2\" (UniqueName: \"kubernetes.io/projected/fc192c03-5aec-4507-a702-56bf98c96e9c-kube-api-access-c69h2\") pod \"metrics-server-567b9cf7f-cxnj2\" (UID: \"fc192c03-5aec-4507-a702-56bf98c96e9c\") " pod="openshift-monitoring/metrics-server-567b9cf7f-cxnj2" Mar 13 12:42:05.946107 master-0 kubenswrapper[7518]: I0313 12:42:05.945875 7518 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-metrics-server-tls\" (UniqueName: \"kubernetes.io/secret/fc192c03-5aec-4507-a702-56bf98c96e9c-secret-metrics-server-tls\") pod \"metrics-server-567b9cf7f-cxnj2\" (UID: \"fc192c03-5aec-4507-a702-56bf98c96e9c\") " pod="openshift-monitoring/metrics-server-567b9cf7f-cxnj2" Mar 13 12:42:05.946107 master-0 kubenswrapper[7518]: I0313 
12:42:05.945894 7518 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fc192c03-5aec-4507-a702-56bf98c96e9c-client-ca-bundle\") pod \"metrics-server-567b9cf7f-cxnj2\" (UID: \"fc192c03-5aec-4507-a702-56bf98c96e9c\") " pod="openshift-monitoring/metrics-server-567b9cf7f-cxnj2" Mar 13 12:42:05.946107 master-0 kubenswrapper[7518]: I0313 12:42:05.945924 7518 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-server-audit-profiles\" (UniqueName: \"kubernetes.io/configmap/fc192c03-5aec-4507-a702-56bf98c96e9c-metrics-server-audit-profiles\") pod \"metrics-server-567b9cf7f-cxnj2\" (UID: \"fc192c03-5aec-4507-a702-56bf98c96e9c\") " pod="openshift-monitoring/metrics-server-567b9cf7f-cxnj2" Mar 13 12:42:05.946107 master-0 kubenswrapper[7518]: I0313 12:42:05.945961 7518 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fc192c03-5aec-4507-a702-56bf98c96e9c-configmap-kubelet-serving-ca-bundle\") pod \"metrics-server-567b9cf7f-cxnj2\" (UID: \"fc192c03-5aec-4507-a702-56bf98c96e9c\") " pod="openshift-monitoring/metrics-server-567b9cf7f-cxnj2" Mar 13 12:42:05.946107 master-0 kubenswrapper[7518]: I0313 12:42:05.946028 7518 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-log\" (UniqueName: \"kubernetes.io/empty-dir/fc192c03-5aec-4507-a702-56bf98c96e9c-audit-log\") pod \"metrics-server-567b9cf7f-cxnj2\" (UID: \"fc192c03-5aec-4507-a702-56bf98c96e9c\") " pod="openshift-monitoring/metrics-server-567b9cf7f-cxnj2" Mar 13 12:42:05.947405 master-0 kubenswrapper[7518]: I0313 12:42:05.946473 7518 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-log\" (UniqueName: \"kubernetes.io/empty-dir/fc192c03-5aec-4507-a702-56bf98c96e9c-audit-log\") pod \"metrics-server-567b9cf7f-cxnj2\" (UID: 
\"fc192c03-5aec-4507-a702-56bf98c96e9c\") " pod="openshift-monitoring/metrics-server-567b9cf7f-cxnj2" Mar 13 12:42:05.950447 master-0 kubenswrapper[7518]: I0313 12:42:05.950395 7518 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-server-audit-profiles\" (UniqueName: \"kubernetes.io/configmap/fc192c03-5aec-4507-a702-56bf98c96e9c-metrics-server-audit-profiles\") pod \"metrics-server-567b9cf7f-cxnj2\" (UID: \"fc192c03-5aec-4507-a702-56bf98c96e9c\") " pod="openshift-monitoring/metrics-server-567b9cf7f-cxnj2" Mar 13 12:42:05.952464 master-0 kubenswrapper[7518]: I0313 12:42:05.950762 7518 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fc192c03-5aec-4507-a702-56bf98c96e9c-configmap-kubelet-serving-ca-bundle\") pod \"metrics-server-567b9cf7f-cxnj2\" (UID: \"fc192c03-5aec-4507-a702-56bf98c96e9c\") " pod="openshift-monitoring/metrics-server-567b9cf7f-cxnj2" Mar 13 12:42:05.952464 master-0 kubenswrapper[7518]: I0313 12:42:05.951718 7518 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/fc192c03-5aec-4507-a702-56bf98c96e9c-secret-metrics-client-certs\") pod \"metrics-server-567b9cf7f-cxnj2\" (UID: \"fc192c03-5aec-4507-a702-56bf98c96e9c\") " pod="openshift-monitoring/metrics-server-567b9cf7f-cxnj2" Mar 13 12:42:05.957159 master-0 kubenswrapper[7518]: I0313 12:42:05.953343 7518 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-metrics-server-tls\" (UniqueName: \"kubernetes.io/secret/fc192c03-5aec-4507-a702-56bf98c96e9c-secret-metrics-server-tls\") pod \"metrics-server-567b9cf7f-cxnj2\" (UID: \"fc192c03-5aec-4507-a702-56bf98c96e9c\") " pod="openshift-monitoring/metrics-server-567b9cf7f-cxnj2" Mar 13 12:42:05.958064 master-0 kubenswrapper[7518]: I0313 12:42:05.957855 7518 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"client-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fc192c03-5aec-4507-a702-56bf98c96e9c-client-ca-bundle\") pod \"metrics-server-567b9cf7f-cxnj2\" (UID: \"fc192c03-5aec-4507-a702-56bf98c96e9c\") " pod="openshift-monitoring/metrics-server-567b9cf7f-cxnj2" Mar 13 12:42:05.974042 master-0 kubenswrapper[7518]: I0313 12:42:05.972506 7518 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c69h2\" (UniqueName: \"kubernetes.io/projected/fc192c03-5aec-4507-a702-56bf98c96e9c-kube-api-access-c69h2\") pod \"metrics-server-567b9cf7f-cxnj2\" (UID: \"fc192c03-5aec-4507-a702-56bf98c96e9c\") " pod="openshift-monitoring/metrics-server-567b9cf7f-cxnj2" Mar 13 12:42:06.168095 master-0 kubenswrapper[7518]: I0313 12:42:06.168035 7518 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/metrics-server-567b9cf7f-cxnj2" Mar 13 12:42:06.542409 master-0 kubenswrapper[7518]: I0313 12:42:06.542329 7518 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-wtf6j container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 12:42:06.542409 master-0 kubenswrapper[7518]: [-]has-synced failed: reason withheld Mar 13 12:42:06.542409 master-0 kubenswrapper[7518]: [+]process-running ok Mar 13 12:42:06.542409 master-0 kubenswrapper[7518]: healthz check failed Mar 13 12:42:06.542847 master-0 kubenswrapper[7518]: I0313 12:42:06.542425 7518 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-wtf6j" podUID="45925a5e-41ae-4c19-b586-3151c7677612" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 12:42:07.134838 master-0 kubenswrapper[7518]: I0313 12:42:07.134804 7518 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/metrics-server-567b9cf7f-cxnj2"] Mar 13 12:42:07.541938 
master-0 kubenswrapper[7518]: I0313 12:42:07.541865 7518 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-wtf6j container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 12:42:07.541938 master-0 kubenswrapper[7518]: [-]has-synced failed: reason withheld Mar 13 12:42:07.541938 master-0 kubenswrapper[7518]: [+]process-running ok Mar 13 12:42:07.541938 master-0 kubenswrapper[7518]: healthz check failed Mar 13 12:42:07.541938 master-0 kubenswrapper[7518]: I0313 12:42:07.541931 7518 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-wtf6j" podUID="45925a5e-41ae-4c19-b586-3151c7677612" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 12:42:08.132834 master-0 kubenswrapper[7518]: I0313 12:42:08.132776 7518 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/metrics-server-567b9cf7f-cxnj2" event={"ID":"fc192c03-5aec-4507-a702-56bf98c96e9c","Type":"ContainerStarted","Data":"ca4392c691682c0095dfe8e779e3de1082f741c49a5ae52776e0a4782a168b3b"} Mar 13 12:42:08.583274 master-0 kubenswrapper[7518]: I0313 12:42:08.583227 7518 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-wtf6j container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 12:42:08.583274 master-0 kubenswrapper[7518]: [-]has-synced failed: reason withheld Mar 13 12:42:08.583274 master-0 kubenswrapper[7518]: [+]process-running ok Mar 13 12:42:08.583274 master-0 kubenswrapper[7518]: healthz check failed Mar 13 12:42:08.583833 master-0 kubenswrapper[7518]: I0313 12:42:08.583288 7518 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-wtf6j" 
podUID="45925a5e-41ae-4c19-b586-3151c7677612" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 12:42:09.542752 master-0 kubenswrapper[7518]: I0313 12:42:09.542708 7518 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-wtf6j container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 12:42:09.542752 master-0 kubenswrapper[7518]: [-]has-synced failed: reason withheld Mar 13 12:42:09.542752 master-0 kubenswrapper[7518]: [+]process-running ok Mar 13 12:42:09.542752 master-0 kubenswrapper[7518]: healthz check failed Mar 13 12:42:09.543253 master-0 kubenswrapper[7518]: I0313 12:42:09.543227 7518 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-wtf6j" podUID="45925a5e-41ae-4c19-b586-3151c7677612" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 12:42:10.542589 master-0 kubenswrapper[7518]: I0313 12:42:10.542489 7518 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-wtf6j container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 12:42:10.542589 master-0 kubenswrapper[7518]: [-]has-synced failed: reason withheld Mar 13 12:42:10.542589 master-0 kubenswrapper[7518]: [+]process-running ok Mar 13 12:42:10.542589 master-0 kubenswrapper[7518]: healthz check failed Mar 13 12:42:10.542589 master-0 kubenswrapper[7518]: I0313 12:42:10.542581 7518 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-wtf6j" podUID="45925a5e-41ae-4c19-b586-3151c7677612" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 12:42:11.541546 master-0 kubenswrapper[7518]: I0313 12:42:11.541490 7518 
patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-wtf6j container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 12:42:11.541546 master-0 kubenswrapper[7518]: [-]has-synced failed: reason withheld Mar 13 12:42:11.541546 master-0 kubenswrapper[7518]: [+]process-running ok Mar 13 12:42:11.541546 master-0 kubenswrapper[7518]: healthz check failed Mar 13 12:42:11.541901 master-0 kubenswrapper[7518]: I0313 12:42:11.541570 7518 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-wtf6j" podUID="45925a5e-41ae-4c19-b586-3151c7677612" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 12:42:12.167893 master-0 kubenswrapper[7518]: I0313 12:42:12.167823 7518 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/metrics-server-567b9cf7f-cxnj2" event={"ID":"fc192c03-5aec-4507-a702-56bf98c96e9c","Type":"ContainerStarted","Data":"6446a8dda38eb9740b431e3cbbce0e66637311ae9d8e6bde203aefb67d8183fd"} Mar 13 12:42:12.280515 master-0 kubenswrapper[7518]: I0313 12:42:12.280434 7518 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/metrics-server-567b9cf7f-cxnj2" podStartSLOduration=3.093055228 podStartE2EDuration="7.280411995s" podCreationTimestamp="2026-03-13 12:42:05 +0000 UTC" firstStartedPulling="2026-03-13 12:42:07.140876301 +0000 UTC m=+281.773945488" lastFinishedPulling="2026-03-13 12:42:11.328233068 +0000 UTC m=+285.961302255" observedRunningTime="2026-03-13 12:42:12.278408525 +0000 UTC m=+286.911477712" watchObservedRunningTime="2026-03-13 12:42:12.280411995 +0000 UTC m=+286.913481182" Mar 13 12:42:12.542168 master-0 kubenswrapper[7518]: I0313 12:42:12.542098 7518 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-wtf6j container/router namespace/openshift-ingress: Startup 
probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 12:42:12.542168 master-0 kubenswrapper[7518]: [-]has-synced failed: reason withheld Mar 13 12:42:12.542168 master-0 kubenswrapper[7518]: [+]process-running ok Mar 13 12:42:12.542168 master-0 kubenswrapper[7518]: healthz check failed Mar 13 12:42:12.542458 master-0 kubenswrapper[7518]: I0313 12:42:12.542187 7518 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-wtf6j" podUID="45925a5e-41ae-4c19-b586-3151c7677612" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 12:42:13.544502 master-0 kubenswrapper[7518]: I0313 12:42:13.544436 7518 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-wtf6j container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 12:42:13.544502 master-0 kubenswrapper[7518]: [-]has-synced failed: reason withheld Mar 13 12:42:13.544502 master-0 kubenswrapper[7518]: [+]process-running ok Mar 13 12:42:13.544502 master-0 kubenswrapper[7518]: healthz check failed Mar 13 12:42:13.545607 master-0 kubenswrapper[7518]: I0313 12:42:13.544531 7518 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-wtf6j" podUID="45925a5e-41ae-4c19-b586-3151c7677612" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 12:42:14.542083 master-0 kubenswrapper[7518]: I0313 12:42:14.542016 7518 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-wtf6j container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 12:42:14.542083 master-0 kubenswrapper[7518]: [-]has-synced failed: reason withheld Mar 
13 12:42:14.542083 master-0 kubenswrapper[7518]: [+]process-running ok Mar 13 12:42:14.542083 master-0 kubenswrapper[7518]: healthz check failed Mar 13 12:42:14.542470 master-0 kubenswrapper[7518]: I0313 12:42:14.542109 7518 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-wtf6j" podUID="45925a5e-41ae-4c19-b586-3151c7677612" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 12:42:15.562504 master-0 kubenswrapper[7518]: I0313 12:42:15.562404 7518 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-wtf6j container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 12:42:15.562504 master-0 kubenswrapper[7518]: [-]has-synced failed: reason withheld Mar 13 12:42:15.562504 master-0 kubenswrapper[7518]: [+]process-running ok Mar 13 12:42:15.562504 master-0 kubenswrapper[7518]: healthz check failed Mar 13 12:42:15.562504 master-0 kubenswrapper[7518]: I0313 12:42:15.562498 7518 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-wtf6j" podUID="45925a5e-41ae-4c19-b586-3151c7677612" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 12:42:16.542601 master-0 kubenswrapper[7518]: I0313 12:42:16.542532 7518 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-wtf6j container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 12:42:16.542601 master-0 kubenswrapper[7518]: [-]has-synced failed: reason withheld Mar 13 12:42:16.542601 master-0 kubenswrapper[7518]: [+]process-running ok Mar 13 12:42:16.542601 master-0 kubenswrapper[7518]: healthz check failed Mar 13 12:42:16.543120 master-0 kubenswrapper[7518]: I0313 12:42:16.542625 
7518 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-wtf6j" podUID="45925a5e-41ae-4c19-b586-3151c7677612" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 12:42:17.542153 master-0 kubenswrapper[7518]: I0313 12:42:17.542066 7518 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-wtf6j container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 12:42:17.542153 master-0 kubenswrapper[7518]: [-]has-synced failed: reason withheld Mar 13 12:42:17.542153 master-0 kubenswrapper[7518]: [+]process-running ok Mar 13 12:42:17.542153 master-0 kubenswrapper[7518]: healthz check failed Mar 13 12:42:17.542153 master-0 kubenswrapper[7518]: I0313 12:42:17.542147 7518 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-wtf6j" podUID="45925a5e-41ae-4c19-b586-3151c7677612" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 12:42:18.543999 master-0 kubenswrapper[7518]: I0313 12:42:18.543768 7518 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-wtf6j container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 12:42:18.543999 master-0 kubenswrapper[7518]: [-]has-synced failed: reason withheld Mar 13 12:42:18.543999 master-0 kubenswrapper[7518]: [+]process-running ok Mar 13 12:42:18.543999 master-0 kubenswrapper[7518]: healthz check failed Mar 13 12:42:18.543999 master-0 kubenswrapper[7518]: I0313 12:42:18.543826 7518 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-wtf6j" podUID="45925a5e-41ae-4c19-b586-3151c7677612" containerName="router" probeResult="failure" output="HTTP 
probe failed with statuscode: 500" Mar 13 12:42:19.543122 master-0 kubenswrapper[7518]: I0313 12:42:19.543041 7518 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-wtf6j container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 12:42:19.543122 master-0 kubenswrapper[7518]: [-]has-synced failed: reason withheld Mar 13 12:42:19.543122 master-0 kubenswrapper[7518]: [+]process-running ok Mar 13 12:42:19.543122 master-0 kubenswrapper[7518]: healthz check failed Mar 13 12:42:19.543504 master-0 kubenswrapper[7518]: I0313 12:42:19.543164 7518 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-wtf6j" podUID="45925a5e-41ae-4c19-b586-3151c7677612" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 12:42:20.542738 master-0 kubenswrapper[7518]: I0313 12:42:20.542671 7518 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-wtf6j container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 12:42:20.542738 master-0 kubenswrapper[7518]: [-]has-synced failed: reason withheld Mar 13 12:42:20.542738 master-0 kubenswrapper[7518]: [+]process-running ok Mar 13 12:42:20.542738 master-0 kubenswrapper[7518]: healthz check failed Mar 13 12:42:20.543605 master-0 kubenswrapper[7518]: I0313 12:42:20.543504 7518 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-wtf6j" podUID="45925a5e-41ae-4c19-b586-3151c7677612" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 12:42:21.542923 master-0 kubenswrapper[7518]: I0313 12:42:21.542832 7518 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-wtf6j container/router 
namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 12:42:21.542923 master-0 kubenswrapper[7518]: [-]has-synced failed: reason withheld Mar 13 12:42:21.542923 master-0 kubenswrapper[7518]: [+]process-running ok Mar 13 12:42:21.542923 master-0 kubenswrapper[7518]: healthz check failed Mar 13 12:42:21.543497 master-0 kubenswrapper[7518]: I0313 12:42:21.542951 7518 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-wtf6j" podUID="45925a5e-41ae-4c19-b586-3151c7677612" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 12:42:22.541615 master-0 kubenswrapper[7518]: I0313 12:42:22.541548 7518 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-wtf6j container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 12:42:22.541615 master-0 kubenswrapper[7518]: [-]has-synced failed: reason withheld Mar 13 12:42:22.541615 master-0 kubenswrapper[7518]: [+]process-running ok Mar 13 12:42:22.541615 master-0 kubenswrapper[7518]: healthz check failed Mar 13 12:42:22.541615 master-0 kubenswrapper[7518]: I0313 12:42:22.541610 7518 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-wtf6j" podUID="45925a5e-41ae-4c19-b586-3151c7677612" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 12:42:23.543329 master-0 kubenswrapper[7518]: I0313 12:42:23.543271 7518 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-wtf6j container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 12:42:23.543329 master-0 kubenswrapper[7518]: 
[-]has-synced failed: reason withheld Mar 13 12:42:23.543329 master-0 kubenswrapper[7518]: [+]process-running ok Mar 13 12:42:23.543329 master-0 kubenswrapper[7518]: healthz check failed Mar 13 12:42:23.543329 master-0 kubenswrapper[7518]: I0313 12:42:23.543331 7518 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-wtf6j" podUID="45925a5e-41ae-4c19-b586-3151c7677612" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 12:42:24.542550 master-0 kubenswrapper[7518]: I0313 12:42:24.542498 7518 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-wtf6j container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 12:42:24.542550 master-0 kubenswrapper[7518]: [-]has-synced failed: reason withheld Mar 13 12:42:24.542550 master-0 kubenswrapper[7518]: [+]process-running ok Mar 13 12:42:24.542550 master-0 kubenswrapper[7518]: healthz check failed Mar 13 12:42:24.542877 master-0 kubenswrapper[7518]: I0313 12:42:24.542572 7518 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-wtf6j" podUID="45925a5e-41ae-4c19-b586-3151c7677612" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 12:42:25.541574 master-0 kubenswrapper[7518]: I0313 12:42:25.541512 7518 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-wtf6j container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 12:42:25.541574 master-0 kubenswrapper[7518]: [-]has-synced failed: reason withheld Mar 13 12:42:25.541574 master-0 kubenswrapper[7518]: [+]process-running ok Mar 13 12:42:25.541574 master-0 kubenswrapper[7518]: healthz check failed Mar 13 12:42:25.542335 master-0 
kubenswrapper[7518]: I0313 12:42:25.541577 7518 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-wtf6j" podUID="45925a5e-41ae-4c19-b586-3151c7677612" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 13 12:42:25.685351 master-0 kubenswrapper[7518]: I0313 12:42:25.685292 7518 scope.go:117] "RemoveContainer" containerID="3c4695e1552ba9205d33b8d7524c5a76469234a9b454c27b01c396a95436c2b9"
Mar 13 12:42:26.168952 master-0 kubenswrapper[7518]: I0313 12:42:26.168812 7518 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-monitoring/metrics-server-567b9cf7f-cxnj2"
Mar 13 12:42:26.168952 master-0 kubenswrapper[7518]: I0313 12:42:26.168899 7518 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-monitoring/metrics-server-567b9cf7f-cxnj2"
Mar 13 12:42:26.542390 master-0 kubenswrapper[7518]: I0313 12:42:26.542344 7518 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-wtf6j container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 13 12:42:26.542390 master-0 kubenswrapper[7518]: [-]has-synced failed: reason withheld
Mar 13 12:42:26.542390 master-0 kubenswrapper[7518]: [+]process-running ok
Mar 13 12:42:26.542390 master-0 kubenswrapper[7518]: healthz check failed
Mar 13 12:42:26.543456 master-0 kubenswrapper[7518]: I0313 12:42:26.543182 7518 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-wtf6j" podUID="45925a5e-41ae-4c19-b586-3151c7677612" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 13 12:42:27.542254 master-0 kubenswrapper[7518]: I0313 12:42:27.542187 7518 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-wtf6j container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 13 12:42:27.542254 master-0 kubenswrapper[7518]: [-]has-synced failed: reason withheld
Mar 13 12:42:27.542254 master-0 kubenswrapper[7518]: [+]process-running ok
Mar 13 12:42:27.542254 master-0 kubenswrapper[7518]: healthz check failed
Mar 13 12:42:27.542936 master-0 kubenswrapper[7518]: I0313 12:42:27.542269 7518 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-wtf6j" podUID="45925a5e-41ae-4c19-b586-3151c7677612" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 13 12:42:28.543064 master-0 kubenswrapper[7518]: I0313 12:42:28.542968 7518 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-wtf6j container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 13 12:42:28.543064 master-0 kubenswrapper[7518]: [-]has-synced failed: reason withheld
Mar 13 12:42:28.543064 master-0 kubenswrapper[7518]: [+]process-running ok
Mar 13 12:42:28.543064 master-0 kubenswrapper[7518]: healthz check failed
Mar 13 12:42:28.543064 master-0 kubenswrapper[7518]: I0313 12:42:28.543040 7518 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-wtf6j" podUID="45925a5e-41ae-4c19-b586-3151c7677612" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 13 12:42:29.283226 master-0 kubenswrapper[7518]: I0313 12:42:29.283121 7518 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress-operator_ingress-operator-677db989d6-ckl2j_2f79578c-bbfb-4968-893a-730deb4c01f9/ingress-operator/1.log"
Mar 13 12:42:29.284476 master-0 kubenswrapper[7518]: I0313 12:42:29.284417 7518 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress-operator_ingress-operator-677db989d6-ckl2j_2f79578c-bbfb-4968-893a-730deb4c01f9/ingress-operator/0.log"
Mar 13 12:42:29.284712 master-0 kubenswrapper[7518]: I0313 12:42:29.284484 7518 generic.go:334] "Generic (PLEG): container finished" podID="2f79578c-bbfb-4968-893a-730deb4c01f9" containerID="c36bf45a4804fa4ca98a882a198414395cc18ce172e9fe0b2eeeacf2ec4ae9ef" exitCode=1
Mar 13 12:42:29.284712 master-0 kubenswrapper[7518]: I0313 12:42:29.284533 7518 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-677db989d6-ckl2j" event={"ID":"2f79578c-bbfb-4968-893a-730deb4c01f9","Type":"ContainerDied","Data":"c36bf45a4804fa4ca98a882a198414395cc18ce172e9fe0b2eeeacf2ec4ae9ef"}
Mar 13 12:42:29.284712 master-0 kubenswrapper[7518]: I0313 12:42:29.284619 7518 scope.go:117] "RemoveContainer" containerID="6a8c75c694096fc8dedc129901064fbff36d84f9daf7b91e5a68c2b191c60f00"
Mar 13 12:42:29.285282 master-0 kubenswrapper[7518]: I0313 12:42:29.285238 7518 scope.go:117] "RemoveContainer" containerID="c36bf45a4804fa4ca98a882a198414395cc18ce172e9fe0b2eeeacf2ec4ae9ef"
Mar 13 12:42:29.285561 master-0 kubenswrapper[7518]: E0313 12:42:29.285519 7518 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ingress-operator\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ingress-operator pod=ingress-operator-677db989d6-ckl2j_openshift-ingress-operator(2f79578c-bbfb-4968-893a-730deb4c01f9)\"" pod="openshift-ingress-operator/ingress-operator-677db989d6-ckl2j" podUID="2f79578c-bbfb-4968-893a-730deb4c01f9"
Mar 13 12:42:29.542848 master-0 kubenswrapper[7518]: I0313 12:42:29.542728 7518 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-wtf6j container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 13 12:42:29.542848 master-0 kubenswrapper[7518]: [-]has-synced failed: reason withheld
Mar 13 12:42:29.542848 master-0 kubenswrapper[7518]: [+]process-running ok
Mar 13 12:42:29.542848 master-0 kubenswrapper[7518]: healthz check failed
Mar 13 12:42:29.544232 master-0 kubenswrapper[7518]: I0313 12:42:29.542863 7518 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-wtf6j" podUID="45925a5e-41ae-4c19-b586-3151c7677612" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 13 12:42:30.291920 master-0 kubenswrapper[7518]: I0313 12:42:30.291848 7518 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress-operator_ingress-operator-677db989d6-ckl2j_2f79578c-bbfb-4968-893a-730deb4c01f9/ingress-operator/1.log"
Mar 13 12:42:30.545078 master-0 kubenswrapper[7518]: I0313 12:42:30.544905 7518 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-wtf6j container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 13 12:42:30.545078 master-0 kubenswrapper[7518]: [-]has-synced failed: reason withheld
Mar 13 12:42:30.545078 master-0 kubenswrapper[7518]: [+]process-running ok
Mar 13 12:42:30.545078 master-0 kubenswrapper[7518]: healthz check failed
Mar 13 12:42:30.545078 master-0 kubenswrapper[7518]: I0313 12:42:30.545003 7518 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-wtf6j" podUID="45925a5e-41ae-4c19-b586-3151c7677612" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 13 12:42:31.541906 master-0 kubenswrapper[7518]: I0313 12:42:31.541827 7518 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-wtf6j container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 13 12:42:31.541906 master-0 kubenswrapper[7518]: [-]has-synced failed: reason withheld
Mar 13 12:42:31.541906 master-0 kubenswrapper[7518]: [+]process-running ok
Mar 13 12:42:31.541906 master-0 kubenswrapper[7518]: healthz check failed
Mar 13 12:42:31.542322 master-0 kubenswrapper[7518]: I0313 12:42:31.541935 7518 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-wtf6j" podUID="45925a5e-41ae-4c19-b586-3151c7677612" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 13 12:42:32.541638 master-0 kubenswrapper[7518]: I0313 12:42:32.541579 7518 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-wtf6j container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 13 12:42:32.541638 master-0 kubenswrapper[7518]: [-]has-synced failed: reason withheld
Mar 13 12:42:32.541638 master-0 kubenswrapper[7518]: [+]process-running ok
Mar 13 12:42:32.541638 master-0 kubenswrapper[7518]: healthz check failed
Mar 13 12:42:32.542276 master-0 kubenswrapper[7518]: I0313 12:42:32.541669 7518 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-wtf6j" podUID="45925a5e-41ae-4c19-b586-3151c7677612" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 13 12:42:33.542276 master-0 kubenswrapper[7518]: I0313 12:42:33.542206 7518 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-wtf6j container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 13 12:42:33.542276 master-0 kubenswrapper[7518]: [-]has-synced failed: reason withheld
Mar 13 12:42:33.542276 master-0 kubenswrapper[7518]: [+]process-running ok
Mar 13 12:42:33.542276 master-0 kubenswrapper[7518]: healthz check failed
Mar 13 12:42:33.543551 master-0 kubenswrapper[7518]: I0313 12:42:33.543478 7518 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-wtf6j" podUID="45925a5e-41ae-4c19-b586-3151c7677612" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 13 12:42:34.542261 master-0 kubenswrapper[7518]: I0313 12:42:34.542114 7518 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-wtf6j container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 13 12:42:34.542261 master-0 kubenswrapper[7518]: [-]has-synced failed: reason withheld
Mar 13 12:42:34.542261 master-0 kubenswrapper[7518]: [+]process-running ok
Mar 13 12:42:34.542261 master-0 kubenswrapper[7518]: healthz check failed
Mar 13 12:42:34.542261 master-0 kubenswrapper[7518]: I0313 12:42:34.542276 7518 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-wtf6j" podUID="45925a5e-41ae-4c19-b586-3151c7677612" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 13 12:42:35.541877 master-0 kubenswrapper[7518]: I0313 12:42:35.541768 7518 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-wtf6j container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 13 12:42:35.541877 master-0 kubenswrapper[7518]: [-]has-synced failed: reason withheld
Mar 13 12:42:35.541877 master-0 kubenswrapper[7518]: [+]process-running ok
Mar 13 12:42:35.541877 master-0 kubenswrapper[7518]: healthz check failed
Mar 13 12:42:35.542781 master-0 kubenswrapper[7518]: I0313 12:42:35.541909 7518 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-wtf6j" podUID="45925a5e-41ae-4c19-b586-3151c7677612" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 13 12:42:36.541267 master-0 kubenswrapper[7518]: I0313 12:42:36.541191 7518 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-wtf6j container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 13 12:42:36.541267 master-0 kubenswrapper[7518]: [-]has-synced failed: reason withheld
Mar 13 12:42:36.541267 master-0 kubenswrapper[7518]: [+]process-running ok
Mar 13 12:42:36.541267 master-0 kubenswrapper[7518]: healthz check failed
Mar 13 12:42:36.541267 master-0 kubenswrapper[7518]: I0313 12:42:36.541256 7518 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-wtf6j" podUID="45925a5e-41ae-4c19-b586-3151c7677612" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 13 12:42:37.541866 master-0 kubenswrapper[7518]: I0313 12:42:37.541793 7518 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-wtf6j container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 13 12:42:37.541866 master-0 kubenswrapper[7518]: [-]has-synced failed: reason withheld
Mar 13 12:42:37.541866 master-0 kubenswrapper[7518]: [+]process-running ok
Mar 13 12:42:37.541866 master-0 kubenswrapper[7518]: healthz check failed
Mar 13 12:42:37.541866 master-0 kubenswrapper[7518]: I0313 12:42:37.541854 7518 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-wtf6j" podUID="45925a5e-41ae-4c19-b586-3151c7677612" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 13 12:42:38.542310 master-0 kubenswrapper[7518]: I0313 12:42:38.542265 7518 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-wtf6j container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 13 12:42:38.542310 master-0 kubenswrapper[7518]: [-]has-synced failed: reason withheld
Mar 13 12:42:38.542310 master-0 kubenswrapper[7518]: [+]process-running ok
Mar 13 12:42:38.542310 master-0 kubenswrapper[7518]: healthz check failed
Mar 13 12:42:38.543326 master-0 kubenswrapper[7518]: I0313 12:42:38.542991 7518 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-wtf6j" podUID="45925a5e-41ae-4c19-b586-3151c7677612" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 13 12:42:39.542362 master-0 kubenswrapper[7518]: I0313 12:42:39.542267 7518 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-wtf6j container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 13 12:42:39.542362 master-0 kubenswrapper[7518]: [-]has-synced failed: reason withheld
Mar 13 12:42:39.542362 master-0 kubenswrapper[7518]: [+]process-running ok
Mar 13 12:42:39.542362 master-0 kubenswrapper[7518]: healthz check failed
Mar 13 12:42:39.542996 master-0 kubenswrapper[7518]: I0313 12:42:39.542391 7518 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-wtf6j" podUID="45925a5e-41ae-4c19-b586-3151c7677612" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 13 12:42:40.542318 master-0 kubenswrapper[7518]: I0313 12:42:40.542231 7518 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-wtf6j container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 13 12:42:40.542318 master-0 kubenswrapper[7518]: [-]has-synced failed: reason withheld
Mar 13 12:42:40.542318 master-0 kubenswrapper[7518]: [+]process-running ok
Mar 13 12:42:40.542318 master-0 kubenswrapper[7518]: healthz check failed
Mar 13 12:42:40.542960 master-0 kubenswrapper[7518]: I0313 12:42:40.542318 7518 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-wtf6j" podUID="45925a5e-41ae-4c19-b586-3151c7677612" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 13 12:42:40.669318 master-0 kubenswrapper[7518]: I0313 12:42:40.669271 7518 scope.go:117] "RemoveContainer" containerID="c36bf45a4804fa4ca98a882a198414395cc18ce172e9fe0b2eeeacf2ec4ae9ef"
Mar 13 12:42:41.364385 master-0 kubenswrapper[7518]: I0313 12:42:41.364340 7518 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress-operator_ingress-operator-677db989d6-ckl2j_2f79578c-bbfb-4968-893a-730deb4c01f9/ingress-operator/1.log"
Mar 13 12:42:41.364747 master-0 kubenswrapper[7518]: I0313 12:42:41.364701 7518 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-677db989d6-ckl2j" event={"ID":"2f79578c-bbfb-4968-893a-730deb4c01f9","Type":"ContainerStarted","Data":"4045dec19d514a7cdc11bc9584aece668967f43e77e3659c49eadc29454d9d85"}
Mar 13 12:42:41.541873 master-0 kubenswrapper[7518]: I0313 12:42:41.541826 7518 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-wtf6j container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 13 12:42:41.541873 master-0 kubenswrapper[7518]: [-]has-synced failed: reason withheld
Mar 13 12:42:41.541873 master-0 kubenswrapper[7518]: [+]process-running ok
Mar 13 12:42:41.541873 master-0 kubenswrapper[7518]: healthz check failed
Mar 13 12:42:41.542190 master-0 kubenswrapper[7518]: I0313 12:42:41.541895 7518 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-wtf6j" podUID="45925a5e-41ae-4c19-b586-3151c7677612" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 13 12:42:42.541506 master-0 kubenswrapper[7518]: I0313 12:42:42.541444 7518 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-wtf6j container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 13 12:42:42.541506 master-0 kubenswrapper[7518]: [-]has-synced failed: reason withheld
Mar 13 12:42:42.541506 master-0 kubenswrapper[7518]: [+]process-running ok
Mar 13 12:42:42.541506 master-0 kubenswrapper[7518]: healthz check failed
Mar 13 12:42:42.542091 master-0 kubenswrapper[7518]: I0313 12:42:42.541525 7518 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-wtf6j" podUID="45925a5e-41ae-4c19-b586-3151c7677612" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 13 12:42:43.541090 master-0 kubenswrapper[7518]: I0313 12:42:43.540992 7518 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-wtf6j container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 13 12:42:43.541090 master-0 kubenswrapper[7518]: [-]has-synced failed: reason withheld
Mar 13 12:42:43.541090 master-0 kubenswrapper[7518]: [+]process-running ok
Mar 13 12:42:43.541090 master-0 kubenswrapper[7518]: healthz check failed
Mar 13 12:42:43.542428 master-0 kubenswrapper[7518]: I0313 12:42:43.541100 7518 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-wtf6j" podUID="45925a5e-41ae-4c19-b586-3151c7677612" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 13 12:42:44.542341 master-0 kubenswrapper[7518]: I0313 12:42:44.542211 7518 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-wtf6j container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 13 12:42:44.542341 master-0 kubenswrapper[7518]: [-]has-synced failed: reason withheld
Mar 13 12:42:44.542341 master-0 kubenswrapper[7518]: [+]process-running ok
Mar 13 12:42:44.542341 master-0 kubenswrapper[7518]: healthz check failed
Mar 13 12:42:44.542341 master-0 kubenswrapper[7518]: I0313 12:42:44.542317 7518 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-wtf6j" podUID="45925a5e-41ae-4c19-b586-3151c7677612" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 13 12:42:45.541318 master-0 kubenswrapper[7518]: I0313 12:42:45.541226 7518 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-wtf6j container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 13 12:42:45.541318 master-0 kubenswrapper[7518]: [-]has-synced failed: reason withheld
Mar 13 12:42:45.541318 master-0 kubenswrapper[7518]: [+]process-running ok
Mar 13 12:42:45.541318 master-0 kubenswrapper[7518]: healthz check failed
Mar 13 12:42:45.541318 master-0 kubenswrapper[7518]: I0313 12:42:45.541306 7518 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-wtf6j" podUID="45925a5e-41ae-4c19-b586-3151c7677612" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 13 12:42:46.180247 master-0 kubenswrapper[7518]: I0313 12:42:46.180102 7518 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-monitoring/metrics-server-567b9cf7f-cxnj2"
Mar 13 12:42:46.186170 master-0 kubenswrapper[7518]: I0313 12:42:46.186109 7518 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-monitoring/metrics-server-567b9cf7f-cxnj2"
Mar 13 12:42:46.541896 master-0 kubenswrapper[7518]: I0313 12:42:46.541824 7518 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-wtf6j container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 13 12:42:46.541896 master-0 kubenswrapper[7518]: [-]has-synced failed: reason withheld
Mar 13 12:42:46.541896 master-0 kubenswrapper[7518]: [+]process-running ok
Mar 13 12:42:46.541896 master-0 kubenswrapper[7518]: healthz check failed
Mar 13 12:42:46.542324 master-0 kubenswrapper[7518]: I0313 12:42:46.541924 7518 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-wtf6j" podUID="45925a5e-41ae-4c19-b586-3151c7677612" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 13 12:42:47.541203 master-0 kubenswrapper[7518]: I0313 12:42:47.541163 7518 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-wtf6j container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 13 12:42:47.541203 master-0 kubenswrapper[7518]: [-]has-synced failed: reason withheld
Mar 13 12:42:47.541203 master-0 kubenswrapper[7518]: [+]process-running ok
Mar 13 12:42:47.541203 master-0 kubenswrapper[7518]: healthz check failed
Mar 13 12:42:47.541751 master-0 kubenswrapper[7518]: I0313 12:42:47.541227 7518 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-wtf6j" podUID="45925a5e-41ae-4c19-b586-3151c7677612" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 13 12:42:48.548647 master-0 kubenswrapper[7518]: I0313 12:42:48.548417 7518 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-wtf6j container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 13 12:42:48.548647 master-0 kubenswrapper[7518]: [-]has-synced failed: reason withheld
Mar 13 12:42:48.548647 master-0 kubenswrapper[7518]: [+]process-running ok
Mar 13 12:42:48.548647 master-0 kubenswrapper[7518]: healthz check failed
Mar 13 12:42:48.548647 master-0 kubenswrapper[7518]: I0313 12:42:48.548485 7518 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-wtf6j" podUID="45925a5e-41ae-4c19-b586-3151c7677612" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 13 12:42:49.542204 master-0 kubenswrapper[7518]: I0313 12:42:49.542095 7518 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-wtf6j container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 13 12:42:49.542204 master-0 kubenswrapper[7518]: [-]has-synced failed: reason withheld
Mar 13 12:42:49.542204 master-0 kubenswrapper[7518]: [+]process-running ok
Mar 13 12:42:49.542204 master-0 kubenswrapper[7518]: healthz check failed
Mar 13 12:42:49.543004 master-0 kubenswrapper[7518]: I0313 12:42:49.542235 7518 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-wtf6j" podUID="45925a5e-41ae-4c19-b586-3151c7677612" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 13 12:42:50.542336 master-0 kubenswrapper[7518]: I0313 12:42:50.542271 7518 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-wtf6j container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 13 12:42:50.542336 master-0 kubenswrapper[7518]: [-]has-synced failed: reason withheld
Mar 13 12:42:50.542336 master-0 kubenswrapper[7518]: [+]process-running ok
Mar 13 12:42:50.542336 master-0 kubenswrapper[7518]: healthz check failed
Mar 13 12:42:50.542897 master-0 kubenswrapper[7518]: I0313 12:42:50.542345 7518 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-wtf6j" podUID="45925a5e-41ae-4c19-b586-3151c7677612" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 13 12:42:51.541568 master-0 kubenswrapper[7518]: I0313 12:42:51.541502 7518 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-wtf6j container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 13 12:42:51.541568 master-0 kubenswrapper[7518]: [-]has-synced failed: reason withheld
Mar 13 12:42:51.541568 master-0 kubenswrapper[7518]: [+]process-running ok
Mar 13 12:42:51.541568 master-0 kubenswrapper[7518]: healthz check failed
Mar 13 12:42:51.541876 master-0 kubenswrapper[7518]: I0313 12:42:51.541571 7518 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-wtf6j" podUID="45925a5e-41ae-4c19-b586-3151c7677612" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 13 12:42:52.544081 master-0 kubenswrapper[7518]: I0313 12:42:52.543989 7518 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-wtf6j container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 13 12:42:52.544081 master-0 kubenswrapper[7518]: [-]has-synced failed: reason withheld
Mar 13 12:42:52.544081 master-0 kubenswrapper[7518]: [+]process-running ok
Mar 13 12:42:52.544081 master-0 kubenswrapper[7518]: healthz check failed
Mar 13 12:42:52.544906 master-0 kubenswrapper[7518]: I0313 12:42:52.544115 7518 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-wtf6j" podUID="45925a5e-41ae-4c19-b586-3151c7677612" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 13 12:42:53.542915 master-0 kubenswrapper[7518]: I0313 12:42:53.542855 7518 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-wtf6j container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 13 12:42:53.542915 master-0 kubenswrapper[7518]: [-]has-synced failed: reason withheld
Mar 13 12:42:53.542915 master-0 kubenswrapper[7518]: [+]process-running ok
Mar 13 12:42:53.542915 master-0 kubenswrapper[7518]: healthz check failed
Mar 13 12:42:53.543525 master-0 kubenswrapper[7518]: I0313 12:42:53.543483 7518 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-wtf6j" podUID="45925a5e-41ae-4c19-b586-3151c7677612" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 13 12:42:54.542030 master-0 kubenswrapper[7518]: I0313 12:42:54.541944 7518 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-wtf6j container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 13 12:42:54.542030 master-0 kubenswrapper[7518]: [-]has-synced failed: reason withheld
Mar 13 12:42:54.542030 master-0 kubenswrapper[7518]: [+]process-running ok
Mar 13 12:42:54.542030 master-0 kubenswrapper[7518]: healthz check failed
Mar 13 12:42:54.542790 master-0 kubenswrapper[7518]: I0313 12:42:54.542036 7518 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-wtf6j" podUID="45925a5e-41ae-4c19-b586-3151c7677612" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 13 12:42:55.542076 master-0 kubenswrapper[7518]: I0313 12:42:55.542002 7518 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-wtf6j container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 13 12:42:55.542076 master-0 kubenswrapper[7518]: [-]has-synced failed: reason withheld
Mar 13 12:42:55.542076 master-0 kubenswrapper[7518]: [+]process-running ok
Mar 13 12:42:55.542076 master-0 kubenswrapper[7518]: healthz check failed
Mar 13 12:42:55.543580 master-0 kubenswrapper[7518]: I0313 12:42:55.542094 7518 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-wtf6j" podUID="45925a5e-41ae-4c19-b586-3151c7677612" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 13 12:42:56.542316 master-0 kubenswrapper[7518]: I0313 12:42:56.542241 7518 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-wtf6j container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 13 12:42:56.542316 master-0 kubenswrapper[7518]: [-]has-synced failed: reason withheld
Mar 13 12:42:56.542316 master-0 kubenswrapper[7518]: [+]process-running ok
Mar 13 12:42:56.542316 master-0 kubenswrapper[7518]: healthz check failed
Mar 13 12:42:56.544301 master-0 kubenswrapper[7518]: I0313 12:42:56.542328 7518 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-wtf6j" podUID="45925a5e-41ae-4c19-b586-3151c7677612" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 13 12:42:57.541060 master-0 kubenswrapper[7518]: I0313 12:42:57.540996 7518 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-wtf6j container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 13 12:42:57.541060 master-0 kubenswrapper[7518]: [-]has-synced failed: reason withheld
Mar 13 12:42:57.541060 master-0 kubenswrapper[7518]: [+]process-running ok
Mar 13 12:42:57.541060 master-0 kubenswrapper[7518]: healthz check failed
Mar 13 12:42:57.541395 master-0 kubenswrapper[7518]: I0313 12:42:57.541063 7518 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-wtf6j" podUID="45925a5e-41ae-4c19-b586-3151c7677612" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 13 12:42:58.543312 master-0 kubenswrapper[7518]: I0313 12:42:58.542404 7518 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-wtf6j container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 13 12:42:58.543312 master-0 kubenswrapper[7518]: [-]has-synced failed: reason withheld
Mar 13 12:42:58.543312 master-0 kubenswrapper[7518]: [+]process-running ok
Mar 13 12:42:58.543312 master-0 kubenswrapper[7518]: healthz check failed
Mar 13 12:42:58.543312 master-0 kubenswrapper[7518]: I0313 12:42:58.542780 7518 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-wtf6j" podUID="45925a5e-41ae-4c19-b586-3151c7677612" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 13 12:42:59.542012 master-0 kubenswrapper[7518]: I0313 12:42:59.541947 7518 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-wtf6j container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 13 12:42:59.542012 master-0 kubenswrapper[7518]: [-]has-synced failed: reason withheld
Mar 13 12:42:59.542012 master-0 kubenswrapper[7518]: [+]process-running ok
Mar 13 12:42:59.542012 master-0 kubenswrapper[7518]: healthz check failed
Mar 13 12:42:59.542406 master-0 kubenswrapper[7518]: I0313 12:42:59.542015 7518 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-wtf6j" podUID="45925a5e-41ae-4c19-b586-3151c7677612" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 13 12:43:00.541991 master-0 kubenswrapper[7518]: I0313 12:43:00.541913 7518 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-wtf6j container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 13 12:43:00.541991 master-0 kubenswrapper[7518]: [-]has-synced failed: reason withheld
Mar 13 12:43:00.541991 master-0 kubenswrapper[7518]: [+]process-running ok
Mar 13 12:43:00.541991 master-0 kubenswrapper[7518]: healthz check failed
Mar 13 12:43:00.542778 master-0 kubenswrapper[7518]: I0313 12:43:00.541997 7518 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-wtf6j" podUID="45925a5e-41ae-4c19-b586-3151c7677612" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 13 12:43:01.541654 master-0 kubenswrapper[7518]: I0313 12:43:01.541574 7518 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-wtf6j container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 13 12:43:01.541654 master-0 kubenswrapper[7518]: [-]has-synced failed: reason withheld
Mar 13 12:43:01.541654 master-0 kubenswrapper[7518]: [+]process-running ok
Mar 13 12:43:01.541654 master-0 kubenswrapper[7518]: healthz check failed
Mar 13 12:43:01.542493 master-0 kubenswrapper[7518]: I0313 12:43:01.541660 7518 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-wtf6j" podUID="45925a5e-41ae-4c19-b586-3151c7677612" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 13 12:43:02.541976 master-0 kubenswrapper[7518]: I0313 12:43:02.541876 7518 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-wtf6j container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 13 12:43:02.541976 master-0 kubenswrapper[7518]: [-]has-synced failed: reason withheld
Mar 13 12:43:02.541976 master-0 kubenswrapper[7518]: [+]process-running ok
Mar 13 12:43:02.541976 master-0 kubenswrapper[7518]: healthz check failed
Mar 13 12:43:02.542789 master-0 kubenswrapper[7518]: I0313 12:43:02.541979 7518 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-wtf6j" podUID="45925a5e-41ae-4c19-b586-3151c7677612" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 13 12:43:03.542974 master-0 kubenswrapper[7518]: I0313 12:43:03.542858 7518 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-wtf6j container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 13 12:43:03.542974 master-0 kubenswrapper[7518]: [-]has-synced failed: reason withheld
Mar 13 12:43:03.542974 master-0 kubenswrapper[7518]: [+]process-running ok
Mar 13 12:43:03.542974 master-0 kubenswrapper[7518]: healthz check failed
Mar 13 12:43:03.544220 master-0 kubenswrapper[7518]: I0313 12:43:03.542977 7518 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-wtf6j" podUID="45925a5e-41ae-4c19-b586-3151c7677612" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 13 12:43:04.542570 master-0 kubenswrapper[7518]: I0313 12:43:04.542480 7518 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-wtf6j container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 13 12:43:04.542570 master-0 kubenswrapper[7518]: [-]has-synced failed: reason withheld
Mar 13 12:43:04.542570 master-0 kubenswrapper[7518]: [+]process-running ok
Mar 13 12:43:04.542570 master-0 kubenswrapper[7518]: healthz check failed
Mar 13 12:43:04.542933 master-0 kubenswrapper[7518]: I0313 12:43:04.542597 7518 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-wtf6j" podUID="45925a5e-41ae-4c19-b586-3151c7677612" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 13 12:43:05.548102 master-0 kubenswrapper[7518]: I0313 12:43:05.547985 7518 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-wtf6j container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 13 12:43:05.548102 master-0 kubenswrapper[7518]: [-]has-synced failed: reason withheld
Mar 13 12:43:05.548102 master-0 kubenswrapper[7518]: [+]process-running ok
Mar 13 12:43:05.548102 master-0 kubenswrapper[7518]: healthz check failed
Mar 13 12:43:05.550664 master-0 kubenswrapper[7518]: I0313 12:43:05.550534 7518 prober.go:107] "Probe 
failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-wtf6j" podUID="45925a5e-41ae-4c19-b586-3151c7677612" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 12:43:06.541722 master-0 kubenswrapper[7518]: I0313 12:43:06.541664 7518 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-wtf6j container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 12:43:06.541722 master-0 kubenswrapper[7518]: [-]has-synced failed: reason withheld Mar 13 12:43:06.541722 master-0 kubenswrapper[7518]: [+]process-running ok Mar 13 12:43:06.541722 master-0 kubenswrapper[7518]: healthz check failed Mar 13 12:43:06.542010 master-0 kubenswrapper[7518]: I0313 12:43:06.541734 7518 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-wtf6j" podUID="45925a5e-41ae-4c19-b586-3151c7677612" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 12:43:07.541958 master-0 kubenswrapper[7518]: I0313 12:43:07.541895 7518 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-wtf6j container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 12:43:07.541958 master-0 kubenswrapper[7518]: [-]has-synced failed: reason withheld Mar 13 12:43:07.541958 master-0 kubenswrapper[7518]: [+]process-running ok Mar 13 12:43:07.541958 master-0 kubenswrapper[7518]: healthz check failed Mar 13 12:43:07.542668 master-0 kubenswrapper[7518]: I0313 12:43:07.541974 7518 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-wtf6j" podUID="45925a5e-41ae-4c19-b586-3151c7677612" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 
500" Mar 13 12:43:08.542032 master-0 kubenswrapper[7518]: I0313 12:43:08.541917 7518 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-wtf6j container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 12:43:08.542032 master-0 kubenswrapper[7518]: [-]has-synced failed: reason withheld Mar 13 12:43:08.542032 master-0 kubenswrapper[7518]: [+]process-running ok Mar 13 12:43:08.542032 master-0 kubenswrapper[7518]: healthz check failed Mar 13 12:43:08.542032 master-0 kubenswrapper[7518]: I0313 12:43:08.542023 7518 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-wtf6j" podUID="45925a5e-41ae-4c19-b586-3151c7677612" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 12:43:09.542567 master-0 kubenswrapper[7518]: I0313 12:43:09.542476 7518 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-wtf6j container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 12:43:09.542567 master-0 kubenswrapper[7518]: [-]has-synced failed: reason withheld Mar 13 12:43:09.542567 master-0 kubenswrapper[7518]: [+]process-running ok Mar 13 12:43:09.542567 master-0 kubenswrapper[7518]: healthz check failed Mar 13 12:43:09.543247 master-0 kubenswrapper[7518]: I0313 12:43:09.542565 7518 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-wtf6j" podUID="45925a5e-41ae-4c19-b586-3151c7677612" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 12:43:10.542150 master-0 kubenswrapper[7518]: I0313 12:43:10.542049 7518 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-wtf6j container/router namespace/openshift-ingress: Startup probe 
status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 12:43:10.542150 master-0 kubenswrapper[7518]: [-]has-synced failed: reason withheld Mar 13 12:43:10.542150 master-0 kubenswrapper[7518]: [+]process-running ok Mar 13 12:43:10.542150 master-0 kubenswrapper[7518]: healthz check failed Mar 13 12:43:10.542416 master-0 kubenswrapper[7518]: I0313 12:43:10.542168 7518 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-wtf6j" podUID="45925a5e-41ae-4c19-b586-3151c7677612" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 12:43:11.542619 master-0 kubenswrapper[7518]: I0313 12:43:11.542552 7518 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-wtf6j container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 12:43:11.542619 master-0 kubenswrapper[7518]: [-]has-synced failed: reason withheld Mar 13 12:43:11.542619 master-0 kubenswrapper[7518]: [+]process-running ok Mar 13 12:43:11.542619 master-0 kubenswrapper[7518]: healthz check failed Mar 13 12:43:11.542619 master-0 kubenswrapper[7518]: I0313 12:43:11.542628 7518 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-wtf6j" podUID="45925a5e-41ae-4c19-b586-3151c7677612" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 12:43:12.543329 master-0 kubenswrapper[7518]: I0313 12:43:12.543248 7518 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-wtf6j container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 12:43:12.543329 master-0 kubenswrapper[7518]: [-]has-synced failed: reason withheld Mar 13 
12:43:12.543329 master-0 kubenswrapper[7518]: [+]process-running ok Mar 13 12:43:12.543329 master-0 kubenswrapper[7518]: healthz check failed Mar 13 12:43:12.543896 master-0 kubenswrapper[7518]: I0313 12:43:12.543348 7518 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-wtf6j" podUID="45925a5e-41ae-4c19-b586-3151c7677612" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 12:43:13.542212 master-0 kubenswrapper[7518]: I0313 12:43:13.542111 7518 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-wtf6j container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 12:43:13.542212 master-0 kubenswrapper[7518]: [-]has-synced failed: reason withheld Mar 13 12:43:13.542212 master-0 kubenswrapper[7518]: [+]process-running ok Mar 13 12:43:13.542212 master-0 kubenswrapper[7518]: healthz check failed Mar 13 12:43:13.542754 master-0 kubenswrapper[7518]: I0313 12:43:13.542249 7518 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-wtf6j" podUID="45925a5e-41ae-4c19-b586-3151c7677612" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 12:43:14.542167 master-0 kubenswrapper[7518]: I0313 12:43:14.542048 7518 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-wtf6j container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 12:43:14.542167 master-0 kubenswrapper[7518]: [-]has-synced failed: reason withheld Mar 13 12:43:14.542167 master-0 kubenswrapper[7518]: [+]process-running ok Mar 13 12:43:14.542167 master-0 kubenswrapper[7518]: healthz check failed Mar 13 12:43:14.543453 master-0 kubenswrapper[7518]: I0313 12:43:14.542184 
7518 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-wtf6j" podUID="45925a5e-41ae-4c19-b586-3151c7677612" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 12:43:15.542440 master-0 kubenswrapper[7518]: I0313 12:43:15.542350 7518 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-wtf6j container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 12:43:15.542440 master-0 kubenswrapper[7518]: [-]has-synced failed: reason withheld Mar 13 12:43:15.542440 master-0 kubenswrapper[7518]: [+]process-running ok Mar 13 12:43:15.542440 master-0 kubenswrapper[7518]: healthz check failed Mar 13 12:43:15.542440 master-0 kubenswrapper[7518]: I0313 12:43:15.542429 7518 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-wtf6j" podUID="45925a5e-41ae-4c19-b586-3151c7677612" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 12:43:16.542303 master-0 kubenswrapper[7518]: I0313 12:43:16.542201 7518 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-wtf6j container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 12:43:16.542303 master-0 kubenswrapper[7518]: [-]has-synced failed: reason withheld Mar 13 12:43:16.542303 master-0 kubenswrapper[7518]: [+]process-running ok Mar 13 12:43:16.542303 master-0 kubenswrapper[7518]: healthz check failed Mar 13 12:43:16.543528 master-0 kubenswrapper[7518]: I0313 12:43:16.542318 7518 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-wtf6j" podUID="45925a5e-41ae-4c19-b586-3151c7677612" containerName="router" probeResult="failure" output="HTTP 
probe failed with statuscode: 500" Mar 13 12:43:17.541948 master-0 kubenswrapper[7518]: I0313 12:43:17.541897 7518 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-wtf6j container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 12:43:17.541948 master-0 kubenswrapper[7518]: [-]has-synced failed: reason withheld Mar 13 12:43:17.541948 master-0 kubenswrapper[7518]: [+]process-running ok Mar 13 12:43:17.541948 master-0 kubenswrapper[7518]: healthz check failed Mar 13 12:43:17.542553 master-0 kubenswrapper[7518]: I0313 12:43:17.542494 7518 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-wtf6j" podUID="45925a5e-41ae-4c19-b586-3151c7677612" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 12:43:18.542234 master-0 kubenswrapper[7518]: I0313 12:43:18.542118 7518 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-wtf6j container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 12:43:18.542234 master-0 kubenswrapper[7518]: [-]has-synced failed: reason withheld Mar 13 12:43:18.542234 master-0 kubenswrapper[7518]: [+]process-running ok Mar 13 12:43:18.542234 master-0 kubenswrapper[7518]: healthz check failed Mar 13 12:43:18.543162 master-0 kubenswrapper[7518]: I0313 12:43:18.542258 7518 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-wtf6j" podUID="45925a5e-41ae-4c19-b586-3151c7677612" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 12:43:19.543619 master-0 kubenswrapper[7518]: I0313 12:43:19.543229 7518 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-wtf6j container/router 
namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 12:43:19.543619 master-0 kubenswrapper[7518]: [-]has-synced failed: reason withheld Mar 13 12:43:19.543619 master-0 kubenswrapper[7518]: [+]process-running ok Mar 13 12:43:19.543619 master-0 kubenswrapper[7518]: healthz check failed Mar 13 12:43:19.543619 master-0 kubenswrapper[7518]: I0313 12:43:19.543340 7518 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-wtf6j" podUID="45925a5e-41ae-4c19-b586-3151c7677612" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 12:43:20.542400 master-0 kubenswrapper[7518]: I0313 12:43:20.542304 7518 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-wtf6j container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 12:43:20.542400 master-0 kubenswrapper[7518]: [-]has-synced failed: reason withheld Mar 13 12:43:20.542400 master-0 kubenswrapper[7518]: [+]process-running ok Mar 13 12:43:20.542400 master-0 kubenswrapper[7518]: healthz check failed Mar 13 12:43:20.542730 master-0 kubenswrapper[7518]: I0313 12:43:20.542411 7518 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-wtf6j" podUID="45925a5e-41ae-4c19-b586-3151c7677612" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 12:43:21.541959 master-0 kubenswrapper[7518]: I0313 12:43:21.541885 7518 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-wtf6j container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 12:43:21.541959 master-0 kubenswrapper[7518]: 
[-]has-synced failed: reason withheld Mar 13 12:43:21.541959 master-0 kubenswrapper[7518]: [+]process-running ok Mar 13 12:43:21.541959 master-0 kubenswrapper[7518]: healthz check failed Mar 13 12:43:21.542680 master-0 kubenswrapper[7518]: I0313 12:43:21.541984 7518 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-wtf6j" podUID="45925a5e-41ae-4c19-b586-3151c7677612" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 12:43:22.542266 master-0 kubenswrapper[7518]: I0313 12:43:22.542198 7518 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-wtf6j container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 12:43:22.542266 master-0 kubenswrapper[7518]: [-]has-synced failed: reason withheld Mar 13 12:43:22.542266 master-0 kubenswrapper[7518]: [+]process-running ok Mar 13 12:43:22.542266 master-0 kubenswrapper[7518]: healthz check failed Mar 13 12:43:22.542266 master-0 kubenswrapper[7518]: I0313 12:43:22.542270 7518 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-wtf6j" podUID="45925a5e-41ae-4c19-b586-3151c7677612" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 12:43:23.543086 master-0 kubenswrapper[7518]: I0313 12:43:23.542994 7518 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-wtf6j container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 12:43:23.543086 master-0 kubenswrapper[7518]: [-]has-synced failed: reason withheld Mar 13 12:43:23.543086 master-0 kubenswrapper[7518]: [+]process-running ok Mar 13 12:43:23.543086 master-0 kubenswrapper[7518]: healthz check failed Mar 13 12:43:23.544760 master-0 
kubenswrapper[7518]: I0313 12:43:23.543124 7518 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-wtf6j" podUID="45925a5e-41ae-4c19-b586-3151c7677612" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 12:43:24.541006 master-0 kubenswrapper[7518]: I0313 12:43:24.540950 7518 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-wtf6j container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 12:43:24.541006 master-0 kubenswrapper[7518]: [-]has-synced failed: reason withheld Mar 13 12:43:24.541006 master-0 kubenswrapper[7518]: [+]process-running ok Mar 13 12:43:24.541006 master-0 kubenswrapper[7518]: healthz check failed Mar 13 12:43:24.541300 master-0 kubenswrapper[7518]: I0313 12:43:24.541018 7518 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-wtf6j" podUID="45925a5e-41ae-4c19-b586-3151c7677612" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 12:43:25.542029 master-0 kubenswrapper[7518]: I0313 12:43:25.541962 7518 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-wtf6j container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 12:43:25.542029 master-0 kubenswrapper[7518]: [-]has-synced failed: reason withheld Mar 13 12:43:25.542029 master-0 kubenswrapper[7518]: [+]process-running ok Mar 13 12:43:25.542029 master-0 kubenswrapper[7518]: healthz check failed Mar 13 12:43:25.543370 master-0 kubenswrapper[7518]: I0313 12:43:25.542046 7518 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-wtf6j" podUID="45925a5e-41ae-4c19-b586-3151c7677612" 
containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 12:43:26.541958 master-0 kubenswrapper[7518]: I0313 12:43:26.541899 7518 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-wtf6j container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 12:43:26.541958 master-0 kubenswrapper[7518]: [-]has-synced failed: reason withheld Mar 13 12:43:26.541958 master-0 kubenswrapper[7518]: [+]process-running ok Mar 13 12:43:26.541958 master-0 kubenswrapper[7518]: healthz check failed Mar 13 12:43:26.542575 master-0 kubenswrapper[7518]: I0313 12:43:26.541972 7518 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-wtf6j" podUID="45925a5e-41ae-4c19-b586-3151c7677612" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 12:43:27.541767 master-0 kubenswrapper[7518]: I0313 12:43:27.541687 7518 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-wtf6j container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 12:43:27.541767 master-0 kubenswrapper[7518]: [-]has-synced failed: reason withheld Mar 13 12:43:27.541767 master-0 kubenswrapper[7518]: [+]process-running ok Mar 13 12:43:27.541767 master-0 kubenswrapper[7518]: healthz check failed Mar 13 12:43:27.541767 master-0 kubenswrapper[7518]: I0313 12:43:27.541762 7518 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-wtf6j" podUID="45925a5e-41ae-4c19-b586-3151c7677612" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 12:43:28.542357 master-0 kubenswrapper[7518]: I0313 12:43:28.542273 7518 patch_prober.go:28] interesting 
pod/router-default-79f8cd6fdd-wtf6j container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 12:43:28.542357 master-0 kubenswrapper[7518]: [-]has-synced failed: reason withheld Mar 13 12:43:28.542357 master-0 kubenswrapper[7518]: [+]process-running ok Mar 13 12:43:28.542357 master-0 kubenswrapper[7518]: healthz check failed Mar 13 12:43:28.543009 master-0 kubenswrapper[7518]: I0313 12:43:28.542372 7518 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-wtf6j" podUID="45925a5e-41ae-4c19-b586-3151c7677612" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 12:43:28.747730 master-0 kubenswrapper[7518]: I0313 12:43:28.747663 7518 patch_prober.go:28] interesting pod/machine-config-daemon-5h8rc container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 13 12:43:28.747924 master-0 kubenswrapper[7518]: I0313 12:43:28.747728 7518 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5h8rc" podUID="50be3c2b-284b-4f60-b4ed-2cc7b4e528fa" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 13 12:43:29.541628 master-0 kubenswrapper[7518]: I0313 12:43:29.541551 7518 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-wtf6j container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 12:43:29.541628 master-0 kubenswrapper[7518]: [-]has-synced failed: reason withheld Mar 13 12:43:29.541628 master-0 
kubenswrapper[7518]: [+]process-running ok Mar 13 12:43:29.541628 master-0 kubenswrapper[7518]: healthz check failed Mar 13 12:43:29.541910 master-0 kubenswrapper[7518]: I0313 12:43:29.541641 7518 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-wtf6j" podUID="45925a5e-41ae-4c19-b586-3151c7677612" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 12:43:30.547449 master-0 kubenswrapper[7518]: I0313 12:43:30.547275 7518 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-wtf6j container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 12:43:30.547449 master-0 kubenswrapper[7518]: [-]has-synced failed: reason withheld Mar 13 12:43:30.547449 master-0 kubenswrapper[7518]: [+]process-running ok Mar 13 12:43:30.547449 master-0 kubenswrapper[7518]: healthz check failed Mar 13 12:43:30.547449 master-0 kubenswrapper[7518]: I0313 12:43:30.547404 7518 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-wtf6j" podUID="45925a5e-41ae-4c19-b586-3151c7677612" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 12:43:31.542302 master-0 kubenswrapper[7518]: I0313 12:43:31.542116 7518 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-wtf6j container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 12:43:31.542302 master-0 kubenswrapper[7518]: [-]has-synced failed: reason withheld Mar 13 12:43:31.542302 master-0 kubenswrapper[7518]: [+]process-running ok Mar 13 12:43:31.542302 master-0 kubenswrapper[7518]: healthz check failed Mar 13 12:43:31.542880 master-0 kubenswrapper[7518]: I0313 12:43:31.542302 7518 prober.go:107] "Probe 
failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-wtf6j" podUID="45925a5e-41ae-4c19-b586-3151c7677612" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 12:43:32.542427 master-0 kubenswrapper[7518]: I0313 12:43:32.542324 7518 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-wtf6j container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 12:43:32.542427 master-0 kubenswrapper[7518]: [-]has-synced failed: reason withheld Mar 13 12:43:32.542427 master-0 kubenswrapper[7518]: [+]process-running ok Mar 13 12:43:32.542427 master-0 kubenswrapper[7518]: healthz check failed Mar 13 12:43:32.542947 master-0 kubenswrapper[7518]: I0313 12:43:32.542451 7518 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-wtf6j" podUID="45925a5e-41ae-4c19-b586-3151c7677612" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 12:43:33.542660 master-0 kubenswrapper[7518]: I0313 12:43:33.542551 7518 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-wtf6j container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 12:43:33.542660 master-0 kubenswrapper[7518]: [-]has-synced failed: reason withheld Mar 13 12:43:33.542660 master-0 kubenswrapper[7518]: [+]process-running ok Mar 13 12:43:33.542660 master-0 kubenswrapper[7518]: healthz check failed Mar 13 12:43:33.543767 master-0 kubenswrapper[7518]: I0313 12:43:33.542751 7518 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-wtf6j" podUID="45925a5e-41ae-4c19-b586-3151c7677612" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 
500" Mar 13 12:43:34.541797 master-0 kubenswrapper[7518]: I0313 12:43:34.541722 7518 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-wtf6j container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 12:43:34.541797 master-0 kubenswrapper[7518]: [-]has-synced failed: reason withheld Mar 13 12:43:34.541797 master-0 kubenswrapper[7518]: [+]process-running ok Mar 13 12:43:34.541797 master-0 kubenswrapper[7518]: healthz check failed Mar 13 12:43:34.541797 master-0 kubenswrapper[7518]: I0313 12:43:34.541796 7518 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-wtf6j" podUID="45925a5e-41ae-4c19-b586-3151c7677612" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 12:43:35.542459 master-0 kubenswrapper[7518]: I0313 12:43:35.542343 7518 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-wtf6j container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 12:43:35.542459 master-0 kubenswrapper[7518]: [-]has-synced failed: reason withheld Mar 13 12:43:35.542459 master-0 kubenswrapper[7518]: [+]process-running ok Mar 13 12:43:35.542459 master-0 kubenswrapper[7518]: healthz check failed Mar 13 12:43:35.542459 master-0 kubenswrapper[7518]: I0313 12:43:35.542430 7518 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-wtf6j" podUID="45925a5e-41ae-4c19-b586-3151c7677612" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 12:43:36.542461 master-0 kubenswrapper[7518]: I0313 12:43:36.542322 7518 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-wtf6j container/router namespace/openshift-ingress: Startup probe 
status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 13 12:43:36.542461 master-0 kubenswrapper[7518]: [-]has-synced failed: reason withheld
Mar 13 12:43:36.542461 master-0 kubenswrapper[7518]: [+]process-running ok
Mar 13 12:43:36.542461 master-0 kubenswrapper[7518]: healthz check failed
Mar 13 12:43:36.543799 master-0 kubenswrapper[7518]: I0313 12:43:36.542466 7518 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-wtf6j" podUID="45925a5e-41ae-4c19-b586-3151c7677612" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 13 12:43:37.541576 master-0 kubenswrapper[7518]: I0313 12:43:37.541514 7518 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-wtf6j container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 13 12:43:37.541576 master-0 kubenswrapper[7518]: [-]has-synced failed: reason withheld
Mar 13 12:43:37.541576 master-0 kubenswrapper[7518]: [+]process-running ok
Mar 13 12:43:37.541576 master-0 kubenswrapper[7518]: healthz check failed
Mar 13 12:43:37.541576 master-0 kubenswrapper[7518]: I0313 12:43:37.541578 7518 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-wtf6j" podUID="45925a5e-41ae-4c19-b586-3151c7677612" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 13 12:43:38.542197 master-0 kubenswrapper[7518]: I0313 12:43:38.542110 7518 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-wtf6j container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 13 12:43:38.542197 master-0 kubenswrapper[7518]: [-]has-synced failed: reason withheld
Mar 13 12:43:38.542197 master-0 kubenswrapper[7518]: [+]process-running ok
Mar 13 12:43:38.542197 master-0 kubenswrapper[7518]: healthz check failed
Mar 13 12:43:38.542861 master-0 kubenswrapper[7518]: I0313 12:43:38.542199 7518 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-wtf6j" podUID="45925a5e-41ae-4c19-b586-3151c7677612" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 13 12:43:39.541381 master-0 kubenswrapper[7518]: I0313 12:43:39.541319 7518 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-wtf6j container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 13 12:43:39.541381 master-0 kubenswrapper[7518]: [-]has-synced failed: reason withheld
Mar 13 12:43:39.541381 master-0 kubenswrapper[7518]: [+]process-running ok
Mar 13 12:43:39.541381 master-0 kubenswrapper[7518]: healthz check failed
Mar 13 12:43:39.541741 master-0 kubenswrapper[7518]: I0313 12:43:39.541396 7518 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-wtf6j" podUID="45925a5e-41ae-4c19-b586-3151c7677612" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 13 12:43:40.541517 master-0 kubenswrapper[7518]: I0313 12:43:40.541408 7518 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-wtf6j container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 13 12:43:40.541517 master-0 kubenswrapper[7518]: [-]has-synced failed: reason withheld
Mar 13 12:43:40.541517 master-0 kubenswrapper[7518]: [+]process-running ok
Mar 13 12:43:40.541517 master-0 kubenswrapper[7518]: healthz check failed
Mar 13 12:43:40.541517 master-0 kubenswrapper[7518]: I0313 12:43:40.541477 7518 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-wtf6j" podUID="45925a5e-41ae-4c19-b586-3151c7677612" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 13 12:43:41.542576 master-0 kubenswrapper[7518]: I0313 12:43:41.542512 7518 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-wtf6j container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 13 12:43:41.542576 master-0 kubenswrapper[7518]: [-]has-synced failed: reason withheld
Mar 13 12:43:41.542576 master-0 kubenswrapper[7518]: [+]process-running ok
Mar 13 12:43:41.542576 master-0 kubenswrapper[7518]: healthz check failed
Mar 13 12:43:41.543314 master-0 kubenswrapper[7518]: I0313 12:43:41.542596 7518 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-wtf6j" podUID="45925a5e-41ae-4c19-b586-3151c7677612" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 13 12:43:42.541228 master-0 kubenswrapper[7518]: I0313 12:43:42.541162 7518 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-wtf6j container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 13 12:43:42.541228 master-0 kubenswrapper[7518]: [-]has-synced failed: reason withheld
Mar 13 12:43:42.541228 master-0 kubenswrapper[7518]: [+]process-running ok
Mar 13 12:43:42.541228 master-0 kubenswrapper[7518]: healthz check failed
Mar 13 12:43:42.541524 master-0 kubenswrapper[7518]: I0313 12:43:42.541235 7518 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-wtf6j" podUID="45925a5e-41ae-4c19-b586-3151c7677612" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 13 12:43:43.542380 master-0 kubenswrapper[7518]: I0313 12:43:43.542310 7518 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-wtf6j container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 13 12:43:43.542380 master-0 kubenswrapper[7518]: [-]has-synced failed: reason withheld
Mar 13 12:43:43.542380 master-0 kubenswrapper[7518]: [+]process-running ok
Mar 13 12:43:43.542380 master-0 kubenswrapper[7518]: healthz check failed
Mar 13 12:43:43.543406 master-0 kubenswrapper[7518]: I0313 12:43:43.542387 7518 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-wtf6j" podUID="45925a5e-41ae-4c19-b586-3151c7677612" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 13 12:43:44.541584 master-0 kubenswrapper[7518]: I0313 12:43:44.541518 7518 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-wtf6j container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 13 12:43:44.541584 master-0 kubenswrapper[7518]: [-]has-synced failed: reason withheld
Mar 13 12:43:44.541584 master-0 kubenswrapper[7518]: [+]process-running ok
Mar 13 12:43:44.541584 master-0 kubenswrapper[7518]: healthz check failed
Mar 13 12:43:44.541888 master-0 kubenswrapper[7518]: I0313 12:43:44.541589 7518 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-wtf6j" podUID="45925a5e-41ae-4c19-b586-3151c7677612" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 13 12:43:45.542130 master-0 kubenswrapper[7518]: I0313 12:43:45.542050 7518 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-wtf6j container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 13 12:43:45.542130 master-0 kubenswrapper[7518]: [-]has-synced failed: reason withheld
Mar 13 12:43:45.542130 master-0 kubenswrapper[7518]: [+]process-running ok
Mar 13 12:43:45.542130 master-0 kubenswrapper[7518]: healthz check failed
Mar 13 12:43:45.542130 master-0 kubenswrapper[7518]: I0313 12:43:45.542108 7518 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-wtf6j" podUID="45925a5e-41ae-4c19-b586-3151c7677612" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 13 12:43:46.543005 master-0 kubenswrapper[7518]: I0313 12:43:46.542921 7518 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-wtf6j container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 13 12:43:46.543005 master-0 kubenswrapper[7518]: [-]has-synced failed: reason withheld
Mar 13 12:43:46.543005 master-0 kubenswrapper[7518]: [+]process-running ok
Mar 13 12:43:46.543005 master-0 kubenswrapper[7518]: healthz check failed
Mar 13 12:43:46.543621 master-0 kubenswrapper[7518]: I0313 12:43:46.543013 7518 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-wtf6j" podUID="45925a5e-41ae-4c19-b586-3151c7677612" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 13 12:43:47.541280 master-0 kubenswrapper[7518]: I0313 12:43:47.541223 7518 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-wtf6j container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 13 12:43:47.541280 master-0 kubenswrapper[7518]: [-]has-synced failed: reason withheld
Mar 13 12:43:47.541280 master-0 kubenswrapper[7518]: [+]process-running ok
Mar 13 12:43:47.541280 master-0 kubenswrapper[7518]: healthz check failed
Mar 13 12:43:47.541783 master-0 kubenswrapper[7518]: I0313 12:43:47.541748 7518 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-wtf6j" podUID="45925a5e-41ae-4c19-b586-3151c7677612" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 13 12:43:48.541946 master-0 kubenswrapper[7518]: I0313 12:43:48.541876 7518 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-wtf6j container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 13 12:43:48.541946 master-0 kubenswrapper[7518]: [-]has-synced failed: reason withheld
Mar 13 12:43:48.541946 master-0 kubenswrapper[7518]: [+]process-running ok
Mar 13 12:43:48.541946 master-0 kubenswrapper[7518]: healthz check failed
Mar 13 12:43:48.542677 master-0 kubenswrapper[7518]: I0313 12:43:48.541951 7518 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-wtf6j" podUID="45925a5e-41ae-4c19-b586-3151c7677612" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 13 12:43:49.544244 master-0 kubenswrapper[7518]: I0313 12:43:49.544184 7518 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-wtf6j container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 13 12:43:49.544244 master-0 kubenswrapper[7518]: [-]has-synced failed: reason withheld
Mar 13 12:43:49.544244 master-0 kubenswrapper[7518]: [+]process-running ok
Mar 13 12:43:49.544244 master-0 kubenswrapper[7518]: healthz check failed
Mar 13 12:43:49.545059 master-0 kubenswrapper[7518]: I0313 12:43:49.544254 7518 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-wtf6j" podUID="45925a5e-41ae-4c19-b586-3151c7677612" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 13 12:43:50.541923 master-0 kubenswrapper[7518]: I0313 12:43:50.541844 7518 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-wtf6j container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 13 12:43:50.541923 master-0 kubenswrapper[7518]: [-]has-synced failed: reason withheld
Mar 13 12:43:50.541923 master-0 kubenswrapper[7518]: [+]process-running ok
Mar 13 12:43:50.541923 master-0 kubenswrapper[7518]: healthz check failed
Mar 13 12:43:50.542270 master-0 kubenswrapper[7518]: I0313 12:43:50.541920 7518 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-wtf6j" podUID="45925a5e-41ae-4c19-b586-3151c7677612" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 13 12:43:51.542690 master-0 kubenswrapper[7518]: I0313 12:43:51.542623 7518 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-wtf6j container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 13 12:43:51.542690 master-0 kubenswrapper[7518]: [-]has-synced failed: reason withheld
Mar 13 12:43:51.542690 master-0 kubenswrapper[7518]: [+]process-running ok
Mar 13 12:43:51.542690 master-0 kubenswrapper[7518]: healthz check failed
Mar 13 12:43:51.543957 master-0 kubenswrapper[7518]: I0313 12:43:51.542694 7518 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-wtf6j" podUID="45925a5e-41ae-4c19-b586-3151c7677612" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 13 12:43:52.542991 master-0 kubenswrapper[7518]: I0313 12:43:52.542860 7518 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-wtf6j container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 13 12:43:52.542991 master-0 kubenswrapper[7518]: [-]has-synced failed: reason withheld
Mar 13 12:43:52.542991 master-0 kubenswrapper[7518]: [+]process-running ok
Mar 13 12:43:52.542991 master-0 kubenswrapper[7518]: healthz check failed
Mar 13 12:43:52.542991 master-0 kubenswrapper[7518]: I0313 12:43:52.542963 7518 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-wtf6j" podUID="45925a5e-41ae-4c19-b586-3151c7677612" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 13 12:43:53.542660 master-0 kubenswrapper[7518]: I0313 12:43:53.542620 7518 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-wtf6j container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 13 12:43:53.542660 master-0 kubenswrapper[7518]: [-]has-synced failed: reason withheld
Mar 13 12:43:53.542660 master-0 kubenswrapper[7518]: [+]process-running ok
Mar 13 12:43:53.542660 master-0 kubenswrapper[7518]: healthz check failed
Mar 13 12:43:53.543101 master-0 kubenswrapper[7518]: I0313 12:43:53.543073 7518 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-wtf6j" podUID="45925a5e-41ae-4c19-b586-3151c7677612" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 13 12:43:53.543603 master-0 kubenswrapper[7518]: I0313 12:43:53.543582 7518 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-79f8cd6fdd-wtf6j"
Mar 13 12:43:53.544396 master-0 kubenswrapper[7518]: I0313 12:43:53.544370 7518 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="router" containerStatusID={"Type":"cri-o","ID":"825d71b79346e6c336f0a44e80a86fbf2296a449b4aa734881eff9c8477a662b"} pod="openshift-ingress/router-default-79f8cd6fdd-wtf6j" containerMessage="Container router failed startup probe, will be restarted"
Mar 13 12:43:53.544553 master-0 kubenswrapper[7518]: I0313 12:43:53.544530 7518 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ingress/router-default-79f8cd6fdd-wtf6j" podUID="45925a5e-41ae-4c19-b586-3151c7677612" containerName="router" containerID="cri-o://825d71b79346e6c336f0a44e80a86fbf2296a449b4aa734881eff9c8477a662b" gracePeriod=3600
Mar 13 12:44:40.321556 master-0 kubenswrapper[7518]: I0313 12:44:40.321500 7518 generic.go:334] "Generic (PLEG): container finished" podID="45925a5e-41ae-4c19-b586-3151c7677612" containerID="825d71b79346e6c336f0a44e80a86fbf2296a449b4aa734881eff9c8477a662b" exitCode=0
Mar 13 12:44:40.322255 master-0 kubenswrapper[7518]: I0313 12:44:40.321570 7518 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-79f8cd6fdd-wtf6j" event={"ID":"45925a5e-41ae-4c19-b586-3151c7677612","Type":"ContainerDied","Data":"825d71b79346e6c336f0a44e80a86fbf2296a449b4aa734881eff9c8477a662b"}
Mar 13 12:44:40.322255 master-0 kubenswrapper[7518]: I0313 12:44:40.321609 7518 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-79f8cd6fdd-wtf6j" event={"ID":"45925a5e-41ae-4c19-b586-3151c7677612","Type":"ContainerStarted","Data":"1033e2108ac67b4d3f75cb158efc6594f949bbad75576abf1a2d8dbd850e968d"}
Mar 13 12:44:40.539199 master-0 kubenswrapper[7518]: I0313 12:44:40.539084 7518 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-79f8cd6fdd-wtf6j"
Mar 13 12:44:40.542604 master-0 kubenswrapper[7518]: I0313 12:44:40.542558 7518 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-wtf6j container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 13 12:44:40.542604 master-0 kubenswrapper[7518]: [-]has-synced failed: reason withheld
Mar 13 12:44:40.542604 master-0 kubenswrapper[7518]: [+]process-running ok
Mar 13 12:44:40.542604 master-0 kubenswrapper[7518]: healthz check failed
Mar 13 12:44:40.542880 master-0 kubenswrapper[7518]: I0313 12:44:40.542638 7518 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-wtf6j" podUID="45925a5e-41ae-4c19-b586-3151c7677612" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 13 12:44:41.541863 master-0 kubenswrapper[7518]: I0313 12:44:41.541763 7518 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-wtf6j container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 13 12:44:41.541863 master-0 kubenswrapper[7518]: [-]has-synced failed: reason withheld
Mar 13 12:44:41.541863 master-0 kubenswrapper[7518]: [+]process-running ok
Mar 13 12:44:41.541863 master-0 kubenswrapper[7518]: healthz check failed
Mar 13 12:44:41.541863 master-0 kubenswrapper[7518]: I0313 12:44:41.541856 7518 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-wtf6j" podUID="45925a5e-41ae-4c19-b586-3151c7677612" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 13 12:44:42.334894 master-0 kubenswrapper[7518]: I0313 12:44:42.334838 7518 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress-operator_ingress-operator-677db989d6-ckl2j_2f79578c-bbfb-4968-893a-730deb4c01f9/ingress-operator/2.log"
Mar 13 12:44:42.335833 master-0 kubenswrapper[7518]: I0313 12:44:42.335769 7518 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress-operator_ingress-operator-677db989d6-ckl2j_2f79578c-bbfb-4968-893a-730deb4c01f9/ingress-operator/1.log"
Mar 13 12:44:42.336447 master-0 kubenswrapper[7518]: I0313 12:44:42.336381 7518 generic.go:334] "Generic (PLEG): container finished" podID="2f79578c-bbfb-4968-893a-730deb4c01f9" containerID="4045dec19d514a7cdc11bc9584aece668967f43e77e3659c49eadc29454d9d85" exitCode=1
Mar 13 12:44:42.337531 master-0 kubenswrapper[7518]: I0313 12:44:42.337472 7518 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-677db989d6-ckl2j" event={"ID":"2f79578c-bbfb-4968-893a-730deb4c01f9","Type":"ContainerDied","Data":"4045dec19d514a7cdc11bc9584aece668967f43e77e3659c49eadc29454d9d85"}
Mar 13 12:44:42.337784 master-0 kubenswrapper[7518]: I0313 12:44:42.337713 7518 scope.go:117] "RemoveContainer" containerID="c36bf45a4804fa4ca98a882a198414395cc18ce172e9fe0b2eeeacf2ec4ae9ef"
Mar 13 12:44:42.354323 master-0 kubenswrapper[7518]: I0313 12:44:42.354178 7518 scope.go:117] "RemoveContainer" containerID="4045dec19d514a7cdc11bc9584aece668967f43e77e3659c49eadc29454d9d85"
Mar 13 12:44:42.354762 master-0 kubenswrapper[7518]: E0313 12:44:42.354705 7518 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ingress-operator\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ingress-operator pod=ingress-operator-677db989d6-ckl2j_openshift-ingress-operator(2f79578c-bbfb-4968-893a-730deb4c01f9)\"" pod="openshift-ingress-operator/ingress-operator-677db989d6-ckl2j" podUID="2f79578c-bbfb-4968-893a-730deb4c01f9"
Mar 13 12:44:42.541900 master-0 kubenswrapper[7518]: I0313 12:44:42.541827 7518 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-wtf6j container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 13 12:44:42.541900 master-0 kubenswrapper[7518]: [-]has-synced failed: reason withheld
Mar 13 12:44:42.541900 master-0 kubenswrapper[7518]: [+]process-running ok
Mar 13 12:44:42.541900 master-0 kubenswrapper[7518]: healthz check failed
Mar 13 12:44:42.541900 master-0 kubenswrapper[7518]: I0313 12:44:42.541889 7518 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-wtf6j" podUID="45925a5e-41ae-4c19-b586-3151c7677612" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 13 12:44:43.346411 master-0 kubenswrapper[7518]: I0313 12:44:43.346338 7518 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress-operator_ingress-operator-677db989d6-ckl2j_2f79578c-bbfb-4968-893a-730deb4c01f9/ingress-operator/2.log"
Mar 13 12:44:43.541971 master-0 kubenswrapper[7518]: I0313 12:44:43.541874 7518 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-wtf6j container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 13 12:44:43.541971 master-0 kubenswrapper[7518]: [-]has-synced failed: reason withheld
Mar 13 12:44:43.541971 master-0 kubenswrapper[7518]: [+]process-running ok
Mar 13 12:44:43.541971 master-0 kubenswrapper[7518]: healthz check failed
Mar 13 12:44:43.542677 master-0 kubenswrapper[7518]: I0313 12:44:43.541982 7518 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-wtf6j" podUID="45925a5e-41ae-4c19-b586-3151c7677612" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 13 12:44:44.542441 master-0 kubenswrapper[7518]: I0313 12:44:44.542371 7518 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-wtf6j container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 13 12:44:44.542441 master-0 kubenswrapper[7518]: [-]has-synced failed: reason withheld
Mar 13 12:44:44.542441 master-0 kubenswrapper[7518]: [+]process-running ok
Mar 13 12:44:44.542441 master-0 kubenswrapper[7518]: healthz check failed
Mar 13 12:44:44.543237 master-0 kubenswrapper[7518]: I0313 12:44:44.542471 7518 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-wtf6j" podUID="45925a5e-41ae-4c19-b586-3151c7677612" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 13 12:44:45.542082 master-0 kubenswrapper[7518]: I0313 12:44:45.542005 7518 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-wtf6j container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 13 12:44:45.542082 master-0 kubenswrapper[7518]: [-]has-synced failed: reason withheld
Mar 13 12:44:45.542082 master-0 kubenswrapper[7518]: [+]process-running ok
Mar 13 12:44:45.542082 master-0 kubenswrapper[7518]: healthz check failed
Mar 13 12:44:45.542082 master-0 kubenswrapper[7518]: I0313 12:44:45.542080 7518 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-wtf6j" podUID="45925a5e-41ae-4c19-b586-3151c7677612" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 13 12:44:46.542630 master-0 kubenswrapper[7518]: I0313 12:44:46.542520 7518 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-wtf6j container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 13 12:44:46.542630 master-0 kubenswrapper[7518]: [-]has-synced failed: reason withheld
Mar 13 12:44:46.542630 master-0 kubenswrapper[7518]: [+]process-running ok
Mar 13 12:44:46.542630 master-0 kubenswrapper[7518]: healthz check failed
Mar 13 12:44:46.542630 master-0 kubenswrapper[7518]: I0313 12:44:46.542625 7518 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-wtf6j" podUID="45925a5e-41ae-4c19-b586-3151c7677612" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 13 12:44:47.542115 master-0 kubenswrapper[7518]: I0313 12:44:47.542055 7518 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-wtf6j container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 13 12:44:47.542115 master-0 kubenswrapper[7518]: [-]has-synced failed: reason withheld
Mar 13 12:44:47.542115 master-0 kubenswrapper[7518]: [+]process-running ok
Mar 13 12:44:47.542115 master-0 kubenswrapper[7518]: healthz check failed
Mar 13 12:44:47.542620 master-0 kubenswrapper[7518]: I0313 12:44:47.542158 7518 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-wtf6j" podUID="45925a5e-41ae-4c19-b586-3151c7677612" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 13 12:44:48.550605 master-0 kubenswrapper[7518]: I0313 12:44:48.549474 7518 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-wtf6j container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 13 12:44:48.550605 master-0 kubenswrapper[7518]: [-]has-synced failed: reason withheld
Mar 13 12:44:48.550605 master-0 kubenswrapper[7518]: [+]process-running ok
Mar 13 12:44:48.550605 master-0 kubenswrapper[7518]: healthz check failed
Mar 13 12:44:48.550605 master-0 kubenswrapper[7518]: I0313 12:44:48.549589 7518 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-wtf6j" podUID="45925a5e-41ae-4c19-b586-3151c7677612" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 13 12:44:49.539721 master-0 kubenswrapper[7518]: I0313 12:44:49.539647 7518 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-79f8cd6fdd-wtf6j"
Mar 13 12:44:49.542306 master-0 kubenswrapper[7518]: I0313 12:44:49.542266 7518 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-wtf6j container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 13 12:44:49.542306 master-0 kubenswrapper[7518]: [-]has-synced failed: reason withheld
Mar 13 12:44:49.542306 master-0 kubenswrapper[7518]: [+]process-running ok
Mar 13 12:44:49.542306 master-0 kubenswrapper[7518]: healthz check failed
Mar 13 12:44:49.542590 master-0 kubenswrapper[7518]: I0313 12:44:49.542341 7518 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-wtf6j" podUID="45925a5e-41ae-4c19-b586-3151c7677612" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 13 12:44:50.545230 master-0 kubenswrapper[7518]: I0313 12:44:50.544523 7518 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-wtf6j container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 13 12:44:50.545230 master-0 kubenswrapper[7518]: [-]has-synced failed: reason withheld
Mar 13 12:44:50.545230 master-0 kubenswrapper[7518]: [+]process-running ok
Mar 13 12:44:50.545230 master-0 kubenswrapper[7518]: healthz check failed
Mar 13 12:44:50.545826 master-0 kubenswrapper[7518]: I0313 12:44:50.545320 7518 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-wtf6j" podUID="45925a5e-41ae-4c19-b586-3151c7677612" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 13 12:44:51.542702 master-0 kubenswrapper[7518]: I0313 12:44:51.542367 7518 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-wtf6j container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 13 12:44:51.542702 master-0 kubenswrapper[7518]: [-]has-synced failed: reason withheld
Mar 13 12:44:51.542702 master-0 kubenswrapper[7518]: [+]process-running ok
Mar 13 12:44:51.542702 master-0 kubenswrapper[7518]: healthz check failed
Mar 13 12:44:51.542702 master-0 kubenswrapper[7518]: I0313 12:44:51.542689 7518 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-wtf6j" podUID="45925a5e-41ae-4c19-b586-3151c7677612" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 13 12:44:52.544093 master-0 kubenswrapper[7518]: I0313 12:44:52.543961 7518 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-wtf6j container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 13 12:44:52.544093 master-0 kubenswrapper[7518]: [-]has-synced failed: reason withheld
Mar 13 12:44:52.544093 master-0 kubenswrapper[7518]: [+]process-running ok
Mar 13 12:44:52.544093 master-0 kubenswrapper[7518]: healthz check failed
Mar 13 12:44:52.544093 master-0 kubenswrapper[7518]: I0313 12:44:52.544075 7518 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-wtf6j" podUID="45925a5e-41ae-4c19-b586-3151c7677612" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 13 12:44:53.542297 master-0 kubenswrapper[7518]: I0313 12:44:53.542209 7518 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-wtf6j container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 13 12:44:53.542297 master-0 kubenswrapper[7518]: [-]has-synced failed: reason withheld
Mar 13 12:44:53.542297 master-0 kubenswrapper[7518]: [+]process-running ok
Mar 13 12:44:53.542297 master-0 kubenswrapper[7518]: healthz check failed
Mar 13 12:44:53.542297 master-0 kubenswrapper[7518]: I0313 12:44:53.542283 7518 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-wtf6j" podUID="45925a5e-41ae-4c19-b586-3151c7677612" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 13 12:44:54.542758 master-0 kubenswrapper[7518]: I0313 12:44:54.542708 7518 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-wtf6j container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 13 12:44:54.542758 master-0 kubenswrapper[7518]: [-]has-synced failed: reason withheld
Mar 13 12:44:54.542758 master-0 kubenswrapper[7518]: [+]process-running ok
Mar 13 12:44:54.542758 master-0 kubenswrapper[7518]: healthz check failed
Mar 13 12:44:54.543432 master-0 kubenswrapper[7518]: I0313 12:44:54.542788 7518 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-wtf6j" podUID="45925a5e-41ae-4c19-b586-3151c7677612" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 13 12:44:55.541155 master-0 kubenswrapper[7518]: I0313 12:44:55.541084 7518 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-wtf6j container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 13 12:44:55.541155 master-0 kubenswrapper[7518]: [-]has-synced failed: reason withheld
Mar 13 12:44:55.541155 master-0 kubenswrapper[7518]: [+]process-running ok
Mar 13 12:44:55.541155 master-0 kubenswrapper[7518]: healthz check failed
Mar 13 12:44:55.541470 master-0 kubenswrapper[7518]: I0313 12:44:55.541209 7518 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-wtf6j" podUID="45925a5e-41ae-4c19-b586-3151c7677612" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 13 12:44:56.542911 master-0 kubenswrapper[7518]: I0313 12:44:56.542815 7518 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-wtf6j container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 13 12:44:56.542911 master-0 kubenswrapper[7518]: [-]has-synced failed: reason withheld
Mar 13 12:44:56.542911 master-0 kubenswrapper[7518]: [+]process-running ok
Mar 13 12:44:56.542911 master-0 kubenswrapper[7518]: healthz check failed
Mar 13 12:44:56.542911 master-0 kubenswrapper[7518]: I0313 12:44:56.542900 7518 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-wtf6j" podUID="45925a5e-41ae-4c19-b586-3151c7677612" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 13 12:44:56.598436 master-0 kubenswrapper[7518]: I0313 12:44:56.598360 7518 scope.go:117] "RemoveContainer" containerID="4045dec19d514a7cdc11bc9584aece668967f43e77e3659c49eadc29454d9d85"
Mar 13 12:44:56.598709 master-0 kubenswrapper[7518]: E0313 12:44:56.598692 7518 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ingress-operator\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ingress-operator pod=ingress-operator-677db989d6-ckl2j_openshift-ingress-operator(2f79578c-bbfb-4968-893a-730deb4c01f9)\"" pod="openshift-ingress-operator/ingress-operator-677db989d6-ckl2j" podUID="2f79578c-bbfb-4968-893a-730deb4c01f9"
Mar 13 12:44:57.541118 master-0 kubenswrapper[7518]: I0313 12:44:57.541043 7518 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-wtf6j container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 13 12:44:57.541118 master-0 kubenswrapper[7518]: [-]has-synced failed: reason withheld
Mar 13 12:44:57.541118 master-0 kubenswrapper[7518]: [+]process-running ok
Mar 13 12:44:57.541118 master-0 kubenswrapper[7518]: healthz check failed
Mar 13 12:44:57.541397 master-0 kubenswrapper[7518]: I0313 12:44:57.541130 7518 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-wtf6j" podUID="45925a5e-41ae-4c19-b586-3151c7677612" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 13 12:44:58.541864 master-0 kubenswrapper[7518]: I0313 12:44:58.541790 7518 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-wtf6j container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 13 12:44:58.541864 master-0 kubenswrapper[7518]: [-]has-synced failed: reason withheld
Mar 13 12:44:58.541864 master-0 kubenswrapper[7518]: [+]process-running ok
Mar 13 12:44:58.541864 master-0 kubenswrapper[7518]: healthz check failed
Mar 13 12:44:58.541864 master-0 kubenswrapper[7518]: I0313 12:44:58.541858 7518 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-wtf6j" podUID="45925a5e-41ae-4c19-b586-3151c7677612" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 13 12:44:59.542061 master-0 kubenswrapper[7518]: I0313 12:44:59.541996 7518 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-wtf6j container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 13 12:44:59.542061 master-0 kubenswrapper[7518]: [-]has-synced failed: reason withheld
Mar 13 12:44:59.542061 master-0 kubenswrapper[7518]: [+]process-running ok
Mar 13 12:44:59.542061 master-0 kubenswrapper[7518]: healthz check failed
Mar 13 12:44:59.545020 master-0 kubenswrapper[7518]: I0313 12:44:59.544237 7518 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-wtf6j" podUID="45925a5e-41ae-4c19-b586-3151c7677612" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 13 12:45:00.541564 master-0 kubenswrapper[7518]: I0313 12:45:00.541425 7518 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-wtf6j container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 13 12:45:00.541564 master-0 kubenswrapper[7518]: [-]has-synced failed: reason withheld
Mar 13 12:45:00.541564 master-0 kubenswrapper[7518]: [+]process-running ok
Mar 13 12:45:00.541564 master-0 kubenswrapper[7518]: healthz check failed
Mar 13 12:45:00.541564 master-0 kubenswrapper[7518]: I0313 12:45:00.541499 7518 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-wtf6j" podUID="45925a5e-41ae-4c19-b586-3151c7677612" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 13 12:45:01.541599 master-0 kubenswrapper[7518]: I0313 12:45:01.541531 7518 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-wtf6j container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 13 12:45:01.541599 master-0 kubenswrapper[7518]: [-]has-synced failed: reason withheld
Mar 13 12:45:01.541599 master-0 kubenswrapper[7518]: [+]process-running ok
Mar 13 12:45:01.541599 master-0 kubenswrapper[7518]: healthz check failed
Mar 13 12:45:01.542199 master-0 kubenswrapper[7518]: I0313 12:45:01.541611 7518 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-wtf6j" podUID="45925a5e-41ae-4c19-b586-3151c7677612" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 13 12:45:02.543068 master-0 kubenswrapper[7518]: I0313 12:45:02.543000 7518 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-wtf6j container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 13 12:45:02.543068 master-0 kubenswrapper[7518]: [-]has-synced failed: reason withheld
Mar 13 12:45:02.543068 master-0 kubenswrapper[7518]: [+]process-running ok
Mar 13 12:45:02.543068 master-0 kubenswrapper[7518]: healthz check failed
Mar 13 12:45:02.543736 master-0 kubenswrapper[7518]: I0313 12:45:02.543095 7518 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-wtf6j" podUID="45925a5e-41ae-4c19-b586-3151c7677612" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 13 12:45:03.542790 master-0 kubenswrapper[7518]: I0313 12:45:03.542712 7518 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-wtf6j container/router
namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 12:45:03.542790 master-0 kubenswrapper[7518]: [-]has-synced failed: reason withheld Mar 13 12:45:03.542790 master-0 kubenswrapper[7518]: [+]process-running ok Mar 13 12:45:03.542790 master-0 kubenswrapper[7518]: healthz check failed Mar 13 12:45:03.543118 master-0 kubenswrapper[7518]: I0313 12:45:03.542809 7518 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-wtf6j" podUID="45925a5e-41ae-4c19-b586-3151c7677612" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 12:45:04.541949 master-0 kubenswrapper[7518]: I0313 12:45:04.541854 7518 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-wtf6j container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 12:45:04.541949 master-0 kubenswrapper[7518]: [-]has-synced failed: reason withheld Mar 13 12:45:04.541949 master-0 kubenswrapper[7518]: [+]process-running ok Mar 13 12:45:04.541949 master-0 kubenswrapper[7518]: healthz check failed Mar 13 12:45:04.542313 master-0 kubenswrapper[7518]: I0313 12:45:04.541972 7518 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-wtf6j" podUID="45925a5e-41ae-4c19-b586-3151c7677612" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 12:45:05.541585 master-0 kubenswrapper[7518]: I0313 12:45:05.541489 7518 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-wtf6j container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 12:45:05.541585 master-0 kubenswrapper[7518]: 
[-]has-synced failed: reason withheld Mar 13 12:45:05.541585 master-0 kubenswrapper[7518]: [+]process-running ok Mar 13 12:45:05.541585 master-0 kubenswrapper[7518]: healthz check failed Mar 13 12:45:05.541585 master-0 kubenswrapper[7518]: I0313 12:45:05.541561 7518 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-wtf6j" podUID="45925a5e-41ae-4c19-b586-3151c7677612" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 12:45:06.542015 master-0 kubenswrapper[7518]: I0313 12:45:06.541964 7518 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-wtf6j container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 12:45:06.542015 master-0 kubenswrapper[7518]: [-]has-synced failed: reason withheld Mar 13 12:45:06.542015 master-0 kubenswrapper[7518]: [+]process-running ok Mar 13 12:45:06.542015 master-0 kubenswrapper[7518]: healthz check failed Mar 13 12:45:06.542644 master-0 kubenswrapper[7518]: I0313 12:45:06.542038 7518 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-wtf6j" podUID="45925a5e-41ae-4c19-b586-3151c7677612" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 12:45:07.541605 master-0 kubenswrapper[7518]: I0313 12:45:07.541542 7518 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-wtf6j container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 12:45:07.541605 master-0 kubenswrapper[7518]: [-]has-synced failed: reason withheld Mar 13 12:45:07.541605 master-0 kubenswrapper[7518]: [+]process-running ok Mar 13 12:45:07.541605 master-0 kubenswrapper[7518]: healthz check failed Mar 13 12:45:07.541605 master-0 
kubenswrapper[7518]: I0313 12:45:07.541599 7518 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-wtf6j" podUID="45925a5e-41ae-4c19-b586-3151c7677612" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 12:45:08.542177 master-0 kubenswrapper[7518]: I0313 12:45:08.542072 7518 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-wtf6j container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 12:45:08.542177 master-0 kubenswrapper[7518]: [-]has-synced failed: reason withheld Mar 13 12:45:08.542177 master-0 kubenswrapper[7518]: [+]process-running ok Mar 13 12:45:08.542177 master-0 kubenswrapper[7518]: healthz check failed Mar 13 12:45:08.543068 master-0 kubenswrapper[7518]: I0313 12:45:08.542193 7518 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-wtf6j" podUID="45925a5e-41ae-4c19-b586-3151c7677612" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 12:45:09.541871 master-0 kubenswrapper[7518]: I0313 12:45:09.541807 7518 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-wtf6j container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 12:45:09.541871 master-0 kubenswrapper[7518]: [-]has-synced failed: reason withheld Mar 13 12:45:09.541871 master-0 kubenswrapper[7518]: [+]process-running ok Mar 13 12:45:09.541871 master-0 kubenswrapper[7518]: healthz check failed Mar 13 12:45:09.542642 master-0 kubenswrapper[7518]: I0313 12:45:09.541896 7518 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-wtf6j" podUID="45925a5e-41ae-4c19-b586-3151c7677612" 
containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 12:45:10.542082 master-0 kubenswrapper[7518]: I0313 12:45:10.541991 7518 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-wtf6j container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 12:45:10.542082 master-0 kubenswrapper[7518]: [-]has-synced failed: reason withheld Mar 13 12:45:10.542082 master-0 kubenswrapper[7518]: [+]process-running ok Mar 13 12:45:10.542082 master-0 kubenswrapper[7518]: healthz check failed Mar 13 12:45:10.542976 master-0 kubenswrapper[7518]: I0313 12:45:10.542126 7518 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-wtf6j" podUID="45925a5e-41ae-4c19-b586-3151c7677612" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 12:45:10.598664 master-0 kubenswrapper[7518]: I0313 12:45:10.598602 7518 scope.go:117] "RemoveContainer" containerID="4045dec19d514a7cdc11bc9584aece668967f43e77e3659c49eadc29454d9d85" Mar 13 12:45:11.539994 master-0 kubenswrapper[7518]: I0313 12:45:11.539943 7518 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress-operator_ingress-operator-677db989d6-ckl2j_2f79578c-bbfb-4968-893a-730deb4c01f9/ingress-operator/2.log" Mar 13 12:45:11.540495 master-0 kubenswrapper[7518]: I0313 12:45:11.540455 7518 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-677db989d6-ckl2j" event={"ID":"2f79578c-bbfb-4968-893a-730deb4c01f9","Type":"ContainerStarted","Data":"ae4dbec7c141edff956f746a70905658efa772c8e6c87f546534e12c26343588"} Mar 13 12:45:11.542792 master-0 kubenswrapper[7518]: I0313 12:45:11.542748 7518 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-wtf6j container/router namespace/openshift-ingress: Startup probe 
status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 12:45:11.542792 master-0 kubenswrapper[7518]: [-]has-synced failed: reason withheld Mar 13 12:45:11.542792 master-0 kubenswrapper[7518]: [+]process-running ok Mar 13 12:45:11.542792 master-0 kubenswrapper[7518]: healthz check failed Mar 13 12:45:11.543408 master-0 kubenswrapper[7518]: I0313 12:45:11.542812 7518 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-wtf6j" podUID="45925a5e-41ae-4c19-b586-3151c7677612" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 12:45:12.542553 master-0 kubenswrapper[7518]: I0313 12:45:12.542483 7518 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-wtf6j container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 12:45:12.542553 master-0 kubenswrapper[7518]: [-]has-synced failed: reason withheld Mar 13 12:45:12.542553 master-0 kubenswrapper[7518]: [+]process-running ok Mar 13 12:45:12.542553 master-0 kubenswrapper[7518]: healthz check failed Mar 13 12:45:12.542848 master-0 kubenswrapper[7518]: I0313 12:45:12.542551 7518 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-wtf6j" podUID="45925a5e-41ae-4c19-b586-3151c7677612" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 12:45:13.541918 master-0 kubenswrapper[7518]: I0313 12:45:13.541822 7518 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-wtf6j container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 12:45:13.541918 master-0 kubenswrapper[7518]: [-]has-synced failed: reason withheld Mar 13 
12:45:13.541918 master-0 kubenswrapper[7518]: [+]process-running ok Mar 13 12:45:13.541918 master-0 kubenswrapper[7518]: healthz check failed Mar 13 12:45:13.541918 master-0 kubenswrapper[7518]: I0313 12:45:13.541915 7518 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-wtf6j" podUID="45925a5e-41ae-4c19-b586-3151c7677612" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 12:45:14.541798 master-0 kubenswrapper[7518]: I0313 12:45:14.541695 7518 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-wtf6j container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 12:45:14.541798 master-0 kubenswrapper[7518]: [-]has-synced failed: reason withheld Mar 13 12:45:14.541798 master-0 kubenswrapper[7518]: [+]process-running ok Mar 13 12:45:14.541798 master-0 kubenswrapper[7518]: healthz check failed Mar 13 12:45:14.542462 master-0 kubenswrapper[7518]: I0313 12:45:14.541801 7518 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-wtf6j" podUID="45925a5e-41ae-4c19-b586-3151c7677612" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 12:45:15.541541 master-0 kubenswrapper[7518]: I0313 12:45:15.541469 7518 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-wtf6j container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 12:45:15.541541 master-0 kubenswrapper[7518]: [-]has-synced failed: reason withheld Mar 13 12:45:15.541541 master-0 kubenswrapper[7518]: [+]process-running ok Mar 13 12:45:15.541541 master-0 kubenswrapper[7518]: healthz check failed Mar 13 12:45:15.542259 master-0 kubenswrapper[7518]: I0313 12:45:15.541566 
7518 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-wtf6j" podUID="45925a5e-41ae-4c19-b586-3151c7677612" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 12:45:16.541562 master-0 kubenswrapper[7518]: I0313 12:45:16.541464 7518 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-wtf6j container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 12:45:16.541562 master-0 kubenswrapper[7518]: [-]has-synced failed: reason withheld Mar 13 12:45:16.541562 master-0 kubenswrapper[7518]: [+]process-running ok Mar 13 12:45:16.541562 master-0 kubenswrapper[7518]: healthz check failed Mar 13 12:45:16.542249 master-0 kubenswrapper[7518]: I0313 12:45:16.541649 7518 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-wtf6j" podUID="45925a5e-41ae-4c19-b586-3151c7677612" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 12:45:17.542763 master-0 kubenswrapper[7518]: I0313 12:45:17.542708 7518 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-wtf6j container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 12:45:17.542763 master-0 kubenswrapper[7518]: [-]has-synced failed: reason withheld Mar 13 12:45:17.542763 master-0 kubenswrapper[7518]: [+]process-running ok Mar 13 12:45:17.542763 master-0 kubenswrapper[7518]: healthz check failed Mar 13 12:45:17.543837 master-0 kubenswrapper[7518]: I0313 12:45:17.542833 7518 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-wtf6j" podUID="45925a5e-41ae-4c19-b586-3151c7677612" containerName="router" probeResult="failure" output="HTTP 
probe failed with statuscode: 500" Mar 13 12:45:18.541376 master-0 kubenswrapper[7518]: I0313 12:45:18.541315 7518 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-wtf6j container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 12:45:18.541376 master-0 kubenswrapper[7518]: [-]has-synced failed: reason withheld Mar 13 12:45:18.541376 master-0 kubenswrapper[7518]: [+]process-running ok Mar 13 12:45:18.541376 master-0 kubenswrapper[7518]: healthz check failed Mar 13 12:45:18.541729 master-0 kubenswrapper[7518]: I0313 12:45:18.541402 7518 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-wtf6j" podUID="45925a5e-41ae-4c19-b586-3151c7677612" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 12:45:19.541387 master-0 kubenswrapper[7518]: I0313 12:45:19.541329 7518 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-wtf6j container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 12:45:19.541387 master-0 kubenswrapper[7518]: [-]has-synced failed: reason withheld Mar 13 12:45:19.541387 master-0 kubenswrapper[7518]: [+]process-running ok Mar 13 12:45:19.541387 master-0 kubenswrapper[7518]: healthz check failed Mar 13 12:45:19.542106 master-0 kubenswrapper[7518]: I0313 12:45:19.541410 7518 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-wtf6j" podUID="45925a5e-41ae-4c19-b586-3151c7677612" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 12:45:20.546120 master-0 kubenswrapper[7518]: I0313 12:45:20.546050 7518 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-wtf6j container/router 
namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 12:45:20.546120 master-0 kubenswrapper[7518]: [-]has-synced failed: reason withheld Mar 13 12:45:20.546120 master-0 kubenswrapper[7518]: [+]process-running ok Mar 13 12:45:20.546120 master-0 kubenswrapper[7518]: healthz check failed Mar 13 12:45:20.546792 master-0 kubenswrapper[7518]: I0313 12:45:20.546169 7518 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-wtf6j" podUID="45925a5e-41ae-4c19-b586-3151c7677612" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 12:45:21.542010 master-0 kubenswrapper[7518]: I0313 12:45:21.541927 7518 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-wtf6j container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 12:45:21.542010 master-0 kubenswrapper[7518]: [-]has-synced failed: reason withheld Mar 13 12:45:21.542010 master-0 kubenswrapper[7518]: [+]process-running ok Mar 13 12:45:21.542010 master-0 kubenswrapper[7518]: healthz check failed Mar 13 12:45:21.542010 master-0 kubenswrapper[7518]: I0313 12:45:21.541996 7518 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-wtf6j" podUID="45925a5e-41ae-4c19-b586-3151c7677612" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 12:45:22.541977 master-0 kubenswrapper[7518]: I0313 12:45:22.541911 7518 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-wtf6j container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 12:45:22.541977 master-0 kubenswrapper[7518]: 
[-]has-synced failed: reason withheld Mar 13 12:45:22.541977 master-0 kubenswrapper[7518]: [+]process-running ok Mar 13 12:45:22.541977 master-0 kubenswrapper[7518]: healthz check failed Mar 13 12:45:22.542580 master-0 kubenswrapper[7518]: I0313 12:45:22.541992 7518 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-wtf6j" podUID="45925a5e-41ae-4c19-b586-3151c7677612" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 12:45:23.541822 master-0 kubenswrapper[7518]: I0313 12:45:23.541724 7518 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-wtf6j container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 12:45:23.541822 master-0 kubenswrapper[7518]: [-]has-synced failed: reason withheld Mar 13 12:45:23.541822 master-0 kubenswrapper[7518]: [+]process-running ok Mar 13 12:45:23.541822 master-0 kubenswrapper[7518]: healthz check failed Mar 13 12:45:23.542662 master-0 kubenswrapper[7518]: I0313 12:45:23.541841 7518 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-wtf6j" podUID="45925a5e-41ae-4c19-b586-3151c7677612" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 12:45:24.541747 master-0 kubenswrapper[7518]: I0313 12:45:24.541689 7518 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-wtf6j container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 12:45:24.541747 master-0 kubenswrapper[7518]: [-]has-synced failed: reason withheld Mar 13 12:45:24.541747 master-0 kubenswrapper[7518]: [+]process-running ok Mar 13 12:45:24.541747 master-0 kubenswrapper[7518]: healthz check failed Mar 13 12:45:24.541997 master-0 
kubenswrapper[7518]: I0313 12:45:24.541754 7518 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-wtf6j" podUID="45925a5e-41ae-4c19-b586-3151c7677612" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 12:45:25.541579 master-0 kubenswrapper[7518]: I0313 12:45:25.541463 7518 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-wtf6j container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 12:45:25.541579 master-0 kubenswrapper[7518]: [-]has-synced failed: reason withheld Mar 13 12:45:25.541579 master-0 kubenswrapper[7518]: [+]process-running ok Mar 13 12:45:25.541579 master-0 kubenswrapper[7518]: healthz check failed Mar 13 12:45:25.542785 master-0 kubenswrapper[7518]: I0313 12:45:25.541583 7518 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-wtf6j" podUID="45925a5e-41ae-4c19-b586-3151c7677612" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 12:45:26.541849 master-0 kubenswrapper[7518]: I0313 12:45:26.541735 7518 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-wtf6j container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 12:45:26.541849 master-0 kubenswrapper[7518]: [-]has-synced failed: reason withheld Mar 13 12:45:26.541849 master-0 kubenswrapper[7518]: [+]process-running ok Mar 13 12:45:26.541849 master-0 kubenswrapper[7518]: healthz check failed Mar 13 12:45:26.542684 master-0 kubenswrapper[7518]: I0313 12:45:26.541852 7518 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-wtf6j" podUID="45925a5e-41ae-4c19-b586-3151c7677612" 
containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 12:45:27.541518 master-0 kubenswrapper[7518]: I0313 12:45:27.541443 7518 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-wtf6j container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 12:45:27.541518 master-0 kubenswrapper[7518]: [-]has-synced failed: reason withheld Mar 13 12:45:27.541518 master-0 kubenswrapper[7518]: [+]process-running ok Mar 13 12:45:27.541518 master-0 kubenswrapper[7518]: healthz check failed Mar 13 12:45:27.541768 master-0 kubenswrapper[7518]: I0313 12:45:27.541532 7518 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-wtf6j" podUID="45925a5e-41ae-4c19-b586-3151c7677612" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 12:45:28.540567 master-0 kubenswrapper[7518]: I0313 12:45:28.540502 7518 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-wtf6j container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 12:45:28.540567 master-0 kubenswrapper[7518]: [-]has-synced failed: reason withheld Mar 13 12:45:28.540567 master-0 kubenswrapper[7518]: [+]process-running ok Mar 13 12:45:28.540567 master-0 kubenswrapper[7518]: healthz check failed Mar 13 12:45:28.540567 master-0 kubenswrapper[7518]: I0313 12:45:28.540563 7518 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-wtf6j" podUID="45925a5e-41ae-4c19-b586-3151c7677612" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 12:45:29.542255 master-0 kubenswrapper[7518]: I0313 12:45:29.542186 7518 patch_prober.go:28] interesting 
pod/router-default-79f8cd6fdd-wtf6j container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 12:45:29.542255 master-0 kubenswrapper[7518]: [-]has-synced failed: reason withheld Mar 13 12:45:29.542255 master-0 kubenswrapper[7518]: [+]process-running ok Mar 13 12:45:29.542255 master-0 kubenswrapper[7518]: healthz check failed Mar 13 12:45:29.543486 master-0 kubenswrapper[7518]: I0313 12:45:29.542275 7518 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-wtf6j" podUID="45925a5e-41ae-4c19-b586-3151c7677612" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 12:45:30.542292 master-0 kubenswrapper[7518]: I0313 12:45:30.542233 7518 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-wtf6j container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 12:45:30.542292 master-0 kubenswrapper[7518]: [-]has-synced failed: reason withheld Mar 13 12:45:30.542292 master-0 kubenswrapper[7518]: [+]process-running ok Mar 13 12:45:30.542292 master-0 kubenswrapper[7518]: healthz check failed Mar 13 12:45:30.543181 master-0 kubenswrapper[7518]: I0313 12:45:30.543124 7518 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-wtf6j" podUID="45925a5e-41ae-4c19-b586-3151c7677612" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 12:45:31.542676 master-0 kubenswrapper[7518]: I0313 12:45:31.542574 7518 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-wtf6j container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 
12:45:31.542676 master-0 kubenswrapper[7518]: [-]has-synced failed: reason withheld
Mar 13 12:45:31.542676 master-0 kubenswrapper[7518]: [+]process-running ok
Mar 13 12:45:31.542676 master-0 kubenswrapper[7518]: healthz check failed
Mar 13 12:45:31.543735 master-0 kubenswrapper[7518]: I0313 12:45:31.542671 7518 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-wtf6j" podUID="45925a5e-41ae-4c19-b586-3151c7677612" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 13 12:45:32.542122 master-0 kubenswrapper[7518]: I0313 12:45:32.542043 7518 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-wtf6j container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 13 12:45:32.542122 master-0 kubenswrapper[7518]: [-]has-synced failed: reason withheld
Mar 13 12:45:32.542122 master-0 kubenswrapper[7518]: [+]process-running ok
Mar 13 12:45:32.542122 master-0 kubenswrapper[7518]: healthz check failed
Mar 13 12:45:32.542521 master-0 kubenswrapper[7518]: I0313 12:45:32.542182 7518 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-wtf6j" podUID="45925a5e-41ae-4c19-b586-3151c7677612" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 13 12:45:33.541604 master-0 kubenswrapper[7518]: I0313 12:45:33.541522 7518 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-wtf6j container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 13 12:45:33.541604 master-0 kubenswrapper[7518]: [-]has-synced failed: reason withheld
Mar 13 12:45:33.541604 master-0 kubenswrapper[7518]: [+]process-running ok
Mar 13 12:45:33.541604 master-0 kubenswrapper[7518]: healthz check failed
Mar 13 12:45:33.542387 master-0 kubenswrapper[7518]: I0313 12:45:33.541626 7518 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-wtf6j" podUID="45925a5e-41ae-4c19-b586-3151c7677612" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 13 12:45:34.540971 master-0 kubenswrapper[7518]: I0313 12:45:34.540922 7518 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-wtf6j container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 13 12:45:34.540971 master-0 kubenswrapper[7518]: [-]has-synced failed: reason withheld
Mar 13 12:45:34.540971 master-0 kubenswrapper[7518]: [+]process-running ok
Mar 13 12:45:34.540971 master-0 kubenswrapper[7518]: healthz check failed
Mar 13 12:45:34.541336 master-0 kubenswrapper[7518]: I0313 12:45:34.540998 7518 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-wtf6j" podUID="45925a5e-41ae-4c19-b586-3151c7677612" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 13 12:45:35.541912 master-0 kubenswrapper[7518]: I0313 12:45:35.541851 7518 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-wtf6j container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 13 12:45:35.541912 master-0 kubenswrapper[7518]: [-]has-synced failed: reason withheld
Mar 13 12:45:35.541912 master-0 kubenswrapper[7518]: [+]process-running ok
Mar 13 12:45:35.541912 master-0 kubenswrapper[7518]: healthz check failed
Mar 13 12:45:35.542647 master-0 kubenswrapper[7518]: I0313 12:45:35.541920 7518 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-wtf6j" podUID="45925a5e-41ae-4c19-b586-3151c7677612" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 13 12:45:36.541716 master-0 kubenswrapper[7518]: I0313 12:45:36.541660 7518 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-wtf6j container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 13 12:45:36.541716 master-0 kubenswrapper[7518]: [-]has-synced failed: reason withheld
Mar 13 12:45:36.541716 master-0 kubenswrapper[7518]: [+]process-running ok
Mar 13 12:45:36.541716 master-0 kubenswrapper[7518]: healthz check failed
Mar 13 12:45:36.542508 master-0 kubenswrapper[7518]: I0313 12:45:36.541729 7518 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-wtf6j" podUID="45925a5e-41ae-4c19-b586-3151c7677612" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 13 12:45:37.541243 master-0 kubenswrapper[7518]: I0313 12:45:37.541194 7518 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-wtf6j container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 13 12:45:37.541243 master-0 kubenswrapper[7518]: [-]has-synced failed: reason withheld
Mar 13 12:45:37.541243 master-0 kubenswrapper[7518]: [+]process-running ok
Mar 13 12:45:37.541243 master-0 kubenswrapper[7518]: healthz check failed
Mar 13 12:45:37.541494 master-0 kubenswrapper[7518]: I0313 12:45:37.541264 7518 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-wtf6j" podUID="45925a5e-41ae-4c19-b586-3151c7677612" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 13 12:45:38.542384 master-0 kubenswrapper[7518]: I0313 12:45:38.542315 7518 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-wtf6j container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 13 12:45:38.542384 master-0 kubenswrapper[7518]: [-]has-synced failed: reason withheld
Mar 13 12:45:38.542384 master-0 kubenswrapper[7518]: [+]process-running ok
Mar 13 12:45:38.542384 master-0 kubenswrapper[7518]: healthz check failed
Mar 13 12:45:38.543125 master-0 kubenswrapper[7518]: I0313 12:45:38.542388 7518 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-wtf6j" podUID="45925a5e-41ae-4c19-b586-3151c7677612" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 13 12:45:38.818032 master-0 kubenswrapper[7518]: I0313 12:45:38.817904 7518 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd/installer-2-master-0"]
Mar 13 12:45:38.819079 master-0 kubenswrapper[7518]: I0313 12:45:38.819051 7518 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/installer-2-master-0"
Mar 13 12:45:38.821764 master-0 kubenswrapper[7518]: I0313 12:45:38.821729 7518 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd"/"installer-sa-dockercfg-cbvsl"
Mar 13 12:45:38.822731 master-0 kubenswrapper[7518]: I0313 12:45:38.822686 7518 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd"/"kube-root-ca.crt"
Mar 13 12:45:38.832904 master-0 kubenswrapper[7518]: I0313 12:45:38.832837 7518 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd/installer-2-master-0"]
Mar 13 12:45:38.983175 master-0 kubenswrapper[7518]: I0313 12:45:38.983103 7518 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e01de416-3de5-4357-a84e-f8eabb15a500-kube-api-access\") pod \"installer-2-master-0\" (UID: \"e01de416-3de5-4357-a84e-f8eabb15a500\") " pod="openshift-etcd/installer-2-master-0"
Mar 13 12:45:38.983397 master-0 kubenswrapper[7518]: I0313 12:45:38.983191 7518 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/e01de416-3de5-4357-a84e-f8eabb15a500-var-lock\") pod \"installer-2-master-0\" (UID: \"e01de416-3de5-4357-a84e-f8eabb15a500\") " pod="openshift-etcd/installer-2-master-0"
Mar 13 12:45:38.983397 master-0 kubenswrapper[7518]: I0313 12:45:38.983239 7518 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/e01de416-3de5-4357-a84e-f8eabb15a500-kubelet-dir\") pod \"installer-2-master-0\" (UID: \"e01de416-3de5-4357-a84e-f8eabb15a500\") " pod="openshift-etcd/installer-2-master-0"
Mar 13 12:45:39.084852 master-0 kubenswrapper[7518]: I0313 12:45:39.084735 7518 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e01de416-3de5-4357-a84e-f8eabb15a500-kube-api-access\") pod \"installer-2-master-0\" (UID: \"e01de416-3de5-4357-a84e-f8eabb15a500\") " pod="openshift-etcd/installer-2-master-0"
Mar 13 12:45:39.084852 master-0 kubenswrapper[7518]: I0313 12:45:39.084830 7518 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/e01de416-3de5-4357-a84e-f8eabb15a500-var-lock\") pod \"installer-2-master-0\" (UID: \"e01de416-3de5-4357-a84e-f8eabb15a500\") " pod="openshift-etcd/installer-2-master-0"
Mar 13 12:45:39.085077 master-0 kubenswrapper[7518]: I0313 12:45:39.084861 7518 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/e01de416-3de5-4357-a84e-f8eabb15a500-kubelet-dir\") pod \"installer-2-master-0\" (UID: \"e01de416-3de5-4357-a84e-f8eabb15a500\") " pod="openshift-etcd/installer-2-master-0"
Mar 13 12:45:39.085077 master-0 kubenswrapper[7518]: I0313 12:45:39.085004 7518 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/e01de416-3de5-4357-a84e-f8eabb15a500-kubelet-dir\") pod \"installer-2-master-0\" (UID: \"e01de416-3de5-4357-a84e-f8eabb15a500\") " pod="openshift-etcd/installer-2-master-0"
Mar 13 12:45:39.085077 master-0 kubenswrapper[7518]: I0313 12:45:39.085012 7518 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/e01de416-3de5-4357-a84e-f8eabb15a500-var-lock\") pod \"installer-2-master-0\" (UID: \"e01de416-3de5-4357-a84e-f8eabb15a500\") " pod="openshift-etcd/installer-2-master-0"
Mar 13 12:45:39.101314 master-0 kubenswrapper[7518]: I0313 12:45:39.101265 7518 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e01de416-3de5-4357-a84e-f8eabb15a500-kube-api-access\") pod \"installer-2-master-0\" (UID: \"e01de416-3de5-4357-a84e-f8eabb15a500\") " pod="openshift-etcd/installer-2-master-0"
Mar 13 12:45:39.146698 master-0 kubenswrapper[7518]: I0313 12:45:39.146650 7518 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/installer-2-master-0"
Mar 13 12:45:39.542098 master-0 kubenswrapper[7518]: I0313 12:45:39.541830 7518 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-wtf6j container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 13 12:45:39.542098 master-0 kubenswrapper[7518]: [-]has-synced failed: reason withheld
Mar 13 12:45:39.542098 master-0 kubenswrapper[7518]: [+]process-running ok
Mar 13 12:45:39.542098 master-0 kubenswrapper[7518]: healthz check failed
Mar 13 12:45:39.542420 master-0 kubenswrapper[7518]: I0313 12:45:39.542114 7518 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-wtf6j" podUID="45925a5e-41ae-4c19-b586-3151c7677612" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 13 12:45:39.610537 master-0 kubenswrapper[7518]: I0313 12:45:39.610423 7518 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd/installer-2-master-0"]
Mar 13 12:45:39.777494 master-0 kubenswrapper[7518]: I0313 12:45:39.777439 7518 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/installer-2-master-0" event={"ID":"e01de416-3de5-4357-a84e-f8eabb15a500","Type":"ContainerStarted","Data":"3e81dca123a6f2f889ce66cb5735ec25a6e1c65abbd235bf8c5081fda6184b21"}
Mar 13 12:45:40.542869 master-0 kubenswrapper[7518]: I0313 12:45:40.542795 7518 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-wtf6j container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 13 12:45:40.542869 master-0 kubenswrapper[7518]: [-]has-synced failed: reason withheld
Mar 13 12:45:40.542869 master-0 kubenswrapper[7518]: [+]process-running ok
Mar 13 12:45:40.542869 master-0 kubenswrapper[7518]: healthz check failed
Mar 13 12:45:40.543688 master-0 kubenswrapper[7518]: I0313 12:45:40.542888 7518 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-wtf6j" podUID="45925a5e-41ae-4c19-b586-3151c7677612" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 13 12:45:40.789090 master-0 kubenswrapper[7518]: I0313 12:45:40.789021 7518 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/installer-2-master-0" event={"ID":"e01de416-3de5-4357-a84e-f8eabb15a500","Type":"ContainerStarted","Data":"36c8eace8178c56031aee9f74c55f1e387a62f97359664e0fd2729176c22f3cb"}
Mar 13 12:45:40.811084 master-0 kubenswrapper[7518]: I0313 12:45:40.810917 7518 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/installer-2-master-0" podStartSLOduration=2.810880788 podStartE2EDuration="2.810880788s" podCreationTimestamp="2026-03-13 12:45:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 12:45:40.809185264 +0000 UTC m=+495.442254471" watchObservedRunningTime="2026-03-13 12:45:40.810880788 +0000 UTC m=+495.443950005"
Mar 13 12:45:41.542464 master-0 kubenswrapper[7518]: I0313 12:45:41.542377 7518 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-wtf6j container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 13 12:45:41.542464 master-0 kubenswrapper[7518]: [-]has-synced failed: reason withheld
Mar 13 12:45:41.542464 master-0 kubenswrapper[7518]: [+]process-running ok
Mar 13 12:45:41.542464 master-0 kubenswrapper[7518]: healthz check failed
Mar 13 12:45:41.542464 master-0 kubenswrapper[7518]: I0313 12:45:41.542459 7518 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-wtf6j" podUID="45925a5e-41ae-4c19-b586-3151c7677612" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 13 12:45:42.542266 master-0 kubenswrapper[7518]: I0313 12:45:42.542206 7518 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-wtf6j container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 13 12:45:42.542266 master-0 kubenswrapper[7518]: [-]has-synced failed: reason withheld
Mar 13 12:45:42.542266 master-0 kubenswrapper[7518]: [+]process-running ok
Mar 13 12:45:42.542266 master-0 kubenswrapper[7518]: healthz check failed
Mar 13 12:45:42.542822 master-0 kubenswrapper[7518]: I0313 12:45:42.542284 7518 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-wtf6j" podUID="45925a5e-41ae-4c19-b586-3151c7677612" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 13 12:45:43.541566 master-0 kubenswrapper[7518]: I0313 12:45:43.541489 7518 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-wtf6j container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 13 12:45:43.541566 master-0 kubenswrapper[7518]: [-]has-synced failed: reason withheld
Mar 13 12:45:43.541566 master-0 kubenswrapper[7518]: [+]process-running ok
Mar 13 12:45:43.541566 master-0 kubenswrapper[7518]: healthz check failed
Mar 13 12:45:43.541918 master-0 kubenswrapper[7518]: I0313 12:45:43.541580 7518 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-wtf6j" podUID="45925a5e-41ae-4c19-b586-3151c7677612" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 13 12:45:44.541987 master-0 kubenswrapper[7518]: I0313 12:45:44.541922 7518 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-wtf6j container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 13 12:45:44.541987 master-0 kubenswrapper[7518]: [-]has-synced failed: reason withheld
Mar 13 12:45:44.541987 master-0 kubenswrapper[7518]: [+]process-running ok
Mar 13 12:45:44.541987 master-0 kubenswrapper[7518]: healthz check failed
Mar 13 12:45:44.542703 master-0 kubenswrapper[7518]: I0313 12:45:44.541994 7518 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-wtf6j" podUID="45925a5e-41ae-4c19-b586-3151c7677612" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 13 12:45:45.541704 master-0 kubenswrapper[7518]: I0313 12:45:45.541649 7518 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-wtf6j container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 13 12:45:45.541704 master-0 kubenswrapper[7518]: [-]has-synced failed: reason withheld
Mar 13 12:45:45.541704 master-0 kubenswrapper[7518]: [+]process-running ok
Mar 13 12:45:45.541704 master-0 kubenswrapper[7518]: healthz check failed
Mar 13 12:45:45.541999 master-0 kubenswrapper[7518]: I0313 12:45:45.541735 7518 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-wtf6j" podUID="45925a5e-41ae-4c19-b586-3151c7677612" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 13 12:45:46.541703 master-0 kubenswrapper[7518]: I0313 12:45:46.541627 7518 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-wtf6j container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 13 12:45:46.541703 master-0 kubenswrapper[7518]: [-]has-synced failed: reason withheld
Mar 13 12:45:46.541703 master-0 kubenswrapper[7518]: [+]process-running ok
Mar 13 12:45:46.541703 master-0 kubenswrapper[7518]: healthz check failed
Mar 13 12:45:46.541703 master-0 kubenswrapper[7518]: I0313 12:45:46.541697 7518 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-wtf6j" podUID="45925a5e-41ae-4c19-b586-3151c7677612" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 13 12:45:47.541812 master-0 kubenswrapper[7518]: I0313 12:45:47.541768 7518 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-wtf6j container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 13 12:45:47.541812 master-0 kubenswrapper[7518]: [-]has-synced failed: reason withheld
Mar 13 12:45:47.541812 master-0 kubenswrapper[7518]: [+]process-running ok
Mar 13 12:45:47.541812 master-0 kubenswrapper[7518]: healthz check failed
Mar 13 12:45:47.542601 master-0 kubenswrapper[7518]: I0313 12:45:47.542569 7518 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-wtf6j" podUID="45925a5e-41ae-4c19-b586-3151c7677612" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 13 12:45:48.543004 master-0 kubenswrapper[7518]: I0313 12:45:48.542893 7518 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-wtf6j container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 13 12:45:48.543004 master-0 kubenswrapper[7518]: [-]has-synced failed: reason withheld
Mar 13 12:45:48.543004 master-0 kubenswrapper[7518]: [+]process-running ok
Mar 13 12:45:48.543004 master-0 kubenswrapper[7518]: healthz check failed
Mar 13 12:45:48.544014 master-0 kubenswrapper[7518]: I0313 12:45:48.543104 7518 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-wtf6j" podUID="45925a5e-41ae-4c19-b586-3151c7677612" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 13 12:45:49.542613 master-0 kubenswrapper[7518]: I0313 12:45:49.542544 7518 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-wtf6j container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 13 12:45:49.542613 master-0 kubenswrapper[7518]: [-]has-synced failed: reason withheld
Mar 13 12:45:49.542613 master-0 kubenswrapper[7518]: [+]process-running ok
Mar 13 12:45:49.542613 master-0 kubenswrapper[7518]: healthz check failed
Mar 13 12:45:49.542908 master-0 kubenswrapper[7518]: I0313 12:45:49.542658 7518 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-wtf6j" podUID="45925a5e-41ae-4c19-b586-3151c7677612" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 13 12:45:50.541495 master-0 kubenswrapper[7518]: I0313 12:45:50.541425 7518 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-wtf6j container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 13 12:45:50.541495 master-0 kubenswrapper[7518]: [-]has-synced failed: reason withheld
Mar 13 12:45:50.541495 master-0 kubenswrapper[7518]: [+]process-running ok
Mar 13 12:45:50.541495 master-0 kubenswrapper[7518]: healthz check failed
Mar 13 12:45:50.542464 master-0 kubenswrapper[7518]: I0313 12:45:50.541513 7518 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-wtf6j" podUID="45925a5e-41ae-4c19-b586-3151c7677612" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 13 12:45:51.541302 master-0 kubenswrapper[7518]: I0313 12:45:51.541247 7518 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-wtf6j container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 13 12:45:51.541302 master-0 kubenswrapper[7518]: [-]has-synced failed: reason withheld
Mar 13 12:45:51.541302 master-0 kubenswrapper[7518]: [+]process-running ok
Mar 13 12:45:51.541302 master-0 kubenswrapper[7518]: healthz check failed
Mar 13 12:45:51.542026 master-0 kubenswrapper[7518]: I0313 12:45:51.541318 7518 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-wtf6j" podUID="45925a5e-41ae-4c19-b586-3151c7677612" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 13 12:45:52.542815 master-0 kubenswrapper[7518]: I0313 12:45:52.542755 7518 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-wtf6j container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 13 12:45:52.542815 master-0 kubenswrapper[7518]: [-]has-synced failed: reason withheld
Mar 13 12:45:52.542815 master-0 kubenswrapper[7518]: [+]process-running ok
Mar 13 12:45:52.542815 master-0 kubenswrapper[7518]: healthz check failed
Mar 13 12:45:52.543364 master-0 kubenswrapper[7518]: I0313 12:45:52.542843 7518 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-wtf6j" podUID="45925a5e-41ae-4c19-b586-3151c7677612" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 13 12:45:53.541132 master-0 kubenswrapper[7518]: I0313 12:45:53.541069 7518 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-wtf6j container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 13 12:45:53.541132 master-0 kubenswrapper[7518]: [-]has-synced failed: reason withheld
Mar 13 12:45:53.541132 master-0 kubenswrapper[7518]: [+]process-running ok
Mar 13 12:45:53.541132 master-0 kubenswrapper[7518]: healthz check failed
Mar 13 12:45:53.541132 master-0 kubenswrapper[7518]: I0313 12:45:53.541150 7518 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-wtf6j" podUID="45925a5e-41ae-4c19-b586-3151c7677612" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 13 12:45:54.541492 master-0 kubenswrapper[7518]: I0313 12:45:54.541425 7518 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-wtf6j container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 13 12:45:54.541492 master-0 kubenswrapper[7518]: [-]has-synced failed: reason withheld
Mar 13 12:45:54.541492 master-0 kubenswrapper[7518]: [+]process-running ok
Mar 13 12:45:54.541492 master-0 kubenswrapper[7518]: healthz check failed
Mar 13 12:45:54.542080 master-0 kubenswrapper[7518]: I0313 12:45:54.541500 7518 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-wtf6j" podUID="45925a5e-41ae-4c19-b586-3151c7677612" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 13 12:45:55.541323 master-0 kubenswrapper[7518]: I0313 12:45:55.541269 7518 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-wtf6j container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 13 12:45:55.541323 master-0 kubenswrapper[7518]: [-]has-synced failed: reason withheld
Mar 13 12:45:55.541323 master-0 kubenswrapper[7518]: [+]process-running ok
Mar 13 12:45:55.541323 master-0 kubenswrapper[7518]: healthz check failed
Mar 13 12:45:55.541891 master-0 kubenswrapper[7518]: I0313 12:45:55.541332 7518 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-wtf6j" podUID="45925a5e-41ae-4c19-b586-3151c7677612" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 13 12:45:56.541425 master-0 kubenswrapper[7518]: I0313 12:45:56.541386 7518 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-wtf6j container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 13 12:45:56.541425 master-0 kubenswrapper[7518]: [-]has-synced failed: reason withheld
Mar 13 12:45:56.541425 master-0 kubenswrapper[7518]: [+]process-running ok
Mar 13 12:45:56.541425 master-0 kubenswrapper[7518]: healthz check failed
Mar 13 12:45:56.542045 master-0 kubenswrapper[7518]: I0313 12:45:56.542010 7518 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-wtf6j" podUID="45925a5e-41ae-4c19-b586-3151c7677612" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 13 12:45:57.541559 master-0 kubenswrapper[7518]: I0313 12:45:57.541500 7518 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-wtf6j container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 13 12:45:57.541559 master-0 kubenswrapper[7518]: [-]has-synced failed: reason withheld
Mar 13 12:45:57.541559 master-0 kubenswrapper[7518]: [+]process-running ok
Mar 13 12:45:57.541559 master-0 kubenswrapper[7518]: healthz check failed
Mar 13 12:45:57.542240 master-0 kubenswrapper[7518]: I0313 12:45:57.541565 7518 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-wtf6j" podUID="45925a5e-41ae-4c19-b586-3151c7677612" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 13 12:45:57.909079 master-0 kubenswrapper[7518]: I0313 12:45:57.908962 7518 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-8b998ff89-g8rgp"]
Mar 13 12:45:57.909588 master-0 kubenswrapper[7518]: I0313 12:45:57.909472 7518 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-8b998ff89-g8rgp" podUID="b21ecc52-8c8f-43de-84bb-13bd8eb305b6" containerName="controller-manager" containerID="cri-o://5fed8a223c8bd85462011864e93e488b62bf27c2022fb6a3984d126a69212081" gracePeriod=30
Mar 13 12:45:57.934210 master-0 kubenswrapper[7518]: I0313 12:45:57.934099 7518 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-784b8dc7f8-4czh5"]
Mar 13 12:45:57.935036 master-0 kubenswrapper[7518]: I0313 12:45:57.934408 7518 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-784b8dc7f8-4czh5" podUID="3ffc8a7e-5a23-4600-bfbe-c723501fa8cd" containerName="route-controller-manager" containerID="cri-o://1bcda9a5c5fb5278bd2cbc52137bb8493c2a066cfbf0a4cfef7a0c96ad56b754" gracePeriod=30
Mar 13 12:45:58.304933 master-0 kubenswrapper[7518]: I0313 12:45:58.304891 7518 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-8b998ff89-g8rgp"
Mar 13 12:45:58.344161 master-0 kubenswrapper[7518]: I0313 12:45:58.339243 7518 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b21ecc52-8c8f-43de-84bb-13bd8eb305b6-serving-cert\") pod \"b21ecc52-8c8f-43de-84bb-13bd8eb305b6\" (UID: \"b21ecc52-8c8f-43de-84bb-13bd8eb305b6\") "
Mar 13 12:45:58.344161 master-0 kubenswrapper[7518]: I0313 12:45:58.339320 7518 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b21ecc52-8c8f-43de-84bb-13bd8eb305b6-config\") pod \"b21ecc52-8c8f-43de-84bb-13bd8eb305b6\" (UID: \"b21ecc52-8c8f-43de-84bb-13bd8eb305b6\") "
Mar 13 12:45:58.344161 master-0 kubenswrapper[7518]: I0313 12:45:58.339369 7518 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b21ecc52-8c8f-43de-84bb-13bd8eb305b6-client-ca\") pod \"b21ecc52-8c8f-43de-84bb-13bd8eb305b6\" (UID: \"b21ecc52-8c8f-43de-84bb-13bd8eb305b6\") "
Mar 13 12:45:58.344161 master-0 kubenswrapper[7518]: I0313 12:45:58.339419 7518 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tgzfd\" (UniqueName: \"kubernetes.io/projected/b21ecc52-8c8f-43de-84bb-13bd8eb305b6-kube-api-access-tgzfd\") pod \"b21ecc52-8c8f-43de-84bb-13bd8eb305b6\" (UID: \"b21ecc52-8c8f-43de-84bb-13bd8eb305b6\") "
Mar 13 12:45:58.344161 master-0 kubenswrapper[7518]: I0313 12:45:58.339452 7518 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/b21ecc52-8c8f-43de-84bb-13bd8eb305b6-proxy-ca-bundles\") pod \"b21ecc52-8c8f-43de-84bb-13bd8eb305b6\" (UID: \"b21ecc52-8c8f-43de-84bb-13bd8eb305b6\") "
Mar 13 12:45:58.344161 master-0 kubenswrapper[7518]: I0313 12:45:58.340247 7518 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b21ecc52-8c8f-43de-84bb-13bd8eb305b6-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "b21ecc52-8c8f-43de-84bb-13bd8eb305b6" (UID: "b21ecc52-8c8f-43de-84bb-13bd8eb305b6"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 13 12:45:58.351125 master-0 kubenswrapper[7518]: I0313 12:45:58.345113 7518 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b21ecc52-8c8f-43de-84bb-13bd8eb305b6-client-ca" (OuterVolumeSpecName: "client-ca") pod "b21ecc52-8c8f-43de-84bb-13bd8eb305b6" (UID: "b21ecc52-8c8f-43de-84bb-13bd8eb305b6"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 13 12:45:58.351125 master-0 kubenswrapper[7518]: I0313 12:45:58.345772 7518 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b21ecc52-8c8f-43de-84bb-13bd8eb305b6-config" (OuterVolumeSpecName: "config") pod "b21ecc52-8c8f-43de-84bb-13bd8eb305b6" (UID: "b21ecc52-8c8f-43de-84bb-13bd8eb305b6"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 13 12:45:58.351255 master-0 kubenswrapper[7518]: I0313 12:45:58.351178 7518 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b21ecc52-8c8f-43de-84bb-13bd8eb305b6-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "b21ecc52-8c8f-43de-84bb-13bd8eb305b6" (UID: "b21ecc52-8c8f-43de-84bb-13bd8eb305b6"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 13 12:45:58.357156 master-0 kubenswrapper[7518]: I0313 12:45:58.354718 7518 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b21ecc52-8c8f-43de-84bb-13bd8eb305b6-kube-api-access-tgzfd" (OuterVolumeSpecName: "kube-api-access-tgzfd") pod "b21ecc52-8c8f-43de-84bb-13bd8eb305b6" (UID: "b21ecc52-8c8f-43de-84bb-13bd8eb305b6"). InnerVolumeSpecName "kube-api-access-tgzfd". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 13 12:45:58.392894 master-0 kubenswrapper[7518]: I0313 12:45:58.392857 7518 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-784b8dc7f8-4czh5"
Mar 13 12:45:58.440584 master-0 kubenswrapper[7518]: I0313 12:45:58.440514 7518 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/b21ecc52-8c8f-43de-84bb-13bd8eb305b6-proxy-ca-bundles\") on node \"master-0\" DevicePath \"\""
Mar 13 12:45:58.441641 master-0 kubenswrapper[7518]: I0313 12:45:58.440612 7518 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b21ecc52-8c8f-43de-84bb-13bd8eb305b6-serving-cert\") on node \"master-0\" DevicePath \"\""
Mar 13 12:45:58.441641 master-0 kubenswrapper[7518]: I0313 12:45:58.440625 7518 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b21ecc52-8c8f-43de-84bb-13bd8eb305b6-config\") on node \"master-0\" DevicePath \"\""
Mar 13 12:45:58.441641 master-0 kubenswrapper[7518]: I0313 12:45:58.440634 7518 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b21ecc52-8c8f-43de-84bb-13bd8eb305b6-client-ca\") on node \"master-0\" DevicePath \"\""
Mar 13 12:45:58.441641 master-0 kubenswrapper[7518]: I0313 12:45:58.440646 7518 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tgzfd\" (UniqueName: \"kubernetes.io/projected/b21ecc52-8c8f-43de-84bb-13bd8eb305b6-kube-api-access-tgzfd\") on node \"master-0\" DevicePath \"\""
Mar 13 12:45:58.514349 master-0 kubenswrapper[7518]: I0313 12:45:58.513397 7518 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-54c79cbfcc-cxhmh"]
Mar 13 12:45:58.514349 master-0 kubenswrapper[7518]: E0313 12:45:58.513697 7518 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3ffc8a7e-5a23-4600-bfbe-c723501fa8cd" containerName="route-controller-manager"
Mar 13 12:45:58.514349 master-0 kubenswrapper[7518]: I0313 12:45:58.513721 7518 state_mem.go:107] "Deleted CPUSet assignment" podUID="3ffc8a7e-5a23-4600-bfbe-c723501fa8cd" containerName="route-controller-manager"
Mar 13 12:45:58.514349 master-0 kubenswrapper[7518]: E0313 12:45:58.513752 7518 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b21ecc52-8c8f-43de-84bb-13bd8eb305b6" containerName="controller-manager"
Mar 13 12:45:58.514349 master-0 kubenswrapper[7518]: I0313 12:45:58.513761 7518 state_mem.go:107] "Deleted CPUSet assignment" podUID="b21ecc52-8c8f-43de-84bb-13bd8eb305b6" containerName="controller-manager"
Mar 13 12:45:58.514349 master-0 kubenswrapper[7518]: I0313 12:45:58.513923 7518 memory_manager.go:354] "RemoveStaleState removing state" podUID="b21ecc52-8c8f-43de-84bb-13bd8eb305b6" containerName="controller-manager"
Mar 13 12:45:58.514349 master-0 kubenswrapper[7518]: I0313 12:45:58.513941 7518 memory_manager.go:354] "RemoveStaleState removing state" podUID="3ffc8a7e-5a23-4600-bfbe-c723501fa8cd" containerName="route-controller-manager"
Mar 13 12:45:58.514724 master-0 kubenswrapper[7518]: I0313 12:45:58.514508 7518 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-54c79cbfcc-cxhmh"
Mar 13 12:45:58.520469 master-0 kubenswrapper[7518]: I0313 12:45:58.520437 7518 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-grrfm"
Mar 13 12:45:58.532106 master-0 kubenswrapper[7518]: I0313 12:45:58.531403 7518 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-54c79cbfcc-cxhmh"]
Mar 13 12:45:58.542316 master-0 kubenswrapper[7518]: I0313 12:45:58.542284 7518 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3ffc8a7e-5a23-4600-bfbe-c723501fa8cd-config\") pod \"3ffc8a7e-5a23-4600-bfbe-c723501fa8cd\" (UID: \"3ffc8a7e-5a23-4600-bfbe-c723501fa8cd\") "
Mar 13 12:45:58.542839 master-0 kubenswrapper[7518]: I0313 12:45:58.542823 7518 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/3ffc8a7e-5a23-4600-bfbe-c723501fa8cd-client-ca\") pod \"3ffc8a7e-5a23-4600-bfbe-c723501fa8cd\" (UID: \"3ffc8a7e-5a23-4600-bfbe-c723501fa8cd\") "
Mar 13 12:45:58.542924 master-0 kubenswrapper[7518]: I0313 12:45:58.542911 7518 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t5zs6\" (UniqueName: \"kubernetes.io/projected/3ffc8a7e-5a23-4600-bfbe-c723501fa8cd-kube-api-access-t5zs6\") pod \"3ffc8a7e-5a23-4600-bfbe-c723501fa8cd\" (UID: \"3ffc8a7e-5a23-4600-bfbe-c723501fa8cd\") "
Mar 13 12:45:58.543013 master-0 kubenswrapper[7518]: I0313 12:45:58.543001 7518 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3ffc8a7e-5a23-4600-bfbe-c723501fa8cd-serving-cert\") pod \"3ffc8a7e-5a23-4600-bfbe-c723501fa8cd\" (UID: \"3ffc8a7e-5a23-4600-bfbe-c723501fa8cd\") "
Mar 13 12:45:58.543203 master-0
kubenswrapper[7518]: I0313 12:45:58.543186 7518 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kn8f2\" (UniqueName: \"kubernetes.io/projected/a454234a-6c8e-4916-81e8-c9e66cec9d31-kube-api-access-kn8f2\") pod \"controller-manager-54c79cbfcc-cxhmh\" (UID: \"a454234a-6c8e-4916-81e8-c9e66cec9d31\") " pod="openshift-controller-manager/controller-manager-54c79cbfcc-cxhmh" Mar 13 12:45:58.543327 master-0 kubenswrapper[7518]: I0313 12:45:58.543314 7518 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a454234a-6c8e-4916-81e8-c9e66cec9d31-client-ca\") pod \"controller-manager-54c79cbfcc-cxhmh\" (UID: \"a454234a-6c8e-4916-81e8-c9e66cec9d31\") " pod="openshift-controller-manager/controller-manager-54c79cbfcc-cxhmh" Mar 13 12:45:58.543417 master-0 kubenswrapper[7518]: I0313 12:45:58.543403 7518 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a454234a-6c8e-4916-81e8-c9e66cec9d31-serving-cert\") pod \"controller-manager-54c79cbfcc-cxhmh\" (UID: \"a454234a-6c8e-4916-81e8-c9e66cec9d31\") " pod="openshift-controller-manager/controller-manager-54c79cbfcc-cxhmh" Mar 13 12:45:58.543507 master-0 kubenswrapper[7518]: I0313 12:45:58.543493 7518 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a454234a-6c8e-4916-81e8-c9e66cec9d31-config\") pod \"controller-manager-54c79cbfcc-cxhmh\" (UID: \"a454234a-6c8e-4916-81e8-c9e66cec9d31\") " pod="openshift-controller-manager/controller-manager-54c79cbfcc-cxhmh" Mar 13 12:45:58.543606 master-0 kubenswrapper[7518]: I0313 12:45:58.543585 7518 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: 
\"kubernetes.io/configmap/a454234a-6c8e-4916-81e8-c9e66cec9d31-proxy-ca-bundles\") pod \"controller-manager-54c79cbfcc-cxhmh\" (UID: \"a454234a-6c8e-4916-81e8-c9e66cec9d31\") " pod="openshift-controller-manager/controller-manager-54c79cbfcc-cxhmh" Mar 13 12:45:58.543732 master-0 kubenswrapper[7518]: I0313 12:45:58.542832 7518 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3ffc8a7e-5a23-4600-bfbe-c723501fa8cd-config" (OuterVolumeSpecName: "config") pod "3ffc8a7e-5a23-4600-bfbe-c723501fa8cd" (UID: "3ffc8a7e-5a23-4600-bfbe-c723501fa8cd"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 13 12:45:58.543803 master-0 kubenswrapper[7518]: I0313 12:45:58.543332 7518 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3ffc8a7e-5a23-4600-bfbe-c723501fa8cd-client-ca" (OuterVolumeSpecName: "client-ca") pod "3ffc8a7e-5a23-4600-bfbe-c723501fa8cd" (UID: "3ffc8a7e-5a23-4600-bfbe-c723501fa8cd"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 13 12:45:58.543979 master-0 kubenswrapper[7518]: I0313 12:45:58.543964 7518 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-wtf6j container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 12:45:58.543979 master-0 kubenswrapper[7518]: [-]has-synced failed: reason withheld Mar 13 12:45:58.543979 master-0 kubenswrapper[7518]: [+]process-running ok Mar 13 12:45:58.543979 master-0 kubenswrapper[7518]: healthz check failed Mar 13 12:45:58.544189 master-0 kubenswrapper[7518]: I0313 12:45:58.544120 7518 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-wtf6j" podUID="45925a5e-41ae-4c19-b586-3151c7677612" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 12:45:58.545425 master-0 kubenswrapper[7518]: I0313 12:45:58.545393 7518 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ffc8a7e-5a23-4600-bfbe-c723501fa8cd-kube-api-access-t5zs6" (OuterVolumeSpecName: "kube-api-access-t5zs6") pod "3ffc8a7e-5a23-4600-bfbe-c723501fa8cd" (UID: "3ffc8a7e-5a23-4600-bfbe-c723501fa8cd"). InnerVolumeSpecName "kube-api-access-t5zs6". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 12:45:58.545872 master-0 kubenswrapper[7518]: I0313 12:45:58.545839 7518 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ffc8a7e-5a23-4600-bfbe-c723501fa8cd-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "3ffc8a7e-5a23-4600-bfbe-c723501fa8cd" (UID: "3ffc8a7e-5a23-4600-bfbe-c723501fa8cd"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 12:45:58.568047 master-0 kubenswrapper[7518]: I0313 12:45:58.567615 7518 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-68c48d4f7d-k7drw"] Mar 13 12:45:58.568540 master-0 kubenswrapper[7518]: I0313 12:45:58.568508 7518 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-68c48d4f7d-k7drw" Mar 13 12:45:58.571128 master-0 kubenswrapper[7518]: I0313 12:45:58.570934 7518 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-sk2p7" Mar 13 12:45:58.575854 master-0 kubenswrapper[7518]: I0313 12:45:58.575538 7518 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-68c48d4f7d-k7drw"] Mar 13 12:45:58.644561 master-0 kubenswrapper[7518]: I0313 12:45:58.644499 7518 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a454234a-6c8e-4916-81e8-c9e66cec9d31-proxy-ca-bundles\") pod \"controller-manager-54c79cbfcc-cxhmh\" (UID: \"a454234a-6c8e-4916-81e8-c9e66cec9d31\") " pod="openshift-controller-manager/controller-manager-54c79cbfcc-cxhmh" Mar 13 12:45:58.644780 master-0 kubenswrapper[7518]: I0313 12:45:58.644585 7518 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fmzhw\" (UniqueName: \"kubernetes.io/projected/18ffa620-dacc-4b09-be04-2c325f860813-kube-api-access-fmzhw\") pod \"route-controller-manager-68c48d4f7d-k7drw\" (UID: \"18ffa620-dacc-4b09-be04-2c325f860813\") " pod="openshift-route-controller-manager/route-controller-manager-68c48d4f7d-k7drw" Mar 13 12:45:58.644825 master-0 kubenswrapper[7518]: I0313 12:45:58.644793 7518 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/18ffa620-dacc-4b09-be04-2c325f860813-serving-cert\") pod \"route-controller-manager-68c48d4f7d-k7drw\" (UID: \"18ffa620-dacc-4b09-be04-2c325f860813\") " pod="openshift-route-controller-manager/route-controller-manager-68c48d4f7d-k7drw" Mar 13 12:45:58.644985 master-0 kubenswrapper[7518]: I0313 12:45:58.644900 7518 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kn8f2\" (UniqueName: \"kubernetes.io/projected/a454234a-6c8e-4916-81e8-c9e66cec9d31-kube-api-access-kn8f2\") pod \"controller-manager-54c79cbfcc-cxhmh\" (UID: \"a454234a-6c8e-4916-81e8-c9e66cec9d31\") " pod="openshift-controller-manager/controller-manager-54c79cbfcc-cxhmh" Mar 13 12:45:58.645450 master-0 kubenswrapper[7518]: I0313 12:45:58.645413 7518 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/18ffa620-dacc-4b09-be04-2c325f860813-client-ca\") pod \"route-controller-manager-68c48d4f7d-k7drw\" (UID: \"18ffa620-dacc-4b09-be04-2c325f860813\") " pod="openshift-route-controller-manager/route-controller-manager-68c48d4f7d-k7drw" Mar 13 12:45:58.645566 master-0 kubenswrapper[7518]: I0313 12:45:58.645531 7518 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a454234a-6c8e-4916-81e8-c9e66cec9d31-client-ca\") pod \"controller-manager-54c79cbfcc-cxhmh\" (UID: \"a454234a-6c8e-4916-81e8-c9e66cec9d31\") " pod="openshift-controller-manager/controller-manager-54c79cbfcc-cxhmh" Mar 13 12:45:58.645629 master-0 kubenswrapper[7518]: I0313 12:45:58.645594 7518 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a454234a-6c8e-4916-81e8-c9e66cec9d31-serving-cert\") pod \"controller-manager-54c79cbfcc-cxhmh\" 
(UID: \"a454234a-6c8e-4916-81e8-c9e66cec9d31\") " pod="openshift-controller-manager/controller-manager-54c79cbfcc-cxhmh" Mar 13 12:45:58.646201 master-0 kubenswrapper[7518]: I0313 12:45:58.646161 7518 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/18ffa620-dacc-4b09-be04-2c325f860813-config\") pod \"route-controller-manager-68c48d4f7d-k7drw\" (UID: \"18ffa620-dacc-4b09-be04-2c325f860813\") " pod="openshift-route-controller-manager/route-controller-manager-68c48d4f7d-k7drw" Mar 13 12:45:58.646288 master-0 kubenswrapper[7518]: I0313 12:45:58.646240 7518 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a454234a-6c8e-4916-81e8-c9e66cec9d31-config\") pod \"controller-manager-54c79cbfcc-cxhmh\" (UID: \"a454234a-6c8e-4916-81e8-c9e66cec9d31\") " pod="openshift-controller-manager/controller-manager-54c79cbfcc-cxhmh" Mar 13 12:45:58.646334 master-0 kubenswrapper[7518]: I0313 12:45:58.646281 7518 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a454234a-6c8e-4916-81e8-c9e66cec9d31-client-ca\") pod \"controller-manager-54c79cbfcc-cxhmh\" (UID: \"a454234a-6c8e-4916-81e8-c9e66cec9d31\") " pod="openshift-controller-manager/controller-manager-54c79cbfcc-cxhmh" Mar 13 12:45:58.646334 master-0 kubenswrapper[7518]: I0313 12:45:58.646173 7518 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a454234a-6c8e-4916-81e8-c9e66cec9d31-proxy-ca-bundles\") pod \"controller-manager-54c79cbfcc-cxhmh\" (UID: \"a454234a-6c8e-4916-81e8-c9e66cec9d31\") " pod="openshift-controller-manager/controller-manager-54c79cbfcc-cxhmh" Mar 13 12:45:58.646494 master-0 kubenswrapper[7518]: I0313 12:45:58.646467 7518 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t5zs6\" 
(UniqueName: \"kubernetes.io/projected/3ffc8a7e-5a23-4600-bfbe-c723501fa8cd-kube-api-access-t5zs6\") on node \"master-0\" DevicePath \"\"" Mar 13 12:45:58.646494 master-0 kubenswrapper[7518]: I0313 12:45:58.646487 7518 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3ffc8a7e-5a23-4600-bfbe-c723501fa8cd-serving-cert\") on node \"master-0\" DevicePath \"\"" Mar 13 12:45:58.646591 master-0 kubenswrapper[7518]: I0313 12:45:58.646501 7518 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3ffc8a7e-5a23-4600-bfbe-c723501fa8cd-config\") on node \"master-0\" DevicePath \"\"" Mar 13 12:45:58.646591 master-0 kubenswrapper[7518]: I0313 12:45:58.646512 7518 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/3ffc8a7e-5a23-4600-bfbe-c723501fa8cd-client-ca\") on node \"master-0\" DevicePath \"\"" Mar 13 12:45:58.648179 master-0 kubenswrapper[7518]: I0313 12:45:58.647837 7518 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a454234a-6c8e-4916-81e8-c9e66cec9d31-config\") pod \"controller-manager-54c79cbfcc-cxhmh\" (UID: \"a454234a-6c8e-4916-81e8-c9e66cec9d31\") " pod="openshift-controller-manager/controller-manager-54c79cbfcc-cxhmh" Mar 13 12:45:58.648906 master-0 kubenswrapper[7518]: I0313 12:45:58.648871 7518 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a454234a-6c8e-4916-81e8-c9e66cec9d31-serving-cert\") pod \"controller-manager-54c79cbfcc-cxhmh\" (UID: \"a454234a-6c8e-4916-81e8-c9e66cec9d31\") " pod="openshift-controller-manager/controller-manager-54c79cbfcc-cxhmh" Mar 13 12:45:58.660197 master-0 kubenswrapper[7518]: I0313 12:45:58.660158 7518 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kn8f2\" (UniqueName: 
\"kubernetes.io/projected/a454234a-6c8e-4916-81e8-c9e66cec9d31-kube-api-access-kn8f2\") pod \"controller-manager-54c79cbfcc-cxhmh\" (UID: \"a454234a-6c8e-4916-81e8-c9e66cec9d31\") " pod="openshift-controller-manager/controller-manager-54c79cbfcc-cxhmh" Mar 13 12:45:58.748215 master-0 kubenswrapper[7518]: I0313 12:45:58.748080 7518 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fmzhw\" (UniqueName: \"kubernetes.io/projected/18ffa620-dacc-4b09-be04-2c325f860813-kube-api-access-fmzhw\") pod \"route-controller-manager-68c48d4f7d-k7drw\" (UID: \"18ffa620-dacc-4b09-be04-2c325f860813\") " pod="openshift-route-controller-manager/route-controller-manager-68c48d4f7d-k7drw" Mar 13 12:45:58.748215 master-0 kubenswrapper[7518]: I0313 12:45:58.748183 7518 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/18ffa620-dacc-4b09-be04-2c325f860813-serving-cert\") pod \"route-controller-manager-68c48d4f7d-k7drw\" (UID: \"18ffa620-dacc-4b09-be04-2c325f860813\") " pod="openshift-route-controller-manager/route-controller-manager-68c48d4f7d-k7drw" Mar 13 12:45:58.748552 master-0 kubenswrapper[7518]: I0313 12:45:58.748257 7518 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/18ffa620-dacc-4b09-be04-2c325f860813-client-ca\") pod \"route-controller-manager-68c48d4f7d-k7drw\" (UID: \"18ffa620-dacc-4b09-be04-2c325f860813\") " pod="openshift-route-controller-manager/route-controller-manager-68c48d4f7d-k7drw" Mar 13 12:45:58.748552 master-0 kubenswrapper[7518]: I0313 12:45:58.748308 7518 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/18ffa620-dacc-4b09-be04-2c325f860813-config\") pod \"route-controller-manager-68c48d4f7d-k7drw\" (UID: \"18ffa620-dacc-4b09-be04-2c325f860813\") " 
pod="openshift-route-controller-manager/route-controller-manager-68c48d4f7d-k7drw" Mar 13 12:45:58.749422 master-0 kubenswrapper[7518]: I0313 12:45:58.749304 7518 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/18ffa620-dacc-4b09-be04-2c325f860813-client-ca\") pod \"route-controller-manager-68c48d4f7d-k7drw\" (UID: \"18ffa620-dacc-4b09-be04-2c325f860813\") " pod="openshift-route-controller-manager/route-controller-manager-68c48d4f7d-k7drw" Mar 13 12:45:58.749920 master-0 kubenswrapper[7518]: I0313 12:45:58.749882 7518 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/18ffa620-dacc-4b09-be04-2c325f860813-config\") pod \"route-controller-manager-68c48d4f7d-k7drw\" (UID: \"18ffa620-dacc-4b09-be04-2c325f860813\") " pod="openshift-route-controller-manager/route-controller-manager-68c48d4f7d-k7drw" Mar 13 12:45:58.751943 master-0 kubenswrapper[7518]: I0313 12:45:58.751907 7518 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/18ffa620-dacc-4b09-be04-2c325f860813-serving-cert\") pod \"route-controller-manager-68c48d4f7d-k7drw\" (UID: \"18ffa620-dacc-4b09-be04-2c325f860813\") " pod="openshift-route-controller-manager/route-controller-manager-68c48d4f7d-k7drw" Mar 13 12:45:58.800863 master-0 kubenswrapper[7518]: I0313 12:45:58.800798 7518 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fmzhw\" (UniqueName: \"kubernetes.io/projected/18ffa620-dacc-4b09-be04-2c325f860813-kube-api-access-fmzhw\") pod \"route-controller-manager-68c48d4f7d-k7drw\" (UID: \"18ffa620-dacc-4b09-be04-2c325f860813\") " pod="openshift-route-controller-manager/route-controller-manager-68c48d4f7d-k7drw" Mar 13 12:45:58.843361 master-0 kubenswrapper[7518]: I0313 12:45:58.843127 7518 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-54c79cbfcc-cxhmh" Mar 13 12:45:58.891580 master-0 kubenswrapper[7518]: I0313 12:45:58.891388 7518 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-68c48d4f7d-k7drw" Mar 13 12:45:58.921023 master-0 kubenswrapper[7518]: I0313 12:45:58.920974 7518 generic.go:334] "Generic (PLEG): container finished" podID="3ffc8a7e-5a23-4600-bfbe-c723501fa8cd" containerID="1bcda9a5c5fb5278bd2cbc52137bb8493c2a066cfbf0a4cfef7a0c96ad56b754" exitCode=0 Mar 13 12:45:58.921119 master-0 kubenswrapper[7518]: I0313 12:45:58.921067 7518 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-784b8dc7f8-4czh5" event={"ID":"3ffc8a7e-5a23-4600-bfbe-c723501fa8cd","Type":"ContainerDied","Data":"1bcda9a5c5fb5278bd2cbc52137bb8493c2a066cfbf0a4cfef7a0c96ad56b754"} Mar 13 12:45:58.921119 master-0 kubenswrapper[7518]: I0313 12:45:58.921098 7518 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-784b8dc7f8-4czh5" event={"ID":"3ffc8a7e-5a23-4600-bfbe-c723501fa8cd","Type":"ContainerDied","Data":"235ac172fba643a3622b4550bc85bdd02f44fcf97f67a04413c514187d6799f5"} Mar 13 12:45:58.921119 master-0 kubenswrapper[7518]: I0313 12:45:58.921115 7518 scope.go:117] "RemoveContainer" containerID="1bcda9a5c5fb5278bd2cbc52137bb8493c2a066cfbf0a4cfef7a0c96ad56b754" Mar 13 12:45:58.921289 master-0 kubenswrapper[7518]: I0313 12:45:58.921257 7518 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-784b8dc7f8-4czh5" Mar 13 12:45:58.925082 master-0 kubenswrapper[7518]: I0313 12:45:58.925048 7518 generic.go:334] "Generic (PLEG): container finished" podID="b21ecc52-8c8f-43de-84bb-13bd8eb305b6" containerID="5fed8a223c8bd85462011864e93e488b62bf27c2022fb6a3984d126a69212081" exitCode=0 Mar 13 12:45:58.925082 master-0 kubenswrapper[7518]: I0313 12:45:58.925080 7518 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-8b998ff89-g8rgp" event={"ID":"b21ecc52-8c8f-43de-84bb-13bd8eb305b6","Type":"ContainerDied","Data":"5fed8a223c8bd85462011864e93e488b62bf27c2022fb6a3984d126a69212081"} Mar 13 12:45:58.925277 master-0 kubenswrapper[7518]: I0313 12:45:58.925100 7518 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-8b998ff89-g8rgp" event={"ID":"b21ecc52-8c8f-43de-84bb-13bd8eb305b6","Type":"ContainerDied","Data":"bb145fda395c4e8c32ad5949ac0c69d58287605de6532ecc4ce10c6d8c224e53"} Mar 13 12:45:58.925277 master-0 kubenswrapper[7518]: I0313 12:45:58.925153 7518 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-8b998ff89-g8rgp" Mar 13 12:45:58.948068 master-0 kubenswrapper[7518]: I0313 12:45:58.947434 7518 scope.go:117] "RemoveContainer" containerID="1bcda9a5c5fb5278bd2cbc52137bb8493c2a066cfbf0a4cfef7a0c96ad56b754" Mar 13 12:45:58.948740 master-0 kubenswrapper[7518]: E0313 12:45:58.948208 7518 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1bcda9a5c5fb5278bd2cbc52137bb8493c2a066cfbf0a4cfef7a0c96ad56b754\": container with ID starting with 1bcda9a5c5fb5278bd2cbc52137bb8493c2a066cfbf0a4cfef7a0c96ad56b754 not found: ID does not exist" containerID="1bcda9a5c5fb5278bd2cbc52137bb8493c2a066cfbf0a4cfef7a0c96ad56b754" Mar 13 12:45:58.948740 master-0 kubenswrapper[7518]: I0313 12:45:58.948288 7518 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1bcda9a5c5fb5278bd2cbc52137bb8493c2a066cfbf0a4cfef7a0c96ad56b754"} err="failed to get container status \"1bcda9a5c5fb5278bd2cbc52137bb8493c2a066cfbf0a4cfef7a0c96ad56b754\": rpc error: code = NotFound desc = could not find container \"1bcda9a5c5fb5278bd2cbc52137bb8493c2a066cfbf0a4cfef7a0c96ad56b754\": container with ID starting with 1bcda9a5c5fb5278bd2cbc52137bb8493c2a066cfbf0a4cfef7a0c96ad56b754 not found: ID does not exist" Mar 13 12:45:58.948740 master-0 kubenswrapper[7518]: I0313 12:45:58.948319 7518 scope.go:117] "RemoveContainer" containerID="5fed8a223c8bd85462011864e93e488b62bf27c2022fb6a3984d126a69212081" Mar 13 12:45:58.986230 master-0 kubenswrapper[7518]: I0313 12:45:58.986058 7518 scope.go:117] "RemoveContainer" containerID="5fed8a223c8bd85462011864e93e488b62bf27c2022fb6a3984d126a69212081" Mar 13 12:45:58.987456 master-0 kubenswrapper[7518]: E0313 12:45:58.987414 7518 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"5fed8a223c8bd85462011864e93e488b62bf27c2022fb6a3984d126a69212081\": container with ID starting with 5fed8a223c8bd85462011864e93e488b62bf27c2022fb6a3984d126a69212081 not found: ID does not exist" containerID="5fed8a223c8bd85462011864e93e488b62bf27c2022fb6a3984d126a69212081" Mar 13 12:45:58.987639 master-0 kubenswrapper[7518]: I0313 12:45:58.987470 7518 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5fed8a223c8bd85462011864e93e488b62bf27c2022fb6a3984d126a69212081"} err="failed to get container status \"5fed8a223c8bd85462011864e93e488b62bf27c2022fb6a3984d126a69212081\": rpc error: code = NotFound desc = could not find container \"5fed8a223c8bd85462011864e93e488b62bf27c2022fb6a3984d126a69212081\": container with ID starting with 5fed8a223c8bd85462011864e93e488b62bf27c2022fb6a3984d126a69212081 not found: ID does not exist" Mar 13 12:45:58.998886 master-0 kubenswrapper[7518]: I0313 12:45:58.998836 7518 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-784b8dc7f8-4czh5"] Mar 13 12:45:59.005451 master-0 kubenswrapper[7518]: I0313 12:45:59.005404 7518 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-784b8dc7f8-4czh5"] Mar 13 12:45:59.021741 master-0 kubenswrapper[7518]: I0313 12:45:59.021647 7518 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-8b998ff89-g8rgp"] Mar 13 12:45:59.025824 master-0 kubenswrapper[7518]: I0313 12:45:59.025772 7518 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-8b998ff89-g8rgp"] Mar 13 12:45:59.247096 master-0 kubenswrapper[7518]: I0313 12:45:59.246976 7518 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-54c79cbfcc-cxhmh"] Mar 13 12:45:59.255560 master-0 kubenswrapper[7518]: W0313 12:45:59.255497 7518 
manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda454234a_6c8e_4916_81e8_c9e66cec9d31.slice/crio-02db34ef289b2a257fb361c5e1190f74ebf2b35e8d2ff6177192f08616db19aa WatchSource:0}: Error finding container 02db34ef289b2a257fb361c5e1190f74ebf2b35e8d2ff6177192f08616db19aa: Status 404 returned error can't find the container with id 02db34ef289b2a257fb361c5e1190f74ebf2b35e8d2ff6177192f08616db19aa Mar 13 12:45:59.314451 master-0 kubenswrapper[7518]: I0313 12:45:59.314299 7518 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-68c48d4f7d-k7drw"] Mar 13 12:45:59.321634 master-0 kubenswrapper[7518]: W0313 12:45:59.321590 7518 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod18ffa620_dacc_4b09_be04_2c325f860813.slice/crio-4923fdf0bf7675fa9b87a52fcb37d82a429121c63cdefd19c58f0e547211a622 WatchSource:0}: Error finding container 4923fdf0bf7675fa9b87a52fcb37d82a429121c63cdefd19c58f0e547211a622: Status 404 returned error can't find the container with id 4923fdf0bf7675fa9b87a52fcb37d82a429121c63cdefd19c58f0e547211a622 Mar 13 12:45:59.541245 master-0 kubenswrapper[7518]: I0313 12:45:59.541193 7518 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-wtf6j container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 12:45:59.541245 master-0 kubenswrapper[7518]: [-]has-synced failed: reason withheld Mar 13 12:45:59.541245 master-0 kubenswrapper[7518]: [+]process-running ok Mar 13 12:45:59.541245 master-0 kubenswrapper[7518]: healthz check failed Mar 13 12:45:59.541514 master-0 kubenswrapper[7518]: I0313 12:45:59.541260 7518 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-wtf6j" 
podUID="45925a5e-41ae-4c19-b586-3151c7677612" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 12:45:59.605948 master-0 kubenswrapper[7518]: I0313 12:45:59.605910 7518 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3ffc8a7e-5a23-4600-bfbe-c723501fa8cd" path="/var/lib/kubelet/pods/3ffc8a7e-5a23-4600-bfbe-c723501fa8cd/volumes" Mar 13 12:45:59.606566 master-0 kubenswrapper[7518]: I0313 12:45:59.606543 7518 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b21ecc52-8c8f-43de-84bb-13bd8eb305b6" path="/var/lib/kubelet/pods/b21ecc52-8c8f-43de-84bb-13bd8eb305b6/volumes" Mar 13 12:45:59.933011 master-0 kubenswrapper[7518]: I0313 12:45:59.932888 7518 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-54c79cbfcc-cxhmh" event={"ID":"a454234a-6c8e-4916-81e8-c9e66cec9d31","Type":"ContainerStarted","Data":"f12fef74127c1c2b2f8ceb210e754cc92619ab36c1f145fe9d244f8d84cfb88c"} Mar 13 12:45:59.933011 master-0 kubenswrapper[7518]: I0313 12:45:59.932933 7518 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-54c79cbfcc-cxhmh" event={"ID":"a454234a-6c8e-4916-81e8-c9e66cec9d31","Type":"ContainerStarted","Data":"02db34ef289b2a257fb361c5e1190f74ebf2b35e8d2ff6177192f08616db19aa"} Mar 13 12:45:59.933350 master-0 kubenswrapper[7518]: I0313 12:45:59.933327 7518 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-54c79cbfcc-cxhmh" Mar 13 12:45:59.934929 master-0 kubenswrapper[7518]: I0313 12:45:59.934892 7518 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-68c48d4f7d-k7drw" event={"ID":"18ffa620-dacc-4b09-be04-2c325f860813","Type":"ContainerStarted","Data":"bf5764c3d8fba8c40cba1931dc4f8b36f32584d349bb0fa8f02b7c483a7626de"} Mar 13 12:45:59.934999 master-0 
kubenswrapper[7518]: I0313 12:45:59.934935 7518 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-68c48d4f7d-k7drw" event={"ID":"18ffa620-dacc-4b09-be04-2c325f860813","Type":"ContainerStarted","Data":"4923fdf0bf7675fa9b87a52fcb37d82a429121c63cdefd19c58f0e547211a622"} Mar 13 12:45:59.935096 master-0 kubenswrapper[7518]: I0313 12:45:59.935075 7518 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-68c48d4f7d-k7drw" Mar 13 12:45:59.939476 master-0 kubenswrapper[7518]: I0313 12:45:59.939437 7518 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-54c79cbfcc-cxhmh" Mar 13 12:45:59.964568 master-0 kubenswrapper[7518]: I0313 12:45:59.964501 7518 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-54c79cbfcc-cxhmh" podStartSLOduration=1.964486409 podStartE2EDuration="1.964486409s" podCreationTimestamp="2026-03-13 12:45:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 12:45:59.962278261 +0000 UTC m=+514.595347448" watchObservedRunningTime="2026-03-13 12:45:59.964486409 +0000 UTC m=+514.597555596" Mar 13 12:46:00.005837 master-0 kubenswrapper[7518]: I0313 12:46:00.005754 7518 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-68c48d4f7d-k7drw" podStartSLOduration=2.005728467 podStartE2EDuration="2.005728467s" podCreationTimestamp="2026-03-13 12:45:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 12:46:00.001659941 +0000 UTC m=+514.634729148" watchObservedRunningTime="2026-03-13 12:46:00.005728467 +0000 UTC 
m=+514.638797664" Mar 13 12:46:00.065414 master-0 kubenswrapper[7518]: I0313 12:46:00.065363 7518 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-68c48d4f7d-k7drw" Mar 13 12:46:00.540811 master-0 kubenswrapper[7518]: I0313 12:46:00.540753 7518 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-wtf6j container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 12:46:00.540811 master-0 kubenswrapper[7518]: [-]has-synced failed: reason withheld Mar 13 12:46:00.540811 master-0 kubenswrapper[7518]: [+]process-running ok Mar 13 12:46:00.540811 master-0 kubenswrapper[7518]: healthz check failed Mar 13 12:46:00.541205 master-0 kubenswrapper[7518]: I0313 12:46:00.540835 7518 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-wtf6j" podUID="45925a5e-41ae-4c19-b586-3151c7677612" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 12:46:01.541345 master-0 kubenswrapper[7518]: I0313 12:46:01.541241 7518 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-wtf6j container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 12:46:01.541345 master-0 kubenswrapper[7518]: [-]has-synced failed: reason withheld Mar 13 12:46:01.541345 master-0 kubenswrapper[7518]: [+]process-running ok Mar 13 12:46:01.541345 master-0 kubenswrapper[7518]: healthz check failed Mar 13 12:46:01.542362 master-0 kubenswrapper[7518]: I0313 12:46:01.541350 7518 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-wtf6j" podUID="45925a5e-41ae-4c19-b586-3151c7677612" containerName="router" probeResult="failure" output="HTTP 
probe failed with statuscode: 500" Mar 13 12:46:02.541478 master-0 kubenswrapper[7518]: I0313 12:46:02.541427 7518 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-wtf6j container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 12:46:02.541478 master-0 kubenswrapper[7518]: [-]has-synced failed: reason withheld Mar 13 12:46:02.541478 master-0 kubenswrapper[7518]: [+]process-running ok Mar 13 12:46:02.541478 master-0 kubenswrapper[7518]: healthz check failed Mar 13 12:46:02.542054 master-0 kubenswrapper[7518]: I0313 12:46:02.542029 7518 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-wtf6j" podUID="45925a5e-41ae-4c19-b586-3151c7677612" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 12:46:02.892061 master-0 kubenswrapper[7518]: I0313 12:46:02.891907 7518 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/installer-4-master-0"] Mar 13 12:46:02.893032 master-0 kubenswrapper[7518]: I0313 12:46:02.893002 7518 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/installer-4-master-0" Mar 13 12:46:02.895166 master-0 kubenswrapper[7518]: I0313 12:46:02.895116 7518 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler"/"kube-root-ca.crt" Mar 13 12:46:02.895511 master-0 kubenswrapper[7518]: I0313 12:46:02.895483 7518 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler"/"installer-sa-dockercfg-lgm6s" Mar 13 12:46:02.906401 master-0 kubenswrapper[7518]: I0313 12:46:02.906062 7518 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler/installer-4-master-0"] Mar 13 12:46:02.917206 master-0 kubenswrapper[7518]: I0313 12:46:02.917116 7518 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/8f8543a5-1639-4140-a18d-8b0c96821bae-kube-api-access\") pod \"installer-4-master-0\" (UID: \"8f8543a5-1639-4140-a18d-8b0c96821bae\") " pod="openshift-kube-scheduler/installer-4-master-0" Mar 13 12:46:02.917369 master-0 kubenswrapper[7518]: I0313 12:46:02.917211 7518 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/8f8543a5-1639-4140-a18d-8b0c96821bae-var-lock\") pod \"installer-4-master-0\" (UID: \"8f8543a5-1639-4140-a18d-8b0c96821bae\") " pod="openshift-kube-scheduler/installer-4-master-0" Mar 13 12:46:02.917369 master-0 kubenswrapper[7518]: I0313 12:46:02.917243 7518 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/8f8543a5-1639-4140-a18d-8b0c96821bae-kubelet-dir\") pod \"installer-4-master-0\" (UID: \"8f8543a5-1639-4140-a18d-8b0c96821bae\") " pod="openshift-kube-scheduler/installer-4-master-0" Mar 13 12:46:03.022469 master-0 kubenswrapper[7518]: I0313 12:46:03.022179 7518 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/8f8543a5-1639-4140-a18d-8b0c96821bae-kube-api-access\") pod \"installer-4-master-0\" (UID: \"8f8543a5-1639-4140-a18d-8b0c96821bae\") " pod="openshift-kube-scheduler/installer-4-master-0" Mar 13 12:46:03.022469 master-0 kubenswrapper[7518]: I0313 12:46:03.022251 7518 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/8f8543a5-1639-4140-a18d-8b0c96821bae-var-lock\") pod \"installer-4-master-0\" (UID: \"8f8543a5-1639-4140-a18d-8b0c96821bae\") " pod="openshift-kube-scheduler/installer-4-master-0" Mar 13 12:46:03.022469 master-0 kubenswrapper[7518]: I0313 12:46:03.022290 7518 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/8f8543a5-1639-4140-a18d-8b0c96821bae-kubelet-dir\") pod \"installer-4-master-0\" (UID: \"8f8543a5-1639-4140-a18d-8b0c96821bae\") " pod="openshift-kube-scheduler/installer-4-master-0" Mar 13 12:46:03.022469 master-0 kubenswrapper[7518]: I0313 12:46:03.022464 7518 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/8f8543a5-1639-4140-a18d-8b0c96821bae-var-lock\") pod \"installer-4-master-0\" (UID: \"8f8543a5-1639-4140-a18d-8b0c96821bae\") " pod="openshift-kube-scheduler/installer-4-master-0" Mar 13 12:46:03.022987 master-0 kubenswrapper[7518]: I0313 12:46:03.022513 7518 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/8f8543a5-1639-4140-a18d-8b0c96821bae-kubelet-dir\") pod \"installer-4-master-0\" (UID: \"8f8543a5-1639-4140-a18d-8b0c96821bae\") " pod="openshift-kube-scheduler/installer-4-master-0" Mar 13 12:46:03.039661 master-0 kubenswrapper[7518]: I0313 12:46:03.039601 7518 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access\" (UniqueName: \"kubernetes.io/projected/8f8543a5-1639-4140-a18d-8b0c96821bae-kube-api-access\") pod \"installer-4-master-0\" (UID: \"8f8543a5-1639-4140-a18d-8b0c96821bae\") " pod="openshift-kube-scheduler/installer-4-master-0" Mar 13 12:46:03.217512 master-0 kubenswrapper[7518]: I0313 12:46:03.217376 7518 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/installer-4-master-0" Mar 13 12:46:03.541424 master-0 kubenswrapper[7518]: I0313 12:46:03.541355 7518 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-wtf6j container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 12:46:03.541424 master-0 kubenswrapper[7518]: [-]has-synced failed: reason withheld Mar 13 12:46:03.541424 master-0 kubenswrapper[7518]: [+]process-running ok Mar 13 12:46:03.541424 master-0 kubenswrapper[7518]: healthz check failed Mar 13 12:46:03.542354 master-0 kubenswrapper[7518]: I0313 12:46:03.541449 7518 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-wtf6j" podUID="45925a5e-41ae-4c19-b586-3151c7677612" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 12:46:04.280211 master-0 kubenswrapper[7518]: I0313 12:46:04.280179 7518 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler/installer-4-master-0"] Mar 13 12:46:04.283587 master-0 kubenswrapper[7518]: W0313 12:46:04.283547 7518 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod8f8543a5_1639_4140_a18d_8b0c96821bae.slice/crio-90d62dc62426f86839fab6dfcb69950974991422a3bbd33e6f3fd2c0bd1c8644 WatchSource:0}: Error finding container 90d62dc62426f86839fab6dfcb69950974991422a3bbd33e6f3fd2c0bd1c8644: Status 404 returned error can't find the container with id 
90d62dc62426f86839fab6dfcb69950974991422a3bbd33e6f3fd2c0bd1c8644 Mar 13 12:46:04.542157 master-0 kubenswrapper[7518]: I0313 12:46:04.541803 7518 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-wtf6j container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 12:46:04.542157 master-0 kubenswrapper[7518]: [-]has-synced failed: reason withheld Mar 13 12:46:04.542157 master-0 kubenswrapper[7518]: [+]process-running ok Mar 13 12:46:04.542157 master-0 kubenswrapper[7518]: healthz check failed Mar 13 12:46:04.542157 master-0 kubenswrapper[7518]: I0313 12:46:04.541869 7518 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-wtf6j" podUID="45925a5e-41ae-4c19-b586-3151c7677612" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 12:46:04.969995 master-0 kubenswrapper[7518]: I0313 12:46:04.969871 7518 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-4-master-0" event={"ID":"8f8543a5-1639-4140-a18d-8b0c96821bae","Type":"ContainerStarted","Data":"a813a663a398e05e616fe550c674646a6498ff5442d82cbd7adbf48594546e77"} Mar 13 12:46:04.970251 master-0 kubenswrapper[7518]: I0313 12:46:04.970232 7518 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-4-master-0" event={"ID":"8f8543a5-1639-4140-a18d-8b0c96821bae","Type":"ContainerStarted","Data":"90d62dc62426f86839fab6dfcb69950974991422a3bbd33e6f3fd2c0bd1c8644"} Mar 13 12:46:04.990344 master-0 kubenswrapper[7518]: I0313 12:46:04.990284 7518 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/installer-4-master-0" podStartSLOduration=2.9902598129999998 podStartE2EDuration="2.990259813s" podCreationTimestamp="2026-03-13 12:46:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 
UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 12:46:04.986631799 +0000 UTC m=+519.619700986" watchObservedRunningTime="2026-03-13 12:46:04.990259813 +0000 UTC m=+519.623329000" Mar 13 12:46:05.541535 master-0 kubenswrapper[7518]: I0313 12:46:05.541481 7518 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-wtf6j container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 12:46:05.541535 master-0 kubenswrapper[7518]: [-]has-synced failed: reason withheld Mar 13 12:46:05.541535 master-0 kubenswrapper[7518]: [+]process-running ok Mar 13 12:46:05.541535 master-0 kubenswrapper[7518]: healthz check failed Mar 13 12:46:05.541971 master-0 kubenswrapper[7518]: I0313 12:46:05.541576 7518 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-wtf6j" podUID="45925a5e-41ae-4c19-b586-3151c7677612" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 12:46:06.542129 master-0 kubenswrapper[7518]: I0313 12:46:06.542075 7518 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-wtf6j container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 12:46:06.542129 master-0 kubenswrapper[7518]: [-]has-synced failed: reason withheld Mar 13 12:46:06.542129 master-0 kubenswrapper[7518]: [+]process-running ok Mar 13 12:46:06.542129 master-0 kubenswrapper[7518]: healthz check failed Mar 13 12:46:06.543302 master-0 kubenswrapper[7518]: I0313 12:46:06.543262 7518 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-wtf6j" podUID="45925a5e-41ae-4c19-b586-3151c7677612" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" 
Mar 13 12:46:07.542028 master-0 kubenswrapper[7518]: I0313 12:46:07.541959 7518 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-wtf6j container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 12:46:07.542028 master-0 kubenswrapper[7518]: [-]has-synced failed: reason withheld Mar 13 12:46:07.542028 master-0 kubenswrapper[7518]: [+]process-running ok Mar 13 12:46:07.542028 master-0 kubenswrapper[7518]: healthz check failed Mar 13 12:46:07.542829 master-0 kubenswrapper[7518]: I0313 12:46:07.542069 7518 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-wtf6j" podUID="45925a5e-41ae-4c19-b586-3151c7677612" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 12:46:07.750882 master-0 kubenswrapper[7518]: I0313 12:46:07.750822 7518 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/cni-sysctl-allowlist-ds-zhgx2"] Mar 13 12:46:07.752261 master-0 kubenswrapper[7518]: I0313 12:46:07.752223 7518 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/cni-sysctl-allowlist-ds-zhgx2" Mar 13 12:46:07.763493 master-0 kubenswrapper[7518]: I0313 12:46:07.763451 7518 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-fmsrc" Mar 13 12:46:07.763948 master-0 kubenswrapper[7518]: I0313 12:46:07.763457 7518 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-sysctl-allowlist" Mar 13 12:46:07.921462 master-0 kubenswrapper[7518]: I0313 12:46:07.921332 7518 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ready\" (UniqueName: \"kubernetes.io/empty-dir/e0bb348a-f72d-462e-aec9-04e4600cc7f0-ready\") pod \"cni-sysctl-allowlist-ds-zhgx2\" (UID: \"e0bb348a-f72d-462e-aec9-04e4600cc7f0\") " pod="openshift-multus/cni-sysctl-allowlist-ds-zhgx2" Mar 13 12:46:07.921462 master-0 kubenswrapper[7518]: I0313 12:46:07.921414 7518 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/e0bb348a-f72d-462e-aec9-04e4600cc7f0-cni-sysctl-allowlist\") pod \"cni-sysctl-allowlist-ds-zhgx2\" (UID: \"e0bb348a-f72d-462e-aec9-04e4600cc7f0\") " pod="openshift-multus/cni-sysctl-allowlist-ds-zhgx2" Mar 13 12:46:07.921705 master-0 kubenswrapper[7518]: I0313 12:46:07.921489 7518 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/e0bb348a-f72d-462e-aec9-04e4600cc7f0-tuning-conf-dir\") pod \"cni-sysctl-allowlist-ds-zhgx2\" (UID: \"e0bb348a-f72d-462e-aec9-04e4600cc7f0\") " pod="openshift-multus/cni-sysctl-allowlist-ds-zhgx2" Mar 13 12:46:07.921705 master-0 kubenswrapper[7518]: I0313 12:46:07.921525 7518 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4szv8\" (UniqueName: 
\"kubernetes.io/projected/e0bb348a-f72d-462e-aec9-04e4600cc7f0-kube-api-access-4szv8\") pod \"cni-sysctl-allowlist-ds-zhgx2\" (UID: \"e0bb348a-f72d-462e-aec9-04e4600cc7f0\") " pod="openshift-multus/cni-sysctl-allowlist-ds-zhgx2" Mar 13 12:46:08.267163 master-0 kubenswrapper[7518]: I0313 12:46:08.266650 7518 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/e0bb348a-f72d-462e-aec9-04e4600cc7f0-tuning-conf-dir\") pod \"cni-sysctl-allowlist-ds-zhgx2\" (UID: \"e0bb348a-f72d-462e-aec9-04e4600cc7f0\") " pod="openshift-multus/cni-sysctl-allowlist-ds-zhgx2" Mar 13 12:46:08.267163 master-0 kubenswrapper[7518]: I0313 12:46:08.266700 7518 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4szv8\" (UniqueName: \"kubernetes.io/projected/e0bb348a-f72d-462e-aec9-04e4600cc7f0-kube-api-access-4szv8\") pod \"cni-sysctl-allowlist-ds-zhgx2\" (UID: \"e0bb348a-f72d-462e-aec9-04e4600cc7f0\") " pod="openshift-multus/cni-sysctl-allowlist-ds-zhgx2" Mar 13 12:46:08.267163 master-0 kubenswrapper[7518]: I0313 12:46:08.266739 7518 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ready\" (UniqueName: \"kubernetes.io/empty-dir/e0bb348a-f72d-462e-aec9-04e4600cc7f0-ready\") pod \"cni-sysctl-allowlist-ds-zhgx2\" (UID: \"e0bb348a-f72d-462e-aec9-04e4600cc7f0\") " pod="openshift-multus/cni-sysctl-allowlist-ds-zhgx2" Mar 13 12:46:08.267163 master-0 kubenswrapper[7518]: I0313 12:46:08.266879 7518 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/e0bb348a-f72d-462e-aec9-04e4600cc7f0-tuning-conf-dir\") pod \"cni-sysctl-allowlist-ds-zhgx2\" (UID: \"e0bb348a-f72d-462e-aec9-04e4600cc7f0\") " pod="openshift-multus/cni-sysctl-allowlist-ds-zhgx2" Mar 13 12:46:08.267163 master-0 kubenswrapper[7518]: I0313 12:46:08.266987 7518 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/e0bb348a-f72d-462e-aec9-04e4600cc7f0-cni-sysctl-allowlist\") pod \"cni-sysctl-allowlist-ds-zhgx2\" (UID: \"e0bb348a-f72d-462e-aec9-04e4600cc7f0\") " pod="openshift-multus/cni-sysctl-allowlist-ds-zhgx2" Mar 13 12:46:08.268704 master-0 kubenswrapper[7518]: I0313 12:46:08.268033 7518 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/e0bb348a-f72d-462e-aec9-04e4600cc7f0-cni-sysctl-allowlist\") pod \"cni-sysctl-allowlist-ds-zhgx2\" (UID: \"e0bb348a-f72d-462e-aec9-04e4600cc7f0\") " pod="openshift-multus/cni-sysctl-allowlist-ds-zhgx2" Mar 13 12:46:08.268704 master-0 kubenswrapper[7518]: I0313 12:46:08.268302 7518 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ready\" (UniqueName: \"kubernetes.io/empty-dir/e0bb348a-f72d-462e-aec9-04e4600cc7f0-ready\") pod \"cni-sysctl-allowlist-ds-zhgx2\" (UID: \"e0bb348a-f72d-462e-aec9-04e4600cc7f0\") " pod="openshift-multus/cni-sysctl-allowlist-ds-zhgx2" Mar 13 12:46:08.308357 master-0 kubenswrapper[7518]: I0313 12:46:08.308325 7518 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4szv8\" (UniqueName: \"kubernetes.io/projected/e0bb348a-f72d-462e-aec9-04e4600cc7f0-kube-api-access-4szv8\") pod \"cni-sysctl-allowlist-ds-zhgx2\" (UID: \"e0bb348a-f72d-462e-aec9-04e4600cc7f0\") " pod="openshift-multus/cni-sysctl-allowlist-ds-zhgx2" Mar 13 12:46:08.459229 master-0 kubenswrapper[7518]: I0313 12:46:08.459147 7518 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/cni-sysctl-allowlist-ds-zhgx2" Mar 13 12:46:08.481051 master-0 kubenswrapper[7518]: W0313 12:46:08.480991 7518 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode0bb348a_f72d_462e_aec9_04e4600cc7f0.slice/crio-31e14db18d25a3fa72927804a1ed2bc494c41afe50ce854c00d3136f6fbe374e WatchSource:0}: Error finding container 31e14db18d25a3fa72927804a1ed2bc494c41afe50ce854c00d3136f6fbe374e: Status 404 returned error can't find the container with id 31e14db18d25a3fa72927804a1ed2bc494c41afe50ce854c00d3136f6fbe374e Mar 13 12:46:08.543155 master-0 kubenswrapper[7518]: I0313 12:46:08.543024 7518 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-wtf6j container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 12:46:08.543155 master-0 kubenswrapper[7518]: [-]has-synced failed: reason withheld Mar 13 12:46:08.543155 master-0 kubenswrapper[7518]: [+]process-running ok Mar 13 12:46:08.543155 master-0 kubenswrapper[7518]: healthz check failed Mar 13 12:46:08.543155 master-0 kubenswrapper[7518]: I0313 12:46:08.543123 7518 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-wtf6j" podUID="45925a5e-41ae-4c19-b586-3151c7677612" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 12:46:09.298373 master-0 kubenswrapper[7518]: I0313 12:46:09.298310 7518 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/cni-sysctl-allowlist-ds-zhgx2" event={"ID":"e0bb348a-f72d-462e-aec9-04e4600cc7f0","Type":"ContainerStarted","Data":"c13dc88ae1d13a7186746245d431d38598c0a66d591db0eeff52326c08185d46"} Mar 13 12:46:09.298678 master-0 kubenswrapper[7518]: I0313 12:46:09.298658 7518 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-multus/cni-sysctl-allowlist-ds-zhgx2" event={"ID":"e0bb348a-f72d-462e-aec9-04e4600cc7f0","Type":"ContainerStarted","Data":"31e14db18d25a3fa72927804a1ed2bc494c41afe50ce854c00d3136f6fbe374e"} Mar 13 12:46:09.300213 master-0 kubenswrapper[7518]: I0313 12:46:09.299262 7518 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-multus/cni-sysctl-allowlist-ds-zhgx2" Mar 13 12:46:09.315931 master-0 kubenswrapper[7518]: I0313 12:46:09.315852 7518 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/cni-sysctl-allowlist-ds-zhgx2" podStartSLOduration=2.315828046 podStartE2EDuration="2.315828046s" podCreationTimestamp="2026-03-13 12:46:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 12:46:09.313497315 +0000 UTC m=+523.946566522" watchObservedRunningTime="2026-03-13 12:46:09.315828046 +0000 UTC m=+523.948897233" Mar 13 12:46:09.323431 master-0 kubenswrapper[7518]: I0313 12:46:09.323403 7518 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-multus/cni-sysctl-allowlist-ds-zhgx2" Mar 13 12:46:09.542080 master-0 kubenswrapper[7518]: I0313 12:46:09.541992 7518 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-wtf6j container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 12:46:09.542080 master-0 kubenswrapper[7518]: [-]has-synced failed: reason withheld Mar 13 12:46:09.542080 master-0 kubenswrapper[7518]: [+]process-running ok Mar 13 12:46:09.542080 master-0 kubenswrapper[7518]: healthz check failed Mar 13 12:46:09.542432 master-0 kubenswrapper[7518]: I0313 12:46:09.542087 7518 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-wtf6j" podUID="45925a5e-41ae-4c19-b586-3151c7677612" 
containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 12:46:09.737087 master-0 kubenswrapper[7518]: I0313 12:46:09.736909 7518 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-multus/cni-sysctl-allowlist-ds-zhgx2"] Mar 13 12:46:10.541875 master-0 kubenswrapper[7518]: I0313 12:46:10.541807 7518 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-wtf6j container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 12:46:10.541875 master-0 kubenswrapper[7518]: [-]has-synced failed: reason withheld Mar 13 12:46:10.541875 master-0 kubenswrapper[7518]: [+]process-running ok Mar 13 12:46:10.541875 master-0 kubenswrapper[7518]: healthz check failed Mar 13 12:46:10.542167 master-0 kubenswrapper[7518]: I0313 12:46:10.541877 7518 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-wtf6j" podUID="45925a5e-41ae-4c19-b586-3151c7677612" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 12:46:10.944752 master-0 kubenswrapper[7518]: I0313 12:46:10.944644 7518 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-etcd/etcd-master-0"] Mar 13 12:46:10.951025 master-0 kubenswrapper[7518]: I0313 12:46:10.950974 7518 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-etcd/etcd-master-0"] Mar 13 12:46:10.951401 master-0 kubenswrapper[7518]: E0313 12:46:10.951368 7518 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8e52bef89f4b50e4590a1719bcc5d7e5" containerName="etcd-metrics" Mar 13 12:46:10.951449 master-0 kubenswrapper[7518]: I0313 12:46:10.951415 7518 state_mem.go:107] "Deleted CPUSet assignment" podUID="8e52bef89f4b50e4590a1719bcc5d7e5" containerName="etcd-metrics" Mar 13 12:46:10.951449 master-0 kubenswrapper[7518]: E0313 12:46:10.951438 7518 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8e52bef89f4b50e4590a1719bcc5d7e5" containerName="etcdctl" Mar 13 12:46:10.951506 master-0 kubenswrapper[7518]: I0313 12:46:10.951450 7518 state_mem.go:107] "Deleted CPUSet assignment" podUID="8e52bef89f4b50e4590a1719bcc5d7e5" containerName="etcdctl" Mar 13 12:46:10.951506 master-0 kubenswrapper[7518]: E0313 12:46:10.951476 7518 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8e52bef89f4b50e4590a1719bcc5d7e5" containerName="etcd-resources-copy" Mar 13 12:46:10.951506 master-0 kubenswrapper[7518]: I0313 12:46:10.951489 7518 state_mem.go:107] "Deleted CPUSet assignment" podUID="8e52bef89f4b50e4590a1719bcc5d7e5" containerName="etcd-resources-copy" Mar 13 12:46:10.951590 master-0 kubenswrapper[7518]: E0313 12:46:10.951508 7518 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8e52bef89f4b50e4590a1719bcc5d7e5" containerName="etcd-ensure-env-vars" Mar 13 12:46:10.951590 master-0 kubenswrapper[7518]: I0313 12:46:10.951524 7518 state_mem.go:107] "Deleted CPUSet assignment" podUID="8e52bef89f4b50e4590a1719bcc5d7e5" containerName="etcd-ensure-env-vars" Mar 13 12:46:10.951590 master-0 kubenswrapper[7518]: E0313 12:46:10.951537 7518 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8e52bef89f4b50e4590a1719bcc5d7e5" containerName="etcd-readyz" Mar 13 12:46:10.951590 master-0 kubenswrapper[7518]: I0313 12:46:10.951550 7518 state_mem.go:107] "Deleted CPUSet assignment" podUID="8e52bef89f4b50e4590a1719bcc5d7e5" containerName="etcd-readyz" Mar 13 12:46:10.951724 master-0 kubenswrapper[7518]: E0313 12:46:10.951591 7518 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8e52bef89f4b50e4590a1719bcc5d7e5" containerName="etcd-rev" Mar 13 12:46:10.951724 master-0 kubenswrapper[7518]: I0313 12:46:10.951605 7518 state_mem.go:107] "Deleted CPUSet assignment" podUID="8e52bef89f4b50e4590a1719bcc5d7e5" containerName="etcd-rev" Mar 13 12:46:10.951724 master-0 
kubenswrapper[7518]: E0313 12:46:10.951627 7518 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8e52bef89f4b50e4590a1719bcc5d7e5" containerName="setup" Mar 13 12:46:10.951724 master-0 kubenswrapper[7518]: I0313 12:46:10.951639 7518 state_mem.go:107] "Deleted CPUSet assignment" podUID="8e52bef89f4b50e4590a1719bcc5d7e5" containerName="setup" Mar 13 12:46:10.951724 master-0 kubenswrapper[7518]: E0313 12:46:10.951659 7518 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8e52bef89f4b50e4590a1719bcc5d7e5" containerName="etcd" Mar 13 12:46:10.951724 master-0 kubenswrapper[7518]: I0313 12:46:10.951672 7518 state_mem.go:107] "Deleted CPUSet assignment" podUID="8e52bef89f4b50e4590a1719bcc5d7e5" containerName="etcd" Mar 13 12:46:10.951924 master-0 kubenswrapper[7518]: I0313 12:46:10.951880 7518 memory_manager.go:354] "RemoveStaleState removing state" podUID="8e52bef89f4b50e4590a1719bcc5d7e5" containerName="etcd-metrics" Mar 13 12:46:10.951924 master-0 kubenswrapper[7518]: I0313 12:46:10.951897 7518 memory_manager.go:354] "RemoveStaleState removing state" podUID="8e52bef89f4b50e4590a1719bcc5d7e5" containerName="etcd-rev" Mar 13 12:46:10.951990 master-0 kubenswrapper[7518]: I0313 12:46:10.951927 7518 memory_manager.go:354] "RemoveStaleState removing state" podUID="8e52bef89f4b50e4590a1719bcc5d7e5" containerName="etcd-readyz" Mar 13 12:46:10.951990 master-0 kubenswrapper[7518]: I0313 12:46:10.951945 7518 memory_manager.go:354] "RemoveStaleState removing state" podUID="8e52bef89f4b50e4590a1719bcc5d7e5" containerName="etcd" Mar 13 12:46:10.951990 master-0 kubenswrapper[7518]: I0313 12:46:10.951977 7518 memory_manager.go:354] "RemoveStaleState removing state" podUID="8e52bef89f4b50e4590a1719bcc5d7e5" containerName="etcdctl" Mar 13 12:46:11.134366 master-0 kubenswrapper[7518]: I0313 12:46:11.134263 7518 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: 
\"kubernetes.io/host-path/29c709c82970b529e7b9b895aa92ef05-cert-dir\") pod \"etcd-master-0\" (UID: \"29c709c82970b529e7b9b895aa92ef05\") " pod="openshift-etcd/etcd-master-0" Mar 13 12:46:11.134366 master-0 kubenswrapper[7518]: I0313 12:46:11.134371 7518 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/29c709c82970b529e7b9b895aa92ef05-data-dir\") pod \"etcd-master-0\" (UID: \"29c709c82970b529e7b9b895aa92ef05\") " pod="openshift-etcd/etcd-master-0" Mar 13 12:46:11.134643 master-0 kubenswrapper[7518]: I0313 12:46:11.134476 7518 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/29c709c82970b529e7b9b895aa92ef05-resource-dir\") pod \"etcd-master-0\" (UID: \"29c709c82970b529e7b9b895aa92ef05\") " pod="openshift-etcd/etcd-master-0" Mar 13 12:46:11.134677 master-0 kubenswrapper[7518]: I0313 12:46:11.134626 7518 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/29c709c82970b529e7b9b895aa92ef05-usr-local-bin\") pod \"etcd-master-0\" (UID: \"29c709c82970b529e7b9b895aa92ef05\") " pod="openshift-etcd/etcd-master-0" Mar 13 12:46:11.134799 master-0 kubenswrapper[7518]: I0313 12:46:11.134711 7518 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/29c709c82970b529e7b9b895aa92ef05-static-pod-dir\") pod \"etcd-master-0\" (UID: \"29c709c82970b529e7b9b895aa92ef05\") " pod="openshift-etcd/etcd-master-0" Mar 13 12:46:11.134954 master-0 kubenswrapper[7518]: I0313 12:46:11.134922 7518 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/29c709c82970b529e7b9b895aa92ef05-log-dir\") pod \"etcd-master-0\" 
(UID: \"29c709c82970b529e7b9b895aa92ef05\") " pod="openshift-etcd/etcd-master-0" Mar 13 12:46:11.237247 master-0 kubenswrapper[7518]: I0313 12:46:11.236838 7518 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/29c709c82970b529e7b9b895aa92ef05-cert-dir\") pod \"etcd-master-0\" (UID: \"29c709c82970b529e7b9b895aa92ef05\") " pod="openshift-etcd/etcd-master-0" Mar 13 12:46:11.237443 master-0 kubenswrapper[7518]: I0313 12:46:11.237282 7518 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/29c709c82970b529e7b9b895aa92ef05-cert-dir\") pod \"etcd-master-0\" (UID: \"29c709c82970b529e7b9b895aa92ef05\") " pod="openshift-etcd/etcd-master-0" Mar 13 12:46:11.237443 master-0 kubenswrapper[7518]: I0313 12:46:11.237343 7518 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/29c709c82970b529e7b9b895aa92ef05-data-dir\") pod \"etcd-master-0\" (UID: \"29c709c82970b529e7b9b895aa92ef05\") " pod="openshift-etcd/etcd-master-0" Mar 13 12:46:11.237443 master-0 kubenswrapper[7518]: I0313 12:46:11.237391 7518 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/29c709c82970b529e7b9b895aa92ef05-data-dir\") pod \"etcd-master-0\" (UID: \"29c709c82970b529e7b9b895aa92ef05\") " pod="openshift-etcd/etcd-master-0" Mar 13 12:46:11.237594 master-0 kubenswrapper[7518]: I0313 12:46:11.237466 7518 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/29c709c82970b529e7b9b895aa92ef05-resource-dir\") pod \"etcd-master-0\" (UID: \"29c709c82970b529e7b9b895aa92ef05\") " pod="openshift-etcd/etcd-master-0" Mar 13 12:46:11.237594 master-0 kubenswrapper[7518]: I0313 12:46:11.237519 7518 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/29c709c82970b529e7b9b895aa92ef05-usr-local-bin\") pod \"etcd-master-0\" (UID: \"29c709c82970b529e7b9b895aa92ef05\") " pod="openshift-etcd/etcd-master-0" Mar 13 12:46:11.237594 master-0 kubenswrapper[7518]: I0313 12:46:11.237556 7518 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/29c709c82970b529e7b9b895aa92ef05-static-pod-dir\") pod \"etcd-master-0\" (UID: \"29c709c82970b529e7b9b895aa92ef05\") " pod="openshift-etcd/etcd-master-0" Mar 13 12:46:11.237750 master-0 kubenswrapper[7518]: I0313 12:46:11.237604 7518 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/29c709c82970b529e7b9b895aa92ef05-usr-local-bin\") pod \"etcd-master-0\" (UID: \"29c709c82970b529e7b9b895aa92ef05\") " pod="openshift-etcd/etcd-master-0" Mar 13 12:46:11.237750 master-0 kubenswrapper[7518]: I0313 12:46:11.237664 7518 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/29c709c82970b529e7b9b895aa92ef05-static-pod-dir\") pod \"etcd-master-0\" (UID: \"29c709c82970b529e7b9b895aa92ef05\") " pod="openshift-etcd/etcd-master-0" Mar 13 12:46:11.237750 master-0 kubenswrapper[7518]: I0313 12:46:11.237704 7518 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/29c709c82970b529e7b9b895aa92ef05-resource-dir\") pod \"etcd-master-0\" (UID: \"29c709c82970b529e7b9b895aa92ef05\") " pod="openshift-etcd/etcd-master-0" Mar 13 12:46:11.237750 master-0 kubenswrapper[7518]: I0313 12:46:11.237724 7518 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/29c709c82970b529e7b9b895aa92ef05-log-dir\") pod \"etcd-master-0\" (UID: \"29c709c82970b529e7b9b895aa92ef05\") " 
pod="openshift-etcd/etcd-master-0" Mar 13 12:46:11.237907 master-0 kubenswrapper[7518]: I0313 12:46:11.237808 7518 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/29c709c82970b529e7b9b895aa92ef05-log-dir\") pod \"etcd-master-0\" (UID: \"29c709c82970b529e7b9b895aa92ef05\") " pod="openshift-etcd/etcd-master-0" Mar 13 12:46:11.310682 master-0 kubenswrapper[7518]: I0313 12:46:11.310620 7518 generic.go:334] "Generic (PLEG): container finished" podID="e01de416-3de5-4357-a84e-f8eabb15a500" containerID="36c8eace8178c56031aee9f74c55f1e387a62f97359664e0fd2729176c22f3cb" exitCode=0 Mar 13 12:46:11.310896 master-0 kubenswrapper[7518]: I0313 12:46:11.310708 7518 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/installer-2-master-0" event={"ID":"e01de416-3de5-4357-a84e-f8eabb15a500","Type":"ContainerDied","Data":"36c8eace8178c56031aee9f74c55f1e387a62f97359664e0fd2729176c22f3cb"} Mar 13 12:46:11.310896 master-0 kubenswrapper[7518]: I0313 12:46:11.310810 7518 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-multus/cni-sysctl-allowlist-ds-zhgx2" podUID="e0bb348a-f72d-462e-aec9-04e4600cc7f0" containerName="kube-multus-additional-cni-plugins" containerID="cri-o://c13dc88ae1d13a7186746245d431d38598c0a66d591db0eeff52326c08185d46" gracePeriod=30 Mar 13 12:46:11.312984 master-0 kubenswrapper[7518]: I0313 12:46:11.311525 7518 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-etcd/etcd-master-0" podUID="8e52bef89f4b50e4590a1719bcc5d7e5" containerName="etcd-readyz" containerID="cri-o://440a66755130132d56907a45f85ff201a5b883c75b1e482675b4125de5018dda" gracePeriod=30 Mar 13 12:46:11.312984 master-0 kubenswrapper[7518]: I0313 12:46:11.311533 7518 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-etcd/etcd-master-0" podUID="8e52bef89f4b50e4590a1719bcc5d7e5" containerName="etcd-metrics" 
containerID="cri-o://acda021c5f7e7aff55c971e32cec50e25aa40113e66a45d15899959c993261a0" gracePeriod=30 Mar 13 12:46:11.312984 master-0 kubenswrapper[7518]: I0313 12:46:11.311580 7518 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-etcd/etcd-master-0" podUID="8e52bef89f4b50e4590a1719bcc5d7e5" containerName="etcd-rev" containerID="cri-o://dd5289c2e065c63e076ef785f5c91f68426de016a332635418487df625eabea4" gracePeriod=30 Mar 13 12:46:11.312984 master-0 kubenswrapper[7518]: I0313 12:46:11.311526 7518 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-etcd/etcd-master-0" podUID="8e52bef89f4b50e4590a1719bcc5d7e5" containerName="etcdctl" containerID="cri-o://1ca6e8ace45a17d5da03ca182f6c2cf352582d5bc5b5d835a63329d11f8a8397" gracePeriod=30 Mar 13 12:46:11.312984 master-0 kubenswrapper[7518]: I0313 12:46:11.311570 7518 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-etcd/etcd-master-0" podUID="8e52bef89f4b50e4590a1719bcc5d7e5" containerName="etcd" containerID="cri-o://38c2e3b0f262510b515ee410dfc31f716307a3e0807eb4b5d3d5cc8d3c3c5ced" gracePeriod=30 Mar 13 12:46:11.541282 master-0 kubenswrapper[7518]: I0313 12:46:11.541232 7518 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-wtf6j container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 12:46:11.541282 master-0 kubenswrapper[7518]: [-]has-synced failed: reason withheld Mar 13 12:46:11.541282 master-0 kubenswrapper[7518]: [+]process-running ok Mar 13 12:46:11.541282 master-0 kubenswrapper[7518]: healthz check failed Mar 13 12:46:11.541649 master-0 kubenswrapper[7518]: I0313 12:46:11.541621 7518 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-wtf6j" podUID="45925a5e-41ae-4c19-b586-3151c7677612" containerName="router" 
probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 12:46:12.324701 master-0 kubenswrapper[7518]: I0313 12:46:12.324400 7518 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_8e52bef89f4b50e4590a1719bcc5d7e5/etcd-rev/0.log" Mar 13 12:46:12.336426 master-0 kubenswrapper[7518]: I0313 12:46:12.327821 7518 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_8e52bef89f4b50e4590a1719bcc5d7e5/etcd-metrics/0.log" Mar 13 12:46:12.339404 master-0 kubenswrapper[7518]: I0313 12:46:12.339339 7518 generic.go:334] "Generic (PLEG): container finished" podID="8e52bef89f4b50e4590a1719bcc5d7e5" containerID="dd5289c2e065c63e076ef785f5c91f68426de016a332635418487df625eabea4" exitCode=2 Mar 13 12:46:12.339404 master-0 kubenswrapper[7518]: I0313 12:46:12.339383 7518 generic.go:334] "Generic (PLEG): container finished" podID="8e52bef89f4b50e4590a1719bcc5d7e5" containerID="440a66755130132d56907a45f85ff201a5b883c75b1e482675b4125de5018dda" exitCode=0 Mar 13 12:46:12.339404 master-0 kubenswrapper[7518]: I0313 12:46:12.339404 7518 generic.go:334] "Generic (PLEG): container finished" podID="8e52bef89f4b50e4590a1719bcc5d7e5" containerID="acda021c5f7e7aff55c971e32cec50e25aa40113e66a45d15899959c993261a0" exitCode=2 Mar 13 12:46:12.562475 master-0 kubenswrapper[7518]: I0313 12:46:12.562105 7518 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-wtf6j container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 12:46:12.562475 master-0 kubenswrapper[7518]: [-]has-synced failed: reason withheld Mar 13 12:46:12.562475 master-0 kubenswrapper[7518]: [+]process-running ok Mar 13 12:46:12.562475 master-0 kubenswrapper[7518]: healthz check failed Mar 13 12:46:12.562475 master-0 kubenswrapper[7518]: I0313 12:46:12.562227 7518 prober.go:107] "Probe failed" probeType="Startup" 
pod="openshift-ingress/router-default-79f8cd6fdd-wtf6j" podUID="45925a5e-41ae-4c19-b586-3151c7677612" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 12:46:12.609773 master-0 kubenswrapper[7518]: I0313 12:46:12.609728 7518 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/installer-2-master-0" Mar 13 12:46:12.761591 master-0 kubenswrapper[7518]: I0313 12:46:12.761532 7518 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/e01de416-3de5-4357-a84e-f8eabb15a500-var-lock\") pod \"e01de416-3de5-4357-a84e-f8eabb15a500\" (UID: \"e01de416-3de5-4357-a84e-f8eabb15a500\") " Mar 13 12:46:12.761838 master-0 kubenswrapper[7518]: I0313 12:46:12.761614 7518 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/e01de416-3de5-4357-a84e-f8eabb15a500-kubelet-dir\") pod \"e01de416-3de5-4357-a84e-f8eabb15a500\" (UID: \"e01de416-3de5-4357-a84e-f8eabb15a500\") " Mar 13 12:46:12.761838 master-0 kubenswrapper[7518]: I0313 12:46:12.761705 7518 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e01de416-3de5-4357-a84e-f8eabb15a500-kube-api-access\") pod \"e01de416-3de5-4357-a84e-f8eabb15a500\" (UID: \"e01de416-3de5-4357-a84e-f8eabb15a500\") " Mar 13 12:46:12.761838 master-0 kubenswrapper[7518]: I0313 12:46:12.761704 7518 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e01de416-3de5-4357-a84e-f8eabb15a500-var-lock" (OuterVolumeSpecName: "var-lock") pod "e01de416-3de5-4357-a84e-f8eabb15a500" (UID: "e01de416-3de5-4357-a84e-f8eabb15a500"). InnerVolumeSpecName "var-lock". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 13 12:46:12.762002 master-0 kubenswrapper[7518]: I0313 12:46:12.761823 7518 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e01de416-3de5-4357-a84e-f8eabb15a500-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "e01de416-3de5-4357-a84e-f8eabb15a500" (UID: "e01de416-3de5-4357-a84e-f8eabb15a500"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 13 12:46:12.762393 master-0 kubenswrapper[7518]: I0313 12:46:12.762357 7518 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/e01de416-3de5-4357-a84e-f8eabb15a500-var-lock\") on node \"master-0\" DevicePath \"\"" Mar 13 12:46:12.762393 master-0 kubenswrapper[7518]: I0313 12:46:12.762386 7518 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/e01de416-3de5-4357-a84e-f8eabb15a500-kubelet-dir\") on node \"master-0\" DevicePath \"\"" Mar 13 12:46:12.765523 master-0 kubenswrapper[7518]: I0313 12:46:12.765477 7518 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e01de416-3de5-4357-a84e-f8eabb15a500-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "e01de416-3de5-4357-a84e-f8eabb15a500" (UID: "e01de416-3de5-4357-a84e-f8eabb15a500"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 12:46:12.863899 master-0 kubenswrapper[7518]: I0313 12:46:12.863748 7518 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e01de416-3de5-4357-a84e-f8eabb15a500-kube-api-access\") on node \"master-0\" DevicePath \"\"" Mar 13 12:46:13.346636 master-0 kubenswrapper[7518]: I0313 12:46:13.346575 7518 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/installer-2-master-0" event={"ID":"e01de416-3de5-4357-a84e-f8eabb15a500","Type":"ContainerDied","Data":"3e81dca123a6f2f889ce66cb5735ec25a6e1c65abbd235bf8c5081fda6184b21"} Mar 13 12:46:13.346636 master-0 kubenswrapper[7518]: I0313 12:46:13.346629 7518 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3e81dca123a6f2f889ce66cb5735ec25a6e1c65abbd235bf8c5081fda6184b21" Mar 13 12:46:13.346636 master-0 kubenswrapper[7518]: I0313 12:46:13.346635 7518 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd/installer-2-master-0" Mar 13 12:46:13.541796 master-0 kubenswrapper[7518]: I0313 12:46:13.541716 7518 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-wtf6j container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 12:46:13.541796 master-0 kubenswrapper[7518]: [-]has-synced failed: reason withheld Mar 13 12:46:13.541796 master-0 kubenswrapper[7518]: [+]process-running ok Mar 13 12:46:13.541796 master-0 kubenswrapper[7518]: healthz check failed Mar 13 12:46:13.541796 master-0 kubenswrapper[7518]: I0313 12:46:13.541794 7518 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-wtf6j" podUID="45925a5e-41ae-4c19-b586-3151c7677612" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 12:46:14.540720 master-0 kubenswrapper[7518]: I0313 12:46:14.540652 7518 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-wtf6j container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 12:46:14.540720 master-0 kubenswrapper[7518]: [-]has-synced failed: reason withheld Mar 13 12:46:14.540720 master-0 kubenswrapper[7518]: [+]process-running ok Mar 13 12:46:14.540720 master-0 kubenswrapper[7518]: healthz check failed Mar 13 12:46:14.541548 master-0 kubenswrapper[7518]: I0313 12:46:14.540713 7518 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-wtf6j" podUID="45925a5e-41ae-4c19-b586-3151c7677612" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 12:46:15.542620 master-0 kubenswrapper[7518]: I0313 12:46:15.542531 7518 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-wtf6j 
container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 12:46:15.542620 master-0 kubenswrapper[7518]: [-]has-synced failed: reason withheld Mar 13 12:46:15.542620 master-0 kubenswrapper[7518]: [+]process-running ok Mar 13 12:46:15.542620 master-0 kubenswrapper[7518]: healthz check failed Mar 13 12:46:15.543669 master-0 kubenswrapper[7518]: I0313 12:46:15.542628 7518 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-wtf6j" podUID="45925a5e-41ae-4c19-b586-3151c7677612" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 12:46:16.541396 master-0 kubenswrapper[7518]: I0313 12:46:16.541319 7518 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-wtf6j container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 12:46:16.541396 master-0 kubenswrapper[7518]: [-]has-synced failed: reason withheld Mar 13 12:46:16.541396 master-0 kubenswrapper[7518]: [+]process-running ok Mar 13 12:46:16.541396 master-0 kubenswrapper[7518]: healthz check failed Mar 13 12:46:16.541800 master-0 kubenswrapper[7518]: I0313 12:46:16.541408 7518 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-wtf6j" podUID="45925a5e-41ae-4c19-b586-3151c7677612" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 12:46:17.541166 master-0 kubenswrapper[7518]: I0313 12:46:17.541086 7518 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-wtf6j container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 12:46:17.541166 master-0 
kubenswrapper[7518]: [-]has-synced failed: reason withheld Mar 13 12:46:17.541166 master-0 kubenswrapper[7518]: [+]process-running ok Mar 13 12:46:17.541166 master-0 kubenswrapper[7518]: healthz check failed Mar 13 12:46:17.542048 master-0 kubenswrapper[7518]: I0313 12:46:17.541212 7518 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-wtf6j" podUID="45925a5e-41ae-4c19-b586-3151c7677612" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 12:46:18.462298 master-0 kubenswrapper[7518]: E0313 12:46:18.462195 7518 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="c13dc88ae1d13a7186746245d431d38598c0a66d591db0eeff52326c08185d46" cmd=["/bin/bash","-c","test -f /ready/ready"] Mar 13 12:46:18.463415 master-0 kubenswrapper[7518]: E0313 12:46:18.463342 7518 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="c13dc88ae1d13a7186746245d431d38598c0a66d591db0eeff52326c08185d46" cmd=["/bin/bash","-c","test -f /ready/ready"] Mar 13 12:46:18.464580 master-0 kubenswrapper[7518]: E0313 12:46:18.464546 7518 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="c13dc88ae1d13a7186746245d431d38598c0a66d591db0eeff52326c08185d46" cmd=["/bin/bash","-c","test -f /ready/ready"] Mar 13 12:46:18.464713 master-0 kubenswrapper[7518]: E0313 12:46:18.464584 7518 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" 
pod="openshift-multus/cni-sysctl-allowlist-ds-zhgx2" podUID="e0bb348a-f72d-462e-aec9-04e4600cc7f0" containerName="kube-multus-additional-cni-plugins" Mar 13 12:46:18.542655 master-0 kubenswrapper[7518]: I0313 12:46:18.542544 7518 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-wtf6j container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 12:46:18.542655 master-0 kubenswrapper[7518]: [-]has-synced failed: reason withheld Mar 13 12:46:18.542655 master-0 kubenswrapper[7518]: [+]process-running ok Mar 13 12:46:18.542655 master-0 kubenswrapper[7518]: healthz check failed Mar 13 12:46:18.542655 master-0 kubenswrapper[7518]: I0313 12:46:18.542635 7518 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-wtf6j" podUID="45925a5e-41ae-4c19-b586-3151c7677612" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 12:46:19.541903 master-0 kubenswrapper[7518]: I0313 12:46:19.541841 7518 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-wtf6j container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 12:46:19.541903 master-0 kubenswrapper[7518]: [-]has-synced failed: reason withheld Mar 13 12:46:19.541903 master-0 kubenswrapper[7518]: [+]process-running ok Mar 13 12:46:19.541903 master-0 kubenswrapper[7518]: healthz check failed Mar 13 12:46:19.542205 master-0 kubenswrapper[7518]: I0313 12:46:19.541930 7518 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-wtf6j" podUID="45925a5e-41ae-4c19-b586-3151c7677612" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 12:46:20.541722 master-0 kubenswrapper[7518]: I0313 
12:46:20.541666 7518 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-wtf6j container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 12:46:20.541722 master-0 kubenswrapper[7518]: [-]has-synced failed: reason withheld Mar 13 12:46:20.541722 master-0 kubenswrapper[7518]: [+]process-running ok Mar 13 12:46:20.541722 master-0 kubenswrapper[7518]: healthz check failed Mar 13 12:46:20.542350 master-0 kubenswrapper[7518]: I0313 12:46:20.541732 7518 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-wtf6j" podUID="45925a5e-41ae-4c19-b586-3151c7677612" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 12:46:21.541499 master-0 kubenswrapper[7518]: I0313 12:46:21.541438 7518 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-wtf6j container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 12:46:21.541499 master-0 kubenswrapper[7518]: [-]has-synced failed: reason withheld Mar 13 12:46:21.541499 master-0 kubenswrapper[7518]: [+]process-running ok Mar 13 12:46:21.541499 master-0 kubenswrapper[7518]: healthz check failed Mar 13 12:46:21.541763 master-0 kubenswrapper[7518]: I0313 12:46:21.541510 7518 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-wtf6j" podUID="45925a5e-41ae-4c19-b586-3151c7677612" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 12:46:22.541614 master-0 kubenswrapper[7518]: I0313 12:46:22.541509 7518 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-wtf6j container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" 
start-of-body=[-]backend-http failed: reason withheld Mar 13 12:46:22.541614 master-0 kubenswrapper[7518]: [-]has-synced failed: reason withheld Mar 13 12:46:22.541614 master-0 kubenswrapper[7518]: [+]process-running ok Mar 13 12:46:22.541614 master-0 kubenswrapper[7518]: healthz check failed Mar 13 12:46:22.541614 master-0 kubenswrapper[7518]: I0313 12:46:22.541609 7518 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-wtf6j" podUID="45925a5e-41ae-4c19-b586-3151c7677612" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 12:46:23.541461 master-0 kubenswrapper[7518]: I0313 12:46:23.541321 7518 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-wtf6j container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 12:46:23.541461 master-0 kubenswrapper[7518]: [-]has-synced failed: reason withheld Mar 13 12:46:23.541461 master-0 kubenswrapper[7518]: [+]process-running ok Mar 13 12:46:23.541461 master-0 kubenswrapper[7518]: healthz check failed Mar 13 12:46:23.541461 master-0 kubenswrapper[7518]: I0313 12:46:23.541413 7518 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-wtf6j" podUID="45925a5e-41ae-4c19-b586-3151c7677612" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 12:46:24.541980 master-0 kubenswrapper[7518]: I0313 12:46:24.541923 7518 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-wtf6j container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 12:46:24.541980 master-0 kubenswrapper[7518]: [-]has-synced failed: reason withheld Mar 13 12:46:24.541980 master-0 kubenswrapper[7518]: [+]process-running ok 
Mar 13 12:46:24.541980 master-0 kubenswrapper[7518]: healthz check failed Mar 13 12:46:24.541980 master-0 kubenswrapper[7518]: I0313 12:46:24.541975 7518 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-wtf6j" podUID="45925a5e-41ae-4c19-b586-3151c7677612" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 12:46:25.216598 master-0 kubenswrapper[7518]: I0313 12:46:25.216506 7518 prober.go:107] "Probe failed" probeType="Readiness" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="f78c05e1499b533b83f091333d61f045" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.32.10:10257/healthz\": dial tcp 192.168.32.10:10257: connect: connection refused" Mar 13 12:46:25.428981 master-0 kubenswrapper[7518]: I0313 12:46:25.428931 7518 generic.go:334] "Generic (PLEG): container finished" podID="f78c05e1499b533b83f091333d61f045" containerID="c81c73f91c343cb4e577286cc40722c4e25a55cd8b94b5a421c1eff5fabb3c61" exitCode=1 Mar 13 12:46:25.429275 master-0 kubenswrapper[7518]: I0313 12:46:25.428980 7518 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-controller-manager-master-0" event={"ID":"f78c05e1499b533b83f091333d61f045","Type":"ContainerDied","Data":"c81c73f91c343cb4e577286cc40722c4e25a55cd8b94b5a421c1eff5fabb3c61"} Mar 13 12:46:25.429464 master-0 kubenswrapper[7518]: I0313 12:46:25.429440 7518 scope.go:117] "RemoveContainer" containerID="696192325e102818ab8863a16ab52b3671d6dc3f225d1e0faf06a32633060bda" Mar 13 12:46:25.430123 master-0 kubenswrapper[7518]: I0313 12:46:25.430019 7518 scope.go:117] "RemoveContainer" containerID="c81c73f91c343cb4e577286cc40722c4e25a55cd8b94b5a421c1eff5fabb3c61" Mar 13 12:46:25.430507 master-0 kubenswrapper[7518]: E0313 12:46:25.430463 7518 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with 
CrashLoopBackOff: \"back-off 20s restarting failed container=kube-controller-manager pod=bootstrap-kube-controller-manager-master-0_kube-system(f78c05e1499b533b83f091333d61f045)\"" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="f78c05e1499b533b83f091333d61f045" Mar 13 12:46:25.542058 master-0 kubenswrapper[7518]: I0313 12:46:25.541987 7518 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-wtf6j container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 12:46:25.542058 master-0 kubenswrapper[7518]: [-]has-synced failed: reason withheld Mar 13 12:46:25.542058 master-0 kubenswrapper[7518]: [+]process-running ok Mar 13 12:46:25.542058 master-0 kubenswrapper[7518]: healthz check failed Mar 13 12:46:25.543390 master-0 kubenswrapper[7518]: I0313 12:46:25.543320 7518 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-wtf6j" podUID="45925a5e-41ae-4c19-b586-3151c7677612" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 12:46:25.971361 master-0 kubenswrapper[7518]: E0313 12:46:25.971261 7518 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status 
\"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-13T12:46:15Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-13T12:46:15Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-13T12:46:15Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-13T12:46:15Z\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"runc\\\"}]}}\" for node \"master-0\": Patch \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0/status?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 13 12:46:26.543027 master-0 kubenswrapper[7518]: I0313 12:46:26.542943 7518 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-wtf6j container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 12:46:26.543027 master-0 kubenswrapper[7518]: [-]has-synced failed: reason withheld Mar 13 12:46:26.543027 master-0 kubenswrapper[7518]: [+]process-running ok Mar 13 12:46:26.543027 master-0 kubenswrapper[7518]: healthz check failed Mar 13 12:46:26.543872 master-0 kubenswrapper[7518]: I0313 12:46:26.543056 7518 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-wtf6j" podUID="45925a5e-41ae-4c19-b586-3151c7677612" containerName="router" probeResult="failure" output="HTTP probe failed with 
statuscode: 500" Mar 13 12:46:27.541172 master-0 kubenswrapper[7518]: I0313 12:46:27.541095 7518 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-wtf6j container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 12:46:27.541172 master-0 kubenswrapper[7518]: [-]has-synced failed: reason withheld Mar 13 12:46:27.541172 master-0 kubenswrapper[7518]: [+]process-running ok Mar 13 12:46:27.541172 master-0 kubenswrapper[7518]: healthz check failed Mar 13 12:46:27.541432 master-0 kubenswrapper[7518]: I0313 12:46:27.541190 7518 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-wtf6j" podUID="45925a5e-41ae-4c19-b586-3151c7677612" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 12:46:28.005184 master-0 kubenswrapper[7518]: I0313 12:46:28.005032 7518 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 13 12:46:28.005738 master-0 kubenswrapper[7518]: I0313 12:46:28.005708 7518 scope.go:117] "RemoveContainer" containerID="c81c73f91c343cb4e577286cc40722c4e25a55cd8b94b5a421c1eff5fabb3c61" Mar 13 12:46:28.005992 master-0 kubenswrapper[7518]: E0313 12:46:28.005960 7518 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-controller-manager pod=bootstrap-kube-controller-manager-master-0_kube-system(f78c05e1499b533b83f091333d61f045)\"" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="f78c05e1499b533b83f091333d61f045" Mar 13 12:46:28.459737 master-0 kubenswrapper[7518]: I0313 12:46:28.459668 7518 generic.go:334] "Generic (PLEG): container finished" podID="a1a56802af72ce1aac6b5077f1695ac0" 
containerID="74509294773fbb5f73a8dd8c9003ceebee4b1e194cad14d7465b52eca3b8eaab" exitCode=1 Mar 13 12:46:28.460029 master-0 kubenswrapper[7518]: I0313 12:46:28.459748 7518 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-scheduler-master-0" event={"ID":"a1a56802af72ce1aac6b5077f1695ac0","Type":"ContainerDied","Data":"74509294773fbb5f73a8dd8c9003ceebee4b1e194cad14d7465b52eca3b8eaab"} Mar 13 12:46:28.460029 master-0 kubenswrapper[7518]: I0313 12:46:28.459824 7518 scope.go:117] "RemoveContainer" containerID="23aef1d459d801451207b22b103d82e16b0fb29eac9febd8e8918cd59b44679c" Mar 13 12:46:28.460475 master-0 kubenswrapper[7518]: I0313 12:46:28.460404 7518 scope.go:117] "RemoveContainer" containerID="74509294773fbb5f73a8dd8c9003ceebee4b1e194cad14d7465b52eca3b8eaab" Mar 13 12:46:28.460645 master-0 kubenswrapper[7518]: E0313 12:46:28.460612 7518 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-scheduler\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-scheduler pod=bootstrap-kube-scheduler-master-0_kube-system(a1a56802af72ce1aac6b5077f1695ac0)\"" pod="kube-system/bootstrap-kube-scheduler-master-0" podUID="a1a56802af72ce1aac6b5077f1695ac0" Mar 13 12:46:28.462370 master-0 kubenswrapper[7518]: E0313 12:46:28.462202 7518 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="c13dc88ae1d13a7186746245d431d38598c0a66d591db0eeff52326c08185d46" cmd=["/bin/bash","-c","test -f /ready/ready"] Mar 13 12:46:28.465447 master-0 kubenswrapper[7518]: E0313 12:46:28.465380 7518 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="c13dc88ae1d13a7186746245d431d38598c0a66d591db0eeff52326c08185d46" 
cmd=["/bin/bash","-c","test -f /ready/ready"] Mar 13 12:46:28.466988 master-0 kubenswrapper[7518]: E0313 12:46:28.466928 7518 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="c13dc88ae1d13a7186746245d431d38598c0a66d591db0eeff52326c08185d46" cmd=["/bin/bash","-c","test -f /ready/ready"] Mar 13 12:46:28.467112 master-0 kubenswrapper[7518]: E0313 12:46:28.466998 7518 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openshift-multus/cni-sysctl-allowlist-ds-zhgx2" podUID="e0bb348a-f72d-462e-aec9-04e4600cc7f0" containerName="kube-multus-additional-cni-plugins" Mar 13 12:46:28.542305 master-0 kubenswrapper[7518]: I0313 12:46:28.542227 7518 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-wtf6j container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 12:46:28.542305 master-0 kubenswrapper[7518]: [-]has-synced failed: reason withheld Mar 13 12:46:28.542305 master-0 kubenswrapper[7518]: [+]process-running ok Mar 13 12:46:28.542305 master-0 kubenswrapper[7518]: healthz check failed Mar 13 12:46:28.542605 master-0 kubenswrapper[7518]: I0313 12:46:28.542324 7518 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-wtf6j" podUID="45925a5e-41ae-4c19-b586-3151c7677612" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 12:46:29.541590 master-0 kubenswrapper[7518]: I0313 12:46:29.541535 7518 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-wtf6j container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe 
failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 12:46:29.541590 master-0 kubenswrapper[7518]: [-]has-synced failed: reason withheld Mar 13 12:46:29.541590 master-0 kubenswrapper[7518]: [+]process-running ok Mar 13 12:46:29.541590 master-0 kubenswrapper[7518]: healthz check failed Mar 13 12:46:29.542636 master-0 kubenswrapper[7518]: I0313 12:46:29.541605 7518 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-wtf6j" podUID="45925a5e-41ae-4c19-b586-3151c7677612" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 12:46:30.423993 master-0 kubenswrapper[7518]: E0313 12:46:30.423829 7518 controller.go:195] "Failed to update lease" err="Put \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 13 12:46:30.542010 master-0 kubenswrapper[7518]: I0313 12:46:30.541941 7518 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-wtf6j container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 12:46:30.542010 master-0 kubenswrapper[7518]: [-]has-synced failed: reason withheld Mar 13 12:46:30.542010 master-0 kubenswrapper[7518]: [+]process-running ok Mar 13 12:46:30.542010 master-0 kubenswrapper[7518]: healthz check failed Mar 13 12:46:30.542587 master-0 kubenswrapper[7518]: I0313 12:46:30.542033 7518 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-wtf6j" podUID="45925a5e-41ae-4c19-b586-3151c7677612" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 12:46:31.541838 master-0 kubenswrapper[7518]: I0313 12:46:31.541769 7518 patch_prober.go:28] interesting 
pod/router-default-79f8cd6fdd-wtf6j container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 12:46:31.541838 master-0 kubenswrapper[7518]: [-]has-synced failed: reason withheld Mar 13 12:46:31.541838 master-0 kubenswrapper[7518]: [+]process-running ok Mar 13 12:46:31.541838 master-0 kubenswrapper[7518]: healthz check failed Mar 13 12:46:31.542524 master-0 kubenswrapper[7518]: I0313 12:46:31.541856 7518 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-wtf6j" podUID="45925a5e-41ae-4c19-b586-3151c7677612" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 12:46:32.542409 master-0 kubenswrapper[7518]: I0313 12:46:32.542337 7518 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-wtf6j container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 12:46:32.542409 master-0 kubenswrapper[7518]: [-]has-synced failed: reason withheld Mar 13 12:46:32.542409 master-0 kubenswrapper[7518]: [+]process-running ok Mar 13 12:46:32.542409 master-0 kubenswrapper[7518]: healthz check failed Mar 13 12:46:32.542951 master-0 kubenswrapper[7518]: I0313 12:46:32.542430 7518 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-wtf6j" podUID="45925a5e-41ae-4c19-b586-3151c7677612" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 12:46:33.542309 master-0 kubenswrapper[7518]: I0313 12:46:33.542243 7518 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-wtf6j container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 
12:46:33.542309 master-0 kubenswrapper[7518]: [-]has-synced failed: reason withheld Mar 13 12:46:33.542309 master-0 kubenswrapper[7518]: [+]process-running ok Mar 13 12:46:33.542309 master-0 kubenswrapper[7518]: healthz check failed Mar 13 12:46:33.543314 master-0 kubenswrapper[7518]: I0313 12:46:33.542321 7518 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-wtf6j" podUID="45925a5e-41ae-4c19-b586-3151c7677612" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 12:46:34.147326 master-0 kubenswrapper[7518]: I0313 12:46:34.147247 7518 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 13 12:46:34.148268 master-0 kubenswrapper[7518]: I0313 12:46:34.148212 7518 scope.go:117] "RemoveContainer" containerID="c81c73f91c343cb4e577286cc40722c4e25a55cd8b94b5a421c1eff5fabb3c61" Mar 13 12:46:34.148680 master-0 kubenswrapper[7518]: E0313 12:46:34.148639 7518 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-controller-manager pod=bootstrap-kube-controller-manager-master-0_kube-system(f78c05e1499b533b83f091333d61f045)\"" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="f78c05e1499b533b83f091333d61f045" Mar 13 12:46:34.542044 master-0 kubenswrapper[7518]: I0313 12:46:34.541970 7518 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-wtf6j container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 12:46:34.542044 master-0 kubenswrapper[7518]: [-]has-synced failed: reason withheld Mar 13 12:46:34.542044 master-0 kubenswrapper[7518]: [+]process-running ok Mar 13 12:46:34.542044 master-0 kubenswrapper[7518]: healthz 
check failed Mar 13 12:46:34.542378 master-0 kubenswrapper[7518]: I0313 12:46:34.542058 7518 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-wtf6j" podUID="45925a5e-41ae-4c19-b586-3151c7677612" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 12:46:35.216240 master-0 kubenswrapper[7518]: I0313 12:46:35.216171 7518 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 13 12:46:35.217025 master-0 kubenswrapper[7518]: I0313 12:46:35.216978 7518 scope.go:117] "RemoveContainer" containerID="c81c73f91c343cb4e577286cc40722c4e25a55cd8b94b5a421c1eff5fabb3c61" Mar 13 12:46:35.217315 master-0 kubenswrapper[7518]: E0313 12:46:35.217275 7518 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-controller-manager pod=bootstrap-kube-controller-manager-master-0_kube-system(f78c05e1499b533b83f091333d61f045)\"" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="f78c05e1499b533b83f091333d61f045" Mar 13 12:46:35.541971 master-0 kubenswrapper[7518]: I0313 12:46:35.541870 7518 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-wtf6j container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 12:46:35.541971 master-0 kubenswrapper[7518]: [-]has-synced failed: reason withheld Mar 13 12:46:35.541971 master-0 kubenswrapper[7518]: [+]process-running ok Mar 13 12:46:35.541971 master-0 kubenswrapper[7518]: healthz check failed Mar 13 12:46:35.542426 master-0 kubenswrapper[7518]: I0313 12:46:35.542007 7518 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-wtf6j" 
podUID="45925a5e-41ae-4c19-b586-3151c7677612" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 12:46:35.972620 master-0 kubenswrapper[7518]: E0313 12:46:35.972484 7518 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 13 12:46:36.542619 master-0 kubenswrapper[7518]: I0313 12:46:36.542505 7518 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-wtf6j container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 12:46:36.542619 master-0 kubenswrapper[7518]: [-]has-synced failed: reason withheld Mar 13 12:46:36.542619 master-0 kubenswrapper[7518]: [+]process-running ok Mar 13 12:46:36.542619 master-0 kubenswrapper[7518]: healthz check failed Mar 13 12:46:36.542619 master-0 kubenswrapper[7518]: I0313 12:46:36.542610 7518 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-wtf6j" podUID="45925a5e-41ae-4c19-b586-3151c7677612" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 12:46:37.541667 master-0 kubenswrapper[7518]: I0313 12:46:37.541609 7518 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-wtf6j container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 12:46:37.541667 master-0 kubenswrapper[7518]: [-]has-synced failed: reason withheld Mar 13 12:46:37.541667 master-0 kubenswrapper[7518]: [+]process-running ok Mar 13 12:46:37.541667 master-0 kubenswrapper[7518]: healthz check failed Mar 13 12:46:37.541955 master-0 kubenswrapper[7518]: I0313 
12:46:37.541690 7518 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-wtf6j" podUID="45925a5e-41ae-4c19-b586-3151c7677612" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 12:46:38.462049 master-0 kubenswrapper[7518]: E0313 12:46:38.461978 7518 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="c13dc88ae1d13a7186746245d431d38598c0a66d591db0eeff52326c08185d46" cmd=["/bin/bash","-c","test -f /ready/ready"] Mar 13 12:46:38.463607 master-0 kubenswrapper[7518]: E0313 12:46:38.463541 7518 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="c13dc88ae1d13a7186746245d431d38598c0a66d591db0eeff52326c08185d46" cmd=["/bin/bash","-c","test -f /ready/ready"] Mar 13 12:46:38.464622 master-0 kubenswrapper[7518]: E0313 12:46:38.464587 7518 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="c13dc88ae1d13a7186746245d431d38598c0a66d591db0eeff52326c08185d46" cmd=["/bin/bash","-c","test -f /ready/ready"] Mar 13 12:46:38.464684 master-0 kubenswrapper[7518]: E0313 12:46:38.464620 7518 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openshift-multus/cni-sysctl-allowlist-ds-zhgx2" podUID="e0bb348a-f72d-462e-aec9-04e4600cc7f0" containerName="kube-multus-additional-cni-plugins" Mar 13 12:46:38.542424 master-0 kubenswrapper[7518]: I0313 12:46:38.542358 7518 patch_prober.go:28] interesting 
pod/router-default-79f8cd6fdd-wtf6j container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 12:46:38.542424 master-0 kubenswrapper[7518]: [-]has-synced failed: reason withheld Mar 13 12:46:38.542424 master-0 kubenswrapper[7518]: [+]process-running ok Mar 13 12:46:38.542424 master-0 kubenswrapper[7518]: healthz check failed Mar 13 12:46:38.542708 master-0 kubenswrapper[7518]: I0313 12:46:38.542431 7518 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-wtf6j" podUID="45925a5e-41ae-4c19-b586-3151c7677612" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 12:46:39.541697 master-0 kubenswrapper[7518]: I0313 12:46:39.541621 7518 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-wtf6j container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 12:46:39.541697 master-0 kubenswrapper[7518]: [-]has-synced failed: reason withheld Mar 13 12:46:39.541697 master-0 kubenswrapper[7518]: [+]process-running ok Mar 13 12:46:39.541697 master-0 kubenswrapper[7518]: healthz check failed Mar 13 12:46:39.541697 master-0 kubenswrapper[7518]: I0313 12:46:39.541685 7518 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-wtf6j" podUID="45925a5e-41ae-4c19-b586-3151c7677612" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 12:46:39.541697 master-0 kubenswrapper[7518]: I0313 12:46:39.541726 7518 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-79f8cd6fdd-wtf6j" Mar 13 12:46:39.542531 master-0 kubenswrapper[7518]: I0313 12:46:39.542436 7518 kuberuntime_manager.go:1027] "Message for 
Container of pod" containerName="router" containerStatusID={"Type":"cri-o","ID":"1033e2108ac67b4d3f75cb158efc6594f949bbad75576abf1a2d8dbd850e968d"} pod="openshift-ingress/router-default-79f8cd6fdd-wtf6j" containerMessage="Container router failed startup probe, will be restarted" Mar 13 12:46:39.542531 master-0 kubenswrapper[7518]: I0313 12:46:39.542478 7518 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ingress/router-default-79f8cd6fdd-wtf6j" podUID="45925a5e-41ae-4c19-b586-3151c7677612" containerName="router" containerID="cri-o://1033e2108ac67b4d3f75cb158efc6594f949bbad75576abf1a2d8dbd850e968d" gracePeriod=3600 Mar 13 12:46:40.425494 master-0 kubenswrapper[7518]: E0313 12:46:40.425413 7518 controller.go:195] "Failed to update lease" err="the server was unable to return a response in the time allotted, but may still be processing the request (put leases.coordination.k8s.io master-0)" Mar 13 12:46:41.441559 master-0 kubenswrapper[7518]: E0313 12:46:41.441436 7518 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda1a56802af72ce1aac6b5077f1695ac0.slice/crio-74509294773fbb5f73a8dd8c9003ceebee4b1e194cad14d7465b52eca3b8eaab.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode0bb348a_f72d_462e_aec9_04e4600cc7f0.slice/crio-c13dc88ae1d13a7186746245d431d38598c0a66d591db0eeff52326c08185d46.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode0bb348a_f72d_462e_aec9_04e4600cc7f0.slice/crio-conmon-c13dc88ae1d13a7186746245d431d38598c0a66d591db0eeff52326c08185d46.scope\": RecentStats: unable to find data in memory cache], 
[\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda1a56802af72ce1aac6b5077f1695ac0.slice/crio-conmon-74509294773fbb5f73a8dd8c9003ceebee4b1e194cad14d7465b52eca3b8eaab.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8e52bef89f4b50e4590a1719bcc5d7e5.slice/crio-38c2e3b0f262510b515ee410dfc31f716307a3e0807eb4b5d3d5cc8d3c3c5ced.scope\": RecentStats: unable to find data in memory cache]" Mar 13 12:46:41.547353 master-0 kubenswrapper[7518]: I0313 12:46:41.547307 7518 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_8e52bef89f4b50e4590a1719bcc5d7e5/etcd-rev/0.log" Mar 13 12:46:41.548126 master-0 kubenswrapper[7518]: I0313 12:46:41.548093 7518 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_8e52bef89f4b50e4590a1719bcc5d7e5/etcd-metrics/0.log" Mar 13 12:46:41.548757 master-0 kubenswrapper[7518]: I0313 12:46:41.548720 7518 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_8e52bef89f4b50e4590a1719bcc5d7e5/etcd/0.log" Mar 13 12:46:41.549214 master-0 kubenswrapper[7518]: I0313 12:46:41.549189 7518 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_8e52bef89f4b50e4590a1719bcc5d7e5/etcdctl/0.log" Mar 13 12:46:41.550234 master-0 kubenswrapper[7518]: I0313 12:46:41.550208 7518 generic.go:334] "Generic (PLEG): container finished" podID="8e52bef89f4b50e4590a1719bcc5d7e5" containerID="38c2e3b0f262510b515ee410dfc31f716307a3e0807eb4b5d3d5cc8d3c3c5ced" exitCode=137 Mar 13 12:46:41.550234 master-0 kubenswrapper[7518]: I0313 12:46:41.550230 7518 generic.go:334] "Generic (PLEG): container finished" podID="8e52bef89f4b50e4590a1719bcc5d7e5" containerID="1ca6e8ace45a17d5da03ca182f6c2cf352582d5bc5b5d835a63329d11f8a8397" exitCode=137 Mar 13 12:46:41.551968 master-0 kubenswrapper[7518]: I0313 12:46:41.551930 7518 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-multus_cni-sysctl-allowlist-ds-zhgx2_e0bb348a-f72d-462e-aec9-04e4600cc7f0/kube-multus-additional-cni-plugins/0.log" Mar 13 12:46:41.552042 master-0 kubenswrapper[7518]: I0313 12:46:41.551972 7518 generic.go:334] "Generic (PLEG): container finished" podID="e0bb348a-f72d-462e-aec9-04e4600cc7f0" containerID="c13dc88ae1d13a7186746245d431d38598c0a66d591db0eeff52326c08185d46" exitCode=137 Mar 13 12:46:41.552042 master-0 kubenswrapper[7518]: I0313 12:46:41.552005 7518 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/cni-sysctl-allowlist-ds-zhgx2" event={"ID":"e0bb348a-f72d-462e-aec9-04e4600cc7f0","Type":"ContainerDied","Data":"c13dc88ae1d13a7186746245d431d38598c0a66d591db0eeff52326c08185d46"} Mar 13 12:46:42.182754 master-0 kubenswrapper[7518]: I0313 12:46:42.182708 7518 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_8e52bef89f4b50e4590a1719bcc5d7e5/etcd-rev/0.log" Mar 13 12:46:42.183743 master-0 kubenswrapper[7518]: I0313 12:46:42.183709 7518 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_8e52bef89f4b50e4590a1719bcc5d7e5/etcd-metrics/0.log" Mar 13 12:46:42.184414 master-0 kubenswrapper[7518]: I0313 12:46:42.184388 7518 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_8e52bef89f4b50e4590a1719bcc5d7e5/etcd/0.log" Mar 13 12:46:42.184790 master-0 kubenswrapper[7518]: I0313 12:46:42.184765 7518 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_8e52bef89f4b50e4590a1719bcc5d7e5/etcdctl/0.log" Mar 13 12:46:42.185632 master-0 kubenswrapper[7518]: I0313 12:46:42.185605 7518 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd/etcd-master-0" Mar 13 12:46:42.189867 master-0 kubenswrapper[7518]: I0313 12:46:42.189836 7518 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_cni-sysctl-allowlist-ds-zhgx2_e0bb348a-f72d-462e-aec9-04e4600cc7f0/kube-multus-additional-cni-plugins/0.log" Mar 13 12:46:42.189940 master-0 kubenswrapper[7518]: I0313 12:46:42.189884 7518 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-multus/cni-sysctl-allowlist-ds-zhgx2" Mar 13 12:46:42.349809 master-0 kubenswrapper[7518]: I0313 12:46:42.349631 7518 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/e0bb348a-f72d-462e-aec9-04e4600cc7f0-cni-sysctl-allowlist\") pod \"e0bb348a-f72d-462e-aec9-04e4600cc7f0\" (UID: \"e0bb348a-f72d-462e-aec9-04e4600cc7f0\") " Mar 13 12:46:42.349809 master-0 kubenswrapper[7518]: I0313 12:46:42.349693 7518 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/8e52bef89f4b50e4590a1719bcc5d7e5-cert-dir\") pod \"8e52bef89f4b50e4590a1719bcc5d7e5\" (UID: \"8e52bef89f4b50e4590a1719bcc5d7e5\") " Mar 13 12:46:42.349809 master-0 kubenswrapper[7518]: I0313 12:46:42.349807 7518 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/8e52bef89f4b50e4590a1719bcc5d7e5-resource-dir\") pod \"8e52bef89f4b50e4590a1719bcc5d7e5\" (UID: \"8e52bef89f4b50e4590a1719bcc5d7e5\") " Mar 13 12:46:42.350108 master-0 kubenswrapper[7518]: I0313 12:46:42.349842 7518 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4szv8\" (UniqueName: \"kubernetes.io/projected/e0bb348a-f72d-462e-aec9-04e4600cc7f0-kube-api-access-4szv8\") pod \"e0bb348a-f72d-462e-aec9-04e4600cc7f0\" (UID: \"e0bb348a-f72d-462e-aec9-04e4600cc7f0\") " Mar 13 
12:46:42.350108 master-0 kubenswrapper[7518]: I0313 12:46:42.349869 7518 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8e52bef89f4b50e4590a1719bcc5d7e5-cert-dir" (OuterVolumeSpecName: "cert-dir") pod "8e52bef89f4b50e4590a1719bcc5d7e5" (UID: "8e52bef89f4b50e4590a1719bcc5d7e5"). InnerVolumeSpecName "cert-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 13 12:46:42.350108 master-0 kubenswrapper[7518]: I0313 12:46:42.349890 7518 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/8e52bef89f4b50e4590a1719bcc5d7e5-static-pod-dir\") pod \"8e52bef89f4b50e4590a1719bcc5d7e5\" (UID: \"8e52bef89f4b50e4590a1719bcc5d7e5\") " Mar 13 12:46:42.350108 master-0 kubenswrapper[7518]: I0313 12:46:42.349914 7518 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8e52bef89f4b50e4590a1719bcc5d7e5-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "8e52bef89f4b50e4590a1719bcc5d7e5" (UID: "8e52bef89f4b50e4590a1719bcc5d7e5"). InnerVolumeSpecName "resource-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 13 12:46:42.350108 master-0 kubenswrapper[7518]: I0313 12:46:42.349922 7518 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/e0bb348a-f72d-462e-aec9-04e4600cc7f0-tuning-conf-dir\") pod \"e0bb348a-f72d-462e-aec9-04e4600cc7f0\" (UID: \"e0bb348a-f72d-462e-aec9-04e4600cc7f0\") " Mar 13 12:46:42.350108 master-0 kubenswrapper[7518]: I0313 12:46:42.349952 7518 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/8e52bef89f4b50e4590a1719bcc5d7e5-usr-local-bin\") pod \"8e52bef89f4b50e4590a1719bcc5d7e5\" (UID: \"8e52bef89f4b50e4590a1719bcc5d7e5\") " Mar 13 12:46:42.350108 master-0 kubenswrapper[7518]: I0313 12:46:42.349992 7518 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/8e52bef89f4b50e4590a1719bcc5d7e5-data-dir\") pod \"8e52bef89f4b50e4590a1719bcc5d7e5\" (UID: \"8e52bef89f4b50e4590a1719bcc5d7e5\") " Mar 13 12:46:42.350108 master-0 kubenswrapper[7518]: I0313 12:46:42.350018 7518 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/8e52bef89f4b50e4590a1719bcc5d7e5-log-dir\") pod \"8e52bef89f4b50e4590a1719bcc5d7e5\" (UID: \"8e52bef89f4b50e4590a1719bcc5d7e5\") " Mar 13 12:46:42.350108 master-0 kubenswrapper[7518]: I0313 12:46:42.350042 7518 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ready\" (UniqueName: \"kubernetes.io/empty-dir/e0bb348a-f72d-462e-aec9-04e4600cc7f0-ready\") pod \"e0bb348a-f72d-462e-aec9-04e4600cc7f0\" (UID: \"e0bb348a-f72d-462e-aec9-04e4600cc7f0\") " Mar 13 12:46:42.350631 master-0 kubenswrapper[7518]: I0313 12:46:42.350102 7518 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/host-path/8e52bef89f4b50e4590a1719bcc5d7e5-usr-local-bin" (OuterVolumeSpecName: "usr-local-bin") pod "8e52bef89f4b50e4590a1719bcc5d7e5" (UID: "8e52bef89f4b50e4590a1719bcc5d7e5"). InnerVolumeSpecName "usr-local-bin". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 13 12:46:42.350631 master-0 kubenswrapper[7518]: I0313 12:46:42.350216 7518 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8e52bef89f4b50e4590a1719bcc5d7e5-static-pod-dir" (OuterVolumeSpecName: "static-pod-dir") pod "8e52bef89f4b50e4590a1719bcc5d7e5" (UID: "8e52bef89f4b50e4590a1719bcc5d7e5"). InnerVolumeSpecName "static-pod-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 13 12:46:42.350631 master-0 kubenswrapper[7518]: I0313 12:46:42.350247 7518 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e0bb348a-f72d-462e-aec9-04e4600cc7f0-tuning-conf-dir" (OuterVolumeSpecName: "tuning-conf-dir") pod "e0bb348a-f72d-462e-aec9-04e4600cc7f0" (UID: "e0bb348a-f72d-462e-aec9-04e4600cc7f0"). InnerVolumeSpecName "tuning-conf-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 13 12:46:42.350631 master-0 kubenswrapper[7518]: I0313 12:46:42.350272 7518 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8e52bef89f4b50e4590a1719bcc5d7e5-data-dir" (OuterVolumeSpecName: "data-dir") pod "8e52bef89f4b50e4590a1719bcc5d7e5" (UID: "8e52bef89f4b50e4590a1719bcc5d7e5"). InnerVolumeSpecName "data-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 13 12:46:42.350631 master-0 kubenswrapper[7518]: I0313 12:46:42.350261 7518 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e0bb348a-f72d-462e-aec9-04e4600cc7f0-cni-sysctl-allowlist" (OuterVolumeSpecName: "cni-sysctl-allowlist") pod "e0bb348a-f72d-462e-aec9-04e4600cc7f0" (UID: "e0bb348a-f72d-462e-aec9-04e4600cc7f0"). 
InnerVolumeSpecName "cni-sysctl-allowlist". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 13 12:46:42.350631 master-0 kubenswrapper[7518]: I0313 12:46:42.350311 7518 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8e52bef89f4b50e4590a1719bcc5d7e5-log-dir" (OuterVolumeSpecName: "log-dir") pod "8e52bef89f4b50e4590a1719bcc5d7e5" (UID: "8e52bef89f4b50e4590a1719bcc5d7e5"). InnerVolumeSpecName "log-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 13 12:46:42.350631 master-0 kubenswrapper[7518]: I0313 12:46:42.350527 7518 reconciler_common.go:293] "Volume detached for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/8e52bef89f4b50e4590a1719bcc5d7e5-data-dir\") on node \"master-0\" DevicePath \"\"" Mar 13 12:46:42.350631 master-0 kubenswrapper[7518]: I0313 12:46:42.350563 7518 reconciler_common.go:293] "Volume detached for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/8e52bef89f4b50e4590a1719bcc5d7e5-log-dir\") on node \"master-0\" DevicePath \"\"" Mar 13 12:46:42.350631 master-0 kubenswrapper[7518]: I0313 12:46:42.350575 7518 reconciler_common.go:293] "Volume detached for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/e0bb348a-f72d-462e-aec9-04e4600cc7f0-cni-sysctl-allowlist\") on node \"master-0\" DevicePath \"\"" Mar 13 12:46:42.350631 master-0 kubenswrapper[7518]: I0313 12:46:42.350587 7518 reconciler_common.go:293] "Volume detached for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/8e52bef89f4b50e4590a1719bcc5d7e5-cert-dir\") on node \"master-0\" DevicePath \"\"" Mar 13 12:46:42.350631 master-0 kubenswrapper[7518]: I0313 12:46:42.350598 7518 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/8e52bef89f4b50e4590a1719bcc5d7e5-resource-dir\") on node \"master-0\" DevicePath \"\"" Mar 13 12:46:42.350631 master-0 kubenswrapper[7518]: I0313 12:46:42.350609 7518 
reconciler_common.go:293] "Volume detached for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/8e52bef89f4b50e4590a1719bcc5d7e5-static-pod-dir\") on node \"master-0\" DevicePath \"\"" Mar 13 12:46:42.350631 master-0 kubenswrapper[7518]: I0313 12:46:42.350620 7518 reconciler_common.go:293] "Volume detached for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/e0bb348a-f72d-462e-aec9-04e4600cc7f0-tuning-conf-dir\") on node \"master-0\" DevicePath \"\"" Mar 13 12:46:42.350631 master-0 kubenswrapper[7518]: I0313 12:46:42.350630 7518 reconciler_common.go:293] "Volume detached for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/8e52bef89f4b50e4590a1719bcc5d7e5-usr-local-bin\") on node \"master-0\" DevicePath \"\"" Mar 13 12:46:42.350631 master-0 kubenswrapper[7518]: I0313 12:46:42.350582 7518 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e0bb348a-f72d-462e-aec9-04e4600cc7f0-ready" (OuterVolumeSpecName: "ready") pod "e0bb348a-f72d-462e-aec9-04e4600cc7f0" (UID: "e0bb348a-f72d-462e-aec9-04e4600cc7f0"). InnerVolumeSpecName "ready". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 13 12:46:42.352682 master-0 kubenswrapper[7518]: I0313 12:46:42.352618 7518 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e0bb348a-f72d-462e-aec9-04e4600cc7f0-kube-api-access-4szv8" (OuterVolumeSpecName: "kube-api-access-4szv8") pod "e0bb348a-f72d-462e-aec9-04e4600cc7f0" (UID: "e0bb348a-f72d-462e-aec9-04e4600cc7f0"). InnerVolumeSpecName "kube-api-access-4szv8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 12:46:42.452635 master-0 kubenswrapper[7518]: I0313 12:46:42.452495 7518 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4szv8\" (UniqueName: \"kubernetes.io/projected/e0bb348a-f72d-462e-aec9-04e4600cc7f0-kube-api-access-4szv8\") on node \"master-0\" DevicePath \"\"" Mar 13 12:46:42.452635 master-0 kubenswrapper[7518]: I0313 12:46:42.452585 7518 reconciler_common.go:293] "Volume detached for volume \"ready\" (UniqueName: \"kubernetes.io/empty-dir/e0bb348a-f72d-462e-aec9-04e4600cc7f0-ready\") on node \"master-0\" DevicePath \"\"" Mar 13 12:46:42.561873 master-0 kubenswrapper[7518]: I0313 12:46:42.561808 7518 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_8e52bef89f4b50e4590a1719bcc5d7e5/etcd-rev/0.log" Mar 13 12:46:42.563261 master-0 kubenswrapper[7518]: I0313 12:46:42.563212 7518 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_8e52bef89f4b50e4590a1719bcc5d7e5/etcd-metrics/0.log" Mar 13 12:46:42.564724 master-0 kubenswrapper[7518]: I0313 12:46:42.564690 7518 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_8e52bef89f4b50e4590a1719bcc5d7e5/etcd/0.log" Mar 13 12:46:42.565214 master-0 kubenswrapper[7518]: I0313 12:46:42.565183 7518 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_8e52bef89f4b50e4590a1719bcc5d7e5/etcdctl/0.log" Mar 13 12:46:42.567377 master-0 kubenswrapper[7518]: I0313 12:46:42.567325 7518 scope.go:117] "RemoveContainer" containerID="dd5289c2e065c63e076ef785f5c91f68426de016a332635418487df625eabea4" Mar 13 12:46:42.567568 master-0 kubenswrapper[7518]: I0313 12:46:42.567539 7518 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd/etcd-master-0" Mar 13 12:46:42.569121 master-0 kubenswrapper[7518]: I0313 12:46:42.569028 7518 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_cni-sysctl-allowlist-ds-zhgx2_e0bb348a-f72d-462e-aec9-04e4600cc7f0/kube-multus-additional-cni-plugins/0.log" Mar 13 12:46:42.569121 master-0 kubenswrapper[7518]: I0313 12:46:42.569075 7518 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/cni-sysctl-allowlist-ds-zhgx2" event={"ID":"e0bb348a-f72d-462e-aec9-04e4600cc7f0","Type":"ContainerDied","Data":"31e14db18d25a3fa72927804a1ed2bc494c41afe50ce854c00d3136f6fbe374e"} Mar 13 12:46:42.569254 master-0 kubenswrapper[7518]: I0313 12:46:42.569173 7518 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-multus/cni-sysctl-allowlist-ds-zhgx2" Mar 13 12:46:42.581943 master-0 kubenswrapper[7518]: I0313 12:46:42.581898 7518 scope.go:117] "RemoveContainer" containerID="440a66755130132d56907a45f85ff201a5b883c75b1e482675b4125de5018dda" Mar 13 12:46:42.606515 master-0 kubenswrapper[7518]: I0313 12:46:42.606479 7518 scope.go:117] "RemoveContainer" containerID="acda021c5f7e7aff55c971e32cec50e25aa40113e66a45d15899959c993261a0" Mar 13 12:46:42.621249 master-0 kubenswrapper[7518]: I0313 12:46:42.621126 7518 scope.go:117] "RemoveContainer" containerID="38c2e3b0f262510b515ee410dfc31f716307a3e0807eb4b5d3d5cc8d3c3c5ced" Mar 13 12:46:42.635177 master-0 kubenswrapper[7518]: I0313 12:46:42.635111 7518 scope.go:117] "RemoveContainer" containerID="1ca6e8ace45a17d5da03ca182f6c2cf352582d5bc5b5d835a63329d11f8a8397" Mar 13 12:46:42.647838 master-0 kubenswrapper[7518]: I0313 12:46:42.647800 7518 scope.go:117] "RemoveContainer" containerID="a24502cdbf57f3af530c16b279d90e04b37d8116542797b27db1c42bb0ece279" Mar 13 12:46:42.662130 master-0 kubenswrapper[7518]: I0313 12:46:42.662083 7518 scope.go:117] "RemoveContainer" 
containerID="51708dbfd880bb781044065864d488ed11f7e85098ff14393855c88e1ae496df" Mar 13 12:46:42.675346 master-0 kubenswrapper[7518]: I0313 12:46:42.675310 7518 scope.go:117] "RemoveContainer" containerID="19074dc73968560b828f5c5335186658b83f7db1641c16ec73e2170c5bea574e" Mar 13 12:46:42.691213 master-0 kubenswrapper[7518]: I0313 12:46:42.691175 7518 scope.go:117] "RemoveContainer" containerID="c13dc88ae1d13a7186746245d431d38598c0a66d591db0eeff52326c08185d46" Mar 13 12:46:43.598955 master-0 kubenswrapper[7518]: I0313 12:46:43.598875 7518 scope.go:117] "RemoveContainer" containerID="74509294773fbb5f73a8dd8c9003ceebee4b1e194cad14d7465b52eca3b8eaab" Mar 13 12:46:43.606810 master-0 kubenswrapper[7518]: I0313 12:46:43.606769 7518 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8e52bef89f4b50e4590a1719bcc5d7e5" path="/var/lib/kubelet/pods/8e52bef89f4b50e4590a1719bcc5d7e5/volumes" Mar 13 12:46:44.585445 master-0 kubenswrapper[7518]: I0313 12:46:44.585384 7518 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-scheduler-master-0" event={"ID":"a1a56802af72ce1aac6b5077f1695ac0","Type":"ContainerStarted","Data":"33dc3b8e25f77fb05b589ec8e3e510dade539a78b8f7492825619e6eaad51fe9"} Mar 13 12:46:45.335078 master-0 kubenswrapper[7518]: E0313 12:46:45.334937 7518 event.go:359] "Server rejected event (will not retry!)" err="Timeout: request did not complete within requested timeout - context deadline exceeded" event="&Event{ObjectMeta:{etcd-master-0.189c6755c2944495 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-master-0,UID:8e52bef89f4b50e4590a1719bcc5d7e5,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-metrics},},Reason:Killing,Message:Stopping container etcd-metrics,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-13 12:46:11.311510677 +0000 UTC m=+525.944579864,LastTimestamp:2026-03-13 
12:46:11.311510677 +0000 UTC m=+525.944579864,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 13 12:46:45.973819 master-0 kubenswrapper[7518]: E0313 12:46:45.973684 7518 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 13 12:46:49.597991 master-0 kubenswrapper[7518]: I0313 12:46:49.597895 7518 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-master-0" Mar 13 12:46:49.599202 master-0 kubenswrapper[7518]: I0313 12:46:49.599026 7518 scope.go:117] "RemoveContainer" containerID="c81c73f91c343cb4e577286cc40722c4e25a55cd8b94b5a421c1eff5fabb3c61" Mar 13 12:46:49.618075 master-0 kubenswrapper[7518]: I0313 12:46:49.618011 7518 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler_installer-4-master-0_8f8543a5-1639-4140-a18d-8b0c96821bae/installer/0.log" Mar 13 12:46:49.618306 master-0 kubenswrapper[7518]: I0313 12:46:49.618118 7518 generic.go:334] "Generic (PLEG): container finished" podID="8f8543a5-1639-4140-a18d-8b0c96821bae" containerID="a813a663a398e05e616fe550c674646a6498ff5442d82cbd7adbf48594546e77" exitCode=1 Mar 13 12:46:49.618306 master-0 kubenswrapper[7518]: I0313 12:46:49.618193 7518 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-4-master-0" event={"ID":"8f8543a5-1639-4140-a18d-8b0c96821bae","Type":"ContainerDied","Data":"a813a663a398e05e616fe550c674646a6498ff5442d82cbd7adbf48594546e77"} Mar 13 12:46:49.622385 master-0 kubenswrapper[7518]: I0313 12:46:49.622339 7518 kubelet.go:1909] "Trying to delete pod" pod="openshift-etcd/etcd-master-0" podUID="12715056-f5d1-4df5-82a5-f0c637ce3700" Mar 13 12:46:49.622488 master-0 
kubenswrapper[7518]: I0313 12:46:49.622407 7518 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-etcd/etcd-master-0" podUID="12715056-f5d1-4df5-82a5-f0c637ce3700" Mar 13 12:46:50.426506 master-0 kubenswrapper[7518]: E0313 12:46:50.426427 7518 controller.go:195] "Failed to update lease" err="the server was unable to return a response in the time allotted, but may still be processing the request (put leases.coordination.k8s.io master-0)" Mar 13 12:46:50.627467 master-0 kubenswrapper[7518]: I0313 12:46:50.627380 7518 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-controller-manager-master-0" event={"ID":"f78c05e1499b533b83f091333d61f045","Type":"ContainerStarted","Data":"92cedb7a6da80cc95a9c57731260f47e160b1a6914a8d90e1d880c6432b4086f"} Mar 13 12:46:50.928456 master-0 kubenswrapper[7518]: I0313 12:46:50.928402 7518 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler_installer-4-master-0_8f8543a5-1639-4140-a18d-8b0c96821bae/installer/0.log" Mar 13 12:46:50.928654 master-0 kubenswrapper[7518]: I0313 12:46:50.928508 7518 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/installer-4-master-0" Mar 13 12:46:51.078887 master-0 kubenswrapper[7518]: I0313 12:46:51.078799 7518 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/8f8543a5-1639-4140-a18d-8b0c96821bae-kubelet-dir\") pod \"8f8543a5-1639-4140-a18d-8b0c96821bae\" (UID: \"8f8543a5-1639-4140-a18d-8b0c96821bae\") " Mar 13 12:46:51.079227 master-0 kubenswrapper[7518]: I0313 12:46:51.078895 7518 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8f8543a5-1639-4140-a18d-8b0c96821bae-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "8f8543a5-1639-4140-a18d-8b0c96821bae" (UID: "8f8543a5-1639-4140-a18d-8b0c96821bae"). InnerVolumeSpecName "kubelet-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 13 12:46:51.079227 master-0 kubenswrapper[7518]: I0313 12:46:51.079073 7518 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/8f8543a5-1639-4140-a18d-8b0c96821bae-kube-api-access\") pod \"8f8543a5-1639-4140-a18d-8b0c96821bae\" (UID: \"8f8543a5-1639-4140-a18d-8b0c96821bae\") " Mar 13 12:46:51.079227 master-0 kubenswrapper[7518]: I0313 12:46:51.079120 7518 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/8f8543a5-1639-4140-a18d-8b0c96821bae-var-lock\") pod \"8f8543a5-1639-4140-a18d-8b0c96821bae\" (UID: \"8f8543a5-1639-4140-a18d-8b0c96821bae\") " Mar 13 12:46:51.079622 master-0 kubenswrapper[7518]: I0313 12:46:51.079371 7518 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/8f8543a5-1639-4140-a18d-8b0c96821bae-kubelet-dir\") on node \"master-0\" DevicePath \"\"" Mar 13 12:46:51.079622 master-0 kubenswrapper[7518]: I0313 12:46:51.079426 7518 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8f8543a5-1639-4140-a18d-8b0c96821bae-var-lock" (OuterVolumeSpecName: "var-lock") pod "8f8543a5-1639-4140-a18d-8b0c96821bae" (UID: "8f8543a5-1639-4140-a18d-8b0c96821bae"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 13 12:46:51.082413 master-0 kubenswrapper[7518]: I0313 12:46:51.082352 7518 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f8543a5-1639-4140-a18d-8b0c96821bae-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "8f8543a5-1639-4140-a18d-8b0c96821bae" (UID: "8f8543a5-1639-4140-a18d-8b0c96821bae"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 12:46:51.180400 master-0 kubenswrapper[7518]: I0313 12:46:51.180334 7518 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/8f8543a5-1639-4140-a18d-8b0c96821bae-kube-api-access\") on node \"master-0\" DevicePath \"\"" Mar 13 12:46:51.180400 master-0 kubenswrapper[7518]: I0313 12:46:51.180378 7518 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/8f8543a5-1639-4140-a18d-8b0c96821bae-var-lock\") on node \"master-0\" DevicePath \"\"" Mar 13 12:46:51.634188 master-0 kubenswrapper[7518]: I0313 12:46:51.634126 7518 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler_installer-4-master-0_8f8543a5-1639-4140-a18d-8b0c96821bae/installer/0.log" Mar 13 12:46:51.634188 master-0 kubenswrapper[7518]: I0313 12:46:51.634200 7518 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-4-master-0" event={"ID":"8f8543a5-1639-4140-a18d-8b0c96821bae","Type":"ContainerDied","Data":"90d62dc62426f86839fab6dfcb69950974991422a3bbd33e6f3fd2c0bd1c8644"} Mar 13 12:46:51.634864 master-0 kubenswrapper[7518]: I0313 12:46:51.634226 7518 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="90d62dc62426f86839fab6dfcb69950974991422a3bbd33e6f3fd2c0bd1c8644" Mar 13 12:46:51.634864 master-0 kubenswrapper[7518]: I0313 12:46:51.634291 7518 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/installer-4-master-0" Mar 13 12:46:54.146555 master-0 kubenswrapper[7518]: I0313 12:46:54.146443 7518 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 13 12:46:55.215977 master-0 kubenswrapper[7518]: I0313 12:46:55.215897 7518 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 13 12:46:55.974693 master-0 kubenswrapper[7518]: E0313 12:46:55.974622 7518 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 13 12:46:57.146703 master-0 kubenswrapper[7518]: I0313 12:46:57.146647 7518 prober.go:107] "Probe failed" probeType="Startup" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="f78c05e1499b533b83f091333d61f045" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.32.10:10257/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 13 12:47:00.427425 master-0 kubenswrapper[7518]: E0313 12:47:00.427325 7518 controller.go:195] "Failed to update lease" err="Put \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 13 12:47:05.744395 master-0 kubenswrapper[7518]: I0313 12:47:05.744336 7518 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-network-node-identity_network-node-identity-qg8q5_1f43b4e7-5cd1-46d2-a02e-0d846b2e5182/approver/1.log" Mar 13 12:47:05.746640 master-0 kubenswrapper[7518]: I0313 12:47:05.746580 7518 log.go:25] "Finished parsing log 
file" path="/var/log/pods/openshift-network-node-identity_network-node-identity-qg8q5_1f43b4e7-5cd1-46d2-a02e-0d846b2e5182/approver/0.log" Mar 13 12:47:05.747451 master-0 kubenswrapper[7518]: I0313 12:47:05.747367 7518 generic.go:334] "Generic (PLEG): container finished" podID="1f43b4e7-5cd1-46d2-a02e-0d846b2e5182" containerID="b91c079b382f32d02d029d00309dfc5b4425807a136542a6d176792b503d743b" exitCode=1 Mar 13 12:47:05.747451 master-0 kubenswrapper[7518]: I0313 12:47:05.747435 7518 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-qg8q5" event={"ID":"1f43b4e7-5cd1-46d2-a02e-0d846b2e5182","Type":"ContainerDied","Data":"b91c079b382f32d02d029d00309dfc5b4425807a136542a6d176792b503d743b"} Mar 13 12:47:05.747728 master-0 kubenswrapper[7518]: I0313 12:47:05.747494 7518 scope.go:117] "RemoveContainer" containerID="8c3d9fdbcfd0987b6eb3f7869d1d1d034470ad27e956a473bf9fb468daecb5e8" Mar 13 12:47:05.748491 master-0 kubenswrapper[7518]: I0313 12:47:05.748414 7518 scope.go:117] "RemoveContainer" containerID="b91c079b382f32d02d029d00309dfc5b4425807a136542a6d176792b503d743b" Mar 13 12:47:05.748965 master-0 kubenswrapper[7518]: E0313 12:47:05.748896 7518 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"approver\" with CrashLoopBackOff: \"back-off 10s restarting failed container=approver pod=network-node-identity-qg8q5_openshift-network-node-identity(1f43b4e7-5cd1-46d2-a02e-0d846b2e5182)\"" pod="openshift-network-node-identity/network-node-identity-qg8q5" podUID="1f43b4e7-5cd1-46d2-a02e-0d846b2e5182" Mar 13 12:47:05.976035 master-0 kubenswrapper[7518]: E0313 12:47:05.975880 7518 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 13 12:47:05.976367 master-0 
kubenswrapper[7518]: E0313 12:47:05.976338 7518 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Mar 13 12:47:06.754909 master-0 kubenswrapper[7518]: I0313 12:47:06.754797 7518 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-network-node-identity_network-node-identity-qg8q5_1f43b4e7-5cd1-46d2-a02e-0d846b2e5182/approver/1.log" Mar 13 12:47:07.148096 master-0 kubenswrapper[7518]: I0313 12:47:07.147997 7518 prober.go:107] "Probe failed" probeType="Startup" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="f78c05e1499b533b83f091333d61f045" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.32.10:10257/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 13 12:47:10.428173 master-0 kubenswrapper[7518]: E0313 12:47:10.428067 7518 controller.go:195] "Failed to update lease" err="Put \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 13 12:47:10.428173 master-0 kubenswrapper[7518]: I0313 12:47:10.428164 7518 controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease" Mar 13 12:47:11.315783 master-0 kubenswrapper[7518]: I0313 12:47:11.315688 7518 status_manager.go:851] "Failed to get status for pod" podUID="8e52bef89f4b50e4590a1719bcc5d7e5" pod="openshift-etcd/etcd-master-0" err="the server was unable to return a response in the time allotted, but may still be processing the request (get pods etcd-master-0)" Mar 13 12:47:17.147483 master-0 kubenswrapper[7518]: I0313 12:47:17.147341 7518 prober.go:107] "Probe failed" probeType="Startup" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="f78c05e1499b533b83f091333d61f045" 
containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.32.10:10257/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 13 12:47:17.147483 master-0 kubenswrapper[7518]: I0313 12:47:17.147506 7518 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 13 12:47:17.148409 master-0 kubenswrapper[7518]: I0313 12:47:17.148145 7518 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="kube-controller-manager" containerStatusID={"Type":"cri-o","ID":"92cedb7a6da80cc95a9c57731260f47e160b1a6914a8d90e1d880c6432b4086f"} pod="kube-system/bootstrap-kube-controller-manager-master-0" containerMessage="Container kube-controller-manager failed startup probe, will be restarted" Mar 13 12:47:17.148409 master-0 kubenswrapper[7518]: I0313 12:47:17.148303 7518 kuberuntime_container.go:808] "Killing container with a grace period" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="f78c05e1499b533b83f091333d61f045" containerName="kube-controller-manager" containerID="cri-o://92cedb7a6da80cc95a9c57731260f47e160b1a6914a8d90e1d880c6432b4086f" gracePeriod=30 Mar 13 12:47:17.875789 master-0 kubenswrapper[7518]: I0313 12:47:17.871270 7518 generic.go:334] "Generic (PLEG): container finished" podID="f78c05e1499b533b83f091333d61f045" containerID="92cedb7a6da80cc95a9c57731260f47e160b1a6914a8d90e1d880c6432b4086f" exitCode=2 Mar 13 12:47:17.875789 master-0 kubenswrapper[7518]: I0313 12:47:17.871322 7518 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-controller-manager-master-0" event={"ID":"f78c05e1499b533b83f091333d61f045","Type":"ContainerDied","Data":"92cedb7a6da80cc95a9c57731260f47e160b1a6914a8d90e1d880c6432b4086f"} Mar 13 12:47:17.875789 master-0 kubenswrapper[7518]: I0313 12:47:17.871364 7518 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="kube-system/bootstrap-kube-controller-manager-master-0" event={"ID":"f78c05e1499b533b83f091333d61f045","Type":"ContainerStarted","Data":"db4e89d51ac70265662c9ba63d20dfe2538e991716870f677f78b9fb028c5609"} Mar 13 12:47:17.875789 master-0 kubenswrapper[7518]: I0313 12:47:17.871415 7518 scope.go:117] "RemoveContainer" containerID="c81c73f91c343cb4e577286cc40722c4e25a55cd8b94b5a421c1eff5fabb3c61" Mar 13 12:47:19.338583 master-0 kubenswrapper[7518]: E0313 12:47:19.338378 7518 event.go:359] "Server rejected event (will not retry!)" err="Timeout: request did not complete within requested timeout - context deadline exceeded" event="&Event{ObjectMeta:{etcd-master-0.189c6755c293fc64 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-master-0,UID:8e52bef89f4b50e4590a1719bcc5d7e5,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcdctl},},Reason:Killing,Message:Stopping container etcdctl,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-13 12:46:11.311492196 +0000 UTC m=+525.944561383,LastTimestamp:2026-03-13 12:46:11.311492196 +0000 UTC m=+525.944561383,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 13 12:47:20.429557 master-0 kubenswrapper[7518]: E0313 12:47:20.429441 7518 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="200ms" Mar 13 12:47:20.598760 master-0 kubenswrapper[7518]: I0313 12:47:20.598672 7518 scope.go:117] "RemoveContainer" containerID="b91c079b382f32d02d029d00309dfc5b4425807a136542a6d176792b503d743b" Mar 13 12:47:20.895521 master-0 kubenswrapper[7518]: I0313 12:47:20.895465 7518 log.go:25] 
"Finished parsing log file" path="/var/log/pods/openshift-network-node-identity_network-node-identity-qg8q5_1f43b4e7-5cd1-46d2-a02e-0d846b2e5182/approver/1.log" Mar 13 12:47:20.896081 master-0 kubenswrapper[7518]: I0313 12:47:20.896030 7518 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-qg8q5" event={"ID":"1f43b4e7-5cd1-46d2-a02e-0d846b2e5182","Type":"ContainerStarted","Data":"9236e5e6a5d98174397eefa77530989c5099d96810d112c07423b8b5d2e253f7"} Mar 13 12:47:23.625281 master-0 kubenswrapper[7518]: E0313 12:47:23.625195 7518 mirror_client.go:138] "Failed deleting a mirror pod" err="Timeout: request did not complete within requested timeout - context deadline exceeded" pod="openshift-etcd/etcd-master-0" Mar 13 12:47:23.626140 master-0 kubenswrapper[7518]: I0313 12:47:23.625699 7518 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-master-0" Mar 13 12:47:23.648948 master-0 kubenswrapper[7518]: W0313 12:47:23.648875 7518 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod29c709c82970b529e7b9b895aa92ef05.slice/crio-aac9e43b541ff8c2c2bfb86003c0c12881f81493b0818cd60c9ba62d916d93a2 WatchSource:0}: Error finding container aac9e43b541ff8c2c2bfb86003c0c12881f81493b0818cd60c9ba62d916d93a2: Status 404 returned error can't find the container with id aac9e43b541ff8c2c2bfb86003c0c12881f81493b0818cd60c9ba62d916d93a2 Mar 13 12:47:23.914542 master-0 kubenswrapper[7518]: I0313 12:47:23.914399 7518 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"29c709c82970b529e7b9b895aa92ef05","Type":"ContainerStarted","Data":"aac9e43b541ff8c2c2bfb86003c0c12881f81493b0818cd60c9ba62d916d93a2"} Mar 13 12:47:24.147159 master-0 kubenswrapper[7518]: I0313 12:47:24.146855 7518 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" 
pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 13 12:47:24.922948 master-0 kubenswrapper[7518]: I0313 12:47:24.922872 7518 generic.go:334] "Generic (PLEG): container finished" podID="29c709c82970b529e7b9b895aa92ef05" containerID="191d6b42f790fa129a37efd43f7471d2dd1f86d99afc82c180f797e065b49aad" exitCode=0 Mar 13 12:47:24.923510 master-0 kubenswrapper[7518]: I0313 12:47:24.922940 7518 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"29c709c82970b529e7b9b895aa92ef05","Type":"ContainerDied","Data":"191d6b42f790fa129a37efd43f7471d2dd1f86d99afc82c180f797e065b49aad"} Mar 13 12:47:24.923510 master-0 kubenswrapper[7518]: I0313 12:47:24.923351 7518 kubelet.go:1909] "Trying to delete pod" pod="openshift-etcd/etcd-master-0" podUID="12715056-f5d1-4df5-82a5-f0c637ce3700" Mar 13 12:47:24.923510 master-0 kubenswrapper[7518]: I0313 12:47:24.923394 7518 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-etcd/etcd-master-0" podUID="12715056-f5d1-4df5-82a5-f0c637ce3700" Mar 13 12:47:25.216771 master-0 kubenswrapper[7518]: I0313 12:47:25.216616 7518 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 13 12:47:26.196826 master-0 kubenswrapper[7518]: E0313 12:47:26.196455 7518 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status 
\"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-13T12:47:16Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-13T12:47:16Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-13T12:47:16Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-13T12:47:16Z\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"runc\\\"}]}}\" for node \"master-0\": Patch \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0/status?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 13 12:47:26.959387 master-0 kubenswrapper[7518]: I0313 12:47:26.959344 7518 generic.go:334] "Generic (PLEG): container finished" podID="45925a5e-41ae-4c19-b586-3151c7677612" containerID="1033e2108ac67b4d3f75cb158efc6594f949bbad75576abf1a2d8dbd850e968d" exitCode=0 Mar 13 12:47:26.959642 master-0 kubenswrapper[7518]: I0313 12:47:26.959437 7518 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-79f8cd6fdd-wtf6j" event={"ID":"45925a5e-41ae-4c19-b586-3151c7677612","Type":"ContainerDied","Data":"1033e2108ac67b4d3f75cb158efc6594f949bbad75576abf1a2d8dbd850e968d"} Mar 13 12:47:26.959845 master-0 kubenswrapper[7518]: I0313 12:47:26.959802 7518 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-79f8cd6fdd-wtf6j" 
event={"ID":"45925a5e-41ae-4c19-b586-3151c7677612","Type":"ContainerStarted","Data":"c4f835c09db11145ad2a4fe25a302845b3cf71bff631c2bae9c2d15853a5abe8"} Mar 13 12:47:26.959949 master-0 kubenswrapper[7518]: I0313 12:47:26.959842 7518 scope.go:117] "RemoveContainer" containerID="825d71b79346e6c336f0a44e80a86fbf2296a449b4aa734881eff9c8477a662b" Mar 13 12:47:27.147412 master-0 kubenswrapper[7518]: I0313 12:47:27.147257 7518 prober.go:107] "Probe failed" probeType="Startup" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="f78c05e1499b533b83f091333d61f045" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.32.10:10257/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 13 12:47:27.539605 master-0 kubenswrapper[7518]: I0313 12:47:27.539515 7518 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-79f8cd6fdd-wtf6j" Mar 13 12:47:27.543901 master-0 kubenswrapper[7518]: I0313 12:47:27.543824 7518 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-wtf6j container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 12:47:27.543901 master-0 kubenswrapper[7518]: [-]has-synced failed: reason withheld Mar 13 12:47:27.543901 master-0 kubenswrapper[7518]: [+]process-running ok Mar 13 12:47:27.543901 master-0 kubenswrapper[7518]: healthz check failed Mar 13 12:47:27.544259 master-0 kubenswrapper[7518]: I0313 12:47:27.543904 7518 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-wtf6j" podUID="45925a5e-41ae-4c19-b586-3151c7677612" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 12:47:28.543906 master-0 kubenswrapper[7518]: I0313 12:47:28.543818 7518 patch_prober.go:28] 
interesting pod/router-default-79f8cd6fdd-wtf6j container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 12:47:28.543906 master-0 kubenswrapper[7518]: [-]has-synced failed: reason withheld Mar 13 12:47:28.543906 master-0 kubenswrapper[7518]: [+]process-running ok Mar 13 12:47:28.543906 master-0 kubenswrapper[7518]: healthz check failed Mar 13 12:47:28.544698 master-0 kubenswrapper[7518]: I0313 12:47:28.543918 7518 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-wtf6j" podUID="45925a5e-41ae-4c19-b586-3151c7677612" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 12:47:29.539718 master-0 kubenswrapper[7518]: I0313 12:47:29.539622 7518 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-79f8cd6fdd-wtf6j" Mar 13 12:47:29.541788 master-0 kubenswrapper[7518]: I0313 12:47:29.541755 7518 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-wtf6j container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 12:47:29.541788 master-0 kubenswrapper[7518]: [-]has-synced failed: reason withheld Mar 13 12:47:29.541788 master-0 kubenswrapper[7518]: [+]process-running ok Mar 13 12:47:29.541788 master-0 kubenswrapper[7518]: healthz check failed Mar 13 12:47:29.542029 master-0 kubenswrapper[7518]: I0313 12:47:29.541814 7518 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-wtf6j" podUID="45925a5e-41ae-4c19-b586-3151c7677612" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 12:47:30.541790 master-0 kubenswrapper[7518]: I0313 12:47:30.541729 7518 patch_prober.go:28] interesting 
pod/router-default-79f8cd6fdd-wtf6j container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 12:47:30.541790 master-0 kubenswrapper[7518]: [-]has-synced failed: reason withheld Mar 13 12:47:30.541790 master-0 kubenswrapper[7518]: [+]process-running ok Mar 13 12:47:30.541790 master-0 kubenswrapper[7518]: healthz check failed Mar 13 12:47:30.542413 master-0 kubenswrapper[7518]: I0313 12:47:30.541802 7518 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-wtf6j" podUID="45925a5e-41ae-4c19-b586-3151c7677612" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 12:47:30.629915 master-0 kubenswrapper[7518]: E0313 12:47:30.629859 7518 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="400ms" Mar 13 12:47:31.542736 master-0 kubenswrapper[7518]: I0313 12:47:31.542673 7518 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-wtf6j container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 12:47:31.542736 master-0 kubenswrapper[7518]: [-]has-synced failed: reason withheld Mar 13 12:47:31.542736 master-0 kubenswrapper[7518]: [+]process-running ok Mar 13 12:47:31.542736 master-0 kubenswrapper[7518]: healthz check failed Mar 13 12:47:31.543581 master-0 kubenswrapper[7518]: I0313 12:47:31.542741 7518 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-wtf6j" podUID="45925a5e-41ae-4c19-b586-3151c7677612" containerName="router" probeResult="failure" 
output="HTTP probe failed with statuscode: 500" Mar 13 12:47:32.543048 master-0 kubenswrapper[7518]: I0313 12:47:32.542996 7518 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-wtf6j container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 12:47:32.543048 master-0 kubenswrapper[7518]: [-]has-synced failed: reason withheld Mar 13 12:47:32.543048 master-0 kubenswrapper[7518]: [+]process-running ok Mar 13 12:47:32.543048 master-0 kubenswrapper[7518]: healthz check failed Mar 13 12:47:32.543864 master-0 kubenswrapper[7518]: I0313 12:47:32.543828 7518 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-wtf6j" podUID="45925a5e-41ae-4c19-b586-3151c7677612" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 12:47:33.541301 master-0 kubenswrapper[7518]: I0313 12:47:33.541254 7518 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-wtf6j container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 12:47:33.541301 master-0 kubenswrapper[7518]: [-]has-synced failed: reason withheld Mar 13 12:47:33.541301 master-0 kubenswrapper[7518]: [+]process-running ok Mar 13 12:47:33.541301 master-0 kubenswrapper[7518]: healthz check failed Mar 13 12:47:33.541646 master-0 kubenswrapper[7518]: I0313 12:47:33.541317 7518 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-wtf6j" podUID="45925a5e-41ae-4c19-b586-3151c7677612" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 12:47:34.542005 master-0 kubenswrapper[7518]: I0313 12:47:34.541953 7518 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-wtf6j container/router 
namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 12:47:34.542005 master-0 kubenswrapper[7518]: [-]has-synced failed: reason withheld Mar 13 12:47:34.542005 master-0 kubenswrapper[7518]: [+]process-running ok Mar 13 12:47:34.542005 master-0 kubenswrapper[7518]: healthz check failed Mar 13 12:47:34.542602 master-0 kubenswrapper[7518]: I0313 12:47:34.542062 7518 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-wtf6j" podUID="45925a5e-41ae-4c19-b586-3151c7677612" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 12:47:35.541883 master-0 kubenswrapper[7518]: I0313 12:47:35.541813 7518 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-wtf6j container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 12:47:35.541883 master-0 kubenswrapper[7518]: [-]has-synced failed: reason withheld Mar 13 12:47:35.541883 master-0 kubenswrapper[7518]: [+]process-running ok Mar 13 12:47:35.541883 master-0 kubenswrapper[7518]: healthz check failed Mar 13 12:47:35.542614 master-0 kubenswrapper[7518]: I0313 12:47:35.541912 7518 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-wtf6j" podUID="45925a5e-41ae-4c19-b586-3151c7677612" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 12:47:36.197692 master-0 kubenswrapper[7518]: E0313 12:47:36.197625 7518 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 13 12:47:36.542957 master-0 
kubenswrapper[7518]: I0313 12:47:36.542855 7518 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-wtf6j container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 12:47:36.542957 master-0 kubenswrapper[7518]: [-]has-synced failed: reason withheld Mar 13 12:47:36.542957 master-0 kubenswrapper[7518]: [+]process-running ok Mar 13 12:47:36.542957 master-0 kubenswrapper[7518]: healthz check failed Mar 13 12:47:36.542957 master-0 kubenswrapper[7518]: I0313 12:47:36.542936 7518 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-wtf6j" podUID="45925a5e-41ae-4c19-b586-3151c7677612" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 12:47:37.147361 master-0 kubenswrapper[7518]: I0313 12:47:37.147262 7518 prober.go:107] "Probe failed" probeType="Startup" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="f78c05e1499b533b83f091333d61f045" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.32.10:10257/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 13 12:47:37.542652 master-0 kubenswrapper[7518]: I0313 12:47:37.542605 7518 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-wtf6j container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 12:47:37.542652 master-0 kubenswrapper[7518]: [-]has-synced failed: reason withheld Mar 13 12:47:37.542652 master-0 kubenswrapper[7518]: [+]process-running ok Mar 13 12:47:37.542652 master-0 kubenswrapper[7518]: healthz check failed Mar 13 12:47:37.542934 master-0 kubenswrapper[7518]: I0313 12:47:37.542674 7518 prober.go:107] "Probe failed" 
probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-wtf6j" podUID="45925a5e-41ae-4c19-b586-3151c7677612" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 12:47:38.543335 master-0 kubenswrapper[7518]: I0313 12:47:38.543256 7518 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-wtf6j container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 12:47:38.543335 master-0 kubenswrapper[7518]: [-]has-synced failed: reason withheld Mar 13 12:47:38.543335 master-0 kubenswrapper[7518]: [+]process-running ok Mar 13 12:47:38.543335 master-0 kubenswrapper[7518]: healthz check failed Mar 13 12:47:38.543934 master-0 kubenswrapper[7518]: I0313 12:47:38.543344 7518 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-wtf6j" podUID="45925a5e-41ae-4c19-b586-3151c7677612" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 12:47:39.543856 master-0 kubenswrapper[7518]: I0313 12:47:39.543776 7518 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-wtf6j container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 12:47:39.543856 master-0 kubenswrapper[7518]: [-]has-synced failed: reason withheld Mar 13 12:47:39.543856 master-0 kubenswrapper[7518]: [+]process-running ok Mar 13 12:47:39.543856 master-0 kubenswrapper[7518]: healthz check failed Mar 13 12:47:39.544554 master-0 kubenswrapper[7518]: I0313 12:47:39.543879 7518 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-wtf6j" podUID="45925a5e-41ae-4c19-b586-3151c7677612" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 
13 12:47:40.541021 master-0 kubenswrapper[7518]: I0313 12:47:40.540959 7518 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-wtf6j container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 12:47:40.541021 master-0 kubenswrapper[7518]: [-]has-synced failed: reason withheld Mar 13 12:47:40.541021 master-0 kubenswrapper[7518]: [+]process-running ok Mar 13 12:47:40.541021 master-0 kubenswrapper[7518]: healthz check failed Mar 13 12:47:40.541382 master-0 kubenswrapper[7518]: I0313 12:47:40.541035 7518 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-wtf6j" podUID="45925a5e-41ae-4c19-b586-3151c7677612" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 12:47:41.031636 master-0 kubenswrapper[7518]: E0313 12:47:41.031429 7518 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="800ms" Mar 13 12:47:41.542151 master-0 kubenswrapper[7518]: I0313 12:47:41.542079 7518 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-wtf6j container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 12:47:41.542151 master-0 kubenswrapper[7518]: [-]has-synced failed: reason withheld Mar 13 12:47:41.542151 master-0 kubenswrapper[7518]: [+]process-running ok Mar 13 12:47:41.542151 master-0 kubenswrapper[7518]: healthz check failed Mar 13 12:47:41.542388 master-0 kubenswrapper[7518]: I0313 12:47:41.542167 7518 prober.go:107] "Probe failed" probeType="Startup" 
pod="openshift-ingress/router-default-79f8cd6fdd-wtf6j" podUID="45925a5e-41ae-4c19-b586-3151c7677612" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 12:47:42.058710 master-0 kubenswrapper[7518]: I0313 12:47:42.058652 7518 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress-operator_ingress-operator-677db989d6-ckl2j_2f79578c-bbfb-4968-893a-730deb4c01f9/ingress-operator/3.log" Mar 13 12:47:42.059479 master-0 kubenswrapper[7518]: I0313 12:47:42.059299 7518 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress-operator_ingress-operator-677db989d6-ckl2j_2f79578c-bbfb-4968-893a-730deb4c01f9/ingress-operator/2.log" Mar 13 12:47:42.059650 master-0 kubenswrapper[7518]: I0313 12:47:42.059603 7518 generic.go:334] "Generic (PLEG): container finished" podID="2f79578c-bbfb-4968-893a-730deb4c01f9" containerID="ae4dbec7c141edff956f746a70905658efa772c8e6c87f546534e12c26343588" exitCode=1 Mar 13 12:47:42.059650 master-0 kubenswrapper[7518]: I0313 12:47:42.059643 7518 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-677db989d6-ckl2j" event={"ID":"2f79578c-bbfb-4968-893a-730deb4c01f9","Type":"ContainerDied","Data":"ae4dbec7c141edff956f746a70905658efa772c8e6c87f546534e12c26343588"} Mar 13 12:47:42.059779 master-0 kubenswrapper[7518]: I0313 12:47:42.059678 7518 scope.go:117] "RemoveContainer" containerID="4045dec19d514a7cdc11bc9584aece668967f43e77e3659c49eadc29454d9d85" Mar 13 12:47:42.061707 master-0 kubenswrapper[7518]: I0313 12:47:42.061671 7518 scope.go:117] "RemoveContainer" containerID="ae4dbec7c141edff956f746a70905658efa772c8e6c87f546534e12c26343588" Mar 13 12:47:42.062043 master-0 kubenswrapper[7518]: E0313 12:47:42.062012 7518 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ingress-operator\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ingress-operator 
pod=ingress-operator-677db989d6-ckl2j_openshift-ingress-operator(2f79578c-bbfb-4968-893a-730deb4c01f9)\"" pod="openshift-ingress-operator/ingress-operator-677db989d6-ckl2j" podUID="2f79578c-bbfb-4968-893a-730deb4c01f9" Mar 13 12:47:42.542579 master-0 kubenswrapper[7518]: I0313 12:47:42.542452 7518 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-wtf6j container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 12:47:42.542579 master-0 kubenswrapper[7518]: [-]has-synced failed: reason withheld Mar 13 12:47:42.542579 master-0 kubenswrapper[7518]: [+]process-running ok Mar 13 12:47:42.542579 master-0 kubenswrapper[7518]: healthz check failed Mar 13 12:47:42.543127 master-0 kubenswrapper[7518]: I0313 12:47:42.542582 7518 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-wtf6j" podUID="45925a5e-41ae-4c19-b586-3151c7677612" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 12:47:43.068132 master-0 kubenswrapper[7518]: I0313 12:47:43.068077 7518 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress-operator_ingress-operator-677db989d6-ckl2j_2f79578c-bbfb-4968-893a-730deb4c01f9/ingress-operator/3.log" Mar 13 12:47:43.542447 master-0 kubenswrapper[7518]: I0313 12:47:43.542387 7518 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-wtf6j container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 12:47:43.542447 master-0 kubenswrapper[7518]: [-]has-synced failed: reason withheld Mar 13 12:47:43.542447 master-0 kubenswrapper[7518]: [+]process-running ok Mar 13 12:47:43.542447 master-0 kubenswrapper[7518]: healthz check failed Mar 13 12:47:43.542696 master-0 kubenswrapper[7518]: I0313 
12:47:43.542476 7518 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-wtf6j" podUID="45925a5e-41ae-4c19-b586-3151c7677612" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 12:47:44.541573 master-0 kubenswrapper[7518]: I0313 12:47:44.541507 7518 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-wtf6j container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 12:47:44.541573 master-0 kubenswrapper[7518]: [-]has-synced failed: reason withheld Mar 13 12:47:44.541573 master-0 kubenswrapper[7518]: [+]process-running ok Mar 13 12:47:44.541573 master-0 kubenswrapper[7518]: healthz check failed Mar 13 12:47:44.541573 master-0 kubenswrapper[7518]: I0313 12:47:44.541573 7518 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-wtf6j" podUID="45925a5e-41ae-4c19-b586-3151c7677612" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 12:47:45.542882 master-0 kubenswrapper[7518]: I0313 12:47:45.542029 7518 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-wtf6j container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 12:47:45.542882 master-0 kubenswrapper[7518]: [-]has-synced failed: reason withheld Mar 13 12:47:45.542882 master-0 kubenswrapper[7518]: [+]process-running ok Mar 13 12:47:45.542882 master-0 kubenswrapper[7518]: healthz check failed Mar 13 12:47:45.543652 master-0 kubenswrapper[7518]: I0313 12:47:45.542908 7518 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-wtf6j" podUID="45925a5e-41ae-4c19-b586-3151c7677612" containerName="router" probeResult="failure" 
output="HTTP probe failed with statuscode: 500" Mar 13 12:47:46.198639 master-0 kubenswrapper[7518]: E0313 12:47:46.198570 7518 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 13 12:47:46.542398 master-0 kubenswrapper[7518]: I0313 12:47:46.542312 7518 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-wtf6j container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 12:47:46.542398 master-0 kubenswrapper[7518]: [-]has-synced failed: reason withheld Mar 13 12:47:46.542398 master-0 kubenswrapper[7518]: [+]process-running ok Mar 13 12:47:46.542398 master-0 kubenswrapper[7518]: healthz check failed Mar 13 12:47:46.542697 master-0 kubenswrapper[7518]: I0313 12:47:46.542424 7518 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-wtf6j" podUID="45925a5e-41ae-4c19-b586-3151c7677612" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 12:47:47.147330 master-0 kubenswrapper[7518]: I0313 12:47:47.147124 7518 prober.go:107] "Probe failed" probeType="Startup" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="f78c05e1499b533b83f091333d61f045" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.32.10:10257/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 13 12:47:47.148059 master-0 kubenswrapper[7518]: I0313 12:47:47.147357 7518 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 13 12:47:47.148059 master-0 
kubenswrapper[7518]: I0313 12:47:47.148000 7518 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="kube-controller-manager" containerStatusID={"Type":"cri-o","ID":"db4e89d51ac70265662c9ba63d20dfe2538e991716870f677f78b9fb028c5609"} pod="kube-system/bootstrap-kube-controller-manager-master-0" containerMessage="Container kube-controller-manager failed startup probe, will be restarted" Mar 13 12:47:47.148059 master-0 kubenswrapper[7518]: I0313 12:47:47.148052 7518 kuberuntime_container.go:808] "Killing container with a grace period" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="f78c05e1499b533b83f091333d61f045" containerName="kube-controller-manager" containerID="cri-o://db4e89d51ac70265662c9ba63d20dfe2538e991716870f677f78b9fb028c5609" gracePeriod=30 Mar 13 12:47:47.273758 master-0 kubenswrapper[7518]: E0313 12:47:47.273683 7518 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=kube-controller-manager pod=bootstrap-kube-controller-manager-master-0_kube-system(f78c05e1499b533b83f091333d61f045)\"" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="f78c05e1499b533b83f091333d61f045" Mar 13 12:47:47.541662 master-0 kubenswrapper[7518]: I0313 12:47:47.541556 7518 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-wtf6j container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 12:47:47.541662 master-0 kubenswrapper[7518]: [-]has-synced failed: reason withheld Mar 13 12:47:47.541662 master-0 kubenswrapper[7518]: [+]process-running ok Mar 13 12:47:47.541662 master-0 kubenswrapper[7518]: healthz check failed Mar 13 12:47:47.541935 master-0 kubenswrapper[7518]: I0313 12:47:47.541668 7518 prober.go:107] "Probe failed" probeType="Startup" 
pod="openshift-ingress/router-default-79f8cd6fdd-wtf6j" podUID="45925a5e-41ae-4c19-b586-3151c7677612" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 12:47:48.101081 master-0 kubenswrapper[7518]: I0313 12:47:48.101006 7518 generic.go:334] "Generic (PLEG): container finished" podID="f78c05e1499b533b83f091333d61f045" containerID="db4e89d51ac70265662c9ba63d20dfe2538e991716870f677f78b9fb028c5609" exitCode=2 Mar 13 12:47:48.101510 master-0 kubenswrapper[7518]: I0313 12:47:48.101087 7518 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-controller-manager-master-0" event={"ID":"f78c05e1499b533b83f091333d61f045","Type":"ContainerDied","Data":"db4e89d51ac70265662c9ba63d20dfe2538e991716870f677f78b9fb028c5609"} Mar 13 12:47:48.101510 master-0 kubenswrapper[7518]: I0313 12:47:48.101189 7518 scope.go:117] "RemoveContainer" containerID="92cedb7a6da80cc95a9c57731260f47e160b1a6914a8d90e1d880c6432b4086f" Mar 13 12:47:48.102028 master-0 kubenswrapper[7518]: I0313 12:47:48.101983 7518 scope.go:117] "RemoveContainer" containerID="db4e89d51ac70265662c9ba63d20dfe2538e991716870f677f78b9fb028c5609" Mar 13 12:47:48.102454 master-0 kubenswrapper[7518]: E0313 12:47:48.102413 7518 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=kube-controller-manager pod=bootstrap-kube-controller-manager-master-0_kube-system(f78c05e1499b533b83f091333d61f045)\"" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="f78c05e1499b533b83f091333d61f045" Mar 13 12:47:48.542460 master-0 kubenswrapper[7518]: I0313 12:47:48.542382 7518 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-wtf6j container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 
12:47:48.542460 master-0 kubenswrapper[7518]: [-]has-synced failed: reason withheld Mar 13 12:47:48.542460 master-0 kubenswrapper[7518]: [+]process-running ok Mar 13 12:47:48.542460 master-0 kubenswrapper[7518]: healthz check failed Mar 13 12:47:48.542460 master-0 kubenswrapper[7518]: I0313 12:47:48.542459 7518 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-wtf6j" podUID="45925a5e-41ae-4c19-b586-3151c7677612" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 12:47:49.543251 master-0 kubenswrapper[7518]: I0313 12:47:49.543187 7518 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-wtf6j container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 12:47:49.543251 master-0 kubenswrapper[7518]: [-]has-synced failed: reason withheld Mar 13 12:47:49.543251 master-0 kubenswrapper[7518]: [+]process-running ok Mar 13 12:47:49.543251 master-0 kubenswrapper[7518]: healthz check failed Mar 13 12:47:49.543910 master-0 kubenswrapper[7518]: I0313 12:47:49.543285 7518 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-wtf6j" podUID="45925a5e-41ae-4c19-b586-3151c7677612" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 12:47:50.542570 master-0 kubenswrapper[7518]: I0313 12:47:50.542511 7518 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-wtf6j container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 12:47:50.542570 master-0 kubenswrapper[7518]: [-]has-synced failed: reason withheld Mar 13 12:47:50.542570 master-0 kubenswrapper[7518]: [+]process-running ok Mar 13 12:47:50.542570 master-0 kubenswrapper[7518]: healthz 
check failed Mar 13 12:47:50.542847 master-0 kubenswrapper[7518]: I0313 12:47:50.542598 7518 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-wtf6j" podUID="45925a5e-41ae-4c19-b586-3151c7677612" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 12:47:51.542171 master-0 kubenswrapper[7518]: I0313 12:47:51.542099 7518 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-wtf6j container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 12:47:51.542171 master-0 kubenswrapper[7518]: [-]has-synced failed: reason withheld Mar 13 12:47:51.542171 master-0 kubenswrapper[7518]: [+]process-running ok Mar 13 12:47:51.542171 master-0 kubenswrapper[7518]: healthz check failed Mar 13 12:47:51.543047 master-0 kubenswrapper[7518]: I0313 12:47:51.542190 7518 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-wtf6j" podUID="45925a5e-41ae-4c19-b586-3151c7677612" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 12:47:51.832931 master-0 kubenswrapper[7518]: E0313 12:47:51.832628 7518 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="1.6s" Mar 13 12:47:52.542528 master-0 kubenswrapper[7518]: I0313 12:47:52.542455 7518 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-wtf6j container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 12:47:52.542528 master-0 kubenswrapper[7518]: [-]has-synced failed: 
reason withheld Mar 13 12:47:52.542528 master-0 kubenswrapper[7518]: [+]process-running ok Mar 13 12:47:52.542528 master-0 kubenswrapper[7518]: healthz check failed Mar 13 12:47:52.543530 master-0 kubenswrapper[7518]: I0313 12:47:52.542535 7518 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-wtf6j" podUID="45925a5e-41ae-4c19-b586-3151c7677612" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 12:47:53.342352 master-0 kubenswrapper[7518]: E0313 12:47:53.342240 7518 event.go:359] "Server rejected event (will not retry!)" err="Timeout: request did not complete within requested timeout - context deadline exceeded" event=< Mar 13 12:47:53.342352 master-0 kubenswrapper[7518]: &Event{ObjectMeta:{router-default-79f8cd6fdd-wtf6j.189c671a053e3704 openshift-ingress 11471 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-ingress,Name:router-default-79f8cd6fdd-wtf6j,UID:45925a5e-41ae-4c19-b586-3151c7677612,APIVersion:v1,ResourceVersion:10978,FieldPath:spec.containers{router},},Reason:ProbeError,Message:Startup probe error: HTTP probe failed with statuscode: 500 Mar 13 12:47:53.342352 master-0 kubenswrapper[7518]: body: [-]backend-http failed: reason withheld Mar 13 12:47:53.342352 master-0 kubenswrapper[7518]: [-]has-synced failed: reason withheld Mar 13 12:47:53.342352 master-0 kubenswrapper[7518]: [+]process-running ok Mar 13 12:47:53.342352 master-0 kubenswrapper[7518]: healthz check failed Mar 13 12:47:53.342352 master-0 kubenswrapper[7518]: Mar 13 12:47:53.342352 master-0 kubenswrapper[7518]: ,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-13 12:41:54 +0000 UTC,LastTimestamp:2026-03-13 12:46:11.541595074 +0000 UTC m=+526.174664271,Count:212,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,} Mar 
13 12:47:53.342352 master-0 kubenswrapper[7518]: >
Mar 13 12:47:53.541420 master-0 kubenswrapper[7518]: I0313 12:47:53.541318 7518 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-wtf6j container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 13 12:47:53.541420 master-0 kubenswrapper[7518]: [-]has-synced failed: reason withheld
Mar 13 12:47:53.541420 master-0 kubenswrapper[7518]: [+]process-running ok
Mar 13 12:47:53.541420 master-0 kubenswrapper[7518]: healthz check failed
Mar 13 12:47:53.541744 master-0 kubenswrapper[7518]: I0313 12:47:53.541459 7518 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-wtf6j" podUID="45925a5e-41ae-4c19-b586-3151c7677612" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 13 12:47:54.542467 master-0 kubenswrapper[7518]: I0313 12:47:54.542413 7518 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-wtf6j container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 13 12:47:54.542467 master-0 kubenswrapper[7518]: [-]has-synced failed: reason withheld
Mar 13 12:47:54.542467 master-0 kubenswrapper[7518]: [+]process-running ok
Mar 13 12:47:54.542467 master-0 kubenswrapper[7518]: healthz check failed
Mar 13 12:47:54.543093 master-0 kubenswrapper[7518]: I0313 12:47:54.542485 7518 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-wtf6j" podUID="45925a5e-41ae-4c19-b586-3151c7677612" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 13 12:47:55.542698 master-0 kubenswrapper[7518]: I0313 12:47:55.542599 7518 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-wtf6j container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 13 12:47:55.542698 master-0 kubenswrapper[7518]: [-]has-synced failed: reason withheld
Mar 13 12:47:55.542698 master-0 kubenswrapper[7518]: [+]process-running ok
Mar 13 12:47:55.542698 master-0 kubenswrapper[7518]: healthz check failed
Mar 13 12:47:55.543708 master-0 kubenswrapper[7518]: I0313 12:47:55.542700 7518 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-wtf6j" podUID="45925a5e-41ae-4c19-b586-3151c7677612" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 13 12:47:55.618256 master-0 kubenswrapper[7518]: I0313 12:47:55.599855 7518 scope.go:117] "RemoveContainer" containerID="ae4dbec7c141edff956f746a70905658efa772c8e6c87f546534e12c26343588"
Mar 13 12:47:55.618256 master-0 kubenswrapper[7518]: E0313 12:47:55.601242 7518 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ingress-operator\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ingress-operator pod=ingress-operator-677db989d6-ckl2j_openshift-ingress-operator(2f79578c-bbfb-4968-893a-730deb4c01f9)\"" pod="openshift-ingress-operator/ingress-operator-677db989d6-ckl2j" podUID="2f79578c-bbfb-4968-893a-730deb4c01f9"
Mar 13 12:47:56.199579 master-0 kubenswrapper[7518]: E0313 12:47:56.199523 7518 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Mar 13 12:47:56.542257 master-0 kubenswrapper[7518]: I0313 12:47:56.542202 7518 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-wtf6j container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 13 12:47:56.542257 master-0 kubenswrapper[7518]: [-]has-synced failed: reason withheld
Mar 13 12:47:56.542257 master-0 kubenswrapper[7518]: [+]process-running ok
Mar 13 12:47:56.542257 master-0 kubenswrapper[7518]: healthz check failed
Mar 13 12:47:56.542603 master-0 kubenswrapper[7518]: I0313 12:47:56.542290 7518 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-wtf6j" podUID="45925a5e-41ae-4c19-b586-3151c7677612" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 13 12:47:57.542102 master-0 kubenswrapper[7518]: I0313 12:47:57.542036 7518 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-wtf6j container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 13 12:47:57.542102 master-0 kubenswrapper[7518]: [-]has-synced failed: reason withheld
Mar 13 12:47:57.542102 master-0 kubenswrapper[7518]: [+]process-running ok
Mar 13 12:47:57.542102 master-0 kubenswrapper[7518]: healthz check failed
Mar 13 12:47:57.542703 master-0 kubenswrapper[7518]: I0313 12:47:57.542105 7518 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-wtf6j" podUID="45925a5e-41ae-4c19-b586-3151c7677612" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 13 12:47:58.008492 master-0 kubenswrapper[7518]: I0313 12:47:58.008324 7518 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="kube-system/bootstrap-kube-controller-manager-master-0"
Mar 13 12:47:58.009254 master-0 kubenswrapper[7518]: I0313 12:47:58.009212 7518 scope.go:117] "RemoveContainer" containerID="db4e89d51ac70265662c9ba63d20dfe2538e991716870f677f78b9fb028c5609"
Mar 13 12:47:58.009668 master-0 kubenswrapper[7518]: E0313 12:47:58.009618 7518 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=kube-controller-manager pod=bootstrap-kube-controller-manager-master-0_kube-system(f78c05e1499b533b83f091333d61f045)\"" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="f78c05e1499b533b83f091333d61f045"
Mar 13 12:47:58.541962 master-0 kubenswrapper[7518]: I0313 12:47:58.541902 7518 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-wtf6j container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 13 12:47:58.541962 master-0 kubenswrapper[7518]: [-]has-synced failed: reason withheld
Mar 13 12:47:58.541962 master-0 kubenswrapper[7518]: [+]process-running ok
Mar 13 12:47:58.541962 master-0 kubenswrapper[7518]: healthz check failed
Mar 13 12:47:58.542555 master-0 kubenswrapper[7518]: I0313 12:47:58.541975 7518 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-wtf6j" podUID="45925a5e-41ae-4c19-b586-3151c7677612" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 13 12:47:58.926683 master-0 kubenswrapper[7518]: E0313 12:47:58.926502 7518 mirror_client.go:138] "Failed deleting a mirror pod" err="Timeout: request did not complete within requested timeout - context deadline exceeded" pod="openshift-etcd/etcd-master-0"
Mar 13 12:47:59.542512 master-0 kubenswrapper[7518]: I0313 12:47:59.542433 7518 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-wtf6j container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 13 12:47:59.542512 master-0 kubenswrapper[7518]: [-]has-synced failed: reason withheld
Mar 13 12:47:59.542512 master-0 kubenswrapper[7518]: [+]process-running ok
Mar 13 12:47:59.542512 master-0 kubenswrapper[7518]: healthz check failed
Mar 13 12:47:59.543181 master-0 kubenswrapper[7518]: I0313 12:47:59.542532 7518 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-wtf6j" podUID="45925a5e-41ae-4c19-b586-3151c7677612" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 13 12:48:00.193200 master-0 kubenswrapper[7518]: I0313 12:48:00.193116 7518 generic.go:334] "Generic (PLEG): container finished" podID="29c709c82970b529e7b9b895aa92ef05" containerID="002602ae7257927c6d84d79f7abb72d049dbc2180d8e5879043fea377ec86806" exitCode=0
Mar 13 12:48:00.193200 master-0 kubenswrapper[7518]: I0313 12:48:00.193199 7518 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"29c709c82970b529e7b9b895aa92ef05","Type":"ContainerDied","Data":"002602ae7257927c6d84d79f7abb72d049dbc2180d8e5879043fea377ec86806"}
Mar 13 12:48:00.193520 master-0 kubenswrapper[7518]: I0313 12:48:00.193498 7518 kubelet.go:1909] "Trying to delete pod" pod="openshift-etcd/etcd-master-0" podUID="12715056-f5d1-4df5-82a5-f0c637ce3700"
Mar 13 12:48:00.193575 master-0 kubenswrapper[7518]: I0313 12:48:00.193522 7518 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-etcd/etcd-master-0" podUID="12715056-f5d1-4df5-82a5-f0c637ce3700"
Mar 13 12:48:00.543403 master-0 kubenswrapper[7518]: I0313 12:48:00.543339 7518 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-wtf6j container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 13 12:48:00.543403 master-0 kubenswrapper[7518]: [-]has-synced failed: reason withheld
Mar 13 12:48:00.543403 master-0 kubenswrapper[7518]: [+]process-running ok
Mar 13 12:48:00.543403 master-0 kubenswrapper[7518]: healthz check failed
Mar 13 12:48:00.543403 master-0 kubenswrapper[7518]: I0313 12:48:00.543407 7518 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-wtf6j" podUID="45925a5e-41ae-4c19-b586-3151c7677612" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 13 12:48:01.541947 master-0 kubenswrapper[7518]: I0313 12:48:01.541885 7518 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-wtf6j container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 13 12:48:01.541947 master-0 kubenswrapper[7518]: [-]has-synced failed: reason withheld
Mar 13 12:48:01.541947 master-0 kubenswrapper[7518]: [+]process-running ok
Mar 13 12:48:01.541947 master-0 kubenswrapper[7518]: healthz check failed
Mar 13 12:48:01.541947 master-0 kubenswrapper[7518]: I0313 12:48:01.541944 7518 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-wtf6j" podUID="45925a5e-41ae-4c19-b586-3151c7677612" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 13 12:48:02.542763 master-0 kubenswrapper[7518]: I0313 12:48:02.542681 7518 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-wtf6j container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 13 12:48:02.542763 master-0 kubenswrapper[7518]: [-]has-synced failed: reason withheld
Mar 13 12:48:02.542763 master-0 kubenswrapper[7518]: [+]process-running ok
Mar 13 12:48:02.542763 master-0 kubenswrapper[7518]: healthz check failed
Mar 13 12:48:02.543805 master-0 kubenswrapper[7518]: I0313 12:48:02.542811 7518 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-wtf6j" podUID="45925a5e-41ae-4c19-b586-3151c7677612" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 13 12:48:03.433674 master-0 kubenswrapper[7518]: E0313 12:48:03.433554 7518 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": context deadline exceeded" interval="3.2s"
Mar 13 12:48:03.543719 master-0 kubenswrapper[7518]: I0313 12:48:03.543654 7518 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-wtf6j container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 13 12:48:03.543719 master-0 kubenswrapper[7518]: [-]has-synced failed: reason withheld
Mar 13 12:48:03.543719 master-0 kubenswrapper[7518]: [+]process-running ok
Mar 13 12:48:03.543719 master-0 kubenswrapper[7518]: healthz check failed
Mar 13 12:48:03.544324 master-0 kubenswrapper[7518]: I0313 12:48:03.543757 7518 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-wtf6j" podUID="45925a5e-41ae-4c19-b586-3151c7677612" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 13 12:48:04.221764 master-0 kubenswrapper[7518]: I0313 12:48:04.221688 7518 generic.go:334] "Generic (PLEG): container finished" podID="d3d998ee-b26f-4e30-83bc-f94f8c68060a" containerID="de5f0e7cf4aa65e15644e5e3e9b797e70ca19a364211733911306a2f1e0bcffe" exitCode=0
Mar 13 12:48:04.221764 master-0 kubenswrapper[7518]: I0313 12:48:04.221740 7518 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-64bf9778cb-7qhr4" event={"ID":"d3d998ee-b26f-4e30-83bc-f94f8c68060a","Type":"ContainerDied","Data":"de5f0e7cf4aa65e15644e5e3e9b797e70ca19a364211733911306a2f1e0bcffe"}
Mar 13 12:48:04.221764 master-0 kubenswrapper[7518]: I0313 12:48:04.221777 7518 scope.go:117] "RemoveContainer" containerID="2678ae1f026392d01bc32426edbdfbe31df6907392fe5e29e35b3e44ffb8f896"
Mar 13 12:48:04.222393 master-0 kubenswrapper[7518]: I0313 12:48:04.222344 7518 scope.go:117] "RemoveContainer" containerID="de5f0e7cf4aa65e15644e5e3e9b797e70ca19a364211733911306a2f1e0bcffe"
Mar 13 12:48:04.222748 master-0 kubenswrapper[7518]: E0313 12:48:04.222638 7518 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"marketplace-operator\" with CrashLoopBackOff: \"back-off 10s restarting failed container=marketplace-operator pod=marketplace-operator-64bf9778cb-7qhr4_openshift-marketplace(d3d998ee-b26f-4e30-83bc-f94f8c68060a)\"" pod="openshift-marketplace/marketplace-operator-64bf9778cb-7qhr4" podUID="d3d998ee-b26f-4e30-83bc-f94f8c68060a"
Mar 13 12:48:04.542336 master-0 kubenswrapper[7518]: I0313 12:48:04.542203 7518 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-wtf6j container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 13 12:48:04.542336 master-0 kubenswrapper[7518]: [-]has-synced failed: reason withheld
Mar 13 12:48:04.542336 master-0 kubenswrapper[7518]: [+]process-running ok
Mar 13 12:48:04.542336 master-0 kubenswrapper[7518]: healthz check failed
Mar 13 12:48:04.542336 master-0 kubenswrapper[7518]: I0313 12:48:04.542312 7518 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-wtf6j" podUID="45925a5e-41ae-4c19-b586-3151c7677612" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 13 12:48:05.541729 master-0 kubenswrapper[7518]: I0313 12:48:05.541673 7518 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-wtf6j container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 13 12:48:05.541729 master-0 kubenswrapper[7518]: [-]has-synced failed: reason withheld
Mar 13 12:48:05.541729 master-0 kubenswrapper[7518]: [+]process-running ok
Mar 13 12:48:05.541729 master-0 kubenswrapper[7518]: healthz check failed
Mar 13 12:48:05.542435 master-0 kubenswrapper[7518]: I0313 12:48:05.541731 7518 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-wtf6j" podUID="45925a5e-41ae-4c19-b586-3151c7677612" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 13 12:48:06.224796 master-0 kubenswrapper[7518]: E0313 12:48:06.224726 7518 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": the server was unable to return a response in the time allotted, but may still be processing the request (get nodes master-0)"
Mar 13 12:48:06.224796 master-0 kubenswrapper[7518]: E0313 12:48:06.224769 7518 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count"
Mar 13 12:48:06.542348 master-0 kubenswrapper[7518]: I0313 12:48:06.542247 7518 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-wtf6j container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 13 12:48:06.542348 master-0 kubenswrapper[7518]: [-]has-synced failed: reason withheld
Mar 13 12:48:06.542348 master-0 kubenswrapper[7518]: [+]process-running ok
Mar 13 12:48:06.542348 master-0 kubenswrapper[7518]: healthz check failed
Mar 13 12:48:06.542348 master-0 kubenswrapper[7518]: I0313 12:48:06.542317 7518 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-wtf6j" podUID="45925a5e-41ae-4c19-b586-3151c7677612" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 13 12:48:07.542740 master-0 kubenswrapper[7518]: I0313 12:48:07.542560 7518 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-wtf6j container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 13 12:48:07.542740 master-0 kubenswrapper[7518]: [-]has-synced failed: reason withheld
Mar 13 12:48:07.542740 master-0 kubenswrapper[7518]: [+]process-running ok
Mar 13 12:48:07.542740 master-0 kubenswrapper[7518]: healthz check failed
Mar 13 12:48:07.544187 master-0 kubenswrapper[7518]: I0313 12:48:07.543093 7518 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-wtf6j" podUID="45925a5e-41ae-4c19-b586-3151c7677612" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 13 12:48:07.598830 master-0 kubenswrapper[7518]: I0313 12:48:07.598763 7518 scope.go:117] "RemoveContainer" containerID="ae4dbec7c141edff956f746a70905658efa772c8e6c87f546534e12c26343588"
Mar 13 12:48:07.599053 master-0 kubenswrapper[7518]: E0313 12:48:07.599016 7518 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ingress-operator\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ingress-operator pod=ingress-operator-677db989d6-ckl2j_openshift-ingress-operator(2f79578c-bbfb-4968-893a-730deb4c01f9)\"" pod="openshift-ingress-operator/ingress-operator-677db989d6-ckl2j" podUID="2f79578c-bbfb-4968-893a-730deb4c01f9"
Mar 13 12:48:08.541507 master-0 kubenswrapper[7518]: I0313 12:48:08.541418 7518 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-wtf6j container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 13 12:48:08.541507 master-0 kubenswrapper[7518]: [-]has-synced failed: reason withheld
Mar 13 12:48:08.541507 master-0 kubenswrapper[7518]: [+]process-running ok
Mar 13 12:48:08.541507 master-0 kubenswrapper[7518]: healthz check failed
Mar 13 12:48:08.541841 master-0 kubenswrapper[7518]: I0313 12:48:08.541503 7518 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-wtf6j" podUID="45925a5e-41ae-4c19-b586-3151c7677612" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 13 12:48:08.720924 master-0 kubenswrapper[7518]: I0313 12:48:08.720849 7518 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-marketplace/marketplace-operator-64bf9778cb-7qhr4"
Mar 13 12:48:08.720924 master-0 kubenswrapper[7518]: I0313 12:48:08.720935 7518 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-64bf9778cb-7qhr4"
Mar 13 12:48:08.721839 master-0 kubenswrapper[7518]: I0313 12:48:08.721796 7518 scope.go:117] "RemoveContainer" containerID="de5f0e7cf4aa65e15644e5e3e9b797e70ca19a364211733911306a2f1e0bcffe"
Mar 13 12:48:08.722220 master-0 kubenswrapper[7518]: E0313 12:48:08.722176 7518 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"marketplace-operator\" with CrashLoopBackOff: \"back-off 10s restarting failed container=marketplace-operator pod=marketplace-operator-64bf9778cb-7qhr4_openshift-marketplace(d3d998ee-b26f-4e30-83bc-f94f8c68060a)\"" pod="openshift-marketplace/marketplace-operator-64bf9778cb-7qhr4" podUID="d3d998ee-b26f-4e30-83bc-f94f8c68060a"
Mar 13 12:48:09.541408 master-0 kubenswrapper[7518]: I0313 12:48:09.541308 7518 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-wtf6j container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 13 12:48:09.541408 master-0 kubenswrapper[7518]: [-]has-synced failed: reason withheld
Mar 13 12:48:09.541408 master-0 kubenswrapper[7518]: [+]process-running ok
Mar 13 12:48:09.541408 master-0 kubenswrapper[7518]: healthz check failed
Mar 13 12:48:09.541762 master-0 kubenswrapper[7518]: I0313 12:48:09.541449 7518 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-wtf6j" podUID="45925a5e-41ae-4c19-b586-3151c7677612" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 13 12:48:10.541840 master-0 kubenswrapper[7518]: I0313 12:48:10.541766 7518 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-wtf6j container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 13 12:48:10.541840 master-0 kubenswrapper[7518]: [-]has-synced failed: reason withheld
Mar 13 12:48:10.541840 master-0 kubenswrapper[7518]: [+]process-running ok
Mar 13 12:48:10.541840 master-0 kubenswrapper[7518]: healthz check failed
Mar 13 12:48:10.542448 master-0 kubenswrapper[7518]: I0313 12:48:10.541844 7518 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-wtf6j" podUID="45925a5e-41ae-4c19-b586-3151c7677612" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 13 12:48:10.599315 master-0 kubenswrapper[7518]: I0313 12:48:10.599245 7518 scope.go:117] "RemoveContainer" containerID="db4e89d51ac70265662c9ba63d20dfe2538e991716870f677f78b9fb028c5609"
Mar 13 12:48:10.599891 master-0 kubenswrapper[7518]: E0313 12:48:10.599849 7518 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=kube-controller-manager pod=bootstrap-kube-controller-manager-master-0_kube-system(f78c05e1499b533b83f091333d61f045)\"" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="f78c05e1499b533b83f091333d61f045"
Mar 13 12:48:11.279214 master-0 kubenswrapper[7518]: I0313 12:48:11.279169 7518 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-storage-operator_csi-snapshot-controller-7577d6f48-pjpn2_c642c18f-f960-4418-bcb7-df884f8f8ad5/snapshot-controller/0.log"
Mar 13 12:48:11.279214 master-0 kubenswrapper[7518]: I0313 12:48:11.279214 7518 generic.go:334] "Generic (PLEG): container finished" podID="c642c18f-f960-4418-bcb7-df884f8f8ad5" containerID="5f9a44760abbfd1a103c3cb10f98bd42571ee701936731fde14d2460a8ada811" exitCode=1
Mar 13 12:48:11.279460 master-0 kubenswrapper[7518]: I0313 12:48:11.279261 7518 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/csi-snapshot-controller-7577d6f48-pjpn2" event={"ID":"c642c18f-f960-4418-bcb7-df884f8f8ad5","Type":"ContainerDied","Data":"5f9a44760abbfd1a103c3cb10f98bd42571ee701936731fde14d2460a8ada811"}
Mar 13 12:48:11.279705 master-0 kubenswrapper[7518]: I0313 12:48:11.279669 7518 scope.go:117] "RemoveContainer" containerID="5f9a44760abbfd1a103c3cb10f98bd42571ee701936731fde14d2460a8ada811"
Mar 13 12:48:11.281454 master-0 kubenswrapper[7518]: I0313 12:48:11.281425 7518 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-catalogd_catalogd-controller-manager-7f8b8b6f4c-8fjzg_00ebdf06-1f44-40cd-87e5-54195188b6d4/manager/0.log"
Mar 13 12:48:11.282021 master-0 kubenswrapper[7518]: I0313 12:48:11.281979 7518 generic.go:334] "Generic (PLEG): container finished" podID="00ebdf06-1f44-40cd-87e5-54195188b6d4" containerID="d48ca44a10dd4d84fe59c37cb0e8c494fdafd60a7b5212ea552414db0868ae46" exitCode=1
Mar 13 12:48:11.282155 master-0 kubenswrapper[7518]: I0313 12:48:11.282082 7518 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-catalogd/catalogd-controller-manager-7f8b8b6f4c-8fjzg" event={"ID":"00ebdf06-1f44-40cd-87e5-54195188b6d4","Type":"ContainerDied","Data":"d48ca44a10dd4d84fe59c37cb0e8c494fdafd60a7b5212ea552414db0868ae46"}
Mar 13 12:48:11.282790 master-0 kubenswrapper[7518]: I0313 12:48:11.282768 7518 scope.go:117] "RemoveContainer" containerID="d48ca44a10dd4d84fe59c37cb0e8c494fdafd60a7b5212ea552414db0868ae46"
Mar 13 12:48:11.283961 master-0 kubenswrapper[7518]: I0313 12:48:11.283933 7518 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operator-controller_operator-controller-controller-manager-6598bfb6c4-dv8rj_915aabfe-1071-4bfc-b291-424304dfe7d8/manager/0.log"
Mar 13 12:48:11.284021 master-0 kubenswrapper[7518]: I0313 12:48:11.283972 7518 generic.go:334] "Generic (PLEG): container finished" podID="915aabfe-1071-4bfc-b291-424304dfe7d8" containerID="ac8d5b7e2908dcba283cf9e9752ebfd8422326f0c9542918621c9dc214262a7d" exitCode=1
Mar 13 12:48:11.284021 master-0 kubenswrapper[7518]: I0313 12:48:11.283996 7518 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-controller/operator-controller-controller-manager-6598bfb6c4-dv8rj" event={"ID":"915aabfe-1071-4bfc-b291-424304dfe7d8","Type":"ContainerDied","Data":"ac8d5b7e2908dcba283cf9e9752ebfd8422326f0c9542918621c9dc214262a7d"}
Mar 13 12:48:11.284327 master-0 kubenswrapper[7518]: I0313 12:48:11.284298 7518 scope.go:117] "RemoveContainer" containerID="ac8d5b7e2908dcba283cf9e9752ebfd8422326f0c9542918621c9dc214262a7d"
Mar 13 12:48:11.318088 master-0 kubenswrapper[7518]: I0313 12:48:11.318019 7518 status_manager.go:851] "Failed to get status for pod" podUID="8f8543a5-1639-4140-a18d-8b0c96821bae" pod="openshift-kube-scheduler/installer-4-master-0" err="the server was unable to return a response in the time allotted, but may still be processing the request (get pods installer-4-master-0)"
Mar 13 12:48:11.541451 master-0 kubenswrapper[7518]: I0313 12:48:11.541392 7518 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-wtf6j container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 13 12:48:11.541451 master-0 kubenswrapper[7518]: [-]has-synced failed: reason withheld
Mar 13 12:48:11.541451 master-0 kubenswrapper[7518]: [+]process-running ok
Mar 13 12:48:11.541451 master-0 kubenswrapper[7518]: healthz check failed
Mar 13 12:48:11.541791 master-0 kubenswrapper[7518]: I0313 12:48:11.541466 7518 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-wtf6j" podUID="45925a5e-41ae-4c19-b586-3151c7677612" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 13 12:48:12.291756 master-0 kubenswrapper[7518]: I0313 12:48:12.291698 7518 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-catalogd_catalogd-controller-manager-7f8b8b6f4c-8fjzg_00ebdf06-1f44-40cd-87e5-54195188b6d4/manager/0.log"
Mar 13 12:48:12.292287 master-0 kubenswrapper[7518]: I0313 12:48:12.292176 7518 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-catalogd/catalogd-controller-manager-7f8b8b6f4c-8fjzg" event={"ID":"00ebdf06-1f44-40cd-87e5-54195188b6d4","Type":"ContainerStarted","Data":"f3aebafd727aead1eb89ac21e23de8f8d91824068ecf62219dd8bec8dae8514d"}
Mar 13 12:48:12.292697 master-0 kubenswrapper[7518]: I0313 12:48:12.292651 7518 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-catalogd/catalogd-controller-manager-7f8b8b6f4c-8fjzg"
Mar 13 12:48:12.296813 master-0 kubenswrapper[7518]: I0313 12:48:12.296778 7518 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operator-controller_operator-controller-controller-manager-6598bfb6c4-dv8rj_915aabfe-1071-4bfc-b291-424304dfe7d8/manager/0.log"
Mar 13 12:48:12.297422 master-0 kubenswrapper[7518]: I0313 12:48:12.297353 7518 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-controller/operator-controller-controller-manager-6598bfb6c4-dv8rj" event={"ID":"915aabfe-1071-4bfc-b291-424304dfe7d8","Type":"ContainerStarted","Data":"cebe49fac859ceb9501c03daba82da83ebfaf336cbc3c57a0732778a49a9f83d"}
Mar 13 12:48:12.297836 master-0 kubenswrapper[7518]: I0313 12:48:12.297799 7518 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-controller/operator-controller-controller-manager-6598bfb6c4-dv8rj"
Mar 13 12:48:12.299747 master-0 kubenswrapper[7518]: I0313 12:48:12.299728 7518 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-storage-operator_csi-snapshot-controller-7577d6f48-pjpn2_c642c18f-f960-4418-bcb7-df884f8f8ad5/snapshot-controller/0.log"
Mar 13 12:48:12.299809 master-0 kubenswrapper[7518]: I0313 12:48:12.299780 7518 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/csi-snapshot-controller-7577d6f48-pjpn2" event={"ID":"c642c18f-f960-4418-bcb7-df884f8f8ad5","Type":"ContainerStarted","Data":"870f37c47c4c47867bd607dfc7f5e2b18321f63b5705ab51a073513178f4a93d"}
Mar 13 12:48:12.542731 master-0 kubenswrapper[7518]: I0313 12:48:12.542525 7518 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-wtf6j container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 13 12:48:12.542731 master-0 kubenswrapper[7518]: [-]has-synced failed: reason withheld
Mar 13 12:48:12.542731 master-0 kubenswrapper[7518]: [+]process-running ok
Mar 13 12:48:12.542731 master-0 kubenswrapper[7518]: healthz check failed
Mar 13 12:48:12.542731 master-0 kubenswrapper[7518]: I0313 12:48:12.542679 7518 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-wtf6j" podUID="45925a5e-41ae-4c19-b586-3151c7677612" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 13 12:48:13.541902 master-0 kubenswrapper[7518]: I0313 12:48:13.541837 7518 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-wtf6j container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 13 12:48:13.541902 master-0 kubenswrapper[7518]: [-]has-synced failed: reason withheld
Mar 13 12:48:13.541902 master-0 kubenswrapper[7518]: [+]process-running ok
Mar 13 12:48:13.541902 master-0 kubenswrapper[7518]: healthz check failed
Mar 13 12:48:13.542446 master-0 kubenswrapper[7518]: I0313 12:48:13.541922 7518 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-wtf6j" podUID="45925a5e-41ae-4c19-b586-3151c7677612" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 13 12:48:14.542766 master-0 kubenswrapper[7518]: I0313 12:48:14.542684 7518 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-wtf6j container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 13 12:48:14.542766 master-0 kubenswrapper[7518]: [-]has-synced failed: reason withheld
Mar 13 12:48:14.542766 master-0 kubenswrapper[7518]: [+]process-running ok
Mar 13 12:48:14.542766 master-0 kubenswrapper[7518]: healthz check failed
Mar 13 12:48:14.542766 master-0 kubenswrapper[7518]: I0313 12:48:14.542747 7518 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-wtf6j" podUID="45925a5e-41ae-4c19-b586-3151c7677612" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 13 12:48:15.542273 master-0 kubenswrapper[7518]: I0313 12:48:15.542191 7518 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-wtf6j container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 13 12:48:15.542273 master-0 kubenswrapper[7518]: [-]has-synced failed: reason withheld
Mar 13 12:48:15.542273 master-0 kubenswrapper[7518]: [+]process-running ok
Mar 13 12:48:15.542273 master-0 kubenswrapper[7518]: healthz check failed
Mar 13 12:48:15.542624 master-0 kubenswrapper[7518]: I0313 12:48:15.542302 7518 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-wtf6j" podUID="45925a5e-41ae-4c19-b586-3151c7677612" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 13 12:48:16.542763 master-0 kubenswrapper[7518]: I0313 12:48:16.542656 7518 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-wtf6j container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 13 12:48:16.542763 master-0 kubenswrapper[7518]: [-]has-synced failed: reason withheld
Mar 13 12:48:16.542763 master-0 kubenswrapper[7518]: [+]process-running ok
Mar 13 12:48:16.542763 master-0 kubenswrapper[7518]: healthz check failed
Mar 13 12:48:16.543972 master-0 kubenswrapper[7518]: I0313 12:48:16.542764 7518 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-wtf6j" podUID="45925a5e-41ae-4c19-b586-3151c7677612" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 13 12:48:16.635982 master-0 kubenswrapper[7518]: E0313 12:48:16.635827 7518 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": context deadline exceeded" interval="6.4s"
Mar 13 12:48:17.541490 master-0 kubenswrapper[7518]: I0313 12:48:17.541425 7518 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-wtf6j container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 13 12:48:17.541490 master-0 kubenswrapper[7518]: [-]has-synced failed: reason withheld
Mar 13 12:48:17.541490 master-0 kubenswrapper[7518]: [+]process-running ok
Mar 13 12:48:17.541490 master-0 kubenswrapper[7518]: healthz check failed
Mar 13 12:48:17.541814 master-0 kubenswrapper[7518]: I0313 12:48:17.541499 7518 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-wtf6j" podUID="45925a5e-41ae-4c19-b586-3151c7677612" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 13 12:48:17.994496 master-0 kubenswrapper[7518]: I0313 12:48:17.994367 7518 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-controller/operator-controller-controller-manager-6598bfb6c4-dv8rj"
Mar 13 12:48:18.073792 master-0 kubenswrapper[7518]: I0313 12:48:18.073714 7518 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-catalogd/catalogd-controller-manager-7f8b8b6f4c-8fjzg"
Mar 13 12:48:18.541778 master-0 kubenswrapper[7518]: I0313 12:48:18.541691 7518 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-wtf6j container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 13 12:48:18.541778 master-0 kubenswrapper[7518]: [-]has-synced failed: reason withheld
Mar 13 12:48:18.541778 master-0 kubenswrapper[7518]: [+]process-running ok
Mar 13 12:48:18.541778 master-0 kubenswrapper[7518]: healthz check failed
Mar 13 12:48:18.542172 master-0 kubenswrapper[7518]: I0313 12:48:18.541822 7518 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-wtf6j" podUID="45925a5e-41ae-4c19-b586-3151c7677612" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 13 12:48:19.542221 master-0 kubenswrapper[7518]: I0313 12:48:19.542166 7518 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-wtf6j container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 13 12:48:19.542221 master-0 kubenswrapper[7518]: [-]has-synced failed: reason withheld
Mar 13 12:48:19.542221 master-0 kubenswrapper[7518]: [+]process-running ok
Mar 13 12:48:19.542221 master-0 kubenswrapper[7518]: healthz check failed
Mar 13 12:48:19.542780 master-0 kubenswrapper[7518]: I0313 12:48:19.542224 7518 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-wtf6j" podUID="45925a5e-41ae-4c19-b586-3151c7677612" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 13 12:48:19.598401 master-0 kubenswrapper[7518]: I0313 12:48:19.598356 7518 scope.go:117] "RemoveContainer" containerID="ae4dbec7c141edff956f746a70905658efa772c8e6c87f546534e12c26343588"
Mar 13 12:48:19.598666 master-0 kubenswrapper[7518]: E0313 12:48:19.598639 7518 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ingress-operator\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ingress-operator pod=ingress-operator-677db989d6-ckl2j_openshift-ingress-operator(2f79578c-bbfb-4968-893a-730deb4c01f9)\"" pod="openshift-ingress-operator/ingress-operator-677db989d6-ckl2j" podUID="2f79578c-bbfb-4968-893a-730deb4c01f9"
Mar 13 12:48:20.357460 master-0 kubenswrapper[7518]: I0313 12:48:20.357401 7518 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cloud-controller-manager-operator_cluster-cloud-controller-manager-operator-7c8df9b496-x2wlg_00d8a21b-701c-4334-9dda-34c28b417f42/config-sync-controllers/0.log"
Mar 13 12:48:20.358080 master-0 kubenswrapper[7518]: I0313 12:48:20.358023 7518 generic.go:334] "Generic (PLEG): container finished" podID="00d8a21b-701c-4334-9dda-34c28b417f42" containerID="f7bdd6f14cd7d876f03cc0e565ef27ecd2cd6f1309a345b7b4c1e4b2f6e38eb4" exitCode=1
Mar 13 12:48:20.358196 master-0 kubenswrapper[7518]: I0313 12:48:20.358082 7518 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7c8df9b496-x2wlg" event={"ID":"00d8a21b-701c-4334-9dda-34c28b417f42","Type":"ContainerDied","Data":"f7bdd6f14cd7d876f03cc0e565ef27ecd2cd6f1309a345b7b4c1e4b2f6e38eb4"}
Mar 13 12:48:20.358801 master-0 kubenswrapper[7518]: I0313 12:48:20.358765 7518 scope.go:117] "RemoveContainer" containerID="f7bdd6f14cd7d876f03cc0e565ef27ecd2cd6f1309a345b7b4c1e4b2f6e38eb4"
Mar 13 12:48:20.541443 master-0 kubenswrapper[7518]: I0313 12:48:20.541401 7518 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-wtf6j container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 13 12:48:20.541443 master-0 kubenswrapper[7518]: [-]has-synced failed: reason withheld
Mar 13 12:48:20.541443 master-0 kubenswrapper[7518]: [+]process-running ok
Mar 13 12:48:20.541443 master-0 kubenswrapper[7518]: healthz check failed
Mar 13 12:48:20.541690 master-0 kubenswrapper[7518]: I0313 12:48:20.541459 7518 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-wtf6j" podUID="45925a5e-41ae-4c19-b586-3151c7677612" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 13 12:48:20.598561 master-0 kubenswrapper[7518]: I0313
12:48:20.598509 7518 scope.go:117] "RemoveContainer" containerID="de5f0e7cf4aa65e15644e5e3e9b797e70ca19a364211733911306a2f1e0bcffe" Mar 13 12:48:21.369941 master-0 kubenswrapper[7518]: I0313 12:48:21.369899 7518 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-64bf9778cb-7qhr4" event={"ID":"d3d998ee-b26f-4e30-83bc-f94f8c68060a","Type":"ContainerStarted","Data":"78c4790a37db0aecc4528ed74c601ad2541925485752ba537b335f51dc20d5c5"} Mar 13 12:48:21.370727 master-0 kubenswrapper[7518]: I0313 12:48:21.370662 7518 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-64bf9778cb-7qhr4" Mar 13 12:48:21.372756 master-0 kubenswrapper[7518]: I0313 12:48:21.372719 7518 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cloud-controller-manager-operator_cluster-cloud-controller-manager-operator-7c8df9b496-x2wlg_00d8a21b-701c-4334-9dda-34c28b417f42/config-sync-controllers/0.log" Mar 13 12:48:21.373124 master-0 kubenswrapper[7518]: I0313 12:48:21.373099 7518 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7c8df9b496-x2wlg" event={"ID":"00d8a21b-701c-4334-9dda-34c28b417f42","Type":"ContainerStarted","Data":"fa054451f3191ee51898c18a30f606a59d02b585bd4bcdaabc18faa55201d2bb"} Mar 13 12:48:21.376070 master-0 kubenswrapper[7518]: I0313 12:48:21.376038 7518 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-64bf9778cb-7qhr4" Mar 13 12:48:21.541872 master-0 kubenswrapper[7518]: I0313 12:48:21.541799 7518 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-wtf6j container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 12:48:21.541872 master-0 kubenswrapper[7518]: [-]has-synced 
failed: reason withheld Mar 13 12:48:21.541872 master-0 kubenswrapper[7518]: [+]process-running ok Mar 13 12:48:21.541872 master-0 kubenswrapper[7518]: healthz check failed Mar 13 12:48:21.542377 master-0 kubenswrapper[7518]: I0313 12:48:21.541897 7518 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-wtf6j" podUID="45925a5e-41ae-4c19-b586-3151c7677612" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 12:48:22.542057 master-0 kubenswrapper[7518]: I0313 12:48:22.541998 7518 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-wtf6j container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 12:48:22.542057 master-0 kubenswrapper[7518]: [-]has-synced failed: reason withheld Mar 13 12:48:22.542057 master-0 kubenswrapper[7518]: [+]process-running ok Mar 13 12:48:22.542057 master-0 kubenswrapper[7518]: healthz check failed Mar 13 12:48:22.542642 master-0 kubenswrapper[7518]: I0313 12:48:22.542060 7518 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-wtf6j" podUID="45925a5e-41ae-4c19-b586-3151c7677612" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 12:48:23.541871 master-0 kubenswrapper[7518]: I0313 12:48:23.541811 7518 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-wtf6j container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 12:48:23.541871 master-0 kubenswrapper[7518]: [-]has-synced failed: reason withheld Mar 13 12:48:23.541871 master-0 kubenswrapper[7518]: [+]process-running ok Mar 13 12:48:23.541871 master-0 kubenswrapper[7518]: healthz check failed Mar 13 12:48:23.542598 master-0 
kubenswrapper[7518]: I0313 12:48:23.541894 7518 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-wtf6j" podUID="45925a5e-41ae-4c19-b586-3151c7677612" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 12:48:24.399759 master-0 kubenswrapper[7518]: I0313 12:48:24.399707 7518 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cloud-controller-manager-operator_cluster-cloud-controller-manager-operator-7c8df9b496-x2wlg_00d8a21b-701c-4334-9dda-34c28b417f42/config-sync-controllers/0.log" Mar 13 12:48:24.400416 master-0 kubenswrapper[7518]: I0313 12:48:24.400392 7518 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cloud-controller-manager-operator_cluster-cloud-controller-manager-operator-7c8df9b496-x2wlg_00d8a21b-701c-4334-9dda-34c28b417f42/cluster-cloud-controller-manager/0.log" Mar 13 12:48:24.400503 master-0 kubenswrapper[7518]: I0313 12:48:24.400443 7518 generic.go:334] "Generic (PLEG): container finished" podID="00d8a21b-701c-4334-9dda-34c28b417f42" containerID="fb3e994e087a482374a8017dea545f1ddec09a849b0d0cb7b635b7b86e084f9a" exitCode=1 Mar 13 12:48:24.400540 master-0 kubenswrapper[7518]: I0313 12:48:24.400505 7518 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7c8df9b496-x2wlg" event={"ID":"00d8a21b-701c-4334-9dda-34c28b417f42","Type":"ContainerDied","Data":"fb3e994e087a482374a8017dea545f1ddec09a849b0d0cb7b635b7b86e084f9a"} Mar 13 12:48:24.400971 master-0 kubenswrapper[7518]: I0313 12:48:24.400947 7518 scope.go:117] "RemoveContainer" containerID="fb3e994e087a482374a8017dea545f1ddec09a849b0d0cb7b635b7b86e084f9a" Mar 13 12:48:24.541477 master-0 kubenswrapper[7518]: I0313 12:48:24.541427 7518 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-wtf6j container/router namespace/openshift-ingress: Startup probe status=failure 
output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 12:48:24.541477 master-0 kubenswrapper[7518]: [-]has-synced failed: reason withheld Mar 13 12:48:24.541477 master-0 kubenswrapper[7518]: [+]process-running ok Mar 13 12:48:24.541477 master-0 kubenswrapper[7518]: healthz check failed Mar 13 12:48:24.541477 master-0 kubenswrapper[7518]: I0313 12:48:24.541483 7518 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-wtf6j" podUID="45925a5e-41ae-4c19-b586-3151c7677612" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 12:48:25.411040 master-0 kubenswrapper[7518]: I0313 12:48:25.410994 7518 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cloud-controller-manager-operator_cluster-cloud-controller-manager-operator-7c8df9b496-x2wlg_00d8a21b-701c-4334-9dda-34c28b417f42/config-sync-controllers/0.log" Mar 13 12:48:25.411892 master-0 kubenswrapper[7518]: I0313 12:48:25.411478 7518 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cloud-controller-manager-operator_cluster-cloud-controller-manager-operator-7c8df9b496-x2wlg_00d8a21b-701c-4334-9dda-34c28b417f42/cluster-cloud-controller-manager/0.log" Mar 13 12:48:25.411892 master-0 kubenswrapper[7518]: I0313 12:48:25.411534 7518 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7c8df9b496-x2wlg" event={"ID":"00d8a21b-701c-4334-9dda-34c28b417f42","Type":"ContainerStarted","Data":"78df9962c599754b764bb38cfa90e1d261be72d4afb4a62d4d1ad9cbaa09f911"} Mar 13 12:48:25.541573 master-0 kubenswrapper[7518]: I0313 12:48:25.541514 7518 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-wtf6j container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: 
reason withheld Mar 13 12:48:25.541573 master-0 kubenswrapper[7518]: [-]has-synced failed: reason withheld Mar 13 12:48:25.541573 master-0 kubenswrapper[7518]: [+]process-running ok Mar 13 12:48:25.541573 master-0 kubenswrapper[7518]: healthz check failed Mar 13 12:48:25.541573 master-0 kubenswrapper[7518]: I0313 12:48:25.541575 7518 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-wtf6j" podUID="45925a5e-41ae-4c19-b586-3151c7677612" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 12:48:25.597930 master-0 kubenswrapper[7518]: I0313 12:48:25.597874 7518 scope.go:117] "RemoveContainer" containerID="db4e89d51ac70265662c9ba63d20dfe2538e991716870f677f78b9fb028c5609" Mar 13 12:48:25.598273 master-0 kubenswrapper[7518]: E0313 12:48:25.598127 7518 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=kube-controller-manager pod=bootstrap-kube-controller-manager-master-0_kube-system(f78c05e1499b533b83f091333d61f045)\"" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="f78c05e1499b533b83f091333d61f045" Mar 13 12:48:26.446123 master-0 kubenswrapper[7518]: E0313 12:48:26.445752 7518 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status 
\"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-13T12:48:16Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-13T12:48:16Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-13T12:48:16Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-13T12:48:16Z\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"runc\\\"}]}}\" for node \"master-0\": Patch \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0/status?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 13 12:48:26.542562 master-0 kubenswrapper[7518]: I0313 12:48:26.542482 7518 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-wtf6j container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 12:48:26.542562 master-0 kubenswrapper[7518]: [-]has-synced failed: reason withheld Mar 13 12:48:26.542562 master-0 kubenswrapper[7518]: [+]process-running ok Mar 13 12:48:26.542562 master-0 kubenswrapper[7518]: healthz check failed Mar 13 12:48:26.542562 master-0 kubenswrapper[7518]: I0313 12:48:26.542546 7518 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-wtf6j" podUID="45925a5e-41ae-4c19-b586-3151c7677612" containerName="router" probeResult="failure" output="HTTP probe failed with 
statuscode: 500" Mar 13 12:48:27.347047 master-0 kubenswrapper[7518]: E0313 12:48:27.346861 7518 event.go:359] "Server rejected event (will not retry!)" err="Timeout: request did not complete within requested timeout - context deadline exceeded" event="&Event{ObjectMeta:{cni-sysctl-allowlist-ds-zhgx2.189c67576cf0309f openshift-multus 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-multus,Name:cni-sysctl-allowlist-ds-zhgx2,UID:e0bb348a-f72d-462e-aec9-04e4600cc7f0,APIVersion:v1,ResourceVersion:12403,FieldPath:spec.containers{kube-multus-additional-cni-plugins},},Reason:Unhealthy,Message:Readiness probe errored: rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-13 12:46:18.464628895 +0000 UTC m=+533.097698082,LastTimestamp:2026-03-13 12:46:18.464628895 +0000 UTC m=+533.097698082,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 13 12:48:27.542668 master-0 kubenswrapper[7518]: I0313 12:48:27.542568 7518 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-wtf6j container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 12:48:27.542668 master-0 kubenswrapper[7518]: [-]has-synced failed: reason withheld Mar 13 12:48:27.542668 master-0 kubenswrapper[7518]: [+]process-running ok Mar 13 12:48:27.542668 master-0 kubenswrapper[7518]: healthz check failed Mar 13 12:48:27.542668 master-0 kubenswrapper[7518]: I0313 12:48:27.542644 7518 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-wtf6j" podUID="45925a5e-41ae-4c19-b586-3151c7677612" containerName="router" 
probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 12:48:28.542700 master-0 kubenswrapper[7518]: I0313 12:48:28.542602 7518 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-wtf6j container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 12:48:28.542700 master-0 kubenswrapper[7518]: [-]has-synced failed: reason withheld Mar 13 12:48:28.542700 master-0 kubenswrapper[7518]: [+]process-running ok Mar 13 12:48:28.542700 master-0 kubenswrapper[7518]: healthz check failed Mar 13 12:48:28.543674 master-0 kubenswrapper[7518]: I0313 12:48:28.542727 7518 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-wtf6j" podUID="45925a5e-41ae-4c19-b586-3151c7677612" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 12:48:29.542879 master-0 kubenswrapper[7518]: I0313 12:48:29.542754 7518 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-wtf6j container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 12:48:29.542879 master-0 kubenswrapper[7518]: [-]has-synced failed: reason withheld Mar 13 12:48:29.542879 master-0 kubenswrapper[7518]: [+]process-running ok Mar 13 12:48:29.542879 master-0 kubenswrapper[7518]: healthz check failed Mar 13 12:48:29.543527 master-0 kubenswrapper[7518]: I0313 12:48:29.543089 7518 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-wtf6j" podUID="45925a5e-41ae-4c19-b586-3151c7677612" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 12:48:30.542293 master-0 kubenswrapper[7518]: I0313 12:48:30.542230 7518 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-wtf6j 
container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 12:48:30.542293 master-0 kubenswrapper[7518]: [-]has-synced failed: reason withheld Mar 13 12:48:30.542293 master-0 kubenswrapper[7518]: [+]process-running ok Mar 13 12:48:30.542293 master-0 kubenswrapper[7518]: healthz check failed Mar 13 12:48:30.542584 master-0 kubenswrapper[7518]: I0313 12:48:30.542299 7518 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-wtf6j" podUID="45925a5e-41ae-4c19-b586-3151c7677612" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 12:48:30.597872 master-0 kubenswrapper[7518]: I0313 12:48:30.597809 7518 scope.go:117] "RemoveContainer" containerID="ae4dbec7c141edff956f746a70905658efa772c8e6c87f546534e12c26343588" Mar 13 12:48:31.457174 master-0 kubenswrapper[7518]: I0313 12:48:31.457058 7518 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress-operator_ingress-operator-677db989d6-ckl2j_2f79578c-bbfb-4968-893a-730deb4c01f9/ingress-operator/3.log" Mar 13 12:48:31.458007 master-0 kubenswrapper[7518]: I0313 12:48:31.457921 7518 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-677db989d6-ckl2j" event={"ID":"2f79578c-bbfb-4968-893a-730deb4c01f9","Type":"ContainerStarted","Data":"25a4898dab96b21910d2f9f74a6d0f38ac67afd0471454539094f0cdc130c4f5"} Mar 13 12:48:31.542749 master-0 kubenswrapper[7518]: I0313 12:48:31.542663 7518 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-wtf6j container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 12:48:31.542749 master-0 kubenswrapper[7518]: [-]has-synced failed: reason withheld Mar 13 12:48:31.542749 master-0 
kubenswrapper[7518]: [+]process-running ok Mar 13 12:48:31.542749 master-0 kubenswrapper[7518]: healthz check failed Mar 13 12:48:31.543010 master-0 kubenswrapper[7518]: I0313 12:48:31.542763 7518 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-wtf6j" podUID="45925a5e-41ae-4c19-b586-3151c7677612" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 12:48:32.542436 master-0 kubenswrapper[7518]: I0313 12:48:32.542300 7518 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-wtf6j container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 12:48:32.542436 master-0 kubenswrapper[7518]: [-]has-synced failed: reason withheld Mar 13 12:48:32.542436 master-0 kubenswrapper[7518]: [+]process-running ok Mar 13 12:48:32.542436 master-0 kubenswrapper[7518]: healthz check failed Mar 13 12:48:32.543643 master-0 kubenswrapper[7518]: I0313 12:48:32.542472 7518 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-wtf6j" podUID="45925a5e-41ae-4c19-b586-3151c7677612" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 12:48:33.037417 master-0 kubenswrapper[7518]: E0313 12:48:33.037273 7518 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="7s" Mar 13 12:48:33.477849 master-0 kubenswrapper[7518]: I0313 12:48:33.477727 7518 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-6686554ddc-btz8w_747659a6-4a1e-43ed-bb8e-36da6e63b5a1/control-plane-machine-set-operator/0.log" Mar 
13 12:48:33.477849 master-0 kubenswrapper[7518]: I0313 12:48:33.477786 7518 generic.go:334] "Generic (PLEG): container finished" podID="747659a6-4a1e-43ed-bb8e-36da6e63b5a1" containerID="fa510582aea2f9e7beb06130b537cab1524760c3e6ed427ab1be5150bea793b0" exitCode=1 Mar 13 12:48:33.478092 master-0 kubenswrapper[7518]: I0313 12:48:33.477862 7518 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-6686554ddc-btz8w" event={"ID":"747659a6-4a1e-43ed-bb8e-36da6e63b5a1","Type":"ContainerDied","Data":"fa510582aea2f9e7beb06130b537cab1524760c3e6ed427ab1be5150bea793b0"} Mar 13 12:48:33.478759 master-0 kubenswrapper[7518]: I0313 12:48:33.478695 7518 scope.go:117] "RemoveContainer" containerID="fa510582aea2f9e7beb06130b537cab1524760c3e6ed427ab1be5150bea793b0" Mar 13 12:48:33.541972 master-0 kubenswrapper[7518]: I0313 12:48:33.541904 7518 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-wtf6j container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 12:48:33.541972 master-0 kubenswrapper[7518]: [-]has-synced failed: reason withheld Mar 13 12:48:33.541972 master-0 kubenswrapper[7518]: [+]process-running ok Mar 13 12:48:33.541972 master-0 kubenswrapper[7518]: healthz check failed Mar 13 12:48:33.542286 master-0 kubenswrapper[7518]: I0313 12:48:33.542013 7518 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-wtf6j" podUID="45925a5e-41ae-4c19-b586-3151c7677612" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 12:48:34.196232 master-0 kubenswrapper[7518]: E0313 12:48:34.196112 7518 mirror_client.go:138] "Failed deleting a mirror pod" err="Timeout: request did not complete within requested timeout - context deadline exceeded" pod="openshift-etcd/etcd-master-0" Mar 13 12:48:34.487165 master-0 
kubenswrapper[7518]: I0313 12:48:34.487097 7518 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-6686554ddc-btz8w_747659a6-4a1e-43ed-bb8e-36da6e63b5a1/control-plane-machine-set-operator/0.log" Mar 13 12:48:34.487328 master-0 kubenswrapper[7518]: I0313 12:48:34.487202 7518 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-6686554ddc-btz8w" event={"ID":"747659a6-4a1e-43ed-bb8e-36da6e63b5a1","Type":"ContainerStarted","Data":"d8e0c04a4e47c5cb07b8d42b3a685f8648dcc5c7626c7616fad32eec141da684"} Mar 13 12:48:34.489328 master-0 kubenswrapper[7518]: I0313 12:48:34.489291 7518 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"29c709c82970b529e7b9b895aa92ef05","Type":"ContainerStarted","Data":"7a3b4c6b1768e8d5ad64ec3d49b0ef5a758c7b08b68da0b9f9604043050a5df9"} Mar 13 12:48:34.489678 master-0 kubenswrapper[7518]: I0313 12:48:34.489643 7518 kubelet.go:1909] "Trying to delete pod" pod="openshift-etcd/etcd-master-0" podUID="12715056-f5d1-4df5-82a5-f0c637ce3700" Mar 13 12:48:34.489740 master-0 kubenswrapper[7518]: I0313 12:48:34.489685 7518 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-etcd/etcd-master-0" podUID="12715056-f5d1-4df5-82a5-f0c637ce3700" Mar 13 12:48:34.542285 master-0 kubenswrapper[7518]: I0313 12:48:34.542220 7518 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-wtf6j container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 12:48:34.542285 master-0 kubenswrapper[7518]: [-]has-synced failed: reason withheld Mar 13 12:48:34.542285 master-0 kubenswrapper[7518]: [+]process-running ok Mar 13 12:48:34.542285 master-0 kubenswrapper[7518]: healthz check failed Mar 13 12:48:34.542579 master-0 kubenswrapper[7518]: I0313 12:48:34.542331 7518 
prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-wtf6j" podUID="45925a5e-41ae-4c19-b586-3151c7677612" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 12:48:35.497428 master-0 kubenswrapper[7518]: I0313 12:48:35.497393 7518 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_cluster-baremetal-operator-5cdb4c5598-l6jp5_317af639-269e-4163-8e24-fcea468b9352/cluster-baremetal-operator/0.log" Mar 13 12:48:35.498261 master-0 kubenswrapper[7518]: I0313 12:48:35.498230 7518 generic.go:334] "Generic (PLEG): container finished" podID="317af639-269e-4163-8e24-fcea468b9352" containerID="31592103bc0b8de889024ea6d6f7d7d81a7a97c8aa34c21b276d7003e983eaa5" exitCode=1 Mar 13 12:48:35.498429 master-0 kubenswrapper[7518]: I0313 12:48:35.498317 7518 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/cluster-baremetal-operator-5cdb4c5598-l6jp5" event={"ID":"317af639-269e-4163-8e24-fcea468b9352","Type":"ContainerDied","Data":"31592103bc0b8de889024ea6d6f7d7d81a7a97c8aa34c21b276d7003e983eaa5"} Mar 13 12:48:35.500106 master-0 kubenswrapper[7518]: I0313 12:48:35.500087 7518 scope.go:117] "RemoveContainer" containerID="31592103bc0b8de889024ea6d6f7d7d81a7a97c8aa34c21b276d7003e983eaa5" Mar 13 12:48:35.500984 master-0 kubenswrapper[7518]: I0313 12:48:35.500942 7518 generic.go:334] "Generic (PLEG): container finished" podID="29c709c82970b529e7b9b895aa92ef05" containerID="7a3b4c6b1768e8d5ad64ec3d49b0ef5a758c7b08b68da0b9f9604043050a5df9" exitCode=0 Mar 13 12:48:35.501084 master-0 kubenswrapper[7518]: I0313 12:48:35.500989 7518 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"29c709c82970b529e7b9b895aa92ef05","Type":"ContainerDied","Data":"7a3b4c6b1768e8d5ad64ec3d49b0ef5a758c7b08b68da0b9f9604043050a5df9"} Mar 13 12:48:35.542051 master-0 kubenswrapper[7518]: I0313 12:48:35.541973 7518 
patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-wtf6j container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 12:48:35.542051 master-0 kubenswrapper[7518]: [-]has-synced failed: reason withheld Mar 13 12:48:35.542051 master-0 kubenswrapper[7518]: [+]process-running ok Mar 13 12:48:35.542051 master-0 kubenswrapper[7518]: healthz check failed Mar 13 12:48:35.542452 master-0 kubenswrapper[7518]: I0313 12:48:35.542064 7518 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-wtf6j" podUID="45925a5e-41ae-4c19-b586-3151c7677612" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 12:48:36.446946 master-0 kubenswrapper[7518]: E0313 12:48:36.446486 7518 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 13 12:48:36.511232 master-0 kubenswrapper[7518]: I0313 12:48:36.511125 7518 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_cluster-baremetal-operator-5cdb4c5598-l6jp5_317af639-269e-4163-8e24-fcea468b9352/cluster-baremetal-operator/0.log" Mar 13 12:48:36.511727 master-0 kubenswrapper[7518]: I0313 12:48:36.511246 7518 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/cluster-baremetal-operator-5cdb4c5598-l6jp5" event={"ID":"317af639-269e-4163-8e24-fcea468b9352","Type":"ContainerStarted","Data":"95a6bd22fb6c0c4b1137634707e1ef04230dc603ccfcb3a303f17aa2b6d154e3"} Mar 13 12:48:36.541570 master-0 kubenswrapper[7518]: I0313 12:48:36.541498 7518 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-wtf6j container/router namespace/openshift-ingress: Startup probe 
status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 12:48:36.541570 master-0 kubenswrapper[7518]: [-]has-synced failed: reason withheld Mar 13 12:48:36.541570 master-0 kubenswrapper[7518]: [+]process-running ok Mar 13 12:48:36.541570 master-0 kubenswrapper[7518]: healthz check failed Mar 13 12:48:36.541570 master-0 kubenswrapper[7518]: I0313 12:48:36.541562 7518 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-wtf6j" podUID="45925a5e-41ae-4c19-b586-3151c7677612" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 12:48:37.542239 master-0 kubenswrapper[7518]: I0313 12:48:37.542163 7518 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-wtf6j container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 12:48:37.542239 master-0 kubenswrapper[7518]: [-]has-synced failed: reason withheld Mar 13 12:48:37.542239 master-0 kubenswrapper[7518]: [+]process-running ok Mar 13 12:48:37.542239 master-0 kubenswrapper[7518]: healthz check failed Mar 13 12:48:37.543254 master-0 kubenswrapper[7518]: I0313 12:48:37.542247 7518 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-wtf6j" podUID="45925a5e-41ae-4c19-b586-3151c7677612" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 12:48:38.542725 master-0 kubenswrapper[7518]: I0313 12:48:38.542650 7518 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-wtf6j container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 12:48:38.542725 master-0 kubenswrapper[7518]: [-]has-synced failed: reason withheld Mar 13 
12:48:38.542725 master-0 kubenswrapper[7518]: [+]process-running ok Mar 13 12:48:38.542725 master-0 kubenswrapper[7518]: healthz check failed Mar 13 12:48:38.543442 master-0 kubenswrapper[7518]: I0313 12:48:38.542743 7518 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-wtf6j" podUID="45925a5e-41ae-4c19-b586-3151c7677612" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 12:48:39.546813 master-0 kubenswrapper[7518]: I0313 12:48:39.546731 7518 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-wtf6j container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 12:48:39.546813 master-0 kubenswrapper[7518]: [-]has-synced failed: reason withheld Mar 13 12:48:39.546813 master-0 kubenswrapper[7518]: [+]process-running ok Mar 13 12:48:39.546813 master-0 kubenswrapper[7518]: healthz check failed Mar 13 12:48:39.547573 master-0 kubenswrapper[7518]: I0313 12:48:39.546849 7518 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-wtf6j" podUID="45925a5e-41ae-4c19-b586-3151c7677612" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 12:48:39.599565 master-0 kubenswrapper[7518]: I0313 12:48:39.599468 7518 scope.go:117] "RemoveContainer" containerID="db4e89d51ac70265662c9ba63d20dfe2538e991716870f677f78b9fb028c5609" Mar 13 12:48:39.600044 master-0 kubenswrapper[7518]: E0313 12:48:39.599994 7518 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=kube-controller-manager pod=bootstrap-kube-controller-manager-master-0_kube-system(f78c05e1499b533b83f091333d61f045)\"" pod="kube-system/bootstrap-kube-controller-manager-master-0" 
podUID="f78c05e1499b533b83f091333d61f045" Mar 13 12:48:40.542363 master-0 kubenswrapper[7518]: I0313 12:48:40.542279 7518 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-wtf6j container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 12:48:40.542363 master-0 kubenswrapper[7518]: [-]has-synced failed: reason withheld Mar 13 12:48:40.542363 master-0 kubenswrapper[7518]: [+]process-running ok Mar 13 12:48:40.542363 master-0 kubenswrapper[7518]: healthz check failed Mar 13 12:48:40.542717 master-0 kubenswrapper[7518]: I0313 12:48:40.542386 7518 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-wtf6j" podUID="45925a5e-41ae-4c19-b586-3151c7677612" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 12:48:41.541472 master-0 kubenswrapper[7518]: I0313 12:48:41.541418 7518 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-wtf6j container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 12:48:41.541472 master-0 kubenswrapper[7518]: [-]has-synced failed: reason withheld Mar 13 12:48:41.541472 master-0 kubenswrapper[7518]: [+]process-running ok Mar 13 12:48:41.541472 master-0 kubenswrapper[7518]: healthz check failed Mar 13 12:48:41.542020 master-0 kubenswrapper[7518]: I0313 12:48:41.541491 7518 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-wtf6j" podUID="45925a5e-41ae-4c19-b586-3151c7677612" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 12:48:41.547695 master-0 kubenswrapper[7518]: I0313 12:48:41.547653 7518 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-cluster-storage-operator_csi-snapshot-controller-7577d6f48-pjpn2_c642c18f-f960-4418-bcb7-df884f8f8ad5/snapshot-controller/1.log" Mar 13 12:48:41.548226 master-0 kubenswrapper[7518]: I0313 12:48:41.548126 7518 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-storage-operator_csi-snapshot-controller-7577d6f48-pjpn2_c642c18f-f960-4418-bcb7-df884f8f8ad5/snapshot-controller/0.log" Mar 13 12:48:41.548365 master-0 kubenswrapper[7518]: I0313 12:48:41.548230 7518 generic.go:334] "Generic (PLEG): container finished" podID="c642c18f-f960-4418-bcb7-df884f8f8ad5" containerID="870f37c47c4c47867bd607dfc7f5e2b18321f63b5705ab51a073513178f4a93d" exitCode=1 Mar 13 12:48:41.548365 master-0 kubenswrapper[7518]: I0313 12:48:41.548264 7518 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/csi-snapshot-controller-7577d6f48-pjpn2" event={"ID":"c642c18f-f960-4418-bcb7-df884f8f8ad5","Type":"ContainerDied","Data":"870f37c47c4c47867bd607dfc7f5e2b18321f63b5705ab51a073513178f4a93d"} Mar 13 12:48:41.548365 master-0 kubenswrapper[7518]: I0313 12:48:41.548301 7518 scope.go:117] "RemoveContainer" containerID="5f9a44760abbfd1a103c3cb10f98bd42571ee701936731fde14d2460a8ada811" Mar 13 12:48:41.548800 master-0 kubenswrapper[7518]: I0313 12:48:41.548772 7518 scope.go:117] "RemoveContainer" containerID="870f37c47c4c47867bd607dfc7f5e2b18321f63b5705ab51a073513178f4a93d" Mar 13 12:48:41.549001 master-0 kubenswrapper[7518]: E0313 12:48:41.548971 7518 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"snapshot-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=snapshot-controller pod=csi-snapshot-controller-7577d6f48-pjpn2_openshift-cluster-storage-operator(c642c18f-f960-4418-bcb7-df884f8f8ad5)\"" pod="openshift-cluster-storage-operator/csi-snapshot-controller-7577d6f48-pjpn2" podUID="c642c18f-f960-4418-bcb7-df884f8f8ad5" Mar 13 12:48:42.542704 
master-0 kubenswrapper[7518]: I0313 12:48:42.542656 7518 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-wtf6j container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 12:48:42.542704 master-0 kubenswrapper[7518]: [-]has-synced failed: reason withheld Mar 13 12:48:42.542704 master-0 kubenswrapper[7518]: [+]process-running ok Mar 13 12:48:42.542704 master-0 kubenswrapper[7518]: healthz check failed Mar 13 12:48:42.543381 master-0 kubenswrapper[7518]: I0313 12:48:42.542722 7518 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-wtf6j" podUID="45925a5e-41ae-4c19-b586-3151c7677612" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 12:48:42.558041 master-0 kubenswrapper[7518]: I0313 12:48:42.558003 7518 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-storage-operator_csi-snapshot-controller-7577d6f48-pjpn2_c642c18f-f960-4418-bcb7-df884f8f8ad5/snapshot-controller/1.log" Mar 13 12:48:43.541370 master-0 kubenswrapper[7518]: I0313 12:48:43.541319 7518 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-wtf6j container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 12:48:43.541370 master-0 kubenswrapper[7518]: [-]has-synced failed: reason withheld Mar 13 12:48:43.541370 master-0 kubenswrapper[7518]: [+]process-running ok Mar 13 12:48:43.541370 master-0 kubenswrapper[7518]: healthz check failed Mar 13 12:48:43.541691 master-0 kubenswrapper[7518]: I0313 12:48:43.541398 7518 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-wtf6j" podUID="45925a5e-41ae-4c19-b586-3151c7677612" containerName="router" probeResult="failure" output="HTTP 
probe failed with statuscode: 500" Mar 13 12:48:44.542556 master-0 kubenswrapper[7518]: I0313 12:48:44.542457 7518 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-wtf6j container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 12:48:44.542556 master-0 kubenswrapper[7518]: [-]has-synced failed: reason withheld Mar 13 12:48:44.542556 master-0 kubenswrapper[7518]: [+]process-running ok Mar 13 12:48:44.542556 master-0 kubenswrapper[7518]: healthz check failed Mar 13 12:48:44.543058 master-0 kubenswrapper[7518]: I0313 12:48:44.542600 7518 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-wtf6j" podUID="45925a5e-41ae-4c19-b586-3151c7677612" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 12:48:45.542966 master-0 kubenswrapper[7518]: I0313 12:48:45.542880 7518 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-wtf6j container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 12:48:45.542966 master-0 kubenswrapper[7518]: [-]has-synced failed: reason withheld Mar 13 12:48:45.542966 master-0 kubenswrapper[7518]: [+]process-running ok Mar 13 12:48:45.542966 master-0 kubenswrapper[7518]: healthz check failed Mar 13 12:48:45.544176 master-0 kubenswrapper[7518]: I0313 12:48:45.542980 7518 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-wtf6j" podUID="45925a5e-41ae-4c19-b586-3151c7677612" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 12:48:45.578438 master-0 kubenswrapper[7518]: I0313 12:48:45.578380 7518 generic.go:334] "Generic (PLEG): container finished" podID="5ae41cff-0949-47f8-aae9-ae133191476d" 
containerID="2a4481a18e7aed734ae4a2d67eeeb008d6aeba24bc7223a49b0d6a3791cd0e5c" exitCode=0 Mar 13 12:48:45.578685 master-0 kubenswrapper[7518]: I0313 12:48:45.578478 7518 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-66b55d57d-5cww5" event={"ID":"5ae41cff-0949-47f8-aae9-ae133191476d","Type":"ContainerDied","Data":"2a4481a18e7aed734ae4a2d67eeeb008d6aeba24bc7223a49b0d6a3791cd0e5c"} Mar 13 12:48:45.579374 master-0 kubenswrapper[7518]: I0313 12:48:45.579345 7518 scope.go:117] "RemoveContainer" containerID="2a4481a18e7aed734ae4a2d67eeeb008d6aeba24bc7223a49b0d6a3791cd0e5c" Mar 13 12:48:45.580919 master-0 kubenswrapper[7518]: I0313 12:48:45.580882 7518 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-machine-approver_machine-approver-754bdc9f9d-cwl2p_b12a6f33-70df-4832-ac3b-0d2b94125fbf/machine-approver-controller/0.log" Mar 13 12:48:45.581411 master-0 kubenswrapper[7518]: I0313 12:48:45.581371 7518 generic.go:334] "Generic (PLEG): container finished" podID="b12a6f33-70df-4832-ac3b-0d2b94125fbf" containerID="bf350ea0de070f0fd26919325b63ec00154a2596f691d915b23dc9183ce79b89" exitCode=255 Mar 13 12:48:45.581411 master-0 kubenswrapper[7518]: I0313 12:48:45.581404 7518 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-754bdc9f9d-cwl2p" event={"ID":"b12a6f33-70df-4832-ac3b-0d2b94125fbf","Type":"ContainerDied","Data":"bf350ea0de070f0fd26919325b63ec00154a2596f691d915b23dc9183ce79b89"} Mar 13 12:48:45.581814 master-0 kubenswrapper[7518]: I0313 12:48:45.581777 7518 scope.go:117] "RemoveContainer" containerID="bf350ea0de070f0fd26919325b63ec00154a2596f691d915b23dc9183ce79b89" Mar 13 12:48:46.447211 master-0 kubenswrapper[7518]: E0313 12:48:46.447162 7518 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": 
net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 13 12:48:46.573155 master-0 kubenswrapper[7518]: I0313 12:48:46.573053 7518 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-wtf6j container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 12:48:46.573155 master-0 kubenswrapper[7518]: [-]has-synced failed: reason withheld Mar 13 12:48:46.573155 master-0 kubenswrapper[7518]: [+]process-running ok Mar 13 12:48:46.573155 master-0 kubenswrapper[7518]: healthz check failed Mar 13 12:48:46.573696 master-0 kubenswrapper[7518]: I0313 12:48:46.573169 7518 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-wtf6j" podUID="45925a5e-41ae-4c19-b586-3151c7677612" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 12:48:46.590541 master-0 kubenswrapper[7518]: I0313 12:48:46.590502 7518 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-machine-approver_machine-approver-754bdc9f9d-cwl2p_b12a6f33-70df-4832-ac3b-0d2b94125fbf/machine-approver-controller/0.log" Mar 13 12:48:46.590962 master-0 kubenswrapper[7518]: I0313 12:48:46.590918 7518 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-754bdc9f9d-cwl2p" event={"ID":"b12a6f33-70df-4832-ac3b-0d2b94125fbf","Type":"ContainerStarted","Data":"ec1158c9d676ef1ac8ec1c9b7124bb871c4d5882c3ea6b0e56accd0e867afdc9"} Mar 13 12:48:46.595675 master-0 kubenswrapper[7518]: I0313 12:48:46.595606 7518 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-66b55d57d-5cww5" event={"ID":"5ae41cff-0949-47f8-aae9-ae133191476d","Type":"ContainerStarted","Data":"73ea253081d2cdb625cd3d1a9ecfc20e8bab93f5070fa553a22b992fb346c21a"} Mar 13 12:48:47.542471 master-0 
kubenswrapper[7518]: I0313 12:48:47.542388 7518 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-wtf6j container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 12:48:47.542471 master-0 kubenswrapper[7518]: [-]has-synced failed: reason withheld Mar 13 12:48:47.542471 master-0 kubenswrapper[7518]: [+]process-running ok Mar 13 12:48:47.542471 master-0 kubenswrapper[7518]: healthz check failed Mar 13 12:48:47.542869 master-0 kubenswrapper[7518]: I0313 12:48:47.542484 7518 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-wtf6j" podUID="45925a5e-41ae-4c19-b586-3151c7677612" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 12:48:47.609902 master-0 kubenswrapper[7518]: I0313 12:48:47.609835 7518 generic.go:334] "Generic (PLEG): container finished" podID="a454234a-6c8e-4916-81e8-c9e66cec9d31" containerID="f12fef74127c1c2b2f8ceb210e754cc92619ab36c1f145fe9d244f8d84cfb88c" exitCode=0 Mar 13 12:48:47.609902 master-0 kubenswrapper[7518]: I0313 12:48:47.609893 7518 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-54c79cbfcc-cxhmh" event={"ID":"a454234a-6c8e-4916-81e8-c9e66cec9d31","Type":"ContainerDied","Data":"f12fef74127c1c2b2f8ceb210e754cc92619ab36c1f145fe9d244f8d84cfb88c"} Mar 13 12:48:47.610457 master-0 kubenswrapper[7518]: I0313 12:48:47.610424 7518 scope.go:117] "RemoveContainer" containerID="f12fef74127c1c2b2f8ceb210e754cc92619ab36c1f145fe9d244f8d84cfb88c" Mar 13 12:48:48.541771 master-0 kubenswrapper[7518]: I0313 12:48:48.541668 7518 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-wtf6j container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 
12:48:48.541771 master-0 kubenswrapper[7518]: [-]has-synced failed: reason withheld Mar 13 12:48:48.541771 master-0 kubenswrapper[7518]: [+]process-running ok Mar 13 12:48:48.541771 master-0 kubenswrapper[7518]: healthz check failed Mar 13 12:48:48.541771 master-0 kubenswrapper[7518]: I0313 12:48:48.541741 7518 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-wtf6j" podUID="45925a5e-41ae-4c19-b586-3151c7677612" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 12:48:48.628050 master-0 kubenswrapper[7518]: I0313 12:48:48.628001 7518 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-54c79cbfcc-cxhmh" event={"ID":"a454234a-6c8e-4916-81e8-c9e66cec9d31","Type":"ContainerStarted","Data":"338937b0ebb757bdee738361c73af8d323aeef4fa0eb7edfc9e3a14cb3dcc3f8"} Mar 13 12:48:48.628590 master-0 kubenswrapper[7518]: I0313 12:48:48.628342 7518 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-54c79cbfcc-cxhmh" Mar 13 12:48:48.634537 master-0 kubenswrapper[7518]: I0313 12:48:48.634484 7518 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-54c79cbfcc-cxhmh" Mar 13 12:48:49.541831 master-0 kubenswrapper[7518]: I0313 12:48:49.541725 7518 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-wtf6j container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 12:48:49.541831 master-0 kubenswrapper[7518]: [-]has-synced failed: reason withheld Mar 13 12:48:49.541831 master-0 kubenswrapper[7518]: [+]process-running ok Mar 13 12:48:49.541831 master-0 kubenswrapper[7518]: healthz check failed Mar 13 12:48:49.542445 master-0 kubenswrapper[7518]: I0313 12:48:49.541841 7518 
prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-wtf6j" podUID="45925a5e-41ae-4c19-b586-3151c7677612" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 12:48:49.639999 master-0 kubenswrapper[7518]: I0313 12:48:49.639965 7518 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operator-lifecycle-manager_package-server-manager-854648ff6d-669qk_3d653e1a-5903-4a02-9357-df145f028c0d/package-server-manager/0.log" Mar 13 12:48:49.641082 master-0 kubenswrapper[7518]: I0313 12:48:49.641030 7518 generic.go:334] "Generic (PLEG): container finished" podID="3d653e1a-5903-4a02-9357-df145f028c0d" containerID="baf23d87752ea57aa0879a0f3cabb3d54da65ab6c1d69c34a044b8dc1883ed70" exitCode=1 Mar 13 12:48:49.641369 master-0 kubenswrapper[7518]: I0313 12:48:49.641319 7518 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-854648ff6d-669qk" event={"ID":"3d653e1a-5903-4a02-9357-df145f028c0d","Type":"ContainerDied","Data":"baf23d87752ea57aa0879a0f3cabb3d54da65ab6c1d69c34a044b8dc1883ed70"} Mar 13 12:48:49.642515 master-0 kubenswrapper[7518]: I0313 12:48:49.642482 7518 scope.go:117] "RemoveContainer" containerID="baf23d87752ea57aa0879a0f3cabb3d54da65ab6c1d69c34a044b8dc1883ed70" Mar 13 12:48:50.038541 master-0 kubenswrapper[7518]: E0313 12:48:50.038450 7518 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="7s" Mar 13 12:48:50.542460 master-0 kubenswrapper[7518]: I0313 12:48:50.542389 7518 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-wtf6j container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" 
start-of-body=[-]backend-http failed: reason withheld Mar 13 12:48:50.542460 master-0 kubenswrapper[7518]: [-]has-synced failed: reason withheld Mar 13 12:48:50.542460 master-0 kubenswrapper[7518]: [+]process-running ok Mar 13 12:48:50.542460 master-0 kubenswrapper[7518]: healthz check failed Mar 13 12:48:50.542737 master-0 kubenswrapper[7518]: I0313 12:48:50.542479 7518 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-wtf6j" podUID="45925a5e-41ae-4c19-b586-3151c7677612" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 12:48:50.650750 master-0 kubenswrapper[7518]: I0313 12:48:50.650689 7518 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operator-lifecycle-manager_package-server-manager-854648ff6d-669qk_3d653e1a-5903-4a02-9357-df145f028c0d/package-server-manager/0.log" Mar 13 12:48:50.651453 master-0 kubenswrapper[7518]: I0313 12:48:50.651242 7518 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-854648ff6d-669qk" event={"ID":"3d653e1a-5903-4a02-9357-df145f028c0d","Type":"ContainerStarted","Data":"a3d772c8ccc797a519e70556f1d3e8c962c4b17bd23c03c38914ec1c221f13ed"} Mar 13 12:48:50.651561 master-0 kubenswrapper[7518]: I0313 12:48:50.651516 7518 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/package-server-manager-854648ff6d-669qk" Mar 13 12:48:51.542580 master-0 kubenswrapper[7518]: I0313 12:48:51.542507 7518 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-wtf6j container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 12:48:51.542580 master-0 kubenswrapper[7518]: [-]has-synced failed: reason withheld Mar 13 12:48:51.542580 master-0 kubenswrapper[7518]: [+]process-running ok Mar 13 
12:48:51.542580 master-0 kubenswrapper[7518]: healthz check failed Mar 13 12:48:51.542870 master-0 kubenswrapper[7518]: I0313 12:48:51.542607 7518 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-wtf6j" podUID="45925a5e-41ae-4c19-b586-3151c7677612" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 12:48:51.598877 master-0 kubenswrapper[7518]: I0313 12:48:51.598766 7518 scope.go:117] "RemoveContainer" containerID="db4e89d51ac70265662c9ba63d20dfe2538e991716870f677f78b9fb028c5609" Mar 13 12:48:51.599408 master-0 kubenswrapper[7518]: E0313 12:48:51.599337 7518 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=kube-controller-manager pod=bootstrap-kube-controller-manager-master-0_kube-system(f78c05e1499b533b83f091333d61f045)\"" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="f78c05e1499b533b83f091333d61f045" Mar 13 12:48:52.542050 master-0 kubenswrapper[7518]: I0313 12:48:52.541959 7518 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-wtf6j container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 12:48:52.542050 master-0 kubenswrapper[7518]: [-]has-synced failed: reason withheld Mar 13 12:48:52.542050 master-0 kubenswrapper[7518]: [+]process-running ok Mar 13 12:48:52.542050 master-0 kubenswrapper[7518]: healthz check failed Mar 13 12:48:52.542050 master-0 kubenswrapper[7518]: I0313 12:48:52.542020 7518 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-wtf6j" podUID="45925a5e-41ae-4c19-b586-3151c7677612" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 12:48:53.543650 master-0 
kubenswrapper[7518]: I0313 12:48:53.543580 7518 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-wtf6j container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 12:48:53.543650 master-0 kubenswrapper[7518]: [-]has-synced failed: reason withheld Mar 13 12:48:53.543650 master-0 kubenswrapper[7518]: [+]process-running ok Mar 13 12:48:53.543650 master-0 kubenswrapper[7518]: healthz check failed Mar 13 12:48:53.544196 master-0 kubenswrapper[7518]: I0313 12:48:53.543687 7518 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-wtf6j" podUID="45925a5e-41ae-4c19-b586-3151c7677612" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 12:48:54.542620 master-0 kubenswrapper[7518]: I0313 12:48:54.542496 7518 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-wtf6j container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 12:48:54.542620 master-0 kubenswrapper[7518]: [-]has-synced failed: reason withheld Mar 13 12:48:54.542620 master-0 kubenswrapper[7518]: [+]process-running ok Mar 13 12:48:54.542620 master-0 kubenswrapper[7518]: healthz check failed Mar 13 12:48:54.542620 master-0 kubenswrapper[7518]: I0313 12:48:54.542593 7518 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-wtf6j" podUID="45925a5e-41ae-4c19-b586-3151c7677612" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 12:48:55.542974 master-0 kubenswrapper[7518]: I0313 12:48:55.542880 7518 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-wtf6j container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed 
with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 12:48:55.542974 master-0 kubenswrapper[7518]: [-]has-synced failed: reason withheld Mar 13 12:48:55.542974 master-0 kubenswrapper[7518]: [+]process-running ok Mar 13 12:48:55.542974 master-0 kubenswrapper[7518]: healthz check failed Mar 13 12:48:55.542974 master-0 kubenswrapper[7518]: I0313 12:48:55.542973 7518 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-wtf6j" podUID="45925a5e-41ae-4c19-b586-3151c7677612" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 12:48:56.448404 master-0 kubenswrapper[7518]: E0313 12:48:56.448270 7518 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 13 12:48:56.541948 master-0 kubenswrapper[7518]: I0313 12:48:56.541904 7518 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-wtf6j container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 12:48:56.541948 master-0 kubenswrapper[7518]: [-]has-synced failed: reason withheld Mar 13 12:48:56.541948 master-0 kubenswrapper[7518]: [+]process-running ok Mar 13 12:48:56.541948 master-0 kubenswrapper[7518]: healthz check failed Mar 13 12:48:56.542412 master-0 kubenswrapper[7518]: I0313 12:48:56.542381 7518 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-wtf6j" podUID="45925a5e-41ae-4c19-b586-3151c7677612" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 12:48:56.598008 master-0 kubenswrapper[7518]: I0313 12:48:56.597908 7518 scope.go:117] "RemoveContainer" 
containerID="870f37c47c4c47867bd607dfc7f5e2b18321f63b5705ab51a073513178f4a93d" Mar 13 12:48:57.542056 master-0 kubenswrapper[7518]: I0313 12:48:57.541988 7518 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-wtf6j container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 12:48:57.542056 master-0 kubenswrapper[7518]: [-]has-synced failed: reason withheld Mar 13 12:48:57.542056 master-0 kubenswrapper[7518]: [+]process-running ok Mar 13 12:48:57.542056 master-0 kubenswrapper[7518]: healthz check failed Mar 13 12:48:57.542465 master-0 kubenswrapper[7518]: I0313 12:48:57.542088 7518 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-wtf6j" podUID="45925a5e-41ae-4c19-b586-3151c7677612" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 12:48:57.720609 master-0 kubenswrapper[7518]: I0313 12:48:57.720542 7518 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-storage-operator_csi-snapshot-controller-7577d6f48-pjpn2_c642c18f-f960-4418-bcb7-df884f8f8ad5/snapshot-controller/1.log" Mar 13 12:48:57.721192 master-0 kubenswrapper[7518]: I0313 12:48:57.720649 7518 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/csi-snapshot-controller-7577d6f48-pjpn2" event={"ID":"c642c18f-f960-4418-bcb7-df884f8f8ad5","Type":"ContainerStarted","Data":"66bce1ffc4c0b981e45e9808ac9c5d4b5f8590e65596840ae0d2123b61b50990"} Mar 13 12:48:58.543410 master-0 kubenswrapper[7518]: I0313 12:48:58.543321 7518 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-wtf6j container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 12:48:58.543410 master-0 kubenswrapper[7518]: 
[-]has-synced failed: reason withheld Mar 13 12:48:58.543410 master-0 kubenswrapper[7518]: [+]process-running ok Mar 13 12:48:58.543410 master-0 kubenswrapper[7518]: healthz check failed Mar 13 12:48:58.543897 master-0 kubenswrapper[7518]: I0313 12:48:58.543420 7518 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-wtf6j" podUID="45925a5e-41ae-4c19-b586-3151c7677612" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 12:48:59.542499 master-0 kubenswrapper[7518]: I0313 12:48:59.542373 7518 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-wtf6j container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 12:48:59.542499 master-0 kubenswrapper[7518]: [-]has-synced failed: reason withheld Mar 13 12:48:59.542499 master-0 kubenswrapper[7518]: [+]process-running ok Mar 13 12:48:59.542499 master-0 kubenswrapper[7518]: healthz check failed Mar 13 12:48:59.542499 master-0 kubenswrapper[7518]: I0313 12:48:59.542469 7518 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-wtf6j" podUID="45925a5e-41ae-4c19-b586-3151c7677612" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 12:49:00.542505 master-0 kubenswrapper[7518]: I0313 12:49:00.542428 7518 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-wtf6j container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 12:49:00.542505 master-0 kubenswrapper[7518]: [-]has-synced failed: reason withheld Mar 13 12:49:00.542505 master-0 kubenswrapper[7518]: [+]process-running ok Mar 13 12:49:00.542505 master-0 kubenswrapper[7518]: healthz check failed Mar 13 12:49:00.543667 master-0 
kubenswrapper[7518]: I0313 12:49:00.542539 7518 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-wtf6j" podUID="45925a5e-41ae-4c19-b586-3151c7677612" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 13 12:49:01.351707 master-0 kubenswrapper[7518]: E0313 12:49:01.351498 7518 event.go:359] "Server rejected event (will not retry!)" err="Timeout: request did not complete within requested timeout - context deadline exceeded" event="&Event{ObjectMeta:{bootstrap-kube-controller-manager-master-0.189c6758ff62fb70 kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:bootstrap-kube-controller-manager-master-0,UID:f78c05e1499b533b83f091333d61f045,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager},},Reason:Unhealthy,Message:Readiness probe failed: Get \"https://192.168.32.10:10257/healthz\": dial tcp 192.168.32.10:10257: connect: connection refused,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-13 12:46:25.216592752 +0000 UTC m=+539.849661979,LastTimestamp:2026-03-13 12:46:25.216592752 +0000 UTC m=+539.849661979,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Mar 13 12:49:01.541986 master-0 kubenswrapper[7518]: I0313 12:49:01.541888 7518 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-wtf6j container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 13 12:49:01.541986 master-0 kubenswrapper[7518]: [-]has-synced failed: reason withheld
Mar 13 12:49:01.541986 master-0 kubenswrapper[7518]: [+]process-running ok
Mar 13 12:49:01.541986 master-0 kubenswrapper[7518]: healthz check failed
Mar 13 12:49:01.542491 master-0 kubenswrapper[7518]: I0313 12:49:01.542007 7518 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-wtf6j" podUID="45925a5e-41ae-4c19-b586-3151c7677612" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 13 12:49:02.541756 master-0 kubenswrapper[7518]: I0313 12:49:02.541688 7518 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-wtf6j container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 13 12:49:02.541756 master-0 kubenswrapper[7518]: [-]has-synced failed: reason withheld
Mar 13 12:49:02.541756 master-0 kubenswrapper[7518]: [+]process-running ok
Mar 13 12:49:02.541756 master-0 kubenswrapper[7518]: healthz check failed
Mar 13 12:49:02.542368 master-0 kubenswrapper[7518]: I0313 12:49:02.541781 7518 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-wtf6j" podUID="45925a5e-41ae-4c19-b586-3151c7677612" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 13 12:49:02.599942 master-0 kubenswrapper[7518]: I0313 12:49:02.599880 7518 scope.go:117] "RemoveContainer" containerID="db4e89d51ac70265662c9ba63d20dfe2538e991716870f677f78b9fb028c5609"
Mar 13 12:49:02.600424 master-0 kubenswrapper[7518]: E0313 12:49:02.600380 7518 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=kube-controller-manager pod=bootstrap-kube-controller-manager-master-0_kube-system(f78c05e1499b533b83f091333d61f045)\"" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="f78c05e1499b533b83f091333d61f045"
Mar 13 12:49:03.542656 master-0 kubenswrapper[7518]: I0313 12:49:03.542545 7518 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-wtf6j container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 13 12:49:03.542656 master-0 kubenswrapper[7518]: [-]has-synced failed: reason withheld
Mar 13 12:49:03.542656 master-0 kubenswrapper[7518]: [+]process-running ok
Mar 13 12:49:03.542656 master-0 kubenswrapper[7518]: healthz check failed
Mar 13 12:49:03.543986 master-0 kubenswrapper[7518]: I0313 12:49:03.542656 7518 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-wtf6j" podUID="45925a5e-41ae-4c19-b586-3151c7677612" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 13 12:49:04.543862 master-0 kubenswrapper[7518]: I0313 12:49:04.543799 7518 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-wtf6j container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 13 12:49:04.543862 master-0 kubenswrapper[7518]: [-]has-synced failed: reason withheld
Mar 13 12:49:04.543862 master-0 kubenswrapper[7518]: [+]process-running ok
Mar 13 12:49:04.543862 master-0 kubenswrapper[7518]: healthz check failed
Mar 13 12:49:04.544423 master-0 kubenswrapper[7518]: I0313 12:49:04.543873 7518 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-wtf6j" podUID="45925a5e-41ae-4c19-b586-3151c7677612" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 13 12:49:05.543417 master-0 kubenswrapper[7518]: I0313 12:49:05.543280 7518 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-wtf6j container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 13 12:49:05.543417 master-0 kubenswrapper[7518]: [-]has-synced failed: reason withheld
Mar 13 12:49:05.543417 master-0 kubenswrapper[7518]: [+]process-running ok
Mar 13 12:49:05.543417 master-0 kubenswrapper[7518]: healthz check failed
Mar 13 12:49:05.543417 master-0 kubenswrapper[7518]: I0313 12:49:05.543363 7518 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-wtf6j" podUID="45925a5e-41ae-4c19-b586-3151c7677612" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 13 12:49:06.449515 master-0 kubenswrapper[7518]: E0313 12:49:06.449343 7518 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Mar 13 12:49:06.449790 master-0 kubenswrapper[7518]: E0313 12:49:06.449390 7518 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count"
Mar 13 12:49:06.550410 master-0 kubenswrapper[7518]: I0313 12:49:06.550339 7518 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-wtf6j container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 13 12:49:06.550410 master-0 kubenswrapper[7518]: [-]has-synced failed: reason withheld
Mar 13 12:49:06.550410 master-0 kubenswrapper[7518]: [+]process-running ok
Mar 13 12:49:06.550410 master-0 kubenswrapper[7518]: healthz check failed
Mar 13 12:49:06.551102 master-0 kubenswrapper[7518]: I0313 12:49:06.550418 7518 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-wtf6j" podUID="45925a5e-41ae-4c19-b586-3151c7677612" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 13 12:49:07.040342 master-0 kubenswrapper[7518]: E0313 12:49:07.040250 7518 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="7s"
Mar 13 12:49:07.541497 master-0 kubenswrapper[7518]: I0313 12:49:07.541389 7518 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-wtf6j container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 13 12:49:07.541497 master-0 kubenswrapper[7518]: [-]has-synced failed: reason withheld
Mar 13 12:49:07.541497 master-0 kubenswrapper[7518]: [+]process-running ok
Mar 13 12:49:07.541497 master-0 kubenswrapper[7518]: healthz check failed
Mar 13 12:49:07.541917 master-0 kubenswrapper[7518]: I0313 12:49:07.541499 7518 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-wtf6j" podUID="45925a5e-41ae-4c19-b586-3151c7677612" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 13 12:49:08.492695 master-0 kubenswrapper[7518]: E0313 12:49:08.492630 7518 mirror_client.go:138] "Failed deleting a mirror pod" err="Timeout: request did not complete within requested timeout - context deadline exceeded" pod="openshift-etcd/etcd-master-0"
Mar 13 12:49:08.541363 master-0 kubenswrapper[7518]: I0313 12:49:08.541279 7518 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-wtf6j container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 13 12:49:08.541363 master-0 kubenswrapper[7518]: [-]has-synced failed: reason withheld
Mar 13 12:49:08.541363 master-0 kubenswrapper[7518]: [+]process-running ok
Mar 13 12:49:08.541363 master-0 kubenswrapper[7518]: healthz check failed
Mar 13 12:49:08.541745 master-0 kubenswrapper[7518]: I0313 12:49:08.541371 7518 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-wtf6j" podUID="45925a5e-41ae-4c19-b586-3151c7677612" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 13 12:49:08.800971 master-0 kubenswrapper[7518]: I0313 12:49:08.800893 7518 kubelet.go:1909] "Trying to delete pod" pod="openshift-etcd/etcd-master-0" podUID="12715056-f5d1-4df5-82a5-f0c637ce3700"
Mar 13 12:49:08.800971 master-0 kubenswrapper[7518]: I0313 12:49:08.800943 7518 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-etcd/etcd-master-0" podUID="12715056-f5d1-4df5-82a5-f0c637ce3700"
Mar 13 12:49:09.542274 master-0 kubenswrapper[7518]: I0313 12:49:09.542177 7518 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-wtf6j container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 13 12:49:09.542274 master-0 kubenswrapper[7518]: [-]has-synced failed: reason withheld
Mar 13 12:49:09.542274 master-0 kubenswrapper[7518]: [+]process-running ok
Mar 13 12:49:09.542274 master-0 kubenswrapper[7518]: healthz check failed
Mar 13 12:49:09.542274 master-0 kubenswrapper[7518]: I0313 12:49:09.542264 7518 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-wtf6j" podUID="45925a5e-41ae-4c19-b586-3151c7677612" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 13 12:49:10.541937 master-0 kubenswrapper[7518]: I0313 12:49:10.541890 7518 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-wtf6j container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 13 12:49:10.541937 master-0 kubenswrapper[7518]: [-]has-synced failed: reason withheld
Mar 13 12:49:10.541937 master-0 kubenswrapper[7518]: [+]process-running ok
Mar 13 12:49:10.541937 master-0 kubenswrapper[7518]: healthz check failed
Mar 13 12:49:10.542215 master-0 kubenswrapper[7518]: I0313 12:49:10.541961 7518 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-wtf6j" podUID="45925a5e-41ae-4c19-b586-3151c7677612" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 13 12:49:11.319740 master-0 kubenswrapper[7518]: I0313 12:49:11.319644 7518 status_manager.go:851] "Failed to get status for pod" podUID="1f43b4e7-5cd1-46d2-a02e-0d846b2e5182" pod="openshift-network-node-identity/network-node-identity-qg8q5" err="the server was unable to return a response in the time allotted, but may still be processing the request (get pods network-node-identity-qg8q5)"
Mar 13 12:49:11.542037 master-0 kubenswrapper[7518]: I0313 12:49:11.541967 7518 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-wtf6j container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 13 12:49:11.542037 master-0 kubenswrapper[7518]: [-]has-synced failed: reason withheld
Mar 13 12:49:11.542037 master-0 kubenswrapper[7518]: [+]process-running ok
Mar 13 12:49:11.542037 master-0 kubenswrapper[7518]: healthz check failed
Mar 13 12:49:11.542575 master-0 kubenswrapper[7518]: I0313 12:49:11.542044 7518 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-wtf6j" podUID="45925a5e-41ae-4c19-b586-3151c7677612" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 13 12:49:12.543269 master-0 kubenswrapper[7518]: I0313 12:49:12.543197 7518 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-wtf6j container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 13 12:49:12.543269 master-0 kubenswrapper[7518]: [-]has-synced failed: reason withheld
Mar 13 12:49:12.543269 master-0 kubenswrapper[7518]: [+]process-running ok
Mar 13 12:49:12.543269 master-0 kubenswrapper[7518]: healthz check failed
Mar 13 12:49:12.543985 master-0 kubenswrapper[7518]: I0313 12:49:12.543282 7518 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-wtf6j" podUID="45925a5e-41ae-4c19-b586-3151c7677612" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 13 12:49:13.541852 master-0 kubenswrapper[7518]: I0313 12:49:13.541756 7518 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-wtf6j container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 13 12:49:13.541852 master-0 kubenswrapper[7518]: [-]has-synced failed: reason withheld
Mar 13 12:49:13.541852 master-0 kubenswrapper[7518]: [+]process-running ok
Mar 13 12:49:13.541852 master-0 kubenswrapper[7518]: healthz check failed
Mar 13 12:49:13.541852 master-0 kubenswrapper[7518]: I0313 12:49:13.541835 7518 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-wtf6j" podUID="45925a5e-41ae-4c19-b586-3151c7677612" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 13 12:49:13.598940 master-0 kubenswrapper[7518]: I0313 12:49:13.598865 7518 scope.go:117] "RemoveContainer" containerID="db4e89d51ac70265662c9ba63d20dfe2538e991716870f677f78b9fb028c5609"
Mar 13 12:49:14.541597 master-0 kubenswrapper[7518]: I0313 12:49:14.541538 7518 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-wtf6j container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 13 12:49:14.541597 master-0 kubenswrapper[7518]: [-]has-synced failed: reason withheld
Mar 13 12:49:14.541597 master-0 kubenswrapper[7518]: [+]process-running ok
Mar 13 12:49:14.541597 master-0 kubenswrapper[7518]: healthz check failed
Mar 13 12:49:14.542459 master-0 kubenswrapper[7518]: I0313 12:49:14.542414 7518 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-wtf6j" podUID="45925a5e-41ae-4c19-b586-3151c7677612" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 13 12:49:14.854997 master-0 kubenswrapper[7518]: I0313 12:49:14.854811 7518 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-controller-manager-master-0" event={"ID":"f78c05e1499b533b83f091333d61f045","Type":"ContainerStarted","Data":"70ca563a3bda7cc49d130c71d95d6db991e5796cde50a910c3e63400c9e5a03b"}
Mar 13 12:49:15.216505 master-0 kubenswrapper[7518]: I0313 12:49:15.216356 7518 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="kube-system/bootstrap-kube-controller-manager-master-0"
Mar 13 12:49:15.542006 master-0 kubenswrapper[7518]: I0313 12:49:15.541892 7518 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-wtf6j container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 13 12:49:15.542006 master-0 kubenswrapper[7518]: [-]has-synced failed: reason withheld
Mar 13 12:49:15.542006 master-0 kubenswrapper[7518]: [+]process-running ok
Mar 13 12:49:15.542006 master-0 kubenswrapper[7518]: healthz check failed
Mar 13 12:49:15.542383 master-0 kubenswrapper[7518]: I0313 12:49:15.542016 7518 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-wtf6j" podUID="45925a5e-41ae-4c19-b586-3151c7677612" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 13 12:49:16.542160 master-0 kubenswrapper[7518]: I0313 12:49:16.542096 7518 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-wtf6j container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 13 12:49:16.542160 master-0 kubenswrapper[7518]: [-]has-synced failed: reason withheld
Mar 13 12:49:16.542160 master-0 kubenswrapper[7518]: [+]process-running ok
Mar 13 12:49:16.542160 master-0 kubenswrapper[7518]: healthz check failed
Mar 13 12:49:16.542696 master-0 kubenswrapper[7518]: I0313 12:49:16.542190 7518 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-wtf6j" podUID="45925a5e-41ae-4c19-b586-3151c7677612" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 13 12:49:17.542000 master-0 kubenswrapper[7518]: I0313 12:49:17.541872 7518 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-wtf6j container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 13 12:49:17.542000 master-0 kubenswrapper[7518]: [-]has-synced failed: reason withheld
Mar 13 12:49:17.542000 master-0 kubenswrapper[7518]: [+]process-running ok
Mar 13 12:49:17.542000 master-0 kubenswrapper[7518]: healthz check failed
Mar 13 12:49:17.542000 master-0 kubenswrapper[7518]: I0313 12:49:17.541987 7518 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-wtf6j" podUID="45925a5e-41ae-4c19-b586-3151c7677612" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 13 12:49:18.541922 master-0 kubenswrapper[7518]: I0313 12:49:18.541842 7518 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-wtf6j container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 13 12:49:18.541922 master-0 kubenswrapper[7518]: [-]has-synced failed: reason withheld
Mar 13 12:49:18.541922 master-0 kubenswrapper[7518]: [+]process-running ok
Mar 13 12:49:18.541922 master-0 kubenswrapper[7518]: healthz check failed
Mar 13 12:49:18.542966 master-0 kubenswrapper[7518]: I0313 12:49:18.541933 7518 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-wtf6j" podUID="45925a5e-41ae-4c19-b586-3151c7677612" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 13 12:49:19.542666 master-0 kubenswrapper[7518]: I0313 12:49:19.542589 7518 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-wtf6j container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 13 12:49:19.542666 master-0 kubenswrapper[7518]: [-]has-synced failed: reason withheld
Mar 13 12:49:19.542666 master-0 kubenswrapper[7518]: [+]process-running ok
Mar 13 12:49:19.542666 master-0 kubenswrapper[7518]: healthz check failed
Mar 13 12:49:19.542666 master-0 kubenswrapper[7518]: I0313 12:49:19.542665 7518 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-wtf6j" podUID="45925a5e-41ae-4c19-b586-3151c7677612" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 13 12:49:20.542528 master-0 kubenswrapper[7518]: I0313 12:49:20.542422 7518 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-wtf6j container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 13 12:49:20.542528 master-0 kubenswrapper[7518]: [-]has-synced failed: reason withheld
Mar 13 12:49:20.542528 master-0 kubenswrapper[7518]: [+]process-running ok
Mar 13 12:49:20.542528 master-0 kubenswrapper[7518]: healthz check failed
Mar 13 12:49:20.542528 master-0 kubenswrapper[7518]: I0313 12:49:20.542512 7518 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-wtf6j" podUID="45925a5e-41ae-4c19-b586-3151c7677612" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 13 12:49:20.783316 master-0 kubenswrapper[7518]: I0313 12:49:20.783245 7518 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/package-server-manager-854648ff6d-669qk"
Mar 13 12:49:21.542494 master-0 kubenswrapper[7518]: I0313 12:49:21.542381 7518 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-wtf6j container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 13 12:49:21.542494 master-0 kubenswrapper[7518]: [-]has-synced failed: reason withheld
Mar 13 12:49:21.542494 master-0 kubenswrapper[7518]: [+]process-running ok
Mar 13 12:49:21.542494 master-0 kubenswrapper[7518]: healthz check failed
Mar 13 12:49:21.542494 master-0 kubenswrapper[7518]: I0313 12:49:21.542487 7518 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-wtf6j" podUID="45925a5e-41ae-4c19-b586-3151c7677612" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 13 12:49:22.543027 master-0 kubenswrapper[7518]: I0313 12:49:22.542956 7518 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-wtf6j container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 13 12:49:22.543027 master-0 kubenswrapper[7518]: [-]has-synced failed: reason withheld
Mar 13 12:49:22.543027 master-0 kubenswrapper[7518]: [+]process-running ok
Mar 13 12:49:22.543027 master-0 kubenswrapper[7518]: healthz check failed
Mar 13 12:49:22.544002 master-0 kubenswrapper[7518]: I0313 12:49:22.543038 7518 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-wtf6j" podUID="45925a5e-41ae-4c19-b586-3151c7677612" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 13 12:49:23.541557 master-0 kubenswrapper[7518]: I0313 12:49:23.541472 7518 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-wtf6j container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 13 12:49:23.541557 master-0 kubenswrapper[7518]: [-]has-synced failed: reason withheld
Mar 13 12:49:23.541557 master-0 kubenswrapper[7518]: [+]process-running ok
Mar 13 12:49:23.541557 master-0 kubenswrapper[7518]: healthz check failed
Mar 13 12:49:23.541557 master-0 kubenswrapper[7518]: I0313 12:49:23.541528 7518 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-wtf6j" podUID="45925a5e-41ae-4c19-b586-3151c7677612" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 13 12:49:24.041736 master-0 kubenswrapper[7518]: E0313 12:49:24.041626 7518 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="7s"
Mar 13 12:49:24.146560 master-0 kubenswrapper[7518]: I0313 12:49:24.146465 7518 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="kube-system/bootstrap-kube-controller-manager-master-0"
Mar 13 12:49:24.543127 master-0 kubenswrapper[7518]: I0313 12:49:24.543034 7518 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-wtf6j container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 13 12:49:24.543127 master-0 kubenswrapper[7518]: [-]has-synced failed: reason withheld
Mar 13 12:49:24.543127 master-0 kubenswrapper[7518]: [+]process-running ok
Mar 13 12:49:24.543127 master-0 kubenswrapper[7518]: healthz check failed
Mar 13 12:49:24.543127 master-0 kubenswrapper[7518]: I0313 12:49:24.543160 7518 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-wtf6j" podUID="45925a5e-41ae-4c19-b586-3151c7677612" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 13 12:49:25.541817 master-0 kubenswrapper[7518]: I0313 12:49:25.541752 7518 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-wtf6j container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 13 12:49:25.541817 master-0 kubenswrapper[7518]: [-]has-synced failed: reason withheld
Mar 13 12:49:25.541817 master-0 kubenswrapper[7518]: [+]process-running ok
Mar 13 12:49:25.541817 master-0 kubenswrapper[7518]: healthz check failed
Mar 13 12:49:25.542542 master-0 kubenswrapper[7518]: I0313 12:49:25.541822 7518 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-wtf6j" podUID="45925a5e-41ae-4c19-b586-3151c7677612" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 13 12:49:26.543264 master-0 kubenswrapper[7518]: I0313 12:49:26.543013 7518 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-wtf6j container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 13 12:49:26.543264 master-0 kubenswrapper[7518]: [-]has-synced failed: reason withheld
Mar 13 12:49:26.543264 master-0 kubenswrapper[7518]: [+]process-running ok
Mar 13 12:49:26.543264 master-0 kubenswrapper[7518]: healthz check failed
Mar 13 12:49:26.543264 master-0 kubenswrapper[7518]: I0313 12:49:26.543131 7518 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-wtf6j" podUID="45925a5e-41ae-4c19-b586-3151c7677612" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 13 12:49:26.543264 master-0 kubenswrapper[7518]: I0313 12:49:26.543261 7518 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-79f8cd6fdd-wtf6j"
Mar 13 12:49:26.544918 master-0 kubenswrapper[7518]: I0313 12:49:26.544559 7518 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="router" containerStatusID={"Type":"cri-o","ID":"c4f835c09db11145ad2a4fe25a302845b3cf71bff631c2bae9c2d15853a5abe8"} pod="openshift-ingress/router-default-79f8cd6fdd-wtf6j" containerMessage="Container router failed startup probe, will be restarted"
Mar 13 12:49:26.544918 master-0 kubenswrapper[7518]: I0313 12:49:26.544648 7518 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ingress/router-default-79f8cd6fdd-wtf6j" podUID="45925a5e-41ae-4c19-b586-3151c7677612" containerName="router" containerID="cri-o://c4f835c09db11145ad2a4fe25a302845b3cf71bff631c2bae9c2d15853a5abe8" gracePeriod=3600
Mar 13 12:49:26.567305 master-0 kubenswrapper[7518]: E0313 12:49:26.567224 7518 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-13T12:49:16Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-13T12:49:16Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-13T12:49:16Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-13T12:49:16Z\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"runc\\\"}]}}\" for node \"master-0\": Patch \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0/status?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Mar 13 12:49:26.964241 master-0 kubenswrapper[7518]: I0313 12:49:26.964167 7518 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-storage-operator_csi-snapshot-controller-7577d6f48-pjpn2_c642c18f-f960-4418-bcb7-df884f8f8ad5/snapshot-controller/2.log"
Mar 13 12:49:26.964588 master-0 kubenswrapper[7518]: I0313 12:49:26.964553 7518 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-storage-operator_csi-snapshot-controller-7577d6f48-pjpn2_c642c18f-f960-4418-bcb7-df884f8f8ad5/snapshot-controller/1.log"
Mar 13 12:49:26.964759 master-0 kubenswrapper[7518]: I0313 12:49:26.964589 7518 generic.go:334] "Generic (PLEG): container finished" podID="c642c18f-f960-4418-bcb7-df884f8f8ad5" containerID="66bce1ffc4c0b981e45e9808ac9c5d4b5f8590e65596840ae0d2123b61b50990" exitCode=1
Mar 13 12:49:26.964759 master-0 kubenswrapper[7518]: I0313 12:49:26.964621 7518 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/csi-snapshot-controller-7577d6f48-pjpn2" event={"ID":"c642c18f-f960-4418-bcb7-df884f8f8ad5","Type":"ContainerDied","Data":"66bce1ffc4c0b981e45e9808ac9c5d4b5f8590e65596840ae0d2123b61b50990"}
Mar 13 12:49:26.964759 master-0 kubenswrapper[7518]: I0313 12:49:26.964656 7518 scope.go:117] "RemoveContainer" containerID="870f37c47c4c47867bd607dfc7f5e2b18321f63b5705ab51a073513178f4a93d"
Mar 13 12:49:26.965159 master-0 kubenswrapper[7518]: I0313 12:49:26.965108 7518 scope.go:117] "RemoveContainer" containerID="66bce1ffc4c0b981e45e9808ac9c5d4b5f8590e65596840ae0d2123b61b50990"
Mar 13 12:49:26.965338 master-0 kubenswrapper[7518]: E0313 12:49:26.965309 7518 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"snapshot-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=snapshot-controller pod=csi-snapshot-controller-7577d6f48-pjpn2_openshift-cluster-storage-operator(c642c18f-f960-4418-bcb7-df884f8f8ad5)\"" pod="openshift-cluster-storage-operator/csi-snapshot-controller-7577d6f48-pjpn2" podUID="c642c18f-f960-4418-bcb7-df884f8f8ad5"
Mar 13 12:49:27.147562 master-0 kubenswrapper[7518]: I0313 12:49:27.147398 7518 prober.go:107] "Probe failed" probeType="Startup" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="f78c05e1499b533b83f091333d61f045" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.32.10:10257/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Mar 13 12:49:27.972885 master-0 kubenswrapper[7518]: I0313 12:49:27.972822 7518 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-storage-operator_csi-snapshot-controller-7577d6f48-pjpn2_c642c18f-f960-4418-bcb7-df884f8f8ad5/snapshot-controller/2.log"
Mar 13 12:49:35.354385 master-0 kubenswrapper[7518]: E0313 12:49:35.354185 7518 event.go:359] "Server rejected event (will not retry!)" err="Timeout: request did not complete within requested timeout - context deadline exceeded" event="&Event{ObjectMeta:{bootstrap-kube-controller-manager-master-0.189c67590c211ed4 kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:bootstrap-kube-controller-manager-master-0,UID:f78c05e1499b533b83f091333d61f045,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager},},Reason:BackOff,Message:Back-off restarting failed container kube-controller-manager in pod bootstrap-kube-controller-manager-master-0_kube-system(f78c05e1499b533b83f091333d61f045),Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-13 12:46:25.430380244 +0000 UTC m=+540.063449441,LastTimestamp:2026-03-13 12:46:25.430380244 +0000 UTC m=+540.063449441,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Mar 13 12:49:36.045271 master-0 kubenswrapper[7518]: I0313 12:49:36.045216 7518 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_cluster-baremetal-operator-5cdb4c5598-l6jp5_317af639-269e-4163-8e24-fcea468b9352/cluster-baremetal-operator/1.log"
Mar 13 12:49:36.046272 master-0 kubenswrapper[7518]: I0313 12:49:36.046245 7518 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_cluster-baremetal-operator-5cdb4c5598-l6jp5_317af639-269e-4163-8e24-fcea468b9352/cluster-baremetal-operator/0.log"
Mar 13 12:49:36.046335 master-0 kubenswrapper[7518]: I0313 12:49:36.046292 7518 generic.go:334] "Generic (PLEG): container finished" podID="317af639-269e-4163-8e24-fcea468b9352" containerID="95a6bd22fb6c0c4b1137634707e1ef04230dc603ccfcb3a303f17aa2b6d154e3" exitCode=1
Mar 13 12:49:36.046371 master-0 kubenswrapper[7518]: I0313 12:49:36.046329 7518 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/cluster-baremetal-operator-5cdb4c5598-l6jp5" event={"ID":"317af639-269e-4163-8e24-fcea468b9352","Type":"ContainerDied","Data":"95a6bd22fb6c0c4b1137634707e1ef04230dc603ccfcb3a303f17aa2b6d154e3"}
Mar 13 12:49:36.046371 master-0 kubenswrapper[7518]: I0313 12:49:36.046365 7518 scope.go:117] "RemoveContainer" containerID="31592103bc0b8de889024ea6d6f7d7d81a7a97c8aa34c21b276d7003e983eaa5"
Mar 13 12:49:36.046860 master-0 kubenswrapper[7518]: I0313 12:49:36.046830 7518 scope.go:117] "RemoveContainer" containerID="95a6bd22fb6c0c4b1137634707e1ef04230dc603ccfcb3a303f17aa2b6d154e3"
Mar 13 12:49:36.047180 master-0 kubenswrapper[7518]: E0313 12:49:36.047119 7518 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cluster-baremetal-operator\" with CrashLoopBackOff: \"back-off 10s restarting failed container=cluster-baremetal-operator pod=cluster-baremetal-operator-5cdb4c5598-l6jp5_openshift-machine-api(317af639-269e-4163-8e24-fcea468b9352)\"" pod="openshift-machine-api/cluster-baremetal-operator-5cdb4c5598-l6jp5" podUID="317af639-269e-4163-8e24-fcea468b9352"
Mar 13 12:49:36.568870 master-0 kubenswrapper[7518]: E0313 12:49:36.568555 7518 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": context deadline exceeded"
Mar 13 12:49:37.054342 master-0 kubenswrapper[7518]: I0313 12:49:37.054287 7518 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_cluster-baremetal-operator-5cdb4c5598-l6jp5_317af639-269e-4163-8e24-fcea468b9352/cluster-baremetal-operator/1.log"
Mar 13 12:49:37.147945 master-0 kubenswrapper[7518]: I0313 12:49:37.147866 7518 prober.go:107] "Probe failed" probeType="Startup" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="f78c05e1499b533b83f091333d61f045" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.32.10:10257/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Mar 13 12:49:37.597999 master-0 kubenswrapper[7518]: I0313 12:49:37.597950 7518 scope.go:117] "RemoveContainer" containerID="66bce1ffc4c0b981e45e9808ac9c5d4b5f8590e65596840ae0d2123b61b50990"
Mar 13 12:49:37.598585 master-0 kubenswrapper[7518]: E0313 12:49:37.598209 7518 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"snapshot-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=snapshot-controller pod=csi-snapshot-controller-7577d6f48-pjpn2_openshift-cluster-storage-operator(c642c18f-f960-4418-bcb7-df884f8f8ad5)\"" pod="openshift-cluster-storage-operator/csi-snapshot-controller-7577d6f48-pjpn2" podUID="c642c18f-f960-4418-bcb7-df884f8f8ad5"
Mar 13 12:49:41.043049 master-0 kubenswrapper[7518]: E0313 12:49:41.042937 7518 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="7s"
Mar 13 12:49:42.804267 master-0 kubenswrapper[7518]: E0313 12:49:42.804178 7518 mirror_client.go:138] "Failed deleting a mirror pod" err="Timeout: request did not complete within requested timeout - context deadline exceeded" pod="openshift-etcd/etcd-master-0"
Mar 13 12:49:43.097791 master-0 kubenswrapper[7518]: I0313 12:49:43.097731 7518 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"29c709c82970b529e7b9b895aa92ef05","Type":"ContainerStarted","Data":"5cc34aac9149d80ee13d05fb99b57b8557bc192e4d7f099ae7781999fb6ddcb6"}
Mar 13 12:49:44.109565 master-0 kubenswrapper[7518]: I0313 12:49:44.109491
7518 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"29c709c82970b529e7b9b895aa92ef05","Type":"ContainerStarted","Data":"912ff796850453c01df7cbeecc45cdb10c34a7fb4ccc08e76183a5f55eb1bcb5"} Mar 13 12:49:44.109565 master-0 kubenswrapper[7518]: I0313 12:49:44.109547 7518 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"29c709c82970b529e7b9b895aa92ef05","Type":"ContainerStarted","Data":"10e1034aafb2cd99b68fa2c04089a546d6fd7367b27440b5229a0245c44b9f38"} Mar 13 12:49:45.122618 master-0 kubenswrapper[7518]: I0313 12:49:45.122539 7518 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"29c709c82970b529e7b9b895aa92ef05","Type":"ContainerStarted","Data":"24a899f1f40a16e8df69e1053ad63adddd8eadeaaa916f3d6de11e212d873278"} Mar 13 12:49:45.122618 master-0 kubenswrapper[7518]: I0313 12:49:45.122591 7518 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"29c709c82970b529e7b9b895aa92ef05","Type":"ContainerStarted","Data":"c82691e79572302ede8d7dd4b4262e703b38e5a73e04bef601466f9e50d78d7d"} Mar 13 12:49:45.123126 master-0 kubenswrapper[7518]: I0313 12:49:45.122862 7518 kubelet.go:1909] "Trying to delete pod" pod="openshift-etcd/etcd-master-0" podUID="12715056-f5d1-4df5-82a5-f0c637ce3700" Mar 13 12:49:45.123126 master-0 kubenswrapper[7518]: I0313 12:49:45.122877 7518 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-etcd/etcd-master-0" podUID="12715056-f5d1-4df5-82a5-f0c637ce3700" Mar 13 12:49:46.569836 master-0 kubenswrapper[7518]: E0313 12:49:46.569739 7518 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 13 12:49:47.147181 master-0 kubenswrapper[7518]: I0313 
12:49:47.147044 7518 prober.go:107] "Probe failed" probeType="Startup" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="f78c05e1499b533b83f091333d61f045" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.32.10:10257/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 13 12:49:47.147457 master-0 kubenswrapper[7518]: I0313 12:49:47.147224 7518 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 13 12:49:47.148020 master-0 kubenswrapper[7518]: I0313 12:49:47.147979 7518 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="kube-controller-manager" containerStatusID={"Type":"cri-o","ID":"70ca563a3bda7cc49d130c71d95d6db991e5796cde50a910c3e63400c9e5a03b"} pod="kube-system/bootstrap-kube-controller-manager-master-0" containerMessage="Container kube-controller-manager failed startup probe, will be restarted" Mar 13 12:49:47.148098 master-0 kubenswrapper[7518]: I0313 12:49:47.148069 7518 kuberuntime_container.go:808] "Killing container with a grace period" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="f78c05e1499b533b83f091333d61f045" containerName="kube-controller-manager" containerID="cri-o://70ca563a3bda7cc49d130c71d95d6db991e5796cde50a910c3e63400c9e5a03b" gracePeriod=30 Mar 13 12:49:47.269301 master-0 kubenswrapper[7518]: E0313 12:49:47.269246 7518 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=kube-controller-manager pod=bootstrap-kube-controller-manager-master-0_kube-system(f78c05e1499b533b83f091333d61f045)\"" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="f78c05e1499b533b83f091333d61f045" Mar 13 12:49:48.143890 master-0 kubenswrapper[7518]: I0313 
12:49:48.143852 7518 generic.go:334] "Generic (PLEG): container finished" podID="f78c05e1499b533b83f091333d61f045" containerID="70ca563a3bda7cc49d130c71d95d6db991e5796cde50a910c3e63400c9e5a03b" exitCode=2 Mar 13 12:49:48.144479 master-0 kubenswrapper[7518]: I0313 12:49:48.143926 7518 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-controller-manager-master-0" event={"ID":"f78c05e1499b533b83f091333d61f045","Type":"ContainerDied","Data":"70ca563a3bda7cc49d130c71d95d6db991e5796cde50a910c3e63400c9e5a03b"} Mar 13 12:49:48.144572 master-0 kubenswrapper[7518]: I0313 12:49:48.144561 7518 scope.go:117] "RemoveContainer" containerID="db4e89d51ac70265662c9ba63d20dfe2538e991716870f677f78b9fb028c5609" Mar 13 12:49:48.145196 master-0 kubenswrapper[7518]: I0313 12:49:48.145174 7518 scope.go:117] "RemoveContainer" containerID="70ca563a3bda7cc49d130c71d95d6db991e5796cde50a910c3e63400c9e5a03b" Mar 13 12:49:48.145484 master-0 kubenswrapper[7518]: E0313 12:49:48.145458 7518 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=kube-controller-manager pod=bootstrap-kube-controller-manager-master-0_kube-system(f78c05e1499b533b83f091333d61f045)\"" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="f78c05e1499b533b83f091333d61f045" Mar 13 12:49:48.599118 master-0 kubenswrapper[7518]: I0313 12:49:48.598839 7518 scope.go:117] "RemoveContainer" containerID="95a6bd22fb6c0c4b1137634707e1ef04230dc603ccfcb3a303f17aa2b6d154e3" Mar 13 12:49:48.626747 master-0 kubenswrapper[7518]: I0313 12:49:48.626659 7518 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-etcd/etcd-master-0" Mar 13 12:49:49.157539 master-0 kubenswrapper[7518]: I0313 12:49:49.157465 7518 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-machine-api_cluster-baremetal-operator-5cdb4c5598-l6jp5_317af639-269e-4163-8e24-fcea468b9352/cluster-baremetal-operator/1.log" Mar 13 12:49:49.158228 master-0 kubenswrapper[7518]: I0313 12:49:49.157902 7518 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/cluster-baremetal-operator-5cdb4c5598-l6jp5" event={"ID":"317af639-269e-4163-8e24-fcea468b9352","Type":"ContainerStarted","Data":"15cedcb1b8553ec2f730223913ef265bc163bb67b8745c32aa558c39edcca0ac"} Mar 13 12:49:52.599130 master-0 kubenswrapper[7518]: I0313 12:49:52.599002 7518 scope.go:117] "RemoveContainer" containerID="66bce1ffc4c0b981e45e9808ac9c5d4b5f8590e65596840ae0d2123b61b50990" Mar 13 12:49:53.187373 master-0 kubenswrapper[7518]: I0313 12:49:53.187334 7518 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-storage-operator_csi-snapshot-controller-7577d6f48-pjpn2_c642c18f-f960-4418-bcb7-df884f8f8ad5/snapshot-controller/2.log" Mar 13 12:49:53.187674 master-0 kubenswrapper[7518]: I0313 12:49:53.187399 7518 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/csi-snapshot-controller-7577d6f48-pjpn2" event={"ID":"c642c18f-f960-4418-bcb7-df884f8f8ad5","Type":"ContainerStarted","Data":"fe386e2cfe3b2db8724e6c5ea7592f727d6d5b2317f95ae6fc7b814707b7e83a"} Mar 13 12:49:53.626767 master-0 kubenswrapper[7518]: I0313 12:49:53.626706 7518 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-etcd/etcd-master-0" Mar 13 12:49:53.654292 master-0 kubenswrapper[7518]: I0313 12:49:53.654235 7518 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-etcd/etcd-master-0" Mar 13 12:49:56.570991 master-0 kubenswrapper[7518]: E0313 12:49:56.570921 7518 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": net/http: request 
canceled (Client.Timeout exceeded while awaiting headers)" Mar 13 12:49:58.007002 master-0 kubenswrapper[7518]: I0313 12:49:58.005288 7518 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 13 12:49:58.007002 master-0 kubenswrapper[7518]: I0313 12:49:58.005930 7518 scope.go:117] "RemoveContainer" containerID="70ca563a3bda7cc49d130c71d95d6db991e5796cde50a910c3e63400c9e5a03b" Mar 13 12:49:58.007002 master-0 kubenswrapper[7518]: E0313 12:49:58.006187 7518 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=kube-controller-manager pod=bootstrap-kube-controller-manager-master-0_kube-system(f78c05e1499b533b83f091333d61f045)\"" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="f78c05e1499b533b83f091333d61f045" Mar 13 12:49:58.044307 master-0 kubenswrapper[7518]: E0313 12:49:58.044243 7518 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="7s" Mar 13 12:49:58.655948 master-0 kubenswrapper[7518]: I0313 12:49:58.655869 7518 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-etcd/etcd-master-0" Mar 13 12:50:06.572276 master-0 kubenswrapper[7518]: E0313 12:50:06.572224 7518 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 13 12:50:06.572906 master-0 kubenswrapper[7518]: E0313 12:50:06.572886 7518 kubelet_node_status.go:572] "Unable to update node status" 
err="update node status exceeds retry count" Mar 13 12:50:09.358756 master-0 kubenswrapper[7518]: E0313 12:50:09.358584 7518 event.go:359] "Server rejected event (will not retry!)" err="Timeout: request did not complete within requested timeout - context deadline exceeded" event="&Event{ObjectMeta:{bootstrap-kube-controller-manager-master-0.189c67590c211ed4 kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:bootstrap-kube-controller-manager-master-0,UID:f78c05e1499b533b83f091333d61f045,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager},},Reason:BackOff,Message:Back-off restarting failed container kube-controller-manager in pod bootstrap-kube-controller-manager-master-0_kube-system(f78c05e1499b533b83f091333d61f045),Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-13 12:46:25.430380244 +0000 UTC m=+540.063449441,LastTimestamp:2026-03-13 12:46:28.005929758 +0000 UTC m=+542.638998945,Count:2,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 13 12:50:10.598853 master-0 kubenswrapper[7518]: I0313 12:50:10.598804 7518 scope.go:117] "RemoveContainer" containerID="70ca563a3bda7cc49d130c71d95d6db991e5796cde50a910c3e63400c9e5a03b" Mar 13 12:50:10.600306 master-0 kubenswrapper[7518]: E0313 12:50:10.600261 7518 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=kube-controller-manager pod=bootstrap-kube-controller-manager-master-0_kube-system(f78c05e1499b533b83f091333d61f045)\"" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="f78c05e1499b533b83f091333d61f045" Mar 13 12:50:11.321923 master-0 kubenswrapper[7518]: I0313 12:50:11.321827 7518 status_manager.go:851] "Failed to get status for pod" 
podUID="e01de416-3de5-4357-a84e-f8eabb15a500" pod="openshift-etcd/installer-2-master-0" err="the server was unable to return a response in the time allotted, but may still be processing the request (get pods installer-2-master-0)" Mar 13 12:50:13.372856 master-0 kubenswrapper[7518]: I0313 12:50:13.372781 7518 generic.go:334] "Generic (PLEG): container finished" podID="45925a5e-41ae-4c19-b586-3151c7677612" containerID="c4f835c09db11145ad2a4fe25a302845b3cf71bff631c2bae9c2d15853a5abe8" exitCode=0 Mar 13 12:50:13.373913 master-0 kubenswrapper[7518]: I0313 12:50:13.372865 7518 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-79f8cd6fdd-wtf6j" event={"ID":"45925a5e-41ae-4c19-b586-3151c7677612","Type":"ContainerDied","Data":"c4f835c09db11145ad2a4fe25a302845b3cf71bff631c2bae9c2d15853a5abe8"} Mar 13 12:50:13.373913 master-0 kubenswrapper[7518]: I0313 12:50:13.372921 7518 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-79f8cd6fdd-wtf6j" event={"ID":"45925a5e-41ae-4c19-b586-3151c7677612","Type":"ContainerStarted","Data":"f4c4c4e5602a184f824d2367e7178507d9196d2b340284307f9055d03b447109"} Mar 13 12:50:13.373913 master-0 kubenswrapper[7518]: I0313 12:50:13.372948 7518 scope.go:117] "RemoveContainer" containerID="1033e2108ac67b4d3f75cb158efc6594f949bbad75576abf1a2d8dbd850e968d" Mar 13 12:50:13.539906 master-0 kubenswrapper[7518]: I0313 12:50:13.539829 7518 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-79f8cd6fdd-wtf6j" Mar 13 12:50:13.544432 master-0 kubenswrapper[7518]: I0313 12:50:13.544314 7518 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-wtf6j container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 12:50:13.544432 master-0 kubenswrapper[7518]: [-]has-synced failed: reason withheld Mar 13 
12:50:13.544432 master-0 kubenswrapper[7518]: [+]process-running ok Mar 13 12:50:13.544432 master-0 kubenswrapper[7518]: healthz check failed Mar 13 12:50:13.544987 master-0 kubenswrapper[7518]: I0313 12:50:13.544420 7518 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-wtf6j" podUID="45925a5e-41ae-4c19-b586-3151c7677612" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 12:50:14.541954 master-0 kubenswrapper[7518]: I0313 12:50:14.541845 7518 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-wtf6j container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 12:50:14.541954 master-0 kubenswrapper[7518]: [-]has-synced failed: reason withheld Mar 13 12:50:14.541954 master-0 kubenswrapper[7518]: [+]process-running ok Mar 13 12:50:14.541954 master-0 kubenswrapper[7518]: healthz check failed Mar 13 12:50:14.542910 master-0 kubenswrapper[7518]: I0313 12:50:14.541954 7518 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-wtf6j" podUID="45925a5e-41ae-4c19-b586-3151c7677612" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 12:50:15.045693 master-0 kubenswrapper[7518]: E0313 12:50:15.045617 7518 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="7s" Mar 13 12:50:15.541893 master-0 kubenswrapper[7518]: I0313 12:50:15.541805 7518 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-wtf6j container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" 
start-of-body=[-]backend-http failed: reason withheld Mar 13 12:50:15.541893 master-0 kubenswrapper[7518]: [-]has-synced failed: reason withheld Mar 13 12:50:15.541893 master-0 kubenswrapper[7518]: [+]process-running ok Mar 13 12:50:15.541893 master-0 kubenswrapper[7518]: healthz check failed Mar 13 12:50:15.543105 master-0 kubenswrapper[7518]: I0313 12:50:15.543054 7518 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-wtf6j" podUID="45925a5e-41ae-4c19-b586-3151c7677612" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 12:50:16.541400 master-0 kubenswrapper[7518]: I0313 12:50:16.541305 7518 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-wtf6j container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 12:50:16.541400 master-0 kubenswrapper[7518]: [-]has-synced failed: reason withheld Mar 13 12:50:16.541400 master-0 kubenswrapper[7518]: [+]process-running ok Mar 13 12:50:16.541400 master-0 kubenswrapper[7518]: healthz check failed Mar 13 12:50:16.541815 master-0 kubenswrapper[7518]: I0313 12:50:16.541446 7518 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-wtf6j" podUID="45925a5e-41ae-4c19-b586-3151c7677612" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 12:50:17.541797 master-0 kubenswrapper[7518]: I0313 12:50:17.541731 7518 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-wtf6j container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 12:50:17.541797 master-0 kubenswrapper[7518]: [-]has-synced failed: reason withheld Mar 13 12:50:17.541797 master-0 kubenswrapper[7518]: [+]process-running ok 
Mar 13 12:50:17.541797 master-0 kubenswrapper[7518]: healthz check failed Mar 13 12:50:17.542405 master-0 kubenswrapper[7518]: I0313 12:50:17.541807 7518 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-wtf6j" podUID="45925a5e-41ae-4c19-b586-3151c7677612" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 12:50:18.542251 master-0 kubenswrapper[7518]: I0313 12:50:18.542187 7518 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-wtf6j container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 12:50:18.542251 master-0 kubenswrapper[7518]: [-]has-synced failed: reason withheld Mar 13 12:50:18.542251 master-0 kubenswrapper[7518]: [+]process-running ok Mar 13 12:50:18.542251 master-0 kubenswrapper[7518]: healthz check failed Mar 13 12:50:18.543516 master-0 kubenswrapper[7518]: I0313 12:50:18.543211 7518 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-wtf6j" podUID="45925a5e-41ae-4c19-b586-3151c7677612" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 12:50:19.126687 master-0 kubenswrapper[7518]: E0313 12:50:19.126613 7518 mirror_client.go:138] "Failed deleting a mirror pod" err="Timeout: request did not complete within requested timeout - context deadline exceeded" pod="openshift-etcd/etcd-master-0" Mar 13 12:50:19.422607 master-0 kubenswrapper[7518]: I0313 12:50:19.422404 7518 kubelet.go:1909] "Trying to delete pod" pod="openshift-etcd/etcd-master-0" podUID="12715056-f5d1-4df5-82a5-f0c637ce3700" Mar 13 12:50:19.422607 master-0 kubenswrapper[7518]: I0313 12:50:19.422479 7518 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-etcd/etcd-master-0" podUID="12715056-f5d1-4df5-82a5-f0c637ce3700" Mar 13 12:50:19.539460 master-0 
kubenswrapper[7518]: I0313 12:50:19.539371 7518 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-79f8cd6fdd-wtf6j" Mar 13 12:50:19.545388 master-0 kubenswrapper[7518]: I0313 12:50:19.545320 7518 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-wtf6j container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 12:50:19.545388 master-0 kubenswrapper[7518]: [-]has-synced failed: reason withheld Mar 13 12:50:19.545388 master-0 kubenswrapper[7518]: [+]process-running ok Mar 13 12:50:19.545388 master-0 kubenswrapper[7518]: healthz check failed Mar 13 12:50:19.546285 master-0 kubenswrapper[7518]: I0313 12:50:19.545398 7518 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-wtf6j" podUID="45925a5e-41ae-4c19-b586-3151c7677612" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 12:50:20.541123 master-0 kubenswrapper[7518]: I0313 12:50:20.541081 7518 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-wtf6j container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 12:50:20.541123 master-0 kubenswrapper[7518]: [-]has-synced failed: reason withheld Mar 13 12:50:20.541123 master-0 kubenswrapper[7518]: [+]process-running ok Mar 13 12:50:20.541123 master-0 kubenswrapper[7518]: healthz check failed Mar 13 12:50:20.541419 master-0 kubenswrapper[7518]: I0313 12:50:20.541150 7518 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-wtf6j" podUID="45925a5e-41ae-4c19-b586-3151c7677612" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 12:50:21.542326 master-0 
kubenswrapper[7518]: I0313 12:50:21.542227 7518 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-wtf6j container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 12:50:21.542326 master-0 kubenswrapper[7518]: [-]has-synced failed: reason withheld Mar 13 12:50:21.542326 master-0 kubenswrapper[7518]: [+]process-running ok Mar 13 12:50:21.542326 master-0 kubenswrapper[7518]: healthz check failed Mar 13 12:50:21.543803 master-0 kubenswrapper[7518]: I0313 12:50:21.542333 7518 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-wtf6j" podUID="45925a5e-41ae-4c19-b586-3151c7677612" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 12:50:21.599437 master-0 kubenswrapper[7518]: I0313 12:50:21.599368 7518 scope.go:117] "RemoveContainer" containerID="70ca563a3bda7cc49d130c71d95d6db991e5796cde50a910c3e63400c9e5a03b" Mar 13 12:50:21.599944 master-0 kubenswrapper[7518]: E0313 12:50:21.599892 7518 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=kube-controller-manager pod=bootstrap-kube-controller-manager-master-0_kube-system(f78c05e1499b533b83f091333d61f045)\"" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="f78c05e1499b533b83f091333d61f045" Mar 13 12:50:22.542746 master-0 kubenswrapper[7518]: I0313 12:50:22.542660 7518 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-wtf6j container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 12:50:22.542746 master-0 kubenswrapper[7518]: [-]has-synced failed: reason withheld Mar 13 12:50:22.542746 master-0 
kubenswrapper[7518]: [+]process-running ok Mar 13 12:50:22.542746 master-0 kubenswrapper[7518]: healthz check failed Mar 13 12:50:22.543381 master-0 kubenswrapper[7518]: I0313 12:50:22.542774 7518 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-wtf6j" podUID="45925a5e-41ae-4c19-b586-3151c7677612" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 12:50:23.453303 master-0 kubenswrapper[7518]: I0313 12:50:23.453209 7518 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-storage-operator_csi-snapshot-controller-7577d6f48-pjpn2_c642c18f-f960-4418-bcb7-df884f8f8ad5/snapshot-controller/3.log" Mar 13 12:50:23.454072 master-0 kubenswrapper[7518]: I0313 12:50:23.454014 7518 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-storage-operator_csi-snapshot-controller-7577d6f48-pjpn2_c642c18f-f960-4418-bcb7-df884f8f8ad5/snapshot-controller/2.log" Mar 13 12:50:23.454173 master-0 kubenswrapper[7518]: I0313 12:50:23.454098 7518 generic.go:334] "Generic (PLEG): container finished" podID="c642c18f-f960-4418-bcb7-df884f8f8ad5" containerID="fe386e2cfe3b2db8724e6c5ea7592f727d6d5b2317f95ae6fc7b814707b7e83a" exitCode=1 Mar 13 12:50:23.454220 master-0 kubenswrapper[7518]: I0313 12:50:23.454195 7518 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/csi-snapshot-controller-7577d6f48-pjpn2" event={"ID":"c642c18f-f960-4418-bcb7-df884f8f8ad5","Type":"ContainerDied","Data":"fe386e2cfe3b2db8724e6c5ea7592f727d6d5b2317f95ae6fc7b814707b7e83a"} Mar 13 12:50:23.454278 master-0 kubenswrapper[7518]: I0313 12:50:23.454262 7518 scope.go:117] "RemoveContainer" containerID="66bce1ffc4c0b981e45e9808ac9c5d4b5f8590e65596840ae0d2123b61b50990" Mar 13 12:50:23.455114 master-0 kubenswrapper[7518]: I0313 12:50:23.455067 7518 scope.go:117] "RemoveContainer" 
containerID="fe386e2cfe3b2db8724e6c5ea7592f727d6d5b2317f95ae6fc7b814707b7e83a" Mar 13 12:50:23.455508 master-0 kubenswrapper[7518]: E0313 12:50:23.455462 7518 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"snapshot-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=snapshot-controller pod=csi-snapshot-controller-7577d6f48-pjpn2_openshift-cluster-storage-operator(c642c18f-f960-4418-bcb7-df884f8f8ad5)\"" pod="openshift-cluster-storage-operator/csi-snapshot-controller-7577d6f48-pjpn2" podUID="c642c18f-f960-4418-bcb7-df884f8f8ad5" Mar 13 12:50:23.541612 master-0 kubenswrapper[7518]: I0313 12:50:23.541525 7518 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-wtf6j container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 12:50:23.541612 master-0 kubenswrapper[7518]: [-]has-synced failed: reason withheld Mar 13 12:50:23.541612 master-0 kubenswrapper[7518]: [+]process-running ok Mar 13 12:50:23.541612 master-0 kubenswrapper[7518]: healthz check failed Mar 13 12:50:23.542500 master-0 kubenswrapper[7518]: I0313 12:50:23.541634 7518 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-wtf6j" podUID="45925a5e-41ae-4c19-b586-3151c7677612" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 12:50:24.463988 master-0 kubenswrapper[7518]: I0313 12:50:24.463928 7518 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-storage-operator_csi-snapshot-controller-7577d6f48-pjpn2_c642c18f-f960-4418-bcb7-df884f8f8ad5/snapshot-controller/3.log" Mar 13 12:50:24.542045 master-0 kubenswrapper[7518]: I0313 12:50:24.541994 7518 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-wtf6j container/router namespace/openshift-ingress: Startup probe status=failure 
output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 12:50:24.542045 master-0 kubenswrapper[7518]: [-]has-synced failed: reason withheld Mar 13 12:50:24.542045 master-0 kubenswrapper[7518]: [+]process-running ok Mar 13 12:50:24.542045 master-0 kubenswrapper[7518]: healthz check failed Mar 13 12:50:24.542497 master-0 kubenswrapper[7518]: I0313 12:50:24.542057 7518 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-wtf6j" podUID="45925a5e-41ae-4c19-b586-3151c7677612" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 12:50:25.541333 master-0 kubenswrapper[7518]: I0313 12:50:25.541273 7518 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-wtf6j container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 12:50:25.541333 master-0 kubenswrapper[7518]: [-]has-synced failed: reason withheld Mar 13 12:50:25.541333 master-0 kubenswrapper[7518]: [+]process-running ok Mar 13 12:50:25.541333 master-0 kubenswrapper[7518]: healthz check failed Mar 13 12:50:25.541995 master-0 kubenswrapper[7518]: I0313 12:50:25.541341 7518 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-wtf6j" podUID="45925a5e-41ae-4c19-b586-3151c7677612" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 12:50:26.542104 master-0 kubenswrapper[7518]: I0313 12:50:26.542051 7518 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-wtf6j container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 12:50:26.542104 master-0 kubenswrapper[7518]: [-]has-synced failed: reason withheld Mar 13 12:50:26.542104 
master-0 kubenswrapper[7518]: [+]process-running ok Mar 13 12:50:26.542104 master-0 kubenswrapper[7518]: healthz check failed Mar 13 12:50:26.543228 master-0 kubenswrapper[7518]: I0313 12:50:26.542120 7518 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-wtf6j" podUID="45925a5e-41ae-4c19-b586-3151c7677612" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 12:50:27.541644 master-0 kubenswrapper[7518]: I0313 12:50:27.541604 7518 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-wtf6j container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 12:50:27.541644 master-0 kubenswrapper[7518]: [-]has-synced failed: reason withheld Mar 13 12:50:27.541644 master-0 kubenswrapper[7518]: [+]process-running ok Mar 13 12:50:27.541644 master-0 kubenswrapper[7518]: healthz check failed Mar 13 12:50:27.542056 master-0 kubenswrapper[7518]: I0313 12:50:27.542025 7518 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-wtf6j" podUID="45925a5e-41ae-4c19-b586-3151c7677612" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 12:50:28.541627 master-0 kubenswrapper[7518]: I0313 12:50:28.541574 7518 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-wtf6j container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 12:50:28.541627 master-0 kubenswrapper[7518]: [-]has-synced failed: reason withheld Mar 13 12:50:28.541627 master-0 kubenswrapper[7518]: [+]process-running ok Mar 13 12:50:28.541627 master-0 kubenswrapper[7518]: healthz check failed Mar 13 12:50:28.542195 master-0 kubenswrapper[7518]: I0313 12:50:28.541657 7518 
prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-wtf6j" podUID="45925a5e-41ae-4c19-b586-3151c7677612" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 12:50:29.542841 master-0 kubenswrapper[7518]: I0313 12:50:29.542758 7518 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-wtf6j container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 12:50:29.542841 master-0 kubenswrapper[7518]: [-]has-synced failed: reason withheld Mar 13 12:50:29.542841 master-0 kubenswrapper[7518]: [+]process-running ok Mar 13 12:50:29.542841 master-0 kubenswrapper[7518]: healthz check failed Mar 13 12:50:29.544214 master-0 kubenswrapper[7518]: I0313 12:50:29.542853 7518 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-wtf6j" podUID="45925a5e-41ae-4c19-b586-3151c7677612" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 12:50:30.541694 master-0 kubenswrapper[7518]: I0313 12:50:30.541620 7518 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-wtf6j container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 12:50:30.541694 master-0 kubenswrapper[7518]: [-]has-synced failed: reason withheld Mar 13 12:50:30.541694 master-0 kubenswrapper[7518]: [+]process-running ok Mar 13 12:50:30.541694 master-0 kubenswrapper[7518]: healthz check failed Mar 13 12:50:30.541694 master-0 kubenswrapper[7518]: I0313 12:50:30.541682 7518 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-wtf6j" podUID="45925a5e-41ae-4c19-b586-3151c7677612" containerName="router" probeResult="failure" output="HTTP probe 
failed with statuscode: 500" Mar 13 12:50:31.532026 master-0 kubenswrapper[7518]: I0313 12:50:31.531940 7518 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress-operator_ingress-operator-677db989d6-ckl2j_2f79578c-bbfb-4968-893a-730deb4c01f9/ingress-operator/4.log" Mar 13 12:50:31.533717 master-0 kubenswrapper[7518]: I0313 12:50:31.533677 7518 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress-operator_ingress-operator-677db989d6-ckl2j_2f79578c-bbfb-4968-893a-730deb4c01f9/ingress-operator/3.log" Mar 13 12:50:31.534584 master-0 kubenswrapper[7518]: I0313 12:50:31.534534 7518 generic.go:334] "Generic (PLEG): container finished" podID="2f79578c-bbfb-4968-893a-730deb4c01f9" containerID="25a4898dab96b21910d2f9f74a6d0f38ac67afd0471454539094f0cdc130c4f5" exitCode=1 Mar 13 12:50:31.534699 master-0 kubenswrapper[7518]: I0313 12:50:31.534611 7518 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-677db989d6-ckl2j" event={"ID":"2f79578c-bbfb-4968-893a-730deb4c01f9","Type":"ContainerDied","Data":"25a4898dab96b21910d2f9f74a6d0f38ac67afd0471454539094f0cdc130c4f5"} Mar 13 12:50:31.535076 master-0 kubenswrapper[7518]: I0313 12:50:31.534728 7518 scope.go:117] "RemoveContainer" containerID="ae4dbec7c141edff956f746a70905658efa772c8e6c87f546534e12c26343588" Mar 13 12:50:31.535829 master-0 kubenswrapper[7518]: I0313 12:50:31.535784 7518 scope.go:117] "RemoveContainer" containerID="25a4898dab96b21910d2f9f74a6d0f38ac67afd0471454539094f0cdc130c4f5" Mar 13 12:50:31.536446 master-0 kubenswrapper[7518]: E0313 12:50:31.536395 7518 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ingress-operator\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=ingress-operator pod=ingress-operator-677db989d6-ckl2j_openshift-ingress-operator(2f79578c-bbfb-4968-893a-730deb4c01f9)\"" pod="openshift-ingress-operator/ingress-operator-677db989d6-ckl2j" 
podUID="2f79578c-bbfb-4968-893a-730deb4c01f9" Mar 13 12:50:31.542962 master-0 kubenswrapper[7518]: I0313 12:50:31.542871 7518 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-wtf6j container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 12:50:31.542962 master-0 kubenswrapper[7518]: [-]has-synced failed: reason withheld Mar 13 12:50:31.542962 master-0 kubenswrapper[7518]: [+]process-running ok Mar 13 12:50:31.542962 master-0 kubenswrapper[7518]: healthz check failed Mar 13 12:50:31.543419 master-0 kubenswrapper[7518]: I0313 12:50:31.543047 7518 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-wtf6j" podUID="45925a5e-41ae-4c19-b586-3151c7677612" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 12:50:32.047194 master-0 kubenswrapper[7518]: E0313 12:50:32.046831 7518 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="7s" Mar 13 12:50:32.540984 master-0 kubenswrapper[7518]: I0313 12:50:32.540929 7518 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-wtf6j container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 12:50:32.540984 master-0 kubenswrapper[7518]: [-]has-synced failed: reason withheld Mar 13 12:50:32.540984 master-0 kubenswrapper[7518]: [+]process-running ok Mar 13 12:50:32.540984 master-0 kubenswrapper[7518]: healthz check failed Mar 13 12:50:32.540984 master-0 kubenswrapper[7518]: I0313 12:50:32.540987 7518 prober.go:107] "Probe failed" 
probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-wtf6j" podUID="45925a5e-41ae-4c19-b586-3151c7677612" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 12:50:32.542786 master-0 kubenswrapper[7518]: I0313 12:50:32.542765 7518 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress-operator_ingress-operator-677db989d6-ckl2j_2f79578c-bbfb-4968-893a-730deb4c01f9/ingress-operator/4.log" Mar 13 12:50:33.541544 master-0 kubenswrapper[7518]: I0313 12:50:33.541464 7518 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-wtf6j container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 12:50:33.541544 master-0 kubenswrapper[7518]: [-]has-synced failed: reason withheld Mar 13 12:50:33.541544 master-0 kubenswrapper[7518]: [+]process-running ok Mar 13 12:50:33.541544 master-0 kubenswrapper[7518]: healthz check failed Mar 13 12:50:33.542168 master-0 kubenswrapper[7518]: I0313 12:50:33.541570 7518 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-wtf6j" podUID="45925a5e-41ae-4c19-b586-3151c7677612" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 12:50:34.542104 master-0 kubenswrapper[7518]: I0313 12:50:34.542058 7518 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-wtf6j container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 12:50:34.542104 master-0 kubenswrapper[7518]: [-]has-synced failed: reason withheld Mar 13 12:50:34.542104 master-0 kubenswrapper[7518]: [+]process-running ok Mar 13 12:50:34.542104 master-0 kubenswrapper[7518]: healthz check failed Mar 13 12:50:34.542664 master-0 kubenswrapper[7518]: I0313 
12:50:34.542146 7518 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-wtf6j" podUID="45925a5e-41ae-4c19-b586-3151c7677612" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 12:50:34.597679 master-0 kubenswrapper[7518]: I0313 12:50:34.597632 7518 scope.go:117] "RemoveContainer" containerID="70ca563a3bda7cc49d130c71d95d6db991e5796cde50a910c3e63400c9e5a03b" Mar 13 12:50:34.597952 master-0 kubenswrapper[7518]: E0313 12:50:34.597928 7518 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=kube-controller-manager pod=bootstrap-kube-controller-manager-master-0_kube-system(f78c05e1499b533b83f091333d61f045)\"" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="f78c05e1499b533b83f091333d61f045" Mar 13 12:50:35.542560 master-0 kubenswrapper[7518]: I0313 12:50:35.542495 7518 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-wtf6j container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 12:50:35.542560 master-0 kubenswrapper[7518]: [-]has-synced failed: reason withheld Mar 13 12:50:35.542560 master-0 kubenswrapper[7518]: [+]process-running ok Mar 13 12:50:35.542560 master-0 kubenswrapper[7518]: healthz check failed Mar 13 12:50:35.543755 master-0 kubenswrapper[7518]: I0313 12:50:35.542569 7518 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-wtf6j" podUID="45925a5e-41ae-4c19-b586-3151c7677612" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 12:50:35.599396 master-0 kubenswrapper[7518]: I0313 12:50:35.599266 7518 scope.go:117] "RemoveContainer" 
containerID="fe386e2cfe3b2db8724e6c5ea7592f727d6d5b2317f95ae6fc7b814707b7e83a" Mar 13 12:50:35.599746 master-0 kubenswrapper[7518]: E0313 12:50:35.599696 7518 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"snapshot-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=snapshot-controller pod=csi-snapshot-controller-7577d6f48-pjpn2_openshift-cluster-storage-operator(c642c18f-f960-4418-bcb7-df884f8f8ad5)\"" pod="openshift-cluster-storage-operator/csi-snapshot-controller-7577d6f48-pjpn2" podUID="c642c18f-f960-4418-bcb7-df884f8f8ad5" Mar 13 12:50:36.542872 master-0 kubenswrapper[7518]: I0313 12:50:36.542794 7518 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-wtf6j container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 12:50:36.542872 master-0 kubenswrapper[7518]: [-]has-synced failed: reason withheld Mar 13 12:50:36.542872 master-0 kubenswrapper[7518]: [+]process-running ok Mar 13 12:50:36.542872 master-0 kubenswrapper[7518]: healthz check failed Mar 13 12:50:36.543944 master-0 kubenswrapper[7518]: I0313 12:50:36.542872 7518 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-wtf6j" podUID="45925a5e-41ae-4c19-b586-3151c7677612" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 12:50:37.014369 master-0 kubenswrapper[7518]: E0313 12:50:37.014052 7518 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status 
\"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-13T12:50:27Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-13T12:50:27Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-13T12:50:27Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-13T12:50:27Z\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"runc\\\"}]}}\" for node \"master-0\": Patch \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0/status?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 13 12:50:37.542549 master-0 kubenswrapper[7518]: I0313 12:50:37.542462 7518 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-wtf6j container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 12:50:37.542549 master-0 kubenswrapper[7518]: [-]has-synced failed: reason withheld Mar 13 12:50:37.542549 master-0 kubenswrapper[7518]: [+]process-running ok Mar 13 12:50:37.542549 master-0 kubenswrapper[7518]: healthz check failed Mar 13 12:50:37.542549 master-0 kubenswrapper[7518]: I0313 12:50:37.542538 7518 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-wtf6j" podUID="45925a5e-41ae-4c19-b586-3151c7677612" containerName="router" probeResult="failure" output="HTTP probe failed with 
statuscode: 500" Mar 13 12:50:38.542110 master-0 kubenswrapper[7518]: I0313 12:50:38.542000 7518 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-wtf6j container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 12:50:38.542110 master-0 kubenswrapper[7518]: [-]has-synced failed: reason withheld Mar 13 12:50:38.542110 master-0 kubenswrapper[7518]: [+]process-running ok Mar 13 12:50:38.542110 master-0 kubenswrapper[7518]: healthz check failed Mar 13 12:50:38.542110 master-0 kubenswrapper[7518]: I0313 12:50:38.542060 7518 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-wtf6j" podUID="45925a5e-41ae-4c19-b586-3151c7677612" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 12:50:39.543194 master-0 kubenswrapper[7518]: I0313 12:50:39.543087 7518 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-wtf6j container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 12:50:39.543194 master-0 kubenswrapper[7518]: [-]has-synced failed: reason withheld Mar 13 12:50:39.543194 master-0 kubenswrapper[7518]: [+]process-running ok Mar 13 12:50:39.543194 master-0 kubenswrapper[7518]: healthz check failed Mar 13 12:50:39.544328 master-0 kubenswrapper[7518]: I0313 12:50:39.543207 7518 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-wtf6j" podUID="45925a5e-41ae-4c19-b586-3151c7677612" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 12:50:40.542242 master-0 kubenswrapper[7518]: I0313 12:50:40.542108 7518 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-wtf6j container/router namespace/openshift-ingress: Startup 
probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 12:50:40.542242 master-0 kubenswrapper[7518]: [-]has-synced failed: reason withheld Mar 13 12:50:40.542242 master-0 kubenswrapper[7518]: [+]process-running ok Mar 13 12:50:40.542242 master-0 kubenswrapper[7518]: healthz check failed Mar 13 12:50:40.542721 master-0 kubenswrapper[7518]: I0313 12:50:40.542246 7518 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-wtf6j" podUID="45925a5e-41ae-4c19-b586-3151c7677612" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 12:50:41.542266 master-0 kubenswrapper[7518]: I0313 12:50:41.542187 7518 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-wtf6j container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 12:50:41.542266 master-0 kubenswrapper[7518]: [-]has-synced failed: reason withheld Mar 13 12:50:41.542266 master-0 kubenswrapper[7518]: [+]process-running ok Mar 13 12:50:41.542266 master-0 kubenswrapper[7518]: healthz check failed Mar 13 12:50:41.542890 master-0 kubenswrapper[7518]: I0313 12:50:41.542266 7518 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-wtf6j" podUID="45925a5e-41ae-4c19-b586-3151c7677612" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 12:50:42.542117 master-0 kubenswrapper[7518]: I0313 12:50:42.542059 7518 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-wtf6j container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 12:50:42.542117 master-0 kubenswrapper[7518]: [-]has-synced failed: reason withheld Mar 
13 12:50:42.542117 master-0 kubenswrapper[7518]: [+]process-running ok Mar 13 12:50:42.542117 master-0 kubenswrapper[7518]: healthz check failed Mar 13 12:50:42.542697 master-0 kubenswrapper[7518]: I0313 12:50:42.542163 7518 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-wtf6j" podUID="45925a5e-41ae-4c19-b586-3151c7677612" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 12:50:43.362023 master-0 kubenswrapper[7518]: E0313 12:50:43.361888 7518 event.go:359] "Server rejected event (will not retry!)" err="Timeout: request did not complete within requested timeout - context deadline exceeded" event="&Event{ObjectMeta:{bootstrap-kube-scheduler-master-0.189c6759c0be4154 kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:bootstrap-kube-scheduler-master-0,UID:a1a56802af72ce1aac6b5077f1695ac0,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler},},Reason:BackOff,Message:Back-off restarting failed container kube-scheduler in pod bootstrap-kube-scheduler-master-0_kube-system(a1a56802af72ce1aac6b5077f1695ac0),Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-13 12:46:28.460577108 +0000 UTC m=+543.093646285,LastTimestamp:2026-03-13 12:46:28.460577108 +0000 UTC m=+543.093646285,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 13 12:50:43.542484 master-0 kubenswrapper[7518]: I0313 12:50:43.542420 7518 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-wtf6j container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 12:50:43.542484 master-0 kubenswrapper[7518]: [-]has-synced failed: reason withheld Mar 13 
12:50:43.542484 master-0 kubenswrapper[7518]: [+]process-running ok Mar 13 12:50:43.542484 master-0 kubenswrapper[7518]: healthz check failed Mar 13 12:50:43.542484 master-0 kubenswrapper[7518]: I0313 12:50:43.542492 7518 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-wtf6j" podUID="45925a5e-41ae-4c19-b586-3151c7677612" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 12:50:44.541626 master-0 kubenswrapper[7518]: I0313 12:50:44.541553 7518 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-wtf6j container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 12:50:44.541626 master-0 kubenswrapper[7518]: [-]has-synced failed: reason withheld Mar 13 12:50:44.541626 master-0 kubenswrapper[7518]: [+]process-running ok Mar 13 12:50:44.541626 master-0 kubenswrapper[7518]: healthz check failed Mar 13 12:50:44.542335 master-0 kubenswrapper[7518]: I0313 12:50:44.542255 7518 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-wtf6j" podUID="45925a5e-41ae-4c19-b586-3151c7677612" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 12:50:45.541416 master-0 kubenswrapper[7518]: I0313 12:50:45.541321 7518 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-wtf6j container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 12:50:45.541416 master-0 kubenswrapper[7518]: [-]has-synced failed: reason withheld Mar 13 12:50:45.541416 master-0 kubenswrapper[7518]: [+]process-running ok Mar 13 12:50:45.541416 master-0 kubenswrapper[7518]: healthz check failed Mar 13 12:50:45.541416 master-0 kubenswrapper[7518]: I0313 12:50:45.541386 
7518 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-wtf6j" podUID="45925a5e-41ae-4c19-b586-3151c7677612" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 12:50:46.542741 master-0 kubenswrapper[7518]: I0313 12:50:46.542674 7518 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-wtf6j container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 12:50:46.542741 master-0 kubenswrapper[7518]: [-]has-synced failed: reason withheld Mar 13 12:50:46.542741 master-0 kubenswrapper[7518]: [+]process-running ok Mar 13 12:50:46.542741 master-0 kubenswrapper[7518]: healthz check failed Mar 13 12:50:46.543400 master-0 kubenswrapper[7518]: I0313 12:50:46.542757 7518 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-wtf6j" podUID="45925a5e-41ae-4c19-b586-3151c7677612" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 12:50:46.598378 master-0 kubenswrapper[7518]: I0313 12:50:46.598316 7518 scope.go:117] "RemoveContainer" containerID="25a4898dab96b21910d2f9f74a6d0f38ac67afd0471454539094f0cdc130c4f5" Mar 13 12:50:46.599520 master-0 kubenswrapper[7518]: E0313 12:50:46.598652 7518 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ingress-operator\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=ingress-operator pod=ingress-operator-677db989d6-ckl2j_openshift-ingress-operator(2f79578c-bbfb-4968-893a-730deb4c01f9)\"" pod="openshift-ingress-operator/ingress-operator-677db989d6-ckl2j" podUID="2f79578c-bbfb-4968-893a-730deb4c01f9" Mar 13 12:50:47.014969 master-0 kubenswrapper[7518]: E0313 12:50:47.014456 7518 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node 
\"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 13 12:50:47.541600 master-0 kubenswrapper[7518]: I0313 12:50:47.541532 7518 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-wtf6j container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 12:50:47.541600 master-0 kubenswrapper[7518]: [-]has-synced failed: reason withheld Mar 13 12:50:47.541600 master-0 kubenswrapper[7518]: [+]process-running ok Mar 13 12:50:47.541600 master-0 kubenswrapper[7518]: healthz check failed Mar 13 12:50:47.541600 master-0 kubenswrapper[7518]: I0313 12:50:47.541589 7518 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-wtf6j" podUID="45925a5e-41ae-4c19-b586-3151c7677612" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 12:50:47.598172 master-0 kubenswrapper[7518]: I0313 12:50:47.598108 7518 scope.go:117] "RemoveContainer" containerID="70ca563a3bda7cc49d130c71d95d6db991e5796cde50a910c3e63400c9e5a03b" Mar 13 12:50:47.598992 master-0 kubenswrapper[7518]: E0313 12:50:47.598344 7518 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=kube-controller-manager pod=bootstrap-kube-controller-manager-master-0_kube-system(f78c05e1499b533b83f091333d61f045)\"" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="f78c05e1499b533b83f091333d61f045" Mar 13 12:50:48.540985 master-0 kubenswrapper[7518]: I0313 12:50:48.540923 7518 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-wtf6j container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with 
statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 12:50:48.540985 master-0 kubenswrapper[7518]: [-]has-synced failed: reason withheld Mar 13 12:50:48.540985 master-0 kubenswrapper[7518]: [+]process-running ok Mar 13 12:50:48.540985 master-0 kubenswrapper[7518]: healthz check failed Mar 13 12:50:48.541407 master-0 kubenswrapper[7518]: I0313 12:50:48.541014 7518 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-wtf6j" podUID="45925a5e-41ae-4c19-b586-3151c7677612" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 12:50:49.048960 master-0 kubenswrapper[7518]: E0313 12:50:49.048867 7518 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="7s" Mar 13 12:50:49.544674 master-0 kubenswrapper[7518]: I0313 12:50:49.544612 7518 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-etcd/etcd-master-0"] Mar 13 12:50:49.547603 master-0 kubenswrapper[7518]: I0313 12:50:49.547540 7518 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-wtf6j container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 12:50:49.547603 master-0 kubenswrapper[7518]: [-]has-synced failed: reason withheld Mar 13 12:50:49.547603 master-0 kubenswrapper[7518]: [+]process-running ok Mar 13 12:50:49.547603 master-0 kubenswrapper[7518]: healthz check failed Mar 13 12:50:49.547927 master-0 kubenswrapper[7518]: I0313 12:50:49.547613 7518 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-wtf6j" podUID="45925a5e-41ae-4c19-b586-3151c7677612" containerName="router" 
probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 12:50:49.554610 master-0 kubenswrapper[7518]: I0313 12:50:49.554555 7518 kubelet.go:1914] "Deleted mirror pod because it is outdated" pod="openshift-etcd/etcd-master-0" Mar 13 12:50:49.567959 master-0 kubenswrapper[7518]: I0313 12:50:49.567908 7518 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-etcd/etcd-master-0"] Mar 13 12:50:49.577161 master-0 kubenswrapper[7518]: I0313 12:50:49.575788 7518 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-multus/cni-sysctl-allowlist-ds-zhgx2"] Mar 13 12:50:49.580356 master-0 kubenswrapper[7518]: I0313 12:50:49.580314 7518 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-multus/cni-sysctl-allowlist-ds-zhgx2"] Mar 13 12:50:49.598680 master-0 kubenswrapper[7518]: I0313 12:50:49.598604 7518 scope.go:117] "RemoveContainer" containerID="fe386e2cfe3b2db8724e6c5ea7592f727d6d5b2317f95ae6fc7b814707b7e83a" Mar 13 12:50:49.598891 master-0 kubenswrapper[7518]: E0313 12:50:49.598815 7518 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"snapshot-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=snapshot-controller pod=csi-snapshot-controller-7577d6f48-pjpn2_openshift-cluster-storage-operator(c642c18f-f960-4418-bcb7-df884f8f8ad5)\"" pod="openshift-cluster-storage-operator/csi-snapshot-controller-7577d6f48-pjpn2" podUID="c642c18f-f960-4418-bcb7-df884f8f8ad5" Mar 13 12:50:49.606722 master-0 kubenswrapper[7518]: I0313 12:50:49.606671 7518 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e0bb348a-f72d-462e-aec9-04e4600cc7f0" path="/var/lib/kubelet/pods/e0bb348a-f72d-462e-aec9-04e4600cc7f0/volumes" Mar 13 12:50:49.679121 master-0 kubenswrapper[7518]: I0313 12:50:49.679043 7518 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-machine-api_cluster-baremetal-operator-5cdb4c5598-l6jp5_317af639-269e-4163-8e24-fcea468b9352/cluster-baremetal-operator/2.log" Mar 13 12:50:49.679385 master-0 kubenswrapper[7518]: I0313 12:50:49.679375 7518 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_cluster-baremetal-operator-5cdb4c5598-l6jp5_317af639-269e-4163-8e24-fcea468b9352/cluster-baremetal-operator/1.log" Mar 13 12:50:49.679646 master-0 kubenswrapper[7518]: I0313 12:50:49.679609 7518 generic.go:334] "Generic (PLEG): container finished" podID="317af639-269e-4163-8e24-fcea468b9352" containerID="15cedcb1b8553ec2f730223913ef265bc163bb67b8745c32aa558c39edcca0ac" exitCode=1 Mar 13 12:50:49.679646 master-0 kubenswrapper[7518]: I0313 12:50:49.679640 7518 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/cluster-baremetal-operator-5cdb4c5598-l6jp5" event={"ID":"317af639-269e-4163-8e24-fcea468b9352","Type":"ContainerDied","Data":"15cedcb1b8553ec2f730223913ef265bc163bb67b8745c32aa558c39edcca0ac"} Mar 13 12:50:49.679743 master-0 kubenswrapper[7518]: I0313 12:50:49.679671 7518 scope.go:117] "RemoveContainer" containerID="95a6bd22fb6c0c4b1137634707e1ef04230dc603ccfcb3a303f17aa2b6d154e3" Mar 13 12:50:49.680222 master-0 kubenswrapper[7518]: I0313 12:50:49.680181 7518 scope.go:117] "RemoveContainer" containerID="15cedcb1b8553ec2f730223913ef265bc163bb67b8745c32aa558c39edcca0ac" Mar 13 12:50:49.680440 master-0 kubenswrapper[7518]: E0313 12:50:49.680376 7518 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cluster-baremetal-operator\" with CrashLoopBackOff: \"back-off 20s restarting failed container=cluster-baremetal-operator pod=cluster-baremetal-operator-5cdb4c5598-l6jp5_openshift-machine-api(317af639-269e-4163-8e24-fcea468b9352)\"" pod="openshift-machine-api/cluster-baremetal-operator-5cdb4c5598-l6jp5" podUID="317af639-269e-4163-8e24-fcea468b9352" Mar 13 12:50:50.541769 master-0 
kubenswrapper[7518]: I0313 12:50:50.541691 7518 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-wtf6j container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 12:50:50.541769 master-0 kubenswrapper[7518]: [-]has-synced failed: reason withheld Mar 13 12:50:50.541769 master-0 kubenswrapper[7518]: [+]process-running ok Mar 13 12:50:50.541769 master-0 kubenswrapper[7518]: healthz check failed Mar 13 12:50:50.542280 master-0 kubenswrapper[7518]: I0313 12:50:50.541783 7518 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-wtf6j" podUID="45925a5e-41ae-4c19-b586-3151c7677612" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 12:50:50.687653 master-0 kubenswrapper[7518]: I0313 12:50:50.687600 7518 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_cluster-baremetal-operator-5cdb4c5598-l6jp5_317af639-269e-4163-8e24-fcea468b9352/cluster-baremetal-operator/2.log" Mar 13 12:50:51.542048 master-0 kubenswrapper[7518]: I0313 12:50:51.541970 7518 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-wtf6j container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 12:50:51.542048 master-0 kubenswrapper[7518]: [-]has-synced failed: reason withheld Mar 13 12:50:51.542048 master-0 kubenswrapper[7518]: [+]process-running ok Mar 13 12:50:51.542048 master-0 kubenswrapper[7518]: healthz check failed Mar 13 12:50:51.542048 master-0 kubenswrapper[7518]: I0313 12:50:51.542044 7518 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-wtf6j" podUID="45925a5e-41ae-4c19-b586-3151c7677612" containerName="router" probeResult="failure" output="HTTP probe 
failed with statuscode: 500" Mar 13 12:50:52.542218 master-0 kubenswrapper[7518]: I0313 12:50:52.542087 7518 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-wtf6j container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 12:50:52.542218 master-0 kubenswrapper[7518]: [-]has-synced failed: reason withheld Mar 13 12:50:52.542218 master-0 kubenswrapper[7518]: [+]process-running ok Mar 13 12:50:52.542218 master-0 kubenswrapper[7518]: healthz check failed Mar 13 12:50:52.542218 master-0 kubenswrapper[7518]: I0313 12:50:52.542176 7518 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-wtf6j" podUID="45925a5e-41ae-4c19-b586-3151c7677612" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 12:50:53.277933 master-0 kubenswrapper[7518]: I0313 12:50:53.277887 7518 patch_prober.go:28] interesting pod/openshift-config-operator-64488f9d78-t8fb4 container/openshift-config-operator namespace/openshift-config-operator: Liveness probe status=failure output="Get \"https://10.128.0.6:8443/healthz\": dial tcp 10.128.0.6:8443: connect: connection refused" start-of-body= Mar 13 12:50:53.278175 master-0 kubenswrapper[7518]: I0313 12:50:53.277951 7518 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-config-operator/openshift-config-operator-64488f9d78-t8fb4" podUID="f0803181-4e37-43fa-8ddc-9c76d3f61817" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.6:8443/healthz\": dial tcp 10.128.0.6:8443: connect: connection refused" Mar 13 12:50:53.541863 master-0 kubenswrapper[7518]: I0313 12:50:53.541807 7518 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-wtf6j container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" 
start-of-body=[-]backend-http failed: reason withheld Mar 13 12:50:53.541863 master-0 kubenswrapper[7518]: [-]has-synced failed: reason withheld Mar 13 12:50:53.541863 master-0 kubenswrapper[7518]: [+]process-running ok Mar 13 12:50:53.541863 master-0 kubenswrapper[7518]: healthz check failed Mar 13 12:50:53.541863 master-0 kubenswrapper[7518]: I0313 12:50:53.541869 7518 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-wtf6j" podUID="45925a5e-41ae-4c19-b586-3151c7677612" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 12:50:53.575669 master-0 kubenswrapper[7518]: I0313 12:50:53.575603 7518 prober.go:107] "Probe failed" probeType="Liveness" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="f78c05e1499b533b83f091333d61f045" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://localhost:10357/healthz\": dial tcp [::1]:10357: connect: connection refused" Mar 13 12:50:53.716259 master-0 kubenswrapper[7518]: I0313 12:50:53.716084 7518 generic.go:334] "Generic (PLEG): container finished" podID="f78c05e1499b533b83f091333d61f045" containerID="982c1c225b535e0fa3c9e5b01c4c3960b52c601ea135812c4af51bc13c9b4e1a" exitCode=0 Mar 13 12:50:53.716259 master-0 kubenswrapper[7518]: I0313 12:50:53.716177 7518 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-controller-manager-master-0" event={"ID":"f78c05e1499b533b83f091333d61f045","Type":"ContainerDied","Data":"982c1c225b535e0fa3c9e5b01c4c3960b52c601ea135812c4af51bc13c9b4e1a"} Mar 13 12:50:53.716814 master-0 kubenswrapper[7518]: I0313 12:50:53.716775 7518 scope.go:117] "RemoveContainer" containerID="70ca563a3bda7cc49d130c71d95d6db991e5796cde50a910c3e63400c9e5a03b" Mar 13 12:50:53.716814 master-0 kubenswrapper[7518]: I0313 12:50:53.716805 7518 scope.go:117] "RemoveContainer" containerID="982c1c225b535e0fa3c9e5b01c4c3960b52c601ea135812c4af51bc13c9b4e1a" 
Mar 13 12:50:53.719749 master-0 kubenswrapper[7518]: I0313 12:50:53.719679 7518 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-node-tuning-operator_cluster-node-tuning-operator-66c7586884-cz8pc_3020d236-03e0-4916-97dd-f1085632ca43/cluster-node-tuning-operator/0.log" Mar 13 12:50:53.719749 master-0 kubenswrapper[7518]: I0313 12:50:53.719709 7518 generic.go:334] "Generic (PLEG): container finished" podID="3020d236-03e0-4916-97dd-f1085632ca43" containerID="89639adb88716cbb87bdb25b40c5ec231bc4f7820ddcadae78f527661f5a5581" exitCode=1 Mar 13 12:50:53.719848 master-0 kubenswrapper[7518]: I0313 12:50:53.719757 7518 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-66c7586884-cz8pc" event={"ID":"3020d236-03e0-4916-97dd-f1085632ca43","Type":"ContainerDied","Data":"89639adb88716cbb87bdb25b40c5ec231bc4f7820ddcadae78f527661f5a5581"} Mar 13 12:50:53.720238 master-0 kubenswrapper[7518]: I0313 12:50:53.720211 7518 scope.go:117] "RemoveContainer" containerID="89639adb88716cbb87bdb25b40c5ec231bc4f7820ddcadae78f527661f5a5581" Mar 13 12:50:53.722446 master-0 kubenswrapper[7518]: I0313 12:50:53.722406 7518 generic.go:334] "Generic (PLEG): container finished" podID="c0f3e81c-f61d-430a-98e8-82e3b283fc73" containerID="4db2bc5c40e8683ca741e5bf890d717d8c9fa9c48b7ac41671352e56a94462da" exitCode=0 Mar 13 12:50:53.722542 master-0 kubenswrapper[7518]: I0313 12:50:53.722451 7518 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-84bfdbbb7f-4pksg" event={"ID":"c0f3e81c-f61d-430a-98e8-82e3b283fc73","Type":"ContainerDied","Data":"4db2bc5c40e8683ca741e5bf890d717d8c9fa9c48b7ac41671352e56a94462da"} Mar 13 12:50:53.722883 master-0 kubenswrapper[7518]: I0313 12:50:53.722825 7518 scope.go:117] "RemoveContainer" containerID="4db2bc5c40e8683ca741e5bf890d717d8c9fa9c48b7ac41671352e56a94462da" Mar 13 12:50:53.731313 master-0 kubenswrapper[7518]: I0313 12:50:53.731268 7518 
generic.go:334] "Generic (PLEG): container finished" podID="bcf05594-4c10-4b54-a47c-d55e323f1f87" containerID="f4a916875b5dd7f287df508905d5d99ad3dbd91629a2c95a805f4ab66aa7996e" exitCode=0 Mar 13 12:50:53.731404 master-0 kubenswrapper[7518]: I0313 12:50:53.731350 7518 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-86d6d77c7c-q287n" event={"ID":"bcf05594-4c10-4b54-a47c-d55e323f1f87","Type":"ContainerDied","Data":"f4a916875b5dd7f287df508905d5d99ad3dbd91629a2c95a805f4ab66aa7996e"} Mar 13 12:50:53.732027 master-0 kubenswrapper[7518]: I0313 12:50:53.731994 7518 scope.go:117] "RemoveContainer" containerID="f4a916875b5dd7f287df508905d5d99ad3dbd91629a2c95a805f4ab66aa7996e" Mar 13 12:50:53.742079 master-0 kubenswrapper[7518]: I0313 12:50:53.739458 7518 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-config-operator_openshift-config-operator-64488f9d78-t8fb4_f0803181-4e37-43fa-8ddc-9c76d3f61817/openshift-config-operator/0.log" Mar 13 12:50:53.742451 master-0 kubenswrapper[7518]: I0313 12:50:53.742082 7518 generic.go:334] "Generic (PLEG): container finished" podID="f0803181-4e37-43fa-8ddc-9c76d3f61817" containerID="d775030cc9a2d771094d53b9310bcf873da42c7c6da6ec2e4bea962d923e448e" exitCode=0 Mar 13 12:50:53.742451 master-0 kubenswrapper[7518]: I0313 12:50:53.742130 7518 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-64488f9d78-t8fb4" event={"ID":"f0803181-4e37-43fa-8ddc-9c76d3f61817","Type":"ContainerDied","Data":"d775030cc9a2d771094d53b9310bcf873da42c7c6da6ec2e4bea962d923e448e"} Mar 13 12:50:53.742451 master-0 kubenswrapper[7518]: I0313 12:50:53.742294 7518 scope.go:117] "RemoveContainer" containerID="6b3b1b1d996a5cfa81d2f82133cbb61df8d0101269e29c3d8745b628b44289f9" Mar 13 12:50:53.748488 master-0 kubenswrapper[7518]: I0313 12:50:53.743216 7518 scope.go:117] "RemoveContainer" 
containerID="d775030cc9a2d771094d53b9310bcf873da42c7c6da6ec2e4bea962d923e448e" Mar 13 12:50:53.815629 master-0 kubenswrapper[7518]: I0313 12:50:53.815572 7518 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-64488f9d78-t8fb4" Mar 13 12:50:54.025865 master-0 kubenswrapper[7518]: E0313 12:50:54.025832 7518 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=kube-controller-manager pod=bootstrap-kube-controller-manager-master-0_kube-system(f78c05e1499b533b83f091333d61f045)\"" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="f78c05e1499b533b83f091333d61f045" Mar 13 12:50:54.201557 master-0 kubenswrapper[7518]: E0313 12:50:54.201521 7518 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod15b592d6_3c48_45d4_9172_d28632ae8995.slice/crio-conmon-5d11669c933e022e2eb1221b72c8dfc83094667fb6b7c0cba300ddb5b306a9d7.scope\": RecentStats: unable to find data in memory cache]" Mar 13 12:50:54.541521 master-0 kubenswrapper[7518]: I0313 12:50:54.541448 7518 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-wtf6j container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 12:50:54.541521 master-0 kubenswrapper[7518]: [-]has-synced failed: reason withheld Mar 13 12:50:54.541521 master-0 kubenswrapper[7518]: [+]process-running ok Mar 13 12:50:54.541521 master-0 kubenswrapper[7518]: healthz check failed Mar 13 12:50:54.541521 master-0 kubenswrapper[7518]: I0313 12:50:54.541514 7518 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-wtf6j" 
podUID="45925a5e-41ae-4c19-b586-3151c7677612" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 12:50:54.750079 master-0 kubenswrapper[7518]: I0313 12:50:54.750016 7518 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-node-tuning-operator_cluster-node-tuning-operator-66c7586884-cz8pc_3020d236-03e0-4916-97dd-f1085632ca43/cluster-node-tuning-operator/0.log" Mar 13 12:50:54.750647 master-0 kubenswrapper[7518]: I0313 12:50:54.750217 7518 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-66c7586884-cz8pc" event={"ID":"3020d236-03e0-4916-97dd-f1085632ca43","Type":"ContainerStarted","Data":"184fd82888bb461cf3a198b029555bf20326d20d1a4e26522034957e53004534"} Mar 13 12:50:54.751967 master-0 kubenswrapper[7518]: I0313 12:50:54.751935 7518 generic.go:334] "Generic (PLEG): container finished" podID="15b592d6-3c48-45d4-9172-d28632ae8995" containerID="5d11669c933e022e2eb1221b72c8dfc83094667fb6b7c0cba300ddb5b306a9d7" exitCode=0 Mar 13 12:50:54.752107 master-0 kubenswrapper[7518]: I0313 12:50:54.752021 7518 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-5884b9cd56-hjzms" event={"ID":"15b592d6-3c48-45d4-9172-d28632ae8995","Type":"ContainerDied","Data":"5d11669c933e022e2eb1221b72c8dfc83094667fb6b7c0cba300ddb5b306a9d7"} Mar 13 12:50:54.752260 master-0 kubenswrapper[7518]: I0313 12:50:54.752244 7518 scope.go:117] "RemoveContainer" containerID="c3cc4d20a3385510f2813df129cea65d1b836444e4586b47995a2d6b48933eba" Mar 13 12:50:54.752812 master-0 kubenswrapper[7518]: I0313 12:50:54.752787 7518 scope.go:117] "RemoveContainer" containerID="5d11669c933e022e2eb1221b72c8dfc83094667fb6b7c0cba300ddb5b306a9d7" Mar 13 12:50:54.755812 master-0 kubenswrapper[7518]: I0313 12:50:54.755780 7518 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-84bfdbbb7f-4pksg" 
event={"ID":"c0f3e81c-f61d-430a-98e8-82e3b283fc73","Type":"ContainerStarted","Data":"5b374b425addd394c759c06a1075aa269e22132736b39c7bcbb72dc36e11eaa3"} Mar 13 12:50:54.757477 master-0 kubenswrapper[7518]: I0313 12:50:54.757454 7518 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-86d6d77c7c-q287n" event={"ID":"bcf05594-4c10-4b54-a47c-d55e323f1f87","Type":"ContainerStarted","Data":"9b5719902ba9b9439ff93aecae3b9590be723c2c97629bfc9eb6857fdca94224"} Mar 13 12:50:54.767713 master-0 kubenswrapper[7518]: I0313 12:50:54.767676 7518 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-64488f9d78-t8fb4" event={"ID":"f0803181-4e37-43fa-8ddc-9c76d3f61817","Type":"ContainerStarted","Data":"1ff41a201d4a84dbb0344337df256835e6a14ba7e5c0057366f4417ce40bfd03"} Mar 13 12:50:54.768291 master-0 kubenswrapper[7518]: I0313 12:50:54.768260 7518 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-64488f9d78-t8fb4" Mar 13 12:50:54.772043 master-0 kubenswrapper[7518]: I0313 12:50:54.771988 7518 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-controller-manager-master-0" event={"ID":"f78c05e1499b533b83f091333d61f045","Type":"ContainerStarted","Data":"900938afabac5fa1e088933b80603fec360ec7d1d114a7496946027bc2a16500"} Mar 13 12:50:54.772676 master-0 kubenswrapper[7518]: I0313 12:50:54.772642 7518 scope.go:117] "RemoveContainer" containerID="70ca563a3bda7cc49d130c71d95d6db991e5796cde50a910c3e63400c9e5a03b" Mar 13 12:50:54.772948 master-0 kubenswrapper[7518]: E0313 12:50:54.772913 7518 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=kube-controller-manager 
pod=bootstrap-kube-controller-manager-master-0_kube-system(f78c05e1499b533b83f091333d61f045)\"" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="f78c05e1499b533b83f091333d61f045" Mar 13 12:50:55.542279 master-0 kubenswrapper[7518]: I0313 12:50:55.542182 7518 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-wtf6j container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 12:50:55.542279 master-0 kubenswrapper[7518]: [-]has-synced failed: reason withheld Mar 13 12:50:55.542279 master-0 kubenswrapper[7518]: [+]process-running ok Mar 13 12:50:55.542279 master-0 kubenswrapper[7518]: healthz check failed Mar 13 12:50:55.542279 master-0 kubenswrapper[7518]: I0313 12:50:55.542244 7518 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-wtf6j" podUID="45925a5e-41ae-4c19-b586-3151c7677612" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 12:50:55.780783 master-0 kubenswrapper[7518]: I0313 12:50:55.780693 7518 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-5884b9cd56-hjzms" event={"ID":"15b592d6-3c48-45d4-9172-d28632ae8995","Type":"ContainerStarted","Data":"3a35c91e8574bfebcfe48cb045711b3b25a8179dcba5055b2cf25c9a85b2df54"} Mar 13 12:50:56.070265 master-0 kubenswrapper[7518]: I0313 12:50:56.070195 7518 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 13 12:50:56.070771 master-0 kubenswrapper[7518]: I0313 12:50:56.070736 7518 scope.go:117] "RemoveContainer" containerID="70ca563a3bda7cc49d130c71d95d6db991e5796cde50a910c3e63400c9e5a03b" Mar 13 12:50:56.070995 master-0 kubenswrapper[7518]: E0313 12:50:56.070957 7518 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" 
for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=kube-controller-manager pod=bootstrap-kube-controller-manager-master-0_kube-system(f78c05e1499b533b83f091333d61f045)\"" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="f78c05e1499b533b83f091333d61f045" Mar 13 12:50:56.454554 master-0 kubenswrapper[7518]: I0313 12:50:56.454408 7518 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 13 12:50:56.542706 master-0 kubenswrapper[7518]: I0313 12:50:56.542610 7518 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-wtf6j container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 12:50:56.542706 master-0 kubenswrapper[7518]: [-]has-synced failed: reason withheld Mar 13 12:50:56.542706 master-0 kubenswrapper[7518]: [+]process-running ok Mar 13 12:50:56.542706 master-0 kubenswrapper[7518]: healthz check failed Mar 13 12:50:56.542706 master-0 kubenswrapper[7518]: I0313 12:50:56.542691 7518 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-wtf6j" podUID="45925a5e-41ae-4c19-b586-3151c7677612" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 12:50:56.792054 master-0 kubenswrapper[7518]: I0313 12:50:56.791998 7518 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_cluster-autoscaler-operator-69576476f7-sqndx_d44112d1-b2a5-4b8d-b74d-1e91638508d5/cluster-autoscaler-operator/0.log" Mar 13 12:50:56.792593 master-0 kubenswrapper[7518]: I0313 12:50:56.792455 7518 generic.go:334] "Generic (PLEG): container finished" podID="d44112d1-b2a5-4b8d-b74d-1e91638508d5" containerID="aeb8cd6b223367e97ad7707f8724ad7c61808803218a16a895fbd3c7f77d6e4e" exitCode=255 Mar 13 12:50:56.792593 master-0 
kubenswrapper[7518]: I0313 12:50:56.792524 7518 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/cluster-autoscaler-operator-69576476f7-sqndx" event={"ID":"d44112d1-b2a5-4b8d-b74d-1e91638508d5","Type":"ContainerDied","Data":"aeb8cd6b223367e97ad7707f8724ad7c61808803218a16a895fbd3c7f77d6e4e"} Mar 13 12:50:56.793314 master-0 kubenswrapper[7518]: I0313 12:50:56.793275 7518 scope.go:117] "RemoveContainer" containerID="70ca563a3bda7cc49d130c71d95d6db991e5796cde50a910c3e63400c9e5a03b" Mar 13 12:50:56.793377 master-0 kubenswrapper[7518]: I0313 12:50:56.793331 7518 scope.go:117] "RemoveContainer" containerID="aeb8cd6b223367e97ad7707f8724ad7c61808803218a16a895fbd3c7f77d6e4e" Mar 13 12:50:56.793565 master-0 kubenswrapper[7518]: E0313 12:50:56.793532 7518 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=kube-controller-manager pod=bootstrap-kube-controller-manager-master-0_kube-system(f78c05e1499b533b83f091333d61f045)\"" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="f78c05e1499b533b83f091333d61f045" Mar 13 12:50:57.015567 master-0 kubenswrapper[7518]: E0313 12:50:57.015405 7518 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 13 12:50:57.543084 master-0 kubenswrapper[7518]: I0313 12:50:57.543000 7518 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-wtf6j container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 12:50:57.543084 master-0 kubenswrapper[7518]: [-]has-synced failed: reason withheld Mar 13 12:50:57.543084 master-0 
kubenswrapper[7518]: [+]process-running ok Mar 13 12:50:57.543084 master-0 kubenswrapper[7518]: healthz check failed Mar 13 12:50:57.543084 master-0 kubenswrapper[7518]: I0313 12:50:57.543082 7518 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-wtf6j" podUID="45925a5e-41ae-4c19-b586-3151c7677612" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 12:50:57.803934 master-0 kubenswrapper[7518]: I0313 12:50:57.803743 7518 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_cluster-autoscaler-operator-69576476f7-sqndx_d44112d1-b2a5-4b8d-b74d-1e91638508d5/cluster-autoscaler-operator/0.log" Mar 13 12:50:57.805038 master-0 kubenswrapper[7518]: I0313 12:50:57.804342 7518 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/cluster-autoscaler-operator-69576476f7-sqndx" event={"ID":"d44112d1-b2a5-4b8d-b74d-1e91638508d5","Type":"ContainerStarted","Data":"7f858937d070eba6cb1fba537e2bb309b78e5b5338c0ad5d83845eb5d69f9e37"} Mar 13 12:50:58.543100 master-0 kubenswrapper[7518]: I0313 12:50:58.542988 7518 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-wtf6j container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 12:50:58.543100 master-0 kubenswrapper[7518]: [-]has-synced failed: reason withheld Mar 13 12:50:58.543100 master-0 kubenswrapper[7518]: [+]process-running ok Mar 13 12:50:58.543100 master-0 kubenswrapper[7518]: healthz check failed Mar 13 12:50:58.543659 master-0 kubenswrapper[7518]: I0313 12:50:58.543133 7518 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-wtf6j" podUID="45925a5e-41ae-4c19-b586-3151c7677612" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 12:50:58.599011 master-0 
kubenswrapper[7518]: I0313 12:50:58.598936 7518 scope.go:117] "RemoveContainer" containerID="25a4898dab96b21910d2f9f74a6d0f38ac67afd0471454539094f0cdc130c4f5" Mar 13 12:50:58.599519 master-0 kubenswrapper[7518]: E0313 12:50:58.599475 7518 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ingress-operator\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=ingress-operator pod=ingress-operator-677db989d6-ckl2j_openshift-ingress-operator(2f79578c-bbfb-4968-893a-730deb4c01f9)\"" pod="openshift-ingress-operator/ingress-operator-677db989d6-ckl2j" podUID="2f79578c-bbfb-4968-893a-730deb4c01f9" Mar 13 12:50:59.070992 master-0 kubenswrapper[7518]: I0313 12:50:59.070893 7518 prober.go:107] "Probe failed" probeType="Startup" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="f78c05e1499b533b83f091333d61f045" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://localhost:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 13 12:50:59.542880 master-0 kubenswrapper[7518]: I0313 12:50:59.542807 7518 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-wtf6j container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 12:50:59.542880 master-0 kubenswrapper[7518]: [-]has-synced failed: reason withheld Mar 13 12:50:59.542880 master-0 kubenswrapper[7518]: [+]process-running ok Mar 13 12:50:59.542880 master-0 kubenswrapper[7518]: healthz check failed Mar 13 12:50:59.543316 master-0 kubenswrapper[7518]: I0313 12:50:59.542896 7518 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-wtf6j" podUID="45925a5e-41ae-4c19-b586-3151c7677612" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 
13 12:51:00.278800 master-0 kubenswrapper[7518]: I0313 12:51:00.278677 7518 patch_prober.go:28] interesting pod/openshift-config-operator-64488f9d78-t8fb4 container/openshift-config-operator namespace/openshift-config-operator: Liveness probe status=failure output="Get \"https://10.128.0.6:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 13 12:51:00.279545 master-0 kubenswrapper[7518]: I0313 12:51:00.278817 7518 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-config-operator/openshift-config-operator-64488f9d78-t8fb4" podUID="f0803181-4e37-43fa-8ddc-9c76d3f61817" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.6:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 13 12:51:00.542832 master-0 kubenswrapper[7518]: I0313 12:51:00.542721 7518 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-wtf6j container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 12:51:00.542832 master-0 kubenswrapper[7518]: [-]has-synced failed: reason withheld Mar 13 12:51:00.542832 master-0 kubenswrapper[7518]: [+]process-running ok Mar 13 12:51:00.542832 master-0 kubenswrapper[7518]: healthz check failed Mar 13 12:51:00.543342 master-0 kubenswrapper[7518]: I0313 12:51:00.542827 7518 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-wtf6j" podUID="45925a5e-41ae-4c19-b586-3151c7677612" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 12:51:00.816648 master-0 kubenswrapper[7518]: I0313 12:51:00.816427 7518 patch_prober.go:28] interesting pod/openshift-config-operator-64488f9d78-t8fb4 container/openshift-config-operator 
namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.128.0.6:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 13 12:51:00.816648 master-0 kubenswrapper[7518]: I0313 12:51:00.816528 7518 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-64488f9d78-t8fb4" podUID="f0803181-4e37-43fa-8ddc-9c76d3f61817" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.6:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 13 12:51:00.833089 master-0 kubenswrapper[7518]: I0313 12:51:00.832981 7518 generic.go:334] "Generic (PLEG): container finished" podID="d7d67915-d31e-46dc-bb2e-1a6f689dd875" containerID="39a04612253a7a25dd9ded024c4c70cc0d933a3064b287c0c85c828db13d75e3" exitCode=0 Mar 13 12:51:00.833428 master-0 kubenswrapper[7518]: I0313 12:51:00.833101 7518 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/cluster-storage-operator-6fbfc8dc8f-jhtsp" event={"ID":"d7d67915-d31e-46dc-bb2e-1a6f689dd875","Type":"ContainerDied","Data":"39a04612253a7a25dd9ded024c4c70cc0d933a3064b287c0c85c828db13d75e3"} Mar 13 12:51:00.834024 master-0 kubenswrapper[7518]: I0313 12:51:00.833971 7518 scope.go:117] "RemoveContainer" containerID="39a04612253a7a25dd9ded024c4c70cc0d933a3064b287c0c85c828db13d75e3" Mar 13 12:51:00.836543 master-0 kubenswrapper[7518]: I0313 12:51:00.836473 7518 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-84bf6db4f9-mjxcz_d5f63b6b-990a-444b-a954-d718036f2f6c/machine-api-operator/0.log" Mar 13 12:51:00.837461 master-0 kubenswrapper[7518]: I0313 12:51:00.837308 7518 generic.go:334] "Generic (PLEG): container finished" podID="d5f63b6b-990a-444b-a954-d718036f2f6c" 
containerID="a1bfd1c6ad70388a89e3729992c8e63cc9ebf64d39d05c00f30ae59118fb80de" exitCode=255 Mar 13 12:51:00.837461 master-0 kubenswrapper[7518]: I0313 12:51:00.837381 7518 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-84bf6db4f9-mjxcz" event={"ID":"d5f63b6b-990a-444b-a954-d718036f2f6c","Type":"ContainerDied","Data":"a1bfd1c6ad70388a89e3729992c8e63cc9ebf64d39d05c00f30ae59118fb80de"} Mar 13 12:51:00.838258 master-0 kubenswrapper[7518]: I0313 12:51:00.838211 7518 scope.go:117] "RemoveContainer" containerID="a1bfd1c6ad70388a89e3729992c8e63cc9ebf64d39d05c00f30ae59118fb80de" Mar 13 12:51:01.543111 master-0 kubenswrapper[7518]: I0313 12:51:01.543000 7518 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-wtf6j container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 12:51:01.543111 master-0 kubenswrapper[7518]: [-]has-synced failed: reason withheld Mar 13 12:51:01.543111 master-0 kubenswrapper[7518]: [+]process-running ok Mar 13 12:51:01.543111 master-0 kubenswrapper[7518]: healthz check failed Mar 13 12:51:01.543111 master-0 kubenswrapper[7518]: I0313 12:51:01.543097 7518 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-wtf6j" podUID="45925a5e-41ae-4c19-b586-3151c7677612" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 12:51:01.598863 master-0 kubenswrapper[7518]: I0313 12:51:01.598774 7518 scope.go:117] "RemoveContainer" containerID="15cedcb1b8553ec2f730223913ef265bc163bb67b8745c32aa558c39edcca0ac" Mar 13 12:51:01.599222 master-0 kubenswrapper[7518]: E0313 12:51:01.599071 7518 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cluster-baremetal-operator\" with CrashLoopBackOff: \"back-off 20s restarting failed 
container=cluster-baremetal-operator pod=cluster-baremetal-operator-5cdb4c5598-l6jp5_openshift-machine-api(317af639-269e-4163-8e24-fcea468b9352)\"" pod="openshift-machine-api/cluster-baremetal-operator-5cdb4c5598-l6jp5" podUID="317af639-269e-4163-8e24-fcea468b9352" Mar 13 12:51:01.846828 master-0 kubenswrapper[7518]: I0313 12:51:01.846660 7518 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/cluster-storage-operator-6fbfc8dc8f-jhtsp" event={"ID":"d7d67915-d31e-46dc-bb2e-1a6f689dd875","Type":"ContainerStarted","Data":"263a58b98f555d3f7c2ff38e0626589e09267980de8768725ea1d731638779d8"} Mar 13 12:51:01.848827 master-0 kubenswrapper[7518]: I0313 12:51:01.848796 7518 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-84bf6db4f9-mjxcz_d5f63b6b-990a-444b-a954-d718036f2f6c/machine-api-operator/0.log" Mar 13 12:51:01.849315 master-0 kubenswrapper[7518]: I0313 12:51:01.849269 7518 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-84bf6db4f9-mjxcz" event={"ID":"d5f63b6b-990a-444b-a954-d718036f2f6c","Type":"ContainerStarted","Data":"c253d045b400731d730897c8c1944c24ddd955de99f2e4f908a129116e60baa5"} Mar 13 12:51:02.544266 master-0 kubenswrapper[7518]: I0313 12:51:02.543409 7518 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-wtf6j container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 12:51:02.544266 master-0 kubenswrapper[7518]: [-]has-synced failed: reason withheld Mar 13 12:51:02.544266 master-0 kubenswrapper[7518]: [+]process-running ok Mar 13 12:51:02.544266 master-0 kubenswrapper[7518]: healthz check failed Mar 13 12:51:02.544266 master-0 kubenswrapper[7518]: I0313 12:51:02.543534 7518 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-wtf6j" 
podUID="45925a5e-41ae-4c19-b586-3151c7677612" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 12:51:02.564751 master-0 kubenswrapper[7518]: E0313 12:51:02.564680 7518 kubelet.go:1929] "Failed creating a mirror pod for" err="Internal error occurred: admission plugin \"LimitRanger\" failed to complete mutation in 13s" pod="openshift-etcd/etcd-master-0" Mar 13 12:51:02.857294 master-0 kubenswrapper[7518]: I0313 12:51:02.857216 7518 generic.go:334] "Generic (PLEG): container finished" podID="089cfabc-9d3d-4260-bb16-8b5eaf73b3fa" containerID="13abf0479b13298ab465c691e26a5f91f167723c1dfd38a5ddfba43b7407cce4" exitCode=0 Mar 13 12:51:02.857500 master-0 kubenswrapper[7518]: I0313 12:51:02.857441 7518 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-799b6db4d7-xchrj" event={"ID":"089cfabc-9d3d-4260-bb16-8b5eaf73b3fa","Type":"ContainerDied","Data":"13abf0479b13298ab465c691e26a5f91f167723c1dfd38a5ddfba43b7407cce4"} Mar 13 12:51:02.857500 master-0 kubenswrapper[7518]: I0313 12:51:02.857490 7518 scope.go:117] "RemoveContainer" containerID="814a1adb650838a7837cee0a591e9eba8984a73367ffe7b1b579ae47de6fda2a" Mar 13 12:51:02.857837 master-0 kubenswrapper[7518]: I0313 12:51:02.857810 7518 kubelet.go:1909] "Trying to delete pod" pod="openshift-etcd/etcd-master-0" podUID="12715056-f5d1-4df5-82a5-f0c637ce3700" Mar 13 12:51:02.857837 master-0 kubenswrapper[7518]: I0313 12:51:02.857836 7518 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-etcd/etcd-master-0" podUID="12715056-f5d1-4df5-82a5-f0c637ce3700" Mar 13 12:51:02.858286 master-0 kubenswrapper[7518]: I0313 12:51:02.858216 7518 scope.go:117] "RemoveContainer" containerID="13abf0479b13298ab465c691e26a5f91f167723c1dfd38a5ddfba43b7407cce4" Mar 13 12:51:03.278065 master-0 kubenswrapper[7518]: I0313 12:51:03.277982 7518 patch_prober.go:28] interesting pod/openshift-config-operator-64488f9d78-t8fb4 
container/openshift-config-operator namespace/openshift-config-operator: Liveness probe status=failure output="Get \"https://10.128.0.6:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 13 12:51:03.278295 master-0 kubenswrapper[7518]: I0313 12:51:03.278094 7518 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-config-operator/openshift-config-operator-64488f9d78-t8fb4" podUID="f0803181-4e37-43fa-8ddc-9c76d3f61817" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.6:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 13 12:51:03.278295 master-0 kubenswrapper[7518]: I0313 12:51:03.278182 7518 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-config-operator/openshift-config-operator-64488f9d78-t8fb4" Mar 13 12:51:03.279017 master-0 kubenswrapper[7518]: I0313 12:51:03.278976 7518 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="openshift-config-operator" containerStatusID={"Type":"cri-o","ID":"1ff41a201d4a84dbb0344337df256835e6a14ba7e5c0057366f4417ce40bfd03"} pod="openshift-config-operator/openshift-config-operator-64488f9d78-t8fb4" containerMessage="Container openshift-config-operator failed liveness probe, will be restarted" Mar 13 12:51:03.279087 master-0 kubenswrapper[7518]: I0313 12:51:03.279040 7518 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-config-operator/openshift-config-operator-64488f9d78-t8fb4" podUID="f0803181-4e37-43fa-8ddc-9c76d3f61817" containerName="openshift-config-operator" containerID="cri-o://1ff41a201d4a84dbb0344337df256835e6a14ba7e5c0057366f4417ce40bfd03" gracePeriod=30 Mar 13 12:51:03.288844 master-0 kubenswrapper[7518]: I0313 12:51:03.288768 7518 patch_prober.go:28] interesting 
pod/openshift-config-operator-64488f9d78-t8fb4 container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.128.0.6:8443/healthz\": read tcp 10.128.0.2:55956->10.128.0.6:8443: read: connection reset by peer" start-of-body= Mar 13 12:51:03.289047 master-0 kubenswrapper[7518]: I0313 12:51:03.288868 7518 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-64488f9d78-t8fb4" podUID="f0803181-4e37-43fa-8ddc-9c76d3f61817" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.6:8443/healthz\": read tcp 10.128.0.2:55956->10.128.0.6:8443: read: connection reset by peer" Mar 13 12:51:03.289539 master-0 kubenswrapper[7518]: I0313 12:51:03.289395 7518 patch_prober.go:28] interesting pod/openshift-config-operator-64488f9d78-t8fb4 container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.128.0.6:8443/healthz\": dial tcp 10.128.0.6:8443: connect: connection refused" start-of-body= Mar 13 12:51:03.289581 master-0 kubenswrapper[7518]: I0313 12:51:03.289530 7518 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-64488f9d78-t8fb4" podUID="f0803181-4e37-43fa-8ddc-9c76d3f61817" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.6:8443/healthz\": dial tcp 10.128.0.6:8443: connect: connection refused" Mar 13 12:51:03.549767 master-0 kubenswrapper[7518]: I0313 12:51:03.549616 7518 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-wtf6j container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 12:51:03.549767 master-0 kubenswrapper[7518]: [-]has-synced failed: reason withheld Mar 13 12:51:03.549767 master-0 
kubenswrapper[7518]: [+]process-running ok Mar 13 12:51:03.549767 master-0 kubenswrapper[7518]: healthz check failed Mar 13 12:51:03.549767 master-0 kubenswrapper[7518]: I0313 12:51:03.549720 7518 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-wtf6j" podUID="45925a5e-41ae-4c19-b586-3151c7677612" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 12:51:03.598095 master-0 kubenswrapper[7518]: I0313 12:51:03.598045 7518 scope.go:117] "RemoveContainer" containerID="fe386e2cfe3b2db8724e6c5ea7592f727d6d5b2317f95ae6fc7b814707b7e83a" Mar 13 12:51:03.865426 master-0 kubenswrapper[7518]: I0313 12:51:03.865266 7518 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-799b6db4d7-xchrj" event={"ID":"089cfabc-9d3d-4260-bb16-8b5eaf73b3fa","Type":"ContainerStarted","Data":"e5136eaf0048f31fcdbf74bdbd684713f13e3ef076359ed7e338eb6a81c855c9"} Mar 13 12:51:03.868164 master-0 kubenswrapper[7518]: I0313 12:51:03.868106 7518 generic.go:334] "Generic (PLEG): container finished" podID="034aaf8e-95df-4171-bae4-e7abe58d15f7" containerID="6a3d66ed3fc6a1fb717a2b2977fa5c6231d315f07c1d90d364eea56e7a5d7c86" exitCode=0 Mar 13 12:51:03.868258 master-0 kubenswrapper[7518]: I0313 12:51:03.868192 7518 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-69b6fc6b88-vmscz" event={"ID":"034aaf8e-95df-4171-bae4-e7abe58d15f7","Type":"ContainerDied","Data":"6a3d66ed3fc6a1fb717a2b2977fa5c6231d315f07c1d90d364eea56e7a5d7c86"} Mar 13 12:51:03.868258 master-0 kubenswrapper[7518]: I0313 12:51:03.868252 7518 scope.go:117] "RemoveContainer" containerID="c27448fad258056de304ba3c30b9268468cc1c542046d6c37c21797efa146b54" Mar 13 12:51:03.870255 master-0 kubenswrapper[7518]: I0313 12:51:03.869759 7518 scope.go:117] "RemoveContainer" containerID="6a3d66ed3fc6a1fb717a2b2977fa5c6231d315f07c1d90d364eea56e7a5d7c86" 
Mar 13 12:51:03.873680 master-0 kubenswrapper[7518]: I0313 12:51:03.873532 7518 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-config-operator_openshift-config-operator-64488f9d78-t8fb4_f0803181-4e37-43fa-8ddc-9c76d3f61817/openshift-config-operator/2.log" Mar 13 12:51:03.874400 master-0 kubenswrapper[7518]: I0313 12:51:03.874368 7518 generic.go:334] "Generic (PLEG): container finished" podID="f0803181-4e37-43fa-8ddc-9c76d3f61817" containerID="1ff41a201d4a84dbb0344337df256835e6a14ba7e5c0057366f4417ce40bfd03" exitCode=255 Mar 13 12:51:03.874512 master-0 kubenswrapper[7518]: I0313 12:51:03.874432 7518 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-64488f9d78-t8fb4" event={"ID":"f0803181-4e37-43fa-8ddc-9c76d3f61817","Type":"ContainerDied","Data":"1ff41a201d4a84dbb0344337df256835e6a14ba7e5c0057366f4417ce40bfd03"} Mar 13 12:51:03.874512 master-0 kubenswrapper[7518]: I0313 12:51:03.874461 7518 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-64488f9d78-t8fb4" event={"ID":"f0803181-4e37-43fa-8ddc-9c76d3f61817","Type":"ContainerStarted","Data":"a6263b46ef0468012ae2a42f311e9cac52e2e484751651c3b1983eca4c709f1f"} Mar 13 12:51:03.874628 master-0 kubenswrapper[7518]: I0313 12:51:03.874616 7518 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-64488f9d78-t8fb4" Mar 13 12:51:03.876237 master-0 kubenswrapper[7518]: I0313 12:51:03.876210 7518 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-storage-operator_csi-snapshot-controller-7577d6f48-pjpn2_c642c18f-f960-4418-bcb7-df884f8f8ad5/snapshot-controller/3.log" Mar 13 12:51:03.876319 master-0 kubenswrapper[7518]: I0313 12:51:03.876252 7518 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/csi-snapshot-controller-7577d6f48-pjpn2" 
event={"ID":"c642c18f-f960-4418-bcb7-df884f8f8ad5","Type":"ContainerStarted","Data":"c5dac29410c608c592ce2da4d646f5dae37752b356e4a615b5b9f8033e660a03"} Mar 13 12:51:04.104324 master-0 kubenswrapper[7518]: I0313 12:51:04.104281 7518 scope.go:117] "RemoveContainer" containerID="d775030cc9a2d771094d53b9310bcf873da42c7c6da6ec2e4bea962d923e448e" Mar 13 12:51:04.541923 master-0 kubenswrapper[7518]: I0313 12:51:04.541855 7518 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-wtf6j container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 12:51:04.541923 master-0 kubenswrapper[7518]: [-]has-synced failed: reason withheld Mar 13 12:51:04.541923 master-0 kubenswrapper[7518]: [+]process-running ok Mar 13 12:51:04.541923 master-0 kubenswrapper[7518]: healthz check failed Mar 13 12:51:04.542451 master-0 kubenswrapper[7518]: I0313 12:51:04.542397 7518 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-wtf6j" podUID="45925a5e-41ae-4c19-b586-3151c7677612" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 12:51:04.885025 master-0 kubenswrapper[7518]: I0313 12:51:04.884831 7518 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-69b6fc6b88-vmscz" event={"ID":"034aaf8e-95df-4171-bae4-e7abe58d15f7","Type":"ContainerStarted","Data":"7e1a789f36b99f7b9db4cdcedcfd24bb271dead908803a4048ef47b91982d5b3"} Mar 13 12:51:04.888902 master-0 kubenswrapper[7518]: I0313 12:51:04.888854 7518 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-config-operator_openshift-config-operator-64488f9d78-t8fb4_f0803181-4e37-43fa-8ddc-9c76d3f61817/openshift-config-operator/2.log" Mar 13 12:51:05.541773 master-0 kubenswrapper[7518]: I0313 12:51:05.541710 7518 patch_prober.go:28] interesting 
pod/router-default-79f8cd6fdd-wtf6j container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 12:51:05.541773 master-0 kubenswrapper[7518]: [-]has-synced failed: reason withheld Mar 13 12:51:05.541773 master-0 kubenswrapper[7518]: [+]process-running ok Mar 13 12:51:05.541773 master-0 kubenswrapper[7518]: healthz check failed Mar 13 12:51:05.542069 master-0 kubenswrapper[7518]: I0313 12:51:05.541781 7518 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-wtf6j" podUID="45925a5e-41ae-4c19-b586-3151c7677612" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 12:51:06.051027 master-0 kubenswrapper[7518]: E0313 12:51:06.050907 7518 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="7s" Mar 13 12:51:06.543065 master-0 kubenswrapper[7518]: I0313 12:51:06.542990 7518 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-wtf6j container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 12:51:06.543065 master-0 kubenswrapper[7518]: [-]has-synced failed: reason withheld Mar 13 12:51:06.543065 master-0 kubenswrapper[7518]: [+]process-running ok Mar 13 12:51:06.543065 master-0 kubenswrapper[7518]: healthz check failed Mar 13 12:51:06.543421 master-0 kubenswrapper[7518]: I0313 12:51:06.543065 7518 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-wtf6j" podUID="45925a5e-41ae-4c19-b586-3151c7677612" containerName="router" probeResult="failure" 
output="HTTP probe failed with statuscode: 500" Mar 13 12:51:07.016125 master-0 kubenswrapper[7518]: E0313 12:51:07.015911 7518 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 13 12:51:07.542650 master-0 kubenswrapper[7518]: I0313 12:51:07.542537 7518 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-wtf6j container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 12:51:07.542650 master-0 kubenswrapper[7518]: [-]has-synced failed: reason withheld Mar 13 12:51:07.542650 master-0 kubenswrapper[7518]: [+]process-running ok Mar 13 12:51:07.542650 master-0 kubenswrapper[7518]: healthz check failed Mar 13 12:51:07.543376 master-0 kubenswrapper[7518]: I0313 12:51:07.542715 7518 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-wtf6j" podUID="45925a5e-41ae-4c19-b586-3151c7677612" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 12:51:08.541957 master-0 kubenswrapper[7518]: I0313 12:51:08.541886 7518 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-wtf6j container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 12:51:08.541957 master-0 kubenswrapper[7518]: [-]has-synced failed: reason withheld Mar 13 12:51:08.541957 master-0 kubenswrapper[7518]: [+]process-running ok Mar 13 12:51:08.541957 master-0 kubenswrapper[7518]: healthz check failed Mar 13 12:51:08.541957 master-0 kubenswrapper[7518]: I0313 12:51:08.541952 7518 prober.go:107] "Probe failed" probeType="Startup" 
pod="openshift-ingress/router-default-79f8cd6fdd-wtf6j" podUID="45925a5e-41ae-4c19-b586-3151c7677612" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 12:51:09.069992 master-0 kubenswrapper[7518]: I0313 12:51:09.069888 7518 prober.go:107] "Probe failed" probeType="Startup" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="f78c05e1499b533b83f091333d61f045" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://localhost:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 13 12:51:09.278511 master-0 kubenswrapper[7518]: I0313 12:51:09.278415 7518 patch_prober.go:28] interesting pod/openshift-config-operator-64488f9d78-t8fb4 container/openshift-config-operator namespace/openshift-config-operator: Liveness probe status=failure output="Get \"https://10.128.0.6:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 13 12:51:09.278739 master-0 kubenswrapper[7518]: I0313 12:51:09.278557 7518 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-config-operator/openshift-config-operator-64488f9d78-t8fb4" podUID="f0803181-4e37-43fa-8ddc-9c76d3f61817" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.6:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 13 12:51:09.542251 master-0 kubenswrapper[7518]: I0313 12:51:09.542145 7518 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-wtf6j container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 12:51:09.542251 master-0 kubenswrapper[7518]: [-]has-synced failed: reason withheld Mar 13 12:51:09.542251 master-0 
kubenswrapper[7518]: [+]process-running ok Mar 13 12:51:09.542251 master-0 kubenswrapper[7518]: healthz check failed Mar 13 12:51:09.542251 master-0 kubenswrapper[7518]: I0313 12:51:09.542210 7518 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-wtf6j" podUID="45925a5e-41ae-4c19-b586-3151c7677612" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 12:51:09.598820 master-0 kubenswrapper[7518]: I0313 12:51:09.598756 7518 scope.go:117] "RemoveContainer" containerID="70ca563a3bda7cc49d130c71d95d6db991e5796cde50a910c3e63400c9e5a03b" Mar 13 12:51:09.599064 master-0 kubenswrapper[7518]: E0313 12:51:09.599031 7518 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=kube-controller-manager pod=bootstrap-kube-controller-manager-master-0_kube-system(f78c05e1499b533b83f091333d61f045)\"" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="f78c05e1499b533b83f091333d61f045" Mar 13 12:51:09.816819 master-0 kubenswrapper[7518]: I0313 12:51:09.816679 7518 patch_prober.go:28] interesting pod/openshift-config-operator-64488f9d78-t8fb4 container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.128.0.6:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 13 12:51:09.816819 master-0 kubenswrapper[7518]: I0313 12:51:09.816778 7518 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-64488f9d78-t8fb4" podUID="f0803181-4e37-43fa-8ddc-9c76d3f61817" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.6:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout 
exceeded while awaiting headers)" Mar 13 12:51:10.541795 master-0 kubenswrapper[7518]: I0313 12:51:10.541721 7518 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-wtf6j container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 12:51:10.541795 master-0 kubenswrapper[7518]: [-]has-synced failed: reason withheld Mar 13 12:51:10.541795 master-0 kubenswrapper[7518]: [+]process-running ok Mar 13 12:51:10.541795 master-0 kubenswrapper[7518]: healthz check failed Mar 13 12:51:10.542344 master-0 kubenswrapper[7518]: I0313 12:51:10.541797 7518 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-wtf6j" podUID="45925a5e-41ae-4c19-b586-3151c7677612" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 12:51:11.541065 master-0 kubenswrapper[7518]: I0313 12:51:11.541006 7518 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-wtf6j container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 12:51:11.541065 master-0 kubenswrapper[7518]: [-]has-synced failed: reason withheld Mar 13 12:51:11.541065 master-0 kubenswrapper[7518]: [+]process-running ok Mar 13 12:51:11.541065 master-0 kubenswrapper[7518]: healthz check failed Mar 13 12:51:11.541515 master-0 kubenswrapper[7518]: I0313 12:51:11.541082 7518 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-wtf6j" podUID="45925a5e-41ae-4c19-b586-3151c7677612" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 12:51:11.951838 master-0 kubenswrapper[7518]: I0313 12:51:11.951802 7518 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-controller-manager-operator_openshift-controller-manager-operator-8565d84698-hj2wk_8c62b15f-001a-4b64-b85f-348aefde5d1b/openshift-controller-manager-operator/0.log" Mar 13 12:51:11.952506 master-0 kubenswrapper[7518]: I0313 12:51:11.951849 7518 generic.go:334] "Generic (PLEG): container finished" podID="8c62b15f-001a-4b64-b85f-348aefde5d1b" containerID="0c1cf11fba8779c80d0da5e273c773daa5eb397179aa4efedaa5ea11988b99ed" exitCode=0 Mar 13 12:51:11.952506 master-0 kubenswrapper[7518]: I0313 12:51:11.951884 7518 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-8565d84698-hj2wk" event={"ID":"8c62b15f-001a-4b64-b85f-348aefde5d1b","Type":"ContainerDied","Data":"0c1cf11fba8779c80d0da5e273c773daa5eb397179aa4efedaa5ea11988b99ed"} Mar 13 12:51:11.952506 master-0 kubenswrapper[7518]: I0313 12:51:11.951964 7518 scope.go:117] "RemoveContainer" containerID="50a86534e82c318c07e40c2eda167d8236002efbe5ace1ee2b94525f4f64c25b" Mar 13 12:51:11.953046 master-0 kubenswrapper[7518]: I0313 12:51:11.953014 7518 scope.go:117] "RemoveContainer" containerID="0c1cf11fba8779c80d0da5e273c773daa5eb397179aa4efedaa5ea11988b99ed" Mar 13 12:51:11.960592 master-0 kubenswrapper[7518]: I0313 12:51:11.960562 7518 generic.go:334] "Generic (PLEG): container finished" podID="f5775266-5e58-44ed-81cb-dfe3faf38add" containerID="e24974d7562637f30c354afb27ef4179bd234226ab89ce7552570f69e7ee23e6" exitCode=0 Mar 13 12:51:11.960874 master-0 kubenswrapper[7518]: I0313 12:51:11.960834 7518 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-7f65c457f5-hrm82" event={"ID":"f5775266-5e58-44ed-81cb-dfe3faf38add","Type":"ContainerDied","Data":"e24974d7562637f30c354afb27ef4179bd234226ab89ce7552570f69e7ee23e6"} Mar 13 12:51:11.961744 master-0 kubenswrapper[7518]: I0313 12:51:11.961719 7518 scope.go:117] "RemoveContainer" 
containerID="e24974d7562637f30c354afb27ef4179bd234226ab89ce7552570f69e7ee23e6" Mar 13 12:51:11.968559 master-0 kubenswrapper[7518]: I0313 12:51:11.968507 7518 generic.go:334] "Generic (PLEG): container finished" podID="4e279dcc-35e2-4503-babc-978ac208c150" containerID="6d3a11a8a9fe0d5dca51d9ed392850f6788ebc18ced1ae2a2591ab3c73418318" exitCode=0 Mar 13 12:51:11.968660 master-0 kubenswrapper[7518]: I0313 12:51:11.968628 7518 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/csi-snapshot-controller-operator-5685fbc7d-97wkd" event={"ID":"4e279dcc-35e2-4503-babc-978ac208c150","Type":"ContainerDied","Data":"6d3a11a8a9fe0d5dca51d9ed392850f6788ebc18ced1ae2a2591ab3c73418318"} Mar 13 12:51:11.969474 master-0 kubenswrapper[7518]: I0313 12:51:11.969276 7518 scope.go:117] "RemoveContainer" containerID="6d3a11a8a9fe0d5dca51d9ed392850f6788ebc18ced1ae2a2591ab3c73418318" Mar 13 12:51:11.973961 master-0 kubenswrapper[7518]: I0313 12:51:11.973858 7518 generic.go:334] "Generic (PLEG): container finished" podID="0da84bb7-e936-49a0-96b5-614a1305d6a4" containerID="e0b901efadc576656657aa4dea0a09b5c987c11cdc88e24aaeef0848d60cd3b7" exitCode=0 Mar 13 12:51:11.974129 master-0 kubenswrapper[7518]: I0313 12:51:11.973955 7518 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5c74bfc494-m8mqj" event={"ID":"0da84bb7-e936-49a0-96b5-614a1305d6a4","Type":"ContainerDied","Data":"e0b901efadc576656657aa4dea0a09b5c987c11cdc88e24aaeef0848d60cd3b7"} Mar 13 12:51:11.975053 master-0 kubenswrapper[7518]: I0313 12:51:11.974952 7518 scope.go:117] "RemoveContainer" containerID="e0b901efadc576656657aa4dea0a09b5c987c11cdc88e24aaeef0848d60cd3b7" Mar 13 12:51:11.980201 master-0 kubenswrapper[7518]: I0313 12:51:11.980021 7518 generic.go:334] "Generic (PLEG): container finished" podID="d47a1118-c12f-4234-8c0f-1a2a47fa8a4f" containerID="f651f87ff531c82cf300379fcb01d86f8ea9306940ee3ed2300a4c0ed8856e65" 
exitCode=0 Mar 13 12:51:11.980279 master-0 kubenswrapper[7518]: I0313 12:51:11.980211 7518 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-fdb5c78b5-6g8qj" event={"ID":"d47a1118-c12f-4234-8c0f-1a2a47fa8a4f","Type":"ContainerDied","Data":"f651f87ff531c82cf300379fcb01d86f8ea9306940ee3ed2300a4c0ed8856e65"} Mar 13 12:51:11.982484 master-0 kubenswrapper[7518]: I0313 12:51:11.982459 7518 scope.go:117] "RemoveContainer" containerID="f651f87ff531c82cf300379fcb01d86f8ea9306940ee3ed2300a4c0ed8856e65" Mar 13 12:51:11.983956 master-0 kubenswrapper[7518]: I0313 12:51:11.983927 7518 generic.go:334] "Generic (PLEG): container finished" podID="e25bef76-7020-4f86-8dee-a58ebed537d2" containerID="fefc52314f557d7c60fa165574ebac10c9ccc912b863ad03ae108b2ab17e6e90" exitCode=0 Mar 13 12:51:11.984070 master-0 kubenswrapper[7518]: I0313 12:51:11.984013 7518 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-ff46b7bdf-kmnlv" event={"ID":"e25bef76-7020-4f86-8dee-a58ebed537d2","Type":"ContainerDied","Data":"fefc52314f557d7c60fa165574ebac10c9ccc912b863ad03ae108b2ab17e6e90"} Mar 13 12:51:11.984906 master-0 kubenswrapper[7518]: I0313 12:51:11.984883 7518 scope.go:117] "RemoveContainer" containerID="fefc52314f557d7c60fa165574ebac10c9ccc912b863ad03ae108b2ab17e6e90" Mar 13 12:51:11.986264 master-0 kubenswrapper[7518]: I0313 12:51:11.986193 7518 generic.go:334] "Generic (PLEG): container finished" podID="18ffa620-dacc-4b09-be04-2c325f860813" containerID="bf5764c3d8fba8c40cba1931dc4f8b36f32584d349bb0fa8f02b7c483a7626de" exitCode=0 Mar 13 12:51:11.986772 master-0 kubenswrapper[7518]: I0313 12:51:11.986718 7518 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-68c48d4f7d-k7drw" 
event={"ID":"18ffa620-dacc-4b09-be04-2c325f860813","Type":"ContainerDied","Data":"bf5764c3d8fba8c40cba1931dc4f8b36f32584d349bb0fa8f02b7c483a7626de"} Mar 13 12:51:11.987683 master-0 kubenswrapper[7518]: I0313 12:51:11.987650 7518 scope.go:117] "RemoveContainer" containerID="bf5764c3d8fba8c40cba1931dc4f8b36f32584d349bb0fa8f02b7c483a7626de" Mar 13 12:51:11.992507 master-0 kubenswrapper[7518]: I0313 12:51:11.992438 7518 generic.go:334] "Generic (PLEG): container finished" podID="d11f8baa-6e8e-4ac0-9b23-1c44efd0ab2a" containerID="dc8ec1aed61fa783f1383f45771cb4136de885100e0460aa1df476073926f5af" exitCode=0 Mar 13 12:51:11.992688 master-0 kubenswrapper[7518]: I0313 12:51:11.992559 7518 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-7c6989d6c4-tc4ht" event={"ID":"d11f8baa-6e8e-4ac0-9b23-1c44efd0ab2a","Type":"ContainerDied","Data":"dc8ec1aed61fa783f1383f45771cb4136de885100e0460aa1df476073926f5af"} Mar 13 12:51:11.994056 master-0 kubenswrapper[7518]: I0313 12:51:11.993955 7518 scope.go:117] "RemoveContainer" containerID="dc8ec1aed61fa783f1383f45771cb4136de885100e0460aa1df476073926f5af" Mar 13 12:51:11.998905 master-0 kubenswrapper[7518]: I0313 12:51:11.998827 7518 scope.go:117] "RemoveContainer" containerID="b93548b4b4252ac17adfb04acbab06411e860b90fed7b1160d6dcde46321cd0a" Mar 13 12:51:12.001949 master-0 kubenswrapper[7518]: I0313 12:51:12.001905 7518 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-network-operator_network-operator-7c649bf6d4-kh6n9_4dd0fc2f-f2ee-4447-a747-04a178288cf0/network-operator/0.log" Mar 13 12:51:12.002096 master-0 kubenswrapper[7518]: I0313 12:51:12.001971 7518 generic.go:334] "Generic (PLEG): container finished" podID="4dd0fc2f-f2ee-4447-a747-04a178288cf0" containerID="bc5551e07868e81855eed958b9e358bd0715e00cec588a7af2b93942471edb38" exitCode=0 Mar 13 12:51:12.002096 master-0 kubenswrapper[7518]: I0313 12:51:12.002014 7518 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openshift-network-operator/network-operator-7c649bf6d4-kh6n9" event={"ID":"4dd0fc2f-f2ee-4447-a747-04a178288cf0","Type":"ContainerDied","Data":"bc5551e07868e81855eed958b9e358bd0715e00cec588a7af2b93942471edb38"}
Mar 13 12:51:12.002589 master-0 kubenswrapper[7518]: I0313 12:51:12.002559 7518 scope.go:117] "RemoveContainer" containerID="bc5551e07868e81855eed958b9e358bd0715e00cec588a7af2b93942471edb38"
Mar 13 12:51:12.110576 master-0 kubenswrapper[7518]: I0313 12:51:12.110548 7518 scope.go:117] "RemoveContainer" containerID="7049109a836522af070e6bb63ef4a03a6cf57954c7a7d1ea2471e59144150127"
Mar 13 12:51:12.202473 master-0 kubenswrapper[7518]: I0313 12:51:12.202435 7518 scope.go:117] "RemoveContainer" containerID="13a298fff8d915caaf89a785573e9b3488b88852d2c326a75e61c523b3cd60a0"
Mar 13 12:51:12.275350 master-0 kubenswrapper[7518]: I0313 12:51:12.275311 7518 scope.go:117] "RemoveContainer" containerID="638f7edbf4d5a7bd9c1277ff74b0deabee140db71794ce849e8ed2fe8e2bdb95"
Mar 13 12:51:12.278834 master-0 kubenswrapper[7518]: I0313 12:51:12.278789 7518 patch_prober.go:28] interesting pod/openshift-config-operator-64488f9d78-t8fb4 container/openshift-config-operator namespace/openshift-config-operator: Liveness probe status=failure output="Get \"https://10.128.0.6:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Mar 13 12:51:12.278994 master-0 kubenswrapper[7518]: I0313 12:51:12.278849 7518 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-config-operator/openshift-config-operator-64488f9d78-t8fb4" podUID="f0803181-4e37-43fa-8ddc-9c76d3f61817" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.6:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Mar 13 12:51:12.542022 master-0 kubenswrapper[7518]: I0313 12:51:12.541947 7518 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-wtf6j container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 13 12:51:12.542022 master-0 kubenswrapper[7518]: [-]has-synced failed: reason withheld
Mar 13 12:51:12.542022 master-0 kubenswrapper[7518]: [+]process-running ok
Mar 13 12:51:12.542022 master-0 kubenswrapper[7518]: healthz check failed
Mar 13 12:51:12.542366 master-0 kubenswrapper[7518]: I0313 12:51:12.542042 7518 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-wtf6j" podUID="45925a5e-41ae-4c19-b586-3151c7677612" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 13 12:51:12.598146 master-0 kubenswrapper[7518]: I0313 12:51:12.598080 7518 scope.go:117] "RemoveContainer" containerID="25a4898dab96b21910d2f9f74a6d0f38ac67afd0471454539094f0cdc130c4f5"
Mar 13 12:51:12.598366 master-0 kubenswrapper[7518]: E0313 12:51:12.598318 7518 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ingress-operator\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=ingress-operator pod=ingress-operator-677db989d6-ckl2j_openshift-ingress-operator(2f79578c-bbfb-4968-893a-730deb4c01f9)\"" pod="openshift-ingress-operator/ingress-operator-677db989d6-ckl2j" podUID="2f79578c-bbfb-4968-893a-730deb4c01f9"
Mar 13 12:51:12.598414 master-0 kubenswrapper[7518]: I0313 12:51:12.598367 7518 scope.go:117] "RemoveContainer" containerID="15cedcb1b8553ec2f730223913ef265bc163bb67b8745c32aa558c39edcca0ac"
Mar 13 12:51:12.816327 master-0 kubenswrapper[7518]: I0313 12:51:12.816207 7518 patch_prober.go:28] interesting pod/openshift-config-operator-64488f9d78-t8fb4 container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.128.0.6:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Mar 13 12:51:12.816641 master-0 kubenswrapper[7518]: I0313 12:51:12.816612 7518 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-64488f9d78-t8fb4" podUID="f0803181-4e37-43fa-8ddc-9c76d3f61817" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.6:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Mar 13 12:51:13.010090 master-0 kubenswrapper[7518]: I0313 12:51:13.010043 7518 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-7c6989d6c4-tc4ht" event={"ID":"d11f8baa-6e8e-4ac0-9b23-1c44efd0ab2a","Type":"ContainerStarted","Data":"b1c8bffb77981597e5b8c9fb21aab025e517e9065d6aca343bfe5edb4d982c42"}
Mar 13 12:51:13.012991 master-0 kubenswrapper[7518]: I0313 12:51:13.012952 7518 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-7c649bf6d4-kh6n9" event={"ID":"4dd0fc2f-f2ee-4447-a747-04a178288cf0","Type":"ContainerStarted","Data":"e275902c9a4a413e4b6769037fba2cb2b9837f0f6d44ecd6a96580c5760e34fc"}
Mar 13 12:51:13.015972 master-0 kubenswrapper[7518]: I0313 12:51:13.015925 7518 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-ff46b7bdf-kmnlv" event={"ID":"e25bef76-7020-4f86-8dee-a58ebed537d2","Type":"ContainerStarted","Data":"af69b2b302dd616a7eab41e4664107f4373574eccef97ce56f195b115b45dbb6"}
Mar 13 12:51:13.018210 master-0 kubenswrapper[7518]: I0313 12:51:13.018172 7518 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-68c48d4f7d-k7drw" event={"ID":"18ffa620-dacc-4b09-be04-2c325f860813","Type":"ContainerStarted","Data":"e08ebb9b72b3d839ad590a0420d611fa422a407a310320bdb128182aa8a60b33"}
Mar 13 12:51:13.018509 master-0 kubenswrapper[7518]: I0313 12:51:13.018486 7518 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-68c48d4f7d-k7drw"
Mar 13 12:51:13.020123 master-0 kubenswrapper[7518]: I0313 12:51:13.020087 7518 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-8565d84698-hj2wk" event={"ID":"8c62b15f-001a-4b64-b85f-348aefde5d1b","Type":"ContainerStarted","Data":"cfce68263a1b1f1b6b3b7badcf066306ea4bcbe306b94fa51683376d3f7333c5"}
Mar 13 12:51:13.022303 master-0 kubenswrapper[7518]: I0313 12:51:13.022260 7518 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-7f65c457f5-hrm82" event={"ID":"f5775266-5e58-44ed-81cb-dfe3faf38add","Type":"ContainerStarted","Data":"cda1b13c3c82dfc301ade7ffbe6d1f5bfc8f45164a95bb1917e256d735e8a3a8"}
Mar 13 12:51:13.024650 master-0 kubenswrapper[7518]: I0313 12:51:13.024599 7518 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5c74bfc494-m8mqj" event={"ID":"0da84bb7-e936-49a0-96b5-614a1305d6a4","Type":"ContainerStarted","Data":"71e7e5072a048a760099498896e1c067dee1e6f72ffb5b6f9420ab73f6c32a32"}
Mar 13 12:51:13.026820 master-0 kubenswrapper[7518]: I0313 12:51:13.026779 7518 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-fdb5c78b5-6g8qj" event={"ID":"d47a1118-c12f-4234-8c0f-1a2a47fa8a4f","Type":"ContainerStarted","Data":"9cc18b6bfff2ee3f8d44eaa6a79c992c9a7dd908b879a1dbd52815598e4165a9"}
Mar 13 12:51:13.029247 master-0 kubenswrapper[7518]: I0313 12:51:13.029199 7518 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/csi-snapshot-controller-operator-5685fbc7d-97wkd" event={"ID":"4e279dcc-35e2-4503-babc-978ac208c150","Type":"ContainerStarted","Data":"942ad66ddc4533909539f5cb6f6e3f24e65997906dfcc58da785b167c47a54ad"}
Mar 13 12:51:13.031215 master-0 kubenswrapper[7518]: I0313 12:51:13.031181 7518 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_cluster-baremetal-operator-5cdb4c5598-l6jp5_317af639-269e-4163-8e24-fcea468b9352/cluster-baremetal-operator/2.log"
Mar 13 12:51:13.031600 master-0 kubenswrapper[7518]: I0313 12:51:13.031567 7518 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/cluster-baremetal-operator-5cdb4c5598-l6jp5" event={"ID":"317af639-269e-4163-8e24-fcea468b9352","Type":"ContainerStarted","Data":"d53973ffb0d8406047016591af157982a433435a91e69929f211e23064a4071b"}
Mar 13 12:51:13.541090 master-0 kubenswrapper[7518]: I0313 12:51:13.540953 7518 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-wtf6j container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 13 12:51:13.541090 master-0 kubenswrapper[7518]: [-]has-synced failed: reason withheld
Mar 13 12:51:13.541090 master-0 kubenswrapper[7518]: [+]process-running ok
Mar 13 12:51:13.541090 master-0 kubenswrapper[7518]: healthz check failed
Mar 13 12:51:13.541090 master-0 kubenswrapper[7518]: I0313 12:51:13.541085 7518 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-wtf6j" podUID="45925a5e-41ae-4c19-b586-3151c7677612" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 13 12:51:14.019312 master-0 kubenswrapper[7518]: I0313 12:51:14.019249 7518 patch_prober.go:28] interesting pod/route-controller-manager-68c48d4f7d-k7drw container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.128.0.75:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Mar 13 12:51:14.019769 master-0 kubenswrapper[7518]: I0313 12:51:14.019345 7518 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-68c48d4f7d-k7drw" podUID="18ffa620-dacc-4b09-be04-2c325f860813" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.128.0.75:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Mar 13 12:51:14.542300 master-0 kubenswrapper[7518]: I0313 12:51:14.542230 7518 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-wtf6j container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 13 12:51:14.542300 master-0 kubenswrapper[7518]: [-]has-synced failed: reason withheld
Mar 13 12:51:14.542300 master-0 kubenswrapper[7518]: [+]process-running ok
Mar 13 12:51:14.542300 master-0 kubenswrapper[7518]: healthz check failed
Mar 13 12:51:14.542632 master-0 kubenswrapper[7518]: I0313 12:51:14.542319 7518 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-wtf6j" podUID="45925a5e-41ae-4c19-b586-3151c7677612" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 13 12:51:15.039401 master-0 kubenswrapper[7518]: I0313 12:51:15.039307 7518 patch_prober.go:28] interesting pod/route-controller-manager-68c48d4f7d-k7drw container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.128.0.75:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Mar 13 12:51:15.039401 master-0 kubenswrapper[7518]: I0313 12:51:15.039376 7518 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-68c48d4f7d-k7drw" podUID="18ffa620-dacc-4b09-be04-2c325f860813" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.128.0.75:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Mar 13 12:51:15.277979 master-0 kubenswrapper[7518]: I0313 12:51:15.277876 7518 patch_prober.go:28] interesting pod/openshift-config-operator-64488f9d78-t8fb4 container/openshift-config-operator namespace/openshift-config-operator: Liveness probe status=failure output="Get \"https://10.128.0.6:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Mar 13 12:51:15.278239 master-0 kubenswrapper[7518]: I0313 12:51:15.278017 7518 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-config-operator/openshift-config-operator-64488f9d78-t8fb4" podUID="f0803181-4e37-43fa-8ddc-9c76d3f61817" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.6:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Mar 13 12:51:15.278239 master-0 kubenswrapper[7518]: I0313 12:51:15.278089 7518 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-config-operator/openshift-config-operator-64488f9d78-t8fb4"
Mar 13 12:51:15.279006 master-0 kubenswrapper[7518]: I0313 12:51:15.278953 7518 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="openshift-config-operator" containerStatusID={"Type":"cri-o","ID":"a6263b46ef0468012ae2a42f311e9cac52e2e484751651c3b1983eca4c709f1f"} pod="openshift-config-operator/openshift-config-operator-64488f9d78-t8fb4" containerMessage="Container openshift-config-operator failed liveness probe, will be restarted"
Mar 13 12:51:15.279088 master-0 kubenswrapper[7518]: I0313 12:51:15.279025 7518 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-config-operator/openshift-config-operator-64488f9d78-t8fb4" podUID="f0803181-4e37-43fa-8ddc-9c76d3f61817" containerName="openshift-config-operator" containerID="cri-o://a6263b46ef0468012ae2a42f311e9cac52e2e484751651c3b1983eca4c709f1f" gracePeriod=30
Mar 13 12:51:15.541298 master-0 kubenswrapper[7518]: I0313 12:51:15.541234 7518 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-wtf6j container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 13 12:51:15.541298 master-0 kubenswrapper[7518]: [-]has-synced failed: reason withheld
Mar 13 12:51:15.541298 master-0 kubenswrapper[7518]: [+]process-running ok
Mar 13 12:51:15.541298 master-0 kubenswrapper[7518]: healthz check failed
Mar 13 12:51:15.541545 master-0 kubenswrapper[7518]: I0313 12:51:15.541309 7518 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-wtf6j" podUID="45925a5e-41ae-4c19-b586-3151c7677612" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 13 12:51:15.815797 master-0 kubenswrapper[7518]: I0313 12:51:15.815751 7518 patch_prober.go:28] interesting pod/openshift-config-operator-64488f9d78-t8fb4 container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.128.0.6:8443/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Mar 13 12:51:15.816043 master-0 kubenswrapper[7518]: I0313 12:51:15.815814 7518 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-64488f9d78-t8fb4" podUID="f0803181-4e37-43fa-8ddc-9c76d3f61817" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.6:8443/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Mar 13 12:51:16.542129 master-0 kubenswrapper[7518]: I0313 12:51:16.542022 7518 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-wtf6j container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 13 12:51:16.542129 master-0 kubenswrapper[7518]: [-]has-synced failed: reason withheld
Mar 13 12:51:16.542129 master-0 kubenswrapper[7518]: [+]process-running ok
Mar 13 12:51:16.542129 master-0 kubenswrapper[7518]: healthz check failed
Mar 13 12:51:16.542129 master-0 kubenswrapper[7518]: I0313 12:51:16.542084 7518 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-wtf6j" podUID="45925a5e-41ae-4c19-b586-3151c7677612" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 13 12:51:16.816921 master-0 kubenswrapper[7518]: I0313 12:51:16.816738 7518 patch_prober.go:28] interesting pod/openshift-config-operator-64488f9d78-t8fb4 container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.128.0.6:8443/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Mar 13 12:51:16.817119 master-0 kubenswrapper[7518]: I0313 12:51:16.816910 7518 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-64488f9d78-t8fb4" podUID="f0803181-4e37-43fa-8ddc-9c76d3f61817" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.6:8443/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Mar 13 12:51:17.543075 master-0 kubenswrapper[7518]: I0313 12:51:17.543012 7518 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-wtf6j container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 13 12:51:17.543075 master-0 kubenswrapper[7518]: [-]has-synced failed: reason withheld
Mar 13 12:51:17.543075 master-0 kubenswrapper[7518]: [+]process-running ok
Mar 13 12:51:17.543075 master-0 kubenswrapper[7518]: healthz check failed
Mar 13 12:51:17.543635 master-0 kubenswrapper[7518]: I0313 12:51:17.543094 7518 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-wtf6j" podUID="45925a5e-41ae-4c19-b586-3151c7677612" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 13 12:51:18.542495 master-0 kubenswrapper[7518]: I0313 12:51:18.542399 7518 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-wtf6j container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 13 12:51:18.542495 master-0 kubenswrapper[7518]: [-]has-synced failed: reason withheld
Mar 13 12:51:18.542495 master-0 kubenswrapper[7518]: [+]process-running ok
Mar 13 12:51:18.542495 master-0 kubenswrapper[7518]: healthz check failed
Mar 13 12:51:18.542980 master-0 kubenswrapper[7518]: I0313 12:51:18.542508 7518 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-wtf6j" podUID="45925a5e-41ae-4c19-b586-3151c7677612" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 13 12:51:18.816460 master-0 kubenswrapper[7518]: I0313 12:51:18.816294 7518 patch_prober.go:28] interesting pod/openshift-config-operator-64488f9d78-t8fb4 container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.128.0.6:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Mar 13 12:51:18.816460 master-0 kubenswrapper[7518]: I0313 12:51:18.816394 7518 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-64488f9d78-t8fb4" podUID="f0803181-4e37-43fa-8ddc-9c76d3f61817" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.6:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Mar 13 12:51:19.071040 master-0 kubenswrapper[7518]: I0313 12:51:19.070867 7518 prober.go:107] "Probe failed" probeType="Startup" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="f78c05e1499b533b83f091333d61f045" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://localhost:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Mar 13 12:51:19.071499 master-0 kubenswrapper[7518]: I0313 12:51:19.071465 7518 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="kube-system/bootstrap-kube-controller-manager-master-0"
Mar 13 12:51:19.072725 master-0 kubenswrapper[7518]: I0313 12:51:19.072697 7518 scope.go:117] "RemoveContainer" containerID="70ca563a3bda7cc49d130c71d95d6db991e5796cde50a910c3e63400c9e5a03b"
Mar 13 12:51:19.073016 master-0 kubenswrapper[7518]: I0313 12:51:19.072980 7518 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="cluster-policy-controller" containerStatusID={"Type":"cri-o","ID":"900938afabac5fa1e088933b80603fec360ec7d1d114a7496946027bc2a16500"} pod="kube-system/bootstrap-kube-controller-manager-master-0" containerMessage="Container cluster-policy-controller failed startup probe, will be restarted"
Mar 13 12:51:19.073400 master-0 kubenswrapper[7518]: I0313 12:51:19.073243 7518 kuberuntime_container.go:808] "Killing container with a grace period" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="f78c05e1499b533b83f091333d61f045" containerName="cluster-policy-controller" containerID="cri-o://900938afabac5fa1e088933b80603fec360ec7d1d114a7496946027bc2a16500" gracePeriod=30
Mar 13 12:51:19.542881 master-0 kubenswrapper[7518]: I0313 12:51:19.542761 7518 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-wtf6j container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 13 12:51:19.542881 master-0 kubenswrapper[7518]: [-]has-synced failed: reason withheld
Mar 13 12:51:19.542881 master-0 kubenswrapper[7518]: [+]process-running ok
Mar 13 12:51:19.542881 master-0 kubenswrapper[7518]: healthz check failed
Mar 13 12:51:19.543614 master-0 kubenswrapper[7518]: I0313 12:51:19.542869 7518 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-wtf6j" podUID="45925a5e-41ae-4c19-b586-3151c7677612" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 13 12:51:19.892795 master-0 kubenswrapper[7518]: I0313 12:51:19.892622 7518 patch_prober.go:28] interesting pod/route-controller-manager-68c48d4f7d-k7drw container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.128.0.75:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Mar 13 12:51:19.892795 master-0 kubenswrapper[7518]: I0313 12:51:19.892724 7518 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-68c48d4f7d-k7drw" podUID="18ffa620-dacc-4b09-be04-2c325f860813" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.128.0.75:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Mar 13 12:51:20.542212 master-0 kubenswrapper[7518]: I0313 12:51:20.542118 7518 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-wtf6j container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 13 12:51:20.542212 master-0 kubenswrapper[7518]: [-]has-synced failed: reason withheld
Mar 13 12:51:20.542212 master-0 kubenswrapper[7518]: [+]process-running ok
Mar 13 12:51:20.542212 master-0 kubenswrapper[7518]: healthz check failed
Mar 13 12:51:20.542661 master-0 kubenswrapper[7518]: I0313 12:51:20.542625 7518 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-wtf6j" podUID="45925a5e-41ae-4c19-b586-3151c7677612" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 13 12:51:21.542462 master-0 kubenswrapper[7518]: I0313 12:51:21.542347 7518 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-wtf6j container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 13 12:51:21.542462 master-0 kubenswrapper[7518]: [-]has-synced failed: reason withheld
Mar 13 12:51:21.542462 master-0 kubenswrapper[7518]: [+]process-running ok
Mar 13 12:51:21.542462 master-0 kubenswrapper[7518]: healthz check failed
Mar 13 12:51:21.542462 master-0 kubenswrapper[7518]: I0313 12:51:21.542458 7518 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-wtf6j" podUID="45925a5e-41ae-4c19-b586-3151c7677612" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 13 12:51:21.816767 master-0 kubenswrapper[7518]: I0313 12:51:21.816569 7518 patch_prober.go:28] interesting pod/openshift-config-operator-64488f9d78-t8fb4 container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.128.0.6:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Mar 13 12:51:21.816767 master-0 kubenswrapper[7518]: I0313 12:51:21.816674 7518 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-64488f9d78-t8fb4" podUID="f0803181-4e37-43fa-8ddc-9c76d3f61817" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.6:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Mar 13 12:51:22.543087 master-0 kubenswrapper[7518]: I0313 12:51:22.542907 7518 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-wtf6j container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 13 12:51:22.543087 master-0 kubenswrapper[7518]: [-]has-synced failed: reason withheld
Mar 13 12:51:22.543087 master-0 kubenswrapper[7518]: [+]process-running ok
Mar 13 12:51:22.543087 master-0 kubenswrapper[7518]: healthz check failed
Mar 13 12:51:22.543087 master-0 kubenswrapper[7518]: I0313 12:51:22.543224 7518 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-wtf6j" podUID="45925a5e-41ae-4c19-b586-3151c7677612" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 13 12:51:23.542258 master-0 kubenswrapper[7518]: I0313 12:51:23.542057 7518 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-wtf6j container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 13 12:51:23.542258 master-0 kubenswrapper[7518]: [-]has-synced failed: reason withheld
Mar 13 12:51:23.542258 master-0 kubenswrapper[7518]: [+]process-running ok
Mar 13 12:51:23.542258 master-0 kubenswrapper[7518]: healthz check failed
Mar 13 12:51:23.542651 master-0 kubenswrapper[7518]: I0313 12:51:23.542359 7518 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-wtf6j" podUID="45925a5e-41ae-4c19-b586-3151c7677612" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 13 12:51:24.115310 master-0 kubenswrapper[7518]: I0313 12:51:24.115244 7518 patch_prober.go:28] interesting pod/openshift-config-operator-64488f9d78-t8fb4 container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.128.0.6:8443/healthz\": read tcp 10.128.0.2:38484->10.128.0.6:8443: read: connection reset by peer" start-of-body=
Mar 13 12:51:24.115861 master-0 kubenswrapper[7518]: I0313 12:51:24.115341 7518 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-64488f9d78-t8fb4" podUID="f0803181-4e37-43fa-8ddc-9c76d3f61817" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.6:8443/healthz\": read tcp 10.128.0.2:38484->10.128.0.6:8443: read: connection reset by peer"
Mar 13 12:51:24.541362 master-0 kubenswrapper[7518]: I0313 12:51:24.541293 7518 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-wtf6j container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 13 12:51:24.541362 master-0 kubenswrapper[7518]: [-]has-synced failed: reason withheld
Mar 13 12:51:24.541362 master-0 kubenswrapper[7518]: [+]process-running ok
Mar 13 12:51:24.541362 master-0 kubenswrapper[7518]: healthz check failed
Mar 13 12:51:24.541643 master-0 kubenswrapper[7518]: I0313 12:51:24.541365 7518 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-wtf6j" podUID="45925a5e-41ae-4c19-b586-3151c7677612" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 13 12:51:25.003649 master-0 kubenswrapper[7518]: E0313 12:51:25.003596 7518 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=kube-controller-manager pod=bootstrap-kube-controller-manager-master-0_kube-system(f78c05e1499b533b83f091333d61f045)\"" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="f78c05e1499b533b83f091333d61f045"
Mar 13 12:51:25.133874 master-0 kubenswrapper[7518]: I0313 12:51:25.133752 7518 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-config-operator_openshift-config-operator-64488f9d78-t8fb4_f0803181-4e37-43fa-8ddc-9c76d3f61817/openshift-config-operator/3.log"
Mar 13 12:51:25.134385 master-0 kubenswrapper[7518]: I0313 12:51:25.134125 7518 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-config-operator_openshift-config-operator-64488f9d78-t8fb4_f0803181-4e37-43fa-8ddc-9c76d3f61817/openshift-config-operator/2.log"
Mar 13 12:51:25.134537 master-0 kubenswrapper[7518]: I0313 12:51:25.134495 7518 generic.go:334] "Generic (PLEG): container finished" podID="f0803181-4e37-43fa-8ddc-9c76d3f61817" containerID="a6263b46ef0468012ae2a42f311e9cac52e2e484751651c3b1983eca4c709f1f" exitCode=255
Mar 13 12:51:25.134599 master-0 kubenswrapper[7518]: I0313 12:51:25.134561 7518 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-64488f9d78-t8fb4" event={"ID":"f0803181-4e37-43fa-8ddc-9c76d3f61817","Type":"ContainerDied","Data":"a6263b46ef0468012ae2a42f311e9cac52e2e484751651c3b1983eca4c709f1f"}
Mar 13 12:51:25.134646 master-0 kubenswrapper[7518]: I0313 12:51:25.134634 7518 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-64488f9d78-t8fb4" event={"ID":"f0803181-4e37-43fa-8ddc-9c76d3f61817","Type":"ContainerStarted","Data":"ce6832cba552d9b84b6dcb860fdccbc089c8b33a0f3f37d54464f4fd5c0f9f08"}
Mar 13 12:51:25.134696 master-0 kubenswrapper[7518]: I0313 12:51:25.134658 7518 scope.go:117] "RemoveContainer" containerID="1ff41a201d4a84dbb0344337df256835e6a14ba7e5c0057366f4417ce40bfd03"
Mar 13 12:51:25.134875 master-0 kubenswrapper[7518]: I0313 12:51:25.134840 7518 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-64488f9d78-t8fb4"
Mar 13 12:51:25.136282 master-0 kubenswrapper[7518]: I0313 12:51:25.136243 7518 generic.go:334] "Generic (PLEG): container finished" podID="676b054a-e76f-425d-a6ff-3f1bea8b523e" containerID="01758a85bcc236e4926066681b9aa0286d195458c1cddadcb630f791db70a4ff" exitCode=0
Mar 13 12:51:25.136340 master-0 kubenswrapper[7518]: I0313 12:51:25.136309 7518 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-8c9c967c7-98tv2" event={"ID":"676b054a-e76f-425d-a6ff-3f1bea8b523e","Type":"ContainerDied","Data":"01758a85bcc236e4926066681b9aa0286d195458c1cddadcb630f791db70a4ff"}
Mar 13 12:51:25.136676 master-0 kubenswrapper[7518]: I0313 12:51:25.136653 7518 scope.go:117] "RemoveContainer" containerID="01758a85bcc236e4926066681b9aa0286d195458c1cddadcb630f791db70a4ff"
Mar 13 12:51:25.139693 master-0 kubenswrapper[7518]: I0313 12:51:25.139617 7518 generic.go:334] "Generic (PLEG): container finished" podID="f78c05e1499b533b83f091333d61f045" containerID="900938afabac5fa1e088933b80603fec360ec7d1d114a7496946027bc2a16500" exitCode=255
Mar 13 12:51:25.139693 master-0 kubenswrapper[7518]: I0313 12:51:25.139650 7518 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-controller-manager-master-0" event={"ID":"f78c05e1499b533b83f091333d61f045","Type":"ContainerDied","Data":"900938afabac5fa1e088933b80603fec360ec7d1d114a7496946027bc2a16500"}
Mar 13 12:51:25.139693 master-0 kubenswrapper[7518]: I0313 12:51:25.139674 7518 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-controller-manager-master-0" event={"ID":"f78c05e1499b533b83f091333d61f045","Type":"ContainerStarted","Data":"c01d9a99bd192d1dcec1d6b82d10b0a4d0e1e32477c6f2dee5d3e54b144ca2b7"}
Mar 13 12:51:25.140120 master-0 kubenswrapper[7518]: I0313 12:51:25.140101 7518 scope.go:117] "RemoveContainer" containerID="70ca563a3bda7cc49d130c71d95d6db991e5796cde50a910c3e63400c9e5a03b"
Mar 13 12:51:25.140372 master-0 kubenswrapper[7518]: E0313 12:51:25.140349 7518 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=kube-controller-manager pod=bootstrap-kube-controller-manager-master-0_kube-system(f78c05e1499b533b83f091333d61f045)\"" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="f78c05e1499b533b83f091333d61f045"
Mar 13 12:51:25.156966 master-0 kubenswrapper[7518]: I0313 12:51:25.156914 7518 scope.go:117] "RemoveContainer" containerID="982c1c225b535e0fa3c9e5b01c4c3960b52c601ea135812c4af51bc13c9b4e1a"
Mar 13 12:51:25.540847 master-0 kubenswrapper[7518]: I0313 12:51:25.540793 7518 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-wtf6j container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 13 12:51:25.540847 master-0 kubenswrapper[7518]: [-]has-synced failed: reason withheld
Mar 13 12:51:25.540847 master-0 kubenswrapper[7518]: [+]process-running ok
Mar 13 12:51:25.540847 master-0 kubenswrapper[7518]: healthz check failed
Mar 13 12:51:25.541157 master-0 kubenswrapper[7518]: I0313 12:51:25.540868 7518 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-wtf6j" podUID="45925a5e-41ae-4c19-b586-3151c7677612" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 13 12:51:26.069941 master-0 kubenswrapper[7518]: I0313 12:51:26.069887 7518 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="kube-system/bootstrap-kube-controller-manager-master-0"
Mar 13 12:51:26.075912 master-0 kubenswrapper[7518]: I0313 12:51:26.075860 7518 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="kube-system/bootstrap-kube-controller-manager-master-0"
Mar 13 12:51:26.149819 master-0 kubenswrapper[7518]: I0313 12:51:26.149764 7518 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-config-operator_openshift-config-operator-64488f9d78-t8fb4_f0803181-4e37-43fa-8ddc-9c76d3f61817/openshift-config-operator/3.log"
Mar 13 12:51:26.152147 master-0 kubenswrapper[7518]: I0313 12:51:26.151871 7518 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-8c9c967c7-98tv2" event={"ID":"676b054a-e76f-425d-a6ff-3f1bea8b523e","Type":"ContainerStarted","Data":"d2c08d5eb06b982078bb0e221348e5f53351d12950d34263d181c0c715d12a5f"}
Mar 13 12:51:26.153827 master-0 kubenswrapper[7518]: I0313 12:51:26.153794 7518 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="kube-system/bootstrap-kube-controller-manager-master-0"
Mar 13 12:51:26.154207 master-0 kubenswrapper[7518]: I0313 12:51:26.154162 7518 scope.go:117] "RemoveContainer" containerID="70ca563a3bda7cc49d130c71d95d6db991e5796cde50a910c3e63400c9e5a03b"
Mar 13 12:51:26.154526 master-0 kubenswrapper[7518]: E0313 12:51:26.154491 7518 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=kube-controller-manager pod=bootstrap-kube-controller-manager-master-0_kube-system(f78c05e1499b533b83f091333d61f045)\"" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="f78c05e1499b533b83f091333d61f045"
Mar 13 12:51:26.542295 master-0 kubenswrapper[7518]: I0313 12:51:26.542209 7518 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-wtf6j container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 13 12:51:26.542295 master-0 kubenswrapper[7518]: [-]has-synced failed: reason withheld
Mar 13 12:51:26.542295 master-0 kubenswrapper[7518]: [+]process-running ok
Mar 13 12:51:26.542295 master-0 kubenswrapper[7518]: healthz check failed
Mar 13 12:51:26.542626 master-0 kubenswrapper[7518]: I0313 12:51:26.542350 7518 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-wtf6j" podUID="45925a5e-41ae-4c19-b586-3151c7677612" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 13 12:51:26.598780 master-0 kubenswrapper[7518]: I0313 12:51:26.598718 7518 scope.go:117] "RemoveContainer" containerID="25a4898dab96b21910d2f9f74a6d0f38ac67afd0471454539094f0cdc130c4f5"
Mar 13 12:51:26.599120 master-0 kubenswrapper[7518]: E0313 12:51:26.599081 7518 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ingress-operator\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=ingress-operator pod=ingress-operator-677db989d6-ckl2j_openshift-ingress-operator(2f79578c-bbfb-4968-893a-730deb4c01f9)\"" pod="openshift-ingress-operator/ingress-operator-677db989d6-ckl2j" podUID="2f79578c-bbfb-4968-893a-730deb4c01f9"
Mar 13 12:51:27.062839 master-0 kubenswrapper[7518]: I0313 12:51:27.062769 7518 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/installer-4-retry-1-master-0"]
Mar 13 12:51:27.063113 master-0 kubenswrapper[7518]: E0313 12:51:27.063086 7518 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e01de416-3de5-4357-a84e-f8eabb15a500" containerName="installer"
Mar 13 12:51:27.063173 master-0 kubenswrapper[7518]: I0313 12:51:27.063114 7518 state_mem.go:107] "Deleted CPUSet assignment" podUID="e01de416-3de5-4357-a84e-f8eabb15a500" containerName="installer"
Mar 13 12:51:27.063222 master-0 kubenswrapper[7518]: E0313 12:51:27.063196 7518 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8f8543a5-1639-4140-a18d-8b0c96821bae" containerName="installer"
Mar 13 12:51:27.063222 master-0 kubenswrapper[7518]: I0313 12:51:27.063208 7518 state_mem.go:107] "Deleted CPUSet assignment" podUID="8f8543a5-1639-4140-a18d-8b0c96821bae" containerName="installer"
Mar 13 12:51:27.063301 master-0 kubenswrapper[7518]: E0313 12:51:27.063227 7518 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e0bb348a-f72d-462e-aec9-04e4600cc7f0" containerName="kube-multus-additional-cni-plugins"
Mar 13 12:51:27.063301 master-0 kubenswrapper[7518]: I0313 12:51:27.063237 7518 state_mem.go:107] "Deleted CPUSet assignment" podUID="e0bb348a-f72d-462e-aec9-04e4600cc7f0" containerName="kube-multus-additional-cni-plugins"
Mar 13 12:51:27.063411 master-0 kubenswrapper[7518]: I0313 12:51:27.063383 7518 memory_manager.go:354] "RemoveStaleState removing state" podUID="e01de416-3de5-4357-a84e-f8eabb15a500" containerName="installer"
Mar 13 12:51:27.063449 master-0 kubenswrapper[7518]: I0313 12:51:27.063420 7518 memory_manager.go:354] "RemoveStaleState removing state" podUID="e0bb348a-f72d-462e-aec9-04e4600cc7f0"
containerName="kube-multus-additional-cni-plugins" Mar 13 12:51:27.063449 master-0 kubenswrapper[7518]: I0313 12:51:27.063436 7518 memory_manager.go:354] "RemoveStaleState removing state" podUID="8f8543a5-1639-4140-a18d-8b0c96821bae" containerName="installer" Mar 13 12:51:27.064060 master-0 kubenswrapper[7518]: I0313 12:51:27.064031 7518 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/installer-4-retry-1-master-0" Mar 13 12:51:27.066809 master-0 kubenswrapper[7518]: I0313 12:51:27.066760 7518 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler"/"installer-sa-dockercfg-lgm6s" Mar 13 12:51:27.067590 master-0 kubenswrapper[7518]: I0313 12:51:27.067532 7518 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler"/"kube-root-ca.crt" Mar 13 12:51:27.079535 master-0 kubenswrapper[7518]: I0313 12:51:27.079451 7518 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler/installer-4-retry-1-master-0"] Mar 13 12:51:27.160712 master-0 kubenswrapper[7518]: I0313 12:51:27.160670 7518 scope.go:117] "RemoveContainer" containerID="70ca563a3bda7cc49d130c71d95d6db991e5796cde50a910c3e63400c9e5a03b" Mar 13 12:51:27.161298 master-0 kubenswrapper[7518]: E0313 12:51:27.160877 7518 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=kube-controller-manager pod=bootstrap-kube-controller-manager-master-0_kube-system(f78c05e1499b533b83f091333d61f045)\"" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="f78c05e1499b533b83f091333d61f045" Mar 13 12:51:27.169358 master-0 kubenswrapper[7518]: I0313 12:51:27.169285 7518 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: 
\"kubernetes.io/host-path/03479326-c13f-40bb-9ed2-580bb05917a7-var-lock\") pod \"installer-4-retry-1-master-0\" (UID: \"03479326-c13f-40bb-9ed2-580bb05917a7\") " pod="openshift-kube-scheduler/installer-4-retry-1-master-0" Mar 13 12:51:27.169513 master-0 kubenswrapper[7518]: I0313 12:51:27.169450 7518 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/03479326-c13f-40bb-9ed2-580bb05917a7-kube-api-access\") pod \"installer-4-retry-1-master-0\" (UID: \"03479326-c13f-40bb-9ed2-580bb05917a7\") " pod="openshift-kube-scheduler/installer-4-retry-1-master-0" Mar 13 12:51:27.169562 master-0 kubenswrapper[7518]: I0313 12:51:27.169550 7518 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/03479326-c13f-40bb-9ed2-580bb05917a7-kubelet-dir\") pod \"installer-4-retry-1-master-0\" (UID: \"03479326-c13f-40bb-9ed2-580bb05917a7\") " pod="openshift-kube-scheduler/installer-4-retry-1-master-0" Mar 13 12:51:27.270994 master-0 kubenswrapper[7518]: I0313 12:51:27.270901 7518 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/03479326-c13f-40bb-9ed2-580bb05917a7-var-lock\") pod \"installer-4-retry-1-master-0\" (UID: \"03479326-c13f-40bb-9ed2-580bb05917a7\") " pod="openshift-kube-scheduler/installer-4-retry-1-master-0" Mar 13 12:51:27.271244 master-0 kubenswrapper[7518]: I0313 12:51:27.271079 7518 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/03479326-c13f-40bb-9ed2-580bb05917a7-kube-api-access\") pod \"installer-4-retry-1-master-0\" (UID: \"03479326-c13f-40bb-9ed2-580bb05917a7\") " pod="openshift-kube-scheduler/installer-4-retry-1-master-0" Mar 13 12:51:27.271244 master-0 kubenswrapper[7518]: I0313 12:51:27.271086 7518 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/03479326-c13f-40bb-9ed2-580bb05917a7-var-lock\") pod \"installer-4-retry-1-master-0\" (UID: \"03479326-c13f-40bb-9ed2-580bb05917a7\") " pod="openshift-kube-scheduler/installer-4-retry-1-master-0" Mar 13 12:51:27.271244 master-0 kubenswrapper[7518]: I0313 12:51:27.271157 7518 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/03479326-c13f-40bb-9ed2-580bb05917a7-kubelet-dir\") pod \"installer-4-retry-1-master-0\" (UID: \"03479326-c13f-40bb-9ed2-580bb05917a7\") " pod="openshift-kube-scheduler/installer-4-retry-1-master-0" Mar 13 12:51:27.271337 master-0 kubenswrapper[7518]: I0313 12:51:27.271287 7518 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/03479326-c13f-40bb-9ed2-580bb05917a7-kubelet-dir\") pod \"installer-4-retry-1-master-0\" (UID: \"03479326-c13f-40bb-9ed2-580bb05917a7\") " pod="openshift-kube-scheduler/installer-4-retry-1-master-0" Mar 13 12:51:27.287437 master-0 kubenswrapper[7518]: I0313 12:51:27.287401 7518 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/03479326-c13f-40bb-9ed2-580bb05917a7-kube-api-access\") pod \"installer-4-retry-1-master-0\" (UID: \"03479326-c13f-40bb-9ed2-580bb05917a7\") " pod="openshift-kube-scheduler/installer-4-retry-1-master-0" Mar 13 12:51:27.383290 master-0 kubenswrapper[7518]: I0313 12:51:27.383165 7518 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/installer-4-retry-1-master-0" Mar 13 12:51:27.542048 master-0 kubenswrapper[7518]: I0313 12:51:27.541811 7518 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-wtf6j container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 12:51:27.542048 master-0 kubenswrapper[7518]: [-]has-synced failed: reason withheld Mar 13 12:51:27.542048 master-0 kubenswrapper[7518]: [+]process-running ok Mar 13 12:51:27.542048 master-0 kubenswrapper[7518]: healthz check failed Mar 13 12:51:27.542048 master-0 kubenswrapper[7518]: I0313 12:51:27.541873 7518 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-wtf6j" podUID="45925a5e-41ae-4c19-b586-3151c7677612" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 12:51:27.939701 master-0 kubenswrapper[7518]: I0313 12:51:27.939584 7518 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler/installer-4-retry-1-master-0"] Mar 13 12:51:27.948056 master-0 kubenswrapper[7518]: W0313 12:51:27.948008 7518 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod03479326_c13f_40bb_9ed2_580bb05917a7.slice/crio-f9e1bcdaf83648cf25ec570e9e2bb43dc99c079203b2fc846498f786f34dd1ec WatchSource:0}: Error finding container f9e1bcdaf83648cf25ec570e9e2bb43dc99c079203b2fc846498f786f34dd1ec: Status 404 returned error can't find the container with id f9e1bcdaf83648cf25ec570e9e2bb43dc99c079203b2fc846498f786f34dd1ec Mar 13 12:51:28.169400 master-0 kubenswrapper[7518]: I0313 12:51:28.169324 7518 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-4-retry-1-master-0" 
event={"ID":"03479326-c13f-40bb-9ed2-580bb05917a7","Type":"ContainerStarted","Data":"f9e1bcdaf83648cf25ec570e9e2bb43dc99c079203b2fc846498f786f34dd1ec"} Mar 13 12:51:28.542424 master-0 kubenswrapper[7518]: I0313 12:51:28.542345 7518 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-wtf6j container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 12:51:28.542424 master-0 kubenswrapper[7518]: [-]has-synced failed: reason withheld Mar 13 12:51:28.542424 master-0 kubenswrapper[7518]: [+]process-running ok Mar 13 12:51:28.542424 master-0 kubenswrapper[7518]: healthz check failed Mar 13 12:51:28.542738 master-0 kubenswrapper[7518]: I0313 12:51:28.542453 7518 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-wtf6j" podUID="45925a5e-41ae-4c19-b586-3151c7677612" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 12:51:28.897235 master-0 kubenswrapper[7518]: I0313 12:51:28.897012 7518 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-68c48d4f7d-k7drw" Mar 13 12:51:29.542224 master-0 kubenswrapper[7518]: I0313 12:51:29.542091 7518 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-wtf6j container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 12:51:29.542224 master-0 kubenswrapper[7518]: [-]has-synced failed: reason withheld Mar 13 12:51:29.542224 master-0 kubenswrapper[7518]: [+]process-running ok Mar 13 12:51:29.542224 master-0 kubenswrapper[7518]: healthz check failed Mar 13 12:51:29.542995 master-0 kubenswrapper[7518]: I0313 12:51:29.542266 7518 prober.go:107] "Probe failed" probeType="Startup" 
pod="openshift-ingress/router-default-79f8cd6fdd-wtf6j" podUID="45925a5e-41ae-4c19-b586-3151c7677612" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 12:51:29.818944 master-0 kubenswrapper[7518]: I0313 12:51:29.818794 7518 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-config-operator/openshift-config-operator-64488f9d78-t8fb4" Mar 13 12:51:30.187984 master-0 kubenswrapper[7518]: I0313 12:51:30.185389 7518 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-4-retry-1-master-0" event={"ID":"03479326-c13f-40bb-9ed2-580bb05917a7","Type":"ContainerStarted","Data":"69ec82e15f99ac8946fd6f0ae65cca8b0db2d9d210589323567d60bcf1d59e01"} Mar 13 12:51:30.208669 master-0 kubenswrapper[7518]: I0313 12:51:30.208568 7518 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/installer-4-retry-1-master-0" podStartSLOduration=3.208530577 podStartE2EDuration="3.208530577s" podCreationTimestamp="2026-03-13 12:51:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 12:51:30.206007949 +0000 UTC m=+844.839077176" watchObservedRunningTime="2026-03-13 12:51:30.208530577 +0000 UTC m=+844.841599774" Mar 13 12:51:30.542133 master-0 kubenswrapper[7518]: I0313 12:51:30.542056 7518 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-wtf6j container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 12:51:30.542133 master-0 kubenswrapper[7518]: [-]has-synced failed: reason withheld Mar 13 12:51:30.542133 master-0 kubenswrapper[7518]: [+]process-running ok Mar 13 12:51:30.542133 master-0 kubenswrapper[7518]: healthz check failed Mar 13 12:51:30.542133 master-0 kubenswrapper[7518]: I0313 12:51:30.542133 
7518 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-wtf6j" podUID="45925a5e-41ae-4c19-b586-3151c7677612" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 12:51:31.542125 master-0 kubenswrapper[7518]: I0313 12:51:31.542007 7518 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-wtf6j container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 12:51:31.542125 master-0 kubenswrapper[7518]: [-]has-synced failed: reason withheld Mar 13 12:51:31.542125 master-0 kubenswrapper[7518]: [+]process-running ok Mar 13 12:51:31.542125 master-0 kubenswrapper[7518]: healthz check failed Mar 13 12:51:31.542125 master-0 kubenswrapper[7518]: I0313 12:51:31.542104 7518 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-wtf6j" podUID="45925a5e-41ae-4c19-b586-3151c7677612" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 12:51:32.542704 master-0 kubenswrapper[7518]: I0313 12:51:32.542575 7518 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-wtf6j container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 12:51:32.542704 master-0 kubenswrapper[7518]: [-]has-synced failed: reason withheld Mar 13 12:51:32.542704 master-0 kubenswrapper[7518]: [+]process-running ok Mar 13 12:51:32.542704 master-0 kubenswrapper[7518]: healthz check failed Mar 13 12:51:32.542704 master-0 kubenswrapper[7518]: I0313 12:51:32.542689 7518 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-wtf6j" podUID="45925a5e-41ae-4c19-b586-3151c7677612" containerName="router" probeResult="failure" output="HTTP 
probe failed with statuscode: 500" Mar 13 12:51:33.541314 master-0 kubenswrapper[7518]: I0313 12:51:33.541246 7518 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-wtf6j container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 12:51:33.541314 master-0 kubenswrapper[7518]: [-]has-synced failed: reason withheld Mar 13 12:51:33.541314 master-0 kubenswrapper[7518]: [+]process-running ok Mar 13 12:51:33.541314 master-0 kubenswrapper[7518]: healthz check failed Mar 13 12:51:33.541734 master-0 kubenswrapper[7518]: I0313 12:51:33.541336 7518 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-wtf6j" podUID="45925a5e-41ae-4c19-b586-3151c7677612" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 12:51:34.217929 master-0 kubenswrapper[7518]: I0313 12:51:34.217817 7518 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-storage-operator_csi-snapshot-controller-7577d6f48-pjpn2_c642c18f-f960-4418-bcb7-df884f8f8ad5/snapshot-controller/4.log" Mar 13 12:51:34.218485 master-0 kubenswrapper[7518]: I0313 12:51:34.218375 7518 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-storage-operator_csi-snapshot-controller-7577d6f48-pjpn2_c642c18f-f960-4418-bcb7-df884f8f8ad5/snapshot-controller/3.log" Mar 13 12:51:34.218485 master-0 kubenswrapper[7518]: I0313 12:51:34.218441 7518 generic.go:334] "Generic (PLEG): container finished" podID="c642c18f-f960-4418-bcb7-df884f8f8ad5" containerID="c5dac29410c608c592ce2da4d646f5dae37752b356e4a615b5b9f8033e660a03" exitCode=1 Mar 13 12:51:34.218485 master-0 kubenswrapper[7518]: I0313 12:51:34.218475 7518 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/csi-snapshot-controller-7577d6f48-pjpn2" 
event={"ID":"c642c18f-f960-4418-bcb7-df884f8f8ad5","Type":"ContainerDied","Data":"c5dac29410c608c592ce2da4d646f5dae37752b356e4a615b5b9f8033e660a03"} Mar 13 12:51:34.218590 master-0 kubenswrapper[7518]: I0313 12:51:34.218509 7518 scope.go:117] "RemoveContainer" containerID="fe386e2cfe3b2db8724e6c5ea7592f727d6d5b2317f95ae6fc7b814707b7e83a" Mar 13 12:51:34.219130 master-0 kubenswrapper[7518]: I0313 12:51:34.219085 7518 scope.go:117] "RemoveContainer" containerID="c5dac29410c608c592ce2da4d646f5dae37752b356e4a615b5b9f8033e660a03" Mar 13 12:51:34.219460 master-0 kubenswrapper[7518]: E0313 12:51:34.219429 7518 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"snapshot-controller\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=snapshot-controller pod=csi-snapshot-controller-7577d6f48-pjpn2_openshift-cluster-storage-operator(c642c18f-f960-4418-bcb7-df884f8f8ad5)\"" pod="openshift-cluster-storage-operator/csi-snapshot-controller-7577d6f48-pjpn2" podUID="c642c18f-f960-4418-bcb7-df884f8f8ad5" Mar 13 12:51:34.541484 master-0 kubenswrapper[7518]: I0313 12:51:34.541425 7518 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-wtf6j container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 12:51:34.541484 master-0 kubenswrapper[7518]: [-]has-synced failed: reason withheld Mar 13 12:51:34.541484 master-0 kubenswrapper[7518]: [+]process-running ok Mar 13 12:51:34.541484 master-0 kubenswrapper[7518]: healthz check failed Mar 13 12:51:34.541789 master-0 kubenswrapper[7518]: I0313 12:51:34.541541 7518 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-wtf6j" podUID="45925a5e-41ae-4c19-b586-3151c7677612" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 12:51:35.225724 master-0 
kubenswrapper[7518]: I0313 12:51:35.225691 7518 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-storage-operator_csi-snapshot-controller-7577d6f48-pjpn2_c642c18f-f960-4418-bcb7-df884f8f8ad5/snapshot-controller/4.log" Mar 13 12:51:35.542054 master-0 kubenswrapper[7518]: I0313 12:51:35.541973 7518 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-wtf6j container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 12:51:35.542054 master-0 kubenswrapper[7518]: [-]has-synced failed: reason withheld Mar 13 12:51:35.542054 master-0 kubenswrapper[7518]: [+]process-running ok Mar 13 12:51:35.542054 master-0 kubenswrapper[7518]: healthz check failed Mar 13 12:51:35.542484 master-0 kubenswrapper[7518]: I0313 12:51:35.542091 7518 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-wtf6j" podUID="45925a5e-41ae-4c19-b586-3151c7677612" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 12:51:36.460222 master-0 kubenswrapper[7518]: I0313 12:51:36.460094 7518 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 13 12:51:36.460767 master-0 kubenswrapper[7518]: I0313 12:51:36.460740 7518 scope.go:117] "RemoveContainer" containerID="70ca563a3bda7cc49d130c71d95d6db991e5796cde50a910c3e63400c9e5a03b" Mar 13 12:51:36.460975 master-0 kubenswrapper[7518]: E0313 12:51:36.460944 7518 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=kube-controller-manager pod=bootstrap-kube-controller-manager-master-0_kube-system(f78c05e1499b533b83f091333d61f045)\"" pod="kube-system/bootstrap-kube-controller-manager-master-0" 
podUID="f78c05e1499b533b83f091333d61f045" Mar 13 12:51:36.541257 master-0 kubenswrapper[7518]: I0313 12:51:36.541193 7518 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-wtf6j container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 12:51:36.541257 master-0 kubenswrapper[7518]: [-]has-synced failed: reason withheld Mar 13 12:51:36.541257 master-0 kubenswrapper[7518]: [+]process-running ok Mar 13 12:51:36.541257 master-0 kubenswrapper[7518]: healthz check failed Mar 13 12:51:36.541257 master-0 kubenswrapper[7518]: I0313 12:51:36.541258 7518 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-wtf6j" podUID="45925a5e-41ae-4c19-b586-3151c7677612" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 12:51:37.540903 master-0 kubenswrapper[7518]: I0313 12:51:37.540833 7518 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-wtf6j container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 12:51:37.540903 master-0 kubenswrapper[7518]: [-]has-synced failed: reason withheld Mar 13 12:51:37.540903 master-0 kubenswrapper[7518]: [+]process-running ok Mar 13 12:51:37.540903 master-0 kubenswrapper[7518]: healthz check failed Mar 13 12:51:37.541477 master-0 kubenswrapper[7518]: I0313 12:51:37.540900 7518 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-wtf6j" podUID="45925a5e-41ae-4c19-b586-3151c7677612" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 12:51:38.541285 master-0 kubenswrapper[7518]: I0313 12:51:38.541223 7518 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-wtf6j container/router 
namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 12:51:38.541285 master-0 kubenswrapper[7518]: [-]has-synced failed: reason withheld Mar 13 12:51:38.541285 master-0 kubenswrapper[7518]: [+]process-running ok Mar 13 12:51:38.541285 master-0 kubenswrapper[7518]: healthz check failed Mar 13 12:51:38.541828 master-0 kubenswrapper[7518]: I0313 12:51:38.541287 7518 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-wtf6j" podUID="45925a5e-41ae-4c19-b586-3151c7677612" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 12:51:39.542021 master-0 kubenswrapper[7518]: I0313 12:51:39.541954 7518 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-wtf6j container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 12:51:39.542021 master-0 kubenswrapper[7518]: [-]has-synced failed: reason withheld Mar 13 12:51:39.542021 master-0 kubenswrapper[7518]: [+]process-running ok Mar 13 12:51:39.542021 master-0 kubenswrapper[7518]: healthz check failed Mar 13 12:51:39.542795 master-0 kubenswrapper[7518]: I0313 12:51:39.542036 7518 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-wtf6j" podUID="45925a5e-41ae-4c19-b586-3151c7677612" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 12:51:40.542676 master-0 kubenswrapper[7518]: I0313 12:51:40.542614 7518 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-wtf6j container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 12:51:40.542676 master-0 kubenswrapper[7518]: 
[-]has-synced failed: reason withheld Mar 13 12:51:40.542676 master-0 kubenswrapper[7518]: [+]process-running ok Mar 13 12:51:40.542676 master-0 kubenswrapper[7518]: healthz check failed Mar 13 12:51:40.543238 master-0 kubenswrapper[7518]: I0313 12:51:40.542684 7518 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-wtf6j" podUID="45925a5e-41ae-4c19-b586-3151c7677612" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 12:51:40.922446 master-0 kubenswrapper[7518]: I0313 12:51:40.922264 7518 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/installer-2-master-0"] Mar 13 12:51:40.923187 master-0 kubenswrapper[7518]: I0313 12:51:40.923163 7518 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/installer-2-master-0" Mar 13 12:51:40.925010 master-0 kubenswrapper[7518]: I0313 12:51:40.924976 7518 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager"/"installer-sa-dockercfg-7mx4m" Mar 13 12:51:40.925559 master-0 kubenswrapper[7518]: I0313 12:51:40.925531 7518 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager"/"kube-root-ca.crt" Mar 13 12:51:40.937555 master-0 kubenswrapper[7518]: I0313 12:51:40.937496 7518 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/installer-2-master-0"] Mar 13 12:51:41.098880 master-0 kubenswrapper[7518]: I0313 12:51:41.098805 7518 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/bc3825c8-8381-4d19-b482-e9499a72a700-kube-api-access\") pod \"installer-2-master-0\" (UID: \"bc3825c8-8381-4d19-b482-e9499a72a700\") " pod="openshift-kube-controller-manager/installer-2-master-0" Mar 13 12:51:41.099101 master-0 kubenswrapper[7518]: 
I0313 12:51:41.098989 7518 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/bc3825c8-8381-4d19-b482-e9499a72a700-kubelet-dir\") pod \"installer-2-master-0\" (UID: \"bc3825c8-8381-4d19-b482-e9499a72a700\") " pod="openshift-kube-controller-manager/installer-2-master-0" Mar 13 12:51:41.099101 master-0 kubenswrapper[7518]: I0313 12:51:41.099077 7518 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/bc3825c8-8381-4d19-b482-e9499a72a700-var-lock\") pod \"installer-2-master-0\" (UID: \"bc3825c8-8381-4d19-b482-e9499a72a700\") " pod="openshift-kube-controller-manager/installer-2-master-0" Mar 13 12:51:41.200573 master-0 kubenswrapper[7518]: I0313 12:51:41.200443 7518 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/bc3825c8-8381-4d19-b482-e9499a72a700-kube-api-access\") pod \"installer-2-master-0\" (UID: \"bc3825c8-8381-4d19-b482-e9499a72a700\") " pod="openshift-kube-controller-manager/installer-2-master-0" Mar 13 12:51:41.200573 master-0 kubenswrapper[7518]: I0313 12:51:41.200544 7518 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/bc3825c8-8381-4d19-b482-e9499a72a700-kubelet-dir\") pod \"installer-2-master-0\" (UID: \"bc3825c8-8381-4d19-b482-e9499a72a700\") " pod="openshift-kube-controller-manager/installer-2-master-0" Mar 13 12:51:41.200797 master-0 kubenswrapper[7518]: I0313 12:51:41.200618 7518 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/bc3825c8-8381-4d19-b482-e9499a72a700-var-lock\") pod \"installer-2-master-0\" (UID: \"bc3825c8-8381-4d19-b482-e9499a72a700\") " pod="openshift-kube-controller-manager/installer-2-master-0" Mar 
13 12:51:41.200797 master-0 kubenswrapper[7518]: I0313 12:51:41.200693 7518 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/bc3825c8-8381-4d19-b482-e9499a72a700-kubelet-dir\") pod \"installer-2-master-0\" (UID: \"bc3825c8-8381-4d19-b482-e9499a72a700\") " pod="openshift-kube-controller-manager/installer-2-master-0" Mar 13 12:51:41.200797 master-0 kubenswrapper[7518]: I0313 12:51:41.200714 7518 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/bc3825c8-8381-4d19-b482-e9499a72a700-var-lock\") pod \"installer-2-master-0\" (UID: \"bc3825c8-8381-4d19-b482-e9499a72a700\") " pod="openshift-kube-controller-manager/installer-2-master-0" Mar 13 12:51:41.215308 master-0 kubenswrapper[7518]: I0313 12:51:41.215264 7518 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/bc3825c8-8381-4d19-b482-e9499a72a700-kube-api-access\") pod \"installer-2-master-0\" (UID: \"bc3825c8-8381-4d19-b482-e9499a72a700\") " pod="openshift-kube-controller-manager/installer-2-master-0" Mar 13 12:51:41.252409 master-0 kubenswrapper[7518]: I0313 12:51:41.252357 7518 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/installer-2-master-0" Mar 13 12:51:41.542780 master-0 kubenswrapper[7518]: I0313 12:51:41.542676 7518 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-wtf6j container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 12:51:41.542780 master-0 kubenswrapper[7518]: [-]has-synced failed: reason withheld Mar 13 12:51:41.542780 master-0 kubenswrapper[7518]: [+]process-running ok Mar 13 12:51:41.542780 master-0 kubenswrapper[7518]: healthz check failed Mar 13 12:51:41.543433 master-0 kubenswrapper[7518]: I0313 12:51:41.542812 7518 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-wtf6j" podUID="45925a5e-41ae-4c19-b586-3151c7677612" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 12:51:41.599033 master-0 kubenswrapper[7518]: I0313 12:51:41.598954 7518 scope.go:117] "RemoveContainer" containerID="25a4898dab96b21910d2f9f74a6d0f38ac67afd0471454539094f0cdc130c4f5" Mar 13 12:51:41.599439 master-0 kubenswrapper[7518]: E0313 12:51:41.599379 7518 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ingress-operator\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=ingress-operator pod=ingress-operator-677db989d6-ckl2j_openshift-ingress-operator(2f79578c-bbfb-4968-893a-730deb4c01f9)\"" pod="openshift-ingress-operator/ingress-operator-677db989d6-ckl2j" podUID="2f79578c-bbfb-4968-893a-730deb4c01f9" Mar 13 12:51:41.713873 master-0 kubenswrapper[7518]: I0313 12:51:41.713816 7518 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/installer-2-master-0"] Mar 13 12:51:41.721540 master-0 kubenswrapper[7518]: W0313 12:51:41.721489 7518 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-podbc3825c8_8381_4d19_b482_e9499a72a700.slice/crio-8e1088e0df5495c11b184ce6c8248adb0411207dd090af8621e1253e288aee81 WatchSource:0}: Error finding container 8e1088e0df5495c11b184ce6c8248adb0411207dd090af8621e1253e288aee81: Status 404 returned error can't find the container with id 8e1088e0df5495c11b184ce6c8248adb0411207dd090af8621e1253e288aee81 Mar 13 12:51:42.291711 master-0 kubenswrapper[7518]: I0313 12:51:42.291639 7518 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-2-master-0" event={"ID":"bc3825c8-8381-4d19-b482-e9499a72a700","Type":"ContainerStarted","Data":"36a99dc3a52618a9e4e7602094957952525bef75208a86d5faa34103a0a98d5e"} Mar 13 12:51:42.291711 master-0 kubenswrapper[7518]: I0313 12:51:42.291696 7518 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-2-master-0" event={"ID":"bc3825c8-8381-4d19-b482-e9499a72a700","Type":"ContainerStarted","Data":"8e1088e0df5495c11b184ce6c8248adb0411207dd090af8621e1253e288aee81"} Mar 13 12:51:42.308731 master-0 kubenswrapper[7518]: I0313 12:51:42.308636 7518 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/installer-2-master-0" podStartSLOduration=2.308576631 podStartE2EDuration="2.308576631s" podCreationTimestamp="2026-03-13 12:51:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 12:51:42.307455791 +0000 UTC m=+856.940525008" watchObservedRunningTime="2026-03-13 12:51:42.308576631 +0000 UTC m=+856.941645818" Mar 13 12:51:42.541637 master-0 kubenswrapper[7518]: I0313 12:51:42.541559 7518 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-wtf6j container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 
12:51:42.541637 master-0 kubenswrapper[7518]: [-]has-synced failed: reason withheld Mar 13 12:51:42.541637 master-0 kubenswrapper[7518]: [+]process-running ok Mar 13 12:51:42.541637 master-0 kubenswrapper[7518]: healthz check failed Mar 13 12:51:42.541998 master-0 kubenswrapper[7518]: I0313 12:51:42.541661 7518 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-wtf6j" podUID="45925a5e-41ae-4c19-b586-3151c7677612" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 12:51:43.541856 master-0 kubenswrapper[7518]: I0313 12:51:43.541767 7518 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-wtf6j container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 12:51:43.541856 master-0 kubenswrapper[7518]: [-]has-synced failed: reason withheld Mar 13 12:51:43.541856 master-0 kubenswrapper[7518]: [+]process-running ok Mar 13 12:51:43.541856 master-0 kubenswrapper[7518]: healthz check failed Mar 13 12:51:43.541856 master-0 kubenswrapper[7518]: I0313 12:51:43.541839 7518 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-wtf6j" podUID="45925a5e-41ae-4c19-b586-3151c7677612" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 12:51:44.541822 master-0 kubenswrapper[7518]: I0313 12:51:44.541755 7518 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-wtf6j container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 12:51:44.541822 master-0 kubenswrapper[7518]: [-]has-synced failed: reason withheld Mar 13 12:51:44.541822 master-0 kubenswrapper[7518]: [+]process-running ok Mar 13 12:51:44.541822 master-0 kubenswrapper[7518]: healthz 
check failed Mar 13 12:51:44.542672 master-0 kubenswrapper[7518]: I0313 12:51:44.541840 7518 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-wtf6j" podUID="45925a5e-41ae-4c19-b586-3151c7677612" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 12:51:45.542096 master-0 kubenswrapper[7518]: I0313 12:51:45.541980 7518 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-wtf6j container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 12:51:45.542096 master-0 kubenswrapper[7518]: [-]has-synced failed: reason withheld Mar 13 12:51:45.542096 master-0 kubenswrapper[7518]: [+]process-running ok Mar 13 12:51:45.542096 master-0 kubenswrapper[7518]: healthz check failed Mar 13 12:51:45.542096 master-0 kubenswrapper[7518]: I0313 12:51:45.542077 7518 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-wtf6j" podUID="45925a5e-41ae-4c19-b586-3151c7677612" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 12:51:46.543681 master-0 kubenswrapper[7518]: I0313 12:51:46.543614 7518 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-wtf6j container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 12:51:46.543681 master-0 kubenswrapper[7518]: [-]has-synced failed: reason withheld Mar 13 12:51:46.543681 master-0 kubenswrapper[7518]: [+]process-running ok Mar 13 12:51:46.543681 master-0 kubenswrapper[7518]: healthz check failed Mar 13 12:51:46.544527 master-0 kubenswrapper[7518]: I0313 12:51:46.543684 7518 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-wtf6j" 
podUID="45925a5e-41ae-4c19-b586-3151c7677612" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 12:51:47.141175 master-0 kubenswrapper[7518]: I0313 12:51:47.139456 7518 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/installer-2-master-0"] Mar 13 12:51:47.141175 master-0 kubenswrapper[7518]: I0313 12:51:47.140562 7518 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-2-master-0" Mar 13 12:51:47.147483 master-0 kubenswrapper[7518]: I0313 12:51:47.147446 7518 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Mar 13 12:51:47.152306 master-0 kubenswrapper[7518]: I0313 12:51:47.152257 7518 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-2-master-0"] Mar 13 12:51:47.154160 master-0 kubenswrapper[7518]: I0313 12:51:47.154098 7518 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-x9hp4" Mar 13 12:51:47.190256 master-0 kubenswrapper[7518]: I0313 12:51:47.186917 7518 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/1068645c-59cb-46a1-a8fd-6e91a453e4f8-kubelet-dir\") pod \"installer-2-master-0\" (UID: \"1068645c-59cb-46a1-a8fd-6e91a453e4f8\") " pod="openshift-kube-apiserver/installer-2-master-0" Mar 13 12:51:47.190256 master-0 kubenswrapper[7518]: I0313 12:51:47.187002 7518 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1068645c-59cb-46a1-a8fd-6e91a453e4f8-kube-api-access\") pod \"installer-2-master-0\" (UID: \"1068645c-59cb-46a1-a8fd-6e91a453e4f8\") " pod="openshift-kube-apiserver/installer-2-master-0" Mar 13 12:51:47.190256 master-0 kubenswrapper[7518]: I0313 
12:51:47.187177 7518 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/1068645c-59cb-46a1-a8fd-6e91a453e4f8-var-lock\") pod \"installer-2-master-0\" (UID: \"1068645c-59cb-46a1-a8fd-6e91a453e4f8\") " pod="openshift-kube-apiserver/installer-2-master-0" Mar 13 12:51:47.288324 master-0 kubenswrapper[7518]: I0313 12:51:47.288281 7518 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1068645c-59cb-46a1-a8fd-6e91a453e4f8-kube-api-access\") pod \"installer-2-master-0\" (UID: \"1068645c-59cb-46a1-a8fd-6e91a453e4f8\") " pod="openshift-kube-apiserver/installer-2-master-0" Mar 13 12:51:47.288601 master-0 kubenswrapper[7518]: I0313 12:51:47.288586 7518 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/1068645c-59cb-46a1-a8fd-6e91a453e4f8-var-lock\") pod \"installer-2-master-0\" (UID: \"1068645c-59cb-46a1-a8fd-6e91a453e4f8\") " pod="openshift-kube-apiserver/installer-2-master-0" Mar 13 12:51:47.288713 master-0 kubenswrapper[7518]: I0313 12:51:47.288672 7518 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/1068645c-59cb-46a1-a8fd-6e91a453e4f8-var-lock\") pod \"installer-2-master-0\" (UID: \"1068645c-59cb-46a1-a8fd-6e91a453e4f8\") " pod="openshift-kube-apiserver/installer-2-master-0" Mar 13 12:51:47.288783 master-0 kubenswrapper[7518]: I0313 12:51:47.288771 7518 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/1068645c-59cb-46a1-a8fd-6e91a453e4f8-kubelet-dir\") pod \"installer-2-master-0\" (UID: \"1068645c-59cb-46a1-a8fd-6e91a453e4f8\") " pod="openshift-kube-apiserver/installer-2-master-0" Mar 13 12:51:47.288887 master-0 kubenswrapper[7518]: I0313 12:51:47.288812 7518 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/1068645c-59cb-46a1-a8fd-6e91a453e4f8-kubelet-dir\") pod \"installer-2-master-0\" (UID: \"1068645c-59cb-46a1-a8fd-6e91a453e4f8\") " pod="openshift-kube-apiserver/installer-2-master-0" Mar 13 12:51:47.306065 master-0 kubenswrapper[7518]: I0313 12:51:47.305997 7518 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1068645c-59cb-46a1-a8fd-6e91a453e4f8-kube-api-access\") pod \"installer-2-master-0\" (UID: \"1068645c-59cb-46a1-a8fd-6e91a453e4f8\") " pod="openshift-kube-apiserver/installer-2-master-0" Mar 13 12:51:47.463041 master-0 kubenswrapper[7518]: I0313 12:51:47.462917 7518 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-2-master-0" Mar 13 12:51:47.541870 master-0 kubenswrapper[7518]: I0313 12:51:47.541794 7518 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-wtf6j container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 12:51:47.541870 master-0 kubenswrapper[7518]: [-]has-synced failed: reason withheld Mar 13 12:51:47.541870 master-0 kubenswrapper[7518]: [+]process-running ok Mar 13 12:51:47.541870 master-0 kubenswrapper[7518]: healthz check failed Mar 13 12:51:47.542182 master-0 kubenswrapper[7518]: I0313 12:51:47.541896 7518 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-wtf6j" podUID="45925a5e-41ae-4c19-b586-3151c7677612" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 12:51:47.603107 master-0 kubenswrapper[7518]: I0313 12:51:47.603072 7518 scope.go:117] "RemoveContainer" containerID="c5dac29410c608c592ce2da4d646f5dae37752b356e4a615b5b9f8033e660a03" Mar 13 
12:51:47.603563 master-0 kubenswrapper[7518]: E0313 12:51:47.603296 7518 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"snapshot-controller\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=snapshot-controller pod=csi-snapshot-controller-7577d6f48-pjpn2_openshift-cluster-storage-operator(c642c18f-f960-4418-bcb7-df884f8f8ad5)\"" pod="openshift-cluster-storage-operator/csi-snapshot-controller-7577d6f48-pjpn2" podUID="c642c18f-f960-4418-bcb7-df884f8f8ad5" Mar 13 12:51:47.956550 master-0 kubenswrapper[7518]: I0313 12:51:47.956508 7518 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-2-master-0"] Mar 13 12:51:48.334320 master-0 kubenswrapper[7518]: I0313 12:51:48.334265 7518 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-2-master-0" event={"ID":"1068645c-59cb-46a1-a8fd-6e91a453e4f8","Type":"ContainerStarted","Data":"a6c791da190986f00f4311e447074f11476893db89635c9eee711eeebe5edf41"} Mar 13 12:51:48.334320 master-0 kubenswrapper[7518]: I0313 12:51:48.334317 7518 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-2-master-0" event={"ID":"1068645c-59cb-46a1-a8fd-6e91a453e4f8","Type":"ContainerStarted","Data":"87e792644fcb45b717c9edfbbb9b45c62b018d5b0446987b41db9c0835974fce"} Mar 13 12:51:48.352030 master-0 kubenswrapper[7518]: I0313 12:51:48.351957 7518 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/installer-2-master-0" podStartSLOduration=1.35194035 podStartE2EDuration="1.35194035s" podCreationTimestamp="2026-03-13 12:51:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 12:51:48.350113842 +0000 UTC m=+862.983183039" watchObservedRunningTime="2026-03-13 12:51:48.35194035 +0000 UTC m=+862.985009527" Mar 13 12:51:48.541712 master-0 
kubenswrapper[7518]: I0313 12:51:48.541652 7518 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-wtf6j container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 12:51:48.541712 master-0 kubenswrapper[7518]: [-]has-synced failed: reason withheld Mar 13 12:51:48.541712 master-0 kubenswrapper[7518]: [+]process-running ok Mar 13 12:51:48.541712 master-0 kubenswrapper[7518]: healthz check failed Mar 13 12:51:48.541712 master-0 kubenswrapper[7518]: I0313 12:51:48.541714 7518 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-wtf6j" podUID="45925a5e-41ae-4c19-b586-3151c7677612" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 12:51:49.541807 master-0 kubenswrapper[7518]: I0313 12:51:49.541727 7518 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-wtf6j container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 12:51:49.541807 master-0 kubenswrapper[7518]: [-]has-synced failed: reason withheld Mar 13 12:51:49.541807 master-0 kubenswrapper[7518]: [+]process-running ok Mar 13 12:51:49.541807 master-0 kubenswrapper[7518]: healthz check failed Mar 13 12:51:49.542732 master-0 kubenswrapper[7518]: I0313 12:51:49.541851 7518 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-wtf6j" podUID="45925a5e-41ae-4c19-b586-3151c7677612" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 12:51:50.541981 master-0 kubenswrapper[7518]: I0313 12:51:50.541927 7518 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-wtf6j container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed 
with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 12:51:50.541981 master-0 kubenswrapper[7518]: [-]has-synced failed: reason withheld Mar 13 12:51:50.541981 master-0 kubenswrapper[7518]: [+]process-running ok Mar 13 12:51:50.541981 master-0 kubenswrapper[7518]: healthz check failed Mar 13 12:51:50.542540 master-0 kubenswrapper[7518]: I0313 12:51:50.541997 7518 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-wtf6j" podUID="45925a5e-41ae-4c19-b586-3151c7677612" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 12:51:51.542064 master-0 kubenswrapper[7518]: I0313 12:51:51.541937 7518 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-wtf6j container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 12:51:51.542064 master-0 kubenswrapper[7518]: [-]has-synced failed: reason withheld Mar 13 12:51:51.542064 master-0 kubenswrapper[7518]: [+]process-running ok Mar 13 12:51:51.542064 master-0 kubenswrapper[7518]: healthz check failed Mar 13 12:51:51.542064 master-0 kubenswrapper[7518]: I0313 12:51:51.542058 7518 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-wtf6j" podUID="45925a5e-41ae-4c19-b586-3151c7677612" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 12:51:51.598939 master-0 kubenswrapper[7518]: I0313 12:51:51.598872 7518 scope.go:117] "RemoveContainer" containerID="70ca563a3bda7cc49d130c71d95d6db991e5796cde50a910c3e63400c9e5a03b" Mar 13 12:51:51.599480 master-0 kubenswrapper[7518]: E0313 12:51:51.599426 7518 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 2m40s restarting failed 
container=kube-controller-manager pod=bootstrap-kube-controller-manager-master-0_kube-system(f78c05e1499b533b83f091333d61f045)\"" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="f78c05e1499b533b83f091333d61f045" Mar 13 12:51:52.542062 master-0 kubenswrapper[7518]: I0313 12:51:52.541941 7518 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-wtf6j container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 12:51:52.542062 master-0 kubenswrapper[7518]: [-]has-synced failed: reason withheld Mar 13 12:51:52.542062 master-0 kubenswrapper[7518]: [+]process-running ok Mar 13 12:51:52.542062 master-0 kubenswrapper[7518]: healthz check failed Mar 13 12:51:52.542062 master-0 kubenswrapper[7518]: I0313 12:51:52.542032 7518 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-wtf6j" podUID="45925a5e-41ae-4c19-b586-3151c7677612" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 12:51:53.541820 master-0 kubenswrapper[7518]: I0313 12:51:53.541758 7518 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-wtf6j container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 12:51:53.541820 master-0 kubenswrapper[7518]: [-]has-synced failed: reason withheld Mar 13 12:51:53.541820 master-0 kubenswrapper[7518]: [+]process-running ok Mar 13 12:51:53.541820 master-0 kubenswrapper[7518]: healthz check failed Mar 13 12:51:53.542109 master-0 kubenswrapper[7518]: I0313 12:51:53.541834 7518 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-wtf6j" podUID="45925a5e-41ae-4c19-b586-3151c7677612" containerName="router" probeResult="failure" output="HTTP probe failed with 
statuscode: 500" Mar 13 12:51:53.599054 master-0 kubenswrapper[7518]: I0313 12:51:53.598987 7518 scope.go:117] "RemoveContainer" containerID="25a4898dab96b21910d2f9f74a6d0f38ac67afd0471454539094f0cdc130c4f5" Mar 13 12:51:54.372589 master-0 kubenswrapper[7518]: I0313 12:51:54.372535 7518 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress-operator_ingress-operator-677db989d6-ckl2j_2f79578c-bbfb-4968-893a-730deb4c01f9/ingress-operator/4.log" Mar 13 12:51:54.372998 master-0 kubenswrapper[7518]: I0313 12:51:54.372963 7518 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-677db989d6-ckl2j" event={"ID":"2f79578c-bbfb-4968-893a-730deb4c01f9","Type":"ContainerStarted","Data":"062296caf4aa99e0b771a3fc7c5b24a99b64a55a1235aefba1f6f98aec258e8a"} Mar 13 12:51:54.541826 master-0 kubenswrapper[7518]: I0313 12:51:54.541777 7518 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-wtf6j container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 12:51:54.541826 master-0 kubenswrapper[7518]: [-]has-synced failed: reason withheld Mar 13 12:51:54.541826 master-0 kubenswrapper[7518]: [+]process-running ok Mar 13 12:51:54.541826 master-0 kubenswrapper[7518]: healthz check failed Mar 13 12:51:54.542156 master-0 kubenswrapper[7518]: I0313 12:51:54.541850 7518 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-wtf6j" podUID="45925a5e-41ae-4c19-b586-3151c7677612" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 12:51:55.541677 master-0 kubenswrapper[7518]: I0313 12:51:55.541575 7518 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-wtf6j container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" 
start-of-body=[-]backend-http failed: reason withheld Mar 13 12:51:55.541677 master-0 kubenswrapper[7518]: [-]has-synced failed: reason withheld Mar 13 12:51:55.541677 master-0 kubenswrapper[7518]: [+]process-running ok Mar 13 12:51:55.541677 master-0 kubenswrapper[7518]: healthz check failed Mar 13 12:51:55.541677 master-0 kubenswrapper[7518]: I0313 12:51:55.541663 7518 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-wtf6j" podUID="45925a5e-41ae-4c19-b586-3151c7677612" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 12:51:56.542572 master-0 kubenswrapper[7518]: I0313 12:51:56.542503 7518 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-wtf6j container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 12:51:56.542572 master-0 kubenswrapper[7518]: [-]has-synced failed: reason withheld Mar 13 12:51:56.542572 master-0 kubenswrapper[7518]: [+]process-running ok Mar 13 12:51:56.542572 master-0 kubenswrapper[7518]: healthz check failed Mar 13 12:51:56.543268 master-0 kubenswrapper[7518]: I0313 12:51:56.542583 7518 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-wtf6j" podUID="45925a5e-41ae-4c19-b586-3151c7677612" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 12:51:57.541630 master-0 kubenswrapper[7518]: I0313 12:51:57.541561 7518 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-wtf6j container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 12:51:57.541630 master-0 kubenswrapper[7518]: [-]has-synced failed: reason withheld Mar 13 12:51:57.541630 master-0 kubenswrapper[7518]: [+]process-running ok 
Mar 13 12:51:57.541630 master-0 kubenswrapper[7518]: healthz check failed Mar 13 12:51:57.541998 master-0 kubenswrapper[7518]: I0313 12:51:57.541633 7518 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-wtf6j" podUID="45925a5e-41ae-4c19-b586-3151c7677612" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 12:51:58.542826 master-0 kubenswrapper[7518]: I0313 12:51:58.542701 7518 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-wtf6j container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 12:51:58.542826 master-0 kubenswrapper[7518]: [-]has-synced failed: reason withheld Mar 13 12:51:58.542826 master-0 kubenswrapper[7518]: [+]process-running ok Mar 13 12:51:58.542826 master-0 kubenswrapper[7518]: healthz check failed Mar 13 12:51:58.542826 master-0 kubenswrapper[7518]: I0313 12:51:58.542816 7518 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-wtf6j" podUID="45925a5e-41ae-4c19-b586-3151c7677612" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 12:51:58.598766 master-0 kubenswrapper[7518]: I0313 12:51:58.598671 7518 scope.go:117] "RemoveContainer" containerID="c5dac29410c608c592ce2da4d646f5dae37752b356e4a615b5b9f8033e660a03" Mar 13 12:51:58.599038 master-0 kubenswrapper[7518]: E0313 12:51:58.598973 7518 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"snapshot-controller\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=snapshot-controller pod=csi-snapshot-controller-7577d6f48-pjpn2_openshift-cluster-storage-operator(c642c18f-f960-4418-bcb7-df884f8f8ad5)\"" pod="openshift-cluster-storage-operator/csi-snapshot-controller-7577d6f48-pjpn2" 
podUID="c642c18f-f960-4418-bcb7-df884f8f8ad5" Mar 13 12:51:59.541228 master-0 kubenswrapper[7518]: I0313 12:51:59.541161 7518 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-wtf6j container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 12:51:59.541228 master-0 kubenswrapper[7518]: [-]has-synced failed: reason withheld Mar 13 12:51:59.541228 master-0 kubenswrapper[7518]: [+]process-running ok Mar 13 12:51:59.541228 master-0 kubenswrapper[7518]: healthz check failed Mar 13 12:51:59.541513 master-0 kubenswrapper[7518]: I0313 12:51:59.541261 7518 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-wtf6j" podUID="45925a5e-41ae-4c19-b586-3151c7677612" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 12:52:00.458288 master-0 kubenswrapper[7518]: I0313 12:52:00.458247 7518 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["kube-system/bootstrap-kube-scheduler-master-0"] Mar 13 12:52:00.458713 master-0 kubenswrapper[7518]: I0313 12:52:00.458494 7518 kuberuntime_container.go:808] "Killing container with a grace period" pod="kube-system/bootstrap-kube-scheduler-master-0" podUID="a1a56802af72ce1aac6b5077f1695ac0" containerName="kube-scheduler" containerID="cri-o://33dc3b8e25f77fb05b589ec8e3e510dade539a78b8f7492825619e6eaad51fe9" gracePeriod=30 Mar 13 12:52:00.459682 master-0 kubenswrapper[7518]: I0313 12:52:00.459627 7518 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-scheduler/openshift-kube-scheduler-master-0"] Mar 13 12:52:00.460026 master-0 kubenswrapper[7518]: E0313 12:52:00.459992 7518 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a1a56802af72ce1aac6b5077f1695ac0" containerName="kube-scheduler" Mar 13 12:52:00.460026 master-0 kubenswrapper[7518]: I0313 12:52:00.460017 7518 
state_mem.go:107] "Deleted CPUSet assignment" podUID="a1a56802af72ce1aac6b5077f1695ac0" containerName="kube-scheduler" Mar 13 12:52:00.460170 master-0 kubenswrapper[7518]: E0313 12:52:00.460047 7518 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a1a56802af72ce1aac6b5077f1695ac0" containerName="kube-scheduler" Mar 13 12:52:00.460170 master-0 kubenswrapper[7518]: I0313 12:52:00.460057 7518 state_mem.go:107] "Deleted CPUSet assignment" podUID="a1a56802af72ce1aac6b5077f1695ac0" containerName="kube-scheduler" Mar 13 12:52:00.460273 master-0 kubenswrapper[7518]: I0313 12:52:00.460253 7518 memory_manager.go:354] "RemoveStaleState removing state" podUID="a1a56802af72ce1aac6b5077f1695ac0" containerName="kube-scheduler" Mar 13 12:52:00.460337 master-0 kubenswrapper[7518]: I0313 12:52:00.460279 7518 memory_manager.go:354] "RemoveStaleState removing state" podUID="a1a56802af72ce1aac6b5077f1695ac0" containerName="kube-scheduler" Mar 13 12:52:00.460467 master-0 kubenswrapper[7518]: E0313 12:52:00.460432 7518 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a1a56802af72ce1aac6b5077f1695ac0" containerName="kube-scheduler" Mar 13 12:52:00.460467 master-0 kubenswrapper[7518]: I0313 12:52:00.460451 7518 state_mem.go:107] "Deleted CPUSet assignment" podUID="a1a56802af72ce1aac6b5077f1695ac0" containerName="kube-scheduler" Mar 13 12:52:00.460625 master-0 kubenswrapper[7518]: I0313 12:52:00.460602 7518 memory_manager.go:354] "RemoveStaleState removing state" podUID="a1a56802af72ce1aac6b5077f1695ac0" containerName="kube-scheduler" Mar 13 12:52:00.461726 master-0 kubenswrapper[7518]: I0313 12:52:00.461702 7518 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" Mar 13 12:52:00.542442 master-0 kubenswrapper[7518]: I0313 12:52:00.542368 7518 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-wtf6j container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 12:52:00.542442 master-0 kubenswrapper[7518]: [-]has-synced failed: reason withheld Mar 13 12:52:00.542442 master-0 kubenswrapper[7518]: [+]process-running ok Mar 13 12:52:00.542442 master-0 kubenswrapper[7518]: healthz check failed Mar 13 12:52:00.542442 master-0 kubenswrapper[7518]: I0313 12:52:00.542440 7518 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-wtf6j" podUID="45925a5e-41ae-4c19-b586-3151c7677612" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 12:52:00.578194 master-0 kubenswrapper[7518]: I0313 12:52:00.578078 7518 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/openshift-kube-scheduler-master-0"] Mar 13 12:52:00.671563 master-0 kubenswrapper[7518]: I0313 12:52:00.670218 7518 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/1d3d45b6ce1b3764f9927e623a71adf8-cert-dir\") pod \"openshift-kube-scheduler-master-0\" (UID: \"1d3d45b6ce1b3764f9927e623a71adf8\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" Mar 13 12:52:00.671563 master-0 kubenswrapper[7518]: I0313 12:52:00.670383 7518 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/1d3d45b6ce1b3764f9927e623a71adf8-resource-dir\") pod \"openshift-kube-scheduler-master-0\" (UID: \"1d3d45b6ce1b3764f9927e623a71adf8\") " 
pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" Mar 13 12:52:00.772169 master-0 kubenswrapper[7518]: I0313 12:52:00.772089 7518 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/1d3d45b6ce1b3764f9927e623a71adf8-cert-dir\") pod \"openshift-kube-scheduler-master-0\" (UID: \"1d3d45b6ce1b3764f9927e623a71adf8\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" Mar 13 12:52:00.772403 master-0 kubenswrapper[7518]: I0313 12:52:00.772186 7518 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/1d3d45b6ce1b3764f9927e623a71adf8-resource-dir\") pod \"openshift-kube-scheduler-master-0\" (UID: \"1d3d45b6ce1b3764f9927e623a71adf8\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" Mar 13 12:52:00.772403 master-0 kubenswrapper[7518]: I0313 12:52:00.772241 7518 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/1d3d45b6ce1b3764f9927e623a71adf8-cert-dir\") pod \"openshift-kube-scheduler-master-0\" (UID: \"1d3d45b6ce1b3764f9927e623a71adf8\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" Mar 13 12:52:00.772403 master-0 kubenswrapper[7518]: I0313 12:52:00.772313 7518 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/1d3d45b6ce1b3764f9927e623a71adf8-resource-dir\") pod \"openshift-kube-scheduler-master-0\" (UID: \"1d3d45b6ce1b3764f9927e623a71adf8\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" Mar 13 12:52:00.785865 master-0 kubenswrapper[7518]: I0313 12:52:00.785833 7518 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="kube-system/bootstrap-kube-scheduler-master-0" Mar 13 12:52:00.819493 master-0 kubenswrapper[7518]: I0313 12:52:00.819451 7518 kubelet.go:2706] "Unable to find pod for mirror pod, skipping" mirrorPod="kube-system/bootstrap-kube-scheduler-master-0" mirrorPodUID="f64b5fcc-cad4-48a4-a0c1-74a70da407fb" Mar 13 12:52:00.873472 master-0 kubenswrapper[7518]: I0313 12:52:00.873427 7518 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" Mar 13 12:52:00.975719 master-0 kubenswrapper[7518]: I0313 12:52:00.975656 7518 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/a1a56802af72ce1aac6b5077f1695ac0-logs\") pod \"a1a56802af72ce1aac6b5077f1695ac0\" (UID: \"a1a56802af72ce1aac6b5077f1695ac0\") " Mar 13 12:52:00.975967 master-0 kubenswrapper[7518]: I0313 12:52:00.975744 7518 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/a1a56802af72ce1aac6b5077f1695ac0-secrets\") pod \"a1a56802af72ce1aac6b5077f1695ac0\" (UID: \"a1a56802af72ce1aac6b5077f1695ac0\") " Mar 13 12:52:00.975967 master-0 kubenswrapper[7518]: I0313 12:52:00.975852 7518 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a1a56802af72ce1aac6b5077f1695ac0-logs" (OuterVolumeSpecName: "logs") pod "a1a56802af72ce1aac6b5077f1695ac0" (UID: "a1a56802af72ce1aac6b5077f1695ac0"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 13 12:52:00.975967 master-0 kubenswrapper[7518]: I0313 12:52:00.975953 7518 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a1a56802af72ce1aac6b5077f1695ac0-secrets" (OuterVolumeSpecName: "secrets") pod "a1a56802af72ce1aac6b5077f1695ac0" (UID: "a1a56802af72ce1aac6b5077f1695ac0"). InnerVolumeSpecName "secrets". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 13 12:52:00.976177 master-0 kubenswrapper[7518]: I0313 12:52:00.976149 7518 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/a1a56802af72ce1aac6b5077f1695ac0-logs\") on node \"master-0\" DevicePath \"\"" Mar 13 12:52:00.976177 master-0 kubenswrapper[7518]: I0313 12:52:00.976168 7518 reconciler_common.go:293] "Volume detached for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/a1a56802af72ce1aac6b5077f1695ac0-secrets\") on node \"master-0\" DevicePath \"\"" Mar 13 12:52:01.433701 master-0 kubenswrapper[7518]: I0313 12:52:01.433638 7518 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" event={"ID":"1d3d45b6ce1b3764f9927e623a71adf8","Type":"ContainerStarted","Data":"99f60a9917159f67e21cff877f3b3713d431c9f87ce5701cead4b373893dbcfe"} Mar 13 12:52:01.433895 master-0 kubenswrapper[7518]: I0313 12:52:01.433713 7518 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" event={"ID":"1d3d45b6ce1b3764f9927e623a71adf8","Type":"ContainerStarted","Data":"e5d948e1273e1470220b867c9ba1e37989036f410ce6248f38bb7d0ec9cfa912"} Mar 13 12:52:01.435599 master-0 kubenswrapper[7518]: I0313 12:52:01.435566 7518 generic.go:334] "Generic (PLEG): container finished" podID="a1a56802af72ce1aac6b5077f1695ac0" containerID="33dc3b8e25f77fb05b589ec8e3e510dade539a78b8f7492825619e6eaad51fe9" exitCode=0 Mar 13 12:52:01.435677 master-0 kubenswrapper[7518]: I0313 12:52:01.435623 7518 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d54f9c86fd46be5581997805399dc61e82749fea5be883d188b4c6364d1d55b9" Mar 13 12:52:01.435677 master-0 kubenswrapper[7518]: I0313 12:52:01.435643 7518 scope.go:117] "RemoveContainer" containerID="74509294773fbb5f73a8dd8c9003ceebee4b1e194cad14d7465b52eca3b8eaab" Mar 13 12:52:01.435677 master-0 
kubenswrapper[7518]: I0313 12:52:01.435665 7518 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="kube-system/bootstrap-kube-scheduler-master-0" Mar 13 12:52:01.438479 master-0 kubenswrapper[7518]: I0313 12:52:01.438447 7518 generic.go:334] "Generic (PLEG): container finished" podID="03479326-c13f-40bb-9ed2-580bb05917a7" containerID="69ec82e15f99ac8946fd6f0ae65cca8b0db2d9d210589323567d60bcf1d59e01" exitCode=0 Mar 13 12:52:01.438528 master-0 kubenswrapper[7518]: I0313 12:52:01.438476 7518 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-4-retry-1-master-0" event={"ID":"03479326-c13f-40bb-9ed2-580bb05917a7","Type":"ContainerDied","Data":"69ec82e15f99ac8946fd6f0ae65cca8b0db2d9d210589323567d60bcf1d59e01"} Mar 13 12:52:01.542415 master-0 kubenswrapper[7518]: I0313 12:52:01.542338 7518 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-wtf6j container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 12:52:01.542415 master-0 kubenswrapper[7518]: [-]has-synced failed: reason withheld Mar 13 12:52:01.542415 master-0 kubenswrapper[7518]: [+]process-running ok Mar 13 12:52:01.542415 master-0 kubenswrapper[7518]: healthz check failed Mar 13 12:52:01.542979 master-0 kubenswrapper[7518]: I0313 12:52:01.542407 7518 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-wtf6j" podUID="45925a5e-41ae-4c19-b586-3151c7677612" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 12:52:01.619339 master-0 kubenswrapper[7518]: I0313 12:52:01.618589 7518 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a1a56802af72ce1aac6b5077f1695ac0" path="/var/lib/kubelet/pods/a1a56802af72ce1aac6b5077f1695ac0/volumes" Mar 13 12:52:01.619339 master-0 kubenswrapper[7518]: I0313 
12:52:01.618990 7518 mirror_client.go:130] "Deleting a mirror pod" pod="kube-system/bootstrap-kube-scheduler-master-0" podUID="" Mar 13 12:52:01.806833 master-0 kubenswrapper[7518]: I0313 12:52:01.806804 7518 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["kube-system/bootstrap-kube-scheduler-master-0"] Mar 13 12:52:01.806955 master-0 kubenswrapper[7518]: I0313 12:52:01.806938 7518 kubelet.go:2649] "Unable to find pod for mirror pod, skipping" mirrorPod="kube-system/bootstrap-kube-scheduler-master-0" mirrorPodUID="f64b5fcc-cad4-48a4-a0c1-74a70da407fb" Mar 13 12:52:01.813165 master-0 kubenswrapper[7518]: I0313 12:52:01.813104 7518 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["kube-system/bootstrap-kube-scheduler-master-0"] Mar 13 12:52:01.813385 master-0 kubenswrapper[7518]: I0313 12:52:01.813365 7518 kubelet.go:2673] "Unable to find pod for mirror pod, skipping" mirrorPod="kube-system/bootstrap-kube-scheduler-master-0" mirrorPodUID="f64b5fcc-cad4-48a4-a0c1-74a70da407fb" Mar 13 12:52:02.447290 master-0 kubenswrapper[7518]: I0313 12:52:02.447220 7518 generic.go:334] "Generic (PLEG): container finished" podID="1d3d45b6ce1b3764f9927e623a71adf8" containerID="99f60a9917159f67e21cff877f3b3713d431c9f87ce5701cead4b373893dbcfe" exitCode=0 Mar 13 12:52:02.447290 master-0 kubenswrapper[7518]: I0313 12:52:02.447280 7518 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" event={"ID":"1d3d45b6ce1b3764f9927e623a71adf8","Type":"ContainerDied","Data":"99f60a9917159f67e21cff877f3b3713d431c9f87ce5701cead4b373893dbcfe"} Mar 13 12:52:02.542334 master-0 kubenswrapper[7518]: I0313 12:52:02.542259 7518 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-wtf6j container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 12:52:02.542334 master-0 kubenswrapper[7518]: [-]has-synced 
failed: reason withheld Mar 13 12:52:02.542334 master-0 kubenswrapper[7518]: [+]process-running ok Mar 13 12:52:02.542334 master-0 kubenswrapper[7518]: healthz check failed Mar 13 12:52:02.542957 master-0 kubenswrapper[7518]: I0313 12:52:02.542652 7518 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-wtf6j" podUID="45925a5e-41ae-4c19-b586-3151c7677612" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 12:52:02.608967 master-0 kubenswrapper[7518]: I0313 12:52:02.608911 7518 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-apiserver/installer-2-master-0"] Mar 13 12:52:02.609194 master-0 kubenswrapper[7518]: I0313 12:52:02.609164 7518 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/installer-2-master-0" podUID="1068645c-59cb-46a1-a8fd-6e91a453e4f8" containerName="installer" containerID="cri-o://a6c791da190986f00f4311e447074f11476893db89635c9eee711eeebe5edf41" gracePeriod=30 Mar 13 12:52:02.751668 master-0 kubenswrapper[7518]: I0313 12:52:02.751627 7518 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/installer-4-retry-1-master-0" Mar 13 12:52:02.805455 master-0 kubenswrapper[7518]: I0313 12:52:02.805401 7518 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/03479326-c13f-40bb-9ed2-580bb05917a7-var-lock\") pod \"03479326-c13f-40bb-9ed2-580bb05917a7\" (UID: \"03479326-c13f-40bb-9ed2-580bb05917a7\") " Mar 13 12:52:02.805653 master-0 kubenswrapper[7518]: I0313 12:52:02.805472 7518 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/03479326-c13f-40bb-9ed2-580bb05917a7-var-lock" (OuterVolumeSpecName: "var-lock") pod "03479326-c13f-40bb-9ed2-580bb05917a7" (UID: "03479326-c13f-40bb-9ed2-580bb05917a7"). 
InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 13 12:52:02.805653 master-0 kubenswrapper[7518]: I0313 12:52:02.805496 7518 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/03479326-c13f-40bb-9ed2-580bb05917a7-kube-api-access\") pod \"03479326-c13f-40bb-9ed2-580bb05917a7\" (UID: \"03479326-c13f-40bb-9ed2-580bb05917a7\") " Mar 13 12:52:02.805653 master-0 kubenswrapper[7518]: I0313 12:52:02.805562 7518 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/03479326-c13f-40bb-9ed2-580bb05917a7-kubelet-dir\") pod \"03479326-c13f-40bb-9ed2-580bb05917a7\" (UID: \"03479326-c13f-40bb-9ed2-580bb05917a7\") " Mar 13 12:52:02.805849 master-0 kubenswrapper[7518]: I0313 12:52:02.805699 7518 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/03479326-c13f-40bb-9ed2-580bb05917a7-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "03479326-c13f-40bb-9ed2-580bb05917a7" (UID: "03479326-c13f-40bb-9ed2-580bb05917a7"). InnerVolumeSpecName "kubelet-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 13 12:52:02.805899 master-0 kubenswrapper[7518]: I0313 12:52:02.805846 7518 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/03479326-c13f-40bb-9ed2-580bb05917a7-kubelet-dir\") on node \"master-0\" DevicePath \"\"" Mar 13 12:52:02.805899 master-0 kubenswrapper[7518]: I0313 12:52:02.805862 7518 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/03479326-c13f-40bb-9ed2-580bb05917a7-var-lock\") on node \"master-0\" DevicePath \"\"" Mar 13 12:52:02.808086 master-0 kubenswrapper[7518]: I0313 12:52:02.808051 7518 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/03479326-c13f-40bb-9ed2-580bb05917a7-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "03479326-c13f-40bb-9ed2-580bb05917a7" (UID: "03479326-c13f-40bb-9ed2-580bb05917a7"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 12:52:02.907397 master-0 kubenswrapper[7518]: I0313 12:52:02.907323 7518 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/03479326-c13f-40bb-9ed2-580bb05917a7-kube-api-access\") on node \"master-0\" DevicePath \"\"" Mar 13 12:52:03.472714 master-0 kubenswrapper[7518]: I0313 12:52:03.472672 7518 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/installer-4-retry-1-master-0" Mar 13 12:52:03.473094 master-0 kubenswrapper[7518]: I0313 12:52:03.472725 7518 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-4-retry-1-master-0" event={"ID":"03479326-c13f-40bb-9ed2-580bb05917a7","Type":"ContainerDied","Data":"f9e1bcdaf83648cf25ec570e9e2bb43dc99c079203b2fc846498f786f34dd1ec"} Mar 13 12:52:03.473233 master-0 kubenswrapper[7518]: I0313 12:52:03.473105 7518 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f9e1bcdaf83648cf25ec570e9e2bb43dc99c079203b2fc846498f786f34dd1ec" Mar 13 12:52:03.478639 master-0 kubenswrapper[7518]: I0313 12:52:03.478589 7518 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" event={"ID":"1d3d45b6ce1b3764f9927e623a71adf8","Type":"ContainerStarted","Data":"4ed0d6bdd25c3a628bbe00eefea5be982aa9c7f4f1c14c257f7f949dd48fd173"} Mar 13 12:52:03.478755 master-0 kubenswrapper[7518]: I0313 12:52:03.478642 7518 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" event={"ID":"1d3d45b6ce1b3764f9927e623a71adf8","Type":"ContainerStarted","Data":"161f4ac8ebbd0b62feb8e2f6594eea574edaf4e2547fb7b8335c22584c734092"} Mar 13 12:52:03.542084 master-0 kubenswrapper[7518]: I0313 12:52:03.542014 7518 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-wtf6j container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 12:52:03.542084 master-0 kubenswrapper[7518]: [-]has-synced failed: reason withheld Mar 13 12:52:03.542084 master-0 kubenswrapper[7518]: [+]process-running ok Mar 13 12:52:03.542084 master-0 kubenswrapper[7518]: healthz check failed Mar 13 12:52:03.542084 master-0 kubenswrapper[7518]: I0313 12:52:03.542081 7518 prober.go:107] 
"Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-wtf6j" podUID="45925a5e-41ae-4c19-b586-3151c7677612" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 12:52:04.499621 master-0 kubenswrapper[7518]: I0313 12:52:04.498943 7518 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" event={"ID":"1d3d45b6ce1b3764f9927e623a71adf8","Type":"ContainerStarted","Data":"749f270368b7664256f6cdcccc583a1b9f48d3556ae13c311239ddf410797b9f"} Mar 13 12:52:04.499621 master-0 kubenswrapper[7518]: I0313 12:52:04.499374 7518 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" Mar 13 12:52:04.526104 master-0 kubenswrapper[7518]: I0313 12:52:04.523250 7518 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" podStartSLOduration=4.523230653 podStartE2EDuration="4.523230653s" podCreationTimestamp="2026-03-13 12:52:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 12:52:04.518878727 +0000 UTC m=+879.151947924" watchObservedRunningTime="2026-03-13 12:52:04.523230653 +0000 UTC m=+879.156299850" Mar 13 12:52:04.543691 master-0 kubenswrapper[7518]: I0313 12:52:04.543615 7518 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-wtf6j container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 12:52:04.543691 master-0 kubenswrapper[7518]: [-]has-synced failed: reason withheld Mar 13 12:52:04.543691 master-0 kubenswrapper[7518]: [+]process-running ok Mar 13 12:52:04.543691 master-0 kubenswrapper[7518]: healthz check failed Mar 13 12:52:04.543691 master-0 kubenswrapper[7518]: 
I0313 12:52:04.543682 7518 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-wtf6j" podUID="45925a5e-41ae-4c19-b586-3151c7677612" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 12:52:04.599634 master-0 kubenswrapper[7518]: I0313 12:52:04.599569 7518 scope.go:117] "RemoveContainer" containerID="70ca563a3bda7cc49d130c71d95d6db991e5796cde50a910c3e63400c9e5a03b" Mar 13 12:52:04.600062 master-0 kubenswrapper[7518]: E0313 12:52:04.600018 7518 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=kube-controller-manager pod=bootstrap-kube-controller-manager-master-0_kube-system(f78c05e1499b533b83f091333d61f045)\"" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="f78c05e1499b533b83f091333d61f045" Mar 13 12:52:05.542174 master-0 kubenswrapper[7518]: I0313 12:52:05.542075 7518 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-wtf6j container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 12:52:05.542174 master-0 kubenswrapper[7518]: [-]has-synced failed: reason withheld Mar 13 12:52:05.542174 master-0 kubenswrapper[7518]: [+]process-running ok Mar 13 12:52:05.542174 master-0 kubenswrapper[7518]: healthz check failed Mar 13 12:52:05.542174 master-0 kubenswrapper[7518]: I0313 12:52:05.542175 7518 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-wtf6j" podUID="45925a5e-41ae-4c19-b586-3151c7677612" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 12:52:06.147597 master-0 kubenswrapper[7518]: I0313 12:52:06.147501 7518 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-kube-apiserver/installer-3-master-0"] Mar 13 12:52:06.148693 master-0 kubenswrapper[7518]: E0313 12:52:06.148123 7518 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="03479326-c13f-40bb-9ed2-580bb05917a7" containerName="installer" Mar 13 12:52:06.148693 master-0 kubenswrapper[7518]: I0313 12:52:06.148235 7518 state_mem.go:107] "Deleted CPUSet assignment" podUID="03479326-c13f-40bb-9ed2-580bb05917a7" containerName="installer" Mar 13 12:52:06.148693 master-0 kubenswrapper[7518]: I0313 12:52:06.148539 7518 memory_manager.go:354] "RemoveStaleState removing state" podUID="03479326-c13f-40bb-9ed2-580bb05917a7" containerName="installer" Mar 13 12:52:06.152185 master-0 kubenswrapper[7518]: I0313 12:52:06.149628 7518 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-3-master-0" Mar 13 12:52:06.159193 master-0 kubenswrapper[7518]: I0313 12:52:06.157873 7518 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-3-master-0"] Mar 13 12:52:06.171428 master-0 kubenswrapper[7518]: I0313 12:52:06.170847 7518 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c1f6c3b0-411a-4553-a198-e684b49ec412-kube-api-access\") pod \"installer-3-master-0\" (UID: \"c1f6c3b0-411a-4553-a198-e684b49ec412\") " pod="openshift-kube-apiserver/installer-3-master-0" Mar 13 12:52:06.171683 master-0 kubenswrapper[7518]: I0313 12:52:06.171454 7518 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/c1f6c3b0-411a-4553-a198-e684b49ec412-var-lock\") pod \"installer-3-master-0\" (UID: \"c1f6c3b0-411a-4553-a198-e684b49ec412\") " pod="openshift-kube-apiserver/installer-3-master-0" Mar 13 12:52:06.171683 master-0 kubenswrapper[7518]: I0313 12:52:06.171521 7518 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/c1f6c3b0-411a-4553-a198-e684b49ec412-kubelet-dir\") pod \"installer-3-master-0\" (UID: \"c1f6c3b0-411a-4553-a198-e684b49ec412\") " pod="openshift-kube-apiserver/installer-3-master-0" Mar 13 12:52:06.273367 master-0 kubenswrapper[7518]: I0313 12:52:06.273286 7518 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c1f6c3b0-411a-4553-a198-e684b49ec412-kube-api-access\") pod \"installer-3-master-0\" (UID: \"c1f6c3b0-411a-4553-a198-e684b49ec412\") " pod="openshift-kube-apiserver/installer-3-master-0" Mar 13 12:52:06.273609 master-0 kubenswrapper[7518]: I0313 12:52:06.273570 7518 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/c1f6c3b0-411a-4553-a198-e684b49ec412-var-lock\") pod \"installer-3-master-0\" (UID: \"c1f6c3b0-411a-4553-a198-e684b49ec412\") " pod="openshift-kube-apiserver/installer-3-master-0" Mar 13 12:52:06.273765 master-0 kubenswrapper[7518]: I0313 12:52:06.273743 7518 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/c1f6c3b0-411a-4553-a198-e684b49ec412-var-lock\") pod \"installer-3-master-0\" (UID: \"c1f6c3b0-411a-4553-a198-e684b49ec412\") " pod="openshift-kube-apiserver/installer-3-master-0" Mar 13 12:52:06.273815 master-0 kubenswrapper[7518]: I0313 12:52:06.273803 7518 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/c1f6c3b0-411a-4553-a198-e684b49ec412-kubelet-dir\") pod \"installer-3-master-0\" (UID: \"c1f6c3b0-411a-4553-a198-e684b49ec412\") " pod="openshift-kube-apiserver/installer-3-master-0" Mar 13 12:52:06.273943 master-0 kubenswrapper[7518]: I0313 12:52:06.273916 7518 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/c1f6c3b0-411a-4553-a198-e684b49ec412-kubelet-dir\") pod \"installer-3-master-0\" (UID: \"c1f6c3b0-411a-4553-a198-e684b49ec412\") " pod="openshift-kube-apiserver/installer-3-master-0" Mar 13 12:52:06.289651 master-0 kubenswrapper[7518]: I0313 12:52:06.289579 7518 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c1f6c3b0-411a-4553-a198-e684b49ec412-kube-api-access\") pod \"installer-3-master-0\" (UID: \"c1f6c3b0-411a-4553-a198-e684b49ec412\") " pod="openshift-kube-apiserver/installer-3-master-0" Mar 13 12:52:06.484430 master-0 kubenswrapper[7518]: I0313 12:52:06.484281 7518 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-3-master-0" Mar 13 12:52:06.542980 master-0 kubenswrapper[7518]: I0313 12:52:06.542911 7518 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-wtf6j container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 12:52:06.542980 master-0 kubenswrapper[7518]: [-]has-synced failed: reason withheld Mar 13 12:52:06.542980 master-0 kubenswrapper[7518]: [+]process-running ok Mar 13 12:52:06.542980 master-0 kubenswrapper[7518]: healthz check failed Mar 13 12:52:06.543266 master-0 kubenswrapper[7518]: I0313 12:52:06.542983 7518 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-wtf6j" podUID="45925a5e-41ae-4c19-b586-3151c7677612" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 12:52:06.921881 master-0 kubenswrapper[7518]: I0313 12:52:06.921830 7518 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-3-master-0"] Mar 13 12:52:06.929569 master-0 kubenswrapper[7518]: W0313 
12:52:06.929494 7518 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-podc1f6c3b0_411a_4553_a198_e684b49ec412.slice/crio-6f0d31b7d2a8b09b9acea28b33add1196f7dc351da9eee1d6abf47a178142184 WatchSource:0}: Error finding container 6f0d31b7d2a8b09b9acea28b33add1196f7dc351da9eee1d6abf47a178142184: Status 404 returned error can't find the container with id 6f0d31b7d2a8b09b9acea28b33add1196f7dc351da9eee1d6abf47a178142184 Mar 13 12:52:07.535347 master-0 kubenswrapper[7518]: I0313 12:52:07.535219 7518 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-3-master-0" event={"ID":"c1f6c3b0-411a-4553-a198-e684b49ec412","Type":"ContainerStarted","Data":"65f40a435518973d6c0909fbbcdd0cde8969ba4566e140a164b5d48f5e95d0de"} Mar 13 12:52:07.535941 master-0 kubenswrapper[7518]: I0313 12:52:07.535361 7518 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-3-master-0" event={"ID":"c1f6c3b0-411a-4553-a198-e684b49ec412","Type":"ContainerStarted","Data":"6f0d31b7d2a8b09b9acea28b33add1196f7dc351da9eee1d6abf47a178142184"} Mar 13 12:52:07.544238 master-0 kubenswrapper[7518]: I0313 12:52:07.544131 7518 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-wtf6j container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 12:52:07.544238 master-0 kubenswrapper[7518]: [-]has-synced failed: reason withheld Mar 13 12:52:07.544238 master-0 kubenswrapper[7518]: [+]process-running ok Mar 13 12:52:07.544238 master-0 kubenswrapper[7518]: healthz check failed Mar 13 12:52:07.544516 master-0 kubenswrapper[7518]: I0313 12:52:07.544286 7518 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-wtf6j" podUID="45925a5e-41ae-4c19-b586-3151c7677612" containerName="router" probeResult="failure" output="HTTP probe failed with 
statuscode: 500" Mar 13 12:52:07.560442 master-0 kubenswrapper[7518]: I0313 12:52:07.560359 7518 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/installer-3-master-0" podStartSLOduration=1.560339292 podStartE2EDuration="1.560339292s" podCreationTimestamp="2026-03-13 12:52:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 12:52:07.554686371 +0000 UTC m=+882.187755568" watchObservedRunningTime="2026-03-13 12:52:07.560339292 +0000 UTC m=+882.193408479" Mar 13 12:52:08.542952 master-0 kubenswrapper[7518]: I0313 12:52:08.542817 7518 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-wtf6j container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 12:52:08.542952 master-0 kubenswrapper[7518]: [-]has-synced failed: reason withheld Mar 13 12:52:08.542952 master-0 kubenswrapper[7518]: [+]process-running ok Mar 13 12:52:08.542952 master-0 kubenswrapper[7518]: healthz check failed Mar 13 12:52:08.542952 master-0 kubenswrapper[7518]: I0313 12:52:08.542933 7518 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-wtf6j" podUID="45925a5e-41ae-4c19-b586-3151c7677612" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 12:52:09.542034 master-0 kubenswrapper[7518]: I0313 12:52:09.541976 7518 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-wtf6j container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 12:52:09.542034 master-0 kubenswrapper[7518]: [-]has-synced failed: reason withheld Mar 13 12:52:09.542034 master-0 kubenswrapper[7518]: [+]process-running ok Mar 13 
12:52:09.542034 master-0 kubenswrapper[7518]: healthz check failed Mar 13 12:52:09.542380 master-0 kubenswrapper[7518]: I0313 12:52:09.542045 7518 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-wtf6j" podUID="45925a5e-41ae-4c19-b586-3151c7677612" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 12:52:09.598967 master-0 kubenswrapper[7518]: I0313 12:52:09.598897 7518 scope.go:117] "RemoveContainer" containerID="c5dac29410c608c592ce2da4d646f5dae37752b356e4a615b5b9f8033e660a03" Mar 13 12:52:09.600012 master-0 kubenswrapper[7518]: E0313 12:52:09.599160 7518 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"snapshot-controller\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=snapshot-controller pod=csi-snapshot-controller-7577d6f48-pjpn2_openshift-cluster-storage-operator(c642c18f-f960-4418-bcb7-df884f8f8ad5)\"" pod="openshift-cluster-storage-operator/csi-snapshot-controller-7577d6f48-pjpn2" podUID="c642c18f-f960-4418-bcb7-df884f8f8ad5" Mar 13 12:52:10.542477 master-0 kubenswrapper[7518]: I0313 12:52:10.542363 7518 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-wtf6j container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 12:52:10.542477 master-0 kubenswrapper[7518]: [-]has-synced failed: reason withheld Mar 13 12:52:10.542477 master-0 kubenswrapper[7518]: [+]process-running ok Mar 13 12:52:10.542477 master-0 kubenswrapper[7518]: healthz check failed Mar 13 12:52:10.542477 master-0 kubenswrapper[7518]: I0313 12:52:10.542449 7518 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-wtf6j" podUID="45925a5e-41ae-4c19-b586-3151c7677612" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 
500" Mar 13 12:52:11.543127 master-0 kubenswrapper[7518]: I0313 12:52:11.543049 7518 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-wtf6j container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 12:52:11.543127 master-0 kubenswrapper[7518]: [-]has-synced failed: reason withheld Mar 13 12:52:11.543127 master-0 kubenswrapper[7518]: [+]process-running ok Mar 13 12:52:11.543127 master-0 kubenswrapper[7518]: healthz check failed Mar 13 12:52:11.543127 master-0 kubenswrapper[7518]: I0313 12:52:11.543111 7518 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-wtf6j" podUID="45925a5e-41ae-4c19-b586-3151c7677612" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 12:52:12.542221 master-0 kubenswrapper[7518]: I0313 12:52:12.542118 7518 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-wtf6j container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 12:52:12.542221 master-0 kubenswrapper[7518]: [-]has-synced failed: reason withheld Mar 13 12:52:12.542221 master-0 kubenswrapper[7518]: [+]process-running ok Mar 13 12:52:12.542221 master-0 kubenswrapper[7518]: healthz check failed Mar 13 12:52:12.542543 master-0 kubenswrapper[7518]: I0313 12:52:12.542273 7518 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-wtf6j" podUID="45925a5e-41ae-4c19-b586-3151c7677612" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 12:52:12.542543 master-0 kubenswrapper[7518]: I0313 12:52:12.542374 7518 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-79f8cd6fdd-wtf6j" Mar 13 
12:52:12.543539 master-0 kubenswrapper[7518]: I0313 12:52:12.543489 7518 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="router" containerStatusID={"Type":"cri-o","ID":"f4c4c4e5602a184f824d2367e7178507d9196d2b340284307f9055d03b447109"} pod="openshift-ingress/router-default-79f8cd6fdd-wtf6j" containerMessage="Container router failed startup probe, will be restarted" Mar 13 12:52:12.543895 master-0 kubenswrapper[7518]: I0313 12:52:12.543573 7518 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ingress/router-default-79f8cd6fdd-wtf6j" podUID="45925a5e-41ae-4c19-b586-3151c7677612" containerName="router" containerID="cri-o://f4c4c4e5602a184f824d2367e7178507d9196d2b340284307f9055d03b447109" gracePeriod=3600 Mar 13 12:52:14.668005 master-0 kubenswrapper[7518]: I0313 12:52:14.667927 7518 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["kube-system/bootstrap-kube-controller-manager-master-0"] Mar 13 12:52:14.668480 master-0 kubenswrapper[7518]: I0313 12:52:14.668410 7518 kuberuntime_container.go:808] "Killing container with a grace period" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="f78c05e1499b533b83f091333d61f045" containerName="cluster-policy-controller" containerID="cri-o://c01d9a99bd192d1dcec1d6b82d10b0a4d0e1e32477c6f2dee5d3e54b144ca2b7" gracePeriod=30 Mar 13 12:52:14.670404 master-0 kubenswrapper[7518]: I0313 12:52:14.670342 7518 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-controller-manager/kube-controller-manager-master-0"] Mar 13 12:52:14.670866 master-0 kubenswrapper[7518]: E0313 12:52:14.670821 7518 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f78c05e1499b533b83f091333d61f045" containerName="kube-controller-manager" Mar 13 12:52:14.670866 master-0 kubenswrapper[7518]: I0313 12:52:14.670858 7518 state_mem.go:107] "Deleted CPUSet assignment" podUID="f78c05e1499b533b83f091333d61f045" containerName="kube-controller-manager" Mar 
13 12:52:14.670957 master-0 kubenswrapper[7518]: E0313 12:52:14.670877 7518 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f78c05e1499b533b83f091333d61f045" containerName="kube-controller-manager" Mar 13 12:52:14.670957 master-0 kubenswrapper[7518]: I0313 12:52:14.670890 7518 state_mem.go:107] "Deleted CPUSet assignment" podUID="f78c05e1499b533b83f091333d61f045" containerName="kube-controller-manager" Mar 13 12:52:14.670957 master-0 kubenswrapper[7518]: E0313 12:52:14.670906 7518 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f78c05e1499b533b83f091333d61f045" containerName="cluster-policy-controller" Mar 13 12:52:14.670957 master-0 kubenswrapper[7518]: I0313 12:52:14.670920 7518 state_mem.go:107] "Deleted CPUSet assignment" podUID="f78c05e1499b533b83f091333d61f045" containerName="cluster-policy-controller" Mar 13 12:52:14.670957 master-0 kubenswrapper[7518]: E0313 12:52:14.670942 7518 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f78c05e1499b533b83f091333d61f045" containerName="kube-controller-manager" Mar 13 12:52:14.670957 master-0 kubenswrapper[7518]: I0313 12:52:14.670954 7518 state_mem.go:107] "Deleted CPUSet assignment" podUID="f78c05e1499b533b83f091333d61f045" containerName="kube-controller-manager" Mar 13 12:52:14.671118 master-0 kubenswrapper[7518]: E0313 12:52:14.670994 7518 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f78c05e1499b533b83f091333d61f045" containerName="cluster-policy-controller" Mar 13 12:52:14.671118 master-0 kubenswrapper[7518]: I0313 12:52:14.671007 7518 state_mem.go:107] "Deleted CPUSet assignment" podUID="f78c05e1499b533b83f091333d61f045" containerName="cluster-policy-controller" Mar 13 12:52:14.671118 master-0 kubenswrapper[7518]: E0313 12:52:14.671032 7518 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f78c05e1499b533b83f091333d61f045" containerName="cluster-policy-controller" Mar 13 12:52:14.671118 master-0 kubenswrapper[7518]: 
I0313 12:52:14.671046 7518 state_mem.go:107] "Deleted CPUSet assignment" podUID="f78c05e1499b533b83f091333d61f045" containerName="cluster-policy-controller" Mar 13 12:52:14.671118 master-0 kubenswrapper[7518]: E0313 12:52:14.671070 7518 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f78c05e1499b533b83f091333d61f045" containerName="kube-controller-manager" Mar 13 12:52:14.671118 master-0 kubenswrapper[7518]: I0313 12:52:14.671082 7518 state_mem.go:107] "Deleted CPUSet assignment" podUID="f78c05e1499b533b83f091333d61f045" containerName="kube-controller-manager" Mar 13 12:52:14.671355 master-0 kubenswrapper[7518]: I0313 12:52:14.671334 7518 memory_manager.go:354] "RemoveStaleState removing state" podUID="f78c05e1499b533b83f091333d61f045" containerName="kube-controller-manager" Mar 13 12:52:14.671388 master-0 kubenswrapper[7518]: I0313 12:52:14.671356 7518 memory_manager.go:354] "RemoveStaleState removing state" podUID="f78c05e1499b533b83f091333d61f045" containerName="cluster-policy-controller" Mar 13 12:52:14.671388 master-0 kubenswrapper[7518]: I0313 12:52:14.671374 7518 memory_manager.go:354] "RemoveStaleState removing state" podUID="f78c05e1499b533b83f091333d61f045" containerName="kube-controller-manager" Mar 13 12:52:14.671456 master-0 kubenswrapper[7518]: I0313 12:52:14.671389 7518 memory_manager.go:354] "RemoveStaleState removing state" podUID="f78c05e1499b533b83f091333d61f045" containerName="kube-controller-manager" Mar 13 12:52:14.671456 master-0 kubenswrapper[7518]: I0313 12:52:14.671410 7518 memory_manager.go:354] "RemoveStaleState removing state" podUID="f78c05e1499b533b83f091333d61f045" containerName="kube-controller-manager" Mar 13 12:52:14.671456 master-0 kubenswrapper[7518]: I0313 12:52:14.671425 7518 memory_manager.go:354] "RemoveStaleState removing state" podUID="f78c05e1499b533b83f091333d61f045" containerName="cluster-policy-controller" Mar 13 12:52:14.671456 master-0 kubenswrapper[7518]: I0313 12:52:14.671445 7518 
memory_manager.go:354] "RemoveStaleState removing state" podUID="f78c05e1499b533b83f091333d61f045" containerName="kube-controller-manager" Mar 13 12:52:14.671581 master-0 kubenswrapper[7518]: I0313 12:52:14.671465 7518 memory_manager.go:354] "RemoveStaleState removing state" podUID="f78c05e1499b533b83f091333d61f045" containerName="cluster-policy-controller" Mar 13 12:52:14.671751 master-0 kubenswrapper[7518]: E0313 12:52:14.671715 7518 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f78c05e1499b533b83f091333d61f045" containerName="kube-controller-manager" Mar 13 12:52:14.671751 master-0 kubenswrapper[7518]: I0313 12:52:14.671745 7518 state_mem.go:107] "Deleted CPUSet assignment" podUID="f78c05e1499b533b83f091333d61f045" containerName="kube-controller-manager" Mar 13 12:52:14.671813 master-0 kubenswrapper[7518]: E0313 12:52:14.671770 7518 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f78c05e1499b533b83f091333d61f045" containerName="kube-controller-manager" Mar 13 12:52:14.671813 master-0 kubenswrapper[7518]: I0313 12:52:14.671787 7518 state_mem.go:107] "Deleted CPUSet assignment" podUID="f78c05e1499b533b83f091333d61f045" containerName="kube-controller-manager" Mar 13 12:52:14.671870 master-0 kubenswrapper[7518]: E0313 12:52:14.671815 7518 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f78c05e1499b533b83f091333d61f045" containerName="kube-controller-manager" Mar 13 12:52:14.671870 master-0 kubenswrapper[7518]: I0313 12:52:14.671834 7518 state_mem.go:107] "Deleted CPUSet assignment" podUID="f78c05e1499b533b83f091333d61f045" containerName="kube-controller-manager" Mar 13 12:52:14.672613 master-0 kubenswrapper[7518]: I0313 12:52:14.672567 7518 memory_manager.go:354] "RemoveStaleState removing state" podUID="f78c05e1499b533b83f091333d61f045" containerName="kube-controller-manager" Mar 13 12:52:14.672676 master-0 kubenswrapper[7518]: I0313 12:52:14.672616 7518 memory_manager.go:354] "RemoveStaleState removing 
state" podUID="f78c05e1499b533b83f091333d61f045" containerName="kube-controller-manager" Mar 13 12:52:14.674280 master-0 kubenswrapper[7518]: I0313 12:52:14.674241 7518 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 13 12:52:14.725739 master-0 kubenswrapper[7518]: I0313 12:52:14.725489 7518 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/741a6830aaef63e92194dd05d0b4da3d-resource-dir\") pod \"kube-controller-manager-master-0\" (UID: \"741a6830aaef63e92194dd05d0b4da3d\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 13 12:52:14.725739 master-0 kubenswrapper[7518]: I0313 12:52:14.725627 7518 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/741a6830aaef63e92194dd05d0b4da3d-cert-dir\") pod \"kube-controller-manager-master-0\" (UID: \"741a6830aaef63e92194dd05d0b4da3d\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 13 12:52:14.827553 master-0 kubenswrapper[7518]: I0313 12:52:14.827473 7518 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/741a6830aaef63e92194dd05d0b4da3d-resource-dir\") pod \"kube-controller-manager-master-0\" (UID: \"741a6830aaef63e92194dd05d0b4da3d\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 13 12:52:14.827553 master-0 kubenswrapper[7518]: I0313 12:52:14.827527 7518 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/741a6830aaef63e92194dd05d0b4da3d-cert-dir\") pod \"kube-controller-manager-master-0\" (UID: \"741a6830aaef63e92194dd05d0b4da3d\") " 
pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 13 12:52:14.827880 master-0 kubenswrapper[7518]: I0313 12:52:14.827604 7518 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/741a6830aaef63e92194dd05d0b4da3d-cert-dir\") pod \"kube-controller-manager-master-0\" (UID: \"741a6830aaef63e92194dd05d0b4da3d\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 13 12:52:14.827880 master-0 kubenswrapper[7518]: I0313 12:52:14.827649 7518 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/741a6830aaef63e92194dd05d0b4da3d-resource-dir\") pod \"kube-controller-manager-master-0\" (UID: \"741a6830aaef63e92194dd05d0b4da3d\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 13 12:52:14.856187 master-0 kubenswrapper[7518]: I0313 12:52:14.854703 7518 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 13 12:52:14.867076 master-0 kubenswrapper[7518]: I0313 12:52:14.859455 7518 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 13 12:52:14.867076 master-0 kubenswrapper[7518]: I0313 12:52:14.867012 7518 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/kube-controller-manager-master-0"] Mar 13 12:52:14.903329 master-0 kubenswrapper[7518]: I0313 12:52:14.903296 7518 kubelet.go:2706] "Unable to find pod for mirror pod, skipping" mirrorPod="kube-system/bootstrap-kube-controller-manager-master-0" mirrorPodUID="2a95cc19-9f56-4025-b4b6-9028c4c8497c" Mar 13 12:52:14.928330 master-0 kubenswrapper[7518]: I0313 12:52:14.928289 7518 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/host-path/f78c05e1499b533b83f091333d61f045-config\") pod \"f78c05e1499b533b83f091333d61f045\" (UID: \"f78c05e1499b533b83f091333d61f045\") " Mar 13 12:52:14.928462 master-0 kubenswrapper[7518]: I0313 12:52:14.928422 7518 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f78c05e1499b533b83f091333d61f045-config" (OuterVolumeSpecName: "config") pod "f78c05e1499b533b83f091333d61f045" (UID: "f78c05e1499b533b83f091333d61f045"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 13 12:52:14.928462 master-0 kubenswrapper[7518]: I0313 12:52:14.928450 7518 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/f78c05e1499b533b83f091333d61f045-secrets\") pod \"f78c05e1499b533b83f091333d61f045\" (UID: \"f78c05e1499b533b83f091333d61f045\") " Mar 13 12:52:14.929282 master-0 kubenswrapper[7518]: I0313 12:52:14.928486 7518 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssl-certs-host\" (UniqueName: \"kubernetes.io/host-path/f78c05e1499b533b83f091333d61f045-ssl-certs-host\") pod \"f78c05e1499b533b83f091333d61f045\" (UID: \"f78c05e1499b533b83f091333d61f045\") " Mar 13 12:52:14.929282 master-0 kubenswrapper[7518]: I0313 12:52:14.928537 7518 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/f78c05e1499b533b83f091333d61f045-logs\") pod \"f78c05e1499b533b83f091333d61f045\" (UID: \"f78c05e1499b533b83f091333d61f045\") " Mar 13 12:52:14.929282 master-0 kubenswrapper[7518]: I0313 12:52:14.928564 7518 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f78c05e1499b533b83f091333d61f045-ssl-certs-host" (OuterVolumeSpecName: "ssl-certs-host") pod "f78c05e1499b533b83f091333d61f045" (UID: "f78c05e1499b533b83f091333d61f045"). InnerVolumeSpecName "ssl-certs-host". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 13 12:52:14.929282 master-0 kubenswrapper[7518]: I0313 12:52:14.928602 7518 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-kubernetes-cloud\" (UniqueName: \"kubernetes.io/host-path/f78c05e1499b533b83f091333d61f045-etc-kubernetes-cloud\") pod \"f78c05e1499b533b83f091333d61f045\" (UID: \"f78c05e1499b533b83f091333d61f045\") " Mar 13 12:52:14.929282 master-0 kubenswrapper[7518]: I0313 12:52:14.928657 7518 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f78c05e1499b533b83f091333d61f045-logs" (OuterVolumeSpecName: "logs") pod "f78c05e1499b533b83f091333d61f045" (UID: "f78c05e1499b533b83f091333d61f045"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 13 12:52:14.929282 master-0 kubenswrapper[7518]: I0313 12:52:14.928628 7518 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f78c05e1499b533b83f091333d61f045-secrets" (OuterVolumeSpecName: "secrets") pod "f78c05e1499b533b83f091333d61f045" (UID: "f78c05e1499b533b83f091333d61f045"). InnerVolumeSpecName "secrets". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 13 12:52:14.929282 master-0 kubenswrapper[7518]: I0313 12:52:14.928767 7518 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f78c05e1499b533b83f091333d61f045-etc-kubernetes-cloud" (OuterVolumeSpecName: "etc-kubernetes-cloud") pod "f78c05e1499b533b83f091333d61f045" (UID: "f78c05e1499b533b83f091333d61f045"). InnerVolumeSpecName "etc-kubernetes-cloud". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 13 12:52:14.929282 master-0 kubenswrapper[7518]: I0313 12:52:14.928916 7518 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/f78c05e1499b533b83f091333d61f045-logs\") on node \"master-0\" DevicePath \"\"" Mar 13 12:52:14.929282 master-0 kubenswrapper[7518]: I0313 12:52:14.928935 7518 reconciler_common.go:293] "Volume detached for volume \"etc-kubernetes-cloud\" (UniqueName: \"kubernetes.io/host-path/f78c05e1499b533b83f091333d61f045-etc-kubernetes-cloud\") on node \"master-0\" DevicePath \"\"" Mar 13 12:52:14.929282 master-0 kubenswrapper[7518]: I0313 12:52:14.928948 7518 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/host-path/f78c05e1499b533b83f091333d61f045-config\") on node \"master-0\" DevicePath \"\"" Mar 13 12:52:14.929282 master-0 kubenswrapper[7518]: I0313 12:52:14.928961 7518 reconciler_common.go:293] "Volume detached for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/f78c05e1499b533b83f091333d61f045-secrets\") on node \"master-0\" DevicePath \"\"" Mar 13 12:52:14.929282 master-0 kubenswrapper[7518]: I0313 12:52:14.928972 7518 reconciler_common.go:293] "Volume detached for volume \"ssl-certs-host\" (UniqueName: \"kubernetes.io/host-path/f78c05e1499b533b83f091333d61f045-ssl-certs-host\") on node \"master-0\" DevicePath \"\"" Mar 13 12:52:15.595940 master-0 kubenswrapper[7518]: I0313 12:52:15.595889 7518 generic.go:334] "Generic (PLEG): container finished" podID="f78c05e1499b533b83f091333d61f045" containerID="c01d9a99bd192d1dcec1d6b82d10b0a4d0e1e32477c6f2dee5d3e54b144ca2b7" exitCode=0 Mar 13 12:52:15.596234 master-0 kubenswrapper[7518]: I0313 12:52:15.595953 7518 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9b912cc2fb7f1246b6e0fb7957cb5c167f818087772406214ca1bd3f180298fb" Mar 13 12:52:15.596234 master-0 kubenswrapper[7518]: I0313 12:52:15.595969 7518 
scope.go:117] "RemoveContainer" containerID="900938afabac5fa1e088933b80603fec360ec7d1d114a7496946027bc2a16500" Mar 13 12:52:15.596234 master-0 kubenswrapper[7518]: I0313 12:52:15.596057 7518 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 13 12:52:15.607384 master-0 kubenswrapper[7518]: I0313 12:52:15.607358 7518 generic.go:334] "Generic (PLEG): container finished" podID="bc3825c8-8381-4d19-b482-e9499a72a700" containerID="36a99dc3a52618a9e4e7602094957952525bef75208a86d5faa34103a0a98d5e" exitCode=0 Mar 13 12:52:15.634428 master-0 kubenswrapper[7518]: I0313 12:52:15.634346 7518 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f78c05e1499b533b83f091333d61f045" path="/var/lib/kubelet/pods/f78c05e1499b533b83f091333d61f045/volumes" Mar 13 12:52:15.644430 master-0 kubenswrapper[7518]: I0313 12:52:15.635843 7518 mirror_client.go:130] "Deleting a mirror pod" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="" Mar 13 12:52:15.668130 master-0 kubenswrapper[7518]: I0313 12:52:15.668083 7518 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["kube-system/bootstrap-kube-controller-manager-master-0"] Mar 13 12:52:15.668130 master-0 kubenswrapper[7518]: I0313 12:52:15.668120 7518 kubelet.go:2649] "Unable to find pod for mirror pod, skipping" mirrorPod="kube-system/bootstrap-kube-controller-manager-master-0" mirrorPodUID="2a95cc19-9f56-4025-b4b6-9028c4c8497c" Mar 13 12:52:15.668130 master-0 kubenswrapper[7518]: I0313 12:52:15.668167 7518 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-2-master-0" event={"ID":"bc3825c8-8381-4d19-b482-e9499a72a700","Type":"ContainerDied","Data":"36a99dc3a52618a9e4e7602094957952525bef75208a86d5faa34103a0a98d5e"} Mar 13 12:52:15.668810 master-0 kubenswrapper[7518]: I0313 12:52:15.668196 7518 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"741a6830aaef63e92194dd05d0b4da3d","Type":"ContainerStarted","Data":"52372f90f3e518110cf1e64b9ff43ecce31d8c11b62d3766c284ad38e957707b"} Mar 13 12:52:15.668810 master-0 kubenswrapper[7518]: I0313 12:52:15.668216 7518 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"741a6830aaef63e92194dd05d0b4da3d","Type":"ContainerStarted","Data":"45b191ee613240af89dae5f40970afaf7896448c3e2a3a3165bd85645b5d7288"} Mar 13 12:52:15.668810 master-0 kubenswrapper[7518]: I0313 12:52:15.668250 7518 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"741a6830aaef63e92194dd05d0b4da3d","Type":"ContainerStarted","Data":"bdff0a0b2aea82bac9a3ab64499e43b6fe8e459f15bf1c50fed1c0bf1762fda9"} Mar 13 12:52:15.674115 master-0 kubenswrapper[7518]: I0313 12:52:15.674076 7518 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["kube-system/bootstrap-kube-controller-manager-master-0"] Mar 13 12:52:15.674235 master-0 kubenswrapper[7518]: I0313 12:52:15.674119 7518 kubelet.go:2673] "Unable to find pod for mirror pod, skipping" mirrorPod="kube-system/bootstrap-kube-controller-manager-master-0" mirrorPodUID="2a95cc19-9f56-4025-b4b6-9028c4c8497c" Mar 13 12:52:16.622982 master-0 kubenswrapper[7518]: I0313 12:52:16.622917 7518 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"741a6830aaef63e92194dd05d0b4da3d","Type":"ContainerStarted","Data":"1b406ee46971e490792a19b63a98c585c578548f473b720d5b7cd5c729eda7ae"} Mar 13 12:52:16.622982 master-0 kubenswrapper[7518]: I0313 12:52:16.622950 7518 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" 
event={"ID":"741a6830aaef63e92194dd05d0b4da3d","Type":"ContainerStarted","Data":"ad6b6be249a4b35bc319cc0c698c9b937c8df08adaedc5da969d7d3c63154f97"} Mar 13 12:52:16.955973 master-0 kubenswrapper[7518]: I0313 12:52:16.955938 7518 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/installer-2-master-0" Mar 13 12:52:16.980227 master-0 kubenswrapper[7518]: I0313 12:52:16.978440 7518 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podStartSLOduration=2.9784155759999997 podStartE2EDuration="2.978415576s" podCreationTimestamp="2026-03-13 12:52:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 12:52:16.650353638 +0000 UTC m=+891.283422825" watchObservedRunningTime="2026-03-13 12:52:16.978415576 +0000 UTC m=+891.611484763" Mar 13 12:52:17.062449 master-0 kubenswrapper[7518]: I0313 12:52:17.062270 7518 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/bc3825c8-8381-4d19-b482-e9499a72a700-var-lock\") pod \"bc3825c8-8381-4d19-b482-e9499a72a700\" (UID: \"bc3825c8-8381-4d19-b482-e9499a72a700\") " Mar 13 12:52:17.062449 master-0 kubenswrapper[7518]: I0313 12:52:17.062361 7518 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/bc3825c8-8381-4d19-b482-e9499a72a700-kubelet-dir\") pod \"bc3825c8-8381-4d19-b482-e9499a72a700\" (UID: \"bc3825c8-8381-4d19-b482-e9499a72a700\") " Mar 13 12:52:17.062449 master-0 kubenswrapper[7518]: I0313 12:52:17.062414 7518 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/bc3825c8-8381-4d19-b482-e9499a72a700-kube-api-access\") pod 
\"bc3825c8-8381-4d19-b482-e9499a72a700\" (UID: \"bc3825c8-8381-4d19-b482-e9499a72a700\") " Mar 13 12:52:17.063049 master-0 kubenswrapper[7518]: I0313 12:52:17.063013 7518 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bc3825c8-8381-4d19-b482-e9499a72a700-var-lock" (OuterVolumeSpecName: "var-lock") pod "bc3825c8-8381-4d19-b482-e9499a72a700" (UID: "bc3825c8-8381-4d19-b482-e9499a72a700"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 13 12:52:17.063332 master-0 kubenswrapper[7518]: I0313 12:52:17.063240 7518 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bc3825c8-8381-4d19-b482-e9499a72a700-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "bc3825c8-8381-4d19-b482-e9499a72a700" (UID: "bc3825c8-8381-4d19-b482-e9499a72a700"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 13 12:52:17.068255 master-0 kubenswrapper[7518]: I0313 12:52:17.068207 7518 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc3825c8-8381-4d19-b482-e9499a72a700-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "bc3825c8-8381-4d19-b482-e9499a72a700" (UID: "bc3825c8-8381-4d19-b482-e9499a72a700"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 12:52:17.164040 master-0 kubenswrapper[7518]: I0313 12:52:17.163920 7518 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/bc3825c8-8381-4d19-b482-e9499a72a700-kube-api-access\") on node \"master-0\" DevicePath \"\"" Mar 13 12:52:17.164040 master-0 kubenswrapper[7518]: I0313 12:52:17.164032 7518 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/bc3825c8-8381-4d19-b482-e9499a72a700-var-lock\") on node \"master-0\" DevicePath \"\"" Mar 13 12:52:17.164040 master-0 kubenswrapper[7518]: I0313 12:52:17.164053 7518 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/bc3825c8-8381-4d19-b482-e9499a72a700-kubelet-dir\") on node \"master-0\" DevicePath \"\"" Mar 13 12:52:17.630560 master-0 kubenswrapper[7518]: I0313 12:52:17.630510 7518 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/installer-2-master-0" Mar 13 12:52:17.630560 master-0 kubenswrapper[7518]: I0313 12:52:17.630515 7518 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-2-master-0" event={"ID":"bc3825c8-8381-4d19-b482-e9499a72a700","Type":"ContainerDied","Data":"8e1088e0df5495c11b184ce6c8248adb0411207dd090af8621e1253e288aee81"} Mar 13 12:52:17.630716 master-0 kubenswrapper[7518]: I0313 12:52:17.630584 7518 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8e1088e0df5495c11b184ce6c8248adb0411207dd090af8621e1253e288aee81" Mar 13 12:52:19.160355 master-0 kubenswrapper[7518]: I0313 12:52:19.160304 7518 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_installer-2-master-0_1068645c-59cb-46a1-a8fd-6e91a453e4f8/installer/0.log" Mar 13 12:52:19.160852 master-0 kubenswrapper[7518]: I0313 12:52:19.160397 7518 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-2-master-0" Mar 13 12:52:19.297058 master-0 kubenswrapper[7518]: I0313 12:52:19.296998 7518 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/1068645c-59cb-46a1-a8fd-6e91a453e4f8-kubelet-dir\") pod \"1068645c-59cb-46a1-a8fd-6e91a453e4f8\" (UID: \"1068645c-59cb-46a1-a8fd-6e91a453e4f8\") " Mar 13 12:52:19.297287 master-0 kubenswrapper[7518]: I0313 12:52:19.297157 7518 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1068645c-59cb-46a1-a8fd-6e91a453e4f8-kube-api-access\") pod \"1068645c-59cb-46a1-a8fd-6e91a453e4f8\" (UID: \"1068645c-59cb-46a1-a8fd-6e91a453e4f8\") " Mar 13 12:52:19.297287 master-0 kubenswrapper[7518]: I0313 12:52:19.297191 7518 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/1068645c-59cb-46a1-a8fd-6e91a453e4f8-var-lock\") pod \"1068645c-59cb-46a1-a8fd-6e91a453e4f8\" (UID: \"1068645c-59cb-46a1-a8fd-6e91a453e4f8\") " Mar 13 12:52:19.297287 master-0 kubenswrapper[7518]: I0313 12:52:19.297208 7518 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1068645c-59cb-46a1-a8fd-6e91a453e4f8-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "1068645c-59cb-46a1-a8fd-6e91a453e4f8" (UID: "1068645c-59cb-46a1-a8fd-6e91a453e4f8"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 13 12:52:19.297406 master-0 kubenswrapper[7518]: I0313 12:52:19.297325 7518 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1068645c-59cb-46a1-a8fd-6e91a453e4f8-var-lock" (OuterVolumeSpecName: "var-lock") pod "1068645c-59cb-46a1-a8fd-6e91a453e4f8" (UID: "1068645c-59cb-46a1-a8fd-6e91a453e4f8"). InnerVolumeSpecName "var-lock". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 13 12:52:19.297553 master-0 kubenswrapper[7518]: I0313 12:52:19.297524 7518 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/1068645c-59cb-46a1-a8fd-6e91a453e4f8-var-lock\") on node \"master-0\" DevicePath \"\"" Mar 13 12:52:19.297553 master-0 kubenswrapper[7518]: I0313 12:52:19.297547 7518 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/1068645c-59cb-46a1-a8fd-6e91a453e4f8-kubelet-dir\") on node \"master-0\" DevicePath \"\"" Mar 13 12:52:19.300247 master-0 kubenswrapper[7518]: I0313 12:52:19.300133 7518 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1068645c-59cb-46a1-a8fd-6e91a453e4f8-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "1068645c-59cb-46a1-a8fd-6e91a453e4f8" (UID: "1068645c-59cb-46a1-a8fd-6e91a453e4f8"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 12:52:19.401127 master-0 kubenswrapper[7518]: I0313 12:52:19.400938 7518 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1068645c-59cb-46a1-a8fd-6e91a453e4f8-kube-api-access\") on node \"master-0\" DevicePath \"\"" Mar 13 12:52:19.645847 master-0 kubenswrapper[7518]: I0313 12:52:19.645797 7518 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_installer-2-master-0_1068645c-59cb-46a1-a8fd-6e91a453e4f8/installer/0.log" Mar 13 12:52:19.646102 master-0 kubenswrapper[7518]: I0313 12:52:19.645863 7518 generic.go:334] "Generic (PLEG): container finished" podID="1068645c-59cb-46a1-a8fd-6e91a453e4f8" containerID="a6c791da190986f00f4311e447074f11476893db89635c9eee711eeebe5edf41" exitCode=1 Mar 13 12:52:19.646102 master-0 kubenswrapper[7518]: I0313 12:52:19.645900 7518 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-2-master-0" event={"ID":"1068645c-59cb-46a1-a8fd-6e91a453e4f8","Type":"ContainerDied","Data":"a6c791da190986f00f4311e447074f11476893db89635c9eee711eeebe5edf41"} Mar 13 12:52:19.646102 master-0 kubenswrapper[7518]: I0313 12:52:19.645923 7518 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-2-master-0" Mar 13 12:52:19.646102 master-0 kubenswrapper[7518]: I0313 12:52:19.645954 7518 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-2-master-0" event={"ID":"1068645c-59cb-46a1-a8fd-6e91a453e4f8","Type":"ContainerDied","Data":"87e792644fcb45b717c9edfbbb9b45c62b018d5b0446987b41db9c0835974fce"} Mar 13 12:52:19.646102 master-0 kubenswrapper[7518]: I0313 12:52:19.645979 7518 scope.go:117] "RemoveContainer" containerID="a6c791da190986f00f4311e447074f11476893db89635c9eee711eeebe5edf41" Mar 13 12:52:19.661861 master-0 kubenswrapper[7518]: I0313 12:52:19.661824 7518 scope.go:117] "RemoveContainer" containerID="a6c791da190986f00f4311e447074f11476893db89635c9eee711eeebe5edf41" Mar 13 12:52:19.663250 master-0 kubenswrapper[7518]: E0313 12:52:19.662778 7518 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a6c791da190986f00f4311e447074f11476893db89635c9eee711eeebe5edf41\": container with ID starting with a6c791da190986f00f4311e447074f11476893db89635c9eee711eeebe5edf41 not found: ID does not exist" containerID="a6c791da190986f00f4311e447074f11476893db89635c9eee711eeebe5edf41" Mar 13 12:52:19.668691 master-0 kubenswrapper[7518]: I0313 12:52:19.668605 7518 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a6c791da190986f00f4311e447074f11476893db89635c9eee711eeebe5edf41"} err="failed to get container status \"a6c791da190986f00f4311e447074f11476893db89635c9eee711eeebe5edf41\": rpc error: code = NotFound desc = could not find container \"a6c791da190986f00f4311e447074f11476893db89635c9eee711eeebe5edf41\": container with ID starting with a6c791da190986f00f4311e447074f11476893db89635c9eee711eeebe5edf41 not found: ID does not exist" Mar 13 12:52:19.680319 master-0 kubenswrapper[7518]: I0313 12:52:19.680233 7518 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-kube-apiserver/installer-2-master-0"] Mar 13 12:52:19.692395 master-0 kubenswrapper[7518]: I0313 12:52:19.692335 7518 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/installer-2-master-0"] Mar 13 12:52:21.613461 master-0 kubenswrapper[7518]: I0313 12:52:21.613379 7518 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1068645c-59cb-46a1-a8fd-6e91a453e4f8" path="/var/lib/kubelet/pods/1068645c-59cb-46a1-a8fd-6e91a453e4f8/volumes" Mar 13 12:52:24.598936 master-0 kubenswrapper[7518]: I0313 12:52:24.598883 7518 scope.go:117] "RemoveContainer" containerID="c5dac29410c608c592ce2da4d646f5dae37752b356e4a615b5b9f8033e660a03" Mar 13 12:52:24.599468 master-0 kubenswrapper[7518]: E0313 12:52:24.599161 7518 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"snapshot-controller\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=snapshot-controller pod=csi-snapshot-controller-7577d6f48-pjpn2_openshift-cluster-storage-operator(c642c18f-f960-4418-bcb7-df884f8f8ad5)\"" pod="openshift-cluster-storage-operator/csi-snapshot-controller-7577d6f48-pjpn2" podUID="c642c18f-f960-4418-bcb7-df884f8f8ad5" Mar 13 12:52:24.860698 master-0 kubenswrapper[7518]: I0313 12:52:24.860573 7518 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 13 12:52:24.860698 master-0 kubenswrapper[7518]: I0313 12:52:24.860623 7518 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 13 12:52:24.860698 master-0 kubenswrapper[7518]: I0313 12:52:24.860638 7518 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 13 12:52:24.860698 master-0 kubenswrapper[7518]: I0313 12:52:24.860651 7518 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 13 12:52:24.867250 master-0 kubenswrapper[7518]: I0313 12:52:24.867217 7518 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 13 12:52:24.867520 master-0 kubenswrapper[7518]: I0313 12:52:24.867501 7518 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 13 12:52:25.628307 master-0 kubenswrapper[7518]: I0313 12:52:25.628230 7518 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd/etcd-master-0"] Mar 13 12:52:25.769441 master-0 kubenswrapper[7518]: I0313 12:52:25.769400 7518 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 13 12:52:25.769831 master-0 kubenswrapper[7518]: I0313 12:52:25.769806 7518 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 13 12:52:25.823742 master-0 kubenswrapper[7518]: I0313 12:52:25.822528 7518 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/etcd-master-0" podStartSLOduration=0.822494572 podStartE2EDuration="822.494572ms" podCreationTimestamp="2026-03-13 12:52:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 12:52:25.820884639 +0000 UTC m=+900.453953826" watchObservedRunningTime="2026-03-13 12:52:25.822494572 +0000 UTC m=+900.455563759" Mar 13 12:52:33.919220 master-0 kubenswrapper[7518]: I0313 12:52:33.917102 7518 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-admission-controller-7769569c45-qz88j"] Mar 13 12:52:33.919220 master-0 kubenswrapper[7518]: E0313 12:52:33.917484 7518 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bc3825c8-8381-4d19-b482-e9499a72a700" containerName="installer" Mar 13 12:52:33.919220 master-0 kubenswrapper[7518]: I0313 12:52:33.917501 7518 state_mem.go:107] "Deleted CPUSet assignment" podUID="bc3825c8-8381-4d19-b482-e9499a72a700" containerName="installer" Mar 13 12:52:33.919220 master-0 kubenswrapper[7518]: E0313 12:52:33.917529 7518 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1068645c-59cb-46a1-a8fd-6e91a453e4f8" containerName="installer" Mar 13 12:52:33.919220 master-0 kubenswrapper[7518]: I0313 12:52:33.917535 7518 state_mem.go:107] "Deleted CPUSet assignment" podUID="1068645c-59cb-46a1-a8fd-6e91a453e4f8" containerName="installer" Mar 13 12:52:33.919220 master-0 kubenswrapper[7518]: I0313 12:52:33.917657 7518 memory_manager.go:354] "RemoveStaleState removing state" podUID="1068645c-59cb-46a1-a8fd-6e91a453e4f8" containerName="installer" Mar 13 12:52:33.919220 master-0 kubenswrapper[7518]: I0313 12:52:33.917671 7518 memory_manager.go:354] "RemoveStaleState removing state" podUID="bc3825c8-8381-4d19-b482-e9499a72a700" containerName="installer" Mar 13 12:52:33.928167 master-0 kubenswrapper[7518]: I0313 12:52:33.921245 7518 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-admission-controller-7769569c45-qz88j" Mar 13 12:52:33.928167 master-0 kubenswrapper[7518]: I0313 12:52:33.924752 7518 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-b9zx6" Mar 13 12:52:33.950533 master-0 kubenswrapper[7518]: I0313 12:52:33.950295 7518 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-7769569c45-qz88j"] Mar 13 12:52:34.029166 master-0 kubenswrapper[7518]: I0313 12:52:34.026887 7518 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/36ad5a83-5c32-4941-94e0-7af86ac5d462-webhook-certs\") pod \"multus-admission-controller-7769569c45-qz88j\" (UID: \"36ad5a83-5c32-4941-94e0-7af86ac5d462\") " pod="openshift-multus/multus-admission-controller-7769569c45-qz88j" Mar 13 12:52:34.029166 master-0 kubenswrapper[7518]: I0313 12:52:34.027006 7518 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mqsh5\" (UniqueName: \"kubernetes.io/projected/36ad5a83-5c32-4941-94e0-7af86ac5d462-kube-api-access-mqsh5\") pod \"multus-admission-controller-7769569c45-qz88j\" (UID: \"36ad5a83-5c32-4941-94e0-7af86ac5d462\") " pod="openshift-multus/multus-admission-controller-7769569c45-qz88j" Mar 13 12:52:34.129021 master-0 kubenswrapper[7518]: I0313 12:52:34.128950 7518 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mqsh5\" (UniqueName: \"kubernetes.io/projected/36ad5a83-5c32-4941-94e0-7af86ac5d462-kube-api-access-mqsh5\") pod \"multus-admission-controller-7769569c45-qz88j\" (UID: \"36ad5a83-5c32-4941-94e0-7af86ac5d462\") " pod="openshift-multus/multus-admission-controller-7769569c45-qz88j" Mar 13 12:52:34.129482 master-0 kubenswrapper[7518]: I0313 12:52:34.129274 7518 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/36ad5a83-5c32-4941-94e0-7af86ac5d462-webhook-certs\") pod \"multus-admission-controller-7769569c45-qz88j\" (UID: \"36ad5a83-5c32-4941-94e0-7af86ac5d462\") " pod="openshift-multus/multus-admission-controller-7769569c45-qz88j" Mar 13 12:52:34.133116 master-0 kubenswrapper[7518]: I0313 12:52:34.133063 7518 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/36ad5a83-5c32-4941-94e0-7af86ac5d462-webhook-certs\") pod \"multus-admission-controller-7769569c45-qz88j\" (UID: \"36ad5a83-5c32-4941-94e0-7af86ac5d462\") " pod="openshift-multus/multus-admission-controller-7769569c45-qz88j" Mar 13 12:52:34.156112 master-0 kubenswrapper[7518]: I0313 12:52:34.156048 7518 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mqsh5\" (UniqueName: \"kubernetes.io/projected/36ad5a83-5c32-4941-94e0-7af86ac5d462-kube-api-access-mqsh5\") pod \"multus-admission-controller-7769569c45-qz88j\" (UID: \"36ad5a83-5c32-4941-94e0-7af86ac5d462\") " pod="openshift-multus/multus-admission-controller-7769569c45-qz88j" Mar 13 12:52:34.256225 master-0 kubenswrapper[7518]: I0313 12:52:34.256085 7518 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-admission-controller-7769569c45-qz88j" Mar 13 12:52:34.729821 master-0 kubenswrapper[7518]: I0313 12:52:34.729781 7518 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-7769569c45-qz88j"] Mar 13 12:52:34.732198 master-0 kubenswrapper[7518]: W0313 12:52:34.732115 7518 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod36ad5a83_5c32_4941_94e0_7af86ac5d462.slice/crio-c4aea2db722bdac5b7168c49e752c46da9432061c6c515522534eb8c4d6126b5 WatchSource:0}: Error finding container c4aea2db722bdac5b7168c49e752c46da9432061c6c515522534eb8c4d6126b5: Status 404 returned error can't find the container with id c4aea2db722bdac5b7168c49e752c46da9432061c6c515522534eb8c4d6126b5 Mar 13 12:52:34.833842 master-0 kubenswrapper[7518]: I0313 12:52:34.833780 7518 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-7769569c45-qz88j" event={"ID":"36ad5a83-5c32-4941-94e0-7af86ac5d462","Type":"ContainerStarted","Data":"c4aea2db722bdac5b7168c49e752c46da9432061c6c515522534eb8c4d6126b5"} Mar 13 12:52:35.850144 master-0 kubenswrapper[7518]: I0313 12:52:35.850039 7518 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-7769569c45-qz88j" event={"ID":"36ad5a83-5c32-4941-94e0-7af86ac5d462","Type":"ContainerStarted","Data":"d3f955ed07eb7000cc37a6b6a146e9b0dce8c1a18a06032ac8459b956910fb30"} Mar 13 12:52:35.850650 master-0 kubenswrapper[7518]: I0313 12:52:35.850132 7518 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-7769569c45-qz88j" event={"ID":"36ad5a83-5c32-4941-94e0-7af86ac5d462","Type":"ContainerStarted","Data":"275ca0d9c3fa8785eec7e4c69148d50c6cecf30b1c42bd4b165d5c5b0fa5d7cd"} Mar 13 12:52:35.870392 master-0 kubenswrapper[7518]: I0313 12:52:35.870310 7518 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-admission-controller-7769569c45-qz88j" podStartSLOduration=2.870291495 podStartE2EDuration="2.870291495s" podCreationTimestamp="2026-03-13 12:52:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 12:52:35.866263697 +0000 UTC m=+910.499332894" watchObservedRunningTime="2026-03-13 12:52:35.870291495 +0000 UTC m=+910.503360682" Mar 13 12:52:35.905954 master-0 kubenswrapper[7518]: I0313 12:52:35.905827 7518 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-multus/multus-admission-controller-8d675b596-96gds"] Mar 13 12:52:35.906360 master-0 kubenswrapper[7518]: I0313 12:52:35.906117 7518 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-multus/multus-admission-controller-8d675b596-96gds" podUID="4c0b18db-06ad-4d58-a353-f6fd96309dea" containerName="multus-admission-controller" containerID="cri-o://c79a1fdbba512b9f4f21a08ea7612d350b0579fc0951d1d8b0ae9fc5bc23fc15" gracePeriod=30 Mar 13 12:52:35.906672 master-0 kubenswrapper[7518]: I0313 12:52:35.906651 7518 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-multus/multus-admission-controller-8d675b596-96gds" podUID="4c0b18db-06ad-4d58-a353-f6fd96309dea" containerName="kube-rbac-proxy" containerID="cri-o://9553cf75735bf17d389fb1088bd8b8e97d7600ab1818e3680fca777b7afeaa50" gracePeriod=30 Mar 13 12:52:36.857212 master-0 kubenswrapper[7518]: I0313 12:52:36.857150 7518 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-8d675b596-96gds" event={"ID":"4c0b18db-06ad-4d58-a353-f6fd96309dea","Type":"ContainerDied","Data":"9553cf75735bf17d389fb1088bd8b8e97d7600ab1818e3680fca777b7afeaa50"} Mar 13 12:52:36.857212 master-0 kubenswrapper[7518]: I0313 12:52:36.857123 7518 generic.go:334] "Generic (PLEG): container 
finished" podID="4c0b18db-06ad-4d58-a353-f6fd96309dea" containerID="9553cf75735bf17d389fb1088bd8b8e97d7600ab1818e3680fca777b7afeaa50" exitCode=0 Mar 13 12:52:39.598828 master-0 kubenswrapper[7518]: I0313 12:52:39.598758 7518 scope.go:117] "RemoveContainer" containerID="c5dac29410c608c592ce2da4d646f5dae37752b356e4a615b5b9f8033e660a03" Mar 13 12:52:39.599494 master-0 kubenswrapper[7518]: E0313 12:52:39.599013 7518 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"snapshot-controller\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=snapshot-controller pod=csi-snapshot-controller-7577d6f48-pjpn2_openshift-cluster-storage-operator(c642c18f-f960-4418-bcb7-df884f8f8ad5)\"" pod="openshift-cluster-storage-operator/csi-snapshot-controller-7577d6f48-pjpn2" podUID="c642c18f-f960-4418-bcb7-df884f8f8ad5" Mar 13 12:52:40.827972 master-0 kubenswrapper[7518]: I0313 12:52:40.827922 7518 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/installer-5-master-0"] Mar 13 12:52:40.845743 master-0 kubenswrapper[7518]: I0313 12:52:40.845693 7518 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/installer-5-master-0" Mar 13 12:52:40.851110 master-0 kubenswrapper[7518]: I0313 12:52:40.851057 7518 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler/installer-5-master-0"] Mar 13 12:52:40.854265 master-0 kubenswrapper[7518]: I0313 12:52:40.853121 7518 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler"/"installer-sa-dockercfg-lgm6s" Mar 13 12:52:40.854477 master-0 kubenswrapper[7518]: I0313 12:52:40.853222 7518 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler"/"kube-root-ca.crt" Mar 13 12:52:41.028421 master-0 kubenswrapper[7518]: I0313 12:52:41.028359 7518 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/1670a1d9-46a3-4d25-9dd1-43a08e2759c7-kubelet-dir\") pod \"installer-5-master-0\" (UID: \"1670a1d9-46a3-4d25-9dd1-43a08e2759c7\") " pod="openshift-kube-scheduler/installer-5-master-0" Mar 13 12:52:41.028656 master-0 kubenswrapper[7518]: I0313 12:52:41.028469 7518 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1670a1d9-46a3-4d25-9dd1-43a08e2759c7-kube-api-access\") pod \"installer-5-master-0\" (UID: \"1670a1d9-46a3-4d25-9dd1-43a08e2759c7\") " pod="openshift-kube-scheduler/installer-5-master-0" Mar 13 12:52:41.028656 master-0 kubenswrapper[7518]: I0313 12:52:41.028533 7518 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/1670a1d9-46a3-4d25-9dd1-43a08e2759c7-var-lock\") pod \"installer-5-master-0\" (UID: \"1670a1d9-46a3-4d25-9dd1-43a08e2759c7\") " pod="openshift-kube-scheduler/installer-5-master-0" Mar 13 12:52:41.129432 master-0 kubenswrapper[7518]: I0313 12:52:41.129291 7518 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/1670a1d9-46a3-4d25-9dd1-43a08e2759c7-kubelet-dir\") pod \"installer-5-master-0\" (UID: \"1670a1d9-46a3-4d25-9dd1-43a08e2759c7\") " pod="openshift-kube-scheduler/installer-5-master-0" Mar 13 12:52:41.129432 master-0 kubenswrapper[7518]: I0313 12:52:41.129393 7518 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1670a1d9-46a3-4d25-9dd1-43a08e2759c7-kube-api-access\") pod \"installer-5-master-0\" (UID: \"1670a1d9-46a3-4d25-9dd1-43a08e2759c7\") " pod="openshift-kube-scheduler/installer-5-master-0" Mar 13 12:52:41.129675 master-0 kubenswrapper[7518]: I0313 12:52:41.129460 7518 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/1670a1d9-46a3-4d25-9dd1-43a08e2759c7-kubelet-dir\") pod \"installer-5-master-0\" (UID: \"1670a1d9-46a3-4d25-9dd1-43a08e2759c7\") " pod="openshift-kube-scheduler/installer-5-master-0" Mar 13 12:52:41.129675 master-0 kubenswrapper[7518]: I0313 12:52:41.129621 7518 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/1670a1d9-46a3-4d25-9dd1-43a08e2759c7-var-lock\") pod \"installer-5-master-0\" (UID: \"1670a1d9-46a3-4d25-9dd1-43a08e2759c7\") " pod="openshift-kube-scheduler/installer-5-master-0" Mar 13 12:52:41.129777 master-0 kubenswrapper[7518]: I0313 12:52:41.129755 7518 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/1670a1d9-46a3-4d25-9dd1-43a08e2759c7-var-lock\") pod \"installer-5-master-0\" (UID: \"1670a1d9-46a3-4d25-9dd1-43a08e2759c7\") " pod="openshift-kube-scheduler/installer-5-master-0" Mar 13 12:52:41.145861 master-0 kubenswrapper[7518]: I0313 12:52:41.145809 7518 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1670a1d9-46a3-4d25-9dd1-43a08e2759c7-kube-api-access\") pod \"installer-5-master-0\" (UID: \"1670a1d9-46a3-4d25-9dd1-43a08e2759c7\") " pod="openshift-kube-scheduler/installer-5-master-0" Mar 13 12:52:41.178542 master-0 kubenswrapper[7518]: I0313 12:52:41.178467 7518 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/installer-5-master-0" Mar 13 12:52:41.584719 master-0 kubenswrapper[7518]: I0313 12:52:41.584674 7518 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler/installer-5-master-0"] Mar 13 12:52:42.028210 master-0 kubenswrapper[7518]: I0313 12:52:42.028156 7518 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-5-master-0" event={"ID":"1670a1d9-46a3-4d25-9dd1-43a08e2759c7","Type":"ContainerStarted","Data":"432ae93d7929ff1377b0a32b34b9fd0f282a4c2a377afc25391c4f66c1a92ec6"} Mar 13 12:52:43.036218 master-0 kubenswrapper[7518]: I0313 12:52:43.036114 7518 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-5-master-0" event={"ID":"1670a1d9-46a3-4d25-9dd1-43a08e2759c7","Type":"ContainerStarted","Data":"a286539a5f3b6d8dbf769c0d114494a7685625beed846cc4c4f2272b91586aab"} Mar 13 12:52:43.053076 master-0 kubenswrapper[7518]: I0313 12:52:43.053010 7518 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/installer-5-master-0" podStartSLOduration=3.052990008 podStartE2EDuration="3.052990008s" podCreationTimestamp="2026-03-13 12:52:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 12:52:43.051707114 +0000 UTC m=+917.684776301" watchObservedRunningTime="2026-03-13 12:52:43.052990008 +0000 UTC m=+917.686059195" Mar 13 12:52:49.876966 master-0 kubenswrapper[7518]: I0313 12:52:49.875956 7518 kubelet.go:2421] "SyncLoop ADD" 
source="api" pods=["openshift-kube-controller-manager/installer-3-master-0"] Mar 13 12:52:49.876966 master-0 kubenswrapper[7518]: I0313 12:52:49.876835 7518 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/installer-3-master-0" Mar 13 12:52:49.879826 master-0 kubenswrapper[7518]: I0313 12:52:49.879791 7518 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager"/"installer-sa-dockercfg-7mx4m" Mar 13 12:52:49.879953 master-0 kubenswrapper[7518]: I0313 12:52:49.879905 7518 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager"/"kube-root-ca.crt" Mar 13 12:52:49.888239 master-0 kubenswrapper[7518]: I0313 12:52:49.888093 7518 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/installer-3-master-0"] Mar 13 12:52:49.926579 master-0 kubenswrapper[7518]: I0313 12:52:49.926513 7518 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/76fe9cb6-ff3d-4bd9-a26d-dc8c9ce4a8aa-kube-api-access\") pod \"installer-3-master-0\" (UID: \"76fe9cb6-ff3d-4bd9-a26d-dc8c9ce4a8aa\") " pod="openshift-kube-controller-manager/installer-3-master-0" Mar 13 12:52:49.926579 master-0 kubenswrapper[7518]: I0313 12:52:49.926580 7518 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/76fe9cb6-ff3d-4bd9-a26d-dc8c9ce4a8aa-kubelet-dir\") pod \"installer-3-master-0\" (UID: \"76fe9cb6-ff3d-4bd9-a26d-dc8c9ce4a8aa\") " pod="openshift-kube-controller-manager/installer-3-master-0" Mar 13 12:52:49.926807 master-0 kubenswrapper[7518]: I0313 12:52:49.926598 7518 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: 
\"kubernetes.io/host-path/76fe9cb6-ff3d-4bd9-a26d-dc8c9ce4a8aa-var-lock\") pod \"installer-3-master-0\" (UID: \"76fe9cb6-ff3d-4bd9-a26d-dc8c9ce4a8aa\") " pod="openshift-kube-controller-manager/installer-3-master-0" Mar 13 12:52:50.027535 master-0 kubenswrapper[7518]: I0313 12:52:50.027461 7518 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/76fe9cb6-ff3d-4bd9-a26d-dc8c9ce4a8aa-kube-api-access\") pod \"installer-3-master-0\" (UID: \"76fe9cb6-ff3d-4bd9-a26d-dc8c9ce4a8aa\") " pod="openshift-kube-controller-manager/installer-3-master-0" Mar 13 12:52:50.027535 master-0 kubenswrapper[7518]: I0313 12:52:50.027546 7518 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/76fe9cb6-ff3d-4bd9-a26d-dc8c9ce4a8aa-kubelet-dir\") pod \"installer-3-master-0\" (UID: \"76fe9cb6-ff3d-4bd9-a26d-dc8c9ce4a8aa\") " pod="openshift-kube-controller-manager/installer-3-master-0" Mar 13 12:52:50.027902 master-0 kubenswrapper[7518]: I0313 12:52:50.027568 7518 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/76fe9cb6-ff3d-4bd9-a26d-dc8c9ce4a8aa-var-lock\") pod \"installer-3-master-0\" (UID: \"76fe9cb6-ff3d-4bd9-a26d-dc8c9ce4a8aa\") " pod="openshift-kube-controller-manager/installer-3-master-0" Mar 13 12:52:50.027902 master-0 kubenswrapper[7518]: I0313 12:52:50.027627 7518 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/76fe9cb6-ff3d-4bd9-a26d-dc8c9ce4a8aa-kubelet-dir\") pod \"installer-3-master-0\" (UID: \"76fe9cb6-ff3d-4bd9-a26d-dc8c9ce4a8aa\") " pod="openshift-kube-controller-manager/installer-3-master-0" Mar 13 12:52:50.027902 master-0 kubenswrapper[7518]: I0313 12:52:50.027684 7518 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: 
\"kubernetes.io/host-path/76fe9cb6-ff3d-4bd9-a26d-dc8c9ce4a8aa-var-lock\") pod \"installer-3-master-0\" (UID: \"76fe9cb6-ff3d-4bd9-a26d-dc8c9ce4a8aa\") " pod="openshift-kube-controller-manager/installer-3-master-0" Mar 13 12:52:50.042684 master-0 kubenswrapper[7518]: I0313 12:52:50.042644 7518 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/76fe9cb6-ff3d-4bd9-a26d-dc8c9ce4a8aa-kube-api-access\") pod \"installer-3-master-0\" (UID: \"76fe9cb6-ff3d-4bd9-a26d-dc8c9ce4a8aa\") " pod="openshift-kube-controller-manager/installer-3-master-0" Mar 13 12:52:50.201999 master-0 kubenswrapper[7518]: I0313 12:52:50.201877 7518 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/installer-3-master-0" Mar 13 12:52:50.602178 master-0 kubenswrapper[7518]: I0313 12:52:50.602101 7518 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/installer-3-master-0"] Mar 13 12:52:50.879312 master-0 kubenswrapper[7518]: I0313 12:52:50.879265 7518 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" Mar 13 12:52:51.103410 master-0 kubenswrapper[7518]: I0313 12:52:51.103352 7518 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-3-master-0" event={"ID":"76fe9cb6-ff3d-4bd9-a26d-dc8c9ce4a8aa","Type":"ContainerStarted","Data":"ac5bd7e9e9ade8981025308aaf718e0c330dc4308320062f39375e8cc91f1134"} Mar 13 12:52:51.103410 master-0 kubenswrapper[7518]: I0313 12:52:51.103394 7518 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-3-master-0" event={"ID":"76fe9cb6-ff3d-4bd9-a26d-dc8c9ce4a8aa","Type":"ContainerStarted","Data":"495a72687402da10550aa60f4b41a9bc310b020e43ddbbb5f831586412f05db8"} Mar 13 12:52:51.123888 master-0 kubenswrapper[7518]: I0313 12:52:51.123742 
7518 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/installer-3-master-0" podStartSLOduration=2.123721317 podStartE2EDuration="2.123721317s" podCreationTimestamp="2026-03-13 12:52:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 12:52:51.122464224 +0000 UTC m=+925.755533431" watchObservedRunningTime="2026-03-13 12:52:51.123721317 +0000 UTC m=+925.756790504" Mar 13 12:52:51.364904 master-0 kubenswrapper[7518]: I0313 12:52:51.364848 7518 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-apiserver/installer-3-master-0"] Mar 13 12:52:51.365289 master-0 kubenswrapper[7518]: I0313 12:52:51.365066 7518 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/installer-3-master-0" podUID="c1f6c3b0-411a-4553-a198-e684b49ec412" containerName="installer" containerID="cri-o://65f40a435518973d6c0909fbbcdd0cde8969ba4566e140a164b5d48f5e95d0de" gracePeriod=30 Mar 13 12:52:52.599426 master-0 kubenswrapper[7518]: I0313 12:52:52.599318 7518 scope.go:117] "RemoveContainer" containerID="c5dac29410c608c592ce2da4d646f5dae37752b356e4a615b5b9f8033e660a03" Mar 13 12:52:52.600355 master-0 kubenswrapper[7518]: E0313 12:52:52.599650 7518 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"snapshot-controller\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=snapshot-controller pod=csi-snapshot-controller-7577d6f48-pjpn2_openshift-cluster-storage-operator(c642c18f-f960-4418-bcb7-df884f8f8ad5)\"" pod="openshift-cluster-storage-operator/csi-snapshot-controller-7577d6f48-pjpn2" podUID="c642c18f-f960-4418-bcb7-df884f8f8ad5" Mar 13 12:52:54.537878 master-0 kubenswrapper[7518]: I0313 12:52:54.537773 7518 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/installer-4-master-0"] Mar 13 
12:52:54.539398 master-0 kubenswrapper[7518]: I0313 12:52:54.539340 7518 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-4-master-0"
Mar 13 12:52:54.561829 master-0 kubenswrapper[7518]: I0313 12:52:54.559991 7518 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-4-master-0"]
Mar 13 12:52:54.696803 master-0 kubenswrapper[7518]: I0313 12:52:54.696717 7518 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/185a10f7-2a4b-4171-b10d-4614cb8671bd-kube-api-access\") pod \"installer-4-master-0\" (UID: \"185a10f7-2a4b-4171-b10d-4614cb8671bd\") " pod="openshift-kube-apiserver/installer-4-master-0"
Mar 13 12:52:54.696803 master-0 kubenswrapper[7518]: I0313 12:52:54.696805 7518 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/185a10f7-2a4b-4171-b10d-4614cb8671bd-var-lock\") pod \"installer-4-master-0\" (UID: \"185a10f7-2a4b-4171-b10d-4614cb8671bd\") " pod="openshift-kube-apiserver/installer-4-master-0"
Mar 13 12:52:54.697080 master-0 kubenswrapper[7518]: I0313 12:52:54.696946 7518 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/185a10f7-2a4b-4171-b10d-4614cb8671bd-kubelet-dir\") pod \"installer-4-master-0\" (UID: \"185a10f7-2a4b-4171-b10d-4614cb8671bd\") " pod="openshift-kube-apiserver/installer-4-master-0"
Mar 13 12:52:54.798498 master-0 kubenswrapper[7518]: I0313 12:52:54.798353 7518 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/185a10f7-2a4b-4171-b10d-4614cb8671bd-kubelet-dir\") pod \"installer-4-master-0\" (UID: \"185a10f7-2a4b-4171-b10d-4614cb8671bd\") " pod="openshift-kube-apiserver/installer-4-master-0"
Mar 13 12:52:54.798498 master-0 kubenswrapper[7518]: I0313 12:52:54.798444 7518 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/185a10f7-2a4b-4171-b10d-4614cb8671bd-kube-api-access\") pod \"installer-4-master-0\" (UID: \"185a10f7-2a4b-4171-b10d-4614cb8671bd\") " pod="openshift-kube-apiserver/installer-4-master-0"
Mar 13 12:52:54.798782 master-0 kubenswrapper[7518]: I0313 12:52:54.798515 7518 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/185a10f7-2a4b-4171-b10d-4614cb8671bd-var-lock\") pod \"installer-4-master-0\" (UID: \"185a10f7-2a4b-4171-b10d-4614cb8671bd\") " pod="openshift-kube-apiserver/installer-4-master-0"
Mar 13 12:52:54.798782 master-0 kubenswrapper[7518]: I0313 12:52:54.798551 7518 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/185a10f7-2a4b-4171-b10d-4614cb8671bd-kubelet-dir\") pod \"installer-4-master-0\" (UID: \"185a10f7-2a4b-4171-b10d-4614cb8671bd\") " pod="openshift-kube-apiserver/installer-4-master-0"
Mar 13 12:52:54.798782 master-0 kubenswrapper[7518]: I0313 12:52:54.798694 7518 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/185a10f7-2a4b-4171-b10d-4614cb8671bd-var-lock\") pod \"installer-4-master-0\" (UID: \"185a10f7-2a4b-4171-b10d-4614cb8671bd\") " pod="openshift-kube-apiserver/installer-4-master-0"
Mar 13 12:52:54.830246 master-0 kubenswrapper[7518]: I0313 12:52:54.829732 7518 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/185a10f7-2a4b-4171-b10d-4614cb8671bd-kube-api-access\") pod \"installer-4-master-0\" (UID: \"185a10f7-2a4b-4171-b10d-4614cb8671bd\") " pod="openshift-kube-apiserver/installer-4-master-0"
Mar 13 12:52:54.868047 master-0 kubenswrapper[7518]: I0313 12:52:54.867978 7518 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-4-master-0"
Mar 13 12:52:55.325993 master-0 kubenswrapper[7518]: I0313 12:52:55.325926 7518 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-4-master-0"]
Mar 13 12:52:55.333927 master-0 kubenswrapper[7518]: W0313 12:52:55.333872 7518 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod185a10f7_2a4b_4171_b10d_4614cb8671bd.slice/crio-f8aea90deac8c57ee0f5fc4e46276af696e5807ecbb4598ad2a67ae2024be4b0 WatchSource:0}: Error finding container f8aea90deac8c57ee0f5fc4e46276af696e5807ecbb4598ad2a67ae2024be4b0: Status 404 returned error can't find the container with id f8aea90deac8c57ee0f5fc4e46276af696e5807ecbb4598ad2a67ae2024be4b0
Mar 13 12:52:56.153607 master-0 kubenswrapper[7518]: I0313 12:52:56.153561 7518 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-4-master-0" event={"ID":"185a10f7-2a4b-4171-b10d-4614cb8671bd","Type":"ContainerStarted","Data":"5cd4fe9ce3ca6e40b66f822008735eb91b0372a4e062d161fec91212083d1dbe"}
Mar 13 12:52:56.154222 master-0 kubenswrapper[7518]: I0313 12:52:56.154202 7518 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-4-master-0" event={"ID":"185a10f7-2a4b-4171-b10d-4614cb8671bd","Type":"ContainerStarted","Data":"f8aea90deac8c57ee0f5fc4e46276af696e5807ecbb4598ad2a67ae2024be4b0"}
Mar 13 12:52:56.176381 master-0 kubenswrapper[7518]: I0313 12:52:56.176299 7518 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/installer-4-master-0" podStartSLOduration=2.176283055 podStartE2EDuration="2.176283055s" podCreationTimestamp="2026-03-13 12:52:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 12:52:56.169609797 +0000 UTC m=+930.802678984" watchObservedRunningTime="2026-03-13 12:52:56.176283055 +0000 UTC m=+930.809352232"
Mar 13 12:52:58.172005 master-0 kubenswrapper[7518]: I0313 12:52:58.171955 7518 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_installer-3-master-0_c1f6c3b0-411a-4553-a198-e684b49ec412/installer/0.log"
Mar 13 12:52:58.172005 master-0 kubenswrapper[7518]: I0313 12:52:58.172004 7518 generic.go:334] "Generic (PLEG): container finished" podID="c1f6c3b0-411a-4553-a198-e684b49ec412" containerID="65f40a435518973d6c0909fbbcdd0cde8969ba4566e140a164b5d48f5e95d0de" exitCode=1
Mar 13 12:52:58.172603 master-0 kubenswrapper[7518]: I0313 12:52:58.172035 7518 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-3-master-0" event={"ID":"c1f6c3b0-411a-4553-a198-e684b49ec412","Type":"ContainerDied","Data":"65f40a435518973d6c0909fbbcdd0cde8969ba4566e140a164b5d48f5e95d0de"}
Mar 13 12:52:58.256742 master-0 kubenswrapper[7518]: I0313 12:52:58.256689 7518 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_installer-3-master-0_c1f6c3b0-411a-4553-a198-e684b49ec412/installer/0.log"
Mar 13 12:52:58.257091 master-0 kubenswrapper[7518]: I0313 12:52:58.256766 7518 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-3-master-0"
Mar 13 12:52:58.301828 master-0 kubenswrapper[7518]: I0313 12:52:58.301769 7518 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/c1f6c3b0-411a-4553-a198-e684b49ec412-var-lock\") pod \"c1f6c3b0-411a-4553-a198-e684b49ec412\" (UID: \"c1f6c3b0-411a-4553-a198-e684b49ec412\") "
Mar 13 12:52:58.301828 master-0 kubenswrapper[7518]: I0313 12:52:58.301826 7518 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/c1f6c3b0-411a-4553-a198-e684b49ec412-kubelet-dir\") pod \"c1f6c3b0-411a-4553-a198-e684b49ec412\" (UID: \"c1f6c3b0-411a-4553-a198-e684b49ec412\") "
Mar 13 12:52:58.302078 master-0 kubenswrapper[7518]: I0313 12:52:58.301919 7518 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c1f6c3b0-411a-4553-a198-e684b49ec412-var-lock" (OuterVolumeSpecName: "var-lock") pod "c1f6c3b0-411a-4553-a198-e684b49ec412" (UID: "c1f6c3b0-411a-4553-a198-e684b49ec412"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 13 12:52:58.302078 master-0 kubenswrapper[7518]: I0313 12:52:58.301984 7518 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c1f6c3b0-411a-4553-a198-e684b49ec412-kube-api-access\") pod \"c1f6c3b0-411a-4553-a198-e684b49ec412\" (UID: \"c1f6c3b0-411a-4553-a198-e684b49ec412\") "
Mar 13 12:52:58.302078 master-0 kubenswrapper[7518]: I0313 12:52:58.302046 7518 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c1f6c3b0-411a-4553-a198-e684b49ec412-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "c1f6c3b0-411a-4553-a198-e684b49ec412" (UID: "c1f6c3b0-411a-4553-a198-e684b49ec412"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 13 12:52:58.302259 master-0 kubenswrapper[7518]: I0313 12:52:58.302239 7518 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/c1f6c3b0-411a-4553-a198-e684b49ec412-var-lock\") on node \"master-0\" DevicePath \"\""
Mar 13 12:52:58.302259 master-0 kubenswrapper[7518]: I0313 12:52:58.302254 7518 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/c1f6c3b0-411a-4553-a198-e684b49ec412-kubelet-dir\") on node \"master-0\" DevicePath \"\""
Mar 13 12:52:58.305382 master-0 kubenswrapper[7518]: I0313 12:52:58.305346 7518 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c1f6c3b0-411a-4553-a198-e684b49ec412-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "c1f6c3b0-411a-4553-a198-e684b49ec412" (UID: "c1f6c3b0-411a-4553-a198-e684b49ec412"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 13 12:52:58.402988 master-0 kubenswrapper[7518]: I0313 12:52:58.402903 7518 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c1f6c3b0-411a-4553-a198-e684b49ec412-kube-api-access\") on node \"master-0\" DevicePath \"\""
Mar 13 12:52:59.186490 master-0 kubenswrapper[7518]: I0313 12:52:59.186395 7518 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_installer-3-master-0_c1f6c3b0-411a-4553-a198-e684b49ec412/installer/0.log"
Mar 13 12:52:59.187581 master-0 kubenswrapper[7518]: I0313 12:52:59.186503 7518 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-3-master-0" event={"ID":"c1f6c3b0-411a-4553-a198-e684b49ec412","Type":"ContainerDied","Data":"6f0d31b7d2a8b09b9acea28b33add1196f7dc351da9eee1d6abf47a178142184"}
Mar 13 12:52:59.187581 master-0 kubenswrapper[7518]: I0313 12:52:59.186565 7518 scope.go:117] "RemoveContainer" containerID="65f40a435518973d6c0909fbbcdd0cde8969ba4566e140a164b5d48f5e95d0de"
Mar 13 12:52:59.187581 master-0 kubenswrapper[7518]: I0313 12:52:59.186577 7518 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-3-master-0"
Mar 13 12:52:59.239895 master-0 kubenswrapper[7518]: I0313 12:52:59.239819 7518 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-apiserver/installer-3-master-0"]
Mar 13 12:52:59.246616 master-0 kubenswrapper[7518]: I0313 12:52:59.246564 7518 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/installer-3-master-0"]
Mar 13 12:52:59.610004 master-0 kubenswrapper[7518]: I0313 12:52:59.609942 7518 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c1f6c3b0-411a-4553-a198-e684b49ec412" path="/var/lib/kubelet/pods/c1f6c3b0-411a-4553-a198-e684b49ec412/volumes"
Mar 13 12:53:00.198603 master-0 kubenswrapper[7518]: I0313 12:53:00.198504 7518 generic.go:334] "Generic (PLEG): container finished" podID="45925a5e-41ae-4c19-b586-3151c7677612" containerID="f4c4c4e5602a184f824d2367e7178507d9196d2b340284307f9055d03b447109" exitCode=0
Mar 13 12:53:00.199177 master-0 kubenswrapper[7518]: I0313 12:53:00.198615 7518 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-79f8cd6fdd-wtf6j" event={"ID":"45925a5e-41ae-4c19-b586-3151c7677612","Type":"ContainerDied","Data":"f4c4c4e5602a184f824d2367e7178507d9196d2b340284307f9055d03b447109"}
Mar 13 12:53:00.199177 master-0 kubenswrapper[7518]: I0313 12:53:00.198681 7518 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-79f8cd6fdd-wtf6j" event={"ID":"45925a5e-41ae-4c19-b586-3151c7677612","Type":"ContainerStarted","Data":"893bd44fe8f8e5d4020dc723d3e3fcab66acbac94263f9ab54194b1817013a01"}
Mar 13 12:53:00.199177 master-0 kubenswrapper[7518]: I0313 12:53:00.198713 7518 scope.go:117] "RemoveContainer" containerID="c4f835c09db11145ad2a4fe25a302845b3cf71bff631c2bae9c2d15853a5abe8"
Mar 13 12:53:00.539390 master-0 kubenswrapper[7518]: I0313 12:53:00.539307 7518 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-79f8cd6fdd-wtf6j"
Mar 13 12:53:00.547496 master-0 kubenswrapper[7518]: I0313 12:53:00.547421 7518 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-wtf6j container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 13 12:53:00.547496 master-0 kubenswrapper[7518]: [-]has-synced failed: reason withheld
Mar 13 12:53:00.547496 master-0 kubenswrapper[7518]: [+]process-running ok
Mar 13 12:53:00.547496 master-0 kubenswrapper[7518]: healthz check failed
Mar 13 12:53:00.547842 master-0 kubenswrapper[7518]: I0313 12:53:00.547537 7518 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-wtf6j" podUID="45925a5e-41ae-4c19-b586-3151c7677612" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 13 12:53:01.541512 master-0 kubenswrapper[7518]: I0313 12:53:01.541452 7518 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-wtf6j container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 13 12:53:01.541512 master-0 kubenswrapper[7518]: [-]has-synced failed: reason withheld
Mar 13 12:53:01.541512 master-0 kubenswrapper[7518]: [+]process-running ok
Mar 13 12:53:01.541512 master-0 kubenswrapper[7518]: healthz check failed
Mar 13 12:53:01.542595 master-0 kubenswrapper[7518]: I0313 12:53:01.542257 7518 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-wtf6j" podUID="45925a5e-41ae-4c19-b586-3151c7677612" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 13 12:53:02.541933 master-0 kubenswrapper[7518]: I0313 12:53:02.541798 7518 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-wtf6j container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 13 12:53:02.541933 master-0 kubenswrapper[7518]: [-]has-synced failed: reason withheld
Mar 13 12:53:02.541933 master-0 kubenswrapper[7518]: [+]process-running ok
Mar 13 12:53:02.541933 master-0 kubenswrapper[7518]: healthz check failed
Mar 13 12:53:02.541933 master-0 kubenswrapper[7518]: I0313 12:53:02.541925 7518 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-wtf6j" podUID="45925a5e-41ae-4c19-b586-3151c7677612" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 13 12:53:03.543420 master-0 kubenswrapper[7518]: I0313 12:53:03.543349 7518 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-wtf6j container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 13 12:53:03.543420 master-0 kubenswrapper[7518]: [-]has-synced failed: reason withheld
Mar 13 12:53:03.543420 master-0 kubenswrapper[7518]: [+]process-running ok
Mar 13 12:53:03.543420 master-0 kubenswrapper[7518]: healthz check failed
Mar 13 12:53:03.544800 master-0 kubenswrapper[7518]: I0313 12:53:03.543434 7518 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-wtf6j" podUID="45925a5e-41ae-4c19-b586-3151c7677612" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 13 12:53:03.598844 master-0 kubenswrapper[7518]: I0313 12:53:03.598755 7518 scope.go:117] "RemoveContainer" containerID="c5dac29410c608c592ce2da4d646f5dae37752b356e4a615b5b9f8033e660a03"
Mar 13 12:53:04.234032 master-0 kubenswrapper[7518]: I0313 12:53:04.233988 7518 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-storage-operator_csi-snapshot-controller-7577d6f48-pjpn2_c642c18f-f960-4418-bcb7-df884f8f8ad5/snapshot-controller/4.log"
Mar 13 12:53:04.234364 master-0 kubenswrapper[7518]: I0313 12:53:04.234048 7518 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/csi-snapshot-controller-7577d6f48-pjpn2" event={"ID":"c642c18f-f960-4418-bcb7-df884f8f8ad5","Type":"ContainerStarted","Data":"9dcaa5253b805625ce01a026605f12a83a746a48b538a6fdf65b49c6d4f6b41c"}
Mar 13 12:53:04.541486 master-0 kubenswrapper[7518]: I0313 12:53:04.541417 7518 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-wtf6j container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 13 12:53:04.541486 master-0 kubenswrapper[7518]: [-]has-synced failed: reason withheld
Mar 13 12:53:04.541486 master-0 kubenswrapper[7518]: [+]process-running ok
Mar 13 12:53:04.541486 master-0 kubenswrapper[7518]: healthz check failed
Mar 13 12:53:04.541486 master-0 kubenswrapper[7518]: I0313 12:53:04.541483 7518 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-wtf6j" podUID="45925a5e-41ae-4c19-b586-3151c7677612" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 13 12:53:05.542493 master-0 kubenswrapper[7518]: I0313 12:53:05.542390 7518 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-wtf6j container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 13 12:53:05.542493 master-0 kubenswrapper[7518]: [-]has-synced failed: reason withheld
Mar 13 12:53:05.542493 master-0 kubenswrapper[7518]: [+]process-running ok
Mar 13 12:53:05.542493 master-0 kubenswrapper[7518]: healthz check failed
Mar 13 12:53:05.543357 master-0 kubenswrapper[7518]: I0313 12:53:05.542532 7518 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-wtf6j" podUID="45925a5e-41ae-4c19-b586-3151c7677612" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 13 12:53:06.251046 master-0 kubenswrapper[7518]: I0313 12:53:06.251006 7518 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-admission-controller-8d675b596-96gds_4c0b18db-06ad-4d58-a353-f6fd96309dea/multus-admission-controller/0.log"
Mar 13 12:53:06.251255 master-0 kubenswrapper[7518]: I0313 12:53:06.251059 7518 generic.go:334] "Generic (PLEG): container finished" podID="4c0b18db-06ad-4d58-a353-f6fd96309dea" containerID="c79a1fdbba512b9f4f21a08ea7612d350b0579fc0951d1d8b0ae9fc5bc23fc15" exitCode=137
Mar 13 12:53:06.251255 master-0 kubenswrapper[7518]: I0313 12:53:06.251092 7518 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-8d675b596-96gds" event={"ID":"4c0b18db-06ad-4d58-a353-f6fd96309dea","Type":"ContainerDied","Data":"c79a1fdbba512b9f4f21a08ea7612d350b0579fc0951d1d8b0ae9fc5bc23fc15"}
Mar 13 12:53:06.251255 master-0 kubenswrapper[7518]: I0313 12:53:06.251130 7518 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-8d675b596-96gds" event={"ID":"4c0b18db-06ad-4d58-a353-f6fd96309dea","Type":"ContainerDied","Data":"0fdc23a018e70f12d64abda9b21166b71dd0a6e62a76a56d6fb711404d01a3e9"}
Mar 13 12:53:06.251255 master-0 kubenswrapper[7518]: I0313 12:53:06.251161 7518 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0fdc23a018e70f12d64abda9b21166b71dd0a6e62a76a56d6fb711404d01a3e9"
Mar 13 12:53:06.258351 master-0 kubenswrapper[7518]: I0313 12:53:06.258333 7518 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-admission-controller-8d675b596-96gds_4c0b18db-06ad-4d58-a353-f6fd96309dea/multus-admission-controller/0.log"
Mar 13 12:53:06.258432 master-0 kubenswrapper[7518]: I0313 12:53:06.258387 7518 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-8d675b596-96gds"
Mar 13 12:53:06.406481 master-0 kubenswrapper[7518]: I0313 12:53:06.406392 7518 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9psfn\" (UniqueName: \"kubernetes.io/projected/4c0b18db-06ad-4d58-a353-f6fd96309dea-kube-api-access-9psfn\") pod \"4c0b18db-06ad-4d58-a353-f6fd96309dea\" (UID: \"4c0b18db-06ad-4d58-a353-f6fd96309dea\") "
Mar 13 12:53:06.406711 master-0 kubenswrapper[7518]: I0313 12:53:06.406557 7518 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/4c0b18db-06ad-4d58-a353-f6fd96309dea-webhook-certs\") pod \"4c0b18db-06ad-4d58-a353-f6fd96309dea\" (UID: \"4c0b18db-06ad-4d58-a353-f6fd96309dea\") "
Mar 13 12:53:06.409662 master-0 kubenswrapper[7518]: I0313 12:53:06.409591 7518 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4c0b18db-06ad-4d58-a353-f6fd96309dea-kube-api-access-9psfn" (OuterVolumeSpecName: "kube-api-access-9psfn") pod "4c0b18db-06ad-4d58-a353-f6fd96309dea" (UID: "4c0b18db-06ad-4d58-a353-f6fd96309dea"). InnerVolumeSpecName "kube-api-access-9psfn". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 13 12:53:06.410842 master-0 kubenswrapper[7518]: I0313 12:53:06.410772 7518 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4c0b18db-06ad-4d58-a353-f6fd96309dea-webhook-certs" (OuterVolumeSpecName: "webhook-certs") pod "4c0b18db-06ad-4d58-a353-f6fd96309dea" (UID: "4c0b18db-06ad-4d58-a353-f6fd96309dea"). InnerVolumeSpecName "webhook-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 13 12:53:06.508199 master-0 kubenswrapper[7518]: I0313 12:53:06.508042 7518 reconciler_common.go:293] "Volume detached for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/4c0b18db-06ad-4d58-a353-f6fd96309dea-webhook-certs\") on node \"master-0\" DevicePath \"\""
Mar 13 12:53:06.508199 master-0 kubenswrapper[7518]: I0313 12:53:06.508100 7518 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9psfn\" (UniqueName: \"kubernetes.io/projected/4c0b18db-06ad-4d58-a353-f6fd96309dea-kube-api-access-9psfn\") on node \"master-0\" DevicePath \"\""
Mar 13 12:53:06.542630 master-0 kubenswrapper[7518]: I0313 12:53:06.542548 7518 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-wtf6j container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 13 12:53:06.542630 master-0 kubenswrapper[7518]: [-]has-synced failed: reason withheld
Mar 13 12:53:06.542630 master-0 kubenswrapper[7518]: [+]process-running ok
Mar 13 12:53:06.542630 master-0 kubenswrapper[7518]: healthz check failed
Mar 13 12:53:06.542630 master-0 kubenswrapper[7518]: I0313 12:53:06.542606 7518 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-wtf6j" podUID="45925a5e-41ae-4c19-b586-3151c7677612" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 13 12:53:07.269299 master-0 kubenswrapper[7518]: I0313 12:53:07.269249 7518 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-8d675b596-96gds"
Mar 13 12:53:07.304183 master-0 kubenswrapper[7518]: I0313 12:53:07.304108 7518 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-multus/multus-admission-controller-8d675b596-96gds"]
Mar 13 12:53:07.309663 master-0 kubenswrapper[7518]: I0313 12:53:07.309612 7518 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-multus/multus-admission-controller-8d675b596-96gds"]
Mar 13 12:53:07.542410 master-0 kubenswrapper[7518]: I0313 12:53:07.542277 7518 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-wtf6j container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 13 12:53:07.542410 master-0 kubenswrapper[7518]: [-]has-synced failed: reason withheld
Mar 13 12:53:07.542410 master-0 kubenswrapper[7518]: [+]process-running ok
Mar 13 12:53:07.542410 master-0 kubenswrapper[7518]: healthz check failed
Mar 13 12:53:07.542673 master-0 kubenswrapper[7518]: I0313 12:53:07.542386 7518 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-wtf6j" podUID="45925a5e-41ae-4c19-b586-3151c7677612" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 13 12:53:07.615118 master-0 kubenswrapper[7518]: I0313 12:53:07.614956 7518 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4c0b18db-06ad-4d58-a353-f6fd96309dea" path="/var/lib/kubelet/pods/4c0b18db-06ad-4d58-a353-f6fd96309dea/volumes"
Mar 13 12:53:08.541713 master-0 kubenswrapper[7518]: I0313 12:53:08.541647 7518 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-wtf6j container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 13 12:53:08.541713 master-0 kubenswrapper[7518]: [-]has-synced failed: reason withheld
Mar 13 12:53:08.541713 master-0 kubenswrapper[7518]: [+]process-running ok
Mar 13 12:53:08.541713 master-0 kubenswrapper[7518]: healthz check failed
Mar 13 12:53:08.542072 master-0 kubenswrapper[7518]: I0313 12:53:08.541719 7518 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-wtf6j" podUID="45925a5e-41ae-4c19-b586-3151c7677612" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 13 12:53:09.540016 master-0 kubenswrapper[7518]: I0313 12:53:09.539958 7518 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-79f8cd6fdd-wtf6j"
Mar 13 12:53:09.542697 master-0 kubenswrapper[7518]: I0313 12:53:09.542657 7518 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-wtf6j container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 13 12:53:09.542697 master-0 kubenswrapper[7518]: [-]has-synced failed: reason withheld
Mar 13 12:53:09.542697 master-0 kubenswrapper[7518]: [+]process-running ok
Mar 13 12:53:09.542697 master-0 kubenswrapper[7518]: healthz check failed
Mar 13 12:53:09.543144 master-0 kubenswrapper[7518]: I0313 12:53:09.543094 7518 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-wtf6j" podUID="45925a5e-41ae-4c19-b586-3151c7677612" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 13 12:53:10.542101 master-0 kubenswrapper[7518]: I0313 12:53:10.542026 7518 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-wtf6j container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 13 12:53:10.542101 master-0 kubenswrapper[7518]: [-]has-synced failed: reason withheld
Mar 13 12:53:10.542101 master-0 kubenswrapper[7518]: [+]process-running ok
Mar 13 12:53:10.542101 master-0 kubenswrapper[7518]: healthz check failed
Mar 13 12:53:10.543283 master-0 kubenswrapper[7518]: I0313 12:53:10.542175 7518 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-wtf6j" podUID="45925a5e-41ae-4c19-b586-3151c7677612" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 13 12:53:11.543060 master-0 kubenswrapper[7518]: I0313 12:53:11.542973 7518 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-wtf6j container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 13 12:53:11.543060 master-0 kubenswrapper[7518]: [-]has-synced failed: reason withheld
Mar 13 12:53:11.543060 master-0 kubenswrapper[7518]: [+]process-running ok
Mar 13 12:53:11.543060 master-0 kubenswrapper[7518]: healthz check failed
Mar 13 12:53:11.544024 master-0 kubenswrapper[7518]: I0313 12:53:11.543072 7518 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-wtf6j" podUID="45925a5e-41ae-4c19-b586-3151c7677612" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 13 12:53:12.542411 master-0 kubenswrapper[7518]: I0313 12:53:12.542315 7518 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-wtf6j container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 13 12:53:12.542411 master-0 kubenswrapper[7518]: [-]has-synced failed: reason withheld
Mar 13 12:53:12.542411 master-0 kubenswrapper[7518]: [+]process-running ok
Mar 13 12:53:12.542411 master-0 kubenswrapper[7518]: healthz check failed
Mar 13 12:53:12.542870 master-0 kubenswrapper[7518]: I0313 12:53:12.542425 7518 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-wtf6j" podUID="45925a5e-41ae-4c19-b586-3151c7677612" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 13 12:53:13.459911 master-0 kubenswrapper[7518]: I0313 12:53:13.459837 7518 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-scheduler/openshift-kube-scheduler-master-0"]
Mar 13 12:53:13.460480 master-0 kubenswrapper[7518]: I0313 12:53:13.460125 7518 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" podUID="1d3d45b6ce1b3764f9927e623a71adf8" containerName="kube-scheduler" containerID="cri-o://161f4ac8ebbd0b62feb8e2f6594eea574edaf4e2547fb7b8335c22584c734092" gracePeriod=30
Mar 13 12:53:13.460480 master-0 kubenswrapper[7518]: I0313 12:53:13.460224 7518 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" podUID="1d3d45b6ce1b3764f9927e623a71adf8" containerName="kube-scheduler-recovery-controller" containerID="cri-o://749f270368b7664256f6cdcccc583a1b9f48d3556ae13c311239ddf410797b9f" gracePeriod=30
Mar 13 12:53:13.460480 master-0 kubenswrapper[7518]: I0313 12:53:13.460246 7518 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" podUID="1d3d45b6ce1b3764f9927e623a71adf8" containerName="kube-scheduler-cert-syncer" containerID="cri-o://4ed0d6bdd25c3a628bbe00eefea5be982aa9c7f4f1c14c257f7f949dd48fd173" gracePeriod=30
Mar 13 12:53:13.463120 master-0 kubenswrapper[7518]: I0313 12:53:13.463037 7518 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-scheduler/openshift-kube-scheduler-master-0"]
Mar 13 12:53:13.463822 master-0 kubenswrapper[7518]: E0313 12:53:13.463755 7518 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1d3d45b6ce1b3764f9927e623a71adf8" containerName="wait-for-host-port"
Mar 13 12:53:13.463883 master-0 kubenswrapper[7518]: I0313 12:53:13.463821 7518 state_mem.go:107] "Deleted CPUSet assignment" podUID="1d3d45b6ce1b3764f9927e623a71adf8" containerName="wait-for-host-port"
Mar 13 12:53:13.463974 master-0 kubenswrapper[7518]: E0313 12:53:13.463881 7518 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1d3d45b6ce1b3764f9927e623a71adf8" containerName="kube-scheduler"
Mar 13 12:53:13.463974 master-0 kubenswrapper[7518]: I0313 12:53:13.463898 7518 state_mem.go:107] "Deleted CPUSet assignment" podUID="1d3d45b6ce1b3764f9927e623a71adf8" containerName="kube-scheduler"
Mar 13 12:53:13.464051 master-0 kubenswrapper[7518]: E0313 12:53:13.463980 7518 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1d3d45b6ce1b3764f9927e623a71adf8" containerName="kube-scheduler-cert-syncer"
Mar 13 12:53:13.464051 master-0 kubenswrapper[7518]: I0313 12:53:13.463997 7518 state_mem.go:107] "Deleted CPUSet assignment" podUID="1d3d45b6ce1b3764f9927e623a71adf8" containerName="kube-scheduler-cert-syncer"
Mar 13 12:53:13.464051 master-0 kubenswrapper[7518]: E0313 12:53:13.464020 7518 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c1f6c3b0-411a-4553-a198-e684b49ec412" containerName="installer"
Mar 13 12:53:13.465767 master-0 kubenswrapper[7518]: I0313 12:53:13.464064 7518 state_mem.go:107] "Deleted CPUSet assignment" podUID="c1f6c3b0-411a-4553-a198-e684b49ec412" containerName="installer"
Mar 13 12:53:13.465767 master-0 kubenswrapper[7518]: E0313 12:53:13.464098 7518 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1d3d45b6ce1b3764f9927e623a71adf8" containerName="kube-scheduler-recovery-controller"
Mar 13 12:53:13.465767 master-0 kubenswrapper[7518]: I0313 12:53:13.464109 7518 state_mem.go:107] "Deleted CPUSet assignment" podUID="1d3d45b6ce1b3764f9927e623a71adf8" containerName="kube-scheduler-recovery-controller"
Mar 13 12:53:13.465767 master-0 kubenswrapper[7518]: E0313 12:53:13.464176 7518 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4c0b18db-06ad-4d58-a353-f6fd96309dea" containerName="kube-rbac-proxy"
Mar 13 12:53:13.465767 master-0 kubenswrapper[7518]: I0313 12:53:13.464190 7518 state_mem.go:107] "Deleted CPUSet assignment" podUID="4c0b18db-06ad-4d58-a353-f6fd96309dea" containerName="kube-rbac-proxy"
Mar 13 12:53:13.465767 master-0 kubenswrapper[7518]: E0313 12:53:13.464245 7518 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4c0b18db-06ad-4d58-a353-f6fd96309dea" containerName="multus-admission-controller"
Mar 13 12:53:13.465767 master-0 kubenswrapper[7518]: I0313 12:53:13.464259 7518 state_mem.go:107] "Deleted CPUSet assignment" podUID="4c0b18db-06ad-4d58-a353-f6fd96309dea" containerName="multus-admission-controller"
Mar 13 12:53:13.466022 master-0 kubenswrapper[7518]: I0313 12:53:13.465868 7518 memory_manager.go:354] "RemoveStaleState removing state" podUID="4c0b18db-06ad-4d58-a353-f6fd96309dea" containerName="multus-admission-controller"
Mar 13 12:53:13.466022 master-0 kubenswrapper[7518]: I0313 12:53:13.465934 7518 memory_manager.go:354] "RemoveStaleState removing state" podUID="1d3d45b6ce1b3764f9927e623a71adf8" containerName="kube-scheduler-recovery-controller"
Mar 13 12:53:13.466022 master-0 kubenswrapper[7518]: I0313 12:53:13.465955 7518 memory_manager.go:354] "RemoveStaleState removing state" podUID="c1f6c3b0-411a-4553-a198-e684b49ec412" containerName="installer"
Mar 13 12:53:13.466022 master-0 kubenswrapper[7518]: I0313 12:53:13.466003 7518 memory_manager.go:354] "RemoveStaleState removing state" podUID="4c0b18db-06ad-4d58-a353-f6fd96309dea" containerName="kube-rbac-proxy"
Mar 13 12:53:13.466131 master-0 kubenswrapper[7518]: I0313 12:53:13.466027 7518 memory_manager.go:354] "RemoveStaleState removing state" podUID="1d3d45b6ce1b3764f9927e623a71adf8" containerName="kube-scheduler-cert-syncer"
Mar 13 12:53:13.466131 master-0 kubenswrapper[7518]: I0313 12:53:13.466072 7518 memory_manager.go:354] "RemoveStaleState removing state" podUID="1d3d45b6ce1b3764f9927e623a71adf8" containerName="kube-scheduler"
Mar 13 12:53:13.541459 master-0 kubenswrapper[7518]: I0313 12:53:13.541391 7518 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-wtf6j container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 13 12:53:13.541459 master-0 kubenswrapper[7518]: [-]has-synced failed: reason withheld
Mar 13 12:53:13.541459 master-0 kubenswrapper[7518]: [+]process-running ok
Mar 13 12:53:13.541459 master-0 kubenswrapper[7518]: healthz check failed
Mar 13 12:53:13.541748 master-0 kubenswrapper[7518]: I0313 12:53:13.541464 7518 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-wtf6j" podUID="45925a5e-41ae-4c19-b586-3151c7677612" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 13 12:53:13.611027 master-0 kubenswrapper[7518]: I0313 12:53:13.610973 7518 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/1453f6461bf5d599ad65a4656343ee91-cert-dir\") pod \"openshift-kube-scheduler-master-0\" (UID: \"1453f6461bf5d599ad65a4656343ee91\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0"
Mar 13 12:53:13.611383 master-0 kubenswrapper[7518]: I0313 12:53:13.611345 7518 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/1453f6461bf5d599ad65a4656343ee91-resource-dir\") pod \"openshift-kube-scheduler-master-0\" (UID: \"1453f6461bf5d599ad65a4656343ee91\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0"
Mar 13 12:53:13.629399 master-0 kubenswrapper[7518]: I0313 12:53:13.629347 7518 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler_openshift-kube-scheduler-master-0_1d3d45b6ce1b3764f9927e623a71adf8/kube-scheduler-cert-syncer/0.log"
Mar 13 12:53:13.630375 master-0 kubenswrapper[7518]: I0313 12:53:13.630337 7518 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0"
Mar 13 12:53:13.637884 master-0 kubenswrapper[7518]: I0313 12:53:13.637740 7518 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" oldPodUID="1d3d45b6ce1b3764f9927e623a71adf8" podUID="1453f6461bf5d599ad65a4656343ee91"
Mar 13 12:53:13.713105 master-0 kubenswrapper[7518]: I0313 12:53:13.712920 7518 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/1453f6461bf5d599ad65a4656343ee91-cert-dir\") pod \"openshift-kube-scheduler-master-0\" (UID: \"1453f6461bf5d599ad65a4656343ee91\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0"
Mar 13 12:53:13.713341 master-0 kubenswrapper[7518]: I0313 12:53:13.713223 7518 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/1453f6461bf5d599ad65a4656343ee91-cert-dir\") pod \"openshift-kube-scheduler-master-0\" (UID: \"1453f6461bf5d599ad65a4656343ee91\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0"
Mar 13 12:53:13.713341 master-0 kubenswrapper[7518]: I0313 12:53:13.713293 7518 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/1453f6461bf5d599ad65a4656343ee91-resource-dir\") pod \"openshift-kube-scheduler-master-0\" (UID:
\"1453f6461bf5d599ad65a4656343ee91\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" Mar 13 12:53:13.713424 master-0 kubenswrapper[7518]: I0313 12:53:13.713362 7518 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/1453f6461bf5d599ad65a4656343ee91-resource-dir\") pod \"openshift-kube-scheduler-master-0\" (UID: \"1453f6461bf5d599ad65a4656343ee91\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" Mar 13 12:53:13.814733 master-0 kubenswrapper[7518]: I0313 12:53:13.814435 7518 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/1d3d45b6ce1b3764f9927e623a71adf8-cert-dir\") pod \"1d3d45b6ce1b3764f9927e623a71adf8\" (UID: \"1d3d45b6ce1b3764f9927e623a71adf8\") " Mar 13 12:53:13.814733 master-0 kubenswrapper[7518]: I0313 12:53:13.814562 7518 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/1d3d45b6ce1b3764f9927e623a71adf8-resource-dir\") pod \"1d3d45b6ce1b3764f9927e623a71adf8\" (UID: \"1d3d45b6ce1b3764f9927e623a71adf8\") " Mar 13 12:53:13.814733 master-0 kubenswrapper[7518]: I0313 12:53:13.814582 7518 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1d3d45b6ce1b3764f9927e623a71adf8-cert-dir" (OuterVolumeSpecName: "cert-dir") pod "1d3d45b6ce1b3764f9927e623a71adf8" (UID: "1d3d45b6ce1b3764f9927e623a71adf8"). InnerVolumeSpecName "cert-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 13 12:53:13.814733 master-0 kubenswrapper[7518]: I0313 12:53:13.814692 7518 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1d3d45b6ce1b3764f9927e623a71adf8-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "1d3d45b6ce1b3764f9927e623a71adf8" (UID: "1d3d45b6ce1b3764f9927e623a71adf8"). 
InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 13 12:53:13.815040 master-0 kubenswrapper[7518]: I0313 12:53:13.814841 7518 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/1d3d45b6ce1b3764f9927e623a71adf8-resource-dir\") on node \"master-0\" DevicePath \"\"" Mar 13 12:53:13.815040 master-0 kubenswrapper[7518]: I0313 12:53:13.814861 7518 reconciler_common.go:293] "Volume detached for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/1d3d45b6ce1b3764f9927e623a71adf8-cert-dir\") on node \"master-0\" DevicePath \"\"" Mar 13 12:53:14.322206 master-0 kubenswrapper[7518]: I0313 12:53:14.320324 7518 generic.go:334] "Generic (PLEG): container finished" podID="1670a1d9-46a3-4d25-9dd1-43a08e2759c7" containerID="a286539a5f3b6d8dbf769c0d114494a7685625beed846cc4c4f2272b91586aab" exitCode=0 Mar 13 12:53:14.322206 master-0 kubenswrapper[7518]: I0313 12:53:14.320464 7518 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-5-master-0" event={"ID":"1670a1d9-46a3-4d25-9dd1-43a08e2759c7","Type":"ContainerDied","Data":"a286539a5f3b6d8dbf769c0d114494a7685625beed846cc4c4f2272b91586aab"} Mar 13 12:53:14.323865 master-0 kubenswrapper[7518]: I0313 12:53:14.323829 7518 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler_openshift-kube-scheduler-master-0_1d3d45b6ce1b3764f9927e623a71adf8/kube-scheduler-cert-syncer/0.log" Mar 13 12:53:14.325182 master-0 kubenswrapper[7518]: I0313 12:53:14.325122 7518 generic.go:334] "Generic (PLEG): container finished" podID="1d3d45b6ce1b3764f9927e623a71adf8" containerID="749f270368b7664256f6cdcccc583a1b9f48d3556ae13c311239ddf410797b9f" exitCode=0 Mar 13 12:53:14.325182 master-0 kubenswrapper[7518]: I0313 12:53:14.325179 7518 generic.go:334] "Generic (PLEG): container finished" podID="1d3d45b6ce1b3764f9927e623a71adf8" 
containerID="4ed0d6bdd25c3a628bbe00eefea5be982aa9c7f4f1c14c257f7f949dd48fd173" exitCode=2 Mar 13 12:53:14.325301 master-0 kubenswrapper[7518]: I0313 12:53:14.325187 7518 generic.go:334] "Generic (PLEG): container finished" podID="1d3d45b6ce1b3764f9927e623a71adf8" containerID="161f4ac8ebbd0b62feb8e2f6594eea574edaf4e2547fb7b8335c22584c734092" exitCode=0 Mar 13 12:53:14.325301 master-0 kubenswrapper[7518]: I0313 12:53:14.325251 7518 scope.go:117] "RemoveContainer" containerID="749f270368b7664256f6cdcccc583a1b9f48d3556ae13c311239ddf410797b9f" Mar 13 12:53:14.325441 master-0 kubenswrapper[7518]: I0313 12:53:14.325412 7518 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" Mar 13 12:53:14.353259 master-0 kubenswrapper[7518]: I0313 12:53:14.353215 7518 scope.go:117] "RemoveContainer" containerID="4ed0d6bdd25c3a628bbe00eefea5be982aa9c7f4f1c14c257f7f949dd48fd173" Mar 13 12:53:14.354977 master-0 kubenswrapper[7518]: I0313 12:53:14.354930 7518 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" oldPodUID="1d3d45b6ce1b3764f9927e623a71adf8" podUID="1453f6461bf5d599ad65a4656343ee91" Mar 13 12:53:14.364615 master-0 kubenswrapper[7518]: I0313 12:53:14.364566 7518 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" oldPodUID="1d3d45b6ce1b3764f9927e623a71adf8" podUID="1453f6461bf5d599ad65a4656343ee91" Mar 13 12:53:14.373754 master-0 kubenswrapper[7518]: I0313 12:53:14.373724 7518 scope.go:117] "RemoveContainer" containerID="161f4ac8ebbd0b62feb8e2f6594eea574edaf4e2547fb7b8335c22584c734092" Mar 13 12:53:14.393209 master-0 kubenswrapper[7518]: I0313 12:53:14.393168 7518 scope.go:117] "RemoveContainer" containerID="99f60a9917159f67e21cff877f3b3713d431c9f87ce5701cead4b373893dbcfe" Mar 13 12:53:14.408074 
master-0 kubenswrapper[7518]: I0313 12:53:14.408035 7518 scope.go:117] "RemoveContainer" containerID="749f270368b7664256f6cdcccc583a1b9f48d3556ae13c311239ddf410797b9f" Mar 13 12:53:14.408700 master-0 kubenswrapper[7518]: E0313 12:53:14.408529 7518 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"749f270368b7664256f6cdcccc583a1b9f48d3556ae13c311239ddf410797b9f\": container with ID starting with 749f270368b7664256f6cdcccc583a1b9f48d3556ae13c311239ddf410797b9f not found: ID does not exist" containerID="749f270368b7664256f6cdcccc583a1b9f48d3556ae13c311239ddf410797b9f" Mar 13 12:53:14.408700 master-0 kubenswrapper[7518]: I0313 12:53:14.408564 7518 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"749f270368b7664256f6cdcccc583a1b9f48d3556ae13c311239ddf410797b9f"} err="failed to get container status \"749f270368b7664256f6cdcccc583a1b9f48d3556ae13c311239ddf410797b9f\": rpc error: code = NotFound desc = could not find container \"749f270368b7664256f6cdcccc583a1b9f48d3556ae13c311239ddf410797b9f\": container with ID starting with 749f270368b7664256f6cdcccc583a1b9f48d3556ae13c311239ddf410797b9f not found: ID does not exist" Mar 13 12:53:14.408700 master-0 kubenswrapper[7518]: I0313 12:53:14.408590 7518 scope.go:117] "RemoveContainer" containerID="4ed0d6bdd25c3a628bbe00eefea5be982aa9c7f4f1c14c257f7f949dd48fd173" Mar 13 12:53:14.409110 master-0 kubenswrapper[7518]: E0313 12:53:14.409070 7518 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4ed0d6bdd25c3a628bbe00eefea5be982aa9c7f4f1c14c257f7f949dd48fd173\": container with ID starting with 4ed0d6bdd25c3a628bbe00eefea5be982aa9c7f4f1c14c257f7f949dd48fd173 not found: ID does not exist" containerID="4ed0d6bdd25c3a628bbe00eefea5be982aa9c7f4f1c14c257f7f949dd48fd173" Mar 13 12:53:14.409234 master-0 kubenswrapper[7518]: I0313 12:53:14.409105 7518 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4ed0d6bdd25c3a628bbe00eefea5be982aa9c7f4f1c14c257f7f949dd48fd173"} err="failed to get container status \"4ed0d6bdd25c3a628bbe00eefea5be982aa9c7f4f1c14c257f7f949dd48fd173\": rpc error: code = NotFound desc = could not find container \"4ed0d6bdd25c3a628bbe00eefea5be982aa9c7f4f1c14c257f7f949dd48fd173\": container with ID starting with 4ed0d6bdd25c3a628bbe00eefea5be982aa9c7f4f1c14c257f7f949dd48fd173 not found: ID does not exist" Mar 13 12:53:14.409234 master-0 kubenswrapper[7518]: I0313 12:53:14.409128 7518 scope.go:117] "RemoveContainer" containerID="161f4ac8ebbd0b62feb8e2f6594eea574edaf4e2547fb7b8335c22584c734092" Mar 13 12:53:14.409414 master-0 kubenswrapper[7518]: E0313 12:53:14.409386 7518 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"161f4ac8ebbd0b62feb8e2f6594eea574edaf4e2547fb7b8335c22584c734092\": container with ID starting with 161f4ac8ebbd0b62feb8e2f6594eea574edaf4e2547fb7b8335c22584c734092 not found: ID does not exist" containerID="161f4ac8ebbd0b62feb8e2f6594eea574edaf4e2547fb7b8335c22584c734092" Mar 13 12:53:14.409465 master-0 kubenswrapper[7518]: I0313 12:53:14.409419 7518 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"161f4ac8ebbd0b62feb8e2f6594eea574edaf4e2547fb7b8335c22584c734092"} err="failed to get container status \"161f4ac8ebbd0b62feb8e2f6594eea574edaf4e2547fb7b8335c22584c734092\": rpc error: code = NotFound desc = could not find container \"161f4ac8ebbd0b62feb8e2f6594eea574edaf4e2547fb7b8335c22584c734092\": container with ID starting with 161f4ac8ebbd0b62feb8e2f6594eea574edaf4e2547fb7b8335c22584c734092 not found: ID does not exist" Mar 13 12:53:14.409465 master-0 kubenswrapper[7518]: I0313 12:53:14.409448 7518 scope.go:117] "RemoveContainer" containerID="99f60a9917159f67e21cff877f3b3713d431c9f87ce5701cead4b373893dbcfe" Mar 13 
12:53:14.409770 master-0 kubenswrapper[7518]: E0313 12:53:14.409741 7518 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"99f60a9917159f67e21cff877f3b3713d431c9f87ce5701cead4b373893dbcfe\": container with ID starting with 99f60a9917159f67e21cff877f3b3713d431c9f87ce5701cead4b373893dbcfe not found: ID does not exist" containerID="99f60a9917159f67e21cff877f3b3713d431c9f87ce5701cead4b373893dbcfe" Mar 13 12:53:14.409826 master-0 kubenswrapper[7518]: I0313 12:53:14.409768 7518 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"99f60a9917159f67e21cff877f3b3713d431c9f87ce5701cead4b373893dbcfe"} err="failed to get container status \"99f60a9917159f67e21cff877f3b3713d431c9f87ce5701cead4b373893dbcfe\": rpc error: code = NotFound desc = could not find container \"99f60a9917159f67e21cff877f3b3713d431c9f87ce5701cead4b373893dbcfe\": container with ID starting with 99f60a9917159f67e21cff877f3b3713d431c9f87ce5701cead4b373893dbcfe not found: ID does not exist" Mar 13 12:53:14.409826 master-0 kubenswrapper[7518]: I0313 12:53:14.409785 7518 scope.go:117] "RemoveContainer" containerID="749f270368b7664256f6cdcccc583a1b9f48d3556ae13c311239ddf410797b9f" Mar 13 12:53:14.410040 master-0 kubenswrapper[7518]: I0313 12:53:14.410003 7518 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"749f270368b7664256f6cdcccc583a1b9f48d3556ae13c311239ddf410797b9f"} err="failed to get container status \"749f270368b7664256f6cdcccc583a1b9f48d3556ae13c311239ddf410797b9f\": rpc error: code = NotFound desc = could not find container \"749f270368b7664256f6cdcccc583a1b9f48d3556ae13c311239ddf410797b9f\": container with ID starting with 749f270368b7664256f6cdcccc583a1b9f48d3556ae13c311239ddf410797b9f not found: ID does not exist" Mar 13 12:53:14.410040 master-0 kubenswrapper[7518]: I0313 12:53:14.410032 7518 scope.go:117] "RemoveContainer" 
containerID="4ed0d6bdd25c3a628bbe00eefea5be982aa9c7f4f1c14c257f7f949dd48fd173" Mar 13 12:53:14.410291 master-0 kubenswrapper[7518]: I0313 12:53:14.410255 7518 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4ed0d6bdd25c3a628bbe00eefea5be982aa9c7f4f1c14c257f7f949dd48fd173"} err="failed to get container status \"4ed0d6bdd25c3a628bbe00eefea5be982aa9c7f4f1c14c257f7f949dd48fd173\": rpc error: code = NotFound desc = could not find container \"4ed0d6bdd25c3a628bbe00eefea5be982aa9c7f4f1c14c257f7f949dd48fd173\": container with ID starting with 4ed0d6bdd25c3a628bbe00eefea5be982aa9c7f4f1c14c257f7f949dd48fd173 not found: ID does not exist" Mar 13 12:53:14.410291 master-0 kubenswrapper[7518]: I0313 12:53:14.410283 7518 scope.go:117] "RemoveContainer" containerID="161f4ac8ebbd0b62feb8e2f6594eea574edaf4e2547fb7b8335c22584c734092" Mar 13 12:53:14.410726 master-0 kubenswrapper[7518]: I0313 12:53:14.410690 7518 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"161f4ac8ebbd0b62feb8e2f6594eea574edaf4e2547fb7b8335c22584c734092"} err="failed to get container status \"161f4ac8ebbd0b62feb8e2f6594eea574edaf4e2547fb7b8335c22584c734092\": rpc error: code = NotFound desc = could not find container \"161f4ac8ebbd0b62feb8e2f6594eea574edaf4e2547fb7b8335c22584c734092\": container with ID starting with 161f4ac8ebbd0b62feb8e2f6594eea574edaf4e2547fb7b8335c22584c734092 not found: ID does not exist" Mar 13 12:53:14.410726 master-0 kubenswrapper[7518]: I0313 12:53:14.410716 7518 scope.go:117] "RemoveContainer" containerID="99f60a9917159f67e21cff877f3b3713d431c9f87ce5701cead4b373893dbcfe" Mar 13 12:53:14.410964 master-0 kubenswrapper[7518]: I0313 12:53:14.410929 7518 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"99f60a9917159f67e21cff877f3b3713d431c9f87ce5701cead4b373893dbcfe"} err="failed to get container status 
\"99f60a9917159f67e21cff877f3b3713d431c9f87ce5701cead4b373893dbcfe\": rpc error: code = NotFound desc = could not find container \"99f60a9917159f67e21cff877f3b3713d431c9f87ce5701cead4b373893dbcfe\": container with ID starting with 99f60a9917159f67e21cff877f3b3713d431c9f87ce5701cead4b373893dbcfe not found: ID does not exist" Mar 13 12:53:14.410964 master-0 kubenswrapper[7518]: I0313 12:53:14.410956 7518 scope.go:117] "RemoveContainer" containerID="749f270368b7664256f6cdcccc583a1b9f48d3556ae13c311239ddf410797b9f" Mar 13 12:53:14.411286 master-0 kubenswrapper[7518]: I0313 12:53:14.411257 7518 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"749f270368b7664256f6cdcccc583a1b9f48d3556ae13c311239ddf410797b9f"} err="failed to get container status \"749f270368b7664256f6cdcccc583a1b9f48d3556ae13c311239ddf410797b9f\": rpc error: code = NotFound desc = could not find container \"749f270368b7664256f6cdcccc583a1b9f48d3556ae13c311239ddf410797b9f\": container with ID starting with 749f270368b7664256f6cdcccc583a1b9f48d3556ae13c311239ddf410797b9f not found: ID does not exist" Mar 13 12:53:14.411286 master-0 kubenswrapper[7518]: I0313 12:53:14.411281 7518 scope.go:117] "RemoveContainer" containerID="4ed0d6bdd25c3a628bbe00eefea5be982aa9c7f4f1c14c257f7f949dd48fd173" Mar 13 12:53:14.411592 master-0 kubenswrapper[7518]: I0313 12:53:14.411561 7518 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4ed0d6bdd25c3a628bbe00eefea5be982aa9c7f4f1c14c257f7f949dd48fd173"} err="failed to get container status \"4ed0d6bdd25c3a628bbe00eefea5be982aa9c7f4f1c14c257f7f949dd48fd173\": rpc error: code = NotFound desc = could not find container \"4ed0d6bdd25c3a628bbe00eefea5be982aa9c7f4f1c14c257f7f949dd48fd173\": container with ID starting with 4ed0d6bdd25c3a628bbe00eefea5be982aa9c7f4f1c14c257f7f949dd48fd173 not found: ID does not exist" Mar 13 12:53:14.411592 master-0 kubenswrapper[7518]: I0313 12:53:14.411586 7518 
scope.go:117] "RemoveContainer" containerID="161f4ac8ebbd0b62feb8e2f6594eea574edaf4e2547fb7b8335c22584c734092" Mar 13 12:53:14.411823 master-0 kubenswrapper[7518]: I0313 12:53:14.411785 7518 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"161f4ac8ebbd0b62feb8e2f6594eea574edaf4e2547fb7b8335c22584c734092"} err="failed to get container status \"161f4ac8ebbd0b62feb8e2f6594eea574edaf4e2547fb7b8335c22584c734092\": rpc error: code = NotFound desc = could not find container \"161f4ac8ebbd0b62feb8e2f6594eea574edaf4e2547fb7b8335c22584c734092\": container with ID starting with 161f4ac8ebbd0b62feb8e2f6594eea574edaf4e2547fb7b8335c22584c734092 not found: ID does not exist" Mar 13 12:53:14.411823 master-0 kubenswrapper[7518]: I0313 12:53:14.411817 7518 scope.go:117] "RemoveContainer" containerID="99f60a9917159f67e21cff877f3b3713d431c9f87ce5701cead4b373893dbcfe" Mar 13 12:53:14.412117 master-0 kubenswrapper[7518]: I0313 12:53:14.412088 7518 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"99f60a9917159f67e21cff877f3b3713d431c9f87ce5701cead4b373893dbcfe"} err="failed to get container status \"99f60a9917159f67e21cff877f3b3713d431c9f87ce5701cead4b373893dbcfe\": rpc error: code = NotFound desc = could not find container \"99f60a9917159f67e21cff877f3b3713d431c9f87ce5701cead4b373893dbcfe\": container with ID starting with 99f60a9917159f67e21cff877f3b3713d431c9f87ce5701cead4b373893dbcfe not found: ID does not exist" Mar 13 12:53:14.541167 master-0 kubenswrapper[7518]: I0313 12:53:14.541096 7518 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-wtf6j container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 12:53:14.541167 master-0 kubenswrapper[7518]: [-]has-synced failed: reason withheld Mar 13 12:53:14.541167 master-0 kubenswrapper[7518]: [+]process-running 
ok Mar 13 12:53:14.541167 master-0 kubenswrapper[7518]: healthz check failed Mar 13 12:53:14.541730 master-0 kubenswrapper[7518]: I0313 12:53:14.541185 7518 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-wtf6j" podUID="45925a5e-41ae-4c19-b586-3151c7677612" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 12:53:15.543419 master-0 kubenswrapper[7518]: I0313 12:53:15.543351 7518 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-wtf6j container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 12:53:15.543419 master-0 kubenswrapper[7518]: [-]has-synced failed: reason withheld Mar 13 12:53:15.543419 master-0 kubenswrapper[7518]: [+]process-running ok Mar 13 12:53:15.543419 master-0 kubenswrapper[7518]: healthz check failed Mar 13 12:53:15.544617 master-0 kubenswrapper[7518]: I0313 12:53:15.543424 7518 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-wtf6j" podUID="45925a5e-41ae-4c19-b586-3151c7677612" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 12:53:15.608036 master-0 kubenswrapper[7518]: I0313 12:53:15.607855 7518 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1d3d45b6ce1b3764f9927e623a71adf8" path="/var/lib/kubelet/pods/1d3d45b6ce1b3764f9927e623a71adf8/volumes" Mar 13 12:53:15.669412 master-0 kubenswrapper[7518]: I0313 12:53:15.669367 7518 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/installer-5-master-0" Mar 13 12:53:15.741303 master-0 kubenswrapper[7518]: I0313 12:53:15.741227 7518 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/1670a1d9-46a3-4d25-9dd1-43a08e2759c7-var-lock\") pod \"1670a1d9-46a3-4d25-9dd1-43a08e2759c7\" (UID: \"1670a1d9-46a3-4d25-9dd1-43a08e2759c7\") " Mar 13 12:53:15.741841 master-0 kubenswrapper[7518]: I0313 12:53:15.741336 7518 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1670a1d9-46a3-4d25-9dd1-43a08e2759c7-kube-api-access\") pod \"1670a1d9-46a3-4d25-9dd1-43a08e2759c7\" (UID: \"1670a1d9-46a3-4d25-9dd1-43a08e2759c7\") " Mar 13 12:53:15.741841 master-0 kubenswrapper[7518]: I0313 12:53:15.741368 7518 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/1670a1d9-46a3-4d25-9dd1-43a08e2759c7-kubelet-dir\") pod \"1670a1d9-46a3-4d25-9dd1-43a08e2759c7\" (UID: \"1670a1d9-46a3-4d25-9dd1-43a08e2759c7\") " Mar 13 12:53:15.742094 master-0 kubenswrapper[7518]: I0313 12:53:15.742045 7518 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1670a1d9-46a3-4d25-9dd1-43a08e2759c7-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "1670a1d9-46a3-4d25-9dd1-43a08e2759c7" (UID: "1670a1d9-46a3-4d25-9dd1-43a08e2759c7"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 13 12:53:15.742211 master-0 kubenswrapper[7518]: I0313 12:53:15.742103 7518 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1670a1d9-46a3-4d25-9dd1-43a08e2759c7-var-lock" (OuterVolumeSpecName: "var-lock") pod "1670a1d9-46a3-4d25-9dd1-43a08e2759c7" (UID: "1670a1d9-46a3-4d25-9dd1-43a08e2759c7"). InnerVolumeSpecName "var-lock". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 13 12:53:15.744760 master-0 kubenswrapper[7518]: I0313 12:53:15.744716 7518 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1670a1d9-46a3-4d25-9dd1-43a08e2759c7-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "1670a1d9-46a3-4d25-9dd1-43a08e2759c7" (UID: "1670a1d9-46a3-4d25-9dd1-43a08e2759c7"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 12:53:15.842712 master-0 kubenswrapper[7518]: I0313 12:53:15.842661 7518 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/1670a1d9-46a3-4d25-9dd1-43a08e2759c7-var-lock\") on node \"master-0\" DevicePath \"\"" Mar 13 12:53:15.842712 master-0 kubenswrapper[7518]: I0313 12:53:15.842689 7518 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1670a1d9-46a3-4d25-9dd1-43a08e2759c7-kube-api-access\") on node \"master-0\" DevicePath \"\"" Mar 13 12:53:15.842712 master-0 kubenswrapper[7518]: I0313 12:53:15.842699 7518 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/1670a1d9-46a3-4d25-9dd1-43a08e2759c7-kubelet-dir\") on node \"master-0\" DevicePath \"\"" Mar 13 12:53:16.341033 master-0 kubenswrapper[7518]: I0313 12:53:16.340845 7518 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-5-master-0" event={"ID":"1670a1d9-46a3-4d25-9dd1-43a08e2759c7","Type":"ContainerDied","Data":"432ae93d7929ff1377b0a32b34b9fd0f282a4c2a377afc25391c4f66c1a92ec6"} Mar 13 12:53:16.341033 master-0 kubenswrapper[7518]: I0313 12:53:16.340888 7518 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/installer-5-master-0" Mar 13 12:53:16.341033 master-0 kubenswrapper[7518]: I0313 12:53:16.340925 7518 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="432ae93d7929ff1377b0a32b34b9fd0f282a4c2a377afc25391c4f66c1a92ec6" Mar 13 12:53:16.543524 master-0 kubenswrapper[7518]: I0313 12:53:16.543453 7518 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-wtf6j container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 12:53:16.543524 master-0 kubenswrapper[7518]: [-]has-synced failed: reason withheld Mar 13 12:53:16.543524 master-0 kubenswrapper[7518]: [+]process-running ok Mar 13 12:53:16.543524 master-0 kubenswrapper[7518]: healthz check failed Mar 13 12:53:16.544062 master-0 kubenswrapper[7518]: I0313 12:53:16.543526 7518 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-wtf6j" podUID="45925a5e-41ae-4c19-b586-3151c7677612" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 12:53:17.543266 master-0 kubenswrapper[7518]: I0313 12:53:17.543162 7518 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-wtf6j container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 12:53:17.543266 master-0 kubenswrapper[7518]: [-]has-synced failed: reason withheld Mar 13 12:53:17.543266 master-0 kubenswrapper[7518]: [+]process-running ok Mar 13 12:53:17.543266 master-0 kubenswrapper[7518]: healthz check failed Mar 13 12:53:17.543266 master-0 kubenswrapper[7518]: I0313 12:53:17.543242 7518 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-wtf6j" podUID="45925a5e-41ae-4c19-b586-3151c7677612" 
containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 12:53:18.542026 master-0 kubenswrapper[7518]: I0313 12:53:18.541957 7518 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-wtf6j container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 12:53:18.542026 master-0 kubenswrapper[7518]: [-]has-synced failed: reason withheld Mar 13 12:53:18.542026 master-0 kubenswrapper[7518]: [+]process-running ok Mar 13 12:53:18.542026 master-0 kubenswrapper[7518]: healthz check failed Mar 13 12:53:18.542375 master-0 kubenswrapper[7518]: I0313 12:53:18.542046 7518 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-wtf6j" podUID="45925a5e-41ae-4c19-b586-3151c7677612" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 12:53:19.542854 master-0 kubenswrapper[7518]: I0313 12:53:19.542795 7518 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-wtf6j container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 12:53:19.542854 master-0 kubenswrapper[7518]: [-]has-synced failed: reason withheld Mar 13 12:53:19.542854 master-0 kubenswrapper[7518]: [+]process-running ok Mar 13 12:53:19.542854 master-0 kubenswrapper[7518]: healthz check failed Mar 13 12:53:19.543631 master-0 kubenswrapper[7518]: I0313 12:53:19.542893 7518 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-wtf6j" podUID="45925a5e-41ae-4c19-b586-3151c7677612" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 12:53:20.541769 master-0 kubenswrapper[7518]: I0313 12:53:20.541704 7518 patch_prober.go:28] interesting 
pod/router-default-79f8cd6fdd-wtf6j container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 12:53:20.541769 master-0 kubenswrapper[7518]: [-]has-synced failed: reason withheld Mar 13 12:53:20.541769 master-0 kubenswrapper[7518]: [+]process-running ok Mar 13 12:53:20.541769 master-0 kubenswrapper[7518]: healthz check failed Mar 13 12:53:20.542073 master-0 kubenswrapper[7518]: I0313 12:53:20.541784 7518 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-wtf6j" podUID="45925a5e-41ae-4c19-b586-3151c7677612" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 12:53:21.541541 master-0 kubenswrapper[7518]: I0313 12:53:21.541486 7518 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-wtf6j container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 12:53:21.541541 master-0 kubenswrapper[7518]: [-]has-synced failed: reason withheld Mar 13 12:53:21.541541 master-0 kubenswrapper[7518]: [+]process-running ok Mar 13 12:53:21.541541 master-0 kubenswrapper[7518]: healthz check failed Mar 13 12:53:21.542328 master-0 kubenswrapper[7518]: I0313 12:53:21.541585 7518 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-wtf6j" podUID="45925a5e-41ae-4c19-b586-3151c7677612" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 12:53:22.541805 master-0 kubenswrapper[7518]: I0313 12:53:22.541753 7518 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-wtf6j container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 
12:53:22.541805 master-0 kubenswrapper[7518]: [-]has-synced failed: reason withheld Mar 13 12:53:22.541805 master-0 kubenswrapper[7518]: [+]process-running ok Mar 13 12:53:22.541805 master-0 kubenswrapper[7518]: healthz check failed Mar 13 12:53:22.542519 master-0 kubenswrapper[7518]: I0313 12:53:22.542490 7518 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-wtf6j" podUID="45925a5e-41ae-4c19-b586-3151c7677612" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 12:53:23.542494 master-0 kubenswrapper[7518]: I0313 12:53:23.542426 7518 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-wtf6j container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 12:53:23.542494 master-0 kubenswrapper[7518]: [-]has-synced failed: reason withheld Mar 13 12:53:23.542494 master-0 kubenswrapper[7518]: [+]process-running ok Mar 13 12:53:23.542494 master-0 kubenswrapper[7518]: healthz check failed Mar 13 12:53:23.542494 master-0 kubenswrapper[7518]: I0313 12:53:23.542493 7518 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-wtf6j" podUID="45925a5e-41ae-4c19-b586-3151c7677612" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 12:53:23.659416 master-0 kubenswrapper[7518]: I0313 12:53:23.659366 7518 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-controller-manager/kube-controller-manager-master-0"] Mar 13 12:53:23.659703 master-0 kubenswrapper[7518]: I0313 12:53:23.659651 7518 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="741a6830aaef63e92194dd05d0b4da3d" containerName="kube-controller-manager" 
containerID="cri-o://45b191ee613240af89dae5f40970afaf7896448c3e2a3a3165bd85645b5d7288" gracePeriod=30 Mar 13 12:53:23.660223 master-0 kubenswrapper[7518]: I0313 12:53:23.659728 7518 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="741a6830aaef63e92194dd05d0b4da3d" containerName="kube-controller-manager-cert-syncer" containerID="cri-o://ad6b6be249a4b35bc319cc0c698c9b937c8df08adaedc5da969d7d3c63154f97" gracePeriod=30 Mar 13 12:53:23.660223 master-0 kubenswrapper[7518]: I0313 12:53:23.659749 7518 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="741a6830aaef63e92194dd05d0b4da3d" containerName="kube-controller-manager-recovery-controller" containerID="cri-o://1b406ee46971e490792a19b63a98c585c578548f473b720d5b7cd5c729eda7ae" gracePeriod=30 Mar 13 12:53:23.660223 master-0 kubenswrapper[7518]: I0313 12:53:23.659743 7518 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="741a6830aaef63e92194dd05d0b4da3d" containerName="cluster-policy-controller" containerID="cri-o://52372f90f3e518110cf1e64b9ff43ecce31d8c11b62d3766c284ad38e957707b" gracePeriod=30 Mar 13 12:53:23.663572 master-0 kubenswrapper[7518]: I0313 12:53:23.663529 7518 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-controller-manager/kube-controller-manager-master-0"] Mar 13 12:53:23.663855 master-0 kubenswrapper[7518]: E0313 12:53:23.663829 7518 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1670a1d9-46a3-4d25-9dd1-43a08e2759c7" containerName="installer" Mar 13 12:53:23.663855 master-0 kubenswrapper[7518]: I0313 12:53:23.663851 7518 state_mem.go:107] "Deleted CPUSet assignment" podUID="1670a1d9-46a3-4d25-9dd1-43a08e2759c7" containerName="installer" Mar 13 12:53:23.671704 master-0 
kubenswrapper[7518]: E0313 12:53:23.663879 7518 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="741a6830aaef63e92194dd05d0b4da3d" containerName="kube-controller-manager-recovery-controller" Mar 13 12:53:23.671704 master-0 kubenswrapper[7518]: I0313 12:53:23.663910 7518 state_mem.go:107] "Deleted CPUSet assignment" podUID="741a6830aaef63e92194dd05d0b4da3d" containerName="kube-controller-manager-recovery-controller" Mar 13 12:53:23.671704 master-0 kubenswrapper[7518]: E0313 12:53:23.663928 7518 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="741a6830aaef63e92194dd05d0b4da3d" containerName="kube-controller-manager" Mar 13 12:53:23.671704 master-0 kubenswrapper[7518]: I0313 12:53:23.663935 7518 state_mem.go:107] "Deleted CPUSet assignment" podUID="741a6830aaef63e92194dd05d0b4da3d" containerName="kube-controller-manager" Mar 13 12:53:23.671704 master-0 kubenswrapper[7518]: E0313 12:53:23.663948 7518 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="741a6830aaef63e92194dd05d0b4da3d" containerName="cluster-policy-controller" Mar 13 12:53:23.671704 master-0 kubenswrapper[7518]: I0313 12:53:23.663962 7518 state_mem.go:107] "Deleted CPUSet assignment" podUID="741a6830aaef63e92194dd05d0b4da3d" containerName="cluster-policy-controller" Mar 13 12:53:23.671704 master-0 kubenswrapper[7518]: E0313 12:53:23.663988 7518 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="741a6830aaef63e92194dd05d0b4da3d" containerName="kube-controller-manager-cert-syncer" Mar 13 12:53:23.671704 master-0 kubenswrapper[7518]: I0313 12:53:23.663997 7518 state_mem.go:107] "Deleted CPUSet assignment" podUID="741a6830aaef63e92194dd05d0b4da3d" containerName="kube-controller-manager-cert-syncer" Mar 13 12:53:23.671704 master-0 kubenswrapper[7518]: I0313 12:53:23.664188 7518 memory_manager.go:354] "RemoveStaleState removing state" podUID="741a6830aaef63e92194dd05d0b4da3d" containerName="cluster-policy-controller" Mar 13 12:53:23.671704 
master-0 kubenswrapper[7518]: I0313 12:53:23.664213 7518 memory_manager.go:354] "RemoveStaleState removing state" podUID="741a6830aaef63e92194dd05d0b4da3d" containerName="kube-controller-manager-cert-syncer" Mar 13 12:53:23.671704 master-0 kubenswrapper[7518]: I0313 12:53:23.664233 7518 memory_manager.go:354] "RemoveStaleState removing state" podUID="741a6830aaef63e92194dd05d0b4da3d" containerName="kube-controller-manager" Mar 13 12:53:23.671704 master-0 kubenswrapper[7518]: I0313 12:53:23.664247 7518 memory_manager.go:354] "RemoveStaleState removing state" podUID="741a6830aaef63e92194dd05d0b4da3d" containerName="kube-controller-manager-recovery-controller" Mar 13 12:53:23.671704 master-0 kubenswrapper[7518]: I0313 12:53:23.664256 7518 memory_manager.go:354] "RemoveStaleState removing state" podUID="1670a1d9-46a3-4d25-9dd1-43a08e2759c7" containerName="installer" Mar 13 12:53:23.752787 master-0 kubenswrapper[7518]: I0313 12:53:23.752648 7518 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/9b24fda1c2e55a08607764d7b9b24355-cert-dir\") pod \"kube-controller-manager-master-0\" (UID: \"9b24fda1c2e55a08607764d7b9b24355\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 13 12:53:23.752931 master-0 kubenswrapper[7518]: I0313 12:53:23.752819 7518 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/9b24fda1c2e55a08607764d7b9b24355-resource-dir\") pod \"kube-controller-manager-master-0\" (UID: \"9b24fda1c2e55a08607764d7b9b24355\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 13 12:53:23.853415 master-0 kubenswrapper[7518]: I0313 12:53:23.853356 7518 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: 
\"kubernetes.io/host-path/9b24fda1c2e55a08607764d7b9b24355-resource-dir\") pod \"kube-controller-manager-master-0\" (UID: \"9b24fda1c2e55a08607764d7b9b24355\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 13 12:53:23.853622 master-0 kubenswrapper[7518]: I0313 12:53:23.853446 7518 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/9b24fda1c2e55a08607764d7b9b24355-cert-dir\") pod \"kube-controller-manager-master-0\" (UID: \"9b24fda1c2e55a08607764d7b9b24355\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 13 12:53:23.853622 master-0 kubenswrapper[7518]: I0313 12:53:23.853533 7518 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/9b24fda1c2e55a08607764d7b9b24355-resource-dir\") pod \"kube-controller-manager-master-0\" (UID: \"9b24fda1c2e55a08607764d7b9b24355\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 13 12:53:23.853622 master-0 kubenswrapper[7518]: I0313 12:53:23.853581 7518 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/9b24fda1c2e55a08607764d7b9b24355-cert-dir\") pod \"kube-controller-manager-master-0\" (UID: \"9b24fda1c2e55a08607764d7b9b24355\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 13 12:53:23.862058 master-0 kubenswrapper[7518]: I0313 12:53:23.862007 7518 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_741a6830aaef63e92194dd05d0b4da3d/kube-controller-manager-cert-syncer/0.log" Mar 13 12:53:23.862925 master-0 kubenswrapper[7518]: I0313 12:53:23.862888 7518 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 13 12:53:23.866977 master-0 kubenswrapper[7518]: I0313 12:53:23.866899 7518 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" oldPodUID="741a6830aaef63e92194dd05d0b4da3d" podUID="9b24fda1c2e55a08607764d7b9b24355" Mar 13 12:53:23.954388 master-0 kubenswrapper[7518]: I0313 12:53:23.954287 7518 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/741a6830aaef63e92194dd05d0b4da3d-resource-dir\") pod \"741a6830aaef63e92194dd05d0b4da3d\" (UID: \"741a6830aaef63e92194dd05d0b4da3d\") " Mar 13 12:53:23.954640 master-0 kubenswrapper[7518]: I0313 12:53:23.954430 7518 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/741a6830aaef63e92194dd05d0b4da3d-cert-dir\") pod \"741a6830aaef63e92194dd05d0b4da3d\" (UID: \"741a6830aaef63e92194dd05d0b4da3d\") " Mar 13 12:53:23.954640 master-0 kubenswrapper[7518]: I0313 12:53:23.954468 7518 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/741a6830aaef63e92194dd05d0b4da3d-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "741a6830aaef63e92194dd05d0b4da3d" (UID: "741a6830aaef63e92194dd05d0b4da3d"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 13 12:53:23.954640 master-0 kubenswrapper[7518]: I0313 12:53:23.954503 7518 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/741a6830aaef63e92194dd05d0b4da3d-cert-dir" (OuterVolumeSpecName: "cert-dir") pod "741a6830aaef63e92194dd05d0b4da3d" (UID: "741a6830aaef63e92194dd05d0b4da3d"). InnerVolumeSpecName "cert-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 13 12:53:23.954738 master-0 kubenswrapper[7518]: I0313 12:53:23.954683 7518 reconciler_common.go:293] "Volume detached for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/741a6830aaef63e92194dd05d0b4da3d-cert-dir\") on node \"master-0\" DevicePath \"\"" Mar 13 12:53:23.954738 master-0 kubenswrapper[7518]: I0313 12:53:23.954700 7518 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/741a6830aaef63e92194dd05d0b4da3d-resource-dir\") on node \"master-0\" DevicePath \"\"" Mar 13 12:53:24.394476 master-0 kubenswrapper[7518]: I0313 12:53:24.394416 7518 generic.go:334] "Generic (PLEG): container finished" podID="76fe9cb6-ff3d-4bd9-a26d-dc8c9ce4a8aa" containerID="ac5bd7e9e9ade8981025308aaf718e0c330dc4308320062f39375e8cc91f1134" exitCode=0 Mar 13 12:53:24.394714 master-0 kubenswrapper[7518]: I0313 12:53:24.394515 7518 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-3-master-0" event={"ID":"76fe9cb6-ff3d-4bd9-a26d-dc8c9ce4a8aa","Type":"ContainerDied","Data":"ac5bd7e9e9ade8981025308aaf718e0c330dc4308320062f39375e8cc91f1134"} Mar 13 12:53:24.396708 master-0 kubenswrapper[7518]: I0313 12:53:24.396683 7518 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_741a6830aaef63e92194dd05d0b4da3d/kube-controller-manager-cert-syncer/0.log" Mar 13 12:53:24.397611 master-0 kubenswrapper[7518]: I0313 12:53:24.397583 7518 generic.go:334] "Generic (PLEG): container finished" podID="741a6830aaef63e92194dd05d0b4da3d" containerID="1b406ee46971e490792a19b63a98c585c578548f473b720d5b7cd5c729eda7ae" exitCode=0 Mar 13 12:53:24.397611 master-0 kubenswrapper[7518]: I0313 12:53:24.397608 7518 generic.go:334] "Generic (PLEG): container finished" podID="741a6830aaef63e92194dd05d0b4da3d" 
containerID="ad6b6be249a4b35bc319cc0c698c9b937c8df08adaedc5da969d7d3c63154f97" exitCode=2 Mar 13 12:53:24.397712 master-0 kubenswrapper[7518]: I0313 12:53:24.397616 7518 generic.go:334] "Generic (PLEG): container finished" podID="741a6830aaef63e92194dd05d0b4da3d" containerID="52372f90f3e518110cf1e64b9ff43ecce31d8c11b62d3766c284ad38e957707b" exitCode=0 Mar 13 12:53:24.397712 master-0 kubenswrapper[7518]: I0313 12:53:24.397635 7518 generic.go:334] "Generic (PLEG): container finished" podID="741a6830aaef63e92194dd05d0b4da3d" containerID="45b191ee613240af89dae5f40970afaf7896448c3e2a3a3165bd85645b5d7288" exitCode=0 Mar 13 12:53:24.397712 master-0 kubenswrapper[7518]: I0313 12:53:24.397661 7518 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 13 12:53:24.397712 master-0 kubenswrapper[7518]: I0313 12:53:24.397678 7518 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bdff0a0b2aea82bac9a3ab64499e43b6fe8e459f15bf1c50fed1c0bf1762fda9" Mar 13 12:53:24.416605 master-0 kubenswrapper[7518]: I0313 12:53:24.416555 7518 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" oldPodUID="741a6830aaef63e92194dd05d0b4da3d" podUID="9b24fda1c2e55a08607764d7b9b24355" Mar 13 12:53:24.426765 master-0 kubenswrapper[7518]: I0313 12:53:24.426712 7518 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" oldPodUID="741a6830aaef63e92194dd05d0b4da3d" podUID="9b24fda1c2e55a08607764d7b9b24355" Mar 13 12:53:24.542511 master-0 kubenswrapper[7518]: I0313 12:53:24.542437 7518 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-wtf6j container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" 
start-of-body=[-]backend-http failed: reason withheld Mar 13 12:53:24.542511 master-0 kubenswrapper[7518]: [-]has-synced failed: reason withheld Mar 13 12:53:24.542511 master-0 kubenswrapper[7518]: [+]process-running ok Mar 13 12:53:24.542511 master-0 kubenswrapper[7518]: healthz check failed Mar 13 12:53:24.543000 master-0 kubenswrapper[7518]: I0313 12:53:24.542536 7518 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-wtf6j" podUID="45925a5e-41ae-4c19-b586-3151c7677612" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 12:53:25.545453 master-0 kubenswrapper[7518]: I0313 12:53:25.545396 7518 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-wtf6j container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 12:53:25.545453 master-0 kubenswrapper[7518]: [-]has-synced failed: reason withheld Mar 13 12:53:25.545453 master-0 kubenswrapper[7518]: [+]process-running ok Mar 13 12:53:25.545453 master-0 kubenswrapper[7518]: healthz check failed Mar 13 12:53:25.546063 master-0 kubenswrapper[7518]: I0313 12:53:25.545506 7518 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-wtf6j" podUID="45925a5e-41ae-4c19-b586-3151c7677612" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 12:53:25.605081 master-0 kubenswrapper[7518]: I0313 12:53:25.605005 7518 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="741a6830aaef63e92194dd05d0b4da3d" path="/var/lib/kubelet/pods/741a6830aaef63e92194dd05d0b4da3d/volumes" Mar 13 12:53:25.668747 master-0 kubenswrapper[7518]: I0313 12:53:25.668623 7518 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/installer-3-master-0" Mar 13 12:53:25.950213 master-0 kubenswrapper[7518]: I0313 12:53:25.950083 7518 scope.go:117] "RemoveContainer" containerID="9553cf75735bf17d389fb1088bd8b8e97d7600ab1818e3680fca777b7afeaa50" Mar 13 12:53:25.966184 master-0 kubenswrapper[7518]: I0313 12:53:25.966114 7518 scope.go:117] "RemoveContainer" containerID="33dc3b8e25f77fb05b589ec8e3e510dade539a78b8f7492825619e6eaad51fe9" Mar 13 12:53:25.980169 master-0 kubenswrapper[7518]: I0313 12:53:25.980074 7518 scope.go:117] "RemoveContainer" containerID="c79a1fdbba512b9f4f21a08ea7612d350b0579fc0951d1d8b0ae9fc5bc23fc15" Mar 13 12:53:26.015870 master-0 kubenswrapper[7518]: I0313 12:53:26.015811 7518 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/76fe9cb6-ff3d-4bd9-a26d-dc8c9ce4a8aa-var-lock\") pod \"76fe9cb6-ff3d-4bd9-a26d-dc8c9ce4a8aa\" (UID: \"76fe9cb6-ff3d-4bd9-a26d-dc8c9ce4a8aa\") " Mar 13 12:53:26.015991 master-0 kubenswrapper[7518]: I0313 12:53:26.015874 7518 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/76fe9cb6-ff3d-4bd9-a26d-dc8c9ce4a8aa-kubelet-dir\") pod \"76fe9cb6-ff3d-4bd9-a26d-dc8c9ce4a8aa\" (UID: \"76fe9cb6-ff3d-4bd9-a26d-dc8c9ce4a8aa\") " Mar 13 12:53:26.015991 master-0 kubenswrapper[7518]: I0313 12:53:26.015919 7518 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/76fe9cb6-ff3d-4bd9-a26d-dc8c9ce4a8aa-kube-api-access\") pod \"76fe9cb6-ff3d-4bd9-a26d-dc8c9ce4a8aa\" (UID: \"76fe9cb6-ff3d-4bd9-a26d-dc8c9ce4a8aa\") " Mar 13 12:53:26.016089 master-0 kubenswrapper[7518]: I0313 12:53:26.016003 7518 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/76fe9cb6-ff3d-4bd9-a26d-dc8c9ce4a8aa-var-lock" (OuterVolumeSpecName: "var-lock") 
pod "76fe9cb6-ff3d-4bd9-a26d-dc8c9ce4a8aa" (UID: "76fe9cb6-ff3d-4bd9-a26d-dc8c9ce4a8aa"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 13 12:53:26.016211 master-0 kubenswrapper[7518]: I0313 12:53:26.016083 7518 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/76fe9cb6-ff3d-4bd9-a26d-dc8c9ce4a8aa-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "76fe9cb6-ff3d-4bd9-a26d-dc8c9ce4a8aa" (UID: "76fe9cb6-ff3d-4bd9-a26d-dc8c9ce4a8aa"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 13 12:53:26.016269 master-0 kubenswrapper[7518]: I0313 12:53:26.016242 7518 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/76fe9cb6-ff3d-4bd9-a26d-dc8c9ce4a8aa-var-lock\") on node \"master-0\" DevicePath \"\"" Mar 13 12:53:26.018725 master-0 kubenswrapper[7518]: I0313 12:53:26.018688 7518 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/76fe9cb6-ff3d-4bd9-a26d-dc8c9ce4a8aa-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "76fe9cb6-ff3d-4bd9-a26d-dc8c9ce4a8aa" (UID: "76fe9cb6-ff3d-4bd9-a26d-dc8c9ce4a8aa"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 12:53:26.118298 master-0 kubenswrapper[7518]: I0313 12:53:26.118211 7518 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/76fe9cb6-ff3d-4bd9-a26d-dc8c9ce4a8aa-kubelet-dir\") on node \"master-0\" DevicePath \"\"" Mar 13 12:53:26.118298 master-0 kubenswrapper[7518]: I0313 12:53:26.118283 7518 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/76fe9cb6-ff3d-4bd9-a26d-dc8c9ce4a8aa-kube-api-access\") on node \"master-0\" DevicePath \"\"" Mar 13 12:53:26.412123 master-0 kubenswrapper[7518]: I0313 12:53:26.412075 7518 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-3-master-0" event={"ID":"76fe9cb6-ff3d-4bd9-a26d-dc8c9ce4a8aa","Type":"ContainerDied","Data":"495a72687402da10550aa60f4b41a9bc310b020e43ddbbb5f831586412f05db8"} Mar 13 12:53:26.412123 master-0 kubenswrapper[7518]: I0313 12:53:26.412117 7518 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="495a72687402da10550aa60f4b41a9bc310b020e43ddbbb5f831586412f05db8" Mar 13 12:53:26.412462 master-0 kubenswrapper[7518]: I0313 12:53:26.412439 7518 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/installer-3-master-0" Mar 13 12:53:26.542300 master-0 kubenswrapper[7518]: I0313 12:53:26.542217 7518 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-wtf6j container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 12:53:26.542300 master-0 kubenswrapper[7518]: [-]has-synced failed: reason withheld Mar 13 12:53:26.542300 master-0 kubenswrapper[7518]: [+]process-running ok Mar 13 12:53:26.542300 master-0 kubenswrapper[7518]: healthz check failed Mar 13 12:53:26.542300 master-0 kubenswrapper[7518]: I0313 12:53:26.542280 7518 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-wtf6j" podUID="45925a5e-41ae-4c19-b586-3151c7677612" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 12:53:27.541943 master-0 kubenswrapper[7518]: I0313 12:53:27.541857 7518 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-wtf6j container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 12:53:27.541943 master-0 kubenswrapper[7518]: [-]has-synced failed: reason withheld Mar 13 12:53:27.541943 master-0 kubenswrapper[7518]: [+]process-running ok Mar 13 12:53:27.541943 master-0 kubenswrapper[7518]: healthz check failed Mar 13 12:53:27.542543 master-0 kubenswrapper[7518]: I0313 12:53:27.541952 7518 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-wtf6j" podUID="45925a5e-41ae-4c19-b586-3151c7677612" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 12:53:27.597888 master-0 kubenswrapper[7518]: I0313 12:53:27.597782 7518 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" Mar 13 12:53:27.620385 master-0 kubenswrapper[7518]: I0313 12:53:27.620336 7518 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" podUID="71074686-2d39-425b-a4ac-cd1d9769c8f3" Mar 13 12:53:27.620528 master-0 kubenswrapper[7518]: I0313 12:53:27.620400 7518 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" podUID="71074686-2d39-425b-a4ac-cd1d9769c8f3" Mar 13 12:53:27.637704 master-0 kubenswrapper[7518]: I0313 12:53:27.637628 7518 kubelet.go:1914] "Deleted mirror pod because it is outdated" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" Mar 13 12:53:27.641411 master-0 kubenswrapper[7518]: I0313 12:53:27.641348 7518 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-scheduler/openshift-kube-scheduler-master-0"] Mar 13 12:53:27.647627 master-0 kubenswrapper[7518]: I0313 12:53:27.647554 7518 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-scheduler/openshift-kube-scheduler-master-0"] Mar 13 12:53:27.655950 master-0 kubenswrapper[7518]: I0313 12:53:27.655900 7518 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" Mar 13 12:53:27.665828 master-0 kubenswrapper[7518]: I0313 12:53:27.665759 7518 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/openshift-kube-scheduler-master-0"] Mar 13 12:53:27.683719 master-0 kubenswrapper[7518]: W0313 12:53:27.683629 7518 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1453f6461bf5d599ad65a4656343ee91.slice/crio-2a520ce1540e4505903e0c09b3c7ff382c5a6347945280110eeacb275245a884 WatchSource:0}: Error finding container 2a520ce1540e4505903e0c09b3c7ff382c5a6347945280110eeacb275245a884: Status 404 returned error can't find the container with id 2a520ce1540e4505903e0c09b3c7ff382c5a6347945280110eeacb275245a884 Mar 13 12:53:28.431780 master-0 kubenswrapper[7518]: I0313 12:53:28.431698 7518 generic.go:334] "Generic (PLEG): container finished" podID="1453f6461bf5d599ad65a4656343ee91" containerID="65d8ab343a6c8c9cdae0b29379d80db7bbdfeeeb082bcdc9935f85db242121e8" exitCode=0 Mar 13 12:53:28.432013 master-0 kubenswrapper[7518]: I0313 12:53:28.431813 7518 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" event={"ID":"1453f6461bf5d599ad65a4656343ee91","Type":"ContainerDied","Data":"65d8ab343a6c8c9cdae0b29379d80db7bbdfeeeb082bcdc9935f85db242121e8"} Mar 13 12:53:28.432013 master-0 kubenswrapper[7518]: I0313 12:53:28.431931 7518 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" event={"ID":"1453f6461bf5d599ad65a4656343ee91","Type":"ContainerStarted","Data":"2a520ce1540e4505903e0c09b3c7ff382c5a6347945280110eeacb275245a884"} Mar 13 12:53:28.541998 master-0 kubenswrapper[7518]: I0313 12:53:28.541940 7518 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-wtf6j container/router namespace/openshift-ingress: Startup probe status=failure 
output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 12:53:28.541998 master-0 kubenswrapper[7518]: [-]has-synced failed: reason withheld Mar 13 12:53:28.541998 master-0 kubenswrapper[7518]: [+]process-running ok Mar 13 12:53:28.541998 master-0 kubenswrapper[7518]: healthz check failed Mar 13 12:53:28.542683 master-0 kubenswrapper[7518]: I0313 12:53:28.542003 7518 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-wtf6j" podUID="45925a5e-41ae-4c19-b586-3151c7677612" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 12:53:29.443202 master-0 kubenswrapper[7518]: I0313 12:53:29.443019 7518 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" event={"ID":"1453f6461bf5d599ad65a4656343ee91","Type":"ContainerStarted","Data":"9e6d07d04707c83d5d761b1f7ed58474303d364667db54ae899df77b8c71b52d"} Mar 13 12:53:29.443202 master-0 kubenswrapper[7518]: I0313 12:53:29.443097 7518 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" event={"ID":"1453f6461bf5d599ad65a4656343ee91","Type":"ContainerStarted","Data":"d787d856f0918a254ce3c937e9007cce5d60df73e45e63a2b9e3c69dda9b0e44"} Mar 13 12:53:29.443202 master-0 kubenswrapper[7518]: I0313 12:53:29.443126 7518 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" event={"ID":"1453f6461bf5d599ad65a4656343ee91","Type":"ContainerStarted","Data":"b5e0746e4832ff55bf614aa770ddd19a9a9fc08ca7f1ca173dc0718a80c8990d"} Mar 13 12:53:29.443475 master-0 kubenswrapper[7518]: I0313 12:53:29.443233 7518 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" Mar 13 12:53:29.467286 master-0 kubenswrapper[7518]: I0313 12:53:29.467206 7518 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" podStartSLOduration=2.46717215 podStartE2EDuration="2.46717215s" podCreationTimestamp="2026-03-13 12:53:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 12:53:29.463463412 +0000 UTC m=+964.096532599" watchObservedRunningTime="2026-03-13 12:53:29.46717215 +0000 UTC m=+964.100241357"
Mar 13 12:53:29.541421 master-0 kubenswrapper[7518]: I0313 12:53:29.541353 7518 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-wtf6j container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 13 12:53:29.541421 master-0 kubenswrapper[7518]: [-]has-synced failed: reason withheld
Mar 13 12:53:29.541421 master-0 kubenswrapper[7518]: [+]process-running ok
Mar 13 12:53:29.541421 master-0 kubenswrapper[7518]: healthz check failed
Mar 13 12:53:29.541868 master-0 kubenswrapper[7518]: I0313 12:53:29.541428 7518 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-wtf6j" podUID="45925a5e-41ae-4c19-b586-3151c7677612" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 13 12:53:30.541936 master-0 kubenswrapper[7518]: I0313 12:53:30.541864 7518 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-wtf6j container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 13 12:53:30.541936 master-0 kubenswrapper[7518]: [-]has-synced failed: reason withheld
Mar 13 12:53:30.541936 master-0 kubenswrapper[7518]: [+]process-running ok
Mar 13 12:53:30.541936 master-0 kubenswrapper[7518]: healthz check failed
Mar 13 12:53:30.542547 master-0 kubenswrapper[7518]: I0313 12:53:30.541978 7518 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-wtf6j" podUID="45925a5e-41ae-4c19-b586-3151c7677612" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 13 12:53:31.541808 master-0 kubenswrapper[7518]: I0313 12:53:31.541739 7518 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-wtf6j container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 13 12:53:31.541808 master-0 kubenswrapper[7518]: [-]has-synced failed: reason withheld
Mar 13 12:53:31.541808 master-0 kubenswrapper[7518]: [+]process-running ok
Mar 13 12:53:31.541808 master-0 kubenswrapper[7518]: healthz check failed
Mar 13 12:53:31.542643 master-0 kubenswrapper[7518]: I0313 12:53:31.541839 7518 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-wtf6j" podUID="45925a5e-41ae-4c19-b586-3151c7677612" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 13 12:53:32.541851 master-0 kubenswrapper[7518]: I0313 12:53:32.541805 7518 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-wtf6j container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 13 12:53:32.541851 master-0 kubenswrapper[7518]: [-]has-synced failed: reason withheld
Mar 13 12:53:32.541851 master-0 kubenswrapper[7518]: [+]process-running ok
Mar 13 12:53:32.541851 master-0 kubenswrapper[7518]: healthz check failed
Mar 13 12:53:32.542524 master-0 kubenswrapper[7518]: I0313 12:53:32.542489 7518 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-wtf6j" podUID="45925a5e-41ae-4c19-b586-3151c7677612" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 13 12:53:33.543158 master-0 kubenswrapper[7518]: I0313 12:53:33.543061 7518 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-wtf6j container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 13 12:53:33.543158 master-0 kubenswrapper[7518]: [-]has-synced failed: reason withheld
Mar 13 12:53:33.543158 master-0 kubenswrapper[7518]: [+]process-running ok
Mar 13 12:53:33.543158 master-0 kubenswrapper[7518]: healthz check failed
Mar 13 12:53:33.544024 master-0 kubenswrapper[7518]: I0313 12:53:33.543178 7518 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-wtf6j" podUID="45925a5e-41ae-4c19-b586-3151c7677612" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 13 12:53:34.541886 master-0 kubenswrapper[7518]: I0313 12:53:34.541823 7518 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-wtf6j container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 13 12:53:34.541886 master-0 kubenswrapper[7518]: [-]has-synced failed: reason withheld
Mar 13 12:53:34.541886 master-0 kubenswrapper[7518]: [+]process-running ok
Mar 13 12:53:34.541886 master-0 kubenswrapper[7518]: healthz check failed
Mar 13 12:53:34.542378 master-0 kubenswrapper[7518]: I0313 12:53:34.541904 7518 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-wtf6j" podUID="45925a5e-41ae-4c19-b586-3151c7677612" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 13 12:53:35.542902 master-0 kubenswrapper[7518]: I0313 12:53:35.542802 7518 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-wtf6j container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 13 12:53:35.542902 master-0 kubenswrapper[7518]: [-]has-synced failed: reason withheld
Mar 13 12:53:35.542902 master-0 kubenswrapper[7518]: [+]process-running ok
Mar 13 12:53:35.542902 master-0 kubenswrapper[7518]: healthz check failed
Mar 13 12:53:35.542902 master-0 kubenswrapper[7518]: I0313 12:53:35.542903 7518 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-wtf6j" podUID="45925a5e-41ae-4c19-b586-3151c7677612" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 13 12:53:36.542860 master-0 kubenswrapper[7518]: I0313 12:53:36.542743 7518 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-wtf6j container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 13 12:53:36.542860 master-0 kubenswrapper[7518]: [-]has-synced failed: reason withheld
Mar 13 12:53:36.542860 master-0 kubenswrapper[7518]: [+]process-running ok
Mar 13 12:53:36.542860 master-0 kubenswrapper[7518]: healthz check failed
Mar 13 12:53:36.543918 master-0 kubenswrapper[7518]: I0313 12:53:36.542899 7518 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-wtf6j" podUID="45925a5e-41ae-4c19-b586-3151c7677612" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 13 12:53:37.541574 master-0 kubenswrapper[7518]: I0313 12:53:37.541489 7518 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-wtf6j container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 13 12:53:37.541574 master-0 kubenswrapper[7518]: [-]has-synced failed: reason withheld
Mar 13 12:53:37.541574 master-0 kubenswrapper[7518]: [+]process-running ok
Mar 13 12:53:37.541574 master-0 kubenswrapper[7518]: healthz check failed
Mar 13 12:53:37.541574 master-0 kubenswrapper[7518]: I0313 12:53:37.541571 7518 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-wtf6j" podUID="45925a5e-41ae-4c19-b586-3151c7677612" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 13 12:53:38.541277 master-0 kubenswrapper[7518]: I0313 12:53:38.541172 7518 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-wtf6j container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 13 12:53:38.541277 master-0 kubenswrapper[7518]: [-]has-synced failed: reason withheld
Mar 13 12:53:38.541277 master-0 kubenswrapper[7518]: [+]process-running ok
Mar 13 12:53:38.541277 master-0 kubenswrapper[7518]: healthz check failed
Mar 13 12:53:38.541277 master-0 kubenswrapper[7518]: I0313 12:53:38.541265 7518 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-wtf6j" podUID="45925a5e-41ae-4c19-b586-3151c7677612" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 13 12:53:38.598291 master-0 kubenswrapper[7518]: I0313 12:53:38.598228 7518 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Mar 13 12:53:38.618169 master-0 kubenswrapper[7518]: I0313 12:53:38.618099 7518 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="14da54f4-8610-46f2-84b1-eeab7480676c"
Mar 13 12:53:38.618169 master-0 kubenswrapper[7518]: I0313 12:53:38.618155 7518 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="14da54f4-8610-46f2-84b1-eeab7480676c"
Mar 13 12:53:38.635002 master-0 kubenswrapper[7518]: I0313 12:53:38.634940 7518 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-controller-manager/kube-controller-manager-master-0"]
Mar 13 12:53:38.644116 master-0 kubenswrapper[7518]: I0313 12:53:38.640551 7518 kubelet.go:1914] "Deleted mirror pod because it is outdated" pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Mar 13 12:53:38.646728 master-0 kubenswrapper[7518]: I0313 12:53:38.646662 7518 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-controller-manager/kube-controller-manager-master-0"]
Mar 13 12:53:38.659277 master-0 kubenswrapper[7518]: I0313 12:53:38.658199 7518 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Mar 13 12:53:38.662296 master-0 kubenswrapper[7518]: I0313 12:53:38.662248 7518 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/kube-controller-manager-master-0"]
Mar 13 12:53:38.686616 master-0 kubenswrapper[7518]: W0313 12:53:38.686565 7518 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9b24fda1c2e55a08607764d7b9b24355.slice/crio-2b2ef2ddaedb81fecd10454e7de227fc33e0631466b7f1d7f0c388f2e1883f04 WatchSource:0}: Error finding container 2b2ef2ddaedb81fecd10454e7de227fc33e0631466b7f1d7f0c388f2e1883f04: Status 404 returned error can't find the container with id 2b2ef2ddaedb81fecd10454e7de227fc33e0631466b7f1d7f0c388f2e1883f04
Mar 13 12:53:39.529843 master-0 kubenswrapper[7518]: I0313 12:53:39.529763 7518 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"9b24fda1c2e55a08607764d7b9b24355","Type":"ContainerStarted","Data":"5225f7faf919f4bc9952279b8c17b48fc7fc5f38f60abb397f40ed2bc6a9712f"}
Mar 13 12:53:39.529843 master-0 kubenswrapper[7518]: I0313 12:53:39.529828 7518 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"9b24fda1c2e55a08607764d7b9b24355","Type":"ContainerStarted","Data":"ce8fbd2d677d2b615a5a88b0a30db1875f87de60b024e842112e21ebdf54651c"}
Mar 13 12:53:39.529843 master-0 kubenswrapper[7518]: I0313 12:53:39.529852 7518 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"9b24fda1c2e55a08607764d7b9b24355","Type":"ContainerStarted","Data":"9d8c499b649c8b47f8ee879f85d758879da02816a8ef90cde6964dab92a4ae11"}
Mar 13 12:53:39.530162 master-0 kubenswrapper[7518]: I0313 12:53:39.529867 7518 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"9b24fda1c2e55a08607764d7b9b24355","Type":"ContainerStarted","Data":"2b2ef2ddaedb81fecd10454e7de227fc33e0631466b7f1d7f0c388f2e1883f04"}
Mar 13 12:53:39.541885 master-0 kubenswrapper[7518]: I0313 12:53:39.541837 7518 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-wtf6j container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 13 12:53:39.541885 master-0 kubenswrapper[7518]: [-]has-synced failed: reason withheld
Mar 13 12:53:39.541885 master-0 kubenswrapper[7518]: [+]process-running ok
Mar 13 12:53:39.541885 master-0 kubenswrapper[7518]: healthz check failed
Mar 13 12:53:39.542382 master-0 kubenswrapper[7518]: I0313 12:53:39.541892 7518 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-wtf6j" podUID="45925a5e-41ae-4c19-b586-3151c7677612" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 13 12:53:40.542218 master-0 kubenswrapper[7518]: I0313 12:53:40.542173 7518 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-wtf6j container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 13 12:53:40.542218 master-0 kubenswrapper[7518]: [-]has-synced failed: reason withheld
Mar 13 12:53:40.542218 master-0 kubenswrapper[7518]: [+]process-running ok
Mar 13 12:53:40.542218 master-0 kubenswrapper[7518]: healthz check failed
Mar 13 12:53:40.542786 master-0 kubenswrapper[7518]: I0313 12:53:40.542233 7518 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-wtf6j" podUID="45925a5e-41ae-4c19-b586-3151c7677612" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 13 12:53:40.545318 master-0 kubenswrapper[7518]: I0313 12:53:40.545293 7518 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"9b24fda1c2e55a08607764d7b9b24355","Type":"ContainerStarted","Data":"552eeff527d5a35e104a39621a65fa2da7a1df380403a303c48d0f6f3bca4451"}
Mar 13 12:53:40.570663 master-0 kubenswrapper[7518]: I0313 12:53:40.570578 7518 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podStartSLOduration=2.570556145 podStartE2EDuration="2.570556145s" podCreationTimestamp="2026-03-13 12:53:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 12:53:40.569434382 +0000 UTC m=+975.202503619" watchObservedRunningTime="2026-03-13 12:53:40.570556145 +0000 UTC m=+975.203625332"
Mar 13 12:53:41.542972 master-0 kubenswrapper[7518]: I0313 12:53:41.542910 7518 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-wtf6j container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 13 12:53:41.542972 master-0 kubenswrapper[7518]: [-]has-synced failed: reason withheld
Mar 13 12:53:41.542972 master-0 kubenswrapper[7518]: [+]process-running ok
Mar 13 12:53:41.542972 master-0 kubenswrapper[7518]: healthz check failed
Mar 13 12:53:41.544458 master-0 kubenswrapper[7518]: I0313 12:53:41.544411 7518 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-wtf6j" podUID="45925a5e-41ae-4c19-b586-3151c7677612" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 13 12:53:42.543220 master-0 kubenswrapper[7518]: I0313 12:53:42.543147 7518 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-wtf6j container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 13 12:53:42.543220 master-0 kubenswrapper[7518]: [-]has-synced failed: reason withheld
Mar 13 12:53:42.543220 master-0 kubenswrapper[7518]: [+]process-running ok
Mar 13 12:53:42.543220 master-0 kubenswrapper[7518]: healthz check failed
Mar 13 12:53:42.543753 master-0 kubenswrapper[7518]: I0313 12:53:42.543228 7518 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-wtf6j" podUID="45925a5e-41ae-4c19-b586-3151c7677612" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 13 12:53:43.471545 master-0 kubenswrapper[7518]: I0313 12:53:43.471481 7518 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"]
Mar 13 12:53:43.471877 master-0 kubenswrapper[7518]: E0313 12:53:43.471852 7518 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="76fe9cb6-ff3d-4bd9-a26d-dc8c9ce4a8aa" containerName="installer"
Mar 13 12:53:43.471877 master-0 kubenswrapper[7518]: I0313 12:53:43.471876 7518 state_mem.go:107] "Deleted CPUSet assignment" podUID="76fe9cb6-ff3d-4bd9-a26d-dc8c9ce4a8aa" containerName="installer"
Mar 13 12:53:43.472012 master-0 kubenswrapper[7518]: I0313 12:53:43.471992 7518 memory_manager.go:354] "RemoveStaleState removing state" podUID="76fe9cb6-ff3d-4bd9-a26d-dc8c9ce4a8aa" containerName="installer"
Mar 13 12:53:43.472589 master-0 kubenswrapper[7518]: I0313 12:53:43.472556 7518 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Mar 13 12:53:43.508893 master-0 kubenswrapper[7518]: I0313 12:53:43.508831 7518 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"]
Mar 13 12:53:43.516891 master-0 kubenswrapper[7518]: I0313 12:53:43.516822 7518 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/3a18cac8a90d6913a6a0391d805cddc9-var-lock\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"3a18cac8a90d6913a6a0391d805cddc9\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Mar 13 12:53:43.517075 master-0 kubenswrapper[7518]: I0313 12:53:43.516909 7518 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/3a18cac8a90d6913a6a0391d805cddc9-var-log\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"3a18cac8a90d6913a6a0391d805cddc9\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Mar 13 12:53:43.517075 master-0 kubenswrapper[7518]: I0313 12:53:43.516953 7518 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/3a18cac8a90d6913a6a0391d805cddc9-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"3a18cac8a90d6913a6a0391d805cddc9\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Mar 13 12:53:43.517075 master-0 kubenswrapper[7518]: I0313 12:53:43.517010 7518 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/3a18cac8a90d6913a6a0391d805cddc9-manifests\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"3a18cac8a90d6913a6a0391d805cddc9\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Mar 13 12:53:43.517075 master-0 kubenswrapper[7518]: I0313 12:53:43.517053 7518 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3a18cac8a90d6913a6a0391d805cddc9-resource-dir\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"3a18cac8a90d6913a6a0391d805cddc9\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Mar 13 12:53:43.541945 master-0 kubenswrapper[7518]: I0313 12:53:43.541871 7518 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-wtf6j container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 13 12:53:43.541945 master-0 kubenswrapper[7518]: [-]has-synced failed: reason withheld
Mar 13 12:53:43.541945 master-0 kubenswrapper[7518]: [+]process-running ok
Mar 13 12:53:43.541945 master-0 kubenswrapper[7518]: healthz check failed
Mar 13 12:53:43.542269 master-0 kubenswrapper[7518]: I0313 12:53:43.541988 7518 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-wtf6j" podUID="45925a5e-41ae-4c19-b586-3151c7677612" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 13 12:53:43.543586 master-0 kubenswrapper[7518]: I0313 12:53:43.543525 7518 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"]
Mar 13 12:53:43.544202 master-0 kubenswrapper[7518]: I0313 12:53:43.544111 7518 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" podUID="5f77c8e18b751d90bc0dfe2d4e304050" containerName="kube-apiserver-insecure-readyz" containerID="cri-o://838f1203bfc2909f5be268d039e5903c4aada457bcd573b0395f4215bfc0c446" gracePeriod=15
Mar 13 12:53:43.545131 master-0 kubenswrapper[7518]: I0313 12:53:43.544112 7518 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" podUID="5f77c8e18b751d90bc0dfe2d4e304050" containerName="kube-apiserver" containerID="cri-o://f3be2171b1690f9bafcc889e55d83ff1a441baaed77d90117edebfc3db8ff2b9" gracePeriod=15
Mar 13 12:53:43.545131 master-0 kubenswrapper[7518]: I0313 12:53:43.544939 7518 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-master-0"]
Mar 13 12:53:43.545256 master-0 kubenswrapper[7518]: E0313 12:53:43.545216 7518 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5f77c8e18b751d90bc0dfe2d4e304050" containerName="kube-apiserver-insecure-readyz"
Mar 13 12:53:43.545256 master-0 kubenswrapper[7518]: I0313 12:53:43.545232 7518 state_mem.go:107] "Deleted CPUSet assignment" podUID="5f77c8e18b751d90bc0dfe2d4e304050" containerName="kube-apiserver-insecure-readyz"
Mar 13 12:53:43.545256 master-0 kubenswrapper[7518]: E0313 12:53:43.545244 7518 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5f77c8e18b751d90bc0dfe2d4e304050" containerName="kube-apiserver"
Mar 13 12:53:43.545256 master-0 kubenswrapper[7518]: I0313 12:53:43.545252 7518 state_mem.go:107] "Deleted CPUSet assignment" podUID="5f77c8e18b751d90bc0dfe2d4e304050" containerName="kube-apiserver"
Mar 13 12:53:43.545392 master-0 kubenswrapper[7518]: E0313 12:53:43.545270 7518 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5f77c8e18b751d90bc0dfe2d4e304050" containerName="setup"
Mar 13 12:53:43.545392 master-0 kubenswrapper[7518]: I0313 12:53:43.545276 7518 state_mem.go:107] "Deleted CPUSet assignment" podUID="5f77c8e18b751d90bc0dfe2d4e304050" containerName="setup"
Mar 13 12:53:43.547324 master-0 kubenswrapper[7518]: I0313 12:53:43.545462 7518 memory_manager.go:354] "RemoveStaleState removing state" podUID="5f77c8e18b751d90bc0dfe2d4e304050" containerName="kube-apiserver-insecure-readyz"
Mar 13 12:53:43.547324 master-0 kubenswrapper[7518]: I0313 12:53:43.545492 7518 memory_manager.go:354] "RemoveStaleState removing state" podUID="5f77c8e18b751d90bc0dfe2d4e304050" containerName="setup"
Mar 13 12:53:43.547324 master-0 kubenswrapper[7518]: I0313 12:53:43.545520 7518 memory_manager.go:354] "RemoveStaleState removing state" podUID="5f77c8e18b751d90bc0dfe2d4e304050" containerName="kube-apiserver"
Mar 13 12:53:43.547756 master-0 kubenswrapper[7518]: I0313 12:53:43.547732 7518 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-master-0"
Mar 13 12:53:43.598878 master-0 kubenswrapper[7518]: E0313 12:53:43.598309 7518 kubelet.go:1929] "Failed creating a mirror pod for" err="Post \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 192.168.32.10:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-master-0"
Mar 13 12:53:43.618147 master-0 kubenswrapper[7518]: I0313 12:53:43.618024 7518 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/3a18cac8a90d6913a6a0391d805cddc9-var-log\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"3a18cac8a90d6913a6a0391d805cddc9\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Mar 13 12:53:43.618412 master-0 kubenswrapper[7518]: I0313 12:53:43.618198 7518 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/3a18cac8a90d6913a6a0391d805cddc9-var-log\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"3a18cac8a90d6913a6a0391d805cddc9\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Mar 13 12:53:43.618412 master-0 kubenswrapper[7518]: I0313 12:53:43.618238 7518 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/3a18cac8a90d6913a6a0391d805cddc9-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"3a18cac8a90d6913a6a0391d805cddc9\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Mar 13 12:53:43.618624 master-0 kubenswrapper[7518]: I0313 12:53:43.618571 7518 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/48512e02022680c9d90092634f0fc146-cert-dir\") pod \"kube-apiserver-master-0\" (UID: \"48512e02022680c9d90092634f0fc146\") " pod="openshift-kube-apiserver/kube-apiserver-master-0"
Mar 13 12:53:43.618832 master-0 kubenswrapper[7518]: I0313 12:53:43.618760 7518 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/3a18cac8a90d6913a6a0391d805cddc9-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"3a18cac8a90d6913a6a0391d805cddc9\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Mar 13 12:53:43.619178 master-0 kubenswrapper[7518]: I0313 12:53:43.619124 7518 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/3a18cac8a90d6913a6a0391d805cddc9-manifests\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"3a18cac8a90d6913a6a0391d805cddc9\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Mar 13 12:53:43.619243 master-0 kubenswrapper[7518]: I0313 12:53:43.619219 7518 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/3a18cac8a90d6913a6a0391d805cddc9-manifests\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"3a18cac8a90d6913a6a0391d805cddc9\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Mar 13 12:53:43.619378 master-0 kubenswrapper[7518]: I0313 12:53:43.619336 7518 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3a18cac8a90d6913a6a0391d805cddc9-resource-dir\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"3a18cac8a90d6913a6a0391d805cddc9\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Mar 13 12:53:43.619436 master-0 kubenswrapper[7518]: I0313 12:53:43.619385 7518 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3a18cac8a90d6913a6a0391d805cddc9-resource-dir\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"3a18cac8a90d6913a6a0391d805cddc9\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Mar 13 12:53:43.619436 master-0 kubenswrapper[7518]: I0313 12:53:43.619421 7518 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/48512e02022680c9d90092634f0fc146-audit-dir\") pod \"kube-apiserver-master-0\" (UID: \"48512e02022680c9d90092634f0fc146\") " pod="openshift-kube-apiserver/kube-apiserver-master-0"
Mar 13 12:53:43.619514 master-0 kubenswrapper[7518]: I0313 12:53:43.619458 7518 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/48512e02022680c9d90092634f0fc146-resource-dir\") pod \"kube-apiserver-master-0\" (UID: \"48512e02022680c9d90092634f0fc146\") " pod="openshift-kube-apiserver/kube-apiserver-master-0"
Mar 13 12:53:43.619554 master-0 kubenswrapper[7518]: I0313 12:53:43.619542 7518 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/3a18cac8a90d6913a6a0391d805cddc9-var-lock\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"3a18cac8a90d6913a6a0391d805cddc9\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Mar 13 12:53:43.619654 master-0 kubenswrapper[7518]: I0313 12:53:43.619625 7518 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/3a18cac8a90d6913a6a0391d805cddc9-var-lock\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"3a18cac8a90d6913a6a0391d805cddc9\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Mar 13 12:53:43.721501 master-0 kubenswrapper[7518]: I0313 12:53:43.721382 7518 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/48512e02022680c9d90092634f0fc146-cert-dir\") pod \"kube-apiserver-master-0\" (UID: \"48512e02022680c9d90092634f0fc146\") " pod="openshift-kube-apiserver/kube-apiserver-master-0"
Mar 13 12:53:43.721886 master-0 kubenswrapper[7518]: I0313 12:53:43.721601 7518 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/48512e02022680c9d90092634f0fc146-cert-dir\") pod \"kube-apiserver-master-0\" (UID: \"48512e02022680c9d90092634f0fc146\") " pod="openshift-kube-apiserver/kube-apiserver-master-0"
Mar 13 12:53:43.721886 master-0 kubenswrapper[7518]: I0313 12:53:43.721661 7518 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/48512e02022680c9d90092634f0fc146-audit-dir\") pod \"kube-apiserver-master-0\" (UID: \"48512e02022680c9d90092634f0fc146\") " pod="openshift-kube-apiserver/kube-apiserver-master-0"
Mar 13 12:53:43.721886 master-0 kubenswrapper[7518]: I0313 12:53:43.721774 7518 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/48512e02022680c9d90092634f0fc146-audit-dir\") pod \"kube-apiserver-master-0\" (UID: \"48512e02022680c9d90092634f0fc146\") " pod="openshift-kube-apiserver/kube-apiserver-master-0"
Mar 13 12:53:43.721886 master-0 kubenswrapper[7518]: I0313 12:53:43.721864 7518 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/48512e02022680c9d90092634f0fc146-resource-dir\") pod \"kube-apiserver-master-0\" (UID: \"48512e02022680c9d90092634f0fc146\") " pod="openshift-kube-apiserver/kube-apiserver-master-0"
Mar 13 12:53:43.722314 master-0 kubenswrapper[7518]: I0313 12:53:43.721941 7518 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/48512e02022680c9d90092634f0fc146-resource-dir\") pod \"kube-apiserver-master-0\" (UID: \"48512e02022680c9d90092634f0fc146\") " pod="openshift-kube-apiserver/kube-apiserver-master-0"
Mar 13 12:53:43.805683 master-0 kubenswrapper[7518]: I0313 12:53:43.805595 7518 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Mar 13 12:53:43.828583 master-0 kubenswrapper[7518]: W0313 12:53:43.828523 7518 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3a18cac8a90d6913a6a0391d805cddc9.slice/crio-b046991449e1d420ea17d254f8c05faec355e4aacc147507b98a3f095fa7ff11 WatchSource:0}: Error finding container b046991449e1d420ea17d254f8c05faec355e4aacc147507b98a3f095fa7ff11: Status 404 returned error can't find the container with id b046991449e1d420ea17d254f8c05faec355e4aacc147507b98a3f095fa7ff11
Mar 13 12:53:43.832249 master-0 kubenswrapper[7518]: E0313 12:53:43.832110 7518 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 192.168.32.10:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-master-0.189c67bf1ecee68e openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-master-0,UID:3a18cac8a90d6913a6a0391d805cddc9,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5500329ab50804678fb8a90b96bf2a469bca16b620fb6dd2f5f5a17106e94898\" already present on machine,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-13 12:53:43.830423182 +0000 UTC m=+978.463492369,LastTimestamp:2026-03-13 12:53:43.830423182 +0000 UTC m=+978.463492369,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Mar 13 12:53:43.899335 master-0 kubenswrapper[7518]: I0313 12:53:43.899262 7518 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-master-0"
Mar 13 12:53:43.936957 master-0 kubenswrapper[7518]: W0313 12:53:43.936910 7518 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod48512e02022680c9d90092634f0fc146.slice/crio-dc74469df6e780c8e9e2827ef289651444a1ff65c5b17d5937b4448f9addb191 WatchSource:0}: Error finding container dc74469df6e780c8e9e2827ef289651444a1ff65c5b17d5937b4448f9addb191: Status 404 returned error can't find the container with id dc74469df6e780c8e9e2827ef289651444a1ff65c5b17d5937b4448f9addb191
Mar 13 12:53:44.541429 master-0 kubenswrapper[7518]: I0313 12:53:44.541371 7518 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-wtf6j container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 13 12:53:44.541429 master-0 kubenswrapper[7518]: [-]has-synced failed: reason withheld
Mar 13 12:53:44.541429 master-0 kubenswrapper[7518]: [+]process-running ok
Mar 13 12:53:44.541429 master-0 kubenswrapper[7518]: healthz check failed
Mar 13 12:53:44.541801 master-0 kubenswrapper[7518]: I0313 12:53:44.541434 7518 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-wtf6j" podUID="45925a5e-41ae-4c19-b586-3151c7677612" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 13 12:53:44.589972 master-0 kubenswrapper[7518]: I0313 12:53:44.589905 7518 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" event={"ID":"3a18cac8a90d6913a6a0391d805cddc9","Type":"ContainerStarted","Data":"4ea900f27c90a68c3b8cd2345d580f77e20ef846c8a749fe70f5724228e5cc04"}
Mar 13 12:53:44.589972 master-0 kubenswrapper[7518]: I0313 12:53:44.589966 7518 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" event={"ID":"3a18cac8a90d6913a6a0391d805cddc9","Type":"ContainerStarted","Data":"b046991449e1d420ea17d254f8c05faec355e4aacc147507b98a3f095fa7ff11"}
Mar 13 12:53:44.591158 master-0 kubenswrapper[7518]: I0313 12:53:44.591100 7518 status_manager.go:851] "Failed to get status for pod" podUID="3a18cac8a90d6913a6a0391d805cddc9" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused"
Mar 13 12:53:44.592410 master-0 kubenswrapper[7518]: I0313 12:53:44.592384 7518 generic.go:334] "Generic (PLEG): container finished" podID="185a10f7-2a4b-4171-b10d-4614cb8671bd" containerID="5cd4fe9ce3ca6e40b66f822008735eb91b0372a4e062d161fec91212083d1dbe" exitCode=0
Mar 13 12:53:44.592490 master-0 kubenswrapper[7518]: I0313 12:53:44.592433 7518 kubelet.go:2453]
"SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-4-master-0" event={"ID":"185a10f7-2a4b-4171-b10d-4614cb8671bd","Type":"ContainerDied","Data":"5cd4fe9ce3ca6e40b66f822008735eb91b0372a4e062d161fec91212083d1dbe"} Mar 13 12:53:44.593448 master-0 kubenswrapper[7518]: I0313 12:53:44.593414 7518 status_manager.go:851] "Failed to get status for pod" podUID="3a18cac8a90d6913a6a0391d805cddc9" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Mar 13 12:53:44.594184 master-0 kubenswrapper[7518]: I0313 12:53:44.594129 7518 status_manager.go:851] "Failed to get status for pod" podUID="185a10f7-2a4b-4171-b10d-4614cb8671bd" pod="openshift-kube-apiserver/installer-4-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-4-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Mar 13 12:53:44.594368 master-0 kubenswrapper[7518]: I0313 12:53:44.594345 7518 generic.go:334] "Generic (PLEG): container finished" podID="48512e02022680c9d90092634f0fc146" containerID="ba0afcdaf159bdee5cad84caecac2caf230f2beacc241756ab48e77be0ee5ebb" exitCode=0 Mar 13 12:53:44.594436 master-0 kubenswrapper[7518]: I0313 12:53:44.594373 7518 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"48512e02022680c9d90092634f0fc146","Type":"ContainerDied","Data":"ba0afcdaf159bdee5cad84caecac2caf230f2beacc241756ab48e77be0ee5ebb"} Mar 13 12:53:44.594436 master-0 kubenswrapper[7518]: I0313 12:53:44.594403 7518 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" 
event={"ID":"48512e02022680c9d90092634f0fc146","Type":"ContainerStarted","Data":"dc74469df6e780c8e9e2827ef289651444a1ff65c5b17d5937b4448f9addb191"} Mar 13 12:53:44.595232 master-0 kubenswrapper[7518]: E0313 12:53:44.595195 7518 kubelet.go:1929] "Failed creating a mirror pod for" err="Post \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 192.168.32.10:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 13 12:53:44.595298 master-0 kubenswrapper[7518]: I0313 12:53:44.595256 7518 status_manager.go:851] "Failed to get status for pod" podUID="3a18cac8a90d6913a6a0391d805cddc9" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Mar 13 12:53:44.595819 master-0 kubenswrapper[7518]: I0313 12:53:44.595779 7518 status_manager.go:851] "Failed to get status for pod" podUID="185a10f7-2a4b-4171-b10d-4614cb8671bd" pod="openshift-kube-apiserver/installer-4-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-4-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Mar 13 12:53:44.597974 master-0 kubenswrapper[7518]: I0313 12:53:44.597945 7518 generic.go:334] "Generic (PLEG): container finished" podID="5f77c8e18b751d90bc0dfe2d4e304050" containerID="838f1203bfc2909f5be268d039e5903c4aada457bcd573b0395f4215bfc0c446" exitCode=0 Mar 13 12:53:45.541474 master-0 kubenswrapper[7518]: I0313 12:53:45.541419 7518 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-wtf6j container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 12:53:45.541474 master-0 kubenswrapper[7518]: 
[-]has-synced failed: reason withheld Mar 13 12:53:45.541474 master-0 kubenswrapper[7518]: [+]process-running ok Mar 13 12:53:45.541474 master-0 kubenswrapper[7518]: healthz check failed Mar 13 12:53:45.541782 master-0 kubenswrapper[7518]: I0313 12:53:45.541490 7518 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-wtf6j" podUID="45925a5e-41ae-4c19-b586-3151c7677612" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 12:53:45.679509 master-0 kubenswrapper[7518]: I0313 12:53:45.678887 7518 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"48512e02022680c9d90092634f0fc146","Type":"ContainerStarted","Data":"52264b4378a4f3ba83334945450ce98ac9bedab1c6c9485cb885bc9488d52471"} Mar 13 12:53:45.679509 master-0 kubenswrapper[7518]: I0313 12:53:45.678935 7518 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"48512e02022680c9d90092634f0fc146","Type":"ContainerStarted","Data":"8ea4f4f1bc69f85c977580ddac21514a71e7c8a91de12b17cbd00d640490e4d3"} Mar 13 12:53:45.679509 master-0 kubenswrapper[7518]: I0313 12:53:45.678952 7518 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"48512e02022680c9d90092634f0fc146","Type":"ContainerStarted","Data":"b498f079133d2a2077770b172efd3507414d1897ced1774403305339c6337d85"} Mar 13 12:53:45.679509 master-0 kubenswrapper[7518]: I0313 12:53:45.678968 7518 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"48512e02022680c9d90092634f0fc146","Type":"ContainerStarted","Data":"c16c28a17a2035273ad3cbe98ed9a765284a80f578c8eb0748ccdf8c0dbcc66a"} Mar 13 12:53:46.015224 master-0 kubenswrapper[7518]: I0313 12:53:46.015165 7518 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-4-master-0" Mar 13 12:53:46.182473 master-0 kubenswrapper[7518]: I0313 12:53:46.182334 7518 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/185a10f7-2a4b-4171-b10d-4614cb8671bd-kubelet-dir\") pod \"185a10f7-2a4b-4171-b10d-4614cb8671bd\" (UID: \"185a10f7-2a4b-4171-b10d-4614cb8671bd\") " Mar 13 12:53:46.182473 master-0 kubenswrapper[7518]: I0313 12:53:46.182427 7518 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/185a10f7-2a4b-4171-b10d-4614cb8671bd-kube-api-access\") pod \"185a10f7-2a4b-4171-b10d-4614cb8671bd\" (UID: \"185a10f7-2a4b-4171-b10d-4614cb8671bd\") " Mar 13 12:53:46.182473 master-0 kubenswrapper[7518]: I0313 12:53:46.182428 7518 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/185a10f7-2a4b-4171-b10d-4614cb8671bd-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "185a10f7-2a4b-4171-b10d-4614cb8671bd" (UID: "185a10f7-2a4b-4171-b10d-4614cb8671bd"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 13 12:53:46.182473 master-0 kubenswrapper[7518]: I0313 12:53:46.182476 7518 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/185a10f7-2a4b-4171-b10d-4614cb8671bd-var-lock\") pod \"185a10f7-2a4b-4171-b10d-4614cb8671bd\" (UID: \"185a10f7-2a4b-4171-b10d-4614cb8671bd\") " Mar 13 12:53:46.182867 master-0 kubenswrapper[7518]: I0313 12:53:46.182560 7518 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/185a10f7-2a4b-4171-b10d-4614cb8671bd-var-lock" (OuterVolumeSpecName: "var-lock") pod "185a10f7-2a4b-4171-b10d-4614cb8671bd" (UID: "185a10f7-2a4b-4171-b10d-4614cb8671bd"). InnerVolumeSpecName "var-lock". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 13 12:53:46.182867 master-0 kubenswrapper[7518]: I0313 12:53:46.182720 7518 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/185a10f7-2a4b-4171-b10d-4614cb8671bd-var-lock\") on node \"master-0\" DevicePath \"\"" Mar 13 12:53:46.182867 master-0 kubenswrapper[7518]: I0313 12:53:46.182736 7518 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/185a10f7-2a4b-4171-b10d-4614cb8671bd-kubelet-dir\") on node \"master-0\" DevicePath \"\"" Mar 13 12:53:46.186209 master-0 kubenswrapper[7518]: I0313 12:53:46.184901 7518 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/185a10f7-2a4b-4171-b10d-4614cb8671bd-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "185a10f7-2a4b-4171-b10d-4614cb8671bd" (UID: "185a10f7-2a4b-4171-b10d-4614cb8671bd"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 12:53:46.224358 master-0 systemd[1]: Stopping Kubernetes Kubelet... Mar 13 12:53:46.225034 master-0 kubenswrapper[7518]: I0313 12:53:46.224237 7518 dynamic_cafile_content.go:175] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Mar 13 12:53:46.254130 master-0 systemd[1]: kubelet.service: Deactivated successfully. Mar 13 12:53:46.254487 master-0 systemd[1]: Stopped Kubernetes Kubelet. Mar 13 12:53:46.257795 master-0 systemd[1]: kubelet.service: Consumed 2min 25.255s CPU time. Mar 13 12:53:46.281117 master-0 systemd[1]: Starting Kubernetes Kubelet... Mar 13 12:53:46.448232 master-0 kubenswrapper[28149]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Mar 13 12:53:46.448232 master-0 kubenswrapper[28149]: Flag --minimum-container-ttl-duration has been deprecated, Use --eviction-hard or --eviction-soft instead. Will be removed in a future version. Mar 13 12:53:46.448232 master-0 kubenswrapper[28149]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Mar 13 12:53:46.448232 master-0 kubenswrapper[28149]: Flag --register-with-taints has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Mar 13 12:53:46.448232 master-0 kubenswrapper[28149]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Mar 13 12:53:46.448232 master-0 kubenswrapper[28149]: Flag --system-reserved has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Mar 13 12:53:46.448232 master-0 kubenswrapper[28149]: I0313 12:53:46.447070 28149 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Mar 13 12:53:46.452158 master-0 kubenswrapper[28149]: W0313 12:53:46.449855 28149 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Mar 13 12:53:46.452158 master-0 kubenswrapper[28149]: W0313 12:53:46.449876 28149 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Mar 13 12:53:46.452158 master-0 kubenswrapper[28149]: W0313 12:53:46.449884 28149 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. 
It will be removed in a future release. Mar 13 12:53:46.452158 master-0 kubenswrapper[28149]: W0313 12:53:46.449891 28149 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Mar 13 12:53:46.452158 master-0 kubenswrapper[28149]: W0313 12:53:46.449897 28149 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Mar 13 12:53:46.452158 master-0 kubenswrapper[28149]: W0313 12:53:46.449902 28149 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Mar 13 12:53:46.452158 master-0 kubenswrapper[28149]: W0313 12:53:46.449907 28149 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Mar 13 12:53:46.452158 master-0 kubenswrapper[28149]: W0313 12:53:46.449912 28149 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Mar 13 12:53:46.452158 master-0 kubenswrapper[28149]: W0313 12:53:46.449917 28149 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Mar 13 12:53:46.452158 master-0 kubenswrapper[28149]: W0313 12:53:46.449921 28149 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Mar 13 12:53:46.452158 master-0 kubenswrapper[28149]: W0313 12:53:46.449925 28149 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Mar 13 12:53:46.452158 master-0 kubenswrapper[28149]: W0313 12:53:46.449930 28149 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Mar 13 12:53:46.452158 master-0 kubenswrapper[28149]: W0313 12:53:46.449935 28149 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Mar 13 12:53:46.452158 master-0 kubenswrapper[28149]: W0313 12:53:46.449960 28149 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. 
Mar 13 12:53:46.452158 master-0 kubenswrapper[28149]: W0313 12:53:46.449966 28149 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Mar 13 12:53:46.452158 master-0 kubenswrapper[28149]: W0313 12:53:46.449972 28149 feature_gate.go:330] unrecognized feature gate: OVNObservability Mar 13 12:53:46.452158 master-0 kubenswrapper[28149]: W0313 12:53:46.449977 28149 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Mar 13 12:53:46.452158 master-0 kubenswrapper[28149]: W0313 12:53:46.449983 28149 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Mar 13 12:53:46.452158 master-0 kubenswrapper[28149]: W0313 12:53:46.449988 28149 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Mar 13 12:53:46.452962 master-0 kubenswrapper[28149]: W0313 12:53:46.449994 28149 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Mar 13 12:53:46.452962 master-0 kubenswrapper[28149]: W0313 12:53:46.449998 28149 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Mar 13 12:53:46.452962 master-0 kubenswrapper[28149]: W0313 12:53:46.450003 28149 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Mar 13 12:53:46.452962 master-0 kubenswrapper[28149]: W0313 12:53:46.450007 28149 feature_gate.go:330] unrecognized feature gate: SignatureStores Mar 13 12:53:46.452962 master-0 kubenswrapper[28149]: W0313 12:53:46.450011 28149 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Mar 13 12:53:46.452962 master-0 kubenswrapper[28149]: W0313 12:53:46.450017 28149 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Mar 13 12:53:46.452962 master-0 kubenswrapper[28149]: W0313 12:53:46.450022 28149 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Mar 13 12:53:46.452962 master-0 kubenswrapper[28149]: W0313 12:53:46.450026 28149 feature_gate.go:330] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings Mar 13 12:53:46.452962 master-0 
kubenswrapper[28149]: W0313 12:53:46.450031 28149 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Mar 13 12:53:46.452962 master-0 kubenswrapper[28149]: W0313 12:53:46.450037 28149 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Mar 13 12:53:46.452962 master-0 kubenswrapper[28149]: W0313 12:53:46.450041 28149 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Mar 13 12:53:46.452962 master-0 kubenswrapper[28149]: W0313 12:53:46.450046 28149 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Mar 13 12:53:46.452962 master-0 kubenswrapper[28149]: W0313 12:53:46.450065 28149 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Mar 13 12:53:46.452962 master-0 kubenswrapper[28149]: W0313 12:53:46.450070 28149 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Mar 13 12:53:46.452962 master-0 kubenswrapper[28149]: W0313 12:53:46.450076 28149 feature_gate.go:330] unrecognized feature gate: NewOLM Mar 13 12:53:46.452962 master-0 kubenswrapper[28149]: W0313 12:53:46.450080 28149 feature_gate.go:330] unrecognized feature gate: PlatformOperators Mar 13 12:53:46.452962 master-0 kubenswrapper[28149]: W0313 12:53:46.450085 28149 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Mar 13 12:53:46.452962 master-0 kubenswrapper[28149]: W0313 12:53:46.450090 28149 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Mar 13 12:53:46.452962 master-0 kubenswrapper[28149]: W0313 12:53:46.450095 28149 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Mar 13 12:53:46.453747 master-0 kubenswrapper[28149]: W0313 12:53:46.450099 28149 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Mar 13 12:53:46.453747 master-0 kubenswrapper[28149]: W0313 12:53:46.450104 28149 feature_gate.go:330] unrecognized feature gate: 
HardwareSpeed Mar 13 12:53:46.453747 master-0 kubenswrapper[28149]: W0313 12:53:46.450109 28149 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Mar 13 12:53:46.453747 master-0 kubenswrapper[28149]: W0313 12:53:46.450113 28149 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Mar 13 12:53:46.453747 master-0 kubenswrapper[28149]: W0313 12:53:46.450118 28149 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Mar 13 12:53:46.453747 master-0 kubenswrapper[28149]: W0313 12:53:46.450122 28149 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Mar 13 12:53:46.453747 master-0 kubenswrapper[28149]: W0313 12:53:46.450127 28149 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Mar 13 12:53:46.453747 master-0 kubenswrapper[28149]: W0313 12:53:46.450146 28149 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Mar 13 12:53:46.453747 master-0 kubenswrapper[28149]: W0313 12:53:46.450151 28149 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Mar 13 12:53:46.453747 master-0 kubenswrapper[28149]: W0313 12:53:46.450156 28149 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Mar 13 12:53:46.453747 master-0 kubenswrapper[28149]: W0313 12:53:46.450160 28149 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Mar 13 12:53:46.453747 master-0 kubenswrapper[28149]: W0313 12:53:46.450165 28149 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Mar 13 12:53:46.453747 master-0 kubenswrapper[28149]: W0313 12:53:46.450170 28149 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Mar 13 12:53:46.453747 master-0 kubenswrapper[28149]: W0313 12:53:46.450175 28149 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Mar 13 12:53:46.453747 master-0 kubenswrapper[28149]: W0313 12:53:46.450179 28149 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Mar 13 12:53:46.453747 
master-0 kubenswrapper[28149]: W0313 12:53:46.450184 28149 feature_gate.go:330] unrecognized feature gate: Example Mar 13 12:53:46.453747 master-0 kubenswrapper[28149]: W0313 12:53:46.450188 28149 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Mar 13 12:53:46.453747 master-0 kubenswrapper[28149]: W0313 12:53:46.450193 28149 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Mar 13 12:53:46.453747 master-0 kubenswrapper[28149]: W0313 12:53:46.450197 28149 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Mar 13 12:53:46.453747 master-0 kubenswrapper[28149]: W0313 12:53:46.450203 28149 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Mar 13 12:53:46.454475 master-0 kubenswrapper[28149]: W0313 12:53:46.450209 28149 feature_gate.go:330] unrecognized feature gate: GatewayAPI Mar 13 12:53:46.454475 master-0 kubenswrapper[28149]: W0313 12:53:46.450213 28149 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Mar 13 12:53:46.454475 master-0 kubenswrapper[28149]: W0313 12:53:46.450218 28149 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Mar 13 12:53:46.454475 master-0 kubenswrapper[28149]: W0313 12:53:46.450223 28149 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Mar 13 12:53:46.454475 master-0 kubenswrapper[28149]: W0313 12:53:46.450227 28149 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Mar 13 12:53:46.454475 master-0 kubenswrapper[28149]: W0313 12:53:46.450232 28149 feature_gate.go:330] unrecognized feature gate: InsightsConfig Mar 13 12:53:46.454475 master-0 kubenswrapper[28149]: W0313 12:53:46.450237 28149 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Mar 13 12:53:46.454475 master-0 kubenswrapper[28149]: W0313 12:53:46.450242 28149 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Mar 13 12:53:46.454475 master-0 kubenswrapper[28149]: 
W0313 12:53:46.450247 28149 feature_gate.go:330] unrecognized feature gate: PinnedImages Mar 13 12:53:46.454475 master-0 kubenswrapper[28149]: W0313 12:53:46.450251 28149 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Mar 13 12:53:46.454475 master-0 kubenswrapper[28149]: W0313 12:53:46.450258 28149 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Mar 13 12:53:46.454475 master-0 kubenswrapper[28149]: W0313 12:53:46.450265 28149 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Mar 13 12:53:46.454475 master-0 kubenswrapper[28149]: W0313 12:53:46.450270 28149 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Mar 13 12:53:46.454475 master-0 kubenswrapper[28149]: W0313 12:53:46.450276 28149 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Mar 13 12:53:46.454475 master-0 kubenswrapper[28149]: I0313 12:53:46.450409 28149 flags.go:64] FLAG: --address="0.0.0.0" Mar 13 12:53:46.454475 master-0 kubenswrapper[28149]: I0313 12:53:46.450419 28149 flags.go:64] FLAG: --allowed-unsafe-sysctls="[]" Mar 13 12:53:46.454475 master-0 kubenswrapper[28149]: I0313 12:53:46.450429 28149 flags.go:64] FLAG: --anonymous-auth="true" Mar 13 12:53:46.454475 master-0 kubenswrapper[28149]: I0313 12:53:46.450436 28149 flags.go:64] FLAG: --application-metrics-count-limit="100" Mar 13 12:53:46.454475 master-0 kubenswrapper[28149]: I0313 12:53:46.450442 28149 flags.go:64] FLAG: --authentication-token-webhook="false" Mar 13 12:53:46.454475 master-0 kubenswrapper[28149]: I0313 12:53:46.450447 28149 flags.go:64] FLAG: --authentication-token-webhook-cache-ttl="2m0s" Mar 13 12:53:46.454475 master-0 kubenswrapper[28149]: I0313 12:53:46.450454 28149 flags.go:64] FLAG: --authorization-mode="AlwaysAllow" Mar 13 12:53:46.455259 master-0 kubenswrapper[28149]: I0313 12:53:46.450460 28149 flags.go:64] FLAG: 
--authorization-webhook-cache-authorized-ttl="5m0s" Mar 13 12:53:46.455259 master-0 kubenswrapper[28149]: I0313 12:53:46.450465 28149 flags.go:64] FLAG: --authorization-webhook-cache-unauthorized-ttl="30s" Mar 13 12:53:46.455259 master-0 kubenswrapper[28149]: I0313 12:53:46.450471 28149 flags.go:64] FLAG: --boot-id-file="/proc/sys/kernel/random/boot_id" Mar 13 12:53:46.455259 master-0 kubenswrapper[28149]: I0313 12:53:46.450477 28149 flags.go:64] FLAG: --bootstrap-kubeconfig="/etc/kubernetes/kubeconfig" Mar 13 12:53:46.455259 master-0 kubenswrapper[28149]: I0313 12:53:46.450482 28149 flags.go:64] FLAG: --cert-dir="/var/lib/kubelet/pki" Mar 13 12:53:46.455259 master-0 kubenswrapper[28149]: I0313 12:53:46.450487 28149 flags.go:64] FLAG: --cgroup-driver="cgroupfs" Mar 13 12:53:46.455259 master-0 kubenswrapper[28149]: I0313 12:53:46.450492 28149 flags.go:64] FLAG: --cgroup-root="" Mar 13 12:53:46.455259 master-0 kubenswrapper[28149]: I0313 12:53:46.450497 28149 flags.go:64] FLAG: --cgroups-per-qos="true" Mar 13 12:53:46.455259 master-0 kubenswrapper[28149]: I0313 12:53:46.450509 28149 flags.go:64] FLAG: --client-ca-file="" Mar 13 12:53:46.455259 master-0 kubenswrapper[28149]: I0313 12:53:46.450514 28149 flags.go:64] FLAG: --cloud-config="" Mar 13 12:53:46.455259 master-0 kubenswrapper[28149]: I0313 12:53:46.450518 28149 flags.go:64] FLAG: --cloud-provider="" Mar 13 12:53:46.455259 master-0 kubenswrapper[28149]: I0313 12:53:46.450523 28149 flags.go:64] FLAG: --cluster-dns="[]" Mar 13 12:53:46.455259 master-0 kubenswrapper[28149]: I0313 12:53:46.450529 28149 flags.go:64] FLAG: --cluster-domain="" Mar 13 12:53:46.455259 master-0 kubenswrapper[28149]: I0313 12:53:46.450534 28149 flags.go:64] FLAG: --config="/etc/kubernetes/kubelet.conf" Mar 13 12:53:46.455259 master-0 kubenswrapper[28149]: I0313 12:53:46.450539 28149 flags.go:64] FLAG: --config-dir="" Mar 13 12:53:46.455259 master-0 kubenswrapper[28149]: I0313 12:53:46.450544 28149 flags.go:64] FLAG: 
--container-hints="/etc/cadvisor/container_hints.json" Mar 13 12:53:46.455259 master-0 kubenswrapper[28149]: I0313 12:53:46.450549 28149 flags.go:64] FLAG: --container-log-max-files="5" Mar 13 12:53:46.455259 master-0 kubenswrapper[28149]: I0313 12:53:46.450556 28149 flags.go:64] FLAG: --container-log-max-size="10Mi" Mar 13 12:53:46.455259 master-0 kubenswrapper[28149]: I0313 12:53:46.450561 28149 flags.go:64] FLAG: --container-runtime-endpoint="/var/run/crio/crio.sock" Mar 13 12:53:46.455259 master-0 kubenswrapper[28149]: I0313 12:53:46.450566 28149 flags.go:64] FLAG: --containerd="/run/containerd/containerd.sock" Mar 13 12:53:46.455259 master-0 kubenswrapper[28149]: I0313 12:53:46.450572 28149 flags.go:64] FLAG: --containerd-namespace="k8s.io" Mar 13 12:53:46.455259 master-0 kubenswrapper[28149]: I0313 12:53:46.450576 28149 flags.go:64] FLAG: --contention-profiling="false" Mar 13 12:53:46.455259 master-0 kubenswrapper[28149]: I0313 12:53:46.450582 28149 flags.go:64] FLAG: --cpu-cfs-quota="true" Mar 13 12:53:46.455259 master-0 kubenswrapper[28149]: I0313 12:53:46.450587 28149 flags.go:64] FLAG: --cpu-cfs-quota-period="100ms" Mar 13 12:53:46.455259 master-0 kubenswrapper[28149]: I0313 12:53:46.450592 28149 flags.go:64] FLAG: --cpu-manager-policy="none" Mar 13 12:53:46.456130 master-0 kubenswrapper[28149]: I0313 12:53:46.450597 28149 flags.go:64] FLAG: --cpu-manager-policy-options="" Mar 13 12:53:46.456130 master-0 kubenswrapper[28149]: I0313 12:53:46.450604 28149 flags.go:64] FLAG: --cpu-manager-reconcile-period="10s" Mar 13 12:53:46.456130 master-0 kubenswrapper[28149]: I0313 12:53:46.450609 28149 flags.go:64] FLAG: --enable-controller-attach-detach="true" Mar 13 12:53:46.456130 master-0 kubenswrapper[28149]: I0313 12:53:46.450614 28149 flags.go:64] FLAG: --enable-debugging-handlers="true" Mar 13 12:53:46.456130 master-0 kubenswrapper[28149]: I0313 12:53:46.450619 28149 flags.go:64] FLAG: --enable-load-reader="false" Mar 13 12:53:46.456130 master-0 
kubenswrapper[28149]: I0313 12:53:46.450625 28149 flags.go:64] FLAG: --enable-server="true" Mar 13 12:53:46.456130 master-0 kubenswrapper[28149]: I0313 12:53:46.450630 28149 flags.go:64] FLAG: --enforce-node-allocatable="[pods]" Mar 13 12:53:46.456130 master-0 kubenswrapper[28149]: I0313 12:53:46.450638 28149 flags.go:64] FLAG: --event-burst="100" Mar 13 12:53:46.456130 master-0 kubenswrapper[28149]: I0313 12:53:46.450644 28149 flags.go:64] FLAG: --event-qps="50" Mar 13 12:53:46.456130 master-0 kubenswrapper[28149]: I0313 12:53:46.450650 28149 flags.go:64] FLAG: --event-storage-age-limit="default=0" Mar 13 12:53:46.456130 master-0 kubenswrapper[28149]: I0313 12:53:46.450655 28149 flags.go:64] FLAG: --event-storage-event-limit="default=0" Mar 13 12:53:46.456130 master-0 kubenswrapper[28149]: I0313 12:53:46.450660 28149 flags.go:64] FLAG: --eviction-hard="" Mar 13 12:53:46.456130 master-0 kubenswrapper[28149]: I0313 12:53:46.450668 28149 flags.go:64] FLAG: --eviction-max-pod-grace-period="0" Mar 13 12:53:46.456130 master-0 kubenswrapper[28149]: I0313 12:53:46.450673 28149 flags.go:64] FLAG: --eviction-minimum-reclaim="" Mar 13 12:53:46.456130 master-0 kubenswrapper[28149]: I0313 12:53:46.450678 28149 flags.go:64] FLAG: --eviction-pressure-transition-period="5m0s" Mar 13 12:53:46.456130 master-0 kubenswrapper[28149]: I0313 12:53:46.450683 28149 flags.go:64] FLAG: --eviction-soft="" Mar 13 12:53:46.456130 master-0 kubenswrapper[28149]: I0313 12:53:46.450688 28149 flags.go:64] FLAG: --eviction-soft-grace-period="" Mar 13 12:53:46.456130 master-0 kubenswrapper[28149]: I0313 12:53:46.450694 28149 flags.go:64] FLAG: --exit-on-lock-contention="false" Mar 13 12:53:46.456130 master-0 kubenswrapper[28149]: I0313 12:53:46.450699 28149 flags.go:64] FLAG: --experimental-allocatable-ignore-eviction="false" Mar 13 12:53:46.456130 master-0 kubenswrapper[28149]: I0313 12:53:46.450705 28149 flags.go:64] FLAG: --experimental-mounter-path="" Mar 13 12:53:46.456130 master-0 
kubenswrapper[28149]: I0313 12:53:46.450710 28149 flags.go:64] FLAG: --fail-cgroupv1="false" Mar 13 12:53:46.456130 master-0 kubenswrapper[28149]: I0313 12:53:46.450717 28149 flags.go:64] FLAG: --fail-swap-on="true" Mar 13 12:53:46.456130 master-0 kubenswrapper[28149]: I0313 12:53:46.450722 28149 flags.go:64] FLAG: --feature-gates="" Mar 13 12:53:46.456130 master-0 kubenswrapper[28149]: I0313 12:53:46.450728 28149 flags.go:64] FLAG: --file-check-frequency="20s" Mar 13 12:53:46.456130 master-0 kubenswrapper[28149]: I0313 12:53:46.450734 28149 flags.go:64] FLAG: --global-housekeeping-interval="1m0s" Mar 13 12:53:46.457098 master-0 kubenswrapper[28149]: I0313 12:53:46.450739 28149 flags.go:64] FLAG: --hairpin-mode="promiscuous-bridge" Mar 13 12:53:46.457098 master-0 kubenswrapper[28149]: I0313 12:53:46.450748 28149 flags.go:64] FLAG: --healthz-bind-address="127.0.0.1" Mar 13 12:53:46.457098 master-0 kubenswrapper[28149]: I0313 12:53:46.450754 28149 flags.go:64] FLAG: --healthz-port="10248" Mar 13 12:53:46.457098 master-0 kubenswrapper[28149]: I0313 12:53:46.450759 28149 flags.go:64] FLAG: --help="false" Mar 13 12:53:46.457098 master-0 kubenswrapper[28149]: I0313 12:53:46.450764 28149 flags.go:64] FLAG: --hostname-override="" Mar 13 12:53:46.457098 master-0 kubenswrapper[28149]: I0313 12:53:46.450770 28149 flags.go:64] FLAG: --housekeeping-interval="10s" Mar 13 12:53:46.457098 master-0 kubenswrapper[28149]: I0313 12:53:46.450775 28149 flags.go:64] FLAG: --http-check-frequency="20s" Mar 13 12:53:46.457098 master-0 kubenswrapper[28149]: I0313 12:53:46.450780 28149 flags.go:64] FLAG: --image-credential-provider-bin-dir="" Mar 13 12:53:46.457098 master-0 kubenswrapper[28149]: I0313 12:53:46.450785 28149 flags.go:64] FLAG: --image-credential-provider-config="" Mar 13 12:53:46.457098 master-0 kubenswrapper[28149]: I0313 12:53:46.450790 28149 flags.go:64] FLAG: --image-gc-high-threshold="85" Mar 13 12:53:46.457098 master-0 kubenswrapper[28149]: I0313 12:53:46.450795 28149 
flags.go:64] FLAG: --image-gc-low-threshold="80" Mar 13 12:53:46.457098 master-0 kubenswrapper[28149]: I0313 12:53:46.450800 28149 flags.go:64] FLAG: --image-service-endpoint="" Mar 13 12:53:46.457098 master-0 kubenswrapper[28149]: I0313 12:53:46.450805 28149 flags.go:64] FLAG: --kernel-memcg-notification="false" Mar 13 12:53:46.457098 master-0 kubenswrapper[28149]: I0313 12:53:46.450810 28149 flags.go:64] FLAG: --kube-api-burst="100" Mar 13 12:53:46.457098 master-0 kubenswrapper[28149]: I0313 12:53:46.450815 28149 flags.go:64] FLAG: --kube-api-content-type="application/vnd.kubernetes.protobuf" Mar 13 12:53:46.457098 master-0 kubenswrapper[28149]: I0313 12:53:46.450821 28149 flags.go:64] FLAG: --kube-api-qps="50" Mar 13 12:53:46.457098 master-0 kubenswrapper[28149]: I0313 12:53:46.450826 28149 flags.go:64] FLAG: --kube-reserved="" Mar 13 12:53:46.457098 master-0 kubenswrapper[28149]: I0313 12:53:46.450831 28149 flags.go:64] FLAG: --kube-reserved-cgroup="" Mar 13 12:53:46.457098 master-0 kubenswrapper[28149]: I0313 12:53:46.450836 28149 flags.go:64] FLAG: --kubeconfig="/var/lib/kubelet/kubeconfig" Mar 13 12:53:46.457098 master-0 kubenswrapper[28149]: I0313 12:53:46.450842 28149 flags.go:64] FLAG: --kubelet-cgroups="" Mar 13 12:53:46.457098 master-0 kubenswrapper[28149]: I0313 12:53:46.450847 28149 flags.go:64] FLAG: --local-storage-capacity-isolation="true" Mar 13 12:53:46.457098 master-0 kubenswrapper[28149]: I0313 12:53:46.450852 28149 flags.go:64] FLAG: --lock-file="" Mar 13 12:53:46.457098 master-0 kubenswrapper[28149]: I0313 12:53:46.450857 28149 flags.go:64] FLAG: --log-cadvisor-usage="false" Mar 13 12:53:46.457098 master-0 kubenswrapper[28149]: I0313 12:53:46.450863 28149 flags.go:64] FLAG: --log-flush-frequency="5s" Mar 13 12:53:46.457098 master-0 kubenswrapper[28149]: I0313 12:53:46.450868 28149 flags.go:64] FLAG: --log-json-info-buffer-size="0" Mar 13 12:53:46.464307 master-0 kubenswrapper[28149]: I0313 12:53:46.450877 28149 flags.go:64] FLAG: 
--log-json-split-stream="false" Mar 13 12:53:46.464307 master-0 kubenswrapper[28149]: I0313 12:53:46.450882 28149 flags.go:64] FLAG: --log-text-info-buffer-size="0" Mar 13 12:53:46.464307 master-0 kubenswrapper[28149]: I0313 12:53:46.450887 28149 flags.go:64] FLAG: --log-text-split-stream="false" Mar 13 12:53:46.464307 master-0 kubenswrapper[28149]: I0313 12:53:46.450893 28149 flags.go:64] FLAG: --logging-format="text" Mar 13 12:53:46.464307 master-0 kubenswrapper[28149]: I0313 12:53:46.450898 28149 flags.go:64] FLAG: --machine-id-file="/etc/machine-id,/var/lib/dbus/machine-id" Mar 13 12:53:46.464307 master-0 kubenswrapper[28149]: I0313 12:53:46.450905 28149 flags.go:64] FLAG: --make-iptables-util-chains="true" Mar 13 12:53:46.464307 master-0 kubenswrapper[28149]: I0313 12:53:46.450910 28149 flags.go:64] FLAG: --manifest-url="" Mar 13 12:53:46.464307 master-0 kubenswrapper[28149]: I0313 12:53:46.450915 28149 flags.go:64] FLAG: --manifest-url-header="" Mar 13 12:53:46.464307 master-0 kubenswrapper[28149]: I0313 12:53:46.450924 28149 flags.go:64] FLAG: --max-housekeeping-interval="15s" Mar 13 12:53:46.464307 master-0 kubenswrapper[28149]: I0313 12:53:46.450929 28149 flags.go:64] FLAG: --max-open-files="1000000" Mar 13 12:53:46.464307 master-0 kubenswrapper[28149]: I0313 12:53:46.450935 28149 flags.go:64] FLAG: --max-pods="110" Mar 13 12:53:46.464307 master-0 kubenswrapper[28149]: I0313 12:53:46.450940 28149 flags.go:64] FLAG: --maximum-dead-containers="-1" Mar 13 12:53:46.464307 master-0 kubenswrapper[28149]: I0313 12:53:46.450945 28149 flags.go:64] FLAG: --maximum-dead-containers-per-container="1" Mar 13 12:53:46.464307 master-0 kubenswrapper[28149]: I0313 12:53:46.450950 28149 flags.go:64] FLAG: --memory-manager-policy="None" Mar 13 12:53:46.464307 master-0 kubenswrapper[28149]: I0313 12:53:46.450955 28149 flags.go:64] FLAG: --minimum-container-ttl-duration="6m0s" Mar 13 12:53:46.464307 master-0 kubenswrapper[28149]: I0313 12:53:46.450961 28149 flags.go:64] FLAG: 
--minimum-image-ttl-duration="2m0s" Mar 13 12:53:46.464307 master-0 kubenswrapper[28149]: I0313 12:53:46.450966 28149 flags.go:64] FLAG: --node-ip="192.168.32.10" Mar 13 12:53:46.464307 master-0 kubenswrapper[28149]: I0313 12:53:46.450971 28149 flags.go:64] FLAG: --node-labels="node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,node.openshift.io/os_id=rhcos" Mar 13 12:53:46.464307 master-0 kubenswrapper[28149]: I0313 12:53:46.450983 28149 flags.go:64] FLAG: --node-status-max-images="50" Mar 13 12:53:46.464307 master-0 kubenswrapper[28149]: I0313 12:53:46.450988 28149 flags.go:64] FLAG: --node-status-update-frequency="10s" Mar 13 12:53:46.464307 master-0 kubenswrapper[28149]: I0313 12:53:46.450994 28149 flags.go:64] FLAG: --oom-score-adj="-999" Mar 13 12:53:46.464307 master-0 kubenswrapper[28149]: I0313 12:53:46.450999 28149 flags.go:64] FLAG: --pod-cidr="" Mar 13 12:53:46.464307 master-0 kubenswrapper[28149]: I0313 12:53:46.451004 28149 flags.go:64] FLAG: --pod-infra-container-image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1d605384f31a8085f78a96145c2c3dc51afe22721144196140a2699b7c07ebe3" Mar 13 12:53:46.470472 master-0 kubenswrapper[28149]: I0313 12:53:46.451013 28149 flags.go:64] FLAG: --pod-manifest-path="" Mar 13 12:53:46.470472 master-0 kubenswrapper[28149]: I0313 12:53:46.451018 28149 flags.go:64] FLAG: --pod-max-pids="-1" Mar 13 12:53:46.470472 master-0 kubenswrapper[28149]: I0313 12:53:46.451024 28149 flags.go:64] FLAG: --pods-per-core="0" Mar 13 12:53:46.470472 master-0 kubenswrapper[28149]: I0313 12:53:46.451029 28149 flags.go:64] FLAG: --port="10250" Mar 13 12:53:46.470472 master-0 kubenswrapper[28149]: I0313 12:53:46.451034 28149 flags.go:64] FLAG: --protect-kernel-defaults="false" Mar 13 12:53:46.470472 master-0 kubenswrapper[28149]: I0313 12:53:46.451039 28149 flags.go:64] FLAG: --provider-id="" Mar 13 12:53:46.470472 master-0 kubenswrapper[28149]: I0313 12:53:46.451045 28149 flags.go:64] FLAG: --qos-reserved="" Mar 13 
12:53:46.470472 master-0 kubenswrapper[28149]: I0313 12:53:46.451050 28149 flags.go:64] FLAG: --read-only-port="10255" Mar 13 12:53:46.470472 master-0 kubenswrapper[28149]: I0313 12:53:46.451055 28149 flags.go:64] FLAG: --register-node="true" Mar 13 12:53:46.470472 master-0 kubenswrapper[28149]: I0313 12:53:46.451061 28149 flags.go:64] FLAG: --register-schedulable="true" Mar 13 12:53:46.470472 master-0 kubenswrapper[28149]: I0313 12:53:46.451066 28149 flags.go:64] FLAG: --register-with-taints="node-role.kubernetes.io/master=:NoSchedule" Mar 13 12:53:46.470472 master-0 kubenswrapper[28149]: I0313 12:53:46.451075 28149 flags.go:64] FLAG: --registry-burst="10" Mar 13 12:53:46.470472 master-0 kubenswrapper[28149]: I0313 12:53:46.451080 28149 flags.go:64] FLAG: --registry-qps="5" Mar 13 12:53:46.470472 master-0 kubenswrapper[28149]: I0313 12:53:46.451086 28149 flags.go:64] FLAG: --reserved-cpus="" Mar 13 12:53:46.470472 master-0 kubenswrapper[28149]: I0313 12:53:46.451091 28149 flags.go:64] FLAG: --reserved-memory="" Mar 13 12:53:46.470472 master-0 kubenswrapper[28149]: I0313 12:53:46.451098 28149 flags.go:64] FLAG: --resolv-conf="/etc/resolv.conf" Mar 13 12:53:46.470472 master-0 kubenswrapper[28149]: I0313 12:53:46.451103 28149 flags.go:64] FLAG: --root-dir="/var/lib/kubelet" Mar 13 12:53:46.470472 master-0 kubenswrapper[28149]: I0313 12:53:46.451108 28149 flags.go:64] FLAG: --rotate-certificates="false" Mar 13 12:53:46.470472 master-0 kubenswrapper[28149]: I0313 12:53:46.451118 28149 flags.go:64] FLAG: --rotate-server-certificates="false" Mar 13 12:53:46.470472 master-0 kubenswrapper[28149]: I0313 12:53:46.451123 28149 flags.go:64] FLAG: --runonce="false" Mar 13 12:53:46.470472 master-0 kubenswrapper[28149]: I0313 12:53:46.451128 28149 flags.go:64] FLAG: --runtime-cgroups="/system.slice/crio.service" Mar 13 12:53:46.470472 master-0 kubenswrapper[28149]: I0313 12:53:46.451151 28149 flags.go:64] FLAG: --runtime-request-timeout="2m0s" Mar 13 12:53:46.470472 master-0 
kubenswrapper[28149]: I0313 12:53:46.451157 28149 flags.go:64] FLAG: --seccomp-default="false" Mar 13 12:53:46.470472 master-0 kubenswrapper[28149]: I0313 12:53:46.451162 28149 flags.go:64] FLAG: --serialize-image-pulls="true" Mar 13 12:53:46.470472 master-0 kubenswrapper[28149]: I0313 12:53:46.451167 28149 flags.go:64] FLAG: --storage-driver-buffer-duration="1m0s" Mar 13 12:53:46.470472 master-0 kubenswrapper[28149]: I0313 12:53:46.451173 28149 flags.go:64] FLAG: --storage-driver-db="cadvisor" Mar 13 12:53:46.482174 master-0 kubenswrapper[28149]: I0313 12:53:46.451178 28149 flags.go:64] FLAG: --storage-driver-host="localhost:8086" Mar 13 12:53:46.482174 master-0 kubenswrapper[28149]: I0313 12:53:46.451183 28149 flags.go:64] FLAG: --storage-driver-password="root" Mar 13 12:53:46.482174 master-0 kubenswrapper[28149]: I0313 12:53:46.451189 28149 flags.go:64] FLAG: --storage-driver-secure="false" Mar 13 12:53:46.482174 master-0 kubenswrapper[28149]: I0313 12:53:46.451193 28149 flags.go:64] FLAG: --storage-driver-table="stats" Mar 13 12:53:46.482174 master-0 kubenswrapper[28149]: I0313 12:53:46.451198 28149 flags.go:64] FLAG: --storage-driver-user="root" Mar 13 12:53:46.482174 master-0 kubenswrapper[28149]: I0313 12:53:46.451203 28149 flags.go:64] FLAG: --streaming-connection-idle-timeout="4h0m0s" Mar 13 12:53:46.482174 master-0 kubenswrapper[28149]: I0313 12:53:46.451208 28149 flags.go:64] FLAG: --sync-frequency="1m0s" Mar 13 12:53:46.482174 master-0 kubenswrapper[28149]: I0313 12:53:46.451213 28149 flags.go:64] FLAG: --system-cgroups="" Mar 13 12:53:46.482174 master-0 kubenswrapper[28149]: I0313 12:53:46.451218 28149 flags.go:64] FLAG: --system-reserved="cpu=500m,ephemeral-storage=1Gi,memory=1Gi" Mar 13 12:53:46.482174 master-0 kubenswrapper[28149]: I0313 12:53:46.451226 28149 flags.go:64] FLAG: --system-reserved-cgroup="" Mar 13 12:53:46.482174 master-0 kubenswrapper[28149]: I0313 12:53:46.451231 28149 flags.go:64] FLAG: --tls-cert-file="" Mar 13 12:53:46.482174 
master-0 kubenswrapper[28149]: I0313 12:53:46.451236 28149 flags.go:64] FLAG: --tls-cipher-suites="[]" Mar 13 12:53:46.482174 master-0 kubenswrapper[28149]: I0313 12:53:46.451242 28149 flags.go:64] FLAG: --tls-min-version="" Mar 13 12:53:46.482174 master-0 kubenswrapper[28149]: I0313 12:53:46.451247 28149 flags.go:64] FLAG: --tls-private-key-file="" Mar 13 12:53:46.482174 master-0 kubenswrapper[28149]: I0313 12:53:46.451252 28149 flags.go:64] FLAG: --topology-manager-policy="none" Mar 13 12:53:46.482174 master-0 kubenswrapper[28149]: I0313 12:53:46.451258 28149 flags.go:64] FLAG: --topology-manager-policy-options="" Mar 13 12:53:46.482174 master-0 kubenswrapper[28149]: I0313 12:53:46.451263 28149 flags.go:64] FLAG: --topology-manager-scope="container" Mar 13 12:53:46.482174 master-0 kubenswrapper[28149]: I0313 12:53:46.451268 28149 flags.go:64] FLAG: --v="2" Mar 13 12:53:46.482174 master-0 kubenswrapper[28149]: I0313 12:53:46.451275 28149 flags.go:64] FLAG: --version="false" Mar 13 12:53:46.482174 master-0 kubenswrapper[28149]: I0313 12:53:46.451282 28149 flags.go:64] FLAG: --vmodule="" Mar 13 12:53:46.482174 master-0 kubenswrapper[28149]: I0313 12:53:46.451288 28149 flags.go:64] FLAG: --volume-plugin-dir="/etc/kubernetes/kubelet-plugins/volume/exec" Mar 13 12:53:46.482174 master-0 kubenswrapper[28149]: I0313 12:53:46.451293 28149 flags.go:64] FLAG: --volume-stats-agg-period="1m0s" Mar 13 12:53:46.482174 master-0 kubenswrapper[28149]: W0313 12:53:46.451418 28149 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Mar 13 12:53:46.482174 master-0 kubenswrapper[28149]: W0313 12:53:46.451425 28149 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Mar 13 12:53:46.483049 master-0 kubenswrapper[28149]: W0313 12:53:46.451434 28149 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Mar 13 12:53:46.483049 master-0 kubenswrapper[28149]: W0313 12:53:46.451439 28149 feature_gate.go:330] unrecognized feature gate: 
PlatformOperators Mar 13 12:53:46.483049 master-0 kubenswrapper[28149]: W0313 12:53:46.451443 28149 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Mar 13 12:53:46.483049 master-0 kubenswrapper[28149]: W0313 12:53:46.451448 28149 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Mar 13 12:53:46.483049 master-0 kubenswrapper[28149]: W0313 12:53:46.451454 28149 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Mar 13 12:53:46.483049 master-0 kubenswrapper[28149]: W0313 12:53:46.451459 28149 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Mar 13 12:53:46.483049 master-0 kubenswrapper[28149]: W0313 12:53:46.451465 28149 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Mar 13 12:53:46.483049 master-0 kubenswrapper[28149]: W0313 12:53:46.451472 28149 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Mar 13 12:53:46.483049 master-0 kubenswrapper[28149]: W0313 12:53:46.451478 28149 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Mar 13 12:53:46.483049 master-0 kubenswrapper[28149]: W0313 12:53:46.451484 28149 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Mar 13 12:53:46.483049 master-0 kubenswrapper[28149]: W0313 12:53:46.451489 28149 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Mar 13 12:53:46.483049 master-0 kubenswrapper[28149]: W0313 12:53:46.451494 28149 feature_gate.go:330] unrecognized feature gate: GatewayAPI Mar 13 12:53:46.483049 master-0 kubenswrapper[28149]: W0313 12:53:46.451498 28149 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Mar 13 12:53:46.483049 master-0 kubenswrapper[28149]: W0313 12:53:46.451505 28149 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. 
Mar 13 12:53:46.483049 master-0 kubenswrapper[28149]: W0313 12:53:46.451510 28149 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Mar 13 12:53:46.483049 master-0 kubenswrapper[28149]: W0313 12:53:46.451515 28149 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Mar 13 12:53:46.483049 master-0 kubenswrapper[28149]: W0313 12:53:46.451521 28149 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Mar 13 12:53:46.483049 master-0 kubenswrapper[28149]: W0313 12:53:46.451526 28149 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Mar 13 12:53:46.483049 master-0 kubenswrapper[28149]: W0313 12:53:46.451531 28149 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Mar 13 12:53:46.483769 master-0 kubenswrapper[28149]: W0313 12:53:46.451536 28149 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Mar 13 12:53:46.483769 master-0 kubenswrapper[28149]: W0313 12:53:46.451541 28149 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Mar 13 12:53:46.483769 master-0 kubenswrapper[28149]: W0313 12:53:46.451546 28149 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Mar 13 12:53:46.483769 master-0 kubenswrapper[28149]: W0313 12:53:46.451551 28149 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Mar 13 12:53:46.483769 master-0 kubenswrapper[28149]: W0313 12:53:46.451559 28149 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Mar 13 12:53:46.483769 master-0 kubenswrapper[28149]: W0313 12:53:46.451564 28149 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Mar 13 12:53:46.483769 master-0 kubenswrapper[28149]: W0313 12:53:46.451569 28149 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Mar 13 12:53:46.483769 master-0 kubenswrapper[28149]: W0313 12:53:46.451574 28149 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Mar 13 
12:53:46.483769 master-0 kubenswrapper[28149]: W0313 12:53:46.451578 28149 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Mar 13 12:53:46.483769 master-0 kubenswrapper[28149]: W0313 12:53:46.451583 28149 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Mar 13 12:53:46.483769 master-0 kubenswrapper[28149]: W0313 12:53:46.451588 28149 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Mar 13 12:53:46.483769 master-0 kubenswrapper[28149]: W0313 12:53:46.451593 28149 feature_gate.go:330] unrecognized feature gate: Example Mar 13 12:53:46.483769 master-0 kubenswrapper[28149]: W0313 12:53:46.451597 28149 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Mar 13 12:53:46.483769 master-0 kubenswrapper[28149]: W0313 12:53:46.451604 28149 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Mar 13 12:53:46.483769 master-0 kubenswrapper[28149]: W0313 12:53:46.451610 28149 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Mar 13 12:53:46.483769 master-0 kubenswrapper[28149]: W0313 12:53:46.451614 28149 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Mar 13 12:53:46.483769 master-0 kubenswrapper[28149]: W0313 12:53:46.451619 28149 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Mar 13 12:53:46.483769 master-0 kubenswrapper[28149]: W0313 12:53:46.451624 28149 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Mar 13 12:53:46.483769 master-0 kubenswrapper[28149]: W0313 12:53:46.451628 28149 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Mar 13 12:53:46.483769 master-0 kubenswrapper[28149]: W0313 12:53:46.451633 28149 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Mar 13 12:53:46.484532 master-0 kubenswrapper[28149]: W0313 12:53:46.451637 28149 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Mar 13 
12:53:46.484532 master-0 kubenswrapper[28149]: W0313 12:53:46.451643 28149 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Mar 13 12:53:46.484532 master-0 kubenswrapper[28149]: W0313 12:53:46.451648 28149 feature_gate.go:330] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings Mar 13 12:53:46.484532 master-0 kubenswrapper[28149]: W0313 12:53:46.451652 28149 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Mar 13 12:53:46.484532 master-0 kubenswrapper[28149]: W0313 12:53:46.451656 28149 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Mar 13 12:53:46.484532 master-0 kubenswrapper[28149]: W0313 12:53:46.451661 28149 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Mar 13 12:53:46.484532 master-0 kubenswrapper[28149]: W0313 12:53:46.451665 28149 feature_gate.go:330] unrecognized feature gate: NewOLM Mar 13 12:53:46.484532 master-0 kubenswrapper[28149]: W0313 12:53:46.451711 28149 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Mar 13 12:53:46.484532 master-0 kubenswrapper[28149]: W0313 12:53:46.451719 28149 feature_gate.go:330] unrecognized feature gate: PinnedImages Mar 13 12:53:46.484532 master-0 kubenswrapper[28149]: W0313 12:53:46.451724 28149 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Mar 13 12:53:46.484532 master-0 kubenswrapper[28149]: W0313 12:53:46.451729 28149 feature_gate.go:330] unrecognized feature gate: OVNObservability Mar 13 12:53:46.484532 master-0 kubenswrapper[28149]: W0313 12:53:46.451734 28149 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Mar 13 12:53:46.484532 master-0 kubenswrapper[28149]: W0313 12:53:46.451740 28149 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Mar 13 12:53:46.484532 master-0 kubenswrapper[28149]: W0313 12:53:46.451746 28149 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Mar 13 
12:53:46.484532 master-0 kubenswrapper[28149]: W0313 12:53:46.451751 28149 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Mar 13 12:53:46.484532 master-0 kubenswrapper[28149]: W0313 12:53:46.451756 28149 feature_gate.go:330] unrecognized feature gate: SignatureStores Mar 13 12:53:46.484532 master-0 kubenswrapper[28149]: W0313 12:53:46.451763 28149 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Mar 13 12:53:46.484532 master-0 kubenswrapper[28149]: W0313 12:53:46.451768 28149 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Mar 13 12:53:46.484532 master-0 kubenswrapper[28149]: W0313 12:53:46.451773 28149 feature_gate.go:330] unrecognized feature gate: InsightsConfig Mar 13 12:53:46.484532 master-0 kubenswrapper[28149]: W0313 12:53:46.451778 28149 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Mar 13 12:53:46.486261 master-0 kubenswrapper[28149]: W0313 12:53:46.451798 28149 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Mar 13 12:53:46.486261 master-0 kubenswrapper[28149]: W0313 12:53:46.451804 28149 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. 
Mar 13 12:53:46.486261 master-0 kubenswrapper[28149]: W0313 12:53:46.451811 28149 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Mar 13 12:53:46.486261 master-0 kubenswrapper[28149]: W0313 12:53:46.451817 28149 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Mar 13 12:53:46.486261 master-0 kubenswrapper[28149]: W0313 12:53:46.451822 28149 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Mar 13 12:53:46.486261 master-0 kubenswrapper[28149]: W0313 12:53:46.451830 28149 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Mar 13 12:53:46.486261 master-0 kubenswrapper[28149]: W0313 12:53:46.451835 28149 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Mar 13 12:53:46.486261 master-0 kubenswrapper[28149]: W0313 12:53:46.451840 28149 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Mar 13 12:53:46.486261 master-0 kubenswrapper[28149]: W0313 12:53:46.451845 28149 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Mar 13 12:53:46.486261 master-0 kubenswrapper[28149]: W0313 12:53:46.451851 28149 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Mar 13 12:53:46.486261 master-0 kubenswrapper[28149]: W0313 12:53:46.451856 28149 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Mar 13 12:53:46.486261 master-0 kubenswrapper[28149]: I0313 12:53:46.451875 28149 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false StreamingCollectionEncodingToJSON:true StreamingCollectionEncodingToProtobuf:true TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true 
VolumeAttributesClass:false]} Mar 13 12:53:46.486261 master-0 kubenswrapper[28149]: I0313 12:53:46.466885 28149 server.go:491] "Kubelet version" kubeletVersion="v1.31.14" Mar 13 12:53:46.486261 master-0 kubenswrapper[28149]: I0313 12:53:46.466920 28149 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Mar 13 12:53:46.486261 master-0 kubenswrapper[28149]: W0313 12:53:46.467006 28149 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Mar 13 12:53:46.494840 master-0 kubenswrapper[28149]: W0313 12:53:46.467014 28149 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Mar 13 12:53:46.494840 master-0 kubenswrapper[28149]: W0313 12:53:46.467020 28149 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Mar 13 12:53:46.494840 master-0 kubenswrapper[28149]: W0313 12:53:46.467026 28149 feature_gate.go:330] unrecognized feature gate: NewOLM Mar 13 12:53:46.494840 master-0 kubenswrapper[28149]: W0313 12:53:46.467031 28149 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Mar 13 12:53:46.494840 master-0 kubenswrapper[28149]: W0313 12:53:46.467036 28149 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Mar 13 12:53:46.494840 master-0 kubenswrapper[28149]: W0313 12:53:46.467041 28149 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Mar 13 12:53:46.494840 master-0 kubenswrapper[28149]: W0313 12:53:46.467045 28149 feature_gate.go:330] unrecognized feature gate: Example Mar 13 12:53:46.494840 master-0 kubenswrapper[28149]: W0313 12:53:46.467050 28149 feature_gate.go:330] unrecognized feature gate: OVNObservability Mar 13 12:53:46.494840 master-0 kubenswrapper[28149]: W0313 12:53:46.467057 28149 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. 
Mar 13 12:53:46.494840 master-0 kubenswrapper[28149]: W0313 12:53:46.467067 28149 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Mar 13 12:53:46.494840 master-0 kubenswrapper[28149]: W0313 12:53:46.467072 28149 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Mar 13 12:53:46.494840 master-0 kubenswrapper[28149]: W0313 12:53:46.467077 28149 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Mar 13 12:53:46.494840 master-0 kubenswrapper[28149]: W0313 12:53:46.467082 28149 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Mar 13 12:53:46.494840 master-0 kubenswrapper[28149]: W0313 12:53:46.467087 28149 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Mar 13 12:53:46.494840 master-0 kubenswrapper[28149]: W0313 12:53:46.467092 28149 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Mar 13 12:53:46.494840 master-0 kubenswrapper[28149]: W0313 12:53:46.467097 28149 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Mar 13 12:53:46.494840 master-0 kubenswrapper[28149]: W0313 12:53:46.467103 28149 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
Mar 13 12:53:46.494840 master-0 kubenswrapper[28149]: W0313 12:53:46.467109 28149 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Mar 13 12:53:46.494840 master-0 kubenswrapper[28149]: W0313 12:53:46.467113 28149 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Mar 13 12:53:46.494840 master-0 kubenswrapper[28149]: W0313 12:53:46.467118 28149 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Mar 13 12:53:46.495652 master-0 kubenswrapper[28149]: W0313 12:53:46.467123 28149 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Mar 13 12:53:46.495652 master-0 kubenswrapper[28149]: W0313 12:53:46.467127 28149 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Mar 13 12:53:46.495652 master-0 kubenswrapper[28149]: W0313 12:53:46.467269 28149 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Mar 13 12:53:46.495652 master-0 kubenswrapper[28149]: W0313 12:53:46.467276 28149 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Mar 13 12:53:46.495652 master-0 kubenswrapper[28149]: W0313 12:53:46.467281 28149 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Mar 13 12:53:46.495652 master-0 kubenswrapper[28149]: W0313 12:53:46.467285 28149 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Mar 13 12:53:46.495652 master-0 kubenswrapper[28149]: W0313 12:53:46.467292 28149 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. 
Mar 13 12:53:46.495652 master-0 kubenswrapper[28149]: W0313 12:53:46.467300 28149 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Mar 13 12:53:46.495652 master-0 kubenswrapper[28149]: W0313 12:53:46.467306 28149 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Mar 13 12:53:46.495652 master-0 kubenswrapper[28149]: W0313 12:53:46.467311 28149 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Mar 13 12:53:46.495652 master-0 kubenswrapper[28149]: W0313 12:53:46.467316 28149 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Mar 13 12:53:46.495652 master-0 kubenswrapper[28149]: W0313 12:53:46.467322 28149 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Mar 13 12:53:46.495652 master-0 kubenswrapper[28149]: W0313 12:53:46.467327 28149 feature_gate.go:330] unrecognized feature gate: PlatformOperators Mar 13 12:53:46.495652 master-0 kubenswrapper[28149]: W0313 12:53:46.467334 28149 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Mar 13 12:53:46.495652 master-0 kubenswrapper[28149]: W0313 12:53:46.467339 28149 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Mar 13 12:53:46.495652 master-0 kubenswrapper[28149]: W0313 12:53:46.467343 28149 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Mar 13 12:53:46.495652 master-0 kubenswrapper[28149]: W0313 12:53:46.467348 28149 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Mar 13 12:53:46.495652 master-0 kubenswrapper[28149]: W0313 12:53:46.467353 28149 feature_gate.go:330] unrecognized feature gate: GatewayAPI Mar 13 12:53:46.495652 master-0 kubenswrapper[28149]: W0313 12:53:46.467357 28149 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Mar 13 12:53:46.495652 master-0 kubenswrapper[28149]: W0313 12:53:46.467362 28149 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Mar 13 12:53:46.496419 
master-0 kubenswrapper[28149]: W0313 12:53:46.467367 28149 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Mar 13 12:53:46.496419 master-0 kubenswrapper[28149]: W0313 12:53:46.467372 28149 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Mar 13 12:53:46.496419 master-0 kubenswrapper[28149]: W0313 12:53:46.467377 28149 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Mar 13 12:53:46.496419 master-0 kubenswrapper[28149]: W0313 12:53:46.467383 28149 feature_gate.go:330] unrecognized feature gate: SignatureStores
Mar 13 12:53:46.496419 master-0 kubenswrapper[28149]: W0313 12:53:46.467389 28149 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Mar 13 12:53:46.496419 master-0 kubenswrapper[28149]: W0313 12:53:46.467394 28149 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Mar 13 12:53:46.496419 master-0 kubenswrapper[28149]: W0313 12:53:46.467400 28149 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Mar 13 12:53:46.496419 master-0 kubenswrapper[28149]: W0313 12:53:46.467405 28149 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Mar 13 12:53:46.496419 master-0 kubenswrapper[28149]: W0313 12:53:46.467410 28149 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Mar 13 12:53:46.496419 master-0 kubenswrapper[28149]: W0313 12:53:46.467415 28149 feature_gate.go:330] unrecognized feature gate: PinnedImages
Mar 13 12:53:46.496419 master-0 kubenswrapper[28149]: W0313 12:53:46.467420 28149 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Mar 13 12:53:46.496419 master-0 kubenswrapper[28149]: W0313 12:53:46.467425 28149 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Mar 13 12:53:46.496419 master-0 kubenswrapper[28149]: W0313 12:53:46.467430 28149 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Mar 13 12:53:46.496419 master-0 kubenswrapper[28149]: W0313 12:53:46.467435 28149 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Mar 13 12:53:46.496419 master-0 kubenswrapper[28149]: W0313 12:53:46.467440 28149 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Mar 13 12:53:46.496419 master-0 kubenswrapper[28149]: W0313 12:53:46.467445 28149 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Mar 13 12:53:46.496419 master-0 kubenswrapper[28149]: W0313 12:53:46.467449 28149 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Mar 13 12:53:46.496419 master-0 kubenswrapper[28149]: W0313 12:53:46.467454 28149 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Mar 13 12:53:46.496419 master-0 kubenswrapper[28149]: W0313 12:53:46.467459 28149 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Mar 13 12:53:46.496419 master-0 kubenswrapper[28149]: W0313 12:53:46.467464 28149 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Mar 13 12:53:46.502373 master-0 kubenswrapper[28149]: W0313 12:53:46.467469 28149 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Mar 13 12:53:46.502373 master-0 kubenswrapper[28149]: W0313 12:53:46.467473 28149 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Mar 13 12:53:46.502373 master-0 kubenswrapper[28149]: W0313 12:53:46.467478 28149 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Mar 13 12:53:46.502373 master-0 kubenswrapper[28149]: W0313 12:53:46.467483 28149 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Mar 13 12:53:46.502373 master-0 kubenswrapper[28149]: W0313 12:53:46.467487 28149 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Mar 13 12:53:46.502373 master-0 kubenswrapper[28149]: W0313 12:53:46.467492 28149 feature_gate.go:330] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings
Mar 13 12:53:46.502373 master-0 kubenswrapper[28149]: W0313 12:53:46.467498 28149 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Mar 13 12:53:46.502373 master-0 kubenswrapper[28149]: W0313 12:53:46.467503 28149 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Mar 13 12:53:46.502373 master-0 kubenswrapper[28149]: W0313 12:53:46.467510 28149 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Mar 13 12:53:46.502373 master-0 kubenswrapper[28149]: W0313 12:53:46.467516 28149 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Mar 13 12:53:46.502373 master-0 kubenswrapper[28149]: W0313 12:53:46.467522 28149 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Mar 13 12:53:46.502373 master-0 kubenswrapper[28149]: I0313 12:53:46.467531 28149 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false StreamingCollectionEncodingToJSON:true StreamingCollectionEncodingToProtobuf:true TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]}
Mar 13 12:53:46.502373 master-0 kubenswrapper[28149]: W0313 12:53:46.467688 28149 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Mar 13 12:53:46.502373 master-0 kubenswrapper[28149]: W0313 12:53:46.467701 28149 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Mar 13 12:53:46.502942 master-0 kubenswrapper[28149]: W0313 12:53:46.467708 28149 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Mar 13 12:53:46.502942 master-0 kubenswrapper[28149]: W0313 12:53:46.467713 28149 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Mar 13 12:53:46.502942 master-0 kubenswrapper[28149]: W0313 12:53:46.467718 28149 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Mar 13 12:53:46.502942 master-0 kubenswrapper[28149]: W0313 12:53:46.467723 28149 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Mar 13 12:53:46.502942 master-0 kubenswrapper[28149]: W0313 12:53:46.467727 28149 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Mar 13 12:53:46.502942 master-0 kubenswrapper[28149]: W0313 12:53:46.467732 28149 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Mar 13 12:53:46.502942 master-0 kubenswrapper[28149]: W0313 12:53:46.467737 28149 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Mar 13 12:53:46.502942 master-0 kubenswrapper[28149]: W0313 12:53:46.467741 28149 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Mar 13 12:53:46.502942 master-0 kubenswrapper[28149]: W0313 12:53:46.467746 28149 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Mar 13 12:53:46.502942 master-0 kubenswrapper[28149]: W0313 12:53:46.467750 28149 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Mar 13 12:53:46.502942 master-0 kubenswrapper[28149]: W0313 12:53:46.467755 28149 feature_gate.go:330] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings
Mar 13 12:53:46.502942 master-0 kubenswrapper[28149]: W0313 12:53:46.467759 28149 feature_gate.go:330] unrecognized feature gate: PinnedImages
Mar 13 12:53:46.502942 master-0 kubenswrapper[28149]: W0313 12:53:46.467764 28149 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Mar 13 12:53:46.502942 master-0 kubenswrapper[28149]: W0313 12:53:46.467769 28149 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Mar 13 12:53:46.502942 master-0 kubenswrapper[28149]: W0313 12:53:46.467773 28149 feature_gate.go:330] unrecognized feature gate: SignatureStores
Mar 13 12:53:46.502942 master-0 kubenswrapper[28149]: W0313 12:53:46.467778 28149 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Mar 13 12:53:46.502942 master-0 kubenswrapper[28149]: W0313 12:53:46.467783 28149 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Mar 13 12:53:46.502942 master-0 kubenswrapper[28149]: W0313 12:53:46.467788 28149 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Mar 13 12:53:46.502942 master-0 kubenswrapper[28149]: W0313 12:53:46.467793 28149 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Mar 13 12:53:46.502942 master-0 kubenswrapper[28149]: W0313 12:53:46.467797 28149 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Mar 13 12:53:46.506524 master-0 kubenswrapper[28149]: W0313 12:53:46.467802 28149 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Mar 13 12:53:46.506524 master-0 kubenswrapper[28149]: W0313 12:53:46.467806 28149 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Mar 13 12:53:46.506524 master-0 kubenswrapper[28149]: W0313 12:53:46.467813 28149 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Mar 13 12:53:46.506524 master-0 kubenswrapper[28149]: W0313 12:53:46.467819 28149 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Mar 13 12:53:46.506524 master-0 kubenswrapper[28149]: W0313 12:53:46.467824 28149 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Mar 13 12:53:46.506524 master-0 kubenswrapper[28149]: W0313 12:53:46.467829 28149 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Mar 13 12:53:46.506524 master-0 kubenswrapper[28149]: W0313 12:53:46.467833 28149 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Mar 13 12:53:46.506524 master-0 kubenswrapper[28149]: W0313 12:53:46.467838 28149 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Mar 13 12:53:46.506524 master-0 kubenswrapper[28149]: W0313 12:53:46.467842 28149 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Mar 13 12:53:46.506524 master-0 kubenswrapper[28149]: W0313 12:53:46.467846 28149 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Mar 13 12:53:46.506524 master-0 kubenswrapper[28149]: W0313 12:53:46.467851 28149 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Mar 13 12:53:46.506524 master-0 kubenswrapper[28149]: W0313 12:53:46.467862 28149 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Mar 13 12:53:46.506524 master-0 kubenswrapper[28149]: W0313 12:53:46.467867 28149 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Mar 13 12:53:46.506524 master-0 kubenswrapper[28149]: W0313 12:53:46.467872 28149 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Mar 13 12:53:46.506524 master-0 kubenswrapper[28149]: W0313 12:53:46.467876 28149 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Mar 13 12:53:46.506524 master-0 kubenswrapper[28149]: W0313 12:53:46.467881 28149 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Mar 13 12:53:46.506524 master-0 kubenswrapper[28149]: W0313 12:53:46.467886 28149 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Mar 13 12:53:46.506524 master-0 kubenswrapper[28149]: W0313 12:53:46.467893 28149 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Mar 13 12:53:46.506524 master-0 kubenswrapper[28149]: W0313 12:53:46.467898 28149 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Mar 13 12:53:46.507230 master-0 kubenswrapper[28149]: W0313 12:53:46.467902 28149 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Mar 13 12:53:46.507230 master-0 kubenswrapper[28149]: W0313 12:53:46.467907 28149 feature_gate.go:330] unrecognized feature gate: OVNObservability
Mar 13 12:53:46.507230 master-0 kubenswrapper[28149]: W0313 12:53:46.467912 28149 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Mar 13 12:53:46.507230 master-0 kubenswrapper[28149]: W0313 12:53:46.467917 28149 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Mar 13 12:53:46.507230 master-0 kubenswrapper[28149]: W0313 12:53:46.467922 28149 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Mar 13 12:53:46.507230 master-0 kubenswrapper[28149]: W0313 12:53:46.467926 28149 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Mar 13 12:53:46.507230 master-0 kubenswrapper[28149]: W0313 12:53:46.467931 28149 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Mar 13 12:53:46.507230 master-0 kubenswrapper[28149]: W0313 12:53:46.467936 28149 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Mar 13 12:53:46.507230 master-0 kubenswrapper[28149]: W0313 12:53:46.467941 28149 feature_gate.go:330] unrecognized feature gate: NewOLM
Mar 13 12:53:46.507230 master-0 kubenswrapper[28149]: W0313 12:53:46.467945 28149 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Mar 13 12:53:46.507230 master-0 kubenswrapper[28149]: W0313 12:53:46.467950 28149 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Mar 13 12:53:46.507230 master-0 kubenswrapper[28149]: W0313 12:53:46.467954 28149 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Mar 13 12:53:46.507230 master-0 kubenswrapper[28149]: W0313 12:53:46.467959 28149 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Mar 13 12:53:46.507230 master-0 kubenswrapper[28149]: W0313 12:53:46.467964 28149 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Mar 13 12:53:46.507230 master-0 kubenswrapper[28149]: W0313 12:53:46.467968 28149 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Mar 13 12:53:46.507230 master-0 kubenswrapper[28149]: W0313 12:53:46.467973 28149 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Mar 13 12:53:46.507230 master-0 kubenswrapper[28149]: W0313 12:53:46.467978 28149 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Mar 13 12:53:46.507230 master-0 kubenswrapper[28149]: W0313 12:53:46.467982 28149 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Mar 13 12:53:46.507230 master-0 kubenswrapper[28149]: W0313 12:53:46.467987 28149 feature_gate.go:330] unrecognized feature gate: Example
Mar 13 12:53:46.507230 master-0 kubenswrapper[28149]: W0313 12:53:46.467992 28149 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Mar 13 12:53:46.507955 master-0 kubenswrapper[28149]: W0313 12:53:46.467997 28149 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Mar 13 12:53:46.507955 master-0 kubenswrapper[28149]: W0313 12:53:46.468002 28149 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Mar 13 12:53:46.507955 master-0 kubenswrapper[28149]: W0313 12:53:46.468007 28149 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Mar 13 12:53:46.507955 master-0 kubenswrapper[28149]: W0313 12:53:46.468012 28149 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Mar 13 12:53:46.507955 master-0 kubenswrapper[28149]: W0313 12:53:46.468016 28149 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Mar 13 12:53:46.507955 master-0 kubenswrapper[28149]: W0313 12:53:46.468021 28149 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Mar 13 12:53:46.507955 master-0 kubenswrapper[28149]: W0313 12:53:46.468028 28149 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Mar 13 12:53:46.507955 master-0 kubenswrapper[28149]: W0313 12:53:46.468034 28149 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Mar 13 12:53:46.507955 master-0 kubenswrapper[28149]: W0313 12:53:46.468041 28149 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Mar 13 12:53:46.507955 master-0 kubenswrapper[28149]: W0313 12:53:46.468046 28149 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Mar 13 12:53:46.507955 master-0 kubenswrapper[28149]: W0313 12:53:46.468052 28149 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Mar 13 12:53:46.507955 master-0 kubenswrapper[28149]: I0313 12:53:46.468060 28149 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false StreamingCollectionEncodingToJSON:true StreamingCollectionEncodingToProtobuf:true TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]}
Mar 13 12:53:46.507955 master-0 kubenswrapper[28149]: I0313 12:53:46.468374 28149 server.go:940] "Client rotation is on, will bootstrap in background"
Mar 13 12:53:46.507955 master-0 kubenswrapper[28149]: I0313 12:53:46.479360 28149 bootstrap.go:85] "Current kubeconfig file contents are still valid, no bootstrap necessary"
Mar 13 12:53:46.508495 master-0 kubenswrapper[28149]: I0313 12:53:46.481408 28149 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
Mar 13 12:53:46.508495 master-0 kubenswrapper[28149]: I0313 12:53:46.481823 28149 server.go:997] "Starting client certificate rotation"
Mar 13 12:53:46.508495 master-0 kubenswrapper[28149]: I0313 12:53:46.481844 28149 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate rotation is enabled
Mar 13 12:53:46.508495 master-0 kubenswrapper[28149]: I0313 12:53:46.482862 28149 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2026-03-14 12:26:40 +0000 UTC, rotation deadline is 2026-03-14 08:09:09.963396288 +0000 UTC
Mar 13 12:53:46.508495 master-0 kubenswrapper[28149]: I0313 12:53:46.482936 28149 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Waiting 19h15m23.480464459s for next certificate rotation
Mar 13 12:53:46.508495 master-0 kubenswrapper[28149]: I0313 12:53:46.482980 28149 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt"
Mar 13 12:53:46.508495 master-0 kubenswrapper[28149]: I0313 12:53:46.484695 28149 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt"
Mar 13 12:53:46.508495 master-0 kubenswrapper[28149]: I0313 12:53:46.504532 28149 log.go:25] "Validated CRI v1 runtime API"
Mar 13 12:53:46.521245 master-0 kubenswrapper[28149]: I0313 12:53:46.520767 28149 log.go:25] "Validated CRI v1 image API"
Mar 13 12:53:46.532163 master-0 kubenswrapper[28149]: I0313 12:53:46.531548 28149 server.go:1437] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Mar 13
12:53:46.545160 master-0 kubenswrapper[28149]: I0313 12:53:46.544553 28149 fs.go:135] Filesystem UUIDs: map[7B77-95E7:/dev/vda2 910678ff-f77e-4a7d-8d53-86f2ac47a823:/dev/vda4 bee91dc0-9d5b-4e60-b908-76b0c18f6366:/dev/vda3] Mar 13 12:53:46.550197 master-0 kubenswrapper[28149]: I0313 12:53:46.544617 28149 fs.go:136] Filesystem partitions: map[/dev/shm:{mountpoint:/dev/shm major:0 minor:22 fsType:tmpfs blockSize:0} /dev/vda3:{mountpoint:/boot major:252 minor:3 fsType:ext4 blockSize:0} /dev/vda4:{mountpoint:/var major:252 minor:4 fsType:xfs blockSize:0} /run:{mountpoint:/run major:0 minor:24 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/02065d5b43e51a34d865fcf740815dfc300cc50dd65b4465588c2f46e47c4755/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/02065d5b43e51a34d865fcf740815dfc300cc50dd65b4465588c2f46e47c4755/userdata/shm major:0 minor:799 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/02db34ef289b2a257fb361c5e1190f74ebf2b35e8d2ff6177192f08616db19aa/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/02db34ef289b2a257fb361c5e1190f74ebf2b35e8d2ff6177192f08616db19aa/userdata/shm major:0 minor:681 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/06b5a40ca00c0683426a1707f8de8aa68ed5666ea8cb726727703876312ec6d0/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/06b5a40ca00c0683426a1707f8de8aa68ed5666ea8cb726727703876312ec6d0/userdata/shm major:0 minor:658 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/0b83ebe9d6eac21a54c3830c4cd62ad02d28ed6f976f2ea34a3538e434b5beb0/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/0b83ebe9d6eac21a54c3830c4cd62ad02d28ed6f976f2ea34a3538e434b5beb0/userdata/shm major:0 minor:439 fsType:tmpfs blockSize:0} 
/run/containers/storage/overlay-containers/0d4645e0a294cbcc940fcfffa42d733be306f63d83bb6e85a675a05c4f244808/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/0d4645e0a294cbcc940fcfffa42d733be306f63d83bb6e85a675a05c4f244808/userdata/shm major:0 minor:762 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/0efd5eb82a3bcc3e1df342102496e59fd5b2f395bc25671cea43a0422444ad1d/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/0efd5eb82a3bcc3e1df342102496e59fd5b2f395bc25671cea43a0422444ad1d/userdata/shm major:0 minor:633 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/11e56b22c0ca61c66515f175bbe9f8fe67513a2c89d80968a1d368bbdad873da/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/11e56b22c0ca61c66515f175bbe9f8fe67513a2c89d80968a1d368bbdad873da/userdata/shm major:0 minor:1009 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/173b3354a692a16e1dac4e0c613765bd4dc76c18f400e62b22fb91f5a2c1aaca/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/173b3354a692a16e1dac4e0c613765bd4dc76c18f400e62b22fb91f5a2c1aaca/userdata/shm major:0 minor:463 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/1f2ba041c75397f172b0e8393f3ba52da66efb5011242b7893cceb36ffb01a0a/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/1f2ba041c75397f172b0e8393f3ba52da66efb5011242b7893cceb36ffb01a0a/userdata/shm major:0 minor:410 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/1f8e6ca57afc2c7f1b75640b9d76490f87697f57e3507366ea9d48c029b1f4d6/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/1f8e6ca57afc2c7f1b75640b9d76490f87697f57e3507366ea9d48c029b1f4d6/userdata/shm major:0 minor:242 fsType:tmpfs blockSize:0} 
/run/containers/storage/overlay-containers/259b8c4f70e310b1a2310215be2034d29d1f6b96a9b3aac30e2098e024daf661/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/259b8c4f70e310b1a2310215be2034d29d1f6b96a9b3aac30e2098e024daf661/userdata/shm major:0 minor:1012 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/2a520ce1540e4505903e0c09b3c7ff382c5a6347945280110eeacb275245a884/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/2a520ce1540e4505903e0c09b3c7ff382c5a6347945280110eeacb275245a884/userdata/shm major:0 minor:44 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/2b2ef2ddaedb81fecd10454e7de227fc33e0631466b7f1d7f0c388f2e1883f04/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/2b2ef2ddaedb81fecd10454e7de227fc33e0631466b7f1d7f0c388f2e1883f04/userdata/shm major:0 minor:448 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/2bd86a5a786b8cd9854f1e649c41cebb309a3c1ac190ae67ed40c19b3eec0d04/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/2bd86a5a786b8cd9854f1e649c41cebb309a3c1ac190ae67ed40c19b3eec0d04/userdata/shm major:0 minor:119 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/2e65250ae5f98234b34351e57ed90215912c9eb2d91f1f748ce0046b50854a52/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/2e65250ae5f98234b34351e57ed90215912c9eb2d91f1f748ce0046b50854a52/userdata/shm major:0 minor:706 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/31c103b44a6104346bc94bbde90d17a3c1f1dc78c81990683bc98b314baa42f3/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/31c103b44a6104346bc94bbde90d17a3c1f1dc78c81990683bc98b314baa42f3/userdata/shm major:0 minor:650 fsType:tmpfs blockSize:0} 
/run/containers/storage/overlay-containers/327b75ff7d2f2b23c89b69896efc61025e5eb89aca44a3ec0a496ee1ba0617ea/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/327b75ff7d2f2b23c89b69896efc61025e5eb89aca44a3ec0a496ee1ba0617ea/userdata/shm major:0 minor:1060 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/33eb1753d1610b81e5a24f93d9249c8e3e11614421397b68063a0f4b3b803691/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/33eb1753d1610b81e5a24f93d9249c8e3e11614421397b68063a0f4b3b803691/userdata/shm major:0 minor:787 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/36730131d5c09051d26cf3e4a543df7abc5397cb1ce5ef8363c603313b0f97b0/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/36730131d5c09051d26cf3e4a543df7abc5397cb1ce5ef8363c603313b0f97b0/userdata/shm major:0 minor:769 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/37d33fead87bedc9ebd143b0294923b633e8d9e7d47a848ec4d50fbd02e27628/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/37d33fead87bedc9ebd143b0294923b633e8d9e7d47a848ec4d50fbd02e27628/userdata/shm major:0 minor:475 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/3f65f8f162278830720a8d0df1f4af830419eb457612c65a706c42ccf3c12587/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/3f65f8f162278830720a8d0df1f4af830419eb457612c65a706c42ccf3c12587/userdata/shm major:0 minor:395 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/3f6f1ed4b9428b71641a87701412cc5bbb34559ce861fd12caebd021e4bfc58b/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/3f6f1ed4b9428b71641a87701412cc5bbb34559ce861fd12caebd021e4bfc58b/userdata/shm major:0 minor:1048 fsType:tmpfs blockSize:0} 
/run/containers/storage/overlay-containers/4100d060137e4638140caf3273251902712a7f8176df0de3da8bd3abf9194231/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/4100d060137e4638140caf3273251902712a7f8176df0de3da8bd3abf9194231/userdata/shm major:0 minor:252 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/4641cab9868e3327d01299b932a32e6567401ef53f9b8cc74562f50d7b0926ca/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/4641cab9868e3327d01299b932a32e6567401ef53f9b8cc74562f50d7b0926ca/userdata/shm major:0 minor:271 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/4923fdf0bf7675fa9b87a52fcb37d82a429121c63cdefd19c58f0e547211a622/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/4923fdf0bf7675fa9b87a52fcb37d82a429121c63cdefd19c58f0e547211a622/userdata/shm major:0 minor:686 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/4a2e539e0bcc34335d49c02d69347bd6d8232a1bb972540a7de9aececb6d671f/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/4a2e539e0bcc34335d49c02d69347bd6d8232a1bb972540a7de9aececb6d671f/userdata/shm major:0 minor:752 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/4a6cc550d523ce1bfed748c19240f1c4e3a9202060aead91cc14af91ea48f5ce/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/4a6cc550d523ce1bfed748c19240f1c4e3a9202060aead91cc14af91ea48f5ce/userdata/shm major:0 minor:50 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/534692e5957aae2c3d6d9152a87bd37d178574b231da74f33889bcb3869aae82/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/534692e5957aae2c3d6d9152a87bd37d178574b231da74f33889bcb3869aae82/userdata/shm major:0 minor:105 fsType:tmpfs blockSize:0} 
/run/containers/storage/overlay-containers/5392be7ab4e8fd67e380477649b224dee24aa1e239336e87f916d5fb0198c7d5/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/5392be7ab4e8fd67e380477649b224dee24aa1e239336e87f916d5fb0198c7d5/userdata/shm major:0 minor:796 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/5c9d522ae739e2277c0296ac70334b7f1898acab312dd9c5c15576df36650d2b/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/5c9d522ae739e2277c0296ac70334b7f1898acab312dd9c5c15576df36650d2b/userdata/shm major:0 minor:628 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/5d54ffc470f89711bfd74406a6ddbacbe1dd4ef841888f957b998a6253057999/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/5d54ffc470f89711bfd74406a6ddbacbe1dd4ef841888f957b998a6253057999/userdata/shm major:0 minor:774 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/5e03538f7a196b4948a3a7782b34246a467d9e14e18b21bed24c1061ee7390ce/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/5e03538f7a196b4948a3a7782b34246a467d9e14e18b21bed24c1061ee7390ce/userdata/shm major:0 minor:240 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/5f581d90a0a82a94fc080eaf7d47e92e9bf51aec1be87f8c182f38bf6bb3aa3c/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/5f581d90a0a82a94fc080eaf7d47e92e9bf51aec1be87f8c182f38bf6bb3aa3c/userdata/shm major:0 minor:303 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/6096081d86dfbfa09ca1bdec91da24d4ddf5b823468c93d6e9e22822357294bc/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/6096081d86dfbfa09ca1bdec91da24d4ddf5b823468c93d6e9e22822357294bc/userdata/shm major:0 minor:858 fsType:tmpfs blockSize:0} 
/run/containers/storage/overlay-containers/69fae5f2ef7c0575f1ee9aa46fd22ae7b8ff711dadd59b1c832eda467b9991cd/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/69fae5f2ef7c0575f1ee9aa46fd22ae7b8ff711dadd59b1c832eda467b9991cd/userdata/shm major:0 minor:437 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/6b92704fbc97116df7b90609a695c48539a6c6401fd9288883ce4ea92059b841/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/6b92704fbc97116df7b90609a695c48539a6c6401fd9288883ce4ea92059b841/userdata/shm major:0 minor:801 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/7066c2bb7f28cfd07ac1eb011cdc9849969ed5f37788da395910309c70481aa9/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/7066c2bb7f28cfd07ac1eb011cdc9849969ed5f37788da395910309c70481aa9/userdata/shm major:0 minor:128 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/754a980682251c2faf310af15f0042fda13df9ae03c81a3a698c0d687faffa20/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/754a980682251c2faf310af15f0042fda13df9ae03c81a3a698c0d687faffa20/userdata/shm major:0 minor:257 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/842bc57e6bbe56242bef7b88438357fe374fd511b54a67e77b67b5f32ad709e8/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/842bc57e6bbe56242bef7b88438357fe374fd511b54a67e77b67b5f32ad709e8/userdata/shm major:0 minor:253 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/8bb2d1af6db83f391d6e2aae6571d80b39fa6657f68665d4c9aa939bfcdacfe3/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/8bb2d1af6db83f391d6e2aae6571d80b39fa6657f68665d4c9aa939bfcdacfe3/userdata/shm major:0 minor:487 fsType:tmpfs blockSize:0} 
/run/containers/storage/overlay-containers/8f2520a5a8a4d59a3a9c1df60e2638463688675ec7d03c44c89816280d167889/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/8f2520a5a8a4d59a3a9c1df60e2638463688675ec7d03c44c89816280d167889/userdata/shm major:0 minor:296 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/8f7395682c642b2e4f7ba2a9b79331d0b9afd8c7d7923a7bbdfc90aaeb45a6c2/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/8f7395682c642b2e4f7ba2a9b79331d0b9afd8c7d7923a7bbdfc90aaeb45a6c2/userdata/shm major:0 minor:109 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/914d6236fd6885067cb3f7c4a3330427cd513d826dd28ffcdcc4fb60809af1e7/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/914d6236fd6885067cb3f7c4a3330427cd513d826dd28ffcdcc4fb60809af1e7/userdata/shm major:0 minor:457 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/95a0596df1becc3efa730840acdf49174a4f5a349b4eb826cfe7185b3ca3bcfa/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/95a0596df1becc3efa730840acdf49174a4f5a349b4eb826cfe7185b3ca3bcfa/userdata/shm major:0 minor:611 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/970eeb7c4ac93691f1016454e092dba89eb2fcc2d1e0d15b1982b71ff313707c/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/970eeb7c4ac93691f1016454e092dba89eb2fcc2d1e0d15b1982b71ff313707c/userdata/shm major:0 minor:489 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/9f0c754e60ef175d41e372a61f68bf008bd4fa86f313ae1ab6dd7da87027e47f/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/9f0c754e60ef175d41e372a61f68bf008bd4fa86f313ae1ab6dd7da87027e47f/userdata/shm major:0 minor:1090 fsType:tmpfs blockSize:0} 
/run/containers/storage/overlay-containers/a0efa1bf3eba5a2ca6c57d7440e21de8f77ce06cd058d6cbb24dd5784e78863f/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/a0efa1bf3eba5a2ca6c57d7440e21de8f77ce06cd058d6cbb24dd5784e78863f/userdata/shm major:0 minor:622 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/a13f1b34007cf32fe962f7d50d2988f0f66eb3022aee3b3a767d84bde6caed30/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/a13f1b34007cf32fe962f7d50d2988f0f66eb3022aee3b3a767d84bde6caed30/userdata/shm major:0 minor:58 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/a3ffdbf0e263655894f67c3d77b8923c8263311f04a159ccc83606c42c70fddb/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/a3ffdbf0e263655894f67c3d77b8923c8263311f04a159ccc83606c42c70fddb/userdata/shm major:0 minor:1008 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/a4591749866252389a99d8d167ffc17036d5b09d044139535fc2027e3c84b038/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/a4591749866252389a99d8d167ffc17036d5b09d044139535fc2027e3c84b038/userdata/shm major:0 minor:333 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/aa04b90f16ed80e22ecfe4066cdbfb20ddc6e64977b5d63203a00d19ce4e1333/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/aa04b90f16ed80e22ecfe4066cdbfb20ddc6e64977b5d63203a00d19ce4e1333/userdata/shm major:0 minor:543 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/aac9e43b541ff8c2c2bfb86003c0c12881f81493b0818cd60c9ba62d916d93a2/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/aac9e43b541ff8c2c2bfb86003c0c12881f81493b0818cd60c9ba62d916d93a2/userdata/shm major:0 minor:84 fsType:tmpfs blockSize:0} 
/run/containers/storage/overlay-containers/abc95f00c9e0c52ab8e7354cef7b322da886c1a2e03c03fc7c2109630be9ce0b/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/abc95f00c9e0c52ab8e7354cef7b322da886c1a2e03c03fc7c2109630be9ce0b/userdata/shm major:0 minor:244 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/ac4a42c40018650481568cd3e3f0125e785e9eec1d03bfa3009fd0ee7e80a629/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/ac4a42c40018650481568cd3e3f0125e785e9eec1d03bfa3009fd0ee7e80a629/userdata/shm major:0 minor:308 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/aff2f4bdb8410e55f89c70c290b0ee60c11f3e12de8945726a3ee53766f5711f/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/aff2f4bdb8410e55f89c70c290b0ee60c11f3e12de8945726a3ee53766f5711f/userdata/shm major:0 minor:1082 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/b046991449e1d420ea17d254f8c05faec355e4aacc147507b98a3f095fa7ff11/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/b046991449e1d420ea17d254f8c05faec355e4aacc147507b98a3f095fa7ff11/userdata/shm major:0 minor:89 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/b699b1831b9f5250a8ce5ada14edbc693482d02c81ce7cd3de76c7bdd381af20/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/b699b1831b9f5250a8ce5ada14edbc693482d02c81ce7cd3de76c7bdd381af20/userdata/shm major:0 minor:791 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/b6b12c0272b98e12411fc073869054a756107907b9e525ec9dbf8b8648e84805/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/b6b12c0272b98e12411fc073869054a756107907b9e525ec9dbf8b8648e84805/userdata/shm major:0 minor:237 fsType:tmpfs blockSize:0} 
/run/containers/storage/overlay-containers/bad7583a8d87a54f610f7ff59977a30650055c862ace4c5e9beab2a18620861a/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/bad7583a8d87a54f610f7ff59977a30650055c862ace4c5e9beab2a18620861a/userdata/shm major:0 minor:248 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/c2b846fb7ae8217762a980bc271d109131601f29417428a6bf3bd52ed70a5227/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/c2b846fb7ae8217762a980bc271d109131601f29417428a6bf3bd52ed70a5227/userdata/shm major:0 minor:428 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/c4aea2db722bdac5b7168c49e752c46da9432061c6c515522534eb8c4d6126b5/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/c4aea2db722bdac5b7168c49e752c46da9432061c6c515522534eb8c4d6126b5/userdata/shm major:0 minor:574 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/c7cc0f12daf98f8c149d5ab9799aa0a44614ca17d39dc2c0de31acb11cb8513a/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/c7cc0f12daf98f8c149d5ab9799aa0a44614ca17d39dc2c0de31acb11cb8513a/userdata/shm major:0 minor:458 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/c877a6a31ad16c9b3f6d1a10e247940a86d22f389ab82d4b655a52c5c8ebc0a4/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/c877a6a31ad16c9b3f6d1a10e247940a86d22f389ab82d4b655a52c5c8ebc0a4/userdata/shm major:0 minor:803 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/c947bd9963641afb60859a3b7c244810b57b25926def17f475843b4b80fe1d04/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/c947bd9963641afb60859a3b7c244810b57b25926def17f475843b4b80fe1d04/userdata/shm major:0 minor:263 fsType:tmpfs blockSize:0} 
/run/containers/storage/overlay-containers/ca4392c691682c0095dfe8e779e3de1082f741c49a5ae52776e0a4782a168b3b/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/ca4392c691682c0095dfe8e779e3de1082f741c49a5ae52776e0a4782a168b3b/userdata/shm major:0 minor:1142 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/caf607baad46071737a7ad295cff2dc8569126a9cada0edb3e0461efe66c6a52/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/caf607baad46071737a7ad295cff2dc8569126a9cada0edb3e0461efe66c6a52/userdata/shm major:0 minor:639 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/d211f6630b0e510a98b862295b3b4e01e3b8d0f319a2b5a7fbad71f4b348ebd3/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/d211f6630b0e510a98b862295b3b4e01e3b8d0f319a2b5a7fbad71f4b348ebd3/userdata/shm major:0 minor:1086 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/d6d4028b66b05354ce39cae63e764e8ed5f2304a82f8cd6cbd59c6a8537a5bed/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/d6d4028b66b05354ce39cae63e764e8ed5f2304a82f8cd6cbd59c6a8537a5bed/userdata/shm major:0 minor:793 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/d841a86b661f54cc41ca6d7f060def7405c52e9adcc79d02bb6a1a6bb94e4f40/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/d841a86b661f54cc41ca6d7f060def7405c52e9adcc79d02bb6a1a6bb94e4f40/userdata/shm major:0 minor:455 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/dc74469df6e780c8e9e2827ef289651444a1ff65c5b17d5937b4448f9addb191/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/dc74469df6e780c8e9e2827ef289651444a1ff65c5b17d5937b4448f9addb191/userdata/shm major:0 minor:1046 fsType:tmpfs blockSize:0} 
/run/containers/storage/overlay-containers/e2d9f98170b9be57120af2a3d4ad3e87888e64c3d58e7180a2211b7ab3fd61c6/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/e2d9f98170b9be57120af2a3d4ad3e87888e64c3d58e7180a2211b7ab3fd61c6/userdata/shm major:0 minor:154 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/e536f971c5136f6b4bf02b1c06e15888a2ce0d84bff74c72b773c7dfe08129dc/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/e536f971c5136f6b4bf02b1c06e15888a2ce0d84bff74c72b773c7dfe08129dc/userdata/shm major:0 minor:461 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/e631d83a1a86fd29ec9a08d7d593e19783f91c18b20dce846f07ab60e82a0c6e/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/e631d83a1a86fd29ec9a08d7d593e19783f91c18b20dce846f07ab60e82a0c6e/userdata/shm major:0 minor:983 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/e7d5143ee528d1b1b82a3ddf6b2e4a81cfc844b962f0b1dce63b2e1946f0f7b1/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/e7d5143ee528d1b1b82a3ddf6b2e4a81cfc844b962f0b1dce63b2e1946f0f7b1/userdata/shm major:0 minor:465 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/f02f7e100e251060c54156f4f1beac07154b4cae59d3669639dcb3b98dca6124/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/f02f7e100e251060c54156f4f1beac07154b4cae59d3669639dcb3b98dca6124/userdata/shm major:0 minor:868 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/f1cb9ab9a282ce90062e66d658d9cac8cb109a67f4786999b66ddea942eec412/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/f1cb9ab9a282ce90062e66d658d9cac8cb109a67f4786999b66ddea942eec412/userdata/shm major:0 minor:258 fsType:tmpfs blockSize:0} 
/run/containers/storage/overlay-containers/f4bdadfb01202ddc6464892800ff63c99a7021c118d9d6dada777648c97106ba/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/f4bdadfb01202ddc6464892800ff63c99a7021c118d9d6dada777648c97106ba/userdata/shm major:0 minor:129 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/f7b194f18885cd869cc30349fb7d97bcdda7984dea9fb20d14a3e9436a39dc13/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/f7b194f18885cd869cc30349fb7d97bcdda7984dea9fb20d14a3e9436a39dc13/userdata/shm major:0 minor:1006 fsType:tmpfs blockSize:0} /tmp:{mountpoint:/tmp major:0 minor:30 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/00d8a21b-701c-4334-9dda-34c28b417f42/volumes/kubernetes.io~projected/kube-api-access-bdxqb:{mountpoint:/var/lib/kubelet/pods/00d8a21b-701c-4334-9dda-34c28b417f42/volumes/kubernetes.io~projected/kube-api-access-bdxqb major:0 minor:676 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/00d8a21b-701c-4334-9dda-34c28b417f42/volumes/kubernetes.io~secret/cloud-controller-manager-operator-tls:{mountpoint:/var/lib/kubelet/pods/00d8a21b-701c-4334-9dda-34c28b417f42/volumes/kubernetes.io~secret/cloud-controller-manager-operator-tls major:0 minor:675 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/00ebdf06-1f44-40cd-87e5-54195188b6d4/volumes/kubernetes.io~projected/ca-certs:{mountpoint:/var/lib/kubelet/pods/00ebdf06-1f44-40cd-87e5-54195188b6d4/volumes/kubernetes.io~projected/ca-certs major:0 minor:433 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/00ebdf06-1f44-40cd-87e5-54195188b6d4/volumes/kubernetes.io~projected/kube-api-access-7rkc4:{mountpoint:/var/lib/kubelet/pods/00ebdf06-1f44-40cd-87e5-54195188b6d4/volumes/kubernetes.io~projected/kube-api-access-7rkc4 major:0 minor:436 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/00ebdf06-1f44-40cd-87e5-54195188b6d4/volumes/kubernetes.io~secret/catalogserver-certs:{mountpoint:/var/lib/kubelet/pods/00ebdf06-1f44-40cd-87e5-54195188b6d4/volumes/kubernetes.io~secret/catalogserver-certs major:0 minor:434 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/034aaf8e-95df-4171-bae4-e7abe58d15f7/volumes/kubernetes.io~projected/kube-api-access-5w5r2:{mountpoint:/var/lib/kubelet/pods/034aaf8e-95df-4171-bae4-e7abe58d15f7/volumes/kubernetes.io~projected/kube-api-access-5w5r2 major:0 minor:295 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/034aaf8e-95df-4171-bae4-e7abe58d15f7/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/034aaf8e-95df-4171-bae4-e7abe58d15f7/volumes/kubernetes.io~secret/serving-cert major:0 minor:289 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/081a08d6-a4fd-412c-81c3-1364c36f0f15/volumes/kubernetes.io~projected/kube-api-access-mz927:{mountpoint:/var/lib/kubelet/pods/081a08d6-a4fd-412c-81c3-1364c36f0f15/volumes/kubernetes.io~projected/kube-api-access-mz927 major:0 minor:1047 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/081a08d6-a4fd-412c-81c3-1364c36f0f15/volumes/kubernetes.io~secret/certs:{mountpoint:/var/lib/kubelet/pods/081a08d6-a4fd-412c-81c3-1364c36f0f15/volumes/kubernetes.io~secret/certs major:0 minor:1038 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/081a08d6-a4fd-412c-81c3-1364c36f0f15/volumes/kubernetes.io~secret/node-bootstrap-token:{mountpoint:/var/lib/kubelet/pods/081a08d6-a4fd-412c-81c3-1364c36f0f15/volumes/kubernetes.io~secret/node-bootstrap-token major:0 minor:1039 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/089cfabc-9d3d-4260-bb16-8b5eaf73b3fa/volumes/kubernetes.io~projected/kube-api-access-vg8tz:{mountpoint:/var/lib/kubelet/pods/089cfabc-9d3d-4260-bb16-8b5eaf73b3fa/volumes/kubernetes.io~projected/kube-api-access-vg8tz major:0 minor:230 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/089cfabc-9d3d-4260-bb16-8b5eaf73b3fa/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/089cfabc-9d3d-4260-bb16-8b5eaf73b3fa/volumes/kubernetes.io~secret/serving-cert major:0 minor:209 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/08e2bc8e-ca80-454c-81dc-211d122e32e0/volumes/kubernetes.io~projected/kube-api-access-xstz5:{mountpoint:/var/lib/kubelet/pods/08e2bc8e-ca80-454c-81dc-211d122e32e0/volumes/kubernetes.io~projected/kube-api-access-xstz5 major:0 minor:256 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/0da84bb7-e936-49a0-96b5-614a1305d6a4/volumes/kubernetes.io~projected/kube-api-access:{mountpoint:/var/lib/kubelet/pods/0da84bb7-e936-49a0-96b5-614a1305d6a4/volumes/kubernetes.io~projected/kube-api-access major:0 minor:225 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/0da84bb7-e936-49a0-96b5-614a1305d6a4/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/0da84bb7-e936-49a0-96b5-614a1305d6a4/volumes/kubernetes.io~secret/serving-cert major:0 minor:214 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/1081e565-b7d8-4b6e-9d41-5db36cfe094c/volumes/kubernetes.io~projected/kube-api-access-b726x:{mountpoint:/var/lib/kubelet/pods/1081e565-b7d8-4b6e-9d41-5db36cfe094c/volumes/kubernetes.io~projected/kube-api-access-b726x major:0 minor:1081 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/1081e565-b7d8-4b6e-9d41-5db36cfe094c/volumes/kubernetes.io~secret/openshift-state-metrics-kube-rbac-proxy-config:{mountpoint:/var/lib/kubelet/pods/1081e565-b7d8-4b6e-9d41-5db36cfe094c/volumes/kubernetes.io~secret/openshift-state-metrics-kube-rbac-proxy-config major:0 minor:1078 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/1081e565-b7d8-4b6e-9d41-5db36cfe094c/volumes/kubernetes.io~secret/openshift-state-metrics-tls:{mountpoint:/var/lib/kubelet/pods/1081e565-b7d8-4b6e-9d41-5db36cfe094c/volumes/kubernetes.io~secret/openshift-state-metrics-tls major:0 minor:1072 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/10944f9c-8ce9-44e6-9c36-a0ea19d8cae3/volumes/kubernetes.io~projected/kube-api-access-zbk4f:{mountpoint:/var/lib/kubelet/pods/10944f9c-8ce9-44e6-9c36-a0ea19d8cae3/volumes/kubernetes.io~projected/kube-api-access-zbk4f major:0 minor:251 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/10944f9c-8ce9-44e6-9c36-a0ea19d8cae3/volumes/kubernetes.io~secret/srv-cert:{mountpoint:/var/lib/kubelet/pods/10944f9c-8ce9-44e6-9c36-a0ea19d8cae3/volumes/kubernetes.io~secret/srv-cert major:0 minor:474 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/13710582-eac3-42e5-b28a-8b4fd3030af2/volumes/kubernetes.io~projected/kube-api-access-vpfv9:{mountpoint:/var/lib/kubelet/pods/13710582-eac3-42e5-b28a-8b4fd3030af2/volumes/kubernetes.io~projected/kube-api-access-vpfv9 major:0 minor:641 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/13f32761-b386-4f93-b3c0-b16ea53d338a/volumes/kubernetes.io~projected/kube-api-access-m2p67:{mountpoint:/var/lib/kubelet/pods/13f32761-b386-4f93-b3c0-b16ea53d338a/volumes/kubernetes.io~projected/kube-api-access-m2p67 major:0 minor:229 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/13f32761-b386-4f93-b3c0-b16ea53d338a/volumes/kubernetes.io~secret/metrics-tls:{mountpoint:/var/lib/kubelet/pods/13f32761-b386-4f93-b3c0-b16ea53d338a/volumes/kubernetes.io~secret/metrics-tls major:0 minor:451 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/152689b1-5875-4a9a-bb25-bee858523168/volumes/kubernetes.io~projected/kube-api-access-km69t:{mountpoint:/var/lib/kubelet/pods/152689b1-5875-4a9a-bb25-bee858523168/volumes/kubernetes.io~projected/kube-api-access-km69t major:0 minor:115 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/15b592d6-3c48-45d4-9172-d28632ae8995/volumes/kubernetes.io~projected/kube-api-access-clrz7:{mountpoint:/var/lib/kubelet/pods/15b592d6-3c48-45d4-9172-d28632ae8995/volumes/kubernetes.io~projected/kube-api-access-clrz7 major:0 minor:232 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/15b592d6-3c48-45d4-9172-d28632ae8995/volumes/kubernetes.io~secret/etcd-client:{mountpoint:/var/lib/kubelet/pods/15b592d6-3c48-45d4-9172-d28632ae8995/volumes/kubernetes.io~secret/etcd-client major:0 minor:213 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/15b592d6-3c48-45d4-9172-d28632ae8995/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/15b592d6-3c48-45d4-9172-d28632ae8995/volumes/kubernetes.io~secret/serving-cert major:0 minor:215 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/18ffa620-dacc-4b09-be04-2c325f860813/volumes/kubernetes.io~projected/kube-api-access-fmzhw:{mountpoint:/var/lib/kubelet/pods/18ffa620-dacc-4b09-be04-2c325f860813/volumes/kubernetes.io~projected/kube-api-access-fmzhw major:0 minor:680 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/18ffa620-dacc-4b09-be04-2c325f860813/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/18ffa620-dacc-4b09-be04-2c325f860813/volumes/kubernetes.io~secret/serving-cert major:0 minor:679 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/1cf388b6-e4a7-41db-a350-1b503214efd3/volumes/kubernetes.io~projected/kube-api-access-9kxx9:{mountpoint:/var/lib/kubelet/pods/1cf388b6-e4a7-41db-a350-1b503214efd3/volumes/kubernetes.io~projected/kube-api-access-9kxx9 major:0 minor:610 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/1f43b4e7-5cd1-46d2-a02e-0d846b2e5182/volumes/kubernetes.io~projected/kube-api-access-brzd4:{mountpoint:/var/lib/kubelet/pods/1f43b4e7-5cd1-46d2-a02e-0d846b2e5182/volumes/kubernetes.io~projected/kube-api-access-brzd4 major:0 minor:138 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/1f43b4e7-5cd1-46d2-a02e-0d846b2e5182/volumes/kubernetes.io~secret/webhook-cert:{mountpoint:/var/lib/kubelet/pods/1f43b4e7-5cd1-46d2-a02e-0d846b2e5182/volumes/kubernetes.io~secret/webhook-cert major:0 minor:139 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/269aedfd-4274-4998-bd0d-603b67257666/volumes/kubernetes.io~projected/kube-api-access-btf8q:{mountpoint:/var/lib/kubelet/pods/269aedfd-4274-4998-bd0d-603b67257666/volumes/kubernetes.io~projected/kube-api-access-btf8q major:0 minor:307 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/29b6aa89-0416-4595-9deb-10b290521d86/volumes/kubernetes.io~projected/kube-api-access-cbtjs:{mountpoint:/var/lib/kubelet/pods/29b6aa89-0416-4595-9deb-10b290521d86/volumes/kubernetes.io~projected/kube-api-access-cbtjs major:0 minor:123 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/29b6aa89-0416-4595-9deb-10b290521d86/volumes/kubernetes.io~secret/metrics-certs:{mountpoint:/var/lib/kubelet/pods/29b6aa89-0416-4595-9deb-10b290521d86/volumes/kubernetes.io~secret/metrics-certs major:0 minor:473 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/2f48243b-6b05-4efa-8420-58a4419622bf/volumes/kubernetes.io~projected/kube-api-access-qhddd:{mountpoint:/var/lib/kubelet/pods/2f48243b-6b05-4efa-8420-58a4419622bf/volumes/kubernetes.io~projected/kube-api-access-qhddd major:0 minor:540 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/2f48243b-6b05-4efa-8420-58a4419622bf/volumes/kubernetes.io~secret/encryption-config:{mountpoint:/var/lib/kubelet/pods/2f48243b-6b05-4efa-8420-58a4419622bf/volumes/kubernetes.io~secret/encryption-config major:0 minor:535 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/2f48243b-6b05-4efa-8420-58a4419622bf/volumes/kubernetes.io~secret/etcd-client:{mountpoint:/var/lib/kubelet/pods/2f48243b-6b05-4efa-8420-58a4419622bf/volumes/kubernetes.io~secret/etcd-client major:0 minor:537 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/2f48243b-6b05-4efa-8420-58a4419622bf/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/2f48243b-6b05-4efa-8420-58a4419622bf/volumes/kubernetes.io~secret/serving-cert major:0 minor:536 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/2f79578c-bbfb-4968-893a-730deb4c01f9/volumes/kubernetes.io~projected/bound-sa-token:{mountpoint:/var/lib/kubelet/pods/2f79578c-bbfb-4968-893a-730deb4c01f9/volumes/kubernetes.io~projected/bound-sa-token major:0 minor:231 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/2f79578c-bbfb-4968-893a-730deb4c01f9/volumes/kubernetes.io~projected/kube-api-access-f9hks:{mountpoint:/var/lib/kubelet/pods/2f79578c-bbfb-4968-893a-730deb4c01f9/volumes/kubernetes.io~projected/kube-api-access-f9hks major:0 minor:302 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/2f79578c-bbfb-4968-893a-730deb4c01f9/volumes/kubernetes.io~secret/metrics-tls:{mountpoint:/var/lib/kubelet/pods/2f79578c-bbfb-4968-893a-730deb4c01f9/volumes/kubernetes.io~secret/metrics-tls major:0 minor:445 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/3020d236-03e0-4916-97dd-f1085632ca43/volumes/kubernetes.io~projected/kube-api-access-c24hd:{mountpoint:/var/lib/kubelet/pods/3020d236-03e0-4916-97dd-f1085632ca43/volumes/kubernetes.io~projected/kube-api-access-c24hd major:0 minor:250 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/3020d236-03e0-4916-97dd-f1085632ca43/volumes/kubernetes.io~secret/apiservice-cert:{mountpoint:/var/lib/kubelet/pods/3020d236-03e0-4916-97dd-f1085632ca43/volumes/kubernetes.io~secret/apiservice-cert major:0 minor:447 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/3020d236-03e0-4916-97dd-f1085632ca43/volumes/kubernetes.io~secret/node-tuning-operator-tls:{mountpoint:/var/lib/kubelet/pods/3020d236-03e0-4916-97dd-f1085632ca43/volumes/kubernetes.io~secret/node-tuning-operator-tls major:0 minor:452 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/317af639-269e-4163-8e24-fcea468b9352/volumes/kubernetes.io~projected/kube-api-access-4v66x:{mountpoint:/var/lib/kubelet/pods/317af639-269e-4163-8e24-fcea468b9352/volumes/kubernetes.io~projected/kube-api-access-4v66x major:0 minor:782 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/317af639-269e-4163-8e24-fcea468b9352/volumes/kubernetes.io~secret/cert:{mountpoint:/var/lib/kubelet/pods/317af639-269e-4163-8e24-fcea468b9352/volumes/kubernetes.io~secret/cert major:0 minor:776 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/317af639-269e-4163-8e24-fcea468b9352/volumes/kubernetes.io~secret/cluster-baremetal-operator-tls:{mountpoint:/var/lib/kubelet/pods/317af639-269e-4163-8e24-fcea468b9352/volumes/kubernetes.io~secret/cluster-baremetal-operator-tls major:0 minor:771 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/32fe77f9-082d-491c-b3d0-9c10feaf4a8e/volumes/kubernetes.io~projected/kube-api-access-6x492:{mountpoint:/var/lib/kubelet/pods/32fe77f9-082d-491c-b3d0-9c10feaf4a8e/volumes/kubernetes.io~projected/kube-api-access-6x492 major:0 minor:657 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/36ad5a83-5c32-4941-94e0-7af86ac5d462/volumes/kubernetes.io~projected/kube-api-access-mqsh5:{mountpoint:/var/lib/kubelet/pods/36ad5a83-5c32-4941-94e0-7af86ac5d462/volumes/kubernetes.io~projected/kube-api-access-mqsh5 major:0 minor:563 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/36ad5a83-5c32-4941-94e0-7af86ac5d462/volumes/kubernetes.io~secret/webhook-certs:{mountpoint:/var/lib/kubelet/pods/36ad5a83-5c32-4941-94e0-7af86ac5d462/volumes/kubernetes.io~secret/webhook-certs major:0 minor:402 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/3d653e1a-5903-4a02-9357-df145f028c0d/volumes/kubernetes.io~projected/kube-api-access-6x8kz:{mountpoint:/var/lib/kubelet/pods/3d653e1a-5903-4a02-9357-df145f028c0d/volumes/kubernetes.io~projected/kube-api-access-6x8kz major:0 minor:222 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/3d653e1a-5903-4a02-9357-df145f028c0d/volumes/kubernetes.io~secret/package-server-manager-serving-cert:{mountpoint:/var/lib/kubelet/pods/3d653e1a-5903-4a02-9357-df145f028c0d/volumes/kubernetes.io~secret/package-server-manager-serving-cert major:0 minor:442 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/45925a5e-41ae-4c19-b586-3151c7677612/volumes/kubernetes.io~projected/kube-api-access-tll9d:{mountpoint:/var/lib/kubelet/pods/45925a5e-41ae-4c19-b586-3151c7677612/volumes/kubernetes.io~projected/kube-api-access-tll9d major:0 minor:1003 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/45925a5e-41ae-4c19-b586-3151c7677612/volumes/kubernetes.io~secret/default-certificate:{mountpoint:/var/lib/kubelet/pods/45925a5e-41ae-4c19-b586-3151c7677612/volumes/kubernetes.io~secret/default-certificate major:0 minor:1001 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/45925a5e-41ae-4c19-b586-3151c7677612/volumes/kubernetes.io~secret/metrics-certs:{mountpoint:/var/lib/kubelet/pods/45925a5e-41ae-4c19-b586-3151c7677612/volumes/kubernetes.io~secret/metrics-certs major:0 minor:1002 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/45925a5e-41ae-4c19-b586-3151c7677612/volumes/kubernetes.io~secret/stats-auth:{mountpoint:/var/lib/kubelet/pods/45925a5e-41ae-4c19-b586-3151c7677612/volumes/kubernetes.io~secret/stats-auth major:0 minor:1000 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/4dd0fc2f-f2ee-4447-a747-04a178288cf0/volumes/kubernetes.io~projected/kube-api-access-fnw9d:{mountpoint:/var/lib/kubelet/pods/4dd0fc2f-f2ee-4447-a747-04a178288cf0/volumes/kubernetes.io~projected/kube-api-access-fnw9d major:0 minor:104 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/4dd0fc2f-f2ee-4447-a747-04a178288cf0/volumes/kubernetes.io~secret/metrics-tls:{mountpoint:/var/lib/kubelet/pods/4dd0fc2f-f2ee-4447-a747-04a178288cf0/volumes/kubernetes.io~secret/metrics-tls major:0 minor:98 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/4e279dcc-35e2-4503-babc-978ac208c150/volumes/kubernetes.io~projected/kube-api-access-bwjz5:{mountpoint:/var/lib/kubelet/pods/4e279dcc-35e2-4503-babc-978ac208c150/volumes/kubernetes.io~projected/kube-api-access-bwjz5 major:0 minor:246 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/4f9e6618-62b5-4181-b545-211461811140/volumes/kubernetes.io~projected/kube-api-access-tr9gm:{mountpoint:/var/lib/kubelet/pods/4f9e6618-62b5-4181-b545-211461811140/volumes/kubernetes.io~projected/kube-api-access-tr9gm major:0 minor:453 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/50a2046b-092b-434c-92a2-579f4462c4fb/volumes/kubernetes.io~projected/kube-api-access-mnpds:{mountpoint:/var/lib/kubelet/pods/50a2046b-092b-434c-92a2-579f4462c4fb/volumes/kubernetes.io~projected/kube-api-access-mnpds major:0 minor:751 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/50a2046b-092b-434c-92a2-579f4462c4fb/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/50a2046b-092b-434c-92a2-579f4462c4fb/volumes/kubernetes.io~secret/serving-cert major:0 minor:677 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/50be3c2b-284b-4f60-b4ed-2cc7b4e528fa/volumes/kubernetes.io~projected/kube-api-access-jbwwp:{mountpoint:/var/lib/kubelet/pods/50be3c2b-284b-4f60-b4ed-2cc7b4e528fa/volumes/kubernetes.io~projected/kube-api-access-jbwwp major:0 minor:857 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/50be3c2b-284b-4f60-b4ed-2cc7b4e528fa/volumes/kubernetes.io~secret/proxy-tls:{mountpoint:/var/lib/kubelet/pods/50be3c2b-284b-4f60-b4ed-2cc7b4e528fa/volumes/kubernetes.io~secret/proxy-tls major:0 minor:852 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/5ae41cff-0949-47f8-aae9-ae133191476d/volumes/kubernetes.io~projected/kube-api-access-mlvjp:{mountpoint:/var/lib/kubelet/pods/5ae41cff-0949-47f8-aae9-ae133191476d/volumes/kubernetes.io~projected/kube-api-access-mlvjp major:0 minor:125 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/5ae41cff-0949-47f8-aae9-ae133191476d/volumes/kubernetes.io~secret/ovn-control-plane-metrics-cert:{mountpoint:/var/lib/kubelet/pods/5ae41cff-0949-47f8-aae9-ae133191476d/volumes/kubernetes.io~secret/ovn-control-plane-metrics-cert major:0 minor:124 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/5e4f10ca-6466-4ac0-aeb7-325e40473e04/volumes/kubernetes.io~projected/kube-api-access-4xbrx:{mountpoint:/var/lib/kubelet/pods/5e4f10ca-6466-4ac0-aeb7-325e40473e04/volumes/kubernetes.io~projected/kube-api-access-4xbrx major:0 minor:1079 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/5e4f10ca-6466-4ac0-aeb7-325e40473e04/volumes/kubernetes.io~secret/kube-state-metrics-kube-rbac-proxy-config:{mountpoint:/var/lib/kubelet/pods/5e4f10ca-6466-4ac0-aeb7-325e40473e04/volumes/kubernetes.io~secret/kube-state-metrics-kube-rbac-proxy-config major:0 minor:1077 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/5e4f10ca-6466-4ac0-aeb7-325e40473e04/volumes/kubernetes.io~secret/kube-state-metrics-tls:{mountpoint:/var/lib/kubelet/pods/5e4f10ca-6466-4ac0-aeb7-325e40473e04/volumes/kubernetes.io~secret/kube-state-metrics-tls major:0 minor:1085 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/604456a0-4997-43bc-87ef-283a002111fe/volumes/kubernetes.io~projected/kube-api-access-8sk7j:{mountpoint:/var/lib/kubelet/pods/604456a0-4997-43bc-87ef-283a002111fe/volumes/kubernetes.io~projected/kube-api-access-8sk7j major:0 minor:247 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/604456a0-4997-43bc-87ef-283a002111fe/volumes/kubernetes.io~secret/cluster-monitoring-operator-tls:{mountpoint:/var/lib/kubelet/pods/604456a0-4997-43bc-87ef-283a002111fe/volumes/kubernetes.io~secret/cluster-monitoring-operator-tls major:0 minor:441 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/676b054a-e76f-425d-a6ff-3f1bea8b523e/volumes/kubernetes.io~projected/kube-api-access:{mountpoint:/var/lib/kubelet/pods/676b054a-e76f-425d-a6ff-3f1bea8b523e/volumes/kubernetes.io~projected/kube-api-access major:0 minor:446 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/676b054a-e76f-425d-a6ff-3f1bea8b523e/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/676b054a-e76f-425d-a6ff-3f1bea8b523e/volumes/kubernetes.io~secret/serving-cert major:0 minor:102 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/6a42098e-4633-456f-ace7-bd3ee3bb6707/volumes/kubernetes.io~projected/kube-api-access-7mmbc:{mountpoint:/var/lib/kubelet/pods/6a42098e-4633-456f-ace7-bd3ee3bb6707/volumes/kubernetes.io~projected/kube-api-access-7mmbc major:0 minor:1005 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/747659a6-4a1e-43ed-bb8e-36da6e63b5a1/volumes/kubernetes.io~projected/kube-api-access-qxcvd:{mountpoint:/var/lib/kubelet/pods/747659a6-4a1e-43ed-bb8e-36da6e63b5a1/volumes/kubernetes.io~projected/kube-api-access-qxcvd major:0 minor:784 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/747659a6-4a1e-43ed-bb8e-36da6e63b5a1/volumes/kubernetes.io~secret/control-plane-machine-set-operator-tls:{mountpoint:/var/lib/kubelet/pods/747659a6-4a1e-43ed-bb8e-36da6e63b5a1/volumes/kubernetes.io~secret/control-plane-machine-set-operator-tls major:0 minor:781 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/77ef7e49-eb85-4f5e-94d3-a6a8619a6243/volumes/kubernetes.io~projected/kube-api-access:{mountpoint:/var/lib/kubelet/pods/77ef7e49-eb85-4f5e-94d3-a6a8619a6243/volumes/kubernetes.io~projected/kube-api-access major:0 minor:224 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/77ef7e49-eb85-4f5e-94d3-a6a8619a6243/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/77ef7e49-eb85-4f5e-94d3-a6a8619a6243/volumes/kubernetes.io~secret/serving-cert major:0 minor:220 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/81f8a7d8-b6a2-4522-91d3-bb524997ed0a/volumes/kubernetes.io~projected/kube-api-access-gd6q6:{mountpoint:/var/lib/kubelet/pods/81f8a7d8-b6a2-4522-91d3-bb524997ed0a/volumes/kubernetes.io~projected/kube-api-access-gd6q6 major:0 minor:1004 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/81f8a7d8-b6a2-4522-91d3-bb524997ed0a/volumes/kubernetes.io~secret/cert:{mountpoint:/var/lib/kubelet/pods/81f8a7d8-b6a2-4522-91d3-bb524997ed0a/volumes/kubernetes.io~secret/cert major:0 minor:995 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/842251bd-238a-44ba-99fc-a356503f5d16/volumes/kubernetes.io~projected/kube-api-access-9v2jm:{mountpoint:/var/lib/kubelet/pods/842251bd-238a-44ba-99fc-a356503f5d16/volumes/kubernetes.io~projected/kube-api-access-9v2jm major:0 minor:1080 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/842251bd-238a-44ba-99fc-a356503f5d16/volumes/kubernetes.io~secret/node-exporter-kube-rbac-proxy-config:{mountpoint:/var/lib/kubelet/pods/842251bd-238a-44ba-99fc-a356503f5d16/volumes/kubernetes.io~secret/node-exporter-kube-rbac-proxy-config major:0 minor:1076 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/842251bd-238a-44ba-99fc-a356503f5d16/volumes/kubernetes.io~secret/node-exporter-tls:{mountpoint:/var/lib/kubelet/pods/842251bd-238a-44ba-99fc-a356503f5d16/volumes/kubernetes.io~secret/node-exporter-tls major:0 minor:1084 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/866b0545-e232-4c80-9fb6-549d313ac3fc/volumes/kubernetes.io~secret/tls-certificates:{mountpoint:/var/lib/kubelet/pods/866b0545-e232-4c80-9fb6-549d313ac3fc/volumes/kubernetes.io~secret/tls-certificates major:0 minor:999 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/87a5904a-55ca-416f-8aec-57a2b5194c5a/volumes/kubernetes.io~projected/kube-api-access-mddhv:{mountpoint:/var/lib/kubelet/pods/87a5904a-55ca-416f-8aec-57a2b5194c5a/volumes/kubernetes.io~projected/kube-api-access-mddhv major:0 minor:757 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/87a5904a-55ca-416f-8aec-57a2b5194c5a/volumes/kubernetes.io~secret/cloud-credential-operator-serving-cert:{mountpoint:/var/lib/kubelet/pods/87a5904a-55ca-416f-8aec-57a2b5194c5a/volumes/kubernetes.io~secret/cloud-credential-operator-serving-cert major:0 minor:756 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/887d261f-d07f-4ef0-a230-6568f47acf4d/volumes/kubernetes.io~projected/kube-api-access-pmfxj:{mountpoint:/var/lib/kubelet/pods/887d261f-d07f-4ef0-a230-6568f47acf4d/volumes/kubernetes.io~projected/kube-api-access-pmfxj major:0 minor:227 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/887d261f-d07f-4ef0-a230-6568f47acf4d/volumes/kubernetes.io~secret/cluster-olm-operator-serving-cert:{mountpoint:/var/lib/kubelet/pods/887d261f-d07f-4ef0-a230-6568f47acf4d/volumes/kubernetes.io~secret/cluster-olm-operator-serving-cert major:0 minor:219 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/8c62b15f-001a-4b64-b85f-348aefde5d1b/volumes/kubernetes.io~projected/kube-api-access-8cf2v:{mountpoint:/var/lib/kubelet/pods/8c62b15f-001a-4b64-b85f-348aefde5d1b/volumes/kubernetes.io~projected/kube-api-access-8cf2v major:0 minor:234 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/8c62b15f-001a-4b64-b85f-348aefde5d1b/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/8c62b15f-001a-4b64-b85f-348aefde5d1b/volumes/kubernetes.io~secret/serving-cert major:0 minor:216 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/915aabfe-1071-4bfc-b291-424304dfe7d8/volumes/kubernetes.io~projected/ca-certs:{mountpoint:/var/lib/kubelet/pods/915aabfe-1071-4bfc-b291-424304dfe7d8/volumes/kubernetes.io~projected/ca-certs major:0 minor:429 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/915aabfe-1071-4bfc-b291-424304dfe7d8/volumes/kubernetes.io~projected/kube-api-access-n85n6:{mountpoint:/var/lib/kubelet/pods/915aabfe-1071-4bfc-b291-424304dfe7d8/volumes/kubernetes.io~projected/kube-api-access-n85n6 major:0 minor:435 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/a454234a-6c8e-4916-81e8-c9e66cec9d31/volumes/kubernetes.io~projected/kube-api-access-kn8f2:{mountpoint:/var/lib/kubelet/pods/a454234a-6c8e-4916-81e8-c9e66cec9d31/volumes/kubernetes.io~projected/kube-api-access-kn8f2 major:0 minor:678 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/a454234a-6c8e-4916-81e8-c9e66cec9d31/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/a454234a-6c8e-4916-81e8-c9e66cec9d31/volumes/kubernetes.io~secret/serving-cert major:0 minor:443 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/b12a6f33-70df-4832-ac3b-0d2b94125fbf/volumes/kubernetes.io~projected/kube-api-access-9p9dz:{mountpoint:/var/lib/kubelet/pods/b12a6f33-70df-4832-ac3b-0d2b94125fbf/volumes/kubernetes.io~projected/kube-api-access-9p9dz major:0 minor:867 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/b12a6f33-70df-4832-ac3b-0d2b94125fbf/volumes/kubernetes.io~secret/machine-approver-tls:{mountpoint:/var/lib/kubelet/pods/b12a6f33-70df-4832-ac3b-0d2b94125fbf/volumes/kubernetes.io~secret/machine-approver-tls major:0 minor:747 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/bcf05594-4c10-4b54-a47c-d55e323f1f87/volumes/kubernetes.io~projected/bound-sa-token:{mountpoint:/var/lib/kubelet/pods/bcf05594-4c10-4b54-a47c-d55e323f1f87/volumes/kubernetes.io~projected/bound-sa-token major:0 minor:239 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/bcf05594-4c10-4b54-a47c-d55e323f1f87/volumes/kubernetes.io~projected/kube-api-access-j4hd6:{mountpoint:/var/lib/kubelet/pods/bcf05594-4c10-4b54-a47c-d55e323f1f87/volumes/kubernetes.io~projected/kube-api-access-j4hd6 major:0 minor:298 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/bcf05594-4c10-4b54-a47c-d55e323f1f87/volumes/kubernetes.io~secret/image-registry-operator-tls:{mountpoint:/var/lib/kubelet/pods/bcf05594-4c10-4b54-a47c-d55e323f1f87/volumes/kubernetes.io~secret/image-registry-operator-tls major:0 minor:449 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/be89c006-0c82-4728-9c79-210303e623dc/volumes/kubernetes.io~projected/kube-api-access-dd4m8:{mountpoint:/var/lib/kubelet/pods/be89c006-0c82-4728-9c79-210303e623dc/volumes/kubernetes.io~projected/kube-api-access-dd4m8 major:0 minor:1059 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/be89c006-0c82-4728-9c79-210303e623dc/volumes/kubernetes.io~secret/prometheus-operator-kube-rbac-proxy-config:{mountpoint:/var/lib/kubelet/pods/be89c006-0c82-4728-9c79-210303e623dc/volumes/kubernetes.io~secret/prometheus-operator-kube-rbac-proxy-config major:0 minor:1058 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/be89c006-0c82-4728-9c79-210303e623dc/volumes/kubernetes.io~secret/prometheus-operator-tls:{mountpoint:/var/lib/kubelet/pods/be89c006-0c82-4728-9c79-210303e623dc/volumes/kubernetes.io~secret/prometheus-operator-tls major:0 minor:1054 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/c0f3e81c-f61d-430a-98e8-82e3b283fc73/volumes/kubernetes.io~projected/kube-api-access-65ts9:{mountpoint:/var/lib/kubelet/pods/c0f3e81c-f61d-430a-98e8-82e3b283fc73/volumes/kubernetes.io~projected/kube-api-access-65ts9 major:0 minor:394 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/c0f3e81c-f61d-430a-98e8-82e3b283fc73/volumes/kubernetes.io~secret/signing-key:{mountpoint:/var/lib/kubelet/pods/c0f3e81c-f61d-430a-98e8-82e3b283fc73/volumes/kubernetes.io~secret/signing-key major:0 minor:393 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/c4477be6-bcff-407a-8033-b005e19bf5d6/volumes/kubernetes.io~projected/kube-api-access-d4q4x:{mountpoint:/var/lib/kubelet/pods/c4477be6-bcff-407a-8033-b005e19bf5d6/volumes/kubernetes.io~projected/kube-api-access-d4q4x major:0 minor:627 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/c4477be6-bcff-407a-8033-b005e19bf5d6/volumes/kubernetes.io~secret/encryption-config:{mountpoint:/var/lib/kubelet/pods/c4477be6-bcff-407a-8033-b005e19bf5d6/volumes/kubernetes.io~secret/encryption-config major:0 minor:523 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/c4477be6-bcff-407a-8033-b005e19bf5d6/volumes/kubernetes.io~secret/etcd-client:{mountpoint:/var/lib/kubelet/pods/c4477be6-bcff-407a-8033-b005e19bf5d6/volumes/kubernetes.io~secret/etcd-client major:0 minor:626 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/c4477be6-bcff-407a-8033-b005e19bf5d6/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/c4477be6-bcff-407a-8033-b005e19bf5d6/volumes/kubernetes.io~secret/serving-cert major:0 minor:625 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/c642c18f-f960-4418-bcb7-df884f8f8ad5/volumes/kubernetes.io~projected/kube-api-access-8t2jl:{mountpoint:/var/lib/kubelet/pods/c642c18f-f960-4418-bcb7-df884f8f8ad5/volumes/kubernetes.io~projected/kube-api-access-8t2jl major:0 minor:312 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/ce3a655a-0684-4bc5-ac36-5878507537c7/volumes/kubernetes.io~projected/kube-api-access-vgbvr:{mountpoint:/var/lib/kubelet/pods/ce3a655a-0684-4bc5-ac36-5878507537c7/volumes/kubernetes.io~projected/kube-api-access-vgbvr major:0 minor:103 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/d11f8baa-6e8e-4ac0-9b23-1c44efd0ab2a/volumes/kubernetes.io~projected/kube-api-access-m4tnq:{mountpoint:/var/lib/kubelet/pods/d11f8baa-6e8e-4ac0-9b23-1c44efd0ab2a/volumes/kubernetes.io~projected/kube-api-access-m4tnq major:0 minor:235 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/d11f8baa-6e8e-4ac0-9b23-1c44efd0ab2a/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/d11f8baa-6e8e-4ac0-9b23-1c44efd0ab2a/volumes/kubernetes.io~secret/serving-cert major:0 minor:221 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/d39ee5d7-840e-4481-b0b9-baf34da2c7b1/volumes/kubernetes.io~projected/kube-api-access-rvrc7:{mountpoint:/var/lib/kubelet/pods/d39ee5d7-840e-4481-b0b9-baf34da2c7b1/volumes/kubernetes.io~projected/kube-api-access-rvrc7 major:0 minor:750 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/d39ee5d7-840e-4481-b0b9-baf34da2c7b1/volumes/kubernetes.io~secret/samples-operator-tls:{mountpoint:/var/lib/kubelet/pods/d39ee5d7-840e-4481-b0b9-baf34da2c7b1/volumes/kubernetes.io~secret/samples-operator-tls major:0 minor:738 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/d3d998ee-b26f-4e30-83bc-f94f8c68060a/volumes/kubernetes.io~projected/kube-api-access-x5nb7:{mountpoint:/var/lib/kubelet/pods/d3d998ee-b26f-4e30-83bc-f94f8c68060a/volumes/kubernetes.io~projected/kube-api-access-x5nb7 major:0 minor:294 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/d3d998ee-b26f-4e30-83bc-f94f8c68060a/volumes/kubernetes.io~secret/marketplace-operator-metrics:{mountpoint:/var/lib/kubelet/pods/d3d998ee-b26f-4e30-83bc-f94f8c68060a/volumes/kubernetes.io~secret/marketplace-operator-metrics major:0 minor:450 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/d44112d1-b2a5-4b8d-b74d-1e91638508d5/volumes/kubernetes.io~projected/kube-api-access-tdlrq:{mountpoint:/var/lib/kubelet/pods/d44112d1-b2a5-4b8d-b74d-1e91638508d5/volumes/kubernetes.io~projected/kube-api-access-tdlrq major:0 minor:763 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/d44112d1-b2a5-4b8d-b74d-1e91638508d5/volumes/kubernetes.io~secret/cert:{mountpoint:/var/lib/kubelet/pods/d44112d1-b2a5-4b8d-b74d-1e91638508d5/volumes/kubernetes.io~secret/cert major:0 minor:755 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/d47a1118-c12f-4234-8c0f-1a2a47fa8a4f/volumes/kubernetes.io~projected/kube-api-access-mkvfp:{mountpoint:/var/lib/kubelet/pods/d47a1118-c12f-4234-8c0f-1a2a47fa8a4f/volumes/kubernetes.io~projected/kube-api-access-mkvfp major:0 minor:778 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/d47a1118-c12f-4234-8c0f-1a2a47fa8a4f/volumes/kubernetes.io~secret/proxy-tls:{mountpoint:/var/lib/kubelet/pods/d47a1118-c12f-4234-8c0f-1a2a47fa8a4f/volumes/kubernetes.io~secret/proxy-tls major:0 minor:780 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/d5a19b80-d488-46d3-a4a8-0b80361077e1/volumes/kubernetes.io~projected/kube-api-access-p8hcd:{mountpoint:/var/lib/kubelet/pods/d5a19b80-d488-46d3-a4a8-0b80361077e1/volumes/kubernetes.io~projected/kube-api-access-p8hcd major:0 minor:226 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/d5a19b80-d488-46d3-a4a8-0b80361077e1/volumes/kubernetes.io~secret/srv-cert:{mountpoint:/var/lib/kubelet/pods/d5a19b80-d488-46d3-a4a8-0b80361077e1/volumes/kubernetes.io~secret/srv-cert major:0 minor:472 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/d5f63b6b-990a-444b-a954-d718036f2f6c/volumes/kubernetes.io~projected/kube-api-access-rw27v:{mountpoint:/var/lib/kubelet/pods/d5f63b6b-990a-444b-a954-d718036f2f6c/volumes/kubernetes.io~projected/kube-api-access-rw27v major:0 minor:783 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/d5f63b6b-990a-444b-a954-d718036f2f6c/volumes/kubernetes.io~secret/machine-api-operator-tls:{mountpoint:/var/lib/kubelet/pods/d5f63b6b-990a-444b-a954-d718036f2f6c/volumes/kubernetes.io~secret/machine-api-operator-tls major:0 minor:773 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/d6226325-c4d9-497e-8d19-a71adc66c5ac/volume-subpaths/run-systemd/ovnkube-controller/6:{mountpoint:/var/lib/kubelet/pods/d6226325-c4d9-497e-8d19-a71adc66c5ac/volume-subpaths/run-systemd/ovnkube-controller/6 major:0 minor:24 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/d6226325-c4d9-497e-8d19-a71adc66c5ac/volumes/kubernetes.io~projected/kube-api-access-4j5fc:{mountpoint:/var/lib/kubelet/pods/d6226325-c4d9-497e-8d19-a71adc66c5ac/volumes/kubernetes.io~projected/kube-api-access-4j5fc major:0 minor:127 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/d6226325-c4d9-497e-8d19-a71adc66c5ac/volumes/kubernetes.io~secret/ovn-node-metrics-cert:{mountpoint:/var/lib/kubelet/pods/d6226325-c4d9-497e-8d19-a71adc66c5ac/volumes/kubernetes.io~secret/ovn-node-metrics-cert major:0 minor:126 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/d7d67915-d31e-46dc-bb2e-1a6f689dd875/volumes/kubernetes.io~projected/kube-api-access-69hws:{mountpoint:/var/lib/kubelet/pods/d7d67915-d31e-46dc-bb2e-1a6f689dd875/volumes/kubernetes.io~projected/kube-api-access-69hws major:0 minor:764 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/d7d67915-d31e-46dc-bb2e-1a6f689dd875/volumes/kubernetes.io~secret/cluster-storage-operator-serving-cert:{mountpoint:/var/lib/kubelet/pods/d7d67915-d31e-46dc-bb2e-1a6f689dd875/volumes/kubernetes.io~secret/cluster-storage-operator-serving-cert major:0 minor:753 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/e0ce4c51-2b9f-410f-93e5-9c2ff718dd71/volumes/kubernetes.io~projected/kube-api-access-cscql:{mountpoint:/var/lib/kubelet/pods/e0ce4c51-2b9f-410f-93e5-9c2ff718dd71/volumes/kubernetes.io~projected/kube-api-access-cscql major:0 minor:649 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/e25bef76-7020-4f86-8dee-a58ebed537d2/volumes/kubernetes.io~projected/kube-api-access-r8gcb:{mountpoint:/var/lib/kubelet/pods/e25bef76-7020-4f86-8dee-a58ebed537d2/volumes/kubernetes.io~projected/kube-api-access-r8gcb major:0 minor:982 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/e25bef76-7020-4f86-8dee-a58ebed537d2/volumes/kubernetes.io~secret/proxy-tls:{mountpoint:/var/lib/kubelet/pods/e25bef76-7020-4f86-8dee-a58ebed537d2/volumes/kubernetes.io~secret/proxy-tls major:0 minor:941 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/ec5ec2e2-f7b3-43a1-87da-fbbe0ee5b118/volumes/kubernetes.io~projected/kube-api-access:{mountpoint:/var/lib/kubelet/pods/ec5ec2e2-f7b3-43a1-87da-fbbe0ee5b118/volumes/kubernetes.io~projected/kube-api-access major:0 minor:233 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/ec5ec2e2-f7b3-43a1-87da-fbbe0ee5b118/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/ec5ec2e2-f7b3-43a1-87da-fbbe0ee5b118/volumes/kubernetes.io~secret/serving-cert major:0 minor:223 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/ef42b65e-2d92-46ac-baaf-30e213787781/volumes/kubernetes.io~projected/kube-api-access-xxjbd:{mountpoint:/var/lib/kubelet/pods/ef42b65e-2d92-46ac-baaf-30e213787781/volumes/kubernetes.io~projected/kube-api-access-xxjbd major:0 minor:630 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/ef42b65e-2d92-46ac-baaf-30e213787781/volumes/kubernetes.io~secret/metrics-tls:{mountpoint:/var/lib/kubelet/pods/ef42b65e-2d92-46ac-baaf-30e213787781/volumes/kubernetes.io~secret/metrics-tls major:0 minor:640 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/f0803181-4e37-43fa-8ddc-9c76d3f61817/volumes/kubernetes.io~projected/kube-api-access-lwkdj:{mountpoint:/var/lib/kubelet/pods/f0803181-4e37-43fa-8ddc-9c76d3f61817/volumes/kubernetes.io~projected/kube-api-access-lwkdj major:0 minor:301 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/f0803181-4e37-43fa-8ddc-9c76d3f61817/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/f0803181-4e37-43fa-8ddc-9c76d3f61817/volumes/kubernetes.io~secret/serving-cert major:0 minor:217 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/f31565e2-c211-4d28-8bbc-d7a951023a8b/volumes/kubernetes.io~projected/kube-api-access-kwk62:{mountpoint:/var/lib/kubelet/pods/f31565e2-c211-4d28-8bbc-d7a951023a8b/volumes/kubernetes.io~projected/kube-api-access-kwk62 major:0 minor:409 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/f5775266-5e58-44ed-81cb-dfe3faf38add/volumes/kubernetes.io~projected/kube-api-access-9q2qc:{mountpoint:/var/lib/kubelet/pods/f5775266-5e58-44ed-81cb-dfe3faf38add/volumes/kubernetes.io~projected/kube-api-access-9q2qc major:0 minor:228 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/f5775266-5e58-44ed-81cb-dfe3faf38add/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/f5775266-5e58-44ed-81cb-dfe3faf38add/volumes/kubernetes.io~secret/serving-cert major:0 minor:218 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/f6992fed-b472-4a2d-a376-c5d72aa846d4/volumes/kubernetes.io~projected/kube-api-access-4n75n:{mountpoint:/var/lib/kubelet/pods/f6992fed-b472-4a2d-a376-c5d72aa846d4/volumes/kubernetes.io~projected/kube-api-access-4n75n major:0 minor:777 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/f6992fed-b472-4a2d-a376-c5d72aa846d4/volumes/kubernetes.io~secret/apiservice-cert:{mountpoint:/var/lib/kubelet/pods/f6992fed-b472-4a2d-a376-c5d72aa846d4/volumes/kubernetes.io~secret/apiservice-cert major:0 minor:779 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/f6992fed-b472-4a2d-a376-c5d72aa846d4/volumes/kubernetes.io~secret/webhook-cert:{mountpoint:/var/lib/kubelet/pods/f6992fed-b472-4a2d-a376-c5d72aa846d4/volumes/kubernetes.io~secret/webhook-cert major:0 minor:772 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/f83e0d3e-1f73-4727-8ee3-375cbb9e36f8/volumes/kubernetes.io~empty-dir/etc-tuned:{mountpoint:/var/lib/kubelet/pods/f83e0d3e-1f73-4727-8ee3-375cbb9e36f8/volumes/kubernetes.io~empty-dir/etc-tuned major:0 minor:496 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/f83e0d3e-1f73-4727-8ee3-375cbb9e36f8/volumes/kubernetes.io~empty-dir/tmp:{mountpoint:/var/lib/kubelet/pods/f83e0d3e-1f73-4727-8ee3-375cbb9e36f8/volumes/kubernetes.io~empty-dir/tmp major:0 minor:520 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/f83e0d3e-1f73-4727-8ee3-375cbb9e36f8/volumes/kubernetes.io~projected/kube-api-access-p6h9f:{mountpoint:/var/lib/kubelet/pods/f83e0d3e-1f73-4727-8ee3-375cbb9e36f8/volumes/kubernetes.io~projected/kube-api-access-p6h9f major:0 minor:511 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/fc192c03-5aec-4507-a702-56bf98c96e9c/volumes/kubernetes.io~projected/kube-api-access-c69h2:{mountpoint:/var/lib/kubelet/pods/fc192c03-5aec-4507-a702-56bf98c96e9c/volumes/kubernetes.io~projected/kube-api-access-c69h2 major:0 minor:1141 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/fc192c03-5aec-4507-a702-56bf98c96e9c/volumes/kubernetes.io~secret/client-ca-bundle:{mountpoint:/var/lib/kubelet/pods/fc192c03-5aec-4507-a702-56bf98c96e9c/volumes/kubernetes.io~secret/client-ca-bundle major:0 minor:1139 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/fc192c03-5aec-4507-a702-56bf98c96e9c/volumes/kubernetes.io~secret/secret-metrics-client-certs:{mountpoint:/var/lib/kubelet/pods/fc192c03-5aec-4507-a702-56bf98c96e9c/volumes/kubernetes.io~secret/secret-metrics-client-certs major:0 minor:1135 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/fc192c03-5aec-4507-a702-56bf98c96e9c/volumes/kubernetes.io~secret/secret-metrics-server-tls:{mountpoint:/var/lib/kubelet/pods/fc192c03-5aec-4507-a702-56bf98c96e9c/volumes/kubernetes.io~secret/secret-metrics-server-tls major:0 minor:1140 fsType:tmpfs blockSize:0} overlay_0-1014:{mountpoint:/var/lib/containers/storage/overlay/7fa40d1fd6117a40b014d9d522705cd29bd4058d27d964ea4df1f572d3e721dd/merged major:0 minor:1014 fsType:overlay blockSize:0} overlay_0-1016:{mountpoint:/var/lib/containers/storage/overlay/0ca8b5b90f2abb8a8d92bb76b86603393b1b2adfc3e7aa02b8be24e4f6a03107/merged major:0 minor:1016 fsType:overlay blockSize:0} overlay_0-1018:{mountpoint:/var/lib/containers/storage/overlay/3d5598cfc93a1cc705ba7ca73e9660fbe8c486e2b4c1e3835dd180eee14a47f6/merged major:0 minor:1018 fsType:overlay blockSize:0} overlay_0-1020:{mountpoint:/var/lib/containers/storage/overlay/106f0a77ffef987f83a0b055e87277c228971b2b718d2f85204c15a08c491fb4/merged major:0 minor:1020 fsType:overlay blockSize:0} overlay_0-1022:{mountpoint:/var/lib/containers/storage/overlay/7e4478d0d46549c2ce0f8f6f44e769f85f1f02f1817abbb7f080293717d9e0cc/merged major:0 minor:1022 fsType:overlay blockSize:0} overlay_0-1027:{mountpoint:/var/lib/containers/storage/overlay/6cb000d926cb728be75cb5b60f5fd05a6d67dff183752febe05bf9a3bd521f6e/merged major:0 minor:1027 fsType:overlay blockSize:0} overlay_0-1030:{mountpoint:/var/lib/containers/storage/overlay/1eca20b32d5ac2dab74034a453a6b4186131e092286a33d411e2240e2f5db6b7/merged major:0 minor:1030 fsType:overlay blockSize:0} overlay_0-1050:{mountpoint:/var/lib/containers/storage/overlay/641b67e66bb86caa09a1a8e76cd8bc1739165de60bde96787ee8b36f01c3e650/merged major:0 minor:1050 fsType:overlay blockSize:0} overlay_0-1052:{mountpoint:/var/lib/containers/storage/overlay/bcc98c2e8d7d0110da3908f87f154ef56b0e350bb836ef248e4c9d999b99cc59/merged major:0 minor:1052 fsType:overlay blockSize:0} 
overlay_0-1062:{mountpoint:/var/lib/containers/storage/overlay/4aa92229176ddc57a9d3e77a5462b88d69a22b2f8d10052fb9e18e4a7bdbeaf3/merged major:0 minor:1062 fsType:overlay blockSize:0} overlay_0-1064:{mountpoint:/var/lib/containers/storage/overlay/f721fc758576d88064b6484b62017fe197b982ba894fe0ef6e294f09181c1870/merged major:0 minor:1064 fsType:overlay blockSize:0} overlay_0-1066:{mountpoint:/var/lib/containers/storage/overlay/b5347b90e69c781e337a155ccc9deb0c613bf8c8a6681069dc31b3f314ea0e77/merged major:0 minor:1066 fsType:overlay blockSize:0} overlay_0-107:{mountpoint:/var/lib/containers/storage/overlay/2e20967f955dc17b81fb0fd2c7d0fdd1a3bd0b7fe7919562d47cfdf1c031f722/merged major:0 minor:107 fsType:overlay blockSize:0} overlay_0-1088:{mountpoint:/var/lib/containers/storage/overlay/8be1bc1c2e4991355e1c93df6d8985771f7880873f98a939b37654acb5485d99/merged major:0 minor:1088 fsType:overlay blockSize:0} overlay_0-1092:{mountpoint:/var/lib/containers/storage/overlay/4f46ebc3a596e1e7d1423ed8eec14f621a1e43a76944a11ba4062ca3222a1629/merged major:0 minor:1092 fsType:overlay blockSize:0} overlay_0-1094:{mountpoint:/var/lib/containers/storage/overlay/e0d284d40753bd3116b81c28e8ce9f63ff7dd10f842700404535955a7b3db6c3/merged major:0 minor:1094 fsType:overlay blockSize:0} overlay_0-1096:{mountpoint:/var/lib/containers/storage/overlay/1c1e5cf2c8ae36dd1d19d55eb506b68898558f6e3006a64fe52b45efd79d6bf4/merged major:0 minor:1096 fsType:overlay blockSize:0} overlay_0-1102:{mountpoint:/var/lib/containers/storage/overlay/415bf99a0989d0d896e2a1e1301807bdc0dd20cd6f700dbc61dae79f74b4cc0e/merged major:0 minor:1102 fsType:overlay blockSize:0} overlay_0-1107:{mountpoint:/var/lib/containers/storage/overlay/62b8b9a78c11a1bd3401a63e5f747290b11737a65c58f728634a437f3c051ee6/merged major:0 minor:1107 fsType:overlay blockSize:0} overlay_0-1109:{mountpoint:/var/lib/containers/storage/overlay/e60a153097aa4f4a72b9b404409aea33e1285d48ba4da712b39967d8a9871bc0/merged major:0 minor:1109 fsType:overlay blockSize:0} 
overlay_0-111:{mountpoint:/var/lib/containers/storage/overlay/e1a5e4a7e8219ade54c4cd7205ee7702377e47b938ab01de7ed63fc320fdffe8/merged major:0 minor:111 fsType:overlay blockSize:0} overlay_0-1111:{mountpoint:/var/lib/containers/storage/overlay/481eb7b6b4704dce43d9300cef4fc4dc6e88deca4d15122bf7dcbde65e36a05f/merged major:0 minor:1111 fsType:overlay blockSize:0} overlay_0-1120:{mountpoint:/var/lib/containers/storage/overlay/26e1d5edd5089708f4bb50855b839509e11c4f575b3b0f999df975ebd0f8e80a/merged major:0 minor:1120 fsType:overlay blockSize:0} overlay_0-1125:{mountpoint:/var/lib/containers/storage/overlay/1b01f4678e2c6aacbee32eab46140f86d1dbd61715b3c3160c8c7e3d6c60f2f8/merged major:0 minor:1125 fsType:overlay blockSize:0} overlay_0-1126:{mountpoint:/var/lib/containers/storage/overlay/095c6a8ad7c5e676e7c6f9c4ef646a3067208d582cb9443e5a30bb9e9fe56ab4/merged major:0 minor:1126 fsType:overlay blockSize:0} overlay_0-1144:{mountpoint:/var/lib/containers/storage/overlay/447a4aa352cdb6ecf96e829ae7f3b04600b7c3631aed2b63b7c7156abb1d26cd/merged major:0 minor:1144 fsType:overlay blockSize:0} overlay_0-1146:{mountpoint:/var/lib/containers/storage/overlay/f69c49ab8e7b12948c1c54ca1f654d738dcd46fbb7cc4678ef4f73f317b9a8dd/merged major:0 minor:1146 fsType:overlay blockSize:0} overlay_0-1153:{mountpoint:/var/lib/containers/storage/overlay/e157cf88470f600f83d0316d11a06302d6ed650a50a58e14c32391e38a30e060/merged major:0 minor:1153 fsType:overlay blockSize:0} overlay_0-1156:{mountpoint:/var/lib/containers/storage/overlay/366aa92f4edc8f746a77bf4a846624289d2031a2afcc996586adb146b3b8d7c9/merged major:0 minor:1156 fsType:overlay blockSize:0} overlay_0-1158:{mountpoint:/var/lib/containers/storage/overlay/93ef2acad431f0912ad7cc7d71f23def45c03201c73a9e902543f3d2944a484e/merged major:0 minor:1158 fsType:overlay blockSize:0} overlay_0-1159:{mountpoint:/var/lib/containers/storage/overlay/a160318f12b2a20a99f083fb6fd25acbaa46ae42d531c072a6cbde826d7a062c/merged major:0 minor:1159 fsType:overlay blockSize:0} 
overlay_0-116:{mountpoint:/var/lib/containers/storage/overlay/58b107a20fa41c1eabf0c5e22c2698993bac076538fb5cc87143f5db0da0b009/merged major:0 minor:116 fsType:overlay blockSize:0} overlay_0-1161:{mountpoint:/var/lib/containers/storage/overlay/a9bc2540a6aa49b379f0033dc5fbfe68841c06bac0427fb029d40581d3a1b51a/merged major:0 minor:1161 fsType:overlay blockSize:0} overlay_0-1165:{mountpoint:/var/lib/containers/storage/overlay/0fa918abe3fcbb1ae9c6598f310e0372d1c6456c32e3bf7947f331bd81dfbf7e/merged major:0 minor:1165 fsType:overlay blockSize:0} overlay_0-118:{mountpoint:/var/lib/containers/storage/overlay/280a771642d90d767e5a3f587ef07c816633906580fb44cdc98274d75a80d9cb/merged major:0 minor:118 fsType:overlay blockSize:0} overlay_0-1185:{mountpoint:/var/lib/containers/storage/overlay/b7e2e8e736abcec7ce201ac91d4a142e071de9c8d8a727bb7a2aafd7c6a848db/merged major:0 minor:1185 fsType:overlay blockSize:0} overlay_0-1191:{mountpoint:/var/lib/containers/storage/overlay/71c677e214c396ee185bd4f30ed2d4ca707ab91994d67e1d4bb2dc89a5b1cb07/merged major:0 minor:1191 fsType:overlay blockSize:0} overlay_0-1197:{mountpoint:/var/lib/containers/storage/overlay/04befbc474252546413b4fa5cb3109d005440913cf7d7a6d33d094ee7c9315ff/merged major:0 minor:1197 fsType:overlay blockSize:0} overlay_0-121:{mountpoint:/var/lib/containers/storage/overlay/f1d57f788e37ef5c0e479319625b466d3e91514b0832a7d9092dd06ac8c13183/merged major:0 minor:121 fsType:overlay blockSize:0} overlay_0-132:{mountpoint:/var/lib/containers/storage/overlay/ed8be71c5b5603cd6326e578c5e816d938fec664fa9b8276a9ea50c6b0d2bd63/merged major:0 minor:132 fsType:overlay blockSize:0} overlay_0-134:{mountpoint:/var/lib/containers/storage/overlay/92891694cb609e66dd1b6dab387e2bd26eb240246c90f46af94477fe31145696/merged major:0 minor:134 fsType:overlay blockSize:0} overlay_0-136:{mountpoint:/var/lib/containers/storage/overlay/cc877fde7f31fc5e1cbeb6e1720f4cded636aea9ffce928b6f8f1ad54dfa4bef/merged major:0 minor:136 fsType:overlay blockSize:0} 
overlay_0-140:{mountpoint:/var/lib/containers/storage/overlay/32a015fcb0f3b86479f20b31c219510bd81633f65ed4c1bc5928379aa6014692/merged major:0 minor:140 fsType:overlay blockSize:0} overlay_0-142:{mountpoint:/var/lib/containers/storage/overlay/917e007a1d7671d203e3ff2802697fcc5ae819981ba4cde9010d57b643423205/merged major:0 minor:142 fsType:overlay blockSize:0} overlay_0-150:{mountpoint:/var/lib/containers/storage/overlay/30dda23e81937d403326de7acfd8728bc31009bebeafe86e2f25766ca11a77f7/merged major:0 minor:150 fsType:overlay blockSize:0} overlay_0-151:{mountpoint:/var/lib/containers/storage/overlay/6376c418e280ed1680d3b051f88ad1f74590648e0e4e7b55b26bbf4d998a8fd2/merged major:0 minor:151 fsType:overlay blockSize:0} overlay_0-153:{mountpoint:/var/lib/containers/storage/overlay/3fca95897ad9f7571a7aeab5e6c9261a5c88a5d09887fab27abd2501625e1681/merged major:0 minor:153 fsType:overlay blockSize:0} overlay_0-158:{mountpoint:/var/lib/containers/storage/overlay/1cbb429ef96723b5cf42252acc5f5f5b9bab63141ec60f3305acdb0e807ba787/merged major:0 minor:158 fsType:overlay blockSize:0} overlay_0-160:{mountpoint:/var/lib/containers/storage/overlay/d445aaf8f256a6968524045c854f6d590df3c9e79f7df4fe1337e4a9fd8e9f3c/merged major:0 minor:160 fsType:overlay blockSize:0} overlay_0-162:{mountpoint:/var/lib/containers/storage/overlay/13e5f061a453960757532a9372951b094d7f0a98042bbcc4ad8acb6aae50db6a/merged major:0 minor:162 fsType:overlay blockSize:0} overlay_0-165:{mountpoint:/var/lib/containers/storage/overlay/c7e7b93c28703e4a10fdc256fe66d2e81cbe6f78f0bdbb05300ae268e82c1005/merged major:0 minor:165 fsType:overlay blockSize:0} overlay_0-174:{mountpoint:/var/lib/containers/storage/overlay/a511d4e530bae2ea7098b77a49806ce53b838feb7974fa6f0484cfe286ec3208/merged major:0 minor:174 fsType:overlay blockSize:0} overlay_0-179:{mountpoint:/var/lib/containers/storage/overlay/cef9a59a3746f747de2392e7a97329114e96f38de9e8c8d3124cdadb09f17578/merged major:0 minor:179 fsType:overlay blockSize:0} 
overlay_0-184:{mountpoint:/var/lib/containers/storage/overlay/665af69a019334a47287bba3b8cde3e58d4c9c66f1ef7473e12a80bf336c6342/merged major:0 minor:184 fsType:overlay blockSize:0} overlay_0-189:{mountpoint:/var/lib/containers/storage/overlay/40f978e05b5e73f972dc067d6ff8509e2d7e989cc0468ccaaa09583ef2a945c1/merged major:0 minor:189 fsType:overlay blockSize:0} overlay_0-194:{mountpoint:/var/lib/containers/storage/overlay/c37a231a38815630c08e4ace5fbd4143e34bfbbc88cf5e8853a85dc21308d166/merged major:0 minor:194 fsType:overlay blockSize:0} overlay_0-195:{mountpoint:/var/lib/containers/storage/overlay/3a2bf2da98d53d69d5d830e3f5b340a7cf2ee2b2169e35cdc23a9fab508baf7a/merged major:0 minor:195 fsType:overlay blockSize:0} overlay_0-204:{mountpoint:/var/lib/containers/storage/overlay/7b27df09205a2be77819040a90c82da640712121a65d1c35a2797016e6b8f6e7/merged major:0 minor:204 fsType:overlay blockSize:0} overlay_0-261:{mountpoint:/var/lib/containers/storage/overlay/13ba0ddbcf6ff704e81c9b6ee7d79d60c7f86e80636650977e2ca49fca1e98cb/merged major:0 minor:261 fsType:overlay blockSize:0} overlay_0-264:{mountpoint:/var/lib/containers/storage/overlay/83efe2b653a10c507af6a3ca04d802921b3bc306bda0d4ac262aeda23cef02ec/merged major:0 minor:264 fsType:overlay blockSize:0} overlay_0-267:{mountpoint:/var/lib/containers/storage/overlay/188bc1bb25c3329d69e0b2ba9ba680434c270e1c1b6f3dbcd2464553cbaa86f2/merged major:0 minor:267 fsType:overlay blockSize:0} overlay_0-269:{mountpoint:/var/lib/containers/storage/overlay/ef8db0ef29fd46395bf6e3a4e90239c04edd0e99000425614f04a2a030ff44d3/merged major:0 minor:269 fsType:overlay blockSize:0} overlay_0-273:{mountpoint:/var/lib/containers/storage/overlay/6c9c2015aafcee37e6cd06805941be811b2d987cbfb8153e130383d3f03f3b68/merged major:0 minor:273 fsType:overlay blockSize:0} overlay_0-275:{mountpoint:/var/lib/containers/storage/overlay/a523ce52b43121deda8adacadb4793f75c6619dbb6dc3df0398ceec1528a6ec6/merged major:0 minor:275 fsType:overlay blockSize:0} 
overlay_0-277:{mountpoint:/var/lib/containers/storage/overlay/f6446735e2f517b4ad9cef6b863e7473947bfaba6d5fd64224c45a1a8f81aeb3/merged major:0 minor:277 fsType:overlay blockSize:0} overlay_0-279:{mountpoint:/var/lib/containers/storage/overlay/a191ec8d75040b4bd0e25736a264b345dadb1622055e3c28f0d15effeb7c2cc6/merged major:0 minor:279 fsType:overlay blockSize:0} overlay_0-281:{mountpoint:/var/lib/containers/storage/overlay/befe27508365f7050634ba2bfdc53f83973c31050528d191653465a81c1106be/merged major:0 minor:281 fsType:overlay blockSize:0} overlay_0-283:{mountpoint:/var/lib/containers/storage/overlay/6c37a5368bd943cda5e0b5b3e3793e2d05cfc6a0c7daf4bb43b0bdac2503e4b5/merged major:0 minor:283 fsType:overlay blockSize:0} overlay_0-286:{mountpoint:/var/lib/containers/storage/overlay/6dc8f369eb0d5430083c3e5aa8591a643a949551790b101c1d7ec4cc7776cfef/merged major:0 minor:286 fsType:overlay blockSize:0} overlay_0-287:{mountpoint:/var/lib/containers/storage/overlay/4ab9dd7348814840bbeecb872752be3b29b6ec6b55dde6c134ee08991c6ed5ad/merged major:0 minor:287 fsType:overlay blockSize:0} overlay_0-290:{mountpoint:/var/lib/containers/storage/overlay/257495167ebf6ac3725cd336f93c2d2a0064dc7c0330dfc35755c8fa3558ad05/merged major:0 minor:290 fsType:overlay blockSize:0} overlay_0-292:{mountpoint:/var/lib/containers/storage/overlay/41c76fceb2620c28c43fe673dbfb6548a181264f01adad13a5d635edf2088f2f/merged major:0 minor:292 fsType:overlay blockSize:0} overlay_0-299:{mountpoint:/var/lib/containers/storage/overlay/52cc66ee674d2377b9d5c2856d59c5b71aff76b47a7cc8a57e1b100114e5e526/merged major:0 minor:299 fsType:overlay blockSize:0} overlay_0-305:{mountpoint:/var/lib/containers/storage/overlay/e90c90f7997a66aa487abd64e7bd3603979c19989ca3ae0c43d3bd647dc71c96/merged major:0 minor:305 fsType:overlay blockSize:0} overlay_0-310:{mountpoint:/var/lib/containers/storage/overlay/3ab68b84e8fbe272bb97edb09caf0e8de15d05d3569b3e171b89460c99309852/merged major:0 minor:310 fsType:overlay blockSize:0} 
overlay_0-316:{mountpoint:/var/lib/containers/storage/overlay/6beaeb3c937b85e6d53ce3bb24834f933aa833ba2f7990f074310852745920c8/merged major:0 minor:316 fsType:overlay blockSize:0} overlay_0-318:{mountpoint:/var/lib/containers/storage/overlay/30128cd2bd74d9d232ac0903942c60c182d7be0747fd6f02a42cf2811fc1dc3a/merged major:0 minor:318 fsType:overlay blockSize:0} overlay_0-319:{mountpoint:/var/lib/containers/storage/overlay/ad0784704b3bc4656a3e1da16d7c1d07097978fedcf3cc051d027518d32963cd/merged major:0 minor:319 fsType:overlay blockSize:0} overlay_0-322:{mountpoint:/var/lib/containers/storage/overlay/a5b60f72e3741f38a4185cd17f803400389fa40829df1568c411ee07136cc18c/merged major:0 minor:322 fsType:overlay blockSize:0} overlay_0-330:{mountpoint:/var/lib/containers/storage/overlay/4e9ebe4dd32d7442e4fc6c97869cd532f8ca5fbac7c4f66dfba795c8eb65af56/merged major:0 minor:330 fsType:overlay blockSize:0} overlay_0-339:{mountpoint:/var/lib/containers/storage/overlay/668348ede1e4143ee2c171a54b48a5b2a9947e07e33de4c4d87051bc3bf441ad/merged major:0 minor:339 fsType:overlay blockSize:0} overlay_0-353:{mountpoint:/var/lib/containers/storage/overlay/b9f107555d9cb4b6747349467336cbe7f0b28cf08f0da7509423935a47bdae63/merged major:0 minor:353 fsType:overlay blockSize:0} overlay_0-354:{mountpoint:/var/lib/containers/storage/overlay/7e9e5b2219993a47fc0319a9b3a0d0042ee133735570f1958cb4e5516198adf5/merged major:0 minor:354 fsType:overlay blockSize:0} overlay_0-360:{mountpoint:/var/lib/containers/storage/overlay/339c891cea115308c3eae8ce745df8880162bc1d1d8ac11dbc59ce84faee5954/merged major:0 minor:360 fsType:overlay blockSize:0} overlay_0-362:{mountpoint:/var/lib/containers/storage/overlay/bdfb2e4bf067a3b34eaf9c6c98d098a59ee14fcd974290bc2972d295b3018785/merged major:0 minor:362 fsType:overlay blockSize:0} overlay_0-365:{mountpoint:/var/lib/containers/storage/overlay/d5aba14220dfd69f188e4ddc7abb8d218732b2871c8a3300021f242299a83a86/merged major:0 minor:365 fsType:overlay blockSize:0} 
overlay_0-366:{mountpoint:/var/lib/containers/storage/overlay/c1ef2145f3ba0b9dc9e75d02c923b8300060afefbab18117e2e9d7cec5a44b86/merged major:0 minor:366 fsType:overlay blockSize:0} overlay_0-374:{mountpoint:/var/lib/containers/storage/overlay/b6c2df131793e788148acbd1b8e616d30da1dfeab88a6a5e6bd4b96d84506a8a/merged major:0 minor:374 fsType:overlay blockSize:0} overlay_0-377:{mountpoint:/var/lib/containers/storage/overlay/67d8ef0182673066e5ead59bb55921c5ff62591beb37a1a74dd74dfa4bf342fa/merged major:0 minor:377 fsType:overlay blockSize:0} overlay_0-382:{mountpoint:/var/lib/containers/storage/overlay/dc9cf89de273631d7ad8294ae757711085e49f4412f8b4312eb71ecf78f6ea65/merged major:0 minor:382 fsType:overlay blockSize:0} overlay_0-387:{mountpoint:/var/lib/containers/storage/overlay/ccf98d7f056f6e07c352509809866ddb969f967d0f24549e0d1b0eaf7f5cf26d/merged major:0 minor:387 fsType:overlay blockSize:0} overlay_0-397:{mountpoint:/var/lib/containers/storage/overlay/30b8081251bd10d384f12cdb7aab8cf3d7db31de89c8e906540f62917e3510c6/merged major:0 minor:397 fsType:overlay blockSize:0} overlay_0-399:{mountpoint:/var/lib/containers/storage/overlay/3fa09a9f8c3127710a8392e4409b794906e1a7d59c2c879b400823972c9a8c7e/merged major:0 minor:399 fsType:overlay blockSize:0} overlay_0-403:{mountpoint:/var/lib/containers/storage/overlay/5dbf870446807d95f5bec2484120a5826ab30658873054162d75d61c986f8221/merged major:0 minor:403 fsType:overlay blockSize:0} overlay_0-41:{mountpoint:/var/lib/containers/storage/overlay/68cb184fd815cd706dbd76773975ba02ffb7d793893f845e91470e44d4f65287/merged major:0 minor:41 fsType:overlay blockSize:0} overlay_0-414:{mountpoint:/var/lib/containers/storage/overlay/b24a8ee1fe42cb79aac9bd3e4cefe68a983bfd8ebe4e1fef08cfbe6e54c74e88/merged major:0 minor:414 fsType:overlay blockSize:0} overlay_0-416:{mountpoint:/var/lib/containers/storage/overlay/60b8320e41ac6c9e3a7e9e5eecd97d13e57b07736c62c0bf1f55260f10f39ff7/merged major:0 minor:416 fsType:overlay blockSize:0} 
overlay_0-418:{mountpoint:/var/lib/containers/storage/overlay/68da301291d9696e3986156ec08d69d2975b1e0abd7c5e546c4fee69c0cc5f4c/merged major:0 minor:418 fsType:overlay blockSize:0} overlay_0-426:{mountpoint:/var/lib/containers/storage/overlay/a4c1b987c31fba5c04575efe31680f0fa825f0ad565403d325aac986f10b44a7/merged major:0 minor:426 fsType:overlay blockSize:0} overlay_0-43:{mountpoint:/var/lib/containers/storage/overlay/478211bd19761919254bb622c8b59161c3e2ea2f7e3fb0e3dfaa62c6fcf6366d/merged major:0 minor:43 fsType:overlay blockSize:0} overlay_0-45:{mountpoint:/var/lib/containers/storage/overlay/c3ce6ac1e456a7eeec2e0413f1e2b1991fbeb2a478dd7e6b34cd1ae8831ddf16/merged major:0 minor:45 fsType:overlay blockSize:0} overlay_0-464:{mountpoint:/var/lib/containers/storage/overlay/b641f7c70e769d3eb9fb253772fe108c2e53f59412e6c90c95b3eeb3f990c8c7/merged major:0 minor:464 fsType:overlay blockSize:0} overlay_0-47:{mountpoint:/var/lib/containers/storage/overlay/ab3c4a4f4ad22194dffaf480a175ea5c8ae5a27918f9e9d91c1cc5b4c9e5549b/merged major:0 minor:47 fsType:overlay blockSize:0} overlay_0-470:{mountpoint:/var/lib/containers/storage/overlay/3091a578b7c5c928764498683097fcac3f5f2c8d8c0b1c1f093df7aafdb5fd7b/merged major:0 minor:470 fsType:overlay blockSize:0} overlay_0-476:{mountpoint:/var/lib/containers/storage/overlay/ab603c298ace9cd9350752a3440933cc8e809b281329677c8b6a640a48e601be/merged major:0 minor:476 fsType:overlay blockSize:0} overlay_0-478:{mountpoint:/var/lib/containers/storage/overlay/d9133f1f176e1554a4081e4d4caac27a4243ad8e02f4e4b3eb19635ec10e7da3/merged major:0 minor:478 fsType:overlay blockSize:0} overlay_0-482:{mountpoint:/var/lib/containers/storage/overlay/03b4441584c892b0125ccf04b46934bdb122b70166450c5b6cd8e09de5f0105b/merged major:0 minor:482 fsType:overlay blockSize:0} overlay_0-484:{mountpoint:/var/lib/containers/storage/overlay/7e2543e9dc5e4975540b5c3d21c02526c4ba281f34cc427cc08ca8bc7fe2aff3/merged major:0 minor:484 fsType:overlay blockSize:0} 
overlay_0-485:{mountpoint:/var/lib/containers/storage/overlay/a3f237420af26bcc416c9406127eee85b8fd8108b201db8750bbd92e56369242/merged major:0 minor:485 fsType:overlay blockSize:0} overlay_0-49:{mountpoint:/var/lib/containers/storage/overlay/ce93b119f8e76b2b9cd795fbb01c98c9b0d8a32e961fd3fe7a68e08489aeffcd/merged major:0 minor:49 fsType:overlay blockSize:0} overlay_0-499:{mountpoint:/var/lib/containers/storage/overlay/a1ee74736003419280065df422162e397d2fb6ef9da8fb50d463be77f274cc2d/merged major:0 minor:499 fsType:overlay blockSize:0} overlay_0-509:{mountpoint:/var/lib/containers/storage/overlay/bd2e1f7237063637faaa0510bb9b25f7e1f0b593dd640f34483d5b87edfe049d/merged major:0 minor:509 fsType:overlay blockSize:0} overlay_0-516:{mountpoint:/var/lib/containers/storage/overlay/82322a5d5421e75ef66290ef00165ba9da6ef04944756fc9739e1779241ddf71/merged major:0 minor:516 fsType:overlay blockSize:0} overlay_0-518:{mountpoint:/var/lib/containers/storage/overlay/fb7882077b1e894887df52a7cd04c0b07a800b4f0cbd94f3f54813bdf6e329a3/merged major:0 minor:518 fsType:overlay blockSize:0} overlay_0-52:{mountpoint:/var/lib/containers/storage/overlay/bfaca44bc14fee169a996e444985d168af638c9803a281635ac77812283d3a6d/merged major:0 minor:52 fsType:overlay blockSize:0} overlay_0-521:{mountpoint:/var/lib/containers/storage/overlay/66dee555601c204e85c09367fcd2fa0d064f71330560a010ca6e73e367578b85/merged major:0 minor:521 fsType:overlay blockSize:0} overlay_0-534:{mountpoint:/var/lib/containers/storage/overlay/c899162feb395bbfd450f2bd3415bf49fa7a5b6a28b5cbbc87ad39b02523351a/merged major:0 minor:534 fsType:overlay blockSize:0} overlay_0-55:{mountpoint:/var/lib/containers/storage/overlay/ad907970b9499bb8570001f39f13b0efa4a773e053a9ac93660fc36ef340fa42/merged major:0 minor:55 fsType:overlay blockSize:0} overlay_0-550:{mountpoint:/var/lib/containers/storage/overlay/5a967f1ffea4565d27780be509104aaf24dbb50ff3ad4b7d5349103108593af3/merged major:0 minor:550 fsType:overlay blockSize:0} 
overlay_0-552:{mountpoint:/var/lib/containers/storage/overlay/e588330747f110526bf77933317543af0633049e33f3a9f2a96758857e27f8af/merged major:0 minor:552 fsType:overlay blockSize:0} overlay_0-556:{mountpoint:/var/lib/containers/storage/overlay/9c9e05bf153707f02994cdbc668a67be396f6ef848a1175c63a56e392995853e/merged major:0 minor:556 fsType:overlay blockSize:0} overlay_0-565:{mountpoint:/var/lib/containers/storage/overlay/30c72e83f0467dc8c07a5c4026ad97ef3bbee36fafa71b4a75d90da11aa2eac0/merged major:0 minor:565 fsType:overlay blockSize:0} overlay_0-573:{mountpoint:/var/lib/containers/storage/overlay/a9b801291cd72d653921fd6ff8e11563f2e577bfd8fbee5bacb0a5a6718ffc09/merged major:0 minor:573 fsType:overlay blockSize:0} overlay_0-576:{mountpoint:/var/lib/containers/storage/overlay/9efe29ae845d0d47113da7d1515e850542d9e8bc9034cc883ee3673bb7f03d80/merged major:0 minor:576 fsType:overlay blockSize:0} overlay_0-578:{mountpoint:/var/lib/containers/storage/overlay/2bdd077e4e8f855c07ea28f0e53cb157143aecec155c8eeeb437578f28f53095/merged major:0 minor:578 fsType:overlay blockSize:0} overlay_0-583:{mountpoint:/var/lib/containers/storage/overlay/b9c81981d42f08a2c40c6d3f09487734466f9931aec90e58ed3e2fdb5a7a8d4b/merged major:0 minor:583 fsType:overlay blockSize:0} overlay_0-588:{mountpoint:/var/lib/containers/storage/overlay/12f2fc41f87053b1941b868215b1ee915eee1206e881651c5cb77a8c5938b98b/merged major:0 minor:588 fsType:overlay blockSize:0} overlay_0-589:{mountpoint:/var/lib/containers/storage/overlay/e202d6823eee5bf53843913a2e7b215600a6b540c89869269742cfc87163f8a0/merged major:0 minor:589 fsType:overlay blockSize:0} overlay_0-593:{mountpoint:/var/lib/containers/storage/overlay/c84f5bf5c4c3ab6e228024369779bf9470ce8a32317795b4d9e589abb250bfe6/merged major:0 minor:593 fsType:overlay blockSize:0} overlay_0-595:{mountpoint:/var/lib/containers/storage/overlay/a1ec66039108c22bb3b28cd80655048758922a9c7c093284d82658aa556bf710/merged major:0 minor:595 fsType:overlay blockSize:0} 
overlay_0-598:{mountpoint:/var/lib/containers/storage/overlay/1bcf1510641fafc1b0b22c43fd883773bb38945bc57e1d6a2b3e207f2f0ffee8/merged major:0 minor:598 fsType:overlay blockSize:0} overlay_0-60:{mountpoint:/var/lib/containers/storage/overlay/a497fa913e710b3b8670c469e6fc8ab93225ea9d88e583ed2c7fee2c7d322ca8/merged major:0 minor:60 fsType:overlay blockSize:0} overlay_0-601:{mountpoint:/var/lib/containers/storage/overlay/6e9aa5bec48b25ca9238d7316f9aef46e595eec3d33e0ca2da6934121078bdd9/merged major:0 minor:601 fsType:overlay blockSize:0} overlay_0-620:{mountpoint:/var/lib/containers/storage/overlay/8ae1586205abf5c635caa501793e9c3d6e930ad7cb2471583cd1048e7f6ec3ab/merged major:0 minor:620 fsType:overlay blockSize:0} overlay_0-624:{mountpoint:/var/lib/containers/storage/overlay/020ab89d922362afc81c38f74226997613ff5517b8290fcc5b3358477be44b3e/merged major:0 minor:624 fsType:overlay blockSize:0} overlay_0-631:{mountpoint:/var/lib/containers/storage/overlay/6b27cef8afb633890010d9ea1352a2653efebabfda092318dc71b87a8654444c/merged major:0 minor:631 fsType:overlay blockSize:0} overlay_0-635:{mountpoint:/var/lib/containers/storage/overlay/6c7a1d051ed3ca4f2ef5f246102e8742fbc4b69ac44e42ce4bc685a757590066/merged major:0 minor:635 fsType:overlay blockSize:0} overlay_0-637:{mountpoint:/var/lib/containers/storage/overlay/8d93e719a42d7971c77e7139fe6210970c1891f42cd2b9abde4100649cc5d6d4/merged major:0 minor:637 fsType:overlay blockSize:0} overlay_0-64:{mountpoint:/var/lib/containers/storage/overlay/7cc03e876a53e63dd3a568ebda183183350cdf9352b863c69e6b154fde2d53d8/merged major:0 minor:64 fsType:overlay blockSize:0} overlay_0-648:{mountpoint:/var/lib/containers/storage/overlay/07dcb1e29953f7a34430b820edec3ad065d176c94445bea1017d9f6c848705a6/merged major:0 minor:648 fsType:overlay blockSize:0} overlay_0-652:{mountpoint:/var/lib/containers/storage/overlay/3cd002df5ca290440b79e06c2a11fd25d23aa9d6536cb009c1f230be510ded13/merged major:0 minor:652 fsType:overlay blockSize:0} 
overlay_0-660:{mountpoint:/var/lib/containers/storage/overlay/43e6d92102d239c205b8243c70b0d44db6a189304423ffa711923f816274c2c4/merged major:0 minor:660 fsType:overlay blockSize:0} overlay_0-661:{mountpoint:/var/lib/containers/storage/overlay/fc561226c22951a9514f840fbf04beabf7465079db9a36bc4553a22401d35ac9/merged major:0 minor:661 fsType:overlay blockSize:0} overlay_0-664:{mountpoint:/var/lib/containers/storage/overlay/081398b1cd62133047c16e5d57b838b7261c65a593d3de9d59e7bf25b4b9a4bf/merged major:0 minor:664 fsType:overlay blockSize:0} overlay_0-665:{mountpoint:/var/lib/containers/storage/overlay/b58e35a212a8a0cc7f5b1b311c6e1902db63b0e6fafe902e5e7396c3fa547a40/merged major:0 minor:665 fsType:overlay blockSize:0} overlay_0-67:{mountpoint:/var/lib/containers/storage/overlay/18326295ae742a6539b148298f45bb831a10e496ea052e83e30bc2098c8f3e67/merged major:0 minor:67 fsType:overlay blockSize:0} overlay_0-688:{mountpoint:/var/lib/containers/storage/overlay/8721fc498e2a4dd2f798f33afe15be2510f11db5c7434cbec93dd19c1c41d8a4/merged major:0 minor:688 fsType:overlay blockSize:0} overlay_0-69:{mountpoint:/var/lib/containers/storage/overlay/9ffc994b540ada0bc91bfdcda3ebd0d97a7ce48c9c2ea3614006be4f69a30a4d/merged major:0 minor:69 fsType:overlay blockSize:0} overlay_0-690:{mountpoint:/var/lib/containers/storage/overlay/34a778d490757b546162c9e8544a779991ece4a71e555717ad0356d981db5a23/merged major:0 minor:690 fsType:overlay blockSize:0} overlay_0-692:{mountpoint:/var/lib/containers/storage/overlay/485e39229529f8abe4e86a659a542c741e982c3b4fa94564950ce59b19d29f14/merged major:0 minor:692 fsType:overlay blockSize:0} overlay_0-708:{mountpoint:/var/lib/containers/storage/overlay/26d91d1a9eb801eef3d3d72579881fe494f807f9bba4c9b3a973346a1f55547b/merged major:0 minor:708 fsType:overlay blockSize:0} overlay_0-710:{mountpoint:/var/lib/containers/storage/overlay/274ce20e150571f5aaae4564052f437f3e8147e9a7807d0fa3e5106639707b4e/merged major:0 minor:710 fsType:overlay blockSize:0} 
overlay_0-712:{mountpoint:/var/lib/containers/storage/overlay/f759eb9243c18e886eac1cf046ef7d46243707d507d8cfa88a25dd0d85319da8/merged major:0 minor:712 fsType:overlay blockSize:0} overlay_0-714:{mountpoint:/var/lib/containers/storage/overlay/7707f4206ec4f201367fee1b5a1264676a44a661ba28d96b03c690287a03910a/merged major:0 minor:714 fsType:overlay blockSize:0} overlay_0-716:{mountpoint:/var/lib/containers/storage/overlay/b8ee8cfc838ac06fd21bbe1c705cf2b6b9a65328fc9c1ecf036ca0ef8799df5f/merged major:0 minor:716 fsType:overlay blockSize:0} overlay_0-718:{mountpoint:/var/lib/containers/storage/overlay/4eaaee9e5a4b2290c8d12dee5f1f36b26f2f4d23b5dadea0cd1f718b0e0a8dd1/merged major:0 minor:718 fsType:overlay blockSize:0} overlay_0-719:{mountpoint:/var/lib/containers/storage/overlay/7db9b826025fc0c68aa2355e48841bbd547043dd23e3cefb0524a32a68d802a2/merged major:0 minor:719 fsType:overlay blockSize:0} overlay_0-72:{mountpoint:/var/lib/containers/storage/overlay/22b114de9d7b7bf9f49024444f992b0fb875bfcaca5b94479b7f5aac4a13505c/merged major:0 minor:72 fsType:overlay blockSize:0} overlay_0-726:{mountpoint:/var/lib/containers/storage/overlay/2f3fabd7fda553471338430f60cd05da23e011129d4976dd5565bf11fb9820ae/merged major:0 minor:726 fsType:overlay blockSize:0} overlay_0-728:{mountpoint:/var/lib/containers/storage/overlay/13f09d90dadbbf18bc611370389d188aa81df1aaec5e1fcca7a30dd9970b9087/merged major:0 minor:728 fsType:overlay blockSize:0} overlay_0-74:{mountpoint:/var/lib/containers/storage/overlay/6ced38438deaff295891ad98e3ad36393963ef4ac9c25d55da189eec9e3e57e8/merged major:0 minor:74 fsType:overlay blockSize:0} overlay_0-742:{mountpoint:/var/lib/containers/storage/overlay/acf8d6fe729cbde889fb1fcebe38eea039e0ef0bae53c4c0ff0071741f467091/merged major:0 minor:742 fsType:overlay blockSize:0} overlay_0-758:{mountpoint:/var/lib/containers/storage/overlay/09181af37c47547e49d63de0fe08f7ba8bc27337d37f4078f20e887c0173df24/merged major:0 minor:758 fsType:overlay blockSize:0} 
overlay_0-76:{mountpoint:/var/lib/containers/storage/overlay/457fc4a2ce708609771e18248a9f2f7a7c178de4ffb82a0c9f3b304edf45be82/merged major:0 minor:76 fsType:overlay blockSize:0} overlay_0-766:{mountpoint:/var/lib/containers/storage/overlay/4d910a0a70ae085420d822088c2dde1a8afd72fca118a65cca961ecccb63e53f/merged major:0 minor:766 fsType:overlay blockSize:0} overlay_0-768:{mountpoint:/var/lib/containers/storage/overlay/09f07b23da89bdd5fef9ad31ed8c86c4b55409690bc219f93157474ec63aad8b/merged major:0 minor:768 fsType:overlay blockSize:0} overlay_0-795:{mountpoint:/var/lib/containers/storage/overlay/ed5bcd958c0dcee9b753093e5ce312016bd57c648f096fc4f8e67fd9e00fbc8b/merged major:0 minor:795 fsType:overlay blockSize:0} overlay_0-80:{mountpoint:/var/lib/containers/storage/overlay/0be21dd273aa034d6195b77b45ae6912d3227e5b9cca5f333dea2810b13ca2ae/merged major:0 minor:80 fsType:overlay blockSize:0} overlay_0-805:{mountpoint:/var/lib/containers/storage/overlay/6b59f8aea5c388e24f291ccba3da340d40c68f1d8ea9d32f27df969b858d5ef9/merged major:0 minor:805 fsType:overlay blockSize:0} overlay_0-807:{mountpoint:/var/lib/containers/storage/overlay/40c87c110696aa0110255af9f58c3919635d9fdc79c4cbe88ca3c3aba2488f73/merged major:0 minor:807 fsType:overlay blockSize:0} overlay_0-808:{mountpoint:/var/lib/containers/storage/overlay/8691f9bf2b734e7a0296bd79c620cad07159e1ee7e3fd530e4ae33777b80394d/merged major:0 minor:808 fsType:overlay blockSize:0} overlay_0-811:{mountpoint:/var/lib/containers/storage/overlay/dd1ff355e8af31aac9442a9d11c25aed624296efc282a65a4a9a572eef389f6c/merged major:0 minor:811 fsType:overlay blockSize:0} overlay_0-814:{mountpoint:/var/lib/containers/storage/overlay/ddba903286c0c33d24da8550c34166f8045fdaf83c7dec35c1fdc8c8e870236b/merged major:0 minor:814 fsType:overlay blockSize:0} overlay_0-819:{mountpoint:/var/lib/containers/storage/overlay/a2b3792c6d9df0744826114015f931155724be9fa7f4494c3040bec650981f36/merged major:0 minor:819 fsType:overlay blockSize:0} 
overlay_0-821:{mountpoint:/var/lib/containers/storage/overlay/d352bdd1f922efdefb479cf0d60086e18bf4a1e452589d82f625d29e6851f1b1/merged major:0 minor:821 fsType:overlay blockSize:0} overlay_0-823:{mountpoint:/var/lib/containers/storage/overlay/25bcccdae1764516d6723fc01d3b698dd6d9a46ac77ea53e8c082bc9c327bd50/merged major:0 minor:823 fsType:overlay blockSize:0} overlay_0-830:{mountpoint:/var/lib/containers/storage/overlay/5673aff2e2d0545aaf461dc95afb93dad285f1d7b1aa7240a4a881d5b05fa8b9/merged major:0 minor:830 fsType:overlay blockSize:0} overlay_0-832:{mountpoint:/var/lib/containers/storage/overlay/1d8a7b75705f24a384b7d5866d3f23efba30777361929b1038aa7ed712cd4c83/merged major:0 minor:832 fsType:overlay blockSize:0} overlay_0-834:{mountpoint:/var/lib/containers/storage/overlay/306840f7033edce6471d4cdd3ed3139483f2b68bec462798f5c10c581e9e86c9/merged major:0 minor:834 fsType:overlay blockSize:0} overlay_0-836:{mountpoint:/var/lib/containers/storage/overlay/fbb6d4eb5251df09155427a71086655e0ad0a49ab8c0ac3a6cea07ccddc325af/merged major:0 minor:836 fsType:overlay blockSize:0} overlay_0-839:{mountpoint:/var/lib/containers/storage/overlay/d9f5dfc21ef6a1553d02b6272a7a2cec43ce01e583b2593b84088e70e82fc9fa/merged major:0 minor:839 fsType:overlay blockSize:0} overlay_0-85:{mountpoint:/var/lib/containers/storage/overlay/9b6cefe4a05e3ed072e50ca9690fa76d80f41cb4aa1318deee4551d7d66fbf46/merged major:0 minor:85 fsType:overlay blockSize:0} overlay_0-850:{mountpoint:/var/lib/containers/storage/overlay/6346111aa5e6fd7a7d4806b9deb7950c199399d96a393df2d5f946f41cc3d1f5/merged major:0 minor:850 fsType:overlay blockSize:0} overlay_0-859:{mountpoint:/var/lib/containers/storage/overlay/34167ad5d57976e8b4dd273d5f92aad9ab059befb62d85bc64d8acaccc8e6d1b/merged major:0 minor:859 fsType:overlay blockSize:0} overlay_0-861:{mountpoint:/var/lib/containers/storage/overlay/e8104e466323bd3c5c6a58f5590f82f1790d04c0bf80655ec2928e5059bd3bf7/merged major:0 
minor:861 fsType:overlay blockSize:0} overlay_0-864:{mountpoint:/var/lib/containers/storage/overlay/0dbea926daf7120955f38069fd50683d388de274da00b6d2dcfe348fad3d1264/merged major:0 minor:864 fsType:overlay blockSize:0} overlay_0-87:{mountpoint:/var/lib/containers/storage/overlay/a7ef996e0b7c8454c8dec830ba335d08c3a0d165f680b88e09ee40b0e74089c4/merged major:0 minor:87 fsType:overlay blockSize:0} overlay_0-870:{mountpoint:/var/lib/containers/storage/overlay/d0ab959d08ebb94d9b8ecd5c59261745bedb3c15c73bd22062a06b97af076c59/merged major:0 minor:870 fsType:overlay blockSize:0} overlay_0-876:{mountpoint:/var/lib/containers/storage/overlay/27597680c908b5047cdd060907c6553afc4dd5316237ee6961862c65e7d1da73/merged major:0 minor:876 fsType:overlay blockSize:0} overlay_0-878:{mountpoint:/var/lib/containers/storage/overlay/53c0c5b35fabdd46bb64a5e64a71c0bc0ee0fdde4ebc88e03535fd1db35691c5/merged major:0 minor:878 fsType:overlay blockSize:0} overlay_0-882:{mountpoint:/var/lib/containers/storage/overlay/6e372cabda3d1dad3729362b3ed8b9e11cf9872f850ad33a9f172d3e960ebd26/merged major:0 minor:882 fsType:overlay blockSize:0} overlay_0-892:{mountpoint:/var/lib/containers/storage/overlay/7de9d15fe4f607b3d54e8c1b3a573641a0e1d1ae9170de5907d00f0a98dc7f00/merged major:0 minor:892 fsType:overlay blockSize:0} overlay_0-896:{mountpoint:/var/lib/containers/storage/overlay/239389970f9f4146ecd9d7613bed3967f2eec6fd018eb8e723bc11e10cef618b/merged major:0 minor:896 fsType:overlay blockSize:0} overlay_0-906:{mountpoint:/var/lib/containers/storage/overlay/841ce25e3a38b350cd3e560e91c949ea522ee74736026f0c01b67281bdbf2e2e/merged major:0 minor:906 fsType:overlay blockSize:0} overlay_0-91:{mountpoint:/var/lib/containers/storage/overlay/77d47d5b3a313b4aed9b3c284f179ef150a081eaaa0dd3e815e78cc226e518d0/merged major:0 minor:91 fsType:overlay blockSize:0} overlay_0-910:{mountpoint:/var/lib/containers/storage/overlay/43c2c687bb3ab642e078faf08b8e71313a16196271975fdcd5e8b821278d6a66/merged major:0 minor:910 
fsType:overlay blockSize:0} overlay_0-918:{mountpoint:/var/lib/containers/storage/overlay/777edb1aed7ac46ed7e0a39d76acf8b226cc828e887ce9b69b656d58cec58ac9/merged major:0 minor:918 fsType:overlay blockSize:0} overlay_0-919:{mountpoint:/var/lib/containers/storage/overlay/540741d9ac9977f7f072efb3bf50f1aee784fa309c480297f6054c67b3981788/merged major:0 minor:919 fsType:overlay blockSize:0} overlay_0-926:{mountpoint:/var/lib/containers/storage/overlay/649a39ab6600a85f2bc37fbb0351844ce7b59743b6c1c4cf6aa55c05ece2a3ba/merged major:0 minor:926 fsType:overlay blockSize:0} overlay_0-932:{mountpoint:/var/lib/containers/storage/overlay/9c8563e03d91b8bb889019ea8de226cfb5e858677e5c4951342167679134690e/merged major:0 minor:932 fsType:overlay blockSize:0} overlay_0-933:{mountpoint:/var/lib/containers/storage/overlay/2b99adca6d99b5237546ad30b010fcf2a15eed9b068bb337e3293a6a19470f0a/merged major:0 minor:933 fsType:overlay blockSize:0} overlay_0-95:{mountpoint:/var/lib/containers/storage/overlay/f997855bfb79ebc3619fae74b11063382326d56f2fd111d680702aff0e7deddd/merged major:0 minor:95 fsType:overlay blockSize:0} overlay_0-97:{mountpoint:/var/lib/containers/storage/overlay/949c19dbc667b0cf672b4fd3e47a39e3b17fddf60507bcbb8acc539a27986f2b/merged major:0 minor:97 fsType:overlay blockSize:0} overlay_0-971:{mountpoint:/var/lib/containers/storage/overlay/f3bff14e63b28eb54c523482c43f6142c4df664b83ba7e6692ded5b663c859c3/merged major:0 minor:971 fsType:overlay blockSize:0} overlay_0-985:{mountpoint:/var/lib/containers/storage/overlay/c9238f14c769ad024b231ff71023f6602dd4fee242b5088f39fc96dd9a02469e/merged major:0 minor:985 fsType:overlay blockSize:0} overlay_0-989:{mountpoint:/var/lib/containers/storage/overlay/9fe9fa4b20edf7e1d300d73f22fde5d6c654c3425afa01456d0a1fbd93f88c5c/merged major:0 minor:989 fsType:overlay blockSize:0}] Mar 13 12:53:46.604849 master-0 kubenswrapper[28149]: I0313 12:53:46.601770 28149 manager.go:217] Machine: {Timestamp:2026-03-13 12:53:46.600296567 +0000 UTC m=+0.253761746 
CPUVendorID:AuthenticAMD NumCores:12 NumPhysicalCores:1 NumSockets:12 CpuFrequency:2799998 MemoryCapacity:33654128640 SwapCapacity:0 MemoryByType:map[] NVMInfo:{MemoryModeCapacity:0 AppDirectModeCapacity:0 AvgPowerBudget:0} HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] MachineID:8daa6345b1f242d1bcc5f3b6bc2ba573 SystemUUID:8daa6345-b1f2-42d1-bcc5-f3b6bc2ba573 BootID:5a21c0be-2989-406d-99e7-723bbc7963b9 Filesystems:[{Device:overlay_0-91 DeviceMajor:0 DeviceMinor:91 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/d44112d1-b2a5-4b8d-b74d-1e91638508d5/volumes/kubernetes.io~projected/kube-api-access-tdlrq DeviceMajor:0 DeviceMinor:763 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-823 DeviceMajor:0 DeviceMinor:823 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/36ad5a83-5c32-4941-94e0-7af86ac5d462/volumes/kubernetes.io~secret/webhook-certs DeviceMajor:0 DeviceMinor:402 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/ce3a655a-0684-4bc5-ac36-5878507537c7/volumes/kubernetes.io~projected/kube-api-access-vgbvr DeviceMajor:0 DeviceMinor:103 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/f6992fed-b472-4a2d-a376-c5d72aa846d4/volumes/kubernetes.io~secret/webhook-cert DeviceMajor:0 DeviceMinor:772 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/081a08d6-a4fd-412c-81c3-1364c36f0f15/volumes/kubernetes.io~secret/node-bootstrap-token DeviceMajor:0 DeviceMinor:1039 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-1107 DeviceMajor:0 DeviceMinor:1107 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-204 DeviceMajor:0 DeviceMinor:204 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} 
{Device:/var/lib/kubelet/pods/1cf388b6-e4a7-41db-a350-1b503214efd3/volumes/kubernetes.io~projected/kube-api-access-9kxx9 DeviceMajor:0 DeviceMinor:610 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/fc192c03-5aec-4507-a702-56bf98c96e9c/volumes/kubernetes.io~secret/secret-metrics-server-tls DeviceMajor:0 DeviceMinor:1140 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/18ffa620-dacc-4b09-be04-2c325f860813/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:679 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-692 DeviceMajor:0 DeviceMinor:692 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-795 DeviceMajor:0 DeviceMinor:795 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-55 DeviceMajor:0 DeviceMinor:55 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/00ebdf06-1f44-40cd-87e5-54195188b6d4/volumes/kubernetes.io~secret/catalogserver-certs DeviceMajor:0 DeviceMinor:434 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-864 DeviceMajor:0 DeviceMinor:864 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-151 DeviceMajor:0 DeviceMinor:151 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/604456a0-4997-43bc-87ef-283a002111fe/volumes/kubernetes.io~secret/cluster-monitoring-operator-tls DeviceMajor:0 DeviceMinor:441 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run/containers/storage/overlay-containers/e536f971c5136f6b4bf02b1c06e15888a2ce0d84bff74c72b773c7dfe08129dc/userdata/shm DeviceMajor:0 DeviceMinor:461 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run/containers/storage/overlay-containers/6b92704fbc97116df7b90609a695c48539a6c6401fd9288883ce4ea92059b841/userdata/shm DeviceMajor:0 
DeviceMinor:801 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-1016 DeviceMajor:0 DeviceMinor:1016 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-1018 DeviceMajor:0 DeviceMinor:1018 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-1153 DeviceMajor:0 DeviceMinor:1153 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/8c62b15f-001a-4b64-b85f-348aefde5d1b/volumes/kubernetes.io~projected/kube-api-access-8cf2v DeviceMajor:0 DeviceMinor:234 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/15b592d6-3c48-45d4-9172-d28632ae8995/volumes/kubernetes.io~projected/kube-api-access-clrz7 DeviceMajor:0 DeviceMinor:232 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-635 DeviceMajor:0 DeviceMinor:635 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-882 DeviceMajor:0 DeviceMinor:882 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-150 DeviceMajor:0 DeviceMinor:150 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/a0efa1bf3eba5a2ca6c57d7440e21de8f77ce06cd058d6cbb24dd5784e78863f/userdata/shm DeviceMajor:0 DeviceMinor:622 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-821 DeviceMajor:0 DeviceMinor:821 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-859 DeviceMajor:0 DeviceMinor:859 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/9f0c754e60ef175d41e372a61f68bf008bd4fa86f313ae1ab6dd7da87027e47f/userdata/shm DeviceMajor:0 DeviceMinor:1090 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-665 DeviceMajor:0 DeviceMinor:665 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} 
{Device:overlay_0-660 DeviceMajor:0 DeviceMinor:660 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-1062 DeviceMajor:0 DeviceMinor:1062 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-690 DeviceMajor:0 DeviceMinor:690 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/0da84bb7-e936-49a0-96b5-614a1305d6a4/volumes/kubernetes.io~projected/kube-api-access DeviceMajor:0 DeviceMinor:225 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/0da84bb7-e936-49a0-96b5-614a1305d6a4/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:214 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/089cfabc-9d3d-4260-bb16-8b5eaf73b3fa/volumes/kubernetes.io~projected/kube-api-access-vg8tz DeviceMajor:0 DeviceMinor:230 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run/containers/storage/overlay-containers/c7cc0f12daf98f8c149d5ab9799aa0a44614ca17d39dc2c0de31acb11cb8513a/userdata/shm DeviceMajor:0 DeviceMinor:458 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-1066 DeviceMajor:0 DeviceMinor:1066 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-160 DeviceMajor:0 DeviceMinor:160 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/887d261f-d07f-4ef0-a230-6568f47acf4d/volumes/kubernetes.io~projected/kube-api-access-pmfxj DeviceMajor:0 DeviceMinor:227 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-556 DeviceMajor:0 DeviceMinor:556 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-714 DeviceMajor:0 DeviceMinor:714 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/f0803181-4e37-43fa-8ddc-9c76d3f61817/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 
DeviceMinor:217 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/00d8a21b-701c-4334-9dda-34c28b417f42/volumes/kubernetes.io~secret/cloud-controller-manager-operator-tls DeviceMajor:0 DeviceMinor:675 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run/containers/storage/overlay-containers/ca4392c691682c0095dfe8e779e3de1082f741c49a5ae52776e0a4782a168b3b/userdata/shm DeviceMajor:0 DeviceMinor:1142 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-470 DeviceMajor:0 DeviceMinor:470 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-1052 DeviceMajor:0 DeviceMinor:1052 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/13f32761-b386-4f93-b3c0-b16ea53d338a/volumes/kubernetes.io~projected/kube-api-access-m2p67 DeviceMajor:0 DeviceMinor:229 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run/containers/storage/overlay-containers/36730131d5c09051d26cf3e4a543df7abc5397cb1ce5ef8363c603313b0f97b0/userdata/shm DeviceMajor:0 DeviceMinor:769 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run/containers/storage/overlay-containers/970eeb7c4ac93691f1016454e092dba89eb2fcc2d1e0d15b1982b71ff313707c/userdata/shm DeviceMajor:0 DeviceMinor:489 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/77ef7e49-eb85-4f5e-94d3-a6a8619a6243/volumes/kubernetes.io~projected/kube-api-access DeviceMajor:0 DeviceMinor:224 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run/containers/storage/overlay-containers/f1cb9ab9a282ce90062e66d658d9cac8cb109a67f4786999b66ddea942eec412/userdata/shm DeviceMajor:0 DeviceMinor:258 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/50a2046b-092b-434c-92a2-579f4462c4fb/volumes/kubernetes.io~projected/kube-api-access-mnpds DeviceMajor:0 DeviceMinor:751 Capacity:32475529216 
Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-1111 DeviceMajor:0 DeviceMinor:1111 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/2b2ef2ddaedb81fecd10454e7de227fc33e0631466b7f1d7f0c388f2e1883f04/userdata/shm DeviceMajor:0 DeviceMinor:448 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-52 DeviceMajor:0 DeviceMinor:52 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-1161 DeviceMajor:0 DeviceMinor:1161 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-766 DeviceMajor:0 DeviceMinor:766 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/2f48243b-6b05-4efa-8420-58a4419622bf/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:536 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-832 DeviceMajor:0 DeviceMinor:832 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-1125 DeviceMajor:0 DeviceMinor:1125 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/fc192c03-5aec-4507-a702-56bf98c96e9c/volumes/kubernetes.io~secret/secret-metrics-client-certs DeviceMajor:0 DeviceMinor:1135 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run/containers/storage/overlay-containers/5f581d90a0a82a94fc080eaf7d47e92e9bf51aec1be87f8c182f38bf6bb3aa3c/userdata/shm DeviceMajor:0 DeviceMinor:303 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-1126 DeviceMajor:0 DeviceMinor:1126 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-1158 DeviceMajor:0 DeviceMinor:1158 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-819 DeviceMajor:0 DeviceMinor:819 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} 
{Device:/var/lib/kubelet/pods/3d653e1a-5903-4a02-9357-df145f028c0d/volumes/kubernetes.io~secret/package-server-manager-serving-cert DeviceMajor:0 DeviceMinor:442 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run/containers/storage/overlay-containers/8bb2d1af6db83f391d6e2aae6571d80b39fa6657f68665d4c9aa939bfcdacfe3/userdata/shm DeviceMajor:0 DeviceMinor:487 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run/containers/storage/overlay-containers/02065d5b43e51a34d865fcf740815dfc300cc50dd65b4465588c2f46e47c4755/userdata/shm DeviceMajor:0 DeviceMinor:799 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run/containers/storage/overlay-containers/b6b12c0272b98e12411fc073869054a756107907b9e525ec9dbf8b8648e84805/userdata/shm DeviceMajor:0 DeviceMinor:237 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-1014 DeviceMajor:0 DeviceMinor:1014 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-362 DeviceMajor:0 DeviceMinor:362 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/ec5ec2e2-f7b3-43a1-87da-fbbe0ee5b118/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:223 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-277 DeviceMajor:0 DeviceMinor:277 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-387 DeviceMajor:0 DeviceMinor:387 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-316 DeviceMajor:0 DeviceMinor:316 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-179 DeviceMajor:0 DeviceMinor:179 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/d11f8baa-6e8e-4ac0-9b23-1c44efd0ab2a/volumes/kubernetes.io~projected/kube-api-access-m4tnq DeviceMajor:0 DeviceMinor:235 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} 
{Device:overlay_0-595 DeviceMajor:0 DeviceMinor:595 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-710 DeviceMajor:0 DeviceMinor:710 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/2f48243b-6b05-4efa-8420-58a4419622bf/volumes/kubernetes.io~secret/encryption-config DeviceMajor:0 DeviceMinor:535 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-896 DeviceMajor:0 DeviceMinor:896 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-64 DeviceMajor:0 DeviceMinor:64 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/7066c2bb7f28cfd07ac1eb011cdc9849969ed5f37788da395910309c70481aa9/userdata/shm DeviceMajor:0 DeviceMinor:128 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/676b054a-e76f-425d-a6ff-3f1bea8b523e/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:102 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-464 DeviceMajor:0 DeviceMinor:464 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-830 DeviceMajor:0 DeviceMinor:830 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-1094 DeviceMajor:0 DeviceMinor:1094 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-41 DeviceMajor:0 DeviceMinor:41 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/915aabfe-1071-4bfc-b291-424304dfe7d8/volumes/kubernetes.io~projected/ca-certs DeviceMajor:0 DeviceMinor:429 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run/containers/storage/overlay-containers/4100d060137e4638140caf3273251902712a7f8176df0de3da8bd3abf9194231/userdata/shm DeviceMajor:0 DeviceMinor:252 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} 
{Device:/run/containers/storage/overlay-containers/3f65f8f162278830720a8d0df1f4af830419eb457612c65a706c42ccf3c12587/userdata/shm DeviceMajor:0 DeviceMinor:395 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-850 DeviceMajor:0 DeviceMinor:850 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-985 DeviceMajor:0 DeviceMinor:985 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-134 DeviceMajor:0 DeviceMinor:134 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/6a42098e-4633-456f-ace7-bd3ee3bb6707/volumes/kubernetes.io~projected/kube-api-access-7mmbc DeviceMajor:0 DeviceMinor:1005 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-140 DeviceMajor:0 DeviceMinor:140 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-397 DeviceMajor:0 DeviceMinor:397 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/e25bef76-7020-4f86-8dee-a58ebed537d2/volumes/kubernetes.io~secret/proxy-tls DeviceMajor:0 DeviceMinor:941 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/81f8a7d8-b6a2-4522-91d3-bb524997ed0a/volumes/kubernetes.io~secret/cert DeviceMajor:0 DeviceMinor:995 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run/containers/storage/overlay-containers/914d6236fd6885067cb3f7c4a3330427cd513d826dd28ffcdcc4fb60809af1e7/userdata/shm DeviceMajor:0 DeviceMinor:457 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run/containers/storage/overlay-containers/95a0596df1becc3efa730840acdf49174a4f5a349b4eb826cfe7185b3ca3bcfa/userdata/shm DeviceMajor:0 DeviceMinor:611 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-892 DeviceMajor:0 DeviceMinor:892 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-932 DeviceMajor:0 
DeviceMinor:932 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-573 DeviceMajor:0 DeviceMinor:573 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-521 DeviceMajor:0 DeviceMinor:521 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-933 DeviceMajor:0 DeviceMinor:933 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-319 DeviceMajor:0 DeviceMinor:319 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/10944f9c-8ce9-44e6-9c36-a0ea19d8cae3/volumes/kubernetes.io~secret/srv-cert DeviceMajor:0 DeviceMinor:474 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/5e4f10ca-6466-4ac0-aeb7-325e40473e04/volumes/kubernetes.io~projected/kube-api-access-4xbrx DeviceMajor:0 DeviceMinor:1079 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/d3d998ee-b26f-4e30-83bc-f94f8c68060a/volumes/kubernetes.io~secret/marketplace-operator-metrics DeviceMajor:0 DeviceMinor:450 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run/containers/storage/overlay-containers/c4aea2db722bdac5b7168c49e752c46da9432061c6c515522534eb8c4d6126b5/userdata/shm DeviceMajor:0 DeviceMinor:574 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/8c62b15f-001a-4b64-b85f-348aefde5d1b/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:216 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-287 DeviceMajor:0 DeviceMinor:287 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-382 DeviceMajor:0 DeviceMinor:382 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/c4477be6-bcff-407a-8033-b005e19bf5d6/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:625 Capacity:32475529216 Type:vfs 
Inodes:4108170 HasInodes:true} {Device:overlay_0-516 DeviceMajor:0 DeviceMinor:516 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/2e65250ae5f98234b34351e57ed90215912c9eb2d91f1f748ce0046b50854a52/userdata/shm DeviceMajor:0 DeviceMinor:706 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-870 DeviceMajor:0 DeviceMinor:870 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/5ae41cff-0949-47f8-aae9-ae133191476d/volumes/kubernetes.io~projected/kube-api-access-mlvjp DeviceMajor:0 DeviceMinor:125 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/08e2bc8e-ca80-454c-81dc-211d122e32e0/volumes/kubernetes.io~projected/kube-api-access-xstz5 DeviceMajor:0 DeviceMinor:256 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/f83e0d3e-1f73-4727-8ee3-375cbb9e36f8/volumes/kubernetes.io~empty-dir/tmp DeviceMajor:0 DeviceMinor:520 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run/containers/storage/overlay-containers/06b5a40ca00c0683426a1707f8de8aa68ed5666ea8cb726727703876312ec6d0/userdata/shm DeviceMajor:0 DeviceMinor:658 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-43 DeviceMajor:0 DeviceMinor:43 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/11e56b22c0ca61c66515f175bbe9f8fe67513a2c89d80968a1d368bbdad873da/userdata/shm DeviceMajor:0 DeviceMinor:1009 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-811 DeviceMajor:0 DeviceMinor:811 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-509 DeviceMajor:0 DeviceMinor:509 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} 
{Device:/run/containers/storage/overlay-containers/0b83ebe9d6eac21a54c3830c4cd62ad02d28ed6f976f2ea34a3538e434b5beb0/userdata/shm DeviceMajor:0 DeviceMinor:439 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-339 DeviceMajor:0 DeviceMinor:339 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/2f79578c-bbfb-4968-893a-730deb4c01f9/volumes/kubernetes.io~secret/metrics-tls DeviceMajor:0 DeviceMinor:445 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-876 DeviceMajor:0 DeviceMinor:876 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/18ffa620-dacc-4b09-be04-2c325f860813/volumes/kubernetes.io~projected/kube-api-access-fmzhw DeviceMajor:0 DeviceMinor:680 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run/containers/storage/overlay-containers/4641cab9868e3327d01299b932a32e6567401ef53f9b8cc74562f50d7b0926ca/userdata/shm DeviceMajor:0 DeviceMinor:271 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/c642c18f-f960-4418-bcb7-df884f8f8ad5/volumes/kubernetes.io~projected/kube-api-access-8t2jl DeviceMajor:0 DeviceMinor:312 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/ef42b65e-2d92-46ac-baaf-30e213787781/volumes/kubernetes.io~secret/metrics-tls DeviceMajor:0 DeviceMinor:640 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-292 DeviceMajor:0 DeviceMinor:292 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/f0803181-4e37-43fa-8ddc-9c76d3f61817/volumes/kubernetes.io~projected/kube-api-access-lwkdj DeviceMajor:0 DeviceMinor:301 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/747659a6-4a1e-43ed-bb8e-36da6e63b5a1/volumes/kubernetes.io~projected/kube-api-access-qxcvd DeviceMajor:0 DeviceMinor:784 Capacity:32475529216 Type:vfs 
Inodes:4108170 HasInodes:true} {Device:overlay_0-814 DeviceMajor:0 DeviceMinor:814 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-354 DeviceMajor:0 DeviceMinor:354 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-194 DeviceMajor:0 DeviceMinor:194 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/676b054a-e76f-425d-a6ff-3f1bea8b523e/volumes/kubernetes.io~projected/kube-api-access DeviceMajor:0 DeviceMinor:446 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/317af639-269e-4163-8e24-fcea468b9352/volumes/kubernetes.io~secret/cluster-baremetal-operator-tls DeviceMajor:0 DeviceMinor:771 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/50be3c2b-284b-4f60-b4ed-2cc7b4e528fa/volumes/kubernetes.io~projected/kube-api-access-jbwwp DeviceMajor:0 DeviceMinor:857 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-552 DeviceMajor:0 DeviceMinor:552 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/6096081d86dfbfa09ca1bdec91da24d4ddf5b823468c93d6e9e22822357294bc/userdata/shm DeviceMajor:0 DeviceMinor:858 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-162 DeviceMajor:0 DeviceMinor:162 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/36ad5a83-5c32-4941-94e0-7af86ac5d462/volumes/kubernetes.io~projected/kube-api-access-mqsh5 DeviceMajor:0 DeviceMinor:563 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/4dd0fc2f-f2ee-4447-a747-04a178288cf0/volumes/kubernetes.io~projected/kube-api-access-fnw9d DeviceMajor:0 DeviceMinor:104 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-708 DeviceMajor:0 DeviceMinor:708 Capacity:214143315968 Type:vfs Inodes:104594880 
HasInodes:true} {Device:overlay_0-118 DeviceMajor:0 DeviceMinor:118 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-484 DeviceMajor:0 DeviceMinor:484 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/3f6f1ed4b9428b71641a87701412cc5bbb34559ce861fd12caebd021e4bfc58b/userdata/shm DeviceMajor:0 DeviceMinor:1048 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-1120 DeviceMajor:0 DeviceMinor:1120 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-565 DeviceMajor:0 DeviceMinor:565 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-1020 DeviceMajor:0 DeviceMinor:1020 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-97 DeviceMajor:0 DeviceMinor:97 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/29b6aa89-0416-4595-9deb-10b290521d86/volumes/kubernetes.io~projected/kube-api-access-cbtjs DeviceMajor:0 DeviceMinor:123 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/d7d67915-d31e-46dc-bb2e-1a6f689dd875/volumes/kubernetes.io~projected/kube-api-access-69hws DeviceMajor:0 DeviceMinor:764 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/f6992fed-b472-4a2d-a376-c5d72aa846d4/volumes/kubernetes.io~projected/kube-api-access-4n75n DeviceMajor:0 DeviceMinor:777 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-989 DeviceMajor:0 DeviceMinor:989 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/5c9d522ae739e2277c0296ac70334b7f1898acab312dd9c5c15576df36650d2b/userdata/shm DeviceMajor:0 DeviceMinor:628 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-267 DeviceMajor:0 DeviceMinor:267 Capacity:214143315968 Type:vfs 
Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/2f79578c-bbfb-4968-893a-730deb4c01f9/volumes/kubernetes.io~projected/kube-api-access-f9hks DeviceMajor:0 DeviceMinor:302 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/1f43b4e7-5cd1-46d2-a02e-0d846b2e5182/volumes/kubernetes.io~projected/kube-api-access-brzd4 DeviceMajor:0 DeviceMinor:138 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/be89c006-0c82-4728-9c79-210303e623dc/volumes/kubernetes.io~projected/kube-api-access-dd4m8 DeviceMajor:0 DeviceMinor:1059 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-95 DeviceMajor:0 DeviceMinor:95 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/77ef7e49-eb85-4f5e-94d3-a6a8619a6243/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:220 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/c4477be6-bcff-407a-8033-b005e19bf5d6/volumes/kubernetes.io~secret/etcd-client DeviceMajor:0 DeviceMinor:626 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-1022 DeviceMajor:0 DeviceMinor:1022 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/aac9e43b541ff8c2c2bfb86003c0c12881f81493b0818cd60c9ba62d916d93a2/userdata/shm DeviceMajor:0 DeviceMinor:84 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-87 DeviceMajor:0 DeviceMinor:87 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/f4bdadfb01202ddc6464892800ff63c99a7021c118d9d6dada777648c97106ba/userdata/shm DeviceMajor:0 DeviceMinor:129 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-1109 DeviceMajor:0 DeviceMinor:1109 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} 
{Device:/run/containers/storage/overlay-containers/d211f6630b0e510a98b862295b3b4e01e3b8d0f319a2b5a7fbad71f4b348ebd3/userdata/shm DeviceMajor:0 DeviceMinor:1086 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-1102 DeviceMajor:0 DeviceMinor:1102 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/842bc57e6bbe56242bef7b88438357fe374fd511b54a67e77b67b5f32ad709e8/userdata/shm DeviceMajor:0 DeviceMinor:253 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/1081e565-b7d8-4b6e-9d41-5db36cfe094c/volumes/kubernetes.io~projected/kube-api-access-b726x DeviceMajor:0 DeviceMinor:1081 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/bcf05594-4c10-4b54-a47c-d55e323f1f87/volumes/kubernetes.io~projected/kube-api-access-j4hd6 DeviceMajor:0 DeviceMinor:298 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/2f48243b-6b05-4efa-8420-58a4419622bf/volumes/kubernetes.io~projected/kube-api-access-qhddd DeviceMajor:0 DeviceMinor:540 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-719 DeviceMajor:0 DeviceMinor:719 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/5392be7ab4e8fd67e380477649b224dee24aa1e239336e87f916d5fb0198c7d5/userdata/shm DeviceMajor:0 DeviceMinor:796 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-403 DeviceMajor:0 DeviceMinor:403 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/1f8e6ca57afc2c7f1b75640b9d76490f87697f57e3507366ea9d48c029b1f4d6/userdata/shm DeviceMajor:0 DeviceMinor:242 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/269aedfd-4274-4998-bd0d-603b67257666/volumes/kubernetes.io~projected/kube-api-access-btf8q DeviceMajor:0 
DeviceMinor:307 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-918 DeviceMajor:0 DeviceMinor:918 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-1191 DeviceMajor:0 DeviceMinor:1191 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-273 DeviceMajor:0 DeviceMinor:273 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/45925a5e-41ae-4c19-b586-3151c7677612/volumes/kubernetes.io~secret/stats-auth DeviceMajor:0 DeviceMinor:1000 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/4f9e6618-62b5-4181-b545-211461811140/volumes/kubernetes.io~projected/kube-api-access-tr9gm DeviceMajor:0 DeviceMinor:453 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run/containers/storage/overlay-containers/1f2ba041c75397f172b0e8393f3ba52da66efb5011242b7893cceb36ffb01a0a/userdata/shm DeviceMajor:0 DeviceMinor:410 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/f83e0d3e-1f73-4727-8ee3-375cbb9e36f8/volumes/kubernetes.io~projected/kube-api-access-p6h9f DeviceMajor:0 DeviceMinor:511 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/317af639-269e-4163-8e24-fcea468b9352/volumes/kubernetes.io~secret/cert DeviceMajor:0 DeviceMinor:776 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-281 DeviceMajor:0 DeviceMinor:281 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-534 DeviceMajor:0 DeviceMinor:534 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-377 DeviceMajor:0 DeviceMinor:377 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/915aabfe-1071-4bfc-b291-424304dfe7d8/volumes/kubernetes.io~projected/kube-api-access-n85n6 DeviceMajor:0 DeviceMinor:435 Capacity:32475529216 Type:vfs 
Inodes:4108170 HasInodes:true} {Device:/run/containers/storage/overlay-containers/d6d4028b66b05354ce39cae63e764e8ed5f2304a82f8cd6cbd59c6a8537a5bed/userdata/shm DeviceMajor:0 DeviceMinor:793 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/ec5ec2e2-f7b3-43a1-87da-fbbe0ee5b118/volumes/kubernetes.io~projected/kube-api-access DeviceMajor:0 DeviceMinor:233 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/10944f9c-8ce9-44e6-9c36-a0ea19d8cae3/volumes/kubernetes.io~projected/kube-api-access-zbk4f DeviceMajor:0 DeviceMinor:251 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-418 DeviceMajor:0 DeviceMinor:418 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-598 DeviceMajor:0 DeviceMinor:598 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/d7d67915-d31e-46dc-bb2e-1a6f689dd875/volumes/kubernetes.io~secret/cluster-storage-operator-serving-cert DeviceMajor:0 DeviceMinor:753 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/842251bd-238a-44ba-99fc-a356503f5d16/volumes/kubernetes.io~secret/node-exporter-tls DeviceMajor:0 DeviceMinor:1084 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run/containers/storage/overlay-containers/4923fdf0bf7675fa9b87a52fcb37d82a429121c63cdefd19c58f0e547211a622/userdata/shm DeviceMajor:0 DeviceMinor:686 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/f5775266-5e58-44ed-81cb-dfe3faf38add/volumes/kubernetes.io~projected/kube-api-access-9q2qc DeviceMajor:0 DeviceMinor:228 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-366 DeviceMajor:0 DeviceMinor:366 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-808 DeviceMajor:0 DeviceMinor:808 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} 
{Device:overlay_0-74 DeviceMajor:0 DeviceMinor:74 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-322 DeviceMajor:0 DeviceMinor:322 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/5ae41cff-0949-47f8-aae9-ae133191476d/volumes/kubernetes.io~secret/ovn-control-plane-metrics-cert DeviceMajor:0 DeviceMinor:124 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-153 DeviceMajor:0 DeviceMinor:153 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/1f43b4e7-5cd1-46d2-a02e-0d846b2e5182/volumes/kubernetes.io~secret/webhook-cert DeviceMajor:0 DeviceMinor:139 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-1030 DeviceMajor:0 DeviceMinor:1030 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/a4591749866252389a99d8d167ffc17036d5b09d044139535fc2027e3c84b038/userdata/shm DeviceMajor:0 DeviceMinor:333 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-728 DeviceMajor:0 DeviceMinor:728 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/45925a5e-41ae-4c19-b586-3151c7677612/volumes/kubernetes.io~secret/metrics-certs DeviceMajor:0 DeviceMinor:1002 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run/containers/storage/overlay-containers/f7b194f18885cd869cc30349fb7d97bcdda7984dea9fb20d14a3e9436a39dc13/userdata/shm DeviceMajor:0 DeviceMinor:1006 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-1185 DeviceMajor:0 DeviceMinor:1185 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-184 DeviceMajor:0 DeviceMinor:184 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/00ebdf06-1f44-40cd-87e5-54195188b6d4/volumes/kubernetes.io~projected/kube-api-access-7rkc4 
DeviceMajor:0 DeviceMinor:436 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-518 DeviceMajor:0 DeviceMinor:518 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-624 DeviceMajor:0 DeviceMinor:624 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/d5a19b80-d488-46d3-a4a8-0b80361077e1/volumes/kubernetes.io~secret/srv-cert DeviceMajor:0 DeviceMinor:472 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/50a2046b-092b-434c-92a2-579f4462c4fb/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:677 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-1197 DeviceMajor:0 DeviceMinor:1197 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/754a980682251c2faf310af15f0042fda13df9ae03c81a3a698c0d687faffa20/userdata/shm DeviceMajor:0 DeviceMinor:257 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/15b592d6-3c48-45d4-9172-d28632ae8995/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:215 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/747659a6-4a1e-43ed-bb8e-36da6e63b5a1/volumes/kubernetes.io~secret/control-plane-machine-set-operator-tls DeviceMajor:0 DeviceMinor:781 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-971 DeviceMajor:0 DeviceMinor:971 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-1144 DeviceMajor:0 DeviceMinor:1144 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-906 DeviceMajor:0 DeviceMinor:906 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-330 DeviceMajor:0 DeviceMinor:330 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} 
{Device:/run/containers/storage/overlay-containers/dc74469df6e780c8e9e2827ef289651444a1ff65c5b17d5937b4448f9addb191/userdata/shm DeviceMajor:0 DeviceMinor:1046 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:/dev/shm DeviceMajor:0 DeviceMinor:22 Capacity:16827064320 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/d3d998ee-b26f-4e30-83bc-f94f8c68060a/volumes/kubernetes.io~projected/kube-api-access-x5nb7 DeviceMajor:0 DeviceMinor:294 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/f31565e2-c211-4d28-8bbc-d7a951023a8b/volumes/kubernetes.io~projected/kube-api-access-kwk62 DeviceMajor:0 DeviceMinor:409 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run/containers/storage/overlay-containers/e7d5143ee528d1b1b82a3ddf6b2e4a81cfc844b962f0b1dce63b2e1946f0f7b1/userdata/shm DeviceMajor:0 DeviceMinor:465 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/c4477be6-bcff-407a-8033-b005e19bf5d6/volumes/kubernetes.io~projected/kube-api-access-d4q4x DeviceMajor:0 DeviceMinor:627 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-279 DeviceMajor:0 DeviceMinor:279 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/13710582-eac3-42e5-b28a-8b4fd3030af2/volumes/kubernetes.io~projected/kube-api-access-vpfv9 DeviceMajor:0 DeviceMinor:641 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/5e4f10ca-6466-4ac0-aeb7-325e40473e04/volumes/kubernetes.io~secret/kube-state-metrics-kube-rbac-proxy-config DeviceMajor:0 DeviceMinor:1077 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run/containers/storage/overlay-containers/2a520ce1540e4505903e0c09b3c7ff382c5a6347945280110eeacb275245a884/userdata/shm DeviceMajor:0 DeviceMinor:44 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-121 DeviceMajor:0 
DeviceMinor:121 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-310 DeviceMajor:0 DeviceMinor:310 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/081a08d6-a4fd-412c-81c3-1364c36f0f15/volumes/kubernetes.io~projected/kube-api-access-mz927 DeviceMajor:0 DeviceMinor:1047 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-550 DeviceMajor:0 DeviceMinor:550 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/8f2520a5a8a4d59a3a9c1df60e2638463688675ec7d03c44c89816280d167889/userdata/shm DeviceMajor:0 DeviceMinor:296 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/ef42b65e-2d92-46ac-baaf-30e213787781/volumes/kubernetes.io~projected/kube-api-access-xxjbd DeviceMajor:0 DeviceMinor:630 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-365 DeviceMajor:0 DeviceMinor:365 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/e2d9f98170b9be57120af2a3d4ad3e87888e64c3d58e7180a2211b7ab3fd61c6/userdata/shm DeviceMajor:0 DeviceMinor:154 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/45925a5e-41ae-4c19-b586-3151c7677612/volumes/kubernetes.io~secret/default-certificate DeviceMajor:0 DeviceMinor:1001 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run/containers/storage/overlay-containers/259b8c4f70e310b1a2310215be2034d29d1f6b96a9b3aac30e2098e024daf661/userdata/shm DeviceMajor:0 DeviceMinor:1012 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/b12a6f33-70df-4832-ac3b-0d2b94125fbf/volumes/kubernetes.io~projected/kube-api-access-9p9dz DeviceMajor:0 DeviceMinor:867 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} 
{Device:/run/containers/storage/overlay-containers/02db34ef289b2a257fb361c5e1190f74ebf2b35e8d2ff6177192f08616db19aa/userdata/shm DeviceMajor:0 DeviceMinor:681 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run/containers/storage/overlay-containers/b046991449e1d420ea17d254f8c05faec355e4aacc147507b98a3f095fa7ff11/userdata/shm DeviceMajor:0 DeviceMinor:89 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run/containers/storage/overlay-containers/aa04b90f16ed80e22ecfe4066cdbfb20ddc6e64977b5d63203a00d19ce4e1333/userdata/shm DeviceMajor:0 DeviceMinor:543 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run/containers/storage/overlay-containers/f02f7e100e251060c54156f4f1beac07154b4cae59d3669639dcb3b98dca6124/userdata/shm DeviceMajor:0 DeviceMinor:868 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-631 DeviceMajor:0 DeviceMinor:631 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-661 DeviceMajor:0 DeviceMinor:661 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-290 DeviceMajor:0 DeviceMinor:290 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/31c103b44a6104346bc94bbde90d17a3c1f1dc78c81990683bc98b314baa42f3/userdata/shm DeviceMajor:0 DeviceMinor:650 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/b12a6f33-70df-4832-ac3b-0d2b94125fbf/volumes/kubernetes.io~secret/machine-approver-tls DeviceMajor:0 DeviceMinor:747 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/81f8a7d8-b6a2-4522-91d3-bb524997ed0a/volumes/kubernetes.io~projected/kube-api-access-gd6q6 DeviceMajor:0 DeviceMinor:1004 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/87a5904a-55ca-416f-8aec-57a2b5194c5a/volumes/kubernetes.io~projected/kube-api-access-mddhv DeviceMajor:0 
DeviceMinor:757 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run/containers/storage/overlay-containers/173b3354a692a16e1dac4e0c613765bd4dc76c18f400e62b22fb91f5a2c1aaca/userdata/shm DeviceMajor:0 DeviceMinor:463 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-261 DeviceMajor:0 DeviceMinor:261 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-878 DeviceMajor:0 DeviceMinor:878 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-1092 DeviceMajor:0 DeviceMinor:1092 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/a454234a-6c8e-4916-81e8-c9e66cec9d31/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:443 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-67 DeviceMajor:0 DeviceMinor:67 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/2bd86a5a786b8cd9854f1e649c41cebb309a3c1ac190ae67ed40c19b3eec0d04/userdata/shm DeviceMajor:0 DeviceMinor:119 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/d5f63b6b-990a-444b-a954-d718036f2f6c/volumes/kubernetes.io~projected/kube-api-access-rw27v DeviceMajor:0 DeviceMinor:783 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-1165 DeviceMajor:0 DeviceMinor:1165 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-69 DeviceMajor:0 DeviceMinor:69 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/e0ce4c51-2b9f-410f-93e5-9c2ff718dd71/volumes/kubernetes.io~projected/kube-api-access-cscql DeviceMajor:0 DeviceMinor:649 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/bcf05594-4c10-4b54-a47c-d55e323f1f87/volumes/kubernetes.io~secret/image-registry-operator-tls DeviceMajor:0 DeviceMinor:449 
Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/50be3c2b-284b-4f60-b4ed-2cc7b4e528fa/volumes/kubernetes.io~secret/proxy-tls DeviceMajor:0 DeviceMinor:852 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-1027 DeviceMajor:0 DeviceMinor:1027 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-76 DeviceMajor:0 DeviceMinor:76 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-399 DeviceMajor:0 DeviceMinor:399 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-47 DeviceMajor:0 DeviceMinor:47 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/15b592d6-3c48-45d4-9172-d28632ae8995/volumes/kubernetes.io~secret/etcd-client DeviceMajor:0 DeviceMinor:213 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/887d261f-d07f-4ef0-a230-6568f47acf4d/volumes/kubernetes.io~secret/cluster-olm-operator-serving-cert DeviceMajor:0 DeviceMinor:219 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-589 DeviceMajor:0 DeviceMinor:589 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/327b75ff7d2f2b23c89b69896efc61025e5eb89aca44a3ec0a496ee1ba0617ea/userdata/shm DeviceMajor:0 DeviceMinor:1060 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-1096 DeviceMajor:0 DeviceMinor:1096 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-158 DeviceMajor:0 DeviceMinor:158 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/e25bef76-7020-4f86-8dee-a58ebed537d2/volumes/kubernetes.io~projected/kube-api-access-r8gcb DeviceMajor:0 DeviceMinor:982 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} 
{Device:/var/lib/kubelet/pods/be89c006-0c82-4728-9c79-210303e623dc/volumes/kubernetes.io~secret/prometheus-operator-tls DeviceMajor:0 DeviceMinor:1054 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-49 DeviceMajor:0 DeviceMinor:49 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-807 DeviceMajor:0 DeviceMinor:807 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/dev/vda3 DeviceMajor:252 DeviceMinor:3 Capacity:366869504 Type:vfs Inodes:98304 HasInodes:true} {Device:overlay_0-174 DeviceMajor:0 DeviceMinor:174 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-637 DeviceMajor:0 DeviceMinor:637 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/d44112d1-b2a5-4b8d-b74d-1e91638508d5/volumes/kubernetes.io~secret/cert DeviceMajor:0 DeviceMinor:755 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/tmp DeviceMajor:0 DeviceMinor:30 Capacity:16827064320 Type:vfs Inodes:1048576 HasInodes:true} {Device:overlay_0-476 DeviceMajor:0 DeviceMinor:476 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/5e4f10ca-6466-4ac0-aeb7-325e40473e04/volumes/kubernetes.io~secret/kube-state-metrics-tls DeviceMajor:0 DeviceMinor:1085 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/d6226325-c4d9-497e-8d19-a71adc66c5ac/volumes/kubernetes.io~secret/ovn-node-metrics-cert DeviceMajor:0 DeviceMinor:126 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-718 DeviceMajor:0 DeviceMinor:718 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-910 DeviceMajor:0 DeviceMinor:910 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/aff2f4bdb8410e55f89c70c290b0ee60c11f3e12de8945726a3ee53766f5711f/userdata/shm DeviceMajor:0 
DeviceMinor:1082 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-360 DeviceMajor:0 DeviceMinor:360 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-576 DeviceMajor:0 DeviceMinor:576 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/d11f8baa-6e8e-4ac0-9b23-1c44efd0ab2a/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:221 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run/containers/storage/overlay-containers/5e03538f7a196b4948a3a7782b34246a467d9e14e18b21bed24c1061ee7390ce/userdata/shm DeviceMajor:0 DeviceMinor:240 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/c0f3e81c-f61d-430a-98e8-82e3b283fc73/volumes/kubernetes.io~projected/kube-api-access-65ts9 DeviceMajor:0 DeviceMinor:394 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-414 DeviceMajor:0 DeviceMinor:414 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/4a6cc550d523ce1bfed748c19240f1c4e3a9202060aead91cc14af91ea48f5ce/userdata/shm DeviceMajor:0 DeviceMinor:50 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-1146 DeviceMajor:0 DeviceMinor:1146 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-482 DeviceMajor:0 DeviceMinor:482 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/d5f63b6b-990a-444b-a954-d718036f2f6c/volumes/kubernetes.io~secret/machine-api-operator-tls DeviceMajor:0 DeviceMinor:773 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-283 DeviceMajor:0 DeviceMinor:283 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/00d8a21b-701c-4334-9dda-34c28b417f42/volumes/kubernetes.io~projected/kube-api-access-bdxqb DeviceMajor:0 DeviceMinor:676 
Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-72 DeviceMajor:0 DeviceMinor:72 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/abc95f00c9e0c52ab8e7354cef7b322da886c1a2e03c03fc7c2109630be9ce0b/userdata/shm DeviceMajor:0 DeviceMinor:244 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run/containers/storage/overlay-containers/c877a6a31ad16c9b3f6d1a10e247940a86d22f389ab82d4b655a52c5c8ebc0a4/userdata/shm DeviceMajor:0 DeviceMinor:803 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/1081e565-b7d8-4b6e-9d41-5db36cfe094c/volumes/kubernetes.io~secret/openshift-state-metrics-kube-rbac-proxy-config DeviceMajor:0 DeviceMinor:1078 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-758 DeviceMajor:0 DeviceMinor:758 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-305 DeviceMajor:0 DeviceMinor:305 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/4e279dcc-35e2-4503-babc-978ac208c150/volumes/kubernetes.io~projected/kube-api-access-bwjz5 DeviceMajor:0 DeviceMinor:246 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-588 DeviceMajor:0 DeviceMinor:588 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-107 DeviceMajor:0 DeviceMinor:107 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/ac4a42c40018650481568cd3e3f0125e785e9eec1d03bfa3009fd0ee7e80a629/userdata/shm DeviceMajor:0 DeviceMinor:308 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run/containers/storage/overlay-containers/69fae5f2ef7c0575f1ee9aa46fd22ae7b8ff711dadd59b1c832eda467b9991cd/userdata/shm DeviceMajor:0 DeviceMinor:437 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-1050 DeviceMajor:0 
DeviceMinor:1050 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run DeviceMajor:0 DeviceMinor:24 Capacity:6730825728 Type:vfs Inodes:819200 HasInodes:true} {Device:overlay_0-85 DeviceMajor:0 DeviceMinor:85 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/534692e5957aae2c3d6d9152a87bd37d178574b231da74f33889bcb3869aae82/userdata/shm DeviceMajor:0 DeviceMinor:105 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-485 DeviceMajor:0 DeviceMinor:485 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/fc192c03-5aec-4507-a702-56bf98c96e9c/volumes/kubernetes.io~secret/client-ca-bundle DeviceMajor:0 DeviceMinor:1139 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-136 DeviceMajor:0 DeviceMinor:136 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-142 DeviceMajor:0 DeviceMinor:142 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/5d54ffc470f89711bfd74406a6ddbacbe1dd4ef841888f957b998a6253057999/userdata/shm DeviceMajor:0 DeviceMinor:774 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/d47a1118-c12f-4234-8c0f-1a2a47fa8a4f/volumes/kubernetes.io~projected/kube-api-access-mkvfp DeviceMajor:0 DeviceMinor:778 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run/containers/storage/overlay-containers/4a2e539e0bcc34335d49c02d69347bd6d8232a1bb972540a7de9aececb6d671f/userdata/shm DeviceMajor:0 DeviceMinor:752 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-805 DeviceMajor:0 DeviceMinor:805 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-861 DeviceMajor:0 DeviceMinor:861 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} 
{Device:/var/lib/kubelet/pods/a454234a-6c8e-4916-81e8-c9e66cec9d31/volumes/kubernetes.io~projected/kube-api-access-kn8f2 DeviceMajor:0 DeviceMinor:678 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/32fe77f9-082d-491c-b3d0-9c10feaf4a8e/volumes/kubernetes.io~projected/kube-api-access-6x492 DeviceMajor:0 DeviceMinor:657 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-426 DeviceMajor:0 DeviceMinor:426 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/29b6aa89-0416-4595-9deb-10b290521d86/volumes/kubernetes.io~secret/metrics-certs DeviceMajor:0 DeviceMinor:473 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-318 DeviceMajor:0 DeviceMinor:318 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/4dd0fc2f-f2ee-4447-a747-04a178288cf0/volumes/kubernetes.io~secret/metrics-tls DeviceMajor:0 DeviceMinor:98 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run/containers/storage/overlay-containers/caf607baad46071737a7ad295cff2dc8569126a9cada0edb3e0461efe66c6a52/userdata/shm DeviceMajor:0 DeviceMinor:639 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/842251bd-238a-44ba-99fc-a356503f5d16/volumes/kubernetes.io~secret/node-exporter-kube-rbac-proxy-config DeviceMajor:0 DeviceMinor:1076 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-601 DeviceMajor:0 DeviceMinor:601 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-165 DeviceMajor:0 DeviceMinor:165 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-664 DeviceMajor:0 DeviceMinor:664 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/152689b1-5875-4a9a-bb25-bee858523168/volumes/kubernetes.io~projected/kube-api-access-km69t DeviceMajor:0 DeviceMinor:115 
Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-264 DeviceMajor:0 DeviceMinor:264 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-478 DeviceMajor:0 DeviceMinor:478 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/e631d83a1a86fd29ec9a08d7d593e19783f91c18b20dce846f07ab60e82a0c6e/userdata/shm DeviceMajor:0 DeviceMinor:983 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-1088 DeviceMajor:0 DeviceMinor:1088 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-195 DeviceMajor:0 DeviceMinor:195 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/3020d236-03e0-4916-97dd-f1085632ca43/volumes/kubernetes.io~secret/apiservice-cert DeviceMajor:0 DeviceMinor:447 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/d39ee5d7-840e-4481-b0b9-baf34da2c7b1/volumes/kubernetes.io~projected/kube-api-access-rvrc7 DeviceMajor:0 DeviceMinor:750 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/081a08d6-a4fd-412c-81c3-1364c36f0f15/volumes/kubernetes.io~secret/certs DeviceMajor:0 DeviceMinor:1038 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-80 DeviceMajor:0 DeviceMinor:80 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/bcf05594-4c10-4b54-a47c-d55e323f1f87/volumes/kubernetes.io~projected/bound-sa-token DeviceMajor:0 DeviceMinor:239 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run/containers/storage/overlay-containers/33eb1753d1610b81e5a24f93d9249c8e3e11614421397b68063a0f4b3b803691/userdata/shm DeviceMajor:0 DeviceMinor:787 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} 
{Device:/run/containers/storage/overlay-containers/0d4645e0a294cbcc940fcfffa42d733be306f63d83bb6e85a675a05c4f244808/userdata/shm DeviceMajor:0 DeviceMinor:762 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-1064 DeviceMajor:0 DeviceMinor:1064 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-768 DeviceMajor:0 DeviceMinor:768 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-620 DeviceMajor:0 DeviceMinor:620 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/034aaf8e-95df-4171-bae4-e7abe58d15f7/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:289 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run/containers/storage/overlay-containers/c2b846fb7ae8217762a980bc271d109131601f29417428a6bf3bd52ed70a5227/userdata/shm DeviceMajor:0 DeviceMinor:428 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/866b0545-e232-4c80-9fb6-549d313ac3fc/volumes/kubernetes.io~secret/tls-certificates DeviceMajor:0 DeviceMinor:999 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/45925a5e-41ae-4c19-b586-3151c7677612/volumes/kubernetes.io~projected/kube-api-access-tll9d DeviceMajor:0 DeviceMinor:1003 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-189 DeviceMajor:0 DeviceMinor:189 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/0efd5eb82a3bcc3e1df342102496e59fd5b2f395bc25671cea43a0422444ad1d/userdata/shm DeviceMajor:0 DeviceMinor:633 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/2f79578c-bbfb-4968-893a-730deb4c01f9/volumes/kubernetes.io~projected/bound-sa-token DeviceMajor:0 DeviceMinor:231 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} 
{Device:/var/lib/kubelet/pods/fc192c03-5aec-4507-a702-56bf98c96e9c/volumes/kubernetes.io~projected/kube-api-access-c69h2 DeviceMajor:0 DeviceMinor:1141 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-286 DeviceMajor:0 DeviceMinor:286 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-578 DeviceMajor:0 DeviceMinor:578 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/3d653e1a-5903-4a02-9357-df145f028c0d/volumes/kubernetes.io~projected/kube-api-access-6x8kz DeviceMajor:0 DeviceMinor:222 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/d39ee5d7-840e-4481-b0b9-baf34da2c7b1/volumes/kubernetes.io~secret/samples-operator-tls DeviceMajor:0 DeviceMinor:738 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/d6226325-c4d9-497e-8d19-a71adc66c5ac/volume-subpaths/run-systemd/ovnkube-controller/6 DeviceMajor:0 DeviceMinor:24 Capacity:6730825728 Type:vfs Inodes:819200 HasInodes:true} {Device:/run/containers/storage/overlay-containers/c947bd9963641afb60859a3b7c244810b57b25926def17f475843b4b80fe1d04/userdata/shm DeviceMajor:0 DeviceMinor:263 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-353 DeviceMajor:0 DeviceMinor:353 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/87a5904a-55ca-416f-8aec-57a2b5194c5a/volumes/kubernetes.io~secret/cloud-credential-operator-serving-cert DeviceMajor:0 DeviceMinor:756 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-648 DeviceMajor:0 DeviceMinor:648 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-60 DeviceMajor:0 DeviceMinor:60 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-926 DeviceMajor:0 DeviceMinor:926 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} 
{Device:/var/lib/kubelet/pods/f83e0d3e-1f73-4727-8ee3-375cbb9e36f8/volumes/kubernetes.io~empty-dir/etc-tuned DeviceMajor:0 DeviceMinor:496 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-45 DeviceMajor:0 DeviceMinor:45 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/c4477be6-bcff-407a-8033-b005e19bf5d6/volumes/kubernetes.io~secret/encryption-config DeviceMajor:0 DeviceMinor:523 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-111 DeviceMajor:0 DeviceMinor:111 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/a13f1b34007cf32fe962f7d50d2988f0f66eb3022aee3b3a767d84bde6caed30/userdata/shm DeviceMajor:0 DeviceMinor:58 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-919 DeviceMajor:0 DeviceMinor:919 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-116 DeviceMajor:0 DeviceMinor:116 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-712 DeviceMajor:0 DeviceMinor:712 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/034aaf8e-95df-4171-bae4-e7abe58d15f7/volumes/kubernetes.io~projected/kube-api-access-5w5r2 DeviceMajor:0 DeviceMinor:295 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-716 DeviceMajor:0 DeviceMinor:716 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/3020d236-03e0-4916-97dd-f1085632ca43/volumes/kubernetes.io~projected/kube-api-access-c24hd DeviceMajor:0 DeviceMinor:250 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-839 DeviceMajor:0 DeviceMinor:839 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/00ebdf06-1f44-40cd-87e5-54195188b6d4/volumes/kubernetes.io~projected/ca-certs DeviceMajor:0 DeviceMinor:433 
Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/13f32761-b386-4f93-b3c0-b16ea53d338a/volumes/kubernetes.io~secret/metrics-tls DeviceMajor:0 DeviceMinor:451 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-834 DeviceMajor:0 DeviceMinor:834 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/f5775266-5e58-44ed-81cb-dfe3faf38add/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:218 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/2f48243b-6b05-4efa-8420-58a4419622bf/volumes/kubernetes.io~secret/etcd-client DeviceMajor:0 DeviceMinor:537 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-726 DeviceMajor:0 DeviceMinor:726 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/1081e565-b7d8-4b6e-9d41-5db36cfe094c/volumes/kubernetes.io~secret/openshift-state-metrics-tls DeviceMajor:0 DeviceMinor:1072 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-374 DeviceMajor:0 DeviceMinor:374 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-593 DeviceMajor:0 DeviceMinor:593 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/bad7583a8d87a54f610f7ff59977a30650055c862ace4c5e9beab2a18620861a/userdata/shm DeviceMajor:0 DeviceMinor:248 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run/containers/storage/overlay-containers/b699b1831b9f5250a8ce5ada14edbc693482d02c81ce7cd3de76c7bdd381af20/userdata/shm DeviceMajor:0 DeviceMinor:791 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-1159 DeviceMajor:0 DeviceMinor:1159 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-132 DeviceMajor:0 DeviceMinor:132 Capacity:214143315968 Type:vfs Inodes:104594880 
HasInodes:true} {Device:/run/containers/storage/overlay-containers/37d33fead87bedc9ebd143b0294923b633e8d9e7d47a848ec4d50fbd02e27628/userdata/shm DeviceMajor:0 DeviceMinor:475 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-742 DeviceMajor:0 DeviceMinor:742 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-1156 DeviceMajor:0 DeviceMinor:1156 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/3020d236-03e0-4916-97dd-f1085632ca43/volumes/kubernetes.io~secret/node-tuning-operator-tls DeviceMajor:0 DeviceMinor:452 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-652 DeviceMajor:0 DeviceMinor:652 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/d5a19b80-d488-46d3-a4a8-0b80361077e1/volumes/kubernetes.io~projected/kube-api-access-p8hcd DeviceMajor:0 DeviceMinor:226 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/604456a0-4997-43bc-87ef-283a002111fe/volumes/kubernetes.io~projected/kube-api-access-8sk7j DeviceMajor:0 DeviceMinor:247 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/c0f3e81c-f61d-430a-98e8-82e3b283fc73/volumes/kubernetes.io~secret/signing-key DeviceMajor:0 DeviceMinor:393 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run/containers/storage/overlay-containers/d841a86b661f54cc41ca6d7f060def7405c52e9adcc79d02bb6a1a6bb94e4f40/userdata/shm DeviceMajor:0 DeviceMinor:455 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/f6992fed-b472-4a2d-a376-c5d72aa846d4/volumes/kubernetes.io~secret/apiservice-cert DeviceMajor:0 DeviceMinor:779 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run/containers/storage/overlay-containers/a3ffdbf0e263655894f67c3d77b8923c8263311f04a159ccc83606c42c70fddb/userdata/shm DeviceMajor:0 DeviceMinor:1008 
Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/842251bd-238a-44ba-99fc-a356503f5d16/volumes/kubernetes.io~projected/kube-api-access-9v2jm DeviceMajor:0 DeviceMinor:1080 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run/containers/storage/overlay-containers/8f7395682c642b2e4f7ba2a9b79331d0b9afd8c7d7923a7bbdfc90aaeb45a6c2/userdata/shm DeviceMajor:0 DeviceMinor:109 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/d47a1118-c12f-4234-8c0f-1a2a47fa8a4f/volumes/kubernetes.io~secret/proxy-tls DeviceMajor:0 DeviceMinor:780 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-836 DeviceMajor:0 DeviceMinor:836 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/d6226325-c4d9-497e-8d19-a71adc66c5ac/volumes/kubernetes.io~projected/kube-api-access-4j5fc DeviceMajor:0 DeviceMinor:127 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/317af639-269e-4163-8e24-fcea468b9352/volumes/kubernetes.io~projected/kube-api-access-4v66x DeviceMajor:0 DeviceMinor:782 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/be89c006-0c82-4728-9c79-210303e623dc/volumes/kubernetes.io~secret/prometheus-operator-kube-rbac-proxy-config DeviceMajor:0 DeviceMinor:1058 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/089cfabc-9d3d-4260-bb16-8b5eaf73b3fa/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:209 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-688 DeviceMajor:0 DeviceMinor:688 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-499 DeviceMajor:0 DeviceMinor:499 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-583 DeviceMajor:0 DeviceMinor:583 Capacity:214143315968 Type:vfs Inodes:104594880 
HasInodes:true} {Device:overlay_0-299 DeviceMajor:0 DeviceMinor:299 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-275 DeviceMajor:0 DeviceMinor:275 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-269 DeviceMajor:0 DeviceMinor:269 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-416 DeviceMajor:0 DeviceMinor:416 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/dev/vda4 DeviceMajor:252 DeviceMinor:4 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true}] DiskMap:map[252:0:{Name:vda Major:252 Minor:0 Size:214748364800 Scheduler:none} 252:16:{Name:vdb Major:252 Minor:16 Size:21474836480 Scheduler:none} 252:32:{Name:vdc Major:252 Minor:32 Size:21474836480 Scheduler:none} 252:48:{Name:vdd Major:252 Minor:48 Size:21474836480 Scheduler:none} 252:64:{Name:vde Major:252 Minor:64 Size:21474836480 Scheduler:none}] NetworkDevices:[{Name:02065d5b43e51a3 MacAddress:66:6e:d0:e3:fb:6a Speed:10000 Mtu:8900} {Name:02db34ef289b2a2 MacAddress:d2:69:ef:04:61:8f Speed:10000 Mtu:8900} {Name:06b5a40ca00c068 MacAddress:2e:a5:92:fb:da:73 Speed:10000 Mtu:8900} {Name:0b83ebe9d6eac21 MacAddress:22:5e:55:b0:38:bf Speed:10000 Mtu:8900} {Name:11e56b22c0ca61c MacAddress:3e:f6:0e:38:41:5b Speed:10000 Mtu:8900} {Name:173b3354a692a16 MacAddress:6e:d1:06:66:38:5f Speed:10000 Mtu:8900} {Name:1f2ba041c75397f MacAddress:1e:bc:fc:d4:19:e3 Speed:10000 Mtu:8900} {Name:1f8e6ca57afc2c7 MacAddress:6a:e6:36:bd:b6:e5 Speed:10000 Mtu:8900} {Name:2e65250ae5f9823 MacAddress:3e:0b:87:93:9b:82 Speed:10000 Mtu:8900} {Name:31c103b44a61043 MacAddress:52:4f:1c:54:1a:33 Speed:10000 Mtu:8900} {Name:327b75ff7d2f2b2 MacAddress:fe:7e:be:bf:e5:0c Speed:10000 Mtu:8900} {Name:33eb1753d1610b8 MacAddress:12:ef:bd:ab:c3:60 Speed:10000 Mtu:8900} {Name:36730131d5c0905 MacAddress:5a:75:27:cb:21:c6 Speed:10000 Mtu:8900} {Name:37d33fead87bedc MacAddress:26:b6:68:d5:ef:db Speed:10000 Mtu:8900} 
{Name:3f65f8f16227883 MacAddress:3e:4c:e3:ef:fb:fd Speed:10000 Mtu:8900} {Name:4100d060137e463 MacAddress:2a:9f:c3:7e:23:30 Speed:10000 Mtu:8900} {Name:4923fdf0bf7675f MacAddress:f6:07:5a:43:98:a6 Speed:10000 Mtu:8900} {Name:4a2e539e0bcc343 MacAddress:86:29:86:d4:d9:23 Speed:10000 Mtu:8900} {Name:5392be7ab4e8fd6 MacAddress:f6:34:61:bb:fd:c9 Speed:10000 Mtu:8900} {Name:5c9d522ae739e22 MacAddress:7a:0c:86:a9:19:95 Speed:10000 Mtu:8900} {Name:5d54ffc470f8971 MacAddress:f2:68:53:2a:71:28 Speed:10000 Mtu:8900} {Name:5e03538f7a196b4 MacAddress:92:c4:5f:6f:78:e4 Speed:10000 Mtu:8900} {Name:5f581d90a0a82a9 MacAddress:8e:68:1e:e2:85:44 Speed:10000 Mtu:8900} {Name:69fae5f2ef7c057 MacAddress:7e:a4:85:02:e9:18 Speed:10000 Mtu:8900} {Name:6b92704fbc97116 MacAddress:76:10:8d:fc:1f:21 Speed:10000 Mtu:8900} {Name:754a980682251c2 MacAddress:7a:52:f5:8d:1b:8f Speed:10000 Mtu:8900} {Name:842bc57e6bbe562 MacAddress:32:a0:fa:9e:a7:0b Speed:10000 Mtu:8900} {Name:8bb2d1af6db83f3 MacAddress:ba:3a:3d:0c:04:22 Speed:10000 Mtu:8900} {Name:8f2520a5a8a4d59 MacAddress:46:80:2b:15:ab:26 Speed:10000 Mtu:8900} {Name:914d6236fd68850 MacAddress:16:51:de:f0:40:34 Speed:10000 Mtu:8900} {Name:95a0596df1becc3 MacAddress:3a:fe:76:d5:0a:41 Speed:10000 Mtu:8900} {Name:970eeb7c4ac9369 MacAddress:f2:25:88:6b:67:59 Speed:10000 Mtu:8900} {Name:9f0c754e60ef175 MacAddress:52:8a:b2:9e:c1:b6 Speed:10000 Mtu:8900} {Name:a0efa1bf3eba5a2 MacAddress:56:87:4e:e2:e3:18 Speed:10000 Mtu:8900} {Name:a3ffdbf0e263655 MacAddress:56:d3:f4:70:4a:10 Speed:10000 Mtu:8900} {Name:a45917498662523 MacAddress:de:9d:46:04:a4:64 Speed:10000 Mtu:8900} {Name:aa04b90f16ed80e MacAddress:aa:34:b1:fb:d4:2b Speed:10000 Mtu:8900} {Name:abc95f00c9e0c52 MacAddress:e6:20:8d:42:c0:49 Speed:10000 Mtu:8900} {Name:ac4a42c40018650 MacAddress:c2:4e:8e:f4:9f:4a Speed:10000 Mtu:8900} {Name:aff2f4bdb8410e5 MacAddress:32:19:25:d2:0d:4f Speed:10000 Mtu:8900} {Name:b699b1831b9f525 MacAddress:16:3b:54:11:38:5e Speed:10000 Mtu:8900} {Name:b6b12c0272b98e1 
MacAddress:02:7b:5d:87:32:71 Speed:10000 Mtu:8900} {Name:bad7583a8d87a54 MacAddress:ae:80:27:37:da:25 Speed:10000 Mtu:8900} {Name:br-ex MacAddress:fa:16:9e:81:f6:10 Speed:0 Mtu:9000} {Name:br-int MacAddress:b2:18:3f:c3:47:7c Speed:0 Mtu:8900} {Name:c2b846fb7ae8217 MacAddress:9a:02:13:b1:be:5a Speed:10000 Mtu:8900} {Name:c4aea2db722bdac MacAddress:86:2a:dc:c9:3d:a5 Speed:10000 Mtu:8900} {Name:c7cc0f12daf98f8 MacAddress:76:85:34:dd:e7:70 Speed:10000 Mtu:8900} {Name:c877a6a31ad16c9 MacAddress:3a:4a:d0:c8:7e:5a Speed:10000 Mtu:8900} {Name:c947bd9963641af MacAddress:22:e1:dc:59:f8:5c Speed:10000 Mtu:8900} {Name:ca4392c691682c0 MacAddress:36:9b:1b:e4:e9:6e Speed:10000 Mtu:8900} {Name:caf607baad46071 MacAddress:6e:2c:5d:0f:46:a3 Speed:10000 Mtu:8900} {Name:d6d4028b66b0535 MacAddress:d6:2f:28:4f:b9:83 Speed:10000 Mtu:8900} {Name:e536f971c5136f6 MacAddress:d6:06:2b:f2:9f:5a Speed:10000 Mtu:8900} {Name:e631d83a1a86fd2 MacAddress:0e:70:33:2f:43:e0 Speed:10000 Mtu:8900} {Name:e7d5143ee528d1b MacAddress:ca:b8:6a:32:5c:7f Speed:10000 Mtu:8900} {Name:eth0 MacAddress:fa:16:9e:81:f6:10 Speed:-1 Mtu:9000} {Name:eth1 MacAddress:fa:16:3e:68:13:a8 Speed:-1 Mtu:9000} {Name:eth2 MacAddress:fa:16:3e:45:8d:c9 Speed:-1 Mtu:9000} {Name:f1cb9ab9a282ce9 MacAddress:3e:63:55:f4:78:6a Speed:10000 Mtu:8900} {Name:f7b194f18885cd8 MacAddress:0a:92:a1:38:73:95 Speed:10000 Mtu:8900} {Name:ovn-k8s-mp0 MacAddress:0a:58:0a:80:00:02 Speed:0 Mtu:8900} {Name:ovs-system MacAddress:0e:62:76:22:8d:d1 Speed:0 Mtu:1500}] Topology:[{Id:0 Memory:33654128640 HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] Cores:[{Id:0 Threads:[0] Caches:[{Id:0 Size:32768 Type:Data Level:1} {Id:0 Size:32768 Type:Instruction Level:1} {Id:0 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:0 Size:16777216 Type:Unified Level:3}] SocketID:0 BookID: DrawerID:} {Id:0 Threads:[1] Caches:[{Id:1 Size:32768 Type:Data Level:1} {Id:1 Size:32768 Type:Instruction Level:1} {Id:1 Size:524288 Type:Unified Level:2}] 
UncoreCaches:[{Id:1 Size:16777216 Type:Unified Level:3}] SocketID:1 BookID: DrawerID:} {Id:0 Threads:[10] Caches:[{Id:10 Size:32768 Type:Data Level:1} {Id:10 Size:32768 Type:Instruction Level:1} {Id:10 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:10 Size:16777216 Type:Unified Level:3}] SocketID:10 BookID: DrawerID:} {Id:0 Threads:[11] Caches:[{Id:11 Size:32768 Type:Data Level:1} {Id:11 Size:32768 Type:Instruction Level:1} {Id:11 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:11 Size:16777216 Type:Unified Level:3}] SocketID:11 BookID: DrawerID:} {Id:0 Threads:[2] Caches:[{Id:2 Size:32768 Type:Data Level:1} {Id:2 Size:32768 Type:Instruction Level:1} {Id:2 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:2 Size:16777216 Type:Unified Level:3}] SocketID:2 BookID: DrawerID:} {Id:0 Threads:[3] Caches:[{Id:3 Size:32768 Type:Data Level:1} {Id:3 Size:32768 Type:Instruction Level:1} {Id:3 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:3 Size:16777216 Type:Unified Level:3}] SocketID:3 BookID: DrawerID:} {Id:0 Threads:[4] Caches:[{Id:4 Size:32768 Type:Data Level:1} {Id:4 Size:32768 Type:Instruction Level:1} {Id:4 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:4 Size:16777216 Type:Unified Level:3}] SocketID:4 BookID: DrawerID:} {Id:0 Threads:[5] Caches:[{Id:5 Size:32768 Type:Data Level:1} {Id:5 Size:32768 Type:Instruction Level:1} {Id:5 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:5 Size:16777216 Type:Unified Level:3}] SocketID:5 BookID: DrawerID:} {Id:0 Threads:[6] Caches:[{Id:6 Size:32768 Type:Data Level:1} {Id:6 Size:32768 Type:Instruction Level:1} {Id:6 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:6 Size:16777216 Type:Unified Level:3}] SocketID:6 BookID: DrawerID:} {Id:0 Threads:[7] Caches:[{Id:7 Size:32768 Type:Data Level:1} {Id:7 Size:32768 Type:Instruction Level:1} {Id:7 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:7 Size:16777216 Type:Unified Level:3}] SocketID:7 BookID: DrawerID:} {Id:0 Threads:[8] Caches:[{Id:8 
Size:32768 Type:Data Level:1} {Id:8 Size:32768 Type:Instruction Level:1} {Id:8 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:8 Size:16777216 Type:Unified Level:3}] SocketID:8 BookID: DrawerID:} {Id:0 Threads:[9] Caches:[{Id:9 Size:32768 Type:Data Level:1} {Id:9 Size:32768 Type:Instruction Level:1} {Id:9 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:9 Size:16777216 Type:Unified Level:3}] SocketID:9 BookID: DrawerID:}] Caches:[] Distances:[10]}] CloudProvider:Unknown InstanceType:Unknown InstanceID:None} Mar 13 12:53:46.605464 master-0 kubenswrapper[28149]: I0313 12:53:46.604325 28149 manager_no_libpfm.go:29] cAdvisor is build without cgo and/or libpfm support. Perf event counters are not available. Mar 13 12:53:46.605464 master-0 kubenswrapper[28149]: I0313 12:53:46.604399 28149 manager.go:233] Version: {KernelVersion:5.14.0-427.111.1.el9_4.x86_64 ContainerOsVersion:Red Hat Enterprise Linux CoreOS 418.94.202602172219-0 DockerVersion: DockerAPIVersion: CadvisorVersion: CadvisorRevision:} Mar 13 12:53:46.605464 master-0 kubenswrapper[28149]: I0313 12:53:46.604719 28149 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Mar 13 12:53:46.605464 master-0 kubenswrapper[28149]: I0313 12:53:46.604871 28149 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Mar 13 12:53:46.605464 master-0 kubenswrapper[28149]: I0313 12:53:46.604900 28149 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" 
nodeConfig={"NodeName":"master-0","RuntimeCgroupsName":"/system.slice/crio.service","SystemCgroupsName":"/system.slice","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":true,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":{"cpu":"500m","ephemeral-storage":"1Gi","memory":"1Gi"},"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":4096,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Mar 13 12:53:46.605464 master-0 kubenswrapper[28149]: I0313 12:53:46.605199 28149 topology_manager.go:138] "Creating topology manager with none policy" Mar 13 12:53:46.605464 master-0 kubenswrapper[28149]: I0313 12:53:46.605214 28149 container_manager_linux.go:303] "Creating device plugin manager" Mar 13 12:53:46.605464 master-0 kubenswrapper[28149]: I0313 12:53:46.605223 28149 
manager.go:142] "Creating Device Plugin manager" path="/var/lib/kubelet/device-plugins/kubelet.sock" Mar 13 12:53:46.605464 master-0 kubenswrapper[28149]: I0313 12:53:46.605253 28149 server.go:66] "Creating device plugin registration server" version="v1beta1" socket="/var/lib/kubelet/device-plugins/kubelet.sock" Mar 13 12:53:46.605464 master-0 kubenswrapper[28149]: I0313 12:53:46.605293 28149 state_mem.go:36] "Initialized new in-memory state store" Mar 13 12:53:46.605464 master-0 kubenswrapper[28149]: I0313 12:53:46.605376 28149 server.go:1245] "Using root directory" path="/var/lib/kubelet" Mar 13 12:53:46.605464 master-0 kubenswrapper[28149]: I0313 12:53:46.605439 28149 kubelet.go:418] "Attempting to sync node with API server" Mar 13 12:53:46.605464 master-0 kubenswrapper[28149]: I0313 12:53:46.605451 28149 kubelet.go:313] "Adding static pod path" path="/etc/kubernetes/manifests" Mar 13 12:53:46.605464 master-0 kubenswrapper[28149]: I0313 12:53:46.605468 28149 file.go:69] "Watching path" path="/etc/kubernetes/manifests" Mar 13 12:53:46.606433 master-0 kubenswrapper[28149]: I0313 12:53:46.605500 28149 kubelet.go:324] "Adding apiserver pod source" Mar 13 12:53:46.606433 master-0 kubenswrapper[28149]: I0313 12:53:46.605530 28149 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Mar 13 12:53:46.611176 master-0 kubenswrapper[28149]: I0313 12:53:46.608283 28149 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="cri-o" version="1.31.13-8.rhaos4.18.gitd78977c.el9" apiVersion="v1" Mar 13 12:53:46.611176 master-0 kubenswrapper[28149]: I0313 12:53:46.608783 28149 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-server-current.pem". 
Mar 13 12:53:46.614947 master-0 kubenswrapper[28149]: I0313 12:53:46.611659 28149 kubelet.go:854] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Mar 13 12:53:46.614947 master-0 kubenswrapper[28149]: I0313 12:53:46.611923 28149 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/portworx-volume" Mar 13 12:53:46.614947 master-0 kubenswrapper[28149]: I0313 12:53:46.611945 28149 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/empty-dir" Mar 13 12:53:46.614947 master-0 kubenswrapper[28149]: I0313 12:53:46.611954 28149 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/git-repo" Mar 13 12:53:46.614947 master-0 kubenswrapper[28149]: I0313 12:53:46.611968 28149 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/host-path" Mar 13 12:53:46.614947 master-0 kubenswrapper[28149]: I0313 12:53:46.611977 28149 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/nfs" Mar 13 12:53:46.614947 master-0 kubenswrapper[28149]: I0313 12:53:46.611985 28149 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/secret" Mar 13 12:53:46.614947 master-0 kubenswrapper[28149]: I0313 12:53:46.611994 28149 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/iscsi" Mar 13 12:53:46.614947 master-0 kubenswrapper[28149]: I0313 12:53:46.612002 28149 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/downward-api" Mar 13 12:53:46.614947 master-0 kubenswrapper[28149]: I0313 12:53:46.612012 28149 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/fc" Mar 13 12:53:46.614947 master-0 kubenswrapper[28149]: I0313 12:53:46.612021 28149 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/configmap" Mar 13 12:53:46.614947 master-0 kubenswrapper[28149]: I0313 12:53:46.612037 28149 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/projected" Mar 13 12:53:46.614947 master-0 kubenswrapper[28149]: I0313 12:53:46.612058 28149 
plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/local-volume" Mar 13 12:53:46.614947 master-0 kubenswrapper[28149]: I0313 12:53:46.612301 28149 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/csi" Mar 13 12:53:46.614947 master-0 kubenswrapper[28149]: I0313 12:53:46.612992 28149 server.go:1280] "Started kubelet" Mar 13 12:53:46.614947 master-0 kubenswrapper[28149]: I0313 12:53:46.613471 28149 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Mar 13 12:53:46.614947 master-0 kubenswrapper[28149]: I0313 12:53:46.613577 28149 server_v1.go:47] "podresources" method="list" useActivePods=true Mar 13 12:53:46.614947 master-0 kubenswrapper[28149]: I0313 12:53:46.613864 28149 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Mar 13 12:53:46.614057 master-0 systemd[1]: Started Kubernetes Kubelet. Mar 13 12:53:46.646897 master-0 kubenswrapper[28149]: I0313 12:53:46.616530 28149 server.go:449] "Adding debug handlers to kubelet server" Mar 13 12:53:46.646897 master-0 kubenswrapper[28149]: I0313 12:53:46.617459 28149 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Mar 13 12:53:46.646897 master-0 kubenswrapper[28149]: I0313 12:53:46.629564 28149 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate rotation is enabled Mar 13 12:53:46.646897 master-0 kubenswrapper[28149]: I0313 12:53:46.629774 28149 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-03-14 12:26:40 +0000 UTC, rotation deadline is 2026-03-14 08:23:22.208834345 +0000 UTC Mar 13 12:53:46.646897 master-0 kubenswrapper[28149]: I0313 12:53:46.629801 28149 certificate_manager.go:356] kubernetes.io/kubelet-serving: Waiting 19h29m35.579035397s for next certificate rotation Mar 13 12:53:46.646897 master-0 kubenswrapper[28149]: I0313 12:53:46.629818 28149 fs_resource_analyzer.go:67] "Starting FS 
ResourceAnalyzer" Mar 13 12:53:46.646897 master-0 kubenswrapper[28149]: I0313 12:53:46.631069 28149 volume_manager.go:287] "The desired_state_of_world populator starts" Mar 13 12:53:46.646897 master-0 kubenswrapper[28149]: I0313 12:53:46.631079 28149 volume_manager.go:289] "Starting Kubelet Volume Manager" Mar 13 12:53:46.646897 master-0 kubenswrapper[28149]: E0313 12:53:46.631214 28149 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 13 12:53:46.646897 master-0 kubenswrapper[28149]: I0313 12:53:46.635187 28149 desired_state_of_world_populator.go:147] "Desired state populator starts to run" Mar 13 12:53:46.669186 master-0 kubenswrapper[28149]: I0313 12:53:46.658435 28149 factory.go:55] Registering systemd factory Mar 13 12:53:46.669186 master-0 kubenswrapper[28149]: I0313 12:53:46.658475 28149 factory.go:221] Registration of the systemd container factory successfully Mar 13 12:53:46.669186 master-0 kubenswrapper[28149]: I0313 12:53:46.661927 28149 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0da84bb7-e936-49a0-96b5-614a1305d6a4" volumeName="kubernetes.io/secret/0da84bb7-e936-49a0-96b5-614a1305d6a4-serving-cert" seLinuxMountContext="" Mar 13 12:53:46.669186 master-0 kubenswrapper[28149]: I0313 12:53:46.661975 28149 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="152689b1-5875-4a9a-bb25-bee858523168" volumeName="kubernetes.io/configmap/152689b1-5875-4a9a-bb25-bee858523168-cni-sysctl-allowlist" seLinuxMountContext="" Mar 13 12:53:46.669186 master-0 kubenswrapper[28149]: I0313 12:53:46.661990 28149 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="604456a0-4997-43bc-87ef-283a002111fe" volumeName="kubernetes.io/configmap/604456a0-4997-43bc-87ef-283a002111fe-telemetry-config" seLinuxMountContext="" Mar 13 12:53:46.669186 master-0 kubenswrapper[28149]: 
I0313 12:53:46.662002 28149 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="081a08d6-a4fd-412c-81c3-1364c36f0f15" volumeName="kubernetes.io/secret/081a08d6-a4fd-412c-81c3-1364c36f0f15-node-bootstrap-token" seLinuxMountContext="" Mar 13 12:53:46.669186 master-0 kubenswrapper[28149]: I0313 12:53:46.662014 28149 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef42b65e-2d92-46ac-baaf-30e213787781" volumeName="kubernetes.io/projected/ef42b65e-2d92-46ac-baaf-30e213787781-kube-api-access-xxjbd" seLinuxMountContext="" Mar 13 12:53:46.669186 master-0 kubenswrapper[28149]: I0313 12:53:46.662025 28149 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="747659a6-4a1e-43ed-bb8e-36da6e63b5a1" volumeName="kubernetes.io/projected/747659a6-4a1e-43ed-bb8e-36da6e63b5a1-kube-api-access-qxcvd" seLinuxMountContext="" Mar 13 12:53:46.669186 master-0 kubenswrapper[28149]: I0313 12:53:46.662036 28149 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f83e0d3e-1f73-4727-8ee3-375cbb9e36f8" volumeName="kubernetes.io/empty-dir/f83e0d3e-1f73-4727-8ee3-375cbb9e36f8-etc-tuned" seLinuxMountContext="" Mar 13 12:53:46.669186 master-0 kubenswrapper[28149]: I0313 12:53:46.662046 28149 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1f43b4e7-5cd1-46d2-a02e-0d846b2e5182" volumeName="kubernetes.io/projected/1f43b4e7-5cd1-46d2-a02e-0d846b2e5182-kube-api-access-brzd4" seLinuxMountContext="" Mar 13 12:53:46.669186 master-0 kubenswrapper[28149]: I0313 12:53:46.662060 28149 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="45925a5e-41ae-4c19-b586-3151c7677612" volumeName="kubernetes.io/secret/45925a5e-41ae-4c19-b586-3151c7677612-default-certificate" seLinuxMountContext="" Mar 13 12:53:46.669186 master-0 
kubenswrapper[28149]: I0313 12:53:46.662069 28149 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5ae41cff-0949-47f8-aae9-ae133191476d" volumeName="kubernetes.io/secret/5ae41cff-0949-47f8-aae9-ae133191476d-ovn-control-plane-metrics-cert" seLinuxMountContext="" Mar 13 12:53:46.669186 master-0 kubenswrapper[28149]: I0313 12:53:46.662078 28149 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="842251bd-238a-44ba-99fc-a356503f5d16" volumeName="kubernetes.io/empty-dir/842251bd-238a-44ba-99fc-a356503f5d16-node-exporter-textfile" seLinuxMountContext="" Mar 13 12:53:46.669186 master-0 kubenswrapper[28149]: I0313 12:53:46.662087 28149 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c4477be6-bcff-407a-8033-b005e19bf5d6" volumeName="kubernetes.io/configmap/c4477be6-bcff-407a-8033-b005e19bf5d6-etcd-serving-ca" seLinuxMountContext="" Mar 13 12:53:46.669186 master-0 kubenswrapper[28149]: I0313 12:53:46.662096 28149 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="00d8a21b-701c-4334-9dda-34c28b417f42" volumeName="kubernetes.io/secret/00d8a21b-701c-4334-9dda-34c28b417f42-cloud-controller-manager-operator-tls" seLinuxMountContext="" Mar 13 12:53:46.669186 master-0 kubenswrapper[28149]: I0313 12:53:46.662108 28149 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef42b65e-2d92-46ac-baaf-30e213787781" volumeName="kubernetes.io/configmap/ef42b65e-2d92-46ac-baaf-30e213787781-config-volume" seLinuxMountContext="" Mar 13 12:53:46.669186 master-0 kubenswrapper[28149]: I0313 12:53:46.662117 28149 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="00ebdf06-1f44-40cd-87e5-54195188b6d4" volumeName="kubernetes.io/projected/00ebdf06-1f44-40cd-87e5-54195188b6d4-kube-api-access-7rkc4" 
seLinuxMountContext="" Mar 13 12:53:46.669186 master-0 kubenswrapper[28149]: I0313 12:53:46.662126 28149 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d5f63b6b-990a-444b-a954-d718036f2f6c" volumeName="kubernetes.io/configmap/d5f63b6b-990a-444b-a954-d718036f2f6c-images" seLinuxMountContext="" Mar 13 12:53:46.669186 master-0 kubenswrapper[28149]: I0313 12:53:46.662161 28149 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="00d8a21b-701c-4334-9dda-34c28b417f42" volumeName="kubernetes.io/projected/00d8a21b-701c-4334-9dda-34c28b417f42-kube-api-access-bdxqb" seLinuxMountContext="" Mar 13 12:53:46.669186 master-0 kubenswrapper[28149]: I0313 12:53:46.662171 28149 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="00ebdf06-1f44-40cd-87e5-54195188b6d4" volumeName="kubernetes.io/empty-dir/00ebdf06-1f44-40cd-87e5-54195188b6d4-cache" seLinuxMountContext="" Mar 13 12:53:46.669186 master-0 kubenswrapper[28149]: I0313 12:53:46.662180 28149 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="034aaf8e-95df-4171-bae4-e7abe58d15f7" volumeName="kubernetes.io/secret/034aaf8e-95df-4171-bae4-e7abe58d15f7-serving-cert" seLinuxMountContext="" Mar 13 12:53:46.669186 master-0 kubenswrapper[28149]: I0313 12:53:46.662189 28149 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6a42098e-4633-456f-ace7-bd3ee3bb6707" volumeName="kubernetes.io/projected/6a42098e-4633-456f-ace7-bd3ee3bb6707-kube-api-access-7mmbc" seLinuxMountContext="" Mar 13 12:53:46.669186 master-0 kubenswrapper[28149]: I0313 12:53:46.662197 28149 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f5775266-5e58-44ed-81cb-dfe3faf38add" volumeName="kubernetes.io/configmap/f5775266-5e58-44ed-81cb-dfe3faf38add-config" seLinuxMountContext="" 
Mar 13 12:53:46.669186 master-0 kubenswrapper[28149]: I0313 12:53:46.662207 28149 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f6992fed-b472-4a2d-a376-c5d72aa846d4" volumeName="kubernetes.io/empty-dir/f6992fed-b472-4a2d-a376-c5d72aa846d4-tmpfs" seLinuxMountContext="" Mar 13 12:53:46.669186 master-0 kubenswrapper[28149]: I0313 12:53:46.662216 28149 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="08e2bc8e-ca80-454c-81dc-211d122e32e0" volumeName="kubernetes.io/projected/08e2bc8e-ca80-454c-81dc-211d122e32e0-kube-api-access-xstz5" seLinuxMountContext="" Mar 13 12:53:46.669186 master-0 kubenswrapper[28149]: I0313 12:53:46.662225 28149 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="32fe77f9-082d-491c-b3d0-9c10feaf4a8e" volumeName="kubernetes.io/projected/32fe77f9-082d-491c-b3d0-9c10feaf4a8e-kube-api-access-6x492" seLinuxMountContext="" Mar 13 12:53:46.669186 master-0 kubenswrapper[28149]: I0313 12:53:46.662234 28149 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d11f8baa-6e8e-4ac0-9b23-1c44efd0ab2a" volumeName="kubernetes.io/projected/d11f8baa-6e8e-4ac0-9b23-1c44efd0ab2a-kube-api-access-m4tnq" seLinuxMountContext="" Mar 13 12:53:46.669186 master-0 kubenswrapper[28149]: I0313 12:53:46.662242 28149 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c4477be6-bcff-407a-8033-b005e19bf5d6" volumeName="kubernetes.io/secret/c4477be6-bcff-407a-8033-b005e19bf5d6-serving-cert" seLinuxMountContext="" Mar 13 12:53:46.669186 master-0 kubenswrapper[28149]: I0313 12:53:46.662255 28149 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d47a1118-c12f-4234-8c0f-1a2a47fa8a4f" volumeName="kubernetes.io/secret/d47a1118-c12f-4234-8c0f-1a2a47fa8a4f-proxy-tls" seLinuxMountContext="" Mar 13 
12:53:46.669186 master-0 kubenswrapper[28149]: I0313 12:53:46.662267 28149 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="317af639-269e-4163-8e24-fcea468b9352" volumeName="kubernetes.io/secret/317af639-269e-4163-8e24-fcea468b9352-cluster-baremetal-operator-tls" seLinuxMountContext="" Mar 13 12:53:46.669186 master-0 kubenswrapper[28149]: I0313 12:53:46.662275 28149 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="842251bd-238a-44ba-99fc-a356503f5d16" volumeName="kubernetes.io/projected/842251bd-238a-44ba-99fc-a356503f5d16-kube-api-access-9v2jm" seLinuxMountContext="" Mar 13 12:53:46.669186 master-0 kubenswrapper[28149]: I0313 12:53:46.662284 28149 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="915aabfe-1071-4bfc-b291-424304dfe7d8" volumeName="kubernetes.io/projected/915aabfe-1071-4bfc-b291-424304dfe7d8-kube-api-access-n85n6" seLinuxMountContext="" Mar 13 12:53:46.669186 master-0 kubenswrapper[28149]: I0313 12:53:46.662292 28149 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="15b592d6-3c48-45d4-9172-d28632ae8995" volumeName="kubernetes.io/secret/15b592d6-3c48-45d4-9172-d28632ae8995-serving-cert" seLinuxMountContext="" Mar 13 12:53:46.669186 master-0 kubenswrapper[28149]: I0313 12:53:46.662301 28149 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3020d236-03e0-4916-97dd-f1085632ca43" volumeName="kubernetes.io/secret/3020d236-03e0-4916-97dd-f1085632ca43-node-tuning-operator-tls" seLinuxMountContext="" Mar 13 12:53:46.669186 master-0 kubenswrapper[28149]: I0313 12:53:46.662310 28149 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c0f3e81c-f61d-430a-98e8-82e3b283fc73" volumeName="kubernetes.io/secret/c0f3e81c-f61d-430a-98e8-82e3b283fc73-signing-key" 
seLinuxMountContext="" Mar 13 12:53:46.669186 master-0 kubenswrapper[28149]: I0313 12:53:46.662319 28149 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c4477be6-bcff-407a-8033-b005e19bf5d6" volumeName="kubernetes.io/secret/c4477be6-bcff-407a-8033-b005e19bf5d6-encryption-config" seLinuxMountContext="" Mar 13 12:53:46.669186 master-0 kubenswrapper[28149]: I0313 12:53:46.662329 28149 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="32fe77f9-082d-491c-b3d0-9c10feaf4a8e" volumeName="kubernetes.io/empty-dir/32fe77f9-082d-491c-b3d0-9c10feaf4a8e-utilities" seLinuxMountContext="" Mar 13 12:53:46.669186 master-0 kubenswrapper[28149]: I0313 12:53:46.662338 28149 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c4477be6-bcff-407a-8033-b005e19bf5d6" volumeName="kubernetes.io/projected/c4477be6-bcff-407a-8033-b005e19bf5d6-kube-api-access-d4q4x" seLinuxMountContext="" Mar 13 12:53:46.669186 master-0 kubenswrapper[28149]: I0313 12:53:46.662347 28149 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d11f8baa-6e8e-4ac0-9b23-1c44efd0ab2a" volumeName="kubernetes.io/configmap/d11f8baa-6e8e-4ac0-9b23-1c44efd0ab2a-trusted-ca-bundle" seLinuxMountContext="" Mar 13 12:53:46.669186 master-0 kubenswrapper[28149]: I0313 12:53:46.662356 28149 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="034aaf8e-95df-4171-bae4-e7abe58d15f7" volumeName="kubernetes.io/projected/034aaf8e-95df-4171-bae4-e7abe58d15f7-kube-api-access-5w5r2" seLinuxMountContext="" Mar 13 12:53:46.669186 master-0 kubenswrapper[28149]: I0313 12:53:46.662365 28149 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="13f32761-b386-4f93-b3c0-b16ea53d338a" 
volumeName="kubernetes.io/projected/13f32761-b386-4f93-b3c0-b16ea53d338a-kube-api-access-m2p67" seLinuxMountContext="" Mar 13 12:53:46.669186 master-0 kubenswrapper[28149]: I0313 12:53:46.662375 28149 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3d653e1a-5903-4a02-9357-df145f028c0d" volumeName="kubernetes.io/secret/3d653e1a-5903-4a02-9357-df145f028c0d-package-server-manager-serving-cert" seLinuxMountContext="" Mar 13 12:53:46.669186 master-0 kubenswrapper[28149]: I0313 12:53:46.662385 28149 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d3d998ee-b26f-4e30-83bc-f94f8c68060a" volumeName="kubernetes.io/secret/d3d998ee-b26f-4e30-83bc-f94f8c68060a-marketplace-operator-metrics" seLinuxMountContext="" Mar 13 12:53:46.669186 master-0 kubenswrapper[28149]: I0313 12:53:46.662393 28149 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="089cfabc-9d3d-4260-bb16-8b5eaf73b3fa" volumeName="kubernetes.io/secret/089cfabc-9d3d-4260-bb16-8b5eaf73b3fa-serving-cert" seLinuxMountContext="" Mar 13 12:53:46.669186 master-0 kubenswrapper[28149]: I0313 12:53:46.662403 28149 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1081e565-b7d8-4b6e-9d41-5db36cfe094c" volumeName="kubernetes.io/secret/1081e565-b7d8-4b6e-9d41-5db36cfe094c-openshift-state-metrics-kube-rbac-proxy-config" seLinuxMountContext="" Mar 13 12:53:46.669186 master-0 kubenswrapper[28149]: I0313 12:53:46.662411 28149 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="152689b1-5875-4a9a-bb25-bee858523168" volumeName="kubernetes.io/configmap/152689b1-5875-4a9a-bb25-bee858523168-cni-binary-copy" seLinuxMountContext="" Mar 13 12:53:46.669186 master-0 kubenswrapper[28149]: I0313 12:53:46.662421 28149 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" 
pod="" podName="1cf388b6-e4a7-41db-a350-1b503214efd3" volumeName="kubernetes.io/empty-dir/1cf388b6-e4a7-41db-a350-1b503214efd3-catalog-content" seLinuxMountContext="" Mar 13 12:53:46.669186 master-0 kubenswrapper[28149]: I0313 12:53:46.662430 28149 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d6226325-c4d9-497e-8d19-a71adc66c5ac" volumeName="kubernetes.io/configmap/d6226325-c4d9-497e-8d19-a71adc66c5ac-ovnkube-script-lib" seLinuxMountContext="" Mar 13 12:53:46.669186 master-0 kubenswrapper[28149]: I0313 12:53:46.662439 28149 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="00ebdf06-1f44-40cd-87e5-54195188b6d4" volumeName="kubernetes.io/secret/00ebdf06-1f44-40cd-87e5-54195188b6d4-catalogserver-certs" seLinuxMountContext="" Mar 13 12:53:46.669186 master-0 kubenswrapper[28149]: I0313 12:53:46.662448 28149 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="15b592d6-3c48-45d4-9172-d28632ae8995" volumeName="kubernetes.io/configmap/15b592d6-3c48-45d4-9172-d28632ae8995-etcd-ca" seLinuxMountContext="" Mar 13 12:53:46.669186 master-0 kubenswrapper[28149]: I0313 12:53:46.662457 28149 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4dd0fc2f-f2ee-4447-a747-04a178288cf0" volumeName="kubernetes.io/secret/4dd0fc2f-f2ee-4447-a747-04a178288cf0-metrics-tls" seLinuxMountContext="" Mar 13 12:53:46.669186 master-0 kubenswrapper[28149]: I0313 12:53:46.662465 28149 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5e4f10ca-6466-4ac0-aeb7-325e40473e04" volumeName="kubernetes.io/configmap/5e4f10ca-6466-4ac0-aeb7-325e40473e04-metrics-client-ca" seLinuxMountContext="" Mar 13 12:53:46.669186 master-0 kubenswrapper[28149]: I0313 12:53:46.662474 28149 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="5e4f10ca-6466-4ac0-aeb7-325e40473e04" volumeName="kubernetes.io/empty-dir/5e4f10ca-6466-4ac0-aeb7-325e40473e04-volume-directive-shadow" seLinuxMountContext="" Mar 13 12:53:46.669186 master-0 kubenswrapper[28149]: I0313 12:53:46.662483 28149 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e0ce4c51-2b9f-410f-93e5-9c2ff718dd71" volumeName="kubernetes.io/projected/e0ce4c51-2b9f-410f-93e5-9c2ff718dd71-kube-api-access-cscql" seLinuxMountContext="" Mar 13 12:53:46.669186 master-0 kubenswrapper[28149]: I0313 12:53:46.662494 28149 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d7d67915-d31e-46dc-bb2e-1a6f689dd875" volumeName="kubernetes.io/projected/d7d67915-d31e-46dc-bb2e-1a6f689dd875-kube-api-access-69hws" seLinuxMountContext="" Mar 13 12:53:46.669186 master-0 kubenswrapper[28149]: I0313 12:53:46.662504 28149 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="50be3c2b-284b-4f60-b4ed-2cc7b4e528fa" volumeName="kubernetes.io/secret/50be3c2b-284b-4f60-b4ed-2cc7b4e528fa-proxy-tls" seLinuxMountContext="" Mar 13 12:53:46.669186 master-0 kubenswrapper[28149]: I0313 12:53:46.662520 28149 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8c62b15f-001a-4b64-b85f-348aefde5d1b" volumeName="kubernetes.io/secret/8c62b15f-001a-4b64-b85f-348aefde5d1b-serving-cert" seLinuxMountContext="" Mar 13 12:53:46.669186 master-0 kubenswrapper[28149]: I0313 12:53:46.662535 28149 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="29b6aa89-0416-4595-9deb-10b290521d86" volumeName="kubernetes.io/secret/29b6aa89-0416-4595-9deb-10b290521d86-metrics-certs" seLinuxMountContext="" Mar 13 12:53:46.669186 master-0 kubenswrapper[28149]: I0313 12:53:46.662551 28149 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="d7d67915-d31e-46dc-bb2e-1a6f689dd875" volumeName="kubernetes.io/secret/d7d67915-d31e-46dc-bb2e-1a6f689dd875-cluster-storage-operator-serving-cert" seLinuxMountContext=""
Mar 13 12:53:46.669186 master-0 kubenswrapper[28149]: I0313 12:53:46.662561 28149 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0da84bb7-e936-49a0-96b5-614a1305d6a4" volumeName="kubernetes.io/projected/0da84bb7-e936-49a0-96b5-614a1305d6a4-kube-api-access" seLinuxMountContext=""
Mar 13 12:53:46.669186 master-0 kubenswrapper[28149]: I0313 12:53:46.662574 28149 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bcf05594-4c10-4b54-a47c-d55e323f1f87" volumeName="kubernetes.io/projected/bcf05594-4c10-4b54-a47c-d55e323f1f87-kube-api-access-j4hd6" seLinuxMountContext=""
Mar 13 12:53:46.669186 master-0 kubenswrapper[28149]: I0313 12:53:46.662585 28149 factory.go:153] Registering CRI-O factory
Mar 13 12:53:46.669186 master-0 kubenswrapper[28149]: I0313 12:53:46.662610 28149 factory.go:221] Registration of the crio container factory successfully
Mar 13 12:53:46.669186 master-0 kubenswrapper[28149]: I0313 12:53:46.662587 28149 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1081e565-b7d8-4b6e-9d41-5db36cfe094c" volumeName="kubernetes.io/configmap/1081e565-b7d8-4b6e-9d41-5db36cfe094c-metrics-client-ca" seLinuxMountContext=""
Mar 13 12:53:46.669186 master-0 kubenswrapper[28149]: I0313 12:53:46.662699 28149 factory.go:219] Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot unix dial containerd api service: dial unix /run/containerd/containerd.sock: connect: no such file or directory
Mar 13 12:53:46.669186 master-0 kubenswrapper[28149]: I0313 12:53:46.662724 28149 factory.go:103] Registering Raw factory
Mar 13 12:53:46.669186 master-0 kubenswrapper[28149]: I0313 12:53:46.662745 28149 manager.go:1196] Started watching for new ooms in manager
Mar 13 12:53:46.669186 master-0 kubenswrapper[28149]: I0313 12:53:46.662757 28149 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f0803181-4e37-43fa-8ddc-9c76d3f61817" volumeName="kubernetes.io/secret/f0803181-4e37-43fa-8ddc-9c76d3f61817-serving-cert" seLinuxMountContext=""
Mar 13 12:53:46.669186 master-0 kubenswrapper[28149]: I0313 12:53:46.662798 28149 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="842251bd-238a-44ba-99fc-a356503f5d16" volumeName="kubernetes.io/secret/842251bd-238a-44ba-99fc-a356503f5d16-node-exporter-tls" seLinuxMountContext=""
Mar 13 12:53:46.669186 master-0 kubenswrapper[28149]: I0313 12:53:46.662810 28149 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a454234a-6c8e-4916-81e8-c9e66cec9d31" volumeName="kubernetes.io/projected/a454234a-6c8e-4916-81e8-c9e66cec9d31-kube-api-access-kn8f2" seLinuxMountContext=""
Mar 13 12:53:46.669186 master-0 kubenswrapper[28149]: I0313 12:53:46.662822 28149 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c4477be6-bcff-407a-8033-b005e19bf5d6" volumeName="kubernetes.io/configmap/c4477be6-bcff-407a-8033-b005e19bf5d6-audit-policies" seLinuxMountContext=""
Mar 13 12:53:46.669186 master-0 kubenswrapper[28149]: I0313 12:53:46.662834 28149 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="081a08d6-a4fd-412c-81c3-1364c36f0f15" volumeName="kubernetes.io/projected/081a08d6-a4fd-412c-81c3-1364c36f0f15-kube-api-access-mz927" seLinuxMountContext=""
Mar 13 12:53:46.669186 master-0 kubenswrapper[28149]: I0313 12:53:46.662844 28149 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="10944f9c-8ce9-44e6-9c36-a0ea19d8cae3" volumeName="kubernetes.io/projected/10944f9c-8ce9-44e6-9c36-a0ea19d8cae3-kube-api-access-zbk4f" seLinuxMountContext=""
Mar 13 12:53:46.669186 master-0 kubenswrapper[28149]: I0313 12:53:46.662856 28149 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1f43b4e7-5cd1-46d2-a02e-0d846b2e5182" volumeName="kubernetes.io/secret/1f43b4e7-5cd1-46d2-a02e-0d846b2e5182-webhook-cert" seLinuxMountContext=""
Mar 13 12:53:46.669186 master-0 kubenswrapper[28149]: I0313 12:53:46.662867 28149 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="50a2046b-092b-434c-92a2-579f4462c4fb" volumeName="kubernetes.io/configmap/50a2046b-092b-434c-92a2-579f4462c4fb-service-ca-bundle" seLinuxMountContext=""
Mar 13 12:53:46.669186 master-0 kubenswrapper[28149]: I0313 12:53:46.662879 28149 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d5f63b6b-990a-444b-a954-d718036f2f6c" volumeName="kubernetes.io/configmap/d5f63b6b-990a-444b-a954-d718036f2f6c-config" seLinuxMountContext=""
Mar 13 12:53:46.669186 master-0 kubenswrapper[28149]: I0313 12:53:46.662889 28149 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="2f48243b-6b05-4efa-8420-58a4419622bf" volumeName="kubernetes.io/secret/2f48243b-6b05-4efa-8420-58a4419622bf-serving-cert" seLinuxMountContext=""
Mar 13 12:53:46.669186 master-0 kubenswrapper[28149]: I0313 12:53:46.662899 28149 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="2f79578c-bbfb-4968-893a-730deb4c01f9" volumeName="kubernetes.io/secret/2f79578c-bbfb-4968-893a-730deb4c01f9-metrics-tls" seLinuxMountContext=""
Mar 13 12:53:46.669186 master-0 kubenswrapper[28149]: I0313 12:53:46.662909 28149 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f5775266-5e58-44ed-81cb-dfe3faf38add" volumeName="kubernetes.io/secret/f5775266-5e58-44ed-81cb-dfe3faf38add-serving-cert" seLinuxMountContext=""
Mar 13 12:53:46.669186 master-0 kubenswrapper[28149]: I0313 12:53:46.662922 28149 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fc192c03-5aec-4507-a702-56bf98c96e9c" volumeName="kubernetes.io/secret/fc192c03-5aec-4507-a702-56bf98c96e9c-secret-metrics-server-tls" seLinuxMountContext=""
Mar 13 12:53:46.669186 master-0 kubenswrapper[28149]: I0313 12:53:46.662933 28149 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="676b054a-e76f-425d-a6ff-3f1bea8b523e" volumeName="kubernetes.io/configmap/676b054a-e76f-425d-a6ff-3f1bea8b523e-service-ca" seLinuxMountContext=""
Mar 13 12:53:46.669186 master-0 kubenswrapper[28149]: I0313 12:53:46.662943 28149 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d39ee5d7-840e-4481-b0b9-baf34da2c7b1" volumeName="kubernetes.io/secret/d39ee5d7-840e-4481-b0b9-baf34da2c7b1-samples-operator-tls" seLinuxMountContext=""
Mar 13 12:53:46.669186 master-0 kubenswrapper[28149]: I0313 12:53:46.662952 28149 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="2f79578c-bbfb-4968-893a-730deb4c01f9" volumeName="kubernetes.io/configmap/2f79578c-bbfb-4968-893a-730deb4c01f9-trusted-ca" seLinuxMountContext=""
Mar 13 12:53:46.669186 master-0 kubenswrapper[28149]: I0313 12:53:46.662963 28149 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="32fe77f9-082d-491c-b3d0-9c10feaf4a8e" volumeName="kubernetes.io/empty-dir/32fe77f9-082d-491c-b3d0-9c10feaf4a8e-catalog-content" seLinuxMountContext=""
Mar 13 12:53:46.669186 master-0 kubenswrapper[28149]: I0313 12:53:46.662975 28149 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f0803181-4e37-43fa-8ddc-9c76d3f61817" volumeName="kubernetes.io/empty-dir/f0803181-4e37-43fa-8ddc-9c76d3f61817-available-featuregates" seLinuxMountContext=""
Mar 13 12:53:46.669186 master-0 kubenswrapper[28149]: I0313 12:53:46.662984 28149 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f6992fed-b472-4a2d-a376-c5d72aa846d4" volumeName="kubernetes.io/projected/f6992fed-b472-4a2d-a376-c5d72aa846d4-kube-api-access-4n75n" seLinuxMountContext=""
Mar 13 12:53:46.669186 master-0 kubenswrapper[28149]: I0313 12:53:46.662995 28149 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3020d236-03e0-4916-97dd-f1085632ca43" volumeName="kubernetes.io/projected/3020d236-03e0-4916-97dd-f1085632ca43-kube-api-access-c24hd" seLinuxMountContext=""
Mar 13 12:53:46.669186 master-0 kubenswrapper[28149]: I0313 12:53:46.663005 28149 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5e4f10ca-6466-4ac0-aeb7-325e40473e04" volumeName="kubernetes.io/secret/5e4f10ca-6466-4ac0-aeb7-325e40473e04-kube-state-metrics-kube-rbac-proxy-config" seLinuxMountContext=""
Mar 13 12:53:46.669186 master-0 kubenswrapper[28149]: I0313 12:53:46.663014 28149 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b12a6f33-70df-4832-ac3b-0d2b94125fbf" volumeName="kubernetes.io/secret/b12a6f33-70df-4832-ac3b-0d2b94125fbf-machine-approver-tls" seLinuxMountContext=""
Mar 13 12:53:46.669186 master-0 kubenswrapper[28149]: I0313 12:53:46.663023 28149 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ec5ec2e2-f7b3-43a1-87da-fbbe0ee5b118" volumeName="kubernetes.io/projected/ec5ec2e2-f7b3-43a1-87da-fbbe0ee5b118-kube-api-access" seLinuxMountContext=""
Mar 13 12:53:46.669186 master-0 kubenswrapper[28149]: I0313 12:53:46.663033 28149 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="00ebdf06-1f44-40cd-87e5-54195188b6d4" volumeName="kubernetes.io/projected/00ebdf06-1f44-40cd-87e5-54195188b6d4-ca-certs" seLinuxMountContext=""
Mar 13 12:53:46.669186 master-0 kubenswrapper[28149]: I0313 12:53:46.663043 28149 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="45925a5e-41ae-4c19-b586-3151c7677612" volumeName="kubernetes.io/secret/45925a5e-41ae-4c19-b586-3151c7677612-metrics-certs" seLinuxMountContext=""
Mar 13 12:53:46.669186 master-0 kubenswrapper[28149]: I0313 12:53:46.663052 28149 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="81f8a7d8-b6a2-4522-91d3-bb524997ed0a" volumeName="kubernetes.io/projected/81f8a7d8-b6a2-4522-91d3-bb524997ed0a-kube-api-access-gd6q6" seLinuxMountContext=""
Mar 13 12:53:46.669186 master-0 kubenswrapper[28149]: I0313 12:53:46.663062 28149 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d3d998ee-b26f-4e30-83bc-f94f8c68060a" volumeName="kubernetes.io/projected/d3d998ee-b26f-4e30-83bc-f94f8c68060a-kube-api-access-x5nb7" seLinuxMountContext=""
Mar 13 12:53:46.669186 master-0 kubenswrapper[28149]: I0313 12:53:46.663072 28149 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d6226325-c4d9-497e-8d19-a71adc66c5ac" volumeName="kubernetes.io/configmap/d6226325-c4d9-497e-8d19-a71adc66c5ac-ovnkube-config" seLinuxMountContext=""
Mar 13 12:53:46.669186 master-0 kubenswrapper[28149]: I0313 12:53:46.663084 28149 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fc192c03-5aec-4507-a702-56bf98c96e9c" volumeName="kubernetes.io/secret/fc192c03-5aec-4507-a702-56bf98c96e9c-client-ca-bundle" seLinuxMountContext=""
Mar 13 12:53:46.669186 master-0 kubenswrapper[28149]: I0313 12:53:46.663094 28149 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d47a1118-c12f-4234-8c0f-1a2a47fa8a4f" volumeName="kubernetes.io/projected/d47a1118-c12f-4234-8c0f-1a2a47fa8a4f-kube-api-access-mkvfp" seLinuxMountContext=""
Mar 13 12:53:46.669186 master-0 kubenswrapper[28149]: I0313 12:53:46.663104 28149 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ec5ec2e2-f7b3-43a1-87da-fbbe0ee5b118" volumeName="kubernetes.io/secret/ec5ec2e2-f7b3-43a1-87da-fbbe0ee5b118-serving-cert" seLinuxMountContext=""
Mar 13 12:53:46.669186 master-0 kubenswrapper[28149]: I0313 12:53:46.663114 28149 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="00d8a21b-701c-4334-9dda-34c28b417f42" volumeName="kubernetes.io/configmap/00d8a21b-701c-4334-9dda-34c28b417f42-images" seLinuxMountContext=""
Mar 13 12:53:46.669186 master-0 kubenswrapper[28149]: I0313 12:53:46.663125 28149 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="18ffa620-dacc-4b09-be04-2c325f860813" volumeName="kubernetes.io/configmap/18ffa620-dacc-4b09-be04-2c325f860813-config" seLinuxMountContext=""
Mar 13 12:53:46.669186 master-0 kubenswrapper[28149]: I0313 12:53:46.663224 28149 manager.go:319] Starting recovery of all containers
Mar 13 12:53:46.669186 master-0 kubenswrapper[28149]: I0313 12:53:46.663230 28149 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="2f79578c-bbfb-4968-893a-730deb4c01f9" volumeName="kubernetes.io/projected/2f79578c-bbfb-4968-893a-730deb4c01f9-kube-api-access-f9hks" seLinuxMountContext=""
Mar 13 12:53:46.669186 master-0 kubenswrapper[28149]: I0313 12:53:46.663250 28149 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="36ad5a83-5c32-4941-94e0-7af86ac5d462" volumeName="kubernetes.io/projected/36ad5a83-5c32-4941-94e0-7af86ac5d462-kube-api-access-mqsh5" seLinuxMountContext=""
Mar 13 12:53:46.669186 master-0 kubenswrapper[28149]: I0313 12:53:46.663261 28149 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ce3a655a-0684-4bc5-ac36-5878507537c7" volumeName="kubernetes.io/configmap/ce3a655a-0684-4bc5-ac36-5878507537c7-multus-daemon-config" seLinuxMountContext=""
Mar 13 12:53:46.669186 master-0 kubenswrapper[28149]: I0313 12:53:46.663270 28149 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ce3a655a-0684-4bc5-ac36-5878507537c7" volumeName="kubernetes.io/configmap/ce3a655a-0684-4bc5-ac36-5878507537c7-cni-binary-copy" seLinuxMountContext=""
Mar 13 12:53:46.669186 master-0 kubenswrapper[28149]: I0313 12:53:46.663280 28149 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d11f8baa-6e8e-4ac0-9b23-1c44efd0ab2a" volumeName="kubernetes.io/configmap/d11f8baa-6e8e-4ac0-9b23-1c44efd0ab2a-config" seLinuxMountContext=""
Mar 13 12:53:46.669186 master-0 kubenswrapper[28149]: I0313 12:53:46.663290 28149 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d39ee5d7-840e-4481-b0b9-baf34da2c7b1" volumeName="kubernetes.io/projected/d39ee5d7-840e-4481-b0b9-baf34da2c7b1-kube-api-access-rvrc7" seLinuxMountContext=""
Mar 13 12:53:46.669186 master-0 kubenswrapper[28149]: I0313 12:53:46.663301 28149 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="081a08d6-a4fd-412c-81c3-1364c36f0f15" volumeName="kubernetes.io/secret/081a08d6-a4fd-412c-81c3-1364c36f0f15-certs" seLinuxMountContext=""
Mar 13 12:53:46.669186 master-0 kubenswrapper[28149]: I0313 12:53:46.663310 28149 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="2f48243b-6b05-4efa-8420-58a4419622bf" volumeName="kubernetes.io/configmap/2f48243b-6b05-4efa-8420-58a4419622bf-trusted-ca-bundle" seLinuxMountContext=""
Mar 13 12:53:46.669186 master-0 kubenswrapper[28149]: I0313 12:53:46.663321 28149 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="2f48243b-6b05-4efa-8420-58a4419622bf" volumeName="kubernetes.io/secret/2f48243b-6b05-4efa-8420-58a4419622bf-encryption-config" seLinuxMountContext=""
Mar 13 12:53:46.669186 master-0 kubenswrapper[28149]: I0313 12:53:46.663333 28149 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="317af639-269e-4163-8e24-fcea468b9352" volumeName="kubernetes.io/secret/317af639-269e-4163-8e24-fcea468b9352-cert" seLinuxMountContext=""
Mar 13 12:53:46.669186 master-0 kubenswrapper[28149]: I0313 12:53:46.663345 28149 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="50a2046b-092b-434c-92a2-579f4462c4fb" volumeName="kubernetes.io/configmap/50a2046b-092b-434c-92a2-579f4462c4fb-trusted-ca-bundle" seLinuxMountContext=""
Mar 13 12:53:46.669186 master-0 kubenswrapper[28149]: I0313 12:53:46.663369 28149 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="50a2046b-092b-434c-92a2-579f4462c4fb" volumeName="kubernetes.io/secret/50a2046b-092b-434c-92a2-579f4462c4fb-serving-cert" seLinuxMountContext=""
Mar 13 12:53:46.669186 master-0 kubenswrapper[28149]: I0313 12:53:46.663380 28149 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c0f3e81c-f61d-430a-98e8-82e3b283fc73" volumeName="kubernetes.io/configmap/c0f3e81c-f61d-430a-98e8-82e3b283fc73-signing-cabundle" seLinuxMountContext=""
Mar 13 12:53:46.669186 master-0 kubenswrapper[28149]: I0313 12:53:46.663393 28149 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef42b65e-2d92-46ac-baaf-30e213787781" volumeName="kubernetes.io/secret/ef42b65e-2d92-46ac-baaf-30e213787781-metrics-tls" seLinuxMountContext=""
Mar 13 12:53:46.669186 master-0 kubenswrapper[28149]: I0313 12:53:46.663404 28149 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4f9e6618-62b5-4181-b545-211461811140" volumeName="kubernetes.io/projected/4f9e6618-62b5-4181-b545-211461811140-kube-api-access-tr9gm" seLinuxMountContext=""
Mar 13 12:53:46.669186 master-0 kubenswrapper[28149]: I0313 12:53:46.663413 28149 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="866b0545-e232-4c80-9fb6-549d313ac3fc" volumeName="kubernetes.io/secret/866b0545-e232-4c80-9fb6-549d313ac3fc-tls-certificates" seLinuxMountContext=""
Mar 13 12:53:46.669186 master-0 kubenswrapper[28149]: I0313 12:53:46.663432 28149 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d44112d1-b2a5-4b8d-b74d-1e91638508d5" volumeName="kubernetes.io/projected/d44112d1-b2a5-4b8d-b74d-1e91638508d5-kube-api-access-tdlrq" seLinuxMountContext=""
Mar 13 12:53:46.669186 master-0 kubenswrapper[28149]: I0313 12:53:46.663443 28149 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d47a1118-c12f-4234-8c0f-1a2a47fa8a4f" volumeName="kubernetes.io/configmap/d47a1118-c12f-4234-8c0f-1a2a47fa8a4f-images" seLinuxMountContext=""
Mar 13 12:53:46.669186 master-0 kubenswrapper[28149]: I0313 12:53:46.663454 28149 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d5f63b6b-990a-444b-a954-d718036f2f6c" volumeName="kubernetes.io/secret/d5f63b6b-990a-444b-a954-d718036f2f6c-machine-api-operator-tls" seLinuxMountContext=""
Mar 13 12:53:46.669186 master-0 kubenswrapper[28149]: I0313 12:53:46.663485 28149 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="18ffa620-dacc-4b09-be04-2c325f860813" volumeName="kubernetes.io/projected/18ffa620-dacc-4b09-be04-2c325f860813-kube-api-access-fmzhw" seLinuxMountContext=""
Mar 13 12:53:46.669186 master-0 kubenswrapper[28149]: I0313 12:53:46.663496 28149 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="2f48243b-6b05-4efa-8420-58a4419622bf" volumeName="kubernetes.io/configmap/2f48243b-6b05-4efa-8420-58a4419622bf-audit" seLinuxMountContext=""
Mar 13 12:53:46.669186 master-0 kubenswrapper[28149]: I0313 12:53:46.663506 28149 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="77ef7e49-eb85-4f5e-94d3-a6a8619a6243" volumeName="kubernetes.io/configmap/77ef7e49-eb85-4f5e-94d3-a6a8619a6243-config" seLinuxMountContext=""
Mar 13 12:53:46.669186 master-0 kubenswrapper[28149]: I0313 12:53:46.663516 28149 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="842251bd-238a-44ba-99fc-a356503f5d16" volumeName="kubernetes.io/secret/842251bd-238a-44ba-99fc-a356503f5d16-node-exporter-kube-rbac-proxy-config" seLinuxMountContext=""
Mar 13 12:53:46.669186 master-0 kubenswrapper[28149]: I0313 12:53:46.663527 28149 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8c62b15f-001a-4b64-b85f-348aefde5d1b" volumeName="kubernetes.io/projected/8c62b15f-001a-4b64-b85f-348aefde5d1b-kube-api-access-8cf2v" seLinuxMountContext=""
Mar 13 12:53:46.669186 master-0 kubenswrapper[28149]: I0313 12:53:46.663537 28149 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="45925a5e-41ae-4c19-b586-3151c7677612" volumeName="kubernetes.io/secret/45925a5e-41ae-4c19-b586-3151c7677612-stats-auth" seLinuxMountContext=""
Mar 13 12:53:46.669186 master-0 kubenswrapper[28149]: I0313 12:53:46.663546 28149 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87a5904a-55ca-416f-8aec-57a2b5194c5a" volumeName="kubernetes.io/configmap/87a5904a-55ca-416f-8aec-57a2b5194c5a-cco-trusted-ca" seLinuxMountContext=""
Mar 13 12:53:46.669186 master-0 kubenswrapper[28149]: I0313 12:53:46.663557 28149 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d5f63b6b-990a-444b-a954-d718036f2f6c" volumeName="kubernetes.io/projected/d5f63b6b-990a-444b-a954-d718036f2f6c-kube-api-access-rw27v" seLinuxMountContext=""
Mar 13 12:53:46.669186 master-0 kubenswrapper[28149]: I0313 12:53:46.663567 28149 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="00d8a21b-701c-4334-9dda-34c28b417f42" volumeName="kubernetes.io/configmap/00d8a21b-701c-4334-9dda-34c28b417f42-auth-proxy-config" seLinuxMountContext=""
Mar 13 12:53:46.669186 master-0 kubenswrapper[28149]: I0313 12:53:46.663577 28149 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="152689b1-5875-4a9a-bb25-bee858523168" volumeName="kubernetes.io/configmap/152689b1-5875-4a9a-bb25-bee858523168-whereabouts-configmap" seLinuxMountContext=""
Mar 13 12:53:46.669186 master-0 kubenswrapper[28149]: I0313 12:53:46.663588 28149 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1f43b4e7-5cd1-46d2-a02e-0d846b2e5182" volumeName="kubernetes.io/configmap/1f43b4e7-5cd1-46d2-a02e-0d846b2e5182-ovnkube-identity-cm" seLinuxMountContext=""
Mar 13 12:53:46.669186 master-0 kubenswrapper[28149]: I0313 12:53:46.663599 28149 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4dd0fc2f-f2ee-4447-a747-04a178288cf0" volumeName="kubernetes.io/projected/4dd0fc2f-f2ee-4447-a747-04a178288cf0-kube-api-access-fnw9d" seLinuxMountContext=""
Mar 13 12:53:46.669186 master-0 kubenswrapper[28149]: I0313 12:53:46.663607 28149 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a454234a-6c8e-4916-81e8-c9e66cec9d31" volumeName="kubernetes.io/configmap/a454234a-6c8e-4916-81e8-c9e66cec9d31-config" seLinuxMountContext=""
Mar 13 12:53:46.669186 master-0 kubenswrapper[28149]: I0313 12:53:46.663622 28149 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="089cfabc-9d3d-4260-bb16-8b5eaf73b3fa" volumeName="kubernetes.io/projected/089cfabc-9d3d-4260-bb16-8b5eaf73b3fa-kube-api-access-vg8tz" seLinuxMountContext=""
Mar 13 12:53:46.669186 master-0 kubenswrapper[28149]: I0313 12:53:46.663631 28149 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="604456a0-4997-43bc-87ef-283a002111fe" volumeName="kubernetes.io/projected/604456a0-4997-43bc-87ef-283a002111fe-kube-api-access-8sk7j" seLinuxMountContext=""
Mar 13 12:53:46.669186 master-0 kubenswrapper[28149]: I0313 12:53:46.663653 28149 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87a5904a-55ca-416f-8aec-57a2b5194c5a" volumeName="kubernetes.io/secret/87a5904a-55ca-416f-8aec-57a2b5194c5a-cloud-credential-operator-serving-cert" seLinuxMountContext=""
Mar 13 12:53:46.669186 master-0 kubenswrapper[28149]: I0313 12:53:46.663663 28149 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4f9e6618-62b5-4181-b545-211461811140" volumeName="kubernetes.io/empty-dir/4f9e6618-62b5-4181-b545-211461811140-utilities" seLinuxMountContext=""
Mar 13 12:53:46.669186 master-0 kubenswrapper[28149]: I0313 12:53:46.663673 28149 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fc192c03-5aec-4507-a702-56bf98c96e9c" volumeName="kubernetes.io/projected/fc192c03-5aec-4507-a702-56bf98c96e9c-kube-api-access-c69h2" seLinuxMountContext=""
Mar 13 12:53:46.669186 master-0 kubenswrapper[28149]: I0313 12:53:46.663684 28149 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1081e565-b7d8-4b6e-9d41-5db36cfe094c" volumeName="kubernetes.io/projected/1081e565-b7d8-4b6e-9d41-5db36cfe094c-kube-api-access-b726x" seLinuxMountContext=""
Mar 13 12:53:46.669186 master-0 kubenswrapper[28149]: I0313 12:53:46.663692 28149 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d6226325-c4d9-497e-8d19-a71adc66c5ac" volumeName="kubernetes.io/secret/d6226325-c4d9-497e-8d19-a71adc66c5ac-ovn-node-metrics-cert" seLinuxMountContext=""
Mar 13 12:53:46.669186 master-0 kubenswrapper[28149]: I0313 12:53:46.663702 28149 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f31565e2-c211-4d28-8bbc-d7a951023a8b" volumeName="kubernetes.io/projected/f31565e2-c211-4d28-8bbc-d7a951023a8b-kube-api-access-kwk62" seLinuxMountContext=""
Mar 13 12:53:46.669186 master-0 kubenswrapper[28149]: I0313 12:53:46.663711 28149 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="08e2bc8e-ca80-454c-81dc-211d122e32e0" volumeName="kubernetes.io/configmap/08e2bc8e-ca80-454c-81dc-211d122e32e0-iptables-alerter-script" seLinuxMountContext=""
Mar 13 12:53:46.669186 master-0 kubenswrapper[28149]: I0313 12:53:46.663721 28149 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0da84bb7-e936-49a0-96b5-614a1305d6a4" volumeName="kubernetes.io/configmap/0da84bb7-e936-49a0-96b5-614a1305d6a4-config" seLinuxMountContext=""
Mar 13 12:53:46.669186 master-0 kubenswrapper[28149]: I0313 12:53:46.663732 28149 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="152689b1-5875-4a9a-bb25-bee858523168" volumeName="kubernetes.io/projected/152689b1-5875-4a9a-bb25-bee858523168-kube-api-access-km69t" seLinuxMountContext=""
Mar 13 12:53:46.669186 master-0 kubenswrapper[28149]: I0313 12:53:46.663741 28149 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5e4f10ca-6466-4ac0-aeb7-325e40473e04" volumeName="kubernetes.io/configmap/5e4f10ca-6466-4ac0-aeb7-325e40473e04-kube-state-metrics-custom-resource-state-configmap" seLinuxMountContext=""
Mar 13 12:53:46.669186 master-0 kubenswrapper[28149]: I0313 12:53:46.663751 28149 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="887d261f-d07f-4ef0-a230-6568f47acf4d" volumeName="kubernetes.io/empty-dir/887d261f-d07f-4ef0-a230-6568f47acf4d-operand-assets" seLinuxMountContext=""
Mar 13 12:53:46.669186 master-0 kubenswrapper[28149]: I0313 12:53:46.663759 28149 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d47a1118-c12f-4234-8c0f-1a2a47fa8a4f" volumeName="kubernetes.io/configmap/d47a1118-c12f-4234-8c0f-1a2a47fa8a4f-auth-proxy-config" seLinuxMountContext=""
Mar 13 12:53:46.669186 master-0 kubenswrapper[28149]: I0313 12:53:46.663769 28149 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f6992fed-b472-4a2d-a376-c5d72aa846d4" volumeName="kubernetes.io/secret/f6992fed-b472-4a2d-a376-c5d72aa846d4-webhook-cert" seLinuxMountContext=""
Mar 13 12:53:46.669186 master-0 kubenswrapper[28149]: I0313 12:53:46.663778 28149 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f0803181-4e37-43fa-8ddc-9c76d3f61817" volumeName="kubernetes.io/projected/f0803181-4e37-43fa-8ddc-9c76d3f61817-kube-api-access-lwkdj" seLinuxMountContext=""
Mar 13 12:53:46.669186 master-0 kubenswrapper[28149]: I0313 12:53:46.663788 28149 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="15b592d6-3c48-45d4-9172-d28632ae8995" volumeName="kubernetes.io/configmap/15b592d6-3c48-45d4-9172-d28632ae8995-etcd-service-ca" seLinuxMountContext=""
Mar 13 12:53:46.669186 master-0 kubenswrapper[28149]: I0313 12:53:46.663798 28149 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="2f79578c-bbfb-4968-893a-730deb4c01f9" volumeName="kubernetes.io/projected/2f79578c-bbfb-4968-893a-730deb4c01f9-bound-sa-token" seLinuxMountContext=""
Mar 13 12:53:46.669186 master-0 kubenswrapper[28149]: I0313 12:53:46.663808 28149 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="317af639-269e-4163-8e24-fcea468b9352" volumeName="kubernetes.io/configmap/317af639-269e-4163-8e24-fcea468b9352-config" seLinuxMountContext=""
Mar 13 12:53:46.669186 master-0 kubenswrapper[28149]: I0313 12:53:46.663819 28149 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="36ad5a83-5c32-4941-94e0-7af86ac5d462" volumeName="kubernetes.io/secret/36ad5a83-5c32-4941-94e0-7af86ac5d462-webhook-certs" seLinuxMountContext=""
Mar 13 12:53:46.669186 master-0 kubenswrapper[28149]: I0313 12:53:46.663828 28149 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="676b054a-e76f-425d-a6ff-3f1bea8b523e" volumeName="kubernetes.io/secret/676b054a-e76f-425d-a6ff-3f1bea8b523e-serving-cert" seLinuxMountContext=""
Mar 13 12:53:46.669186 master-0 kubenswrapper[28149]: I0313 12:53:46.663838 28149 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="13710582-eac3-42e5-b28a-8b4fd3030af2" volumeName="kubernetes.io/projected/13710582-eac3-42e5-b28a-8b4fd3030af2-kube-api-access-vpfv9" seLinuxMountContext=""
Mar 13 12:53:46.669186 master-0 kubenswrapper[28149]: I0313 12:53:46.663847 28149 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f83e0d3e-1f73-4727-8ee3-375cbb9e36f8" volumeName="kubernetes.io/empty-dir/f83e0d3e-1f73-4727-8ee3-375cbb9e36f8-tmp" seLinuxMountContext=""
Mar 13 12:53:46.669186 master-0 kubenswrapper[28149]: I0313 12:53:46.663867 28149 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ec5ec2e2-f7b3-43a1-87da-fbbe0ee5b118" volumeName="kubernetes.io/configmap/ec5ec2e2-f7b3-43a1-87da-fbbe0ee5b118-config" seLinuxMountContext=""
Mar 13 12:53:46.669186 master-0 kubenswrapper[28149]: I0313 12:53:46.663883 28149 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="50a2046b-092b-434c-92a2-579f4462c4fb" volumeName="kubernetes.io/empty-dir/50a2046b-092b-434c-92a2-579f4462c4fb-snapshots" seLinuxMountContext=""
Mar 13 12:53:46.669186 master-0 kubenswrapper[28149]: I0313 12:53:46.663892 28149 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="915aabfe-1071-4bfc-b291-424304dfe7d8" volumeName="kubernetes.io/projected/915aabfe-1071-4bfc-b291-424304dfe7d8-ca-certs" seLinuxMountContext=""
Mar 13 12:53:46.669186 master-0 kubenswrapper[28149]: I0313 12:53:46.663905 28149 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="034aaf8e-95df-4171-bae4-e7abe58d15f7" volumeName="kubernetes.io/configmap/034aaf8e-95df-4171-bae4-e7abe58d15f7-config" seLinuxMountContext=""
Mar 13 12:53:46.669186 master-0 kubenswrapper[28149]: I0313 12:53:46.663915 28149 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="089cfabc-9d3d-4260-bb16-8b5eaf73b3fa" volumeName="kubernetes.io/configmap/089cfabc-9d3d-4260-bb16-8b5eaf73b3fa-config" seLinuxMountContext=""
Mar 13 12:53:46.669186 master-0 kubenswrapper[28149]: I0313 12:53:46.663927 28149 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="15b592d6-3c48-45d4-9172-d28632ae8995" volumeName="kubernetes.io/secret/15b592d6-3c48-45d4-9172-d28632ae8995-etcd-client" seLinuxMountContext=""
Mar 13 12:53:46.669186 master-0 kubenswrapper[28149]: I0313 12:53:46.663948 28149 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="29b6aa89-0416-4595-9deb-10b290521d86" volumeName="kubernetes.io/projected/29b6aa89-0416-4595-9deb-10b290521d86-kube-api-access-cbtjs" seLinuxMountContext=""
Mar 13 12:53:46.669186 master-0 kubenswrapper[28149]: I0313 12:53:46.663967 28149 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="676b054a-e76f-425d-a6ff-3f1bea8b523e" volumeName="kubernetes.io/projected/676b054a-e76f-425d-a6ff-3f1bea8b523e-kube-api-access" seLinuxMountContext=""
Mar 13 12:53:46.669186 master-0 kubenswrapper[28149]: I0313 12:53:46.663978 28149 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="747659a6-4a1e-43ed-bb8e-36da6e63b5a1" volumeName="kubernetes.io/secret/747659a6-4a1e-43ed-bb8e-36da6e63b5a1-control-plane-machine-set-operator-tls" seLinuxMountContext=""
Mar 13 12:53:46.669186 master-0 kubenswrapper[28149]: I0313 12:53:46.663997 28149 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="842251bd-238a-44ba-99fc-a356503f5d16" volumeName="kubernetes.io/configmap/842251bd-238a-44ba-99fc-a356503f5d16-metrics-client-ca" seLinuxMountContext=""
Mar 13 12:53:46.669186 master-0 kubenswrapper[28149]: I0313 12:53:46.664034 28149 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="be89c006-0c82-4728-9c79-210303e623dc" volumeName="kubernetes.io/projected/be89c006-0c82-4728-9c79-210303e623dc-kube-api-access-dd4m8" seLinuxMountContext=""
Mar 13 12:53:46.669186 master-0 kubenswrapper[28149]: I0313 12:53:46.664047 28149 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d11f8baa-6e8e-4ac0-9b23-1c44efd0ab2a" volumeName="kubernetes.io/secret/d11f8baa-6e8e-4ac0-9b23-1c44efd0ab2a-serving-cert" seLinuxMountContext=""
Mar 13 12:53:46.669186 master-0 kubenswrapper[28149]: I0313 12:53:46.664065 28149 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fc192c03-5aec-4507-a702-56bf98c96e9c" volumeName="kubernetes.io/configmap/fc192c03-5aec-4507-a702-56bf98c96e9c-metrics-server-audit-profiles" seLinuxMountContext=""
Mar 13 12:53:46.669186 master-0 kubenswrapper[28149]: I0313 12:53:46.664077 28149 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="15b592d6-3c48-45d4-9172-d28632ae8995" volumeName="kubernetes.io/projected/15b592d6-3c48-45d4-9172-d28632ae8995-kube-api-access-clrz7" seLinuxMountContext=""
Mar 13 12:53:46.669186 master-0 kubenswrapper[28149]: I0313 12:53:46.664088 28149 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5e4f10ca-6466-4ac0-aeb7-325e40473e04" volumeName="kubernetes.io/secret/5e4f10ca-6466-4ac0-aeb7-325e40473e04-kube-state-metrics-tls" seLinuxMountContext=""
Mar 13 12:53:46.669186 master-0 kubenswrapper[28149]: I0313 12:53:46.664098 28149 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="81f8a7d8-b6a2-4522-91d3-bb524997ed0a" volumeName="kubernetes.io/secret/81f8a7d8-b6a2-4522-91d3-bb524997ed0a-cert" seLinuxMountContext=""
Mar 13 12:53:46.669186 master-0 kubenswrapper[28149]: I0313 12:53:46.664109 28149 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d44112d1-b2a5-4b8d-b74d-1e91638508d5" volumeName="kubernetes.io/secret/d44112d1-b2a5-4b8d-b74d-1e91638508d5-cert" seLinuxMountContext=""
Mar 13 12:53:46.669186 master-0 kubenswrapper[28149]: I0313 12:53:46.664120 28149 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fc192c03-5aec-4507-a702-56bf98c96e9c" volumeName="kubernetes.io/configmap/fc192c03-5aec-4507-a702-56bf98c96e9c-configmap-kubelet-serving-ca-bundle" seLinuxMountContext=""
Mar 13 12:53:46.669186 master-0 kubenswrapper[28149]: I0313 12:53:46.664129 28149 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3020d236-03e0-4916-97dd-f1085632ca43" volumeName="kubernetes.io/configmap/3020d236-03e0-4916-97dd-f1085632ca43-trusted-ca" seLinuxMountContext=""
Mar 13 12:53:46.669186 master-0 kubenswrapper[28149]: I0313 12:53:46.664156 28149 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="604456a0-4997-43bc-87ef-283a002111fe" volumeName="kubernetes.io/secret/604456a0-4997-43bc-87ef-283a002111fe-cluster-monitoring-operator-tls" seLinuxMountContext=""
Mar 13 12:53:46.669186 master-0 kubenswrapper[28149]: I0313 12:53:46.664166 28149 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c4477be6-bcff-407a-8033-b005e19bf5d6" volumeName="kubernetes.io/secret/c4477be6-bcff-407a-8033-b005e19bf5d6-etcd-client" seLinuxMountContext=""
Mar 13 12:53:46.669186 master-0 kubenswrapper[28149]: I0313 12:53:46.664175 28149 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d44112d1-b2a5-4b8d-b74d-1e91638508d5" volumeName="kubernetes.io/configmap/d44112d1-b2a5-4b8d-b74d-1e91638508d5-auth-proxy-config" seLinuxMountContext=""
Mar 13 12:53:46.669186 master-0 kubenswrapper[28149]: I0313 12:53:46.664184 28149 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="10944f9c-8ce9-44e6-9c36-a0ea19d8cae3" volumeName="kubernetes.io/secret/10944f9c-8ce9-44e6-9c36-a0ea19d8cae3-srv-cert" seLinuxMountContext=""
Mar 13 12:53:46.683479 master-0 kubenswrapper[28149]: I0313 12:53:46.664192 28149 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="13f32761-b386-4f93-b3c0-b16ea53d338a" volumeName="kubernetes.io/secret/13f32761-b386-4f93-b3c0-b16ea53d338a-metrics-tls" seLinuxMountContext=""
Mar 13 12:53:46.683479 master-0 kubenswrapper[28149]: I0313 12:53:46.664202 28149 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="317af639-269e-4163-8e24-fcea468b9352" volumeName="kubernetes.io/projected/317af639-269e-4163-8e24-fcea468b9352-kube-api-access-4v66x" seLinuxMountContext=""
Mar 13 12:53:46.683479 master-0 kubenswrapper[28149]: I0313 12:53:46.664211 28149 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5e4f10ca-6466-4ac0-aeb7-325e40473e04" volumeName="kubernetes.io/projected/5e4f10ca-6466-4ac0-aeb7-325e40473e04-kube-api-access-4xbrx" seLinuxMountContext=""
Mar 13 12:53:46.683479 master-0 kubenswrapper[28149]: I0313 12:53:46.664221 28149 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="be89c006-0c82-4728-9c79-210303e623dc" volumeName="kubernetes.io/secret/be89c006-0c82-4728-9c79-210303e623dc-prometheus-operator-tls" seLinuxMountContext=""
Mar 13 12:53:46.683479 master-0 kubenswrapper[28149]: I0313 12:53:46.664230 28149 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ce3a655a-0684-4bc5-ac36-5878507537c7" volumeName="kubernetes.io/projected/ce3a655a-0684-4bc5-ac36-5878507537c7-kube-api-access-vgbvr" seLinuxMountContext=""
Mar 13 12:53:46.683479 master-0 kubenswrapper[28149]: I0313 12:53:46.664239 28149 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="18ffa620-dacc-4b09-be04-2c325f860813" volumeName="kubernetes.io/secret/18ffa620-dacc-4b09-be04-2c325f860813-serving-cert" seLinuxMountContext=""
Mar 13 12:53:46.683479 master-0 kubenswrapper[28149]: I0313 12:53:46.664248 28149 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1f43b4e7-5cd1-46d2-a02e-0d846b2e5182" volumeName="kubernetes.io/configmap/1f43b4e7-5cd1-46d2-a02e-0d846b2e5182-env-overrides" seLinuxMountContext=""
Mar 13 12:53:46.683479 master-0 kubenswrapper[28149]: I0313 12:53:46.664257 28149 reconstruct.go:130] "Volume is marked as
uncertain and added into the actual state" pod="" podName="2f48243b-6b05-4efa-8420-58a4419622bf" volumeName="kubernetes.io/configmap/2f48243b-6b05-4efa-8420-58a4419622bf-config" seLinuxMountContext="" Mar 13 12:53:46.683479 master-0 kubenswrapper[28149]: I0313 12:53:46.664267 28149 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="50a2046b-092b-434c-92a2-579f4462c4fb" volumeName="kubernetes.io/projected/50a2046b-092b-434c-92a2-579f4462c4fb-kube-api-access-mnpds" seLinuxMountContext="" Mar 13 12:53:46.683479 master-0 kubenswrapper[28149]: I0313 12:53:46.664277 28149 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5ae41cff-0949-47f8-aae9-ae133191476d" volumeName="kubernetes.io/configmap/5ae41cff-0949-47f8-aae9-ae133191476d-ovnkube-config" seLinuxMountContext="" Mar 13 12:53:46.683479 master-0 kubenswrapper[28149]: I0313 12:53:46.664286 28149 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c0f3e81c-f61d-430a-98e8-82e3b283fc73" volumeName="kubernetes.io/projected/c0f3e81c-f61d-430a-98e8-82e3b283fc73-kube-api-access-65ts9" seLinuxMountContext="" Mar 13 12:53:46.683479 master-0 kubenswrapper[28149]: I0313 12:53:46.664296 28149 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e25bef76-7020-4f86-8dee-a58ebed537d2" volumeName="kubernetes.io/configmap/e25bef76-7020-4f86-8dee-a58ebed537d2-mcc-auth-proxy-config" seLinuxMountContext="" Mar 13 12:53:46.683479 master-0 kubenswrapper[28149]: I0313 12:53:46.664308 28149 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5ae41cff-0949-47f8-aae9-ae133191476d" volumeName="kubernetes.io/configmap/5ae41cff-0949-47f8-aae9-ae133191476d-env-overrides" seLinuxMountContext="" Mar 13 12:53:46.683479 master-0 kubenswrapper[28149]: I0313 12:53:46.664317 28149 reconstruct.go:130] "Volume is 
marked as uncertain and added into the actual state" pod="" podName="d11f8baa-6e8e-4ac0-9b23-1c44efd0ab2a" volumeName="kubernetes.io/configmap/d11f8baa-6e8e-4ac0-9b23-1c44efd0ab2a-service-ca-bundle" seLinuxMountContext="" Mar 13 12:53:46.683479 master-0 kubenswrapper[28149]: I0313 12:53:46.664328 28149 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="2f48243b-6b05-4efa-8420-58a4419622bf" volumeName="kubernetes.io/projected/2f48243b-6b05-4efa-8420-58a4419622bf-kube-api-access-qhddd" seLinuxMountContext="" Mar 13 12:53:46.683479 master-0 kubenswrapper[28149]: I0313 12:53:46.664337 28149 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="50be3c2b-284b-4f60-b4ed-2cc7b4e528fa" volumeName="kubernetes.io/projected/50be3c2b-284b-4f60-b4ed-2cc7b4e528fa-kube-api-access-jbwwp" seLinuxMountContext="" Mar 13 12:53:46.683479 master-0 kubenswrapper[28149]: I0313 12:53:46.664349 28149 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b12a6f33-70df-4832-ac3b-0d2b94125fbf" volumeName="kubernetes.io/configmap/b12a6f33-70df-4832-ac3b-0d2b94125fbf-config" seLinuxMountContext="" Mar 13 12:53:46.683479 master-0 kubenswrapper[28149]: I0313 12:53:46.664359 28149 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bcf05594-4c10-4b54-a47c-d55e323f1f87" volumeName="kubernetes.io/configmap/bcf05594-4c10-4b54-a47c-d55e323f1f87-trusted-ca" seLinuxMountContext="" Mar 13 12:53:46.683479 master-0 kubenswrapper[28149]: I0313 12:53:46.664370 28149 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bcf05594-4c10-4b54-a47c-d55e323f1f87" volumeName="kubernetes.io/secret/bcf05594-4c10-4b54-a47c-d55e323f1f87-image-registry-operator-tls" seLinuxMountContext="" Mar 13 12:53:46.683479 master-0 kubenswrapper[28149]: I0313 12:53:46.664379 28149 reconstruct.go:130] 
"Volume is marked as uncertain and added into the actual state" pod="" podName="be89c006-0c82-4728-9c79-210303e623dc" volumeName="kubernetes.io/secret/be89c006-0c82-4728-9c79-210303e623dc-prometheus-operator-kube-rbac-proxy-config" seLinuxMountContext="" Mar 13 12:53:46.683479 master-0 kubenswrapper[28149]: I0313 12:53:46.664389 28149 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e0ce4c51-2b9f-410f-93e5-9c2ff718dd71" volumeName="kubernetes.io/empty-dir/e0ce4c51-2b9f-410f-93e5-9c2ff718dd71-catalog-content" seLinuxMountContext="" Mar 13 12:53:46.683479 master-0 kubenswrapper[28149]: I0313 12:53:46.664399 28149 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e25bef76-7020-4f86-8dee-a58ebed537d2" volumeName="kubernetes.io/projected/e25bef76-7020-4f86-8dee-a58ebed537d2-kube-api-access-r8gcb" seLinuxMountContext="" Mar 13 12:53:46.683479 master-0 kubenswrapper[28149]: I0313 12:53:46.664408 28149 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="317af639-269e-4163-8e24-fcea468b9352" volumeName="kubernetes.io/configmap/317af639-269e-4163-8e24-fcea468b9352-images" seLinuxMountContext="" Mar 13 12:53:46.683479 master-0 kubenswrapper[28149]: I0313 12:53:46.664417 28149 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="887d261f-d07f-4ef0-a230-6568f47acf4d" volumeName="kubernetes.io/projected/887d261f-d07f-4ef0-a230-6568f47acf4d-kube-api-access-pmfxj" seLinuxMountContext="" Mar 13 12:53:46.683479 master-0 kubenswrapper[28149]: I0313 12:53:46.664428 28149 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a454234a-6c8e-4916-81e8-c9e66cec9d31" volumeName="kubernetes.io/configmap/a454234a-6c8e-4916-81e8-c9e66cec9d31-proxy-ca-bundles" seLinuxMountContext="" Mar 13 12:53:46.683479 master-0 kubenswrapper[28149]: I0313 12:53:46.664437 
28149 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b12a6f33-70df-4832-ac3b-0d2b94125fbf" volumeName="kubernetes.io/configmap/b12a6f33-70df-4832-ac3b-0d2b94125fbf-auth-proxy-config" seLinuxMountContext="" Mar 13 12:53:46.683479 master-0 kubenswrapper[28149]: I0313 12:53:46.664446 28149 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="be89c006-0c82-4728-9c79-210303e623dc" volumeName="kubernetes.io/configmap/be89c006-0c82-4728-9c79-210303e623dc-metrics-client-ca" seLinuxMountContext="" Mar 13 12:53:46.683479 master-0 kubenswrapper[28149]: I0313 12:53:46.664455 28149 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8c62b15f-001a-4b64-b85f-348aefde5d1b" volumeName="kubernetes.io/configmap/8c62b15f-001a-4b64-b85f-348aefde5d1b-config" seLinuxMountContext="" Mar 13 12:53:46.683479 master-0 kubenswrapper[28149]: I0313 12:53:46.664466 28149 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b12a6f33-70df-4832-ac3b-0d2b94125fbf" volumeName="kubernetes.io/projected/b12a6f33-70df-4832-ac3b-0d2b94125fbf-kube-api-access-9p9dz" seLinuxMountContext="" Mar 13 12:53:46.683479 master-0 kubenswrapper[28149]: I0313 12:53:46.664475 28149 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d3d998ee-b26f-4e30-83bc-f94f8c68060a" volumeName="kubernetes.io/configmap/d3d998ee-b26f-4e30-83bc-f94f8c68060a-marketplace-trusted-ca" seLinuxMountContext="" Mar 13 12:53:46.683479 master-0 kubenswrapper[28149]: I0313 12:53:46.664484 28149 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f5775266-5e58-44ed-81cb-dfe3faf38add" volumeName="kubernetes.io/projected/f5775266-5e58-44ed-81cb-dfe3faf38add-kube-api-access-9q2qc" seLinuxMountContext="" Mar 13 12:53:46.683479 master-0 kubenswrapper[28149]: I0313 
12:53:46.664493 28149 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f83e0d3e-1f73-4727-8ee3-375cbb9e36f8" volumeName="kubernetes.io/projected/f83e0d3e-1f73-4727-8ee3-375cbb9e36f8-kube-api-access-p6h9f" seLinuxMountContext="" Mar 13 12:53:46.683479 master-0 kubenswrapper[28149]: I0313 12:53:46.664502 28149 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="2f48243b-6b05-4efa-8420-58a4419622bf" volumeName="kubernetes.io/secret/2f48243b-6b05-4efa-8420-58a4419622bf-etcd-client" seLinuxMountContext="" Mar 13 12:53:46.683479 master-0 kubenswrapper[28149]: I0313 12:53:46.664511 28149 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="45925a5e-41ae-4c19-b586-3151c7677612" volumeName="kubernetes.io/projected/45925a5e-41ae-4c19-b586-3151c7677612-kube-api-access-tll9d" seLinuxMountContext="" Mar 13 12:53:46.683479 master-0 kubenswrapper[28149]: I0313 12:53:46.664521 28149 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="77ef7e49-eb85-4f5e-94d3-a6a8619a6243" volumeName="kubernetes.io/projected/77ef7e49-eb85-4f5e-94d3-a6a8619a6243-kube-api-access" seLinuxMountContext="" Mar 13 12:53:46.683479 master-0 kubenswrapper[28149]: I0313 12:53:46.664531 28149 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="887d261f-d07f-4ef0-a230-6568f47acf4d" volumeName="kubernetes.io/secret/887d261f-d07f-4ef0-a230-6568f47acf4d-cluster-olm-operator-serving-cert" seLinuxMountContext="" Mar 13 12:53:46.683479 master-0 kubenswrapper[28149]: I0313 12:53:46.664541 28149 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bcf05594-4c10-4b54-a47c-d55e323f1f87" volumeName="kubernetes.io/projected/bcf05594-4c10-4b54-a47c-d55e323f1f87-bound-sa-token" seLinuxMountContext="" Mar 13 12:53:46.683479 master-0 
kubenswrapper[28149]: I0313 12:53:46.664700 28149 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d5a19b80-d488-46d3-a4a8-0b80361077e1" volumeName="kubernetes.io/projected/d5a19b80-d488-46d3-a4a8-0b80361077e1-kube-api-access-p8hcd" seLinuxMountContext="" Mar 13 12:53:46.683479 master-0 kubenswrapper[28149]: I0313 12:53:46.664735 28149 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e25bef76-7020-4f86-8dee-a58ebed537d2" volumeName="kubernetes.io/secret/e25bef76-7020-4f86-8dee-a58ebed537d2-proxy-tls" seLinuxMountContext="" Mar 13 12:53:46.683479 master-0 kubenswrapper[28149]: I0313 12:53:46.664746 28149 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f6992fed-b472-4a2d-a376-c5d72aa846d4" volumeName="kubernetes.io/secret/f6992fed-b472-4a2d-a376-c5d72aa846d4-apiservice-cert" seLinuxMountContext="" Mar 13 12:53:46.683479 master-0 kubenswrapper[28149]: I0313 12:53:46.664756 28149 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1cf388b6-e4a7-41db-a350-1b503214efd3" volumeName="kubernetes.io/projected/1cf388b6-e4a7-41db-a350-1b503214efd3-kube-api-access-9kxx9" seLinuxMountContext="" Mar 13 12:53:46.683479 master-0 kubenswrapper[28149]: I0313 12:53:46.664767 28149 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="2f48243b-6b05-4efa-8420-58a4419622bf" volumeName="kubernetes.io/configmap/2f48243b-6b05-4efa-8420-58a4419622bf-etcd-serving-ca" seLinuxMountContext="" Mar 13 12:53:46.683479 master-0 kubenswrapper[28149]: I0313 12:53:46.664778 28149 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4e279dcc-35e2-4503-babc-978ac208c150" volumeName="kubernetes.io/projected/4e279dcc-35e2-4503-babc-978ac208c150-kube-api-access-bwjz5" seLinuxMountContext="" Mar 13 12:53:46.683479 
master-0 kubenswrapper[28149]: I0313 12:53:46.664788 28149 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87a5904a-55ca-416f-8aec-57a2b5194c5a" volumeName="kubernetes.io/projected/87a5904a-55ca-416f-8aec-57a2b5194c5a-kube-api-access-mddhv" seLinuxMountContext="" Mar 13 12:53:46.683479 master-0 kubenswrapper[28149]: I0313 12:53:46.664800 28149 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d6226325-c4d9-497e-8d19-a71adc66c5ac" volumeName="kubernetes.io/projected/d6226325-c4d9-497e-8d19-a71adc66c5ac-kube-api-access-4j5fc" seLinuxMountContext="" Mar 13 12:53:46.683479 master-0 kubenswrapper[28149]: I0313 12:53:46.664812 28149 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fc192c03-5aec-4507-a702-56bf98c96e9c" volumeName="kubernetes.io/empty-dir/fc192c03-5aec-4507-a702-56bf98c96e9c-audit-log" seLinuxMountContext="" Mar 13 12:53:46.683479 master-0 kubenswrapper[28149]: I0313 12:53:46.664821 28149 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3d653e1a-5903-4a02-9357-df145f028c0d" volumeName="kubernetes.io/projected/3d653e1a-5903-4a02-9357-df145f028c0d-kube-api-access-6x8kz" seLinuxMountContext="" Mar 13 12:53:46.683479 master-0 kubenswrapper[28149]: I0313 12:53:46.664830 28149 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="15b592d6-3c48-45d4-9172-d28632ae8995" volumeName="kubernetes.io/configmap/15b592d6-3c48-45d4-9172-d28632ae8995-config" seLinuxMountContext="" Mar 13 12:53:46.683479 master-0 kubenswrapper[28149]: I0313 12:53:46.664839 28149 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3020d236-03e0-4916-97dd-f1085632ca43" volumeName="kubernetes.io/secret/3020d236-03e0-4916-97dd-f1085632ca43-apiservice-cert" seLinuxMountContext="" Mar 13 12:53:46.683479 
master-0 kubenswrapper[28149]: I0313 12:53:46.664847 28149 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d6226325-c4d9-497e-8d19-a71adc66c5ac" volumeName="kubernetes.io/configmap/d6226325-c4d9-497e-8d19-a71adc66c5ac-env-overrides" seLinuxMountContext="" Mar 13 12:53:46.683479 master-0 kubenswrapper[28149]: I0313 12:53:46.664858 28149 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="269aedfd-4274-4998-bd0d-603b67257666" volumeName="kubernetes.io/projected/269aedfd-4274-4998-bd0d-603b67257666-kube-api-access-btf8q" seLinuxMountContext="" Mar 13 12:53:46.683479 master-0 kubenswrapper[28149]: I0313 12:53:46.664867 28149 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4f9e6618-62b5-4181-b545-211461811140" volumeName="kubernetes.io/empty-dir/4f9e6618-62b5-4181-b545-211461811140-catalog-content" seLinuxMountContext="" Mar 13 12:53:46.683479 master-0 kubenswrapper[28149]: I0313 12:53:46.664876 28149 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="77ef7e49-eb85-4f5e-94d3-a6a8619a6243" volumeName="kubernetes.io/secret/77ef7e49-eb85-4f5e-94d3-a6a8619a6243-serving-cert" seLinuxMountContext="" Mar 13 12:53:46.683479 master-0 kubenswrapper[28149]: I0313 12:53:46.664886 28149 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="915aabfe-1071-4bfc-b291-424304dfe7d8" volumeName="kubernetes.io/empty-dir/915aabfe-1071-4bfc-b291-424304dfe7d8-cache" seLinuxMountContext="" Mar 13 12:53:46.683479 master-0 kubenswrapper[28149]: I0313 12:53:46.664895 28149 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="2f48243b-6b05-4efa-8420-58a4419622bf" volumeName="kubernetes.io/configmap/2f48243b-6b05-4efa-8420-58a4419622bf-image-import-ca" seLinuxMountContext="" Mar 13 12:53:46.683479 master-0 
kubenswrapper[28149]: I0313 12:53:46.664907 28149 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="45925a5e-41ae-4c19-b586-3151c7677612" volumeName="kubernetes.io/configmap/45925a5e-41ae-4c19-b586-3151c7677612-service-ca-bundle" seLinuxMountContext="" Mar 13 12:53:46.683479 master-0 kubenswrapper[28149]: I0313 12:53:46.664916 28149 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a454234a-6c8e-4916-81e8-c9e66cec9d31" volumeName="kubernetes.io/configmap/a454234a-6c8e-4916-81e8-c9e66cec9d31-client-ca" seLinuxMountContext="" Mar 13 12:53:46.683479 master-0 kubenswrapper[28149]: I0313 12:53:46.664927 28149 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a454234a-6c8e-4916-81e8-c9e66cec9d31" volumeName="kubernetes.io/secret/a454234a-6c8e-4916-81e8-c9e66cec9d31-serving-cert" seLinuxMountContext="" Mar 13 12:53:46.683479 master-0 kubenswrapper[28149]: I0313 12:53:46.664938 28149 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c642c18f-f960-4418-bcb7-df884f8f8ad5" volumeName="kubernetes.io/projected/c642c18f-f960-4418-bcb7-df884f8f8ad5-kube-api-access-8t2jl" seLinuxMountContext="" Mar 13 12:53:46.683479 master-0 kubenswrapper[28149]: I0313 12:53:46.664948 28149 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="18ffa620-dacc-4b09-be04-2c325f860813" volumeName="kubernetes.io/configmap/18ffa620-dacc-4b09-be04-2c325f860813-client-ca" seLinuxMountContext="" Mar 13 12:53:46.683479 master-0 kubenswrapper[28149]: I0313 12:53:46.664957 28149 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1cf388b6-e4a7-41db-a350-1b503214efd3" volumeName="kubernetes.io/empty-dir/1cf388b6-e4a7-41db-a350-1b503214efd3-utilities" seLinuxMountContext="" Mar 13 12:53:46.683479 master-0 
kubenswrapper[28149]: I0313 12:53:46.664966 28149 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="50be3c2b-284b-4f60-b4ed-2cc7b4e528fa" volumeName="kubernetes.io/configmap/50be3c2b-284b-4f60-b4ed-2cc7b4e528fa-mcd-auth-proxy-config" seLinuxMountContext="" Mar 13 12:53:46.683479 master-0 kubenswrapper[28149]: I0313 12:53:46.664975 28149 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d5a19b80-d488-46d3-a4a8-0b80361077e1" volumeName="kubernetes.io/secret/d5a19b80-d488-46d3-a4a8-0b80361077e1-srv-cert" seLinuxMountContext="" Mar 13 12:53:46.683479 master-0 kubenswrapper[28149]: I0313 12:53:46.664985 28149 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e0ce4c51-2b9f-410f-93e5-9c2ff718dd71" volumeName="kubernetes.io/empty-dir/e0ce4c51-2b9f-410f-93e5-9c2ff718dd71-utilities" seLinuxMountContext="" Mar 13 12:53:46.683479 master-0 kubenswrapper[28149]: I0313 12:53:46.664993 28149 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1081e565-b7d8-4b6e-9d41-5db36cfe094c" volumeName="kubernetes.io/secret/1081e565-b7d8-4b6e-9d41-5db36cfe094c-openshift-state-metrics-tls" seLinuxMountContext="" Mar 13 12:53:46.683479 master-0 kubenswrapper[28149]: I0313 12:53:46.665003 28149 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5ae41cff-0949-47f8-aae9-ae133191476d" volumeName="kubernetes.io/projected/5ae41cff-0949-47f8-aae9-ae133191476d-kube-api-access-mlvjp" seLinuxMountContext="" Mar 13 12:53:46.683479 master-0 kubenswrapper[28149]: I0313 12:53:46.665017 28149 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c4477be6-bcff-407a-8033-b005e19bf5d6" volumeName="kubernetes.io/configmap/c4477be6-bcff-407a-8033-b005e19bf5d6-trusted-ca-bundle" seLinuxMountContext="" Mar 13 12:53:46.683479 
master-0 kubenswrapper[28149]: I0313 12:53:46.665027 28149 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fc192c03-5aec-4507-a702-56bf98c96e9c" volumeName="kubernetes.io/secret/fc192c03-5aec-4507-a702-56bf98c96e9c-secret-metrics-client-certs" seLinuxMountContext="" Mar 13 12:53:46.683479 master-0 kubenswrapper[28149]: I0313 12:53:46.665036 28149 reconstruct.go:97] "Volume reconstruction finished" Mar 13 12:53:46.683479 master-0 kubenswrapper[28149]: I0313 12:53:46.665044 28149 reconciler.go:26] "Reconciler: start to sync state" Mar 13 12:53:46.683479 master-0 kubenswrapper[28149]: E0313 12:53:46.669902 28149 kubelet.go:1495] "Image garbage collection failed once. Stats initialization may not have completed yet" err="failed to get imageFs info: unable to find data in memory cache" Mar 13 12:53:46.689571 master-0 kubenswrapper[28149]: I0313 12:53:46.684408 28149 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Mar 13 12:53:46.689571 master-0 kubenswrapper[28149]: I0313 12:53:46.685900 28149 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Mar 13 12:53:46.689571 master-0 kubenswrapper[28149]: I0313 12:53:46.685948 28149 status_manager.go:217] "Starting to sync pod status with apiserver" Mar 13 12:53:46.689571 master-0 kubenswrapper[28149]: I0313 12:53:46.685975 28149 kubelet.go:2335] "Starting kubelet main sync loop" Mar 13 12:53:46.689571 master-0 kubenswrapper[28149]: E0313 12:53:46.686039 28149 kubelet.go:2359] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Mar 13 12:53:46.708808 master-0 kubenswrapper[28149]: I0313 12:53:46.708548 28149 generic.go:334] "Generic (PLEG): container finished" podID="18ffa620-dacc-4b09-be04-2c325f860813" containerID="bf5764c3d8fba8c40cba1931dc4f8b36f32584d349bb0fa8f02b7c483a7626de" exitCode=0 Mar 13 12:53:46.711379 master-0 kubenswrapper[28149]: I0313 12:53:46.711335 28149 generic.go:334] "Generic (PLEG): container finished" podID="f78c05e1499b533b83f091333d61f045" containerID="c01d9a99bd192d1dcec1d6b82d10b0a4d0e1e32477c6f2dee5d3e54b144ca2b7" exitCode=0 Mar 13 12:53:46.711379 master-0 kubenswrapper[28149]: I0313 12:53:46.711370 28149 generic.go:334] "Generic (PLEG): container finished" podID="f78c05e1499b533b83f091333d61f045" containerID="70ca563a3bda7cc49d130c71d95d6db991e5796cde50a910c3e63400c9e5a03b" exitCode=2 Mar 13 12:53:46.715717 master-0 kubenswrapper[28149]: I0313 12:53:46.715666 28149 generic.go:334] "Generic (PLEG): container finished" podID="d3d998ee-b26f-4e30-83bc-f94f8c68060a" containerID="de5f0e7cf4aa65e15644e5e3e9b797e70ca19a364211733911306a2f1e0bcffe" exitCode=0 Mar 13 12:53:46.719864 master-0 kubenswrapper[28149]: I0313 12:53:46.718750 28149 generic.go:334] "Generic (PLEG): container finished" podID="bc3825c8-8381-4d19-b482-e9499a72a700" containerID="36a99dc3a52618a9e4e7602094957952525bef75208a86d5faa34103a0a98d5e" exitCode=0 Mar 13 12:53:46.723043 master-0 kubenswrapper[28149]: I0313 12:53:46.722011 28149 log.go:25] "Finished parsing 
log file" path="/var/log/pods/openshift-cluster-storage-operator_csi-snapshot-controller-7577d6f48-pjpn2_c642c18f-f960-4418-bcb7-df884f8f8ad5/snapshot-controller/4.log" Mar 13 12:53:46.723043 master-0 kubenswrapper[28149]: I0313 12:53:46.722047 28149 generic.go:334] "Generic (PLEG): container finished" podID="c642c18f-f960-4418-bcb7-df884f8f8ad5" containerID="c5dac29410c608c592ce2da4d646f5dae37752b356e4a615b5b9f8033e660a03" exitCode=1 Mar 13 12:53:46.725576 master-0 kubenswrapper[28149]: I0313 12:53:46.725380 28149 generic.go:334] "Generic (PLEG): container finished" podID="15b592d6-3c48-45d4-9172-d28632ae8995" containerID="5d11669c933e022e2eb1221b72c8dfc83094667fb6b7c0cba300ddb5b306a9d7" exitCode=0 Mar 13 12:53:46.731391 master-0 kubenswrapper[28149]: E0313 12:53:46.731341 28149 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 13 12:53:46.740853 master-0 kubenswrapper[28149]: I0313 12:53:46.740746 28149 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_installer-1-master-0_88bf0bf8-c0ee-454e-8d8b-592a6e796cfc/installer/0.log" Mar 13 12:53:46.740853 master-0 kubenswrapper[28149]: I0313 12:53:46.740827 28149 generic.go:334] "Generic (PLEG): container finished" podID="88bf0bf8-c0ee-454e-8d8b-592a6e796cfc" containerID="f528e329070374fe2c7b4c96e9e572f6132a46e0533c48dae8a60425fcb61903" exitCode=1 Mar 13 12:53:46.759036 master-0 kubenswrapper[28149]: I0313 12:53:46.754215 28149 generic.go:334] "Generic (PLEG): container finished" podID="842251bd-238a-44ba-99fc-a356503f5d16" containerID="255845d3d1399076602401b1b6c6d6b0266b45fda7e7b34498aafae3e13d0822" exitCode=0 Mar 13 12:53:46.759036 master-0 kubenswrapper[28149]: I0313 12:53:46.758930 28149 generic.go:334] "Generic (PLEG): container finished" podID="676b054a-e76f-425d-a6ff-3f1bea8b523e" containerID="01758a85bcc236e4926066681b9aa0286d195458c1cddadcb630f791db70a4ff" exitCode=0 Mar 13 12:53:46.762891 master-0 
kubenswrapper[28149]: I0313 12:53:46.762853 28149 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-config-operator_kube-rbac-proxy-crio-master-0_e9add8df47182fc2eaf8cd78016ebe72/kube-rbac-proxy-crio/2.log"
Mar 13 12:53:46.764772 master-0 kubenswrapper[28149]: I0313 12:53:46.764690 28149 generic.go:334] "Generic (PLEG): container finished" podID="e9add8df47182fc2eaf8cd78016ebe72" containerID="9c887f2b6cfcfcc1f3ea186daee81cbe3bce3c155cfd4e9bbac88f712c489339" exitCode=1
Mar 13 12:53:46.764772 master-0 kubenswrapper[28149]: I0313 12:53:46.764737 28149 generic.go:334] "Generic (PLEG): container finished" podID="e9add8df47182fc2eaf8cd78016ebe72" containerID="d97124951202d97d2b090945a6d5c9c5add42850ba499052ed07d95631932324" exitCode=0
Mar 13 12:53:46.768523 master-0 kubenswrapper[28149]: I0313 12:53:46.768240 28149 generic.go:334] "Generic (PLEG): container finished" podID="e0ce4c51-2b9f-410f-93e5-9c2ff718dd71" containerID="dd56e097741179afb3ac4701cd79d5bbed72130ac8652ed79bed32f03419cdcf" exitCode=0
Mar 13 12:53:46.768523 master-0 kubenswrapper[28149]: I0313 12:53:46.768274 28149 generic.go:334] "Generic (PLEG): container finished" podID="e0ce4c51-2b9f-410f-93e5-9c2ff718dd71" containerID="c69c6a03bb52efddcf3f1318571834c27a8923b0db98ff09b8b80e6975cede5a" exitCode=0
Mar 13 12:53:46.795185 master-0 kubenswrapper[28149]: I0313 12:53:46.787426 28149 generic.go:334] "Generic (PLEG): container finished" podID="034aaf8e-95df-4171-bae4-e7abe58d15f7" containerID="6a3d66ed3fc6a1fb717a2b2977fa5c6231d315f07c1d90d364eea56e7a5d7c86" exitCode=0
Mar 13 12:53:46.795185 master-0 kubenswrapper[28149]: E0313 12:53:46.787509 28149 kubelet.go:2359] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
Mar 13 12:53:46.799598 master-0 kubenswrapper[28149]: I0313 12:53:46.799544 28149 generic.go:334] "Generic (PLEG): container finished" podID="4e279dcc-35e2-4503-babc-978ac208c150" containerID="6d3a11a8a9fe0d5dca51d9ed392850f6788ebc18ced1ae2a2591ab3c73418318" exitCode=0
Mar 13 12:53:46.806007 master-0 kubenswrapper[28149]: I0313 12:53:46.805959 28149 generic.go:334] "Generic (PLEG): container finished" podID="2f48243b-6b05-4efa-8420-58a4419622bf" containerID="98d5a0f3b11d1d941da412c009ee3c69f4e8ca0aa4267f8ef0b2168cee85df9d" exitCode=0
Mar 13 12:53:46.809346 master-0 kubenswrapper[28149]: I0313 12:53:46.809308 28149 generic.go:334] "Generic (PLEG): container finished" podID="741a6830aaef63e92194dd05d0b4da3d" containerID="1b406ee46971e490792a19b63a98c585c578548f473b720d5b7cd5c729eda7ae" exitCode=0
Mar 13 12:53:46.809346 master-0 kubenswrapper[28149]: I0313 12:53:46.809341 28149 generic.go:334] "Generic (PLEG): container finished" podID="741a6830aaef63e92194dd05d0b4da3d" containerID="ad6b6be249a4b35bc319cc0c698c9b937c8df08adaedc5da969d7d3c63154f97" exitCode=2
Mar 13 12:53:46.809449 master-0 kubenswrapper[28149]: I0313 12:53:46.809350 28149 generic.go:334] "Generic (PLEG): container finished" podID="741a6830aaef63e92194dd05d0b4da3d" containerID="52372f90f3e518110cf1e64b9ff43ecce31d8c11b62d3766c284ad38e957707b" exitCode=0
Mar 13 12:53:46.809449 master-0 kubenswrapper[28149]: I0313 12:53:46.809358 28149 generic.go:334] "Generic (PLEG): container finished" podID="741a6830aaef63e92194dd05d0b4da3d" containerID="45b191ee613240af89dae5f40970afaf7896448c3e2a3a3165bd85645b5d7288" exitCode=0
Mar 13 12:53:46.817163 master-0 kubenswrapper[28149]: I0313 12:53:46.817100 28149 generic.go:334] "Generic (PLEG): container finished" podID="76fe9cb6-ff3d-4bd9-a26d-dc8c9ce4a8aa" containerID="ac5bd7e9e9ade8981025308aaf718e0c330dc4308320062f39375e8cc91f1134" exitCode=0
Mar 13 12:53:46.818922 master-0 kubenswrapper[28149]: I0313 12:53:46.818887 28149 generic.go:334] "Generic (PLEG): container finished" podID="45925a5e-41ae-4c19-b586-3151c7677612" containerID="f4c4c4e5602a184f824d2367e7178507d9196d2b340284307f9055d03b447109" exitCode=0
Mar 13 12:53:46.820510 master-0 kubenswrapper[28149]: I0313 12:53:46.820477 28149 generic.go:334] "Generic (PLEG): container finished" podID="d7d67915-d31e-46dc-bb2e-1a6f689dd875" containerID="39a04612253a7a25dd9ded024c4c70cc0d933a3064b287c0c85c828db13d75e3" exitCode=0
Mar 13 12:53:46.823207 master-0 kubenswrapper[28149]: I0313 12:53:46.823179 28149 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress-operator_ingress-operator-677db989d6-ckl2j_2f79578c-bbfb-4968-893a-730deb4c01f9/ingress-operator/4.log"
Mar 13 12:53:46.823599 master-0 kubenswrapper[28149]: I0313 12:53:46.823516 28149 generic.go:334] "Generic (PLEG): container finished" podID="2f79578c-bbfb-4968-893a-730deb4c01f9" containerID="25a4898dab96b21910d2f9f74a6d0f38ac67afd0471454539094f0cdc130c4f5" exitCode=1
Mar 13 12:53:46.830791 master-0 kubenswrapper[28149]: I0313 12:53:46.830728 28149 generic.go:334] "Generic (PLEG): container finished" podID="152689b1-5875-4a9a-bb25-bee858523168" containerID="1e34a2d26492b3df232459c166da8fc0ebb8dbb2c47bdf38857a1fe49a541e66" exitCode=0
Mar 13 12:53:46.830791 master-0 kubenswrapper[28149]: I0313 12:53:46.830769 28149 generic.go:334] "Generic (PLEG): container finished" podID="152689b1-5875-4a9a-bb25-bee858523168" containerID="e1467141e26d577aa41ff200895deb27986a626bccdf77e649db90ad9f882528" exitCode=0
Mar 13 12:53:46.830791 master-0 kubenswrapper[28149]: I0313 12:53:46.830782 28149 generic.go:334] "Generic (PLEG): container finished" podID="152689b1-5875-4a9a-bb25-bee858523168" containerID="7c57d841a99a5e2cd1a42f48f3248a346104a0d155b92d640bd1a07ffd81b262" exitCode=0
Mar 13 12:53:46.830791 master-0 kubenswrapper[28149]: I0313 12:53:46.830793 28149 generic.go:334] "Generic (PLEG): container finished" podID="152689b1-5875-4a9a-bb25-bee858523168" containerID="ec83ba0b787947b6a285aac754b05fb294210ab326a2dc10a91b47f74ad8a542" exitCode=0
Mar 13 12:53:46.830791 master-0 kubenswrapper[28149]: I0313 12:53:46.830802 28149 generic.go:334] "Generic (PLEG): container finished" podID="152689b1-5875-4a9a-bb25-bee858523168" containerID="134471a7b38bb354ac04a0f22e311d7bea5264435a237eafabc1ded333b762d2" exitCode=0
Mar 13 12:53:46.831195 master-0 kubenswrapper[28149]: I0313 12:53:46.830809 28149 generic.go:334] "Generic (PLEG): container finished" podID="152689b1-5875-4a9a-bb25-bee858523168" containerID="ae6f8708327259b51cf004983ebe879d244aef1bf9515e029c5674f436c5c187" exitCode=0
Mar 13 12:53:46.832413 master-0 kubenswrapper[28149]: E0313 12:53:46.831889 28149 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Mar 13 12:53:46.833298 master-0 kubenswrapper[28149]: I0313 12:53:46.833267 28149 generic.go:334] "Generic (PLEG): container finished" podID="e25bef76-7020-4f86-8dee-a58ebed537d2" containerID="fefc52314f557d7c60fa165574ebac10c9ccc912b863ad03ae108b2ab17e6e90" exitCode=0
Mar 13 12:53:46.835469 master-0 kubenswrapper[28149]: I0313 12:53:46.835425 28149 generic.go:334] "Generic (PLEG): container finished" podID="d47a1118-c12f-4234-8c0f-1a2a47fa8a4f" containerID="f651f87ff531c82cf300379fcb01d86f8ea9306940ee3ed2300a4c0ed8856e65" exitCode=0
Mar 13 12:53:46.846020 master-0 kubenswrapper[28149]: I0313 12:53:46.845968 28149 generic.go:334] "Generic (PLEG): container finished" podID="bcf05594-4c10-4b54-a47c-d55e323f1f87" containerID="f4a916875b5dd7f287df508905d5d99ad3dbd91629a2c95a805f4ab66aa7996e" exitCode=0
Mar 13 12:53:46.852166 master-0 kubenswrapper[28149]: I0313 12:53:46.852097 28149 generic.go:334] "Generic (PLEG): container finished" podID="77ef7e49-eb85-4f5e-94d3-a6a8619a6243" containerID="3add725e66228351c75651bb4a7357a39de488d2f8d517621841a317712aba3a" exitCode=0
Mar 13 12:53:46.855246 master-0 kubenswrapper[28149]: I0313 12:53:46.855219 28149 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-network-node-identity_network-node-identity-qg8q5_1f43b4e7-5cd1-46d2-a02e-0d846b2e5182/approver/1.log"
Mar 13 12:53:46.855914 master-0 kubenswrapper[28149]: I0313 12:53:46.855863 28149 generic.go:334] "Generic (PLEG): container finished" podID="1f43b4e7-5cd1-46d2-a02e-0d846b2e5182" containerID="b91c079b382f32d02d029d00309dfc5b4425807a136542a6d176792b503d743b" exitCode=1
Mar 13 12:53:46.861243 master-0 kubenswrapper[28149]: I0313 12:53:46.861211 28149 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_installer-1-master-0_3828446d-a3e3-412f-a0e7-7347b5de523a/installer/0.log"
Mar 13 12:53:46.861409 master-0 kubenswrapper[28149]: I0313 12:53:46.861257 28149 generic.go:334] "Generic (PLEG): container finished" podID="3828446d-a3e3-412f-a0e7-7347b5de523a" containerID="aa6c714b8707274c998afed0944ecc8600d7bc24a6b08415b6cbef112b436b47" exitCode=1
Mar 13 12:53:46.863671 master-0 kubenswrapper[28149]: I0313 12:53:46.863649 28149 generic.go:334] "Generic (PLEG): container finished" podID="1cf388b6-e4a7-41db-a350-1b503214efd3" containerID="ad67ae9abd7e29e1c8108cc236bfa4a285963e407827b35369107a92e21b73f3" exitCode=0
Mar 13 12:53:46.863671 master-0 kubenswrapper[28149]: I0313 12:53:46.863667 28149 generic.go:334] "Generic (PLEG): container finished" podID="1cf388b6-e4a7-41db-a350-1b503214efd3" containerID="2588acad0fdaa8971f9072ba2c71ab6cb4dcef118394ee3f0eafb7916282bbdf" exitCode=0
Mar 13 12:53:46.866214 master-0 kubenswrapper[28149]: I0313 12:53:46.866186 28149 generic.go:334] "Generic (PLEG): container finished" podID="32fe77f9-082d-491c-b3d0-9c10feaf4a8e" containerID="9868ebc7add2931fb8b9f0e690fb3b5b7d50ca28093f5dd4662eaa27a2ef163c" exitCode=0
Mar 13 12:53:46.866214 master-0 kubenswrapper[28149]: I0313 12:53:46.866213 28149 generic.go:334] "Generic (PLEG): container finished" podID="32fe77f9-082d-491c-b3d0-9c10feaf4a8e" containerID="1ba7fe014f4219ce7bf848e51ed5c249f92fdeb9d65b7c7dc9ad928634e63414" exitCode=0
Mar 13 12:53:46.867711 master-0 kubenswrapper[28149]: I0313 12:53:46.867693 28149 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-6686554ddc-btz8w_747659a6-4a1e-43ed-bb8e-36da6e63b5a1/control-plane-machine-set-operator/0.log"
Mar 13 12:53:46.867774 master-0 kubenswrapper[28149]: I0313 12:53:46.867723 28149 generic.go:334] "Generic (PLEG): container finished" podID="747659a6-4a1e-43ed-bb8e-36da6e63b5a1" containerID="fa510582aea2f9e7beb06130b537cab1524760c3e6ed427ab1be5150bea793b0" exitCode=1
Mar 13 12:53:46.873685 master-0 kubenswrapper[28149]: I0313 12:53:46.873633 28149 generic.go:334] "Generic (PLEG): container finished" podID="c0f3e81c-f61d-430a-98e8-82e3b283fc73" containerID="4db2bc5c40e8683ca741e5bf890d717d8c9fa9c48b7ac41671352e56a94462da" exitCode=0
Mar 13 12:53:46.876679 master-0 kubenswrapper[28149]: I0313 12:53:46.876636 28149 generic.go:334] "Generic (PLEG): container finished" podID="f5775266-5e58-44ed-81cb-dfe3faf38add" containerID="e24974d7562637f30c354afb27ef4179bd234226ab89ce7552570f69e7ee23e6" exitCode=0
Mar 13 12:53:46.878788 master-0 kubenswrapper[28149]: I0313 12:53:46.878762 28149 generic.go:334] "Generic (PLEG): container finished" podID="089cfabc-9d3d-4260-bb16-8b5eaf73b3fa" containerID="13abf0479b13298ab465c691e26a5f91f167723c1dfd38a5ddfba43b7407cce4" exitCode=0
Mar 13 12:53:46.885389 master-0 kubenswrapper[28149]: I0313 12:53:46.885314 28149 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-catalogd_catalogd-controller-manager-7f8b8b6f4c-8fjzg_00ebdf06-1f44-40cd-87e5-54195188b6d4/manager/0.log"
Mar 13 12:53:46.885981 master-0 kubenswrapper[28149]: I0313 12:53:46.885911 28149 generic.go:334] "Generic (PLEG): container finished" podID="00ebdf06-1f44-40cd-87e5-54195188b6d4" containerID="d48ca44a10dd4d84fe59c37cb0e8c494fdafd60a7b5212ea552414db0868ae46" exitCode=1
Mar 13 12:53:46.891365 master-0 kubenswrapper[28149]: I0313 12:53:46.891324 28149 generic.go:334] "Generic (PLEG): container finished" podID="29c709c82970b529e7b9b895aa92ef05" containerID="7a3b4c6b1768e8d5ad64ec3d49b0ef5a758c7b08b68da0b9f9604043050a5df9" exitCode=0
Mar 13 12:53:46.891365 master-0 kubenswrapper[28149]: I0313 12:53:46.891356 28149 generic.go:334] "Generic (PLEG): container finished" podID="29c709c82970b529e7b9b895aa92ef05" containerID="002602ae7257927c6d84d79f7abb72d049dbc2180d8e5879043fea377ec86806" exitCode=0
Mar 13 12:53:46.891365 master-0 kubenswrapper[28149]: I0313 12:53:46.891367 28149 generic.go:334] "Generic (PLEG): container finished" podID="29c709c82970b529e7b9b895aa92ef05" containerID="191d6b42f790fa129a37efd43f7471d2dd1f86d99afc82c180f797e065b49aad" exitCode=0
Mar 13 12:53:46.893371 master-0 kubenswrapper[28149]: I0313 12:53:46.893327 28149 generic.go:334] "Generic (PLEG): container finished" podID="00d2e134-62bb-4181-aa0a-22c9b9755b10" containerID="1b3f3325d5e04c56ba72e3fc00c285b339f3ca147fcedd9041b736950ddeb5fa" exitCode=0
Mar 13 12:53:46.895834 master-0 kubenswrapper[28149]: I0313 12:53:46.895806 28149 generic.go:334] "Generic (PLEG): container finished" podID="8c62b15f-001a-4b64-b85f-348aefde5d1b" containerID="0c1cf11fba8779c80d0da5e273c773daa5eb397179aa4efedaa5ea11988b99ed" exitCode=0
Mar 13 12:53:46.899860 master-0 kubenswrapper[28149]: I0313 12:53:46.899823 28149 generic.go:334] "Generic (PLEG): container finished" podID="a454234a-6c8e-4916-81e8-c9e66cec9d31" containerID="f12fef74127c1c2b2f8ceb210e754cc92619ab36c1f145fe9d244f8d84cfb88c" exitCode=0
Mar 13 12:53:46.902891 master-0 kubenswrapper[28149]: I0313 12:53:46.902864 28149 generic.go:334] "Generic (PLEG): container finished" podID="d11f8baa-6e8e-4ac0-9b23-1c44efd0ab2a" containerID="dc8ec1aed61fa783f1383f45771cb4136de885100e0460aa1df476073926f5af" exitCode=0
Mar 13 12:53:46.907172 master-0 kubenswrapper[28149]: I0313 12:53:46.907123 28149 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-machine-approver_machine-approver-754bdc9f9d-cwl2p_b12a6f33-70df-4832-ac3b-0d2b94125fbf/machine-approver-controller/0.log"
Mar 13 12:53:46.908320 master-0 kubenswrapper[28149]: I0313 12:53:46.908274 28149 generic.go:334] "Generic (PLEG): container finished" podID="b12a6f33-70df-4832-ac3b-0d2b94125fbf" containerID="bf350ea0de070f0fd26919325b63ec00154a2596f691d915b23dc9183ce79b89" exitCode=255
Mar 13 12:53:46.911675 master-0 kubenswrapper[28149]: I0313 12:53:46.911645 28149 generic.go:334] "Generic (PLEG): container finished" podID="5ae41cff-0949-47f8-aae9-ae133191476d" containerID="2a4481a18e7aed734ae4a2d67eeeb008d6aeba24bc7223a49b0d6a3791cd0e5c" exitCode=0
Mar 13 12:53:46.936285 master-0 kubenswrapper[28149]: I0313 12:53:46.922875 28149 generic.go:334] "Generic (PLEG): container finished" podID="1453f6461bf5d599ad65a4656343ee91" containerID="65d8ab343a6c8c9cdae0b29379d80db7bbdfeeeb082bcdc9935f85db242121e8" exitCode=0
Mar 13 12:53:46.936285 master-0 kubenswrapper[28149]: I0313 12:53:46.925774 28149 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-config-operator_openshift-config-operator-64488f9d78-t8fb4_f0803181-4e37-43fa-8ddc-9c76d3f61817/openshift-config-operator/3.log"
Mar 13 12:53:46.936285 master-0 kubenswrapper[28149]: I0313 12:53:46.926077 28149 generic.go:334] "Generic (PLEG): container finished" podID="f0803181-4e37-43fa-8ddc-9c76d3f61817" containerID="a6263b46ef0468012ae2a42f311e9cac52e2e484751651c3b1983eca4c709f1f" exitCode=255
Mar 13 12:53:46.936285 master-0 kubenswrapper[28149]: I0313 12:53:46.926094 28149 generic.go:334] "Generic (PLEG): container finished" podID="f0803181-4e37-43fa-8ddc-9c76d3f61817" containerID="876f570e7bca1677304688ecd8e1a442c714ddc31318f4b0812aca0943ba9d82" exitCode=0
Mar 13 12:53:46.941296 master-0 kubenswrapper[28149]: E0313 12:53:46.937171 28149 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Mar 13 12:53:46.947262 master-0 kubenswrapper[28149]: I0313 12:53:46.947229 28149 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cloud-controller-manager-operator_cluster-cloud-controller-manager-operator-7c8df9b496-x2wlg_00d8a21b-701c-4334-9dda-34c28b417f42/config-sync-controllers/0.log"
Mar 13 12:53:46.947995 master-0 kubenswrapper[28149]: I0313 12:53:46.947968 28149 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cloud-controller-manager-operator_cluster-cloud-controller-manager-operator-7c8df9b496-x2wlg_00d8a21b-701c-4334-9dda-34c28b417f42/cluster-cloud-controller-manager/0.log"
Mar 13 12:53:46.948907 master-0 kubenswrapper[28149]: I0313 12:53:46.948854 28149 generic.go:334] "Generic (PLEG): container finished" podID="00d8a21b-701c-4334-9dda-34c28b417f42" containerID="f7bdd6f14cd7d876f03cc0e565ef27ecd2cd6f1309a345b7b4c1e4b2f6e38eb4" exitCode=1
Mar 13 12:53:46.948907 master-0 kubenswrapper[28149]: I0313 12:53:46.948892 28149 generic.go:334] "Generic (PLEG): container finished" podID="00d8a21b-701c-4334-9dda-34c28b417f42" containerID="fb3e994e087a482374a8017dea545f1ddec09a849b0d0cb7b635b7b86e084f9a" exitCode=1
Mar 13 12:53:46.952430 master-0 kubenswrapper[28149]: I0313 12:53:46.952378 28149 generic.go:334] "Generic (PLEG): container finished" podID="72ba330e-35ca-4d05-8641-a880bf30c0e7" containerID="1af7a53388bbd243cf9640d283230185be1782a2bdb43e5850dd6d341044a303" exitCode=0
Mar 13 12:53:46.954960 master-0 kubenswrapper[28149]: I0313 12:53:46.954936 28149 generic.go:334] "Generic (PLEG): container finished" podID="c4477be6-bcff-407a-8033-b005e19bf5d6" containerID="19df84242808542fbc20d31e0f31a46482d39271b53107a4006c786dc0871be1" exitCode=0
Mar 13 12:53:46.957848 master-0 kubenswrapper[28149]: I0313 12:53:46.957818 28149 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_cluster-autoscaler-operator-69576476f7-sqndx_d44112d1-b2a5-4b8d-b74d-1e91638508d5/cluster-autoscaler-operator/0.log"
Mar 13 12:53:46.958290 master-0 kubenswrapper[28149]: I0313 12:53:46.958239 28149 generic.go:334] "Generic (PLEG): container finished" podID="d44112d1-b2a5-4b8d-b74d-1e91638508d5" containerID="aeb8cd6b223367e97ad7707f8724ad7c61808803218a16a895fbd3c7f77d6e4e" exitCode=255
Mar 13 12:53:46.964968 master-0 kubenswrapper[28149]: I0313 12:53:46.964854 28149 generic.go:334] "Generic (PLEG): container finished" podID="1670a1d9-46a3-4d25-9dd1-43a08e2759c7" containerID="a286539a5f3b6d8dbf769c0d114494a7685625beed846cc4c4f2272b91586aab" exitCode=0
Mar 13 12:53:46.968825 master-0 kubenswrapper[28149]: I0313 12:53:46.968762 28149 generic.go:334] "Generic (PLEG): container finished" podID="887d261f-d07f-4ef0-a230-6568f47acf4d" containerID="ac30e49a3ae0e3ef59ed9c3728ae1c26bf004ec3b0fe4cf00ec315598faa9cf4" exitCode=0
Mar 13 12:53:46.968825 master-0 kubenswrapper[28149]: I0313 12:53:46.968809 28149 generic.go:334] "Generic (PLEG): container finished" podID="887d261f-d07f-4ef0-a230-6568f47acf4d" containerID="5cd273040496c4efd233900f344ee1edf468c14a89e07cdd24f71287c4f355e0" exitCode=0
Mar 13 12:53:46.968825 master-0 kubenswrapper[28149]: I0313 12:53:46.968820 28149 generic.go:334] "Generic (PLEG): container finished" podID="887d261f-d07f-4ef0-a230-6568f47acf4d" containerID="531c8b5824f7a1f7f686e430cb7bccc435fffb1f3a305f83070f80c2e1535620" exitCode=0
Mar 13 12:53:46.971020 master-0 kubenswrapper[28149]: I0313 12:53:46.970939 28149 generic.go:334] "Generic (PLEG): container finished" podID="0da84bb7-e936-49a0-96b5-614a1305d6a4" containerID="e0b901efadc576656657aa4dea0a09b5c987c11cdc88e24aaeef0848d60cd3b7" exitCode=0
Mar 13 12:53:46.979538 master-0 kubenswrapper[28149]: I0313 12:53:46.979501 28149 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-84bf6db4f9-mjxcz_d5f63b6b-990a-444b-a954-d718036f2f6c/machine-api-operator/0.log"
Mar 13 12:53:46.979941 master-0 kubenswrapper[28149]: I0313 12:53:46.979897 28149 generic.go:334] "Generic (PLEG): container finished" podID="d5f63b6b-990a-444b-a954-d718036f2f6c" containerID="a1bfd1c6ad70388a89e3729992c8e63cc9ebf64d39d05c00f30ae59118fb80de" exitCode=255
Mar 13 12:53:46.982785 master-0 kubenswrapper[28149]: I0313 12:53:46.982754 28149 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operator-lifecycle-manager_olm-operator-d64cfc9db-rfqb9_d5a19b80-d488-46d3-a4a8-0b80361077e1/olm-operator/0.log"
Mar 13 12:53:46.982956 master-0 kubenswrapper[28149]: I0313 12:53:46.982801 28149 generic.go:334] "Generic (PLEG): container finished" podID="d5a19b80-d488-46d3-a4a8-0b80361077e1" containerID="47e1707cfebdcd64e29e4d18bf48d4efe18567479faf12290a7bcd51f3b4d7e2" exitCode=1
Mar 13 12:53:46.985769 master-0 kubenswrapper[28149]: I0313 12:53:46.985606 28149 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-node-tuning-operator_cluster-node-tuning-operator-66c7586884-cz8pc_3020d236-03e0-4916-97dd-f1085632ca43/cluster-node-tuning-operator/0.log"
Mar 13 12:53:46.985769 master-0 kubenswrapper[28149]: I0313 12:53:46.985644 28149 generic.go:334] "Generic (PLEG): container finished" podID="3020d236-03e0-4916-97dd-f1085632ca43" containerID="89639adb88716cbb87bdb25b40c5ec231bc4f7820ddcadae78f527661f5a5581" exitCode=1
Mar 13 12:53:46.987640 master-0 kubenswrapper[28149]: E0313 12:53:46.987617 28149 kubelet.go:2359] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
Mar 13 12:53:46.988445 master-0 kubenswrapper[28149]: I0313 12:53:46.988416 28149 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operator-lifecycle-manager_catalog-operator-7d9c49f57b-tlnkd_10944f9c-8ce9-44e6-9c36-a0ea19d8cae3/catalog-operator/0.log"
Mar 13 12:53:46.988523 master-0 kubenswrapper[28149]: I0313 12:53:46.988495 28149 generic.go:334] "Generic (PLEG): container finished" podID="10944f9c-8ce9-44e6-9c36-a0ea19d8cae3" containerID="9d05f0d44d2a573355b6b4eea02a702f641e31e420669a5e155b6a442793e880" exitCode=1
Mar 13 12:53:46.993733 master-0 kubenswrapper[28149]: I0313 12:53:46.993689 28149 generic.go:334] "Generic (PLEG): container finished" podID="d6226325-c4d9-497e-8d19-a71adc66c5ac" containerID="cf1959de89eea014cb32ef2948333cb70b4954efbb9bc7376a990fcbbdb918ce" exitCode=0
Mar 13 12:53:46.997549 master-0 kubenswrapper[28149]: I0313 12:53:46.997494 28149 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operator-controller_operator-controller-controller-manager-6598bfb6c4-dv8rj_915aabfe-1071-4bfc-b291-424304dfe7d8/manager/0.log"
Mar 13 12:53:46.997713 master-0 kubenswrapper[28149]: I0313 12:53:46.997568 28149 generic.go:334] "Generic (PLEG): container finished" podID="915aabfe-1071-4bfc-b291-424304dfe7d8" containerID="ac8d5b7e2908dcba283cf9e9752ebfd8422326f0c9542918621c9dc214262a7d" exitCode=1
Mar 13 12:53:46.999735 master-0 kubenswrapper[28149]: I0313 12:53:46.999704 28149 generic.go:334] "Generic (PLEG): container finished" podID="ec5ec2e2-f7b3-43a1-87da-fbbe0ee5b118" containerID="69f6736e401004be8e5844a5f9b7891b28a4228a05eb13fc36ff3b64b8740138" exitCode=0
Mar 13 12:53:47.004839 master-0 kubenswrapper[28149]: I0313 12:53:47.004795 28149 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_cluster-baremetal-operator-5cdb4c5598-l6jp5_317af639-269e-4163-8e24-fcea468b9352/cluster-baremetal-operator/2.log"
Mar 13 12:53:47.005185 master-0 kubenswrapper[28149]: I0313 12:53:47.005148 28149 generic.go:334] "Generic (PLEG): container finished" podID="317af639-269e-4163-8e24-fcea468b9352" containerID="15cedcb1b8553ec2f730223913ef265bc163bb67b8745c32aa558c39edcca0ac" exitCode=1
Mar 13 12:53:47.009778 master-0 kubenswrapper[28149]: I0313 12:53:47.009718 28149 generic.go:334] "Generic (PLEG): container finished" podID="48512e02022680c9d90092634f0fc146" containerID="ba0afcdaf159bdee5cad84caecac2caf230f2beacc241756ab48e77be0ee5ebb" exitCode=0
Mar 13 12:53:47.011778 master-0 kubenswrapper[28149]: I0313 12:53:47.011689 28149 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler_installer-3-master-0_bfabb495-1707-4c3d-b00e-2f3b2976fb92/installer/0.log"
Mar 13 12:53:47.011778 master-0 kubenswrapper[28149]: I0313 12:53:47.011738 28149 generic.go:334] "Generic (PLEG): container finished" podID="bfabb495-1707-4c3d-b00e-2f3b2976fb92" containerID="d8cf37e4c8a527d04eff5203f40779f993e328715e0f8f8ef7b2ff90bad966cf" exitCode=1
Mar 13 12:53:47.017486 master-0 kubenswrapper[28149]: I0313 12:53:47.017436 28149 generic.go:334] "Generic (PLEG): container finished" podID="4f9e6618-62b5-4181-b545-211461811140" containerID="da77080b839f8955665824806fb0d5eb5b65bd0dc7a075af96258d22af1ed733" exitCode=0
Mar 13 12:53:47.017486 master-0 kubenswrapper[28149]: I0313 12:53:47.017464 28149 generic.go:334] "Generic (PLEG): container finished" podID="4f9e6618-62b5-4181-b545-211461811140" containerID="3e929dd0246b5ba2e1233ca2d7cf4594e87b4dbf9604555efeef3c1d42856882" exitCode=0
Mar 13 12:53:47.019682 master-0 kubenswrapper[28149]: I0313 12:53:47.019653 28149 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operator-lifecycle-manager_package-server-manager-854648ff6d-669qk_3d653e1a-5903-4a02-9357-df145f028c0d/package-server-manager/0.log"
Mar 13 12:53:47.020090 master-0 kubenswrapper[28149]: I0313 12:53:47.020063 28149 generic.go:334] "Generic (PLEG): container finished" podID="3d653e1a-5903-4a02-9357-df145f028c0d" containerID="baf23d87752ea57aa0879a0f3cabb3d54da65ab6c1d69c34a044b8dc1883ed70" exitCode=1
Mar 13 12:53:47.028078 master-0 kubenswrapper[28149]: I0313 12:53:47.028004 28149 generic.go:334] "Generic (PLEG): container finished" podID="4dd0fc2f-f2ee-4447-a747-04a178288cf0" containerID="bc5551e07868e81855eed958b9e358bd0715e00cec588a7af2b93942471edb38" exitCode=0
Mar 13 12:53:47.034739 master-0 kubenswrapper[28149]: I0313 12:53:47.034693 28149 generic.go:334] "Generic (PLEG): container finished" podID="5f77c8e18b751d90bc0dfe2d4e304050" containerID="838f1203bfc2909f5be268d039e5903c4aada457bcd573b0395f4215bfc0c446" exitCode=0
Mar 13 12:53:47.034739 master-0 kubenswrapper[28149]: I0313 12:53:47.034711 28149 generic.go:334] "Generic (PLEG): container finished" podID="5f77c8e18b751d90bc0dfe2d4e304050" containerID="f3be2171b1690f9bafcc889e55d83ff1a441baaed77d90117edebfc3db8ff2b9" exitCode=0
Mar 13 12:53:47.034739 master-0 kubenswrapper[28149]: I0313 12:53:47.034717 28149 generic.go:334] "Generic (PLEG): container finished" podID="5f77c8e18b751d90bc0dfe2d4e304050" containerID="a3279720d4c802c349d222cf1b96260384211d9adc25c84b50972505c95ca211" exitCode=0
Mar 13 12:53:47.037285 master-0 kubenswrapper[28149]: E0313 12:53:47.037264 28149 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Mar 13 12:53:47.040461 master-0 kubenswrapper[28149]: I0313 12:53:47.040381 28149 generic.go:334] "Generic (PLEG): container finished" podID="e01de416-3de5-4357-a84e-f8eabb15a500" containerID="36c8eace8178c56031aee9f74c55f1e387a62f97359664e0fd2729176c22f3cb" exitCode=0
Mar 13 12:53:47.047580 master-0 kubenswrapper[28149]: I0313 12:53:47.047543 28149 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler_installer-4-master-0_8f8543a5-1639-4140-a18d-8b0c96821bae/installer/0.log"
Mar 13 12:53:47.047742 master-0 kubenswrapper[28149]: I0313 12:53:47.047598 28149 generic.go:334] "Generic (PLEG): container finished" podID="8f8543a5-1639-4140-a18d-8b0c96821bae" containerID="a813a663a398e05e616fe550c674646a6498ff5442d82cbd7adbf48594546e77" exitCode=1
Mar 13 12:53:47.050063 master-0 kubenswrapper[28149]: I0313 12:53:47.050039 28149 generic.go:334] "Generic (PLEG): container finished" podID="185a10f7-2a4b-4171-b10d-4614cb8671bd" containerID="5cd4fe9ce3ca6e40b66f822008735eb91b0372a4e062d161fec91212083d1dbe" exitCode=0
Mar 13 12:53:47.052382 master-0 kubenswrapper[28149]: I0313 12:53:47.052336 28149 generic.go:334] "Generic (PLEG): container finished" podID="03479326-c13f-40bb-9ed2-580bb05917a7" containerID="69ec82e15f99ac8946fd6f0ae65cca8b0db2d9d210589323567d60bcf1d59e01" exitCode=0
Mar 13 12:53:47.137378 master-0 kubenswrapper[28149]: E0313 12:53:47.137340 28149 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Mar 13 12:53:47.189818 master-0 kubenswrapper[28149]: I0313 12:53:47.189779 28149 manager.go:324] Recovery completed
Mar 13 12:53:47.237989 master-0 kubenswrapper[28149]: E0313 12:53:47.237870 28149 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Mar 13 12:53:47.290493 master-0 kubenswrapper[28149]: I0313 12:53:47.290448 28149 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 13 12:53:47.292830 master-0 kubenswrapper[28149]: I0313 12:53:47.292790 28149 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Mar 13 12:53:47.292830 master-0 kubenswrapper[28149]: I0313 12:53:47.292822 28149 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Mar 13 12:53:47.292830 master-0 kubenswrapper[28149]: I0313 12:53:47.292830 28149 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Mar 13 12:53:47.299009 master-0 kubenswrapper[28149]: I0313 12:53:47.298966 28149 cpu_manager.go:225] "Starting CPU manager" policy="none"
Mar 13 12:53:47.299009 master-0 kubenswrapper[28149]: I0313 12:53:47.298987 28149 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s"
Mar 13 12:53:47.299009 master-0 kubenswrapper[28149]: I0313 12:53:47.299008 28149 state_mem.go:36] "Initialized new in-memory state store"
Mar 13 12:53:47.299360 master-0 kubenswrapper[28149]: I0313 12:53:47.299330 28149 state_mem.go:88] "Updated default CPUSet" cpuSet=""
Mar 13 12:53:47.299360 master-0 kubenswrapper[28149]: I0313 12:53:47.299348 28149 state_mem.go:96] "Updated CPUSet assignments" assignments={}
Mar 13 12:53:47.299455 master-0 kubenswrapper[28149]: I0313 12:53:47.299368 28149 state_checkpoint.go:136] "State checkpoint: restored state from checkpoint"
Mar 13 12:53:47.299455 master-0 kubenswrapper[28149]: I0313 12:53:47.299375 28149 state_checkpoint.go:137] "State checkpoint: defaultCPUSet" defaultCpuSet=""
Mar 13 12:53:47.299455 master-0 kubenswrapper[28149]: I0313 12:53:47.299381 28149 policy_none.go:49] "None policy: Start"
Mar 13 12:53:47.312953 master-0 kubenswrapper[28149]: I0313 12:53:47.312901 28149 memory_manager.go:170] "Starting memorymanager" policy="None"
Mar 13 12:53:47.313128 master-0 kubenswrapper[28149]: I0313 12:53:47.312969 28149 state_mem.go:35] "Initializing new in-memory state store"
Mar 13 12:53:47.313240 master-0 kubenswrapper[28149]: I0313 12:53:47.313227 28149 state_mem.go:75] "Updated machine memory state"
Mar 13 12:53:47.313240 master-0 kubenswrapper[28149]: I0313 12:53:47.313240 28149 state_checkpoint.go:82] "State checkpoint: restored state from checkpoint"
Mar 13 12:53:47.324673 master-0 kubenswrapper[28149]: I0313 12:53:47.324625 28149 manager.go:334] "Starting Device Plugin manager"
Mar 13 12:53:47.324913 master-0 kubenswrapper[28149]: I0313 12:53:47.324693 28149 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Mar 13 12:53:47.324913 master-0 kubenswrapper[28149]: I0313 12:53:47.324707 28149 server.go:79] "Starting device plugin registration server"
Mar 13 12:53:47.325230 master-0 kubenswrapper[28149]: I0313 12:53:47.325200 28149 eviction_manager.go:189] "Eviction manager: starting control loop"
Mar 13 12:53:47.325311 master-0 kubenswrapper[28149]: I0313 12:53:47.325221 28149 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Mar 13 12:53:47.325575 master-0 kubenswrapper[28149]: I0313 12:53:47.325486 28149 plugin_watcher.go:51] "Plugin Watcher Start" path="/var/lib/kubelet/plugins_registry"
Mar 13 12:53:47.331182 master-0 kubenswrapper[28149]: I0313 12:53:47.325766 28149 plugin_manager.go:116] "The desired_state_of_world populator (plugin watcher) starts"
Mar 13 12:53:47.331182 master-0 kubenswrapper[28149]: I0313 12:53:47.325790 28149 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Mar 13 12:53:47.332552 master-0 kubenswrapper[28149]: E0313 12:53:47.332254 28149 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"master-0\" not found"
Mar 13 12:53:47.392112 master-0 kubenswrapper[28149]: I0313 12:53:47.392002 28149 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-master-0","openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0","openshift-kube-controller-manager/kube-controller-manager-master-0","openshift-kube-scheduler/openshift-kube-scheduler-master-0","openshift-machine-config-operator/kube-rbac-proxy-crio-master-0","openshift-etcd/etcd-master-0"]
Mar 13 12:53:47.392431 master-0 kubenswrapper[28149]: I0313 12:53:47.392215 28149 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 13 12:53:47.395114 master-0 kubenswrapper[28149]: I0313 12:53:47.395075 28149 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Mar 13 12:53:47.395223 master-0 kubenswrapper[28149]: I0313 12:53:47.395121 28149 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Mar 13 12:53:47.395223 master-0 kubenswrapper[28149]: I0313 12:53:47.395158 28149 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Mar 13 12:53:47.395300 master-0 kubenswrapper[28149]: I0313 12:53:47.395283 28149 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 13 12:53:47.395507 master-0 kubenswrapper[28149]: I0313 12:53:47.395460 28149 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 13 12:53:47.404212 master-0 kubenswrapper[28149]: I0313 12:53:47.402171 28149 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Mar 13 12:53:47.404212 master-0 kubenswrapper[28149]: I0313 12:53:47.402269 28149 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Mar 13 12:53:47.404212 master-0 kubenswrapper[28149]: I0313 12:53:47.402288 28149 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Mar 13 12:53:47.404212 master-0 kubenswrapper[28149]: I0313 12:53:47.403094 28149 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Mar 13 12:53:47.404212 master-0 kubenswrapper[28149]: I0313 12:53:47.403160 28149 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Mar 13 12:53:47.404212 master-0 kubenswrapper[28149]: I0313 12:53:47.403180 28149 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Mar 13 12:53:47.404212 master-0 kubenswrapper[28149]: I0313 12:53:47.403368 28149 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 13 12:53:47.404612 master-0 kubenswrapper[28149]: I0313 12:53:47.404351 28149 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 13 12:53:47.410640 master-0 kubenswrapper[28149]: I0313 12:53:47.410581 28149 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Mar 13 12:53:47.410640 master-0 kubenswrapper[28149]: I0313 12:53:47.410619 28149 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Mar 13 12:53:47.410640 master-0 kubenswrapper[28149]: I0313 12:53:47.410627 28149 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Mar 13 12:53:47.413023 master-0 kubenswrapper[28149]: I0313 12:53:47.412638 28149 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Mar 13 12:53:47.413023 master-0 kubenswrapper[28149]: I0313 12:53:47.412689 28149 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Mar 13 12:53:47.413023 master-0 kubenswrapper[28149]: I0313 12:53:47.412700 28149 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Mar 13 12:53:47.413023 master-0 kubenswrapper[28149]: I0313 12:53:47.412855 28149 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 13 12:53:47.413023 master-0 kubenswrapper[28149]: I0313 12:53:47.412982 28149 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 13 12:53:47.429287 master-0 kubenswrapper[28149]: I0313 12:53:47.427898 28149 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 13 12:53:47.431571 master-0 kubenswrapper[28149]: I0313 12:53:47.431071 28149 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Mar 13 12:53:47.431571 master-0 kubenswrapper[28149]: I0313 12:53:47.431115 28149 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Mar 13 12:53:47.431571 master-0 kubenswrapper[28149]: I0313 12:53:47.431133 28149 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Mar 13 12:53:47.431571 master-0 kubenswrapper[28149]: I0313 12:53:47.431280 28149 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Mar 13 12:53:47.431571 master-0 kubenswrapper[28149]: I0313 12:53:47.431328 28149 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Mar 13 12:53:47.431571 master-0 kubenswrapper[28149]: I0313 12:53:47.431342 28149 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Mar 13 12:53:47.431571 master-0 kubenswrapper[28149]: I0313 12:53:47.431477 28149 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 13 12:53:47.431571 master-0 kubenswrapper[28149]: I0313 12:53:47.431522 28149 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 13 12:53:47.444187 master-0 kubenswrapper[28149]: I0313 12:53:47.442394 28149 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Mar 13 12:53:47.444187 master-0 kubenswrapper[28149]: I0313 12:53:47.442438 28149 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Mar 13 12:53:47.444187 master-0 kubenswrapper[28149]: I0313 12:53:47.442450 28149 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Mar 13 12:53:47.444187 master-0 kubenswrapper[28149]: I0313 12:53:47.442596 28149 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Mar 13 12:53:47.444187 master-0 kubenswrapper[28149]: I0313 12:53:47.442618 28149 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Mar 13 12:53:47.444187 master-0 kubenswrapper[28149]: I0313 12:53:47.442630 28149 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Mar 13 12:53:47.444187 master-0 kubenswrapper[28149]: I0313 12:53:47.442694 28149 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 13 12:53:47.444187 master-0 kubenswrapper[28149]: I0313 12:53:47.442600 28149 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 13 12:53:47.444187 master-0 kubenswrapper[28149]: I0313 12:53:47.443448 28149 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Mar 13 12:53:47.444187 master-0 kubenswrapper[28149]: I0313 12:53:47.443480 28149 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Mar 13 12:53:47.444187 master-0 kubenswrapper[28149]: I0313 12:53:47.443489 28149 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Mar 13 12:53:47.444187 master-0 kubenswrapper[28149]: I0313 12:53:47.443510 28149 kubelet_node_status.go:76] "Attempting to register node" node="master-0"
Mar 13 12:53:47.446240 master-0 kubenswrapper[28149]: I0313 12:53:47.445427 28149 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Mar 13 12:53:47.446240 master-0 kubenswrapper[28149]: I0313 12:53:47.445457 28149 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Mar 13 12:53:47.446240 master-0 kubenswrapper[28149]: I0313 12:53:47.445467 28149 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Mar 13 12:53:47.447221 master-0 kubenswrapper[28149]: I0313 12:53:47.446405 28149 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Mar 13 12:53:47.447221 master-0 kubenswrapper[28149]: I0313
12:53:47.446425 28149 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Mar 13 12:53:47.447221 master-0 kubenswrapper[28149]: I0313 12:53:47.446438 28149 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Mar 13 12:53:47.447221 master-0 kubenswrapper[28149]: I0313 12:53:47.446578 28149 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9b912cc2fb7f1246b6e0fb7957cb5c167f818087772406214ca1bd3f180298fb" Mar 13 12:53:47.447221 master-0 kubenswrapper[28149]: I0313 12:53:47.446599 28149 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8e1088e0df5495c11b184ce6c8248adb0411207dd090af8621e1253e288aee81" Mar 13 12:53:47.447221 master-0 kubenswrapper[28149]: I0313 12:53:47.446625 28149 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cbb3fd1b1972cab7aabe9a34a316fc6619100acdef1d341abf069e3ac4eab0ff" Mar 13 12:53:47.447221 master-0 kubenswrapper[28149]: I0313 12:53:47.446637 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" event={"ID":"3a18cac8a90d6913a6a0391d805cddc9","Type":"ContainerStarted","Data":"4ea900f27c90a68c3b8cd2345d580f77e20ef846c8a749fe70f5724228e5cc04"} Mar 13 12:53:47.447221 master-0 kubenswrapper[28149]: I0313 12:53:47.446692 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" event={"ID":"3a18cac8a90d6913a6a0391d805cddc9","Type":"ContainerStarted","Data":"b046991449e1d420ea17d254f8c05faec355e4aacc147507b98a3f095fa7ff11"} Mar 13 12:53:47.447221 master-0 kubenswrapper[28149]: I0313 12:53:47.446716 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" 
event={"ID":"e9add8df47182fc2eaf8cd78016ebe72","Type":"ContainerStarted","Data":"958b1ab7ab943f0d9820d78ce8605298936c74cbbe3326599eac945aeec4ecce"} Mar 13 12:53:47.447221 master-0 kubenswrapper[28149]: I0313 12:53:47.446728 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" event={"ID":"e9add8df47182fc2eaf8cd78016ebe72","Type":"ContainerDied","Data":"9c887f2b6cfcfcc1f3ea186daee81cbe3bce3c155cfd4e9bbac88f712c489339"} Mar 13 12:53:47.447221 master-0 kubenswrapper[28149]: I0313 12:53:47.446740 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" event={"ID":"e9add8df47182fc2eaf8cd78016ebe72","Type":"ContainerDied","Data":"d97124951202d97d2b090945a6d5c9c5add42850ba499052ed07d95631932324"} Mar 13 12:53:47.447221 master-0 kubenswrapper[28149]: I0313 12:53:47.446753 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" event={"ID":"e9add8df47182fc2eaf8cd78016ebe72","Type":"ContainerStarted","Data":"a13f1b34007cf32fe962f7d50d2988f0f66eb3022aee3b3a767d84bde6caed30"} Mar 13 12:53:47.447221 master-0 kubenswrapper[28149]: I0313 12:53:47.446801 28149 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bdff0a0b2aea82bac9a3ab64499e43b6fe8e459f15bf1c50fed1c0bf1762fda9" Mar 13 12:53:47.447221 master-0 kubenswrapper[28149]: I0313 12:53:47.446827 28149 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="495a72687402da10550aa60f4b41a9bc310b020e43ddbbb5f831586412f05db8" Mar 13 12:53:47.447221 master-0 kubenswrapper[28149]: I0313 12:53:47.446912 28149 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="504639ecf4788ce4c267fd64fb378348d1c51285c4c07623bf66e15e61133a68" Mar 13 12:53:47.447221 master-0 kubenswrapper[28149]: I0313 12:53:47.446969 28149 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"29c709c82970b529e7b9b895aa92ef05","Type":"ContainerStarted","Data":"24a899f1f40a16e8df69e1053ad63adddd8eadeaaa916f3d6de11e212d873278"} Mar 13 12:53:47.447221 master-0 kubenswrapper[28149]: I0313 12:53:47.446982 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"29c709c82970b529e7b9b895aa92ef05","Type":"ContainerStarted","Data":"c82691e79572302ede8d7dd4b4262e703b38e5a73e04bef601466f9e50d78d7d"} Mar 13 12:53:47.447221 master-0 kubenswrapper[28149]: I0313 12:53:47.446995 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"29c709c82970b529e7b9b895aa92ef05","Type":"ContainerStarted","Data":"912ff796850453c01df7cbeecc45cdb10c34a7fb4ccc08e76183a5f55eb1bcb5"} Mar 13 12:53:47.447221 master-0 kubenswrapper[28149]: I0313 12:53:47.447006 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"29c709c82970b529e7b9b895aa92ef05","Type":"ContainerStarted","Data":"10e1034aafb2cd99b68fa2c04089a546d6fd7367b27440b5229a0245c44b9f38"} Mar 13 12:53:47.447221 master-0 kubenswrapper[28149]: I0313 12:53:47.447016 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"29c709c82970b529e7b9b895aa92ef05","Type":"ContainerStarted","Data":"5cc34aac9149d80ee13d05fb99b57b8557bc192e4d7f099ae7781999fb6ddcb6"} Mar 13 12:53:47.447221 master-0 kubenswrapper[28149]: I0313 12:53:47.447025 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"29c709c82970b529e7b9b895aa92ef05","Type":"ContainerDied","Data":"7a3b4c6b1768e8d5ad64ec3d49b0ef5a758c7b08b68da0b9f9604043050a5df9"} Mar 13 12:53:47.447221 master-0 kubenswrapper[28149]: I0313 12:53:47.447038 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" 
event={"ID":"29c709c82970b529e7b9b895aa92ef05","Type":"ContainerDied","Data":"002602ae7257927c6d84d79f7abb72d049dbc2180d8e5879043fea377ec86806"} Mar 13 12:53:47.447221 master-0 kubenswrapper[28149]: I0313 12:53:47.447051 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"29c709c82970b529e7b9b895aa92ef05","Type":"ContainerDied","Data":"191d6b42f790fa129a37efd43f7471d2dd1f86d99afc82c180f797e065b49aad"} Mar 13 12:53:47.447221 master-0 kubenswrapper[28149]: I0313 12:53:47.447061 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"29c709c82970b529e7b9b895aa92ef05","Type":"ContainerStarted","Data":"aac9e43b541ff8c2c2bfb86003c0c12881f81493b0818cd60c9ba62d916d93a2"} Mar 13 12:53:47.447221 master-0 kubenswrapper[28149]: I0313 12:53:47.447073 28149 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="dd20eff6c17b5d26b931e6d943bd09e05bef7d7025ee5b4bd9d525e64901dc81" Mar 13 12:53:47.447221 master-0 kubenswrapper[28149]: I0313 12:53:47.447113 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" event={"ID":"1453f6461bf5d599ad65a4656343ee91","Type":"ContainerStarted","Data":"9e6d07d04707c83d5d761b1f7ed58474303d364667db54ae899df77b8c71b52d"} Mar 13 12:53:47.447221 master-0 kubenswrapper[28149]: I0313 12:53:47.447124 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" event={"ID":"1453f6461bf5d599ad65a4656343ee91","Type":"ContainerStarted","Data":"d787d856f0918a254ce3c937e9007cce5d60df73e45e63a2b9e3c69dda9b0e44"} Mar 13 12:53:47.447221 master-0 kubenswrapper[28149]: I0313 12:53:47.447150 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" 
event={"ID":"1453f6461bf5d599ad65a4656343ee91","Type":"ContainerStarted","Data":"b5e0746e4832ff55bf614aa770ddd19a9a9fc08ca7f1ca173dc0718a80c8990d"} Mar 13 12:53:47.447221 master-0 kubenswrapper[28149]: I0313 12:53:47.447163 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" event={"ID":"1453f6461bf5d599ad65a4656343ee91","Type":"ContainerDied","Data":"65d8ab343a6c8c9cdae0b29379d80db7bbdfeeeb082bcdc9935f85db242121e8"} Mar 13 12:53:47.447221 master-0 kubenswrapper[28149]: I0313 12:53:47.447178 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" event={"ID":"1453f6461bf5d599ad65a4656343ee91","Type":"ContainerStarted","Data":"2a520ce1540e4505903e0c09b3c7ff382c5a6347945280110eeacb275245a884"} Mar 13 12:53:47.447221 master-0 kubenswrapper[28149]: I0313 12:53:47.447207 28149 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="aab59fe84d74f1f2dbe3af4167877250fbae9e62f4ef0e21a64f79bf2216fbcc" Mar 13 12:53:47.447221 master-0 kubenswrapper[28149]: I0313 12:53:47.447237 28149 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="432ae93d7929ff1377b0a32b34b9fd0f282a4c2a377afc25391c4f66c1a92ec6" Mar 13 12:53:47.448216 master-0 kubenswrapper[28149]: I0313 12:53:47.447305 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"9b24fda1c2e55a08607764d7b9b24355","Type":"ContainerStarted","Data":"552eeff527d5a35e104a39621a65fa2da7a1df380403a303c48d0f6f3bca4451"} Mar 13 12:53:47.448216 master-0 kubenswrapper[28149]: I0313 12:53:47.447317 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" 
event={"ID":"9b24fda1c2e55a08607764d7b9b24355","Type":"ContainerStarted","Data":"5225f7faf919f4bc9952279b8c17b48fc7fc5f38f60abb397f40ed2bc6a9712f"} Mar 13 12:53:47.448216 master-0 kubenswrapper[28149]: I0313 12:53:47.447327 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"9b24fda1c2e55a08607764d7b9b24355","Type":"ContainerStarted","Data":"ce8fbd2d677d2b615a5a88b0a30db1875f87de60b024e842112e21ebdf54651c"} Mar 13 12:53:47.448216 master-0 kubenswrapper[28149]: I0313 12:53:47.447337 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"9b24fda1c2e55a08607764d7b9b24355","Type":"ContainerStarted","Data":"9d8c499b649c8b47f8ee879f85d758879da02816a8ef90cde6964dab92a4ae11"} Mar 13 12:53:47.448216 master-0 kubenswrapper[28149]: I0313 12:53:47.447347 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"9b24fda1c2e55a08607764d7b9b24355","Type":"ContainerStarted","Data":"2b2ef2ddaedb81fecd10454e7de227fc33e0631466b7f1d7f0c388f2e1883f04"} Mar 13 12:53:47.448216 master-0 kubenswrapper[28149]: I0313 12:53:47.447367 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"48512e02022680c9d90092634f0fc146","Type":"ContainerStarted","Data":"22eebd4722aca51c26c5c5c4b620534c95d14ee25cb5dca7baa2946eaaa18f49"} Mar 13 12:53:47.448216 master-0 kubenswrapper[28149]: I0313 12:53:47.447379 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"48512e02022680c9d90092634f0fc146","Type":"ContainerStarted","Data":"52264b4378a4f3ba83334945450ce98ac9bedab1c6c9485cb885bc9488d52471"} Mar 13 12:53:47.448216 master-0 kubenswrapper[28149]: I0313 12:53:47.447392 28149 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"48512e02022680c9d90092634f0fc146","Type":"ContainerStarted","Data":"8ea4f4f1bc69f85c977580ddac21514a71e7c8a91de12b17cbd00d640490e4d3"} Mar 13 12:53:47.448216 master-0 kubenswrapper[28149]: I0313 12:53:47.447406 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"48512e02022680c9d90092634f0fc146","Type":"ContainerStarted","Data":"b498f079133d2a2077770b172efd3507414d1897ced1774403305339c6337d85"} Mar 13 12:53:47.448216 master-0 kubenswrapper[28149]: I0313 12:53:47.447417 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"48512e02022680c9d90092634f0fc146","Type":"ContainerStarted","Data":"c16c28a17a2035273ad3cbe98ed9a765284a80f578c8eb0748ccdf8c0dbcc66a"} Mar 13 12:53:47.448216 master-0 kubenswrapper[28149]: I0313 12:53:47.447429 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"48512e02022680c9d90092634f0fc146","Type":"ContainerDied","Data":"ba0afcdaf159bdee5cad84caecac2caf230f2beacc241756ab48e77be0ee5ebb"} Mar 13 12:53:47.448216 master-0 kubenswrapper[28149]: I0313 12:53:47.447441 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"48512e02022680c9d90092634f0fc146","Type":"ContainerStarted","Data":"dc74469df6e780c8e9e2827ef289651444a1ff65c5b17d5937b4448f9addb191"} Mar 13 12:53:47.448216 master-0 kubenswrapper[28149]: I0313 12:53:47.447454 28149 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8de98c25946553f78d0d15d3d39442b1f1f340c231f6a8d5c64835e897795dde" Mar 13 12:53:47.448216 master-0 kubenswrapper[28149]: I0313 12:53:47.447513 28149 pod_container_deletor.go:80] "Container not found in pod's containers" 
containerID="3e81dca123a6f2f889ce66cb5735ec25a6e1c65abbd235bf8c5081fda6184b21" Mar 13 12:53:47.448216 master-0 kubenswrapper[28149]: I0313 12:53:47.447528 28149 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="90d62dc62426f86839fab6dfcb69950974991422a3bbd33e6f3fd2c0bd1c8644" Mar 13 12:53:47.448216 master-0 kubenswrapper[28149]: I0313 12:53:47.447539 28149 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f8aea90deac8c57ee0f5fc4e46276af696e5807ecbb4598ad2a67ae2024be4b0" Mar 13 12:53:47.448216 master-0 kubenswrapper[28149]: I0313 12:53:47.447550 28149 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f9e1bcdaf83648cf25ec570e9e2bb43dc99c079203b2fc846498f786f34dd1ec" Mar 13 12:53:47.448216 master-0 kubenswrapper[28149]: I0313 12:53:47.447657 28149 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 13 12:53:47.449602 master-0 kubenswrapper[28149]: I0313 12:53:47.449574 28149 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Mar 13 12:53:47.449602 master-0 kubenswrapper[28149]: I0313 12:53:47.449600 28149 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Mar 13 12:53:47.449685 master-0 kubenswrapper[28149]: I0313 12:53:47.449609 28149 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Mar 13 12:53:51.623396 master-0 kubenswrapper[28149]: I0313 12:53:51.623318 28149 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Mar 13 12:53:51.623396 master-0 kubenswrapper[28149]: I0313 12:53:51.623393 28149 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Mar 13 12:53:51.639563 master-0 kubenswrapper[28149]: I0313 12:53:51.639516 28149 reflector.go:368] Caches 
populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Mar 13 12:53:51.683798 master-0 kubenswrapper[28149]: I0313 12:53:51.683746 28149 reconstruct.go:205] "DevicePaths of reconstructed volumes updated" Mar 13 12:53:51.690017 master-0 kubenswrapper[28149]: I0313 12:53:51.689969 28149 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Mar 13 12:53:51.784376 master-0 kubenswrapper[28149]: I0313 12:53:51.784315 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/1453f6461bf5d599ad65a4656343ee91-resource-dir\") pod \"openshift-kube-scheduler-master-0\" (UID: \"1453f6461bf5d599ad65a4656343ee91\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" Mar 13 12:53:51.784376 master-0 kubenswrapper[28149]: I0313 12:53:51.784359 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/3a18cac8a90d6913a6a0391d805cddc9-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"3a18cac8a90d6913a6a0391d805cddc9\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 13 12:53:51.784376 master-0 kubenswrapper[28149]: I0313 12:53:51.784382 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/29c709c82970b529e7b9b895aa92ef05-static-pod-dir\") pod \"etcd-master-0\" (UID: \"29c709c82970b529e7b9b895aa92ef05\") " pod="openshift-etcd/etcd-master-0" Mar 13 12:53:51.784650 master-0 kubenswrapper[28149]: I0313 12:53:51.784403 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/9b24fda1c2e55a08607764d7b9b24355-cert-dir\") pod 
\"kube-controller-manager-master-0\" (UID: \"9b24fda1c2e55a08607764d7b9b24355\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 13 12:53:51.784650 master-0 kubenswrapper[28149]: I0313 12:53:51.784419 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/29c709c82970b529e7b9b895aa92ef05-resource-dir\") pod \"etcd-master-0\" (UID: \"29c709c82970b529e7b9b895aa92ef05\") " pod="openshift-etcd/etcd-master-0" Mar 13 12:53:51.784650 master-0 kubenswrapper[28149]: I0313 12:53:51.784509 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/29c709c82970b529e7b9b895aa92ef05-usr-local-bin\") pod \"etcd-master-0\" (UID: \"29c709c82970b529e7b9b895aa92ef05\") " pod="openshift-etcd/etcd-master-0" Mar 13 12:53:51.784650 master-0 kubenswrapper[28149]: I0313 12:53:51.784571 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/1453f6461bf5d599ad65a4656343ee91-cert-dir\") pod \"openshift-kube-scheduler-master-0\" (UID: \"1453f6461bf5d599ad65a4656343ee91\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" Mar 13 12:53:51.784650 master-0 kubenswrapper[28149]: I0313 12:53:51.784592 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/e9add8df47182fc2eaf8cd78016ebe72-etc-kube\") pod \"kube-rbac-proxy-crio-master-0\" (UID: \"e9add8df47182fc2eaf8cd78016ebe72\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" Mar 13 12:53:51.784650 master-0 kubenswrapper[28149]: I0313 12:53:51.784608 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: 
\"kubernetes.io/host-path/e9add8df47182fc2eaf8cd78016ebe72-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-master-0\" (UID: \"e9add8df47182fc2eaf8cd78016ebe72\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" Mar 13 12:53:51.784650 master-0 kubenswrapper[28149]: I0313 12:53:51.784628 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/48512e02022680c9d90092634f0fc146-cert-dir\") pod \"kube-apiserver-master-0\" (UID: \"48512e02022680c9d90092634f0fc146\") " pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 13 12:53:51.784650 master-0 kubenswrapper[28149]: I0313 12:53:51.784644 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/29c709c82970b529e7b9b895aa92ef05-cert-dir\") pod \"etcd-master-0\" (UID: \"29c709c82970b529e7b9b895aa92ef05\") " pod="openshift-etcd/etcd-master-0" Mar 13 12:53:51.784957 master-0 kubenswrapper[28149]: I0313 12:53:51.784664 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/29c709c82970b529e7b9b895aa92ef05-log-dir\") pod \"etcd-master-0\" (UID: \"29c709c82970b529e7b9b895aa92ef05\") " pod="openshift-etcd/etcd-master-0" Mar 13 12:53:51.784957 master-0 kubenswrapper[28149]: I0313 12:53:51.784686 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/3a18cac8a90d6913a6a0391d805cddc9-manifests\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"3a18cac8a90d6913a6a0391d805cddc9\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 13 12:53:51.784957 master-0 kubenswrapper[28149]: I0313 12:53:51.784704 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/3a18cac8a90d6913a6a0391d805cddc9-var-lock\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"3a18cac8a90d6913a6a0391d805cddc9\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 13 12:53:51.784957 master-0 kubenswrapper[28149]: I0313 12:53:51.784726 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/3a18cac8a90d6913a6a0391d805cddc9-var-log\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"3a18cac8a90d6913a6a0391d805cddc9\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 13 12:53:51.784957 master-0 kubenswrapper[28149]: I0313 12:53:51.784748 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/9b24fda1c2e55a08607764d7b9b24355-resource-dir\") pod \"kube-controller-manager-master-0\" (UID: \"9b24fda1c2e55a08607764d7b9b24355\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 13 12:53:51.784957 master-0 kubenswrapper[28149]: I0313 12:53:51.784768 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/48512e02022680c9d90092634f0fc146-resource-dir\") pod \"kube-apiserver-master-0\" (UID: \"48512e02022680c9d90092634f0fc146\") " pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 13 12:53:51.784957 master-0 kubenswrapper[28149]: I0313 12:53:51.784786 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/48512e02022680c9d90092634f0fc146-audit-dir\") pod \"kube-apiserver-master-0\" (UID: \"48512e02022680c9d90092634f0fc146\") " pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 13 12:53:51.784957 
master-0 kubenswrapper[28149]: I0313 12:53:51.784804 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3a18cac8a90d6913a6a0391d805cddc9-resource-dir\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"3a18cac8a90d6913a6a0391d805cddc9\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 13 12:53:51.784957 master-0 kubenswrapper[28149]: I0313 12:53:51.784824 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/29c709c82970b529e7b9b895aa92ef05-data-dir\") pod \"etcd-master-0\" (UID: \"29c709c82970b529e7b9b895aa92ef05\") " pod="openshift-etcd/etcd-master-0" Mar 13 12:53:51.885455 master-0 kubenswrapper[28149]: I0313 12:53:51.885089 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/48512e02022680c9d90092634f0fc146-resource-dir\") pod \"kube-apiserver-master-0\" (UID: \"48512e02022680c9d90092634f0fc146\") " pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 13 12:53:51.885455 master-0 kubenswrapper[28149]: I0313 12:53:51.885239 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/48512e02022680c9d90092634f0fc146-audit-dir\") pod \"kube-apiserver-master-0\" (UID: \"48512e02022680c9d90092634f0fc146\") " pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 13 12:53:51.885455 master-0 kubenswrapper[28149]: I0313 12:53:51.885260 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3a18cac8a90d6913a6a0391d805cddc9-resource-dir\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"3a18cac8a90d6913a6a0391d805cddc9\") " 
pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 13 12:53:51.885455 master-0 kubenswrapper[28149]: I0313 12:53:51.885279 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/29c709c82970b529e7b9b895aa92ef05-data-dir\") pod \"etcd-master-0\" (UID: \"29c709c82970b529e7b9b895aa92ef05\") " pod="openshift-etcd/etcd-master-0" Mar 13 12:53:51.885455 master-0 kubenswrapper[28149]: I0313 12:53:51.885158 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/48512e02022680c9d90092634f0fc146-resource-dir\") pod \"kube-apiserver-master-0\" (UID: \"48512e02022680c9d90092634f0fc146\") " pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 13 12:53:51.885455 master-0 kubenswrapper[28149]: I0313 12:53:51.885301 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/1453f6461bf5d599ad65a4656343ee91-resource-dir\") pod \"openshift-kube-scheduler-master-0\" (UID: \"1453f6461bf5d599ad65a4656343ee91\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" Mar 13 12:53:51.885455 master-0 kubenswrapper[28149]: I0313 12:53:51.885348 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/3a18cac8a90d6913a6a0391d805cddc9-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"3a18cac8a90d6913a6a0391d805cddc9\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 13 12:53:51.885455 master-0 kubenswrapper[28149]: I0313 12:53:51.885360 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/1453f6461bf5d599ad65a4656343ee91-resource-dir\") pod \"openshift-kube-scheduler-master-0\" (UID: 
\"1453f6461bf5d599ad65a4656343ee91\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" Mar 13 12:53:51.885455 master-0 kubenswrapper[28149]: I0313 12:53:51.885376 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/29c709c82970b529e7b9b895aa92ef05-static-pod-dir\") pod \"etcd-master-0\" (UID: \"29c709c82970b529e7b9b895aa92ef05\") " pod="openshift-etcd/etcd-master-0" Mar 13 12:53:51.885455 master-0 kubenswrapper[28149]: I0313 12:53:51.885404 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/29c709c82970b529e7b9b895aa92ef05-resource-dir\") pod \"etcd-master-0\" (UID: \"29c709c82970b529e7b9b895aa92ef05\") " pod="openshift-etcd/etcd-master-0" Mar 13 12:53:51.885455 master-0 kubenswrapper[28149]: I0313 12:53:51.885453 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/29c709c82970b529e7b9b895aa92ef05-data-dir\") pod \"etcd-master-0\" (UID: \"29c709c82970b529e7b9b895aa92ef05\") " pod="openshift-etcd/etcd-master-0" Mar 13 12:53:51.885455 master-0 kubenswrapper[28149]: I0313 12:53:51.885459 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/29c709c82970b529e7b9b895aa92ef05-static-pod-dir\") pod \"etcd-master-0\" (UID: \"29c709c82970b529e7b9b895aa92ef05\") " pod="openshift-etcd/etcd-master-0" Mar 13 12:53:51.886523 master-0 kubenswrapper[28149]: I0313 12:53:51.885455 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/29c709c82970b529e7b9b895aa92ef05-usr-local-bin\") pod \"etcd-master-0\" (UID: \"29c709c82970b529e7b9b895aa92ef05\") " pod="openshift-etcd/etcd-master-0" Mar 13 12:53:51.886523 master-0 kubenswrapper[28149]: I0313 12:53:51.885477 
28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/29c709c82970b529e7b9b895aa92ef05-usr-local-bin\") pod \"etcd-master-0\" (UID: \"29c709c82970b529e7b9b895aa92ef05\") " pod="openshift-etcd/etcd-master-0" Mar 13 12:53:51.886523 master-0 kubenswrapper[28149]: I0313 12:53:51.885503 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/1453f6461bf5d599ad65a4656343ee91-cert-dir\") pod \"openshift-kube-scheduler-master-0\" (UID: \"1453f6461bf5d599ad65a4656343ee91\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" Mar 13 12:53:51.886523 master-0 kubenswrapper[28149]: I0313 12:53:51.885525 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/e9add8df47182fc2eaf8cd78016ebe72-etc-kube\") pod \"kube-rbac-proxy-crio-master-0\" (UID: \"e9add8df47182fc2eaf8cd78016ebe72\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" Mar 13 12:53:51.886523 master-0 kubenswrapper[28149]: I0313 12:53:51.885615 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/29c709c82970b529e7b9b895aa92ef05-resource-dir\") pod \"etcd-master-0\" (UID: \"29c709c82970b529e7b9b895aa92ef05\") " pod="openshift-etcd/etcd-master-0" Mar 13 12:53:51.886523 master-0 kubenswrapper[28149]: I0313 12:53:51.885683 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/3a18cac8a90d6913a6a0391d805cddc9-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"3a18cac8a90d6913a6a0391d805cddc9\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 13 12:53:51.886523 master-0 kubenswrapper[28149]: I0313 12:53:51.885723 28149 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/48512e02022680c9d90092634f0fc146-audit-dir\") pod \"kube-apiserver-master-0\" (UID: \"48512e02022680c9d90092634f0fc146\") " pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 13 12:53:51.886523 master-0 kubenswrapper[28149]: I0313 12:53:51.885756 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3a18cac8a90d6913a6a0391d805cddc9-resource-dir\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"3a18cac8a90d6913a6a0391d805cddc9\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 13 12:53:51.886523 master-0 kubenswrapper[28149]: I0313 12:53:51.885785 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/e9add8df47182fc2eaf8cd78016ebe72-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-master-0\" (UID: \"e9add8df47182fc2eaf8cd78016ebe72\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" Mar 13 12:53:51.886523 master-0 kubenswrapper[28149]: I0313 12:53:51.885806 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/48512e02022680c9d90092634f0fc146-cert-dir\") pod \"kube-apiserver-master-0\" (UID: \"48512e02022680c9d90092634f0fc146\") " pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 13 12:53:51.886523 master-0 kubenswrapper[28149]: I0313 12:53:51.885829 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/9b24fda1c2e55a08607764d7b9b24355-cert-dir\") pod \"kube-controller-manager-master-0\" (UID: \"9b24fda1c2e55a08607764d7b9b24355\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 13 12:53:51.886523 master-0 kubenswrapper[28149]: I0313 12:53:51.885849 28149 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/29c709c82970b529e7b9b895aa92ef05-log-dir\") pod \"etcd-master-0\" (UID: \"29c709c82970b529e7b9b895aa92ef05\") " pod="openshift-etcd/etcd-master-0" Mar 13 12:53:51.886523 master-0 kubenswrapper[28149]: I0313 12:53:51.885871 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/3a18cac8a90d6913a6a0391d805cddc9-manifests\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"3a18cac8a90d6913a6a0391d805cddc9\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 13 12:53:51.886523 master-0 kubenswrapper[28149]: I0313 12:53:51.885894 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/3a18cac8a90d6913a6a0391d805cddc9-var-lock\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"3a18cac8a90d6913a6a0391d805cddc9\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 13 12:53:51.886523 master-0 kubenswrapper[28149]: I0313 12:53:51.885913 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/3a18cac8a90d6913a6a0391d805cddc9-var-log\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"3a18cac8a90d6913a6a0391d805cddc9\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 13 12:53:51.886523 master-0 kubenswrapper[28149]: I0313 12:53:51.885938 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/9b24fda1c2e55a08607764d7b9b24355-resource-dir\") pod \"kube-controller-manager-master-0\" (UID: \"9b24fda1c2e55a08607764d7b9b24355\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 13 12:53:51.886523 master-0 
kubenswrapper[28149]: I0313 12:53:51.885961 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/29c709c82970b529e7b9b895aa92ef05-cert-dir\") pod \"etcd-master-0\" (UID: \"29c709c82970b529e7b9b895aa92ef05\") " pod="openshift-etcd/etcd-master-0" Mar 13 12:53:51.886523 master-0 kubenswrapper[28149]: I0313 12:53:51.885996 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/29c709c82970b529e7b9b895aa92ef05-cert-dir\") pod \"etcd-master-0\" (UID: \"29c709c82970b529e7b9b895aa92ef05\") " pod="openshift-etcd/etcd-master-0" Mar 13 12:53:51.886523 master-0 kubenswrapper[28149]: I0313 12:53:51.886028 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/e9add8df47182fc2eaf8cd78016ebe72-etc-kube\") pod \"kube-rbac-proxy-crio-master-0\" (UID: \"e9add8df47182fc2eaf8cd78016ebe72\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" Mar 13 12:53:51.886523 master-0 kubenswrapper[28149]: I0313 12:53:51.886060 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/1453f6461bf5d599ad65a4656343ee91-cert-dir\") pod \"openshift-kube-scheduler-master-0\" (UID: \"1453f6461bf5d599ad65a4656343ee91\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" Mar 13 12:53:51.886523 master-0 kubenswrapper[28149]: I0313 12:53:51.886097 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/e9add8df47182fc2eaf8cd78016ebe72-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-master-0\" (UID: \"e9add8df47182fc2eaf8cd78016ebe72\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" Mar 13 12:53:51.886523 master-0 kubenswrapper[28149]: I0313 12:53:51.886128 28149 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/48512e02022680c9d90092634f0fc146-cert-dir\") pod \"kube-apiserver-master-0\" (UID: \"48512e02022680c9d90092634f0fc146\") " pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 13 12:53:51.886523 master-0 kubenswrapper[28149]: I0313 12:53:51.886212 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/9b24fda1c2e55a08607764d7b9b24355-cert-dir\") pod \"kube-controller-manager-master-0\" (UID: \"9b24fda1c2e55a08607764d7b9b24355\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 13 12:53:51.886523 master-0 kubenswrapper[28149]: I0313 12:53:51.886243 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/29c709c82970b529e7b9b895aa92ef05-log-dir\") pod \"etcd-master-0\" (UID: \"29c709c82970b529e7b9b895aa92ef05\") " pod="openshift-etcd/etcd-master-0" Mar 13 12:53:51.886523 master-0 kubenswrapper[28149]: I0313 12:53:51.886270 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/3a18cac8a90d6913a6a0391d805cddc9-manifests\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"3a18cac8a90d6913a6a0391d805cddc9\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 13 12:53:51.886523 master-0 kubenswrapper[28149]: I0313 12:53:51.886297 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/3a18cac8a90d6913a6a0391d805cddc9-var-lock\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"3a18cac8a90d6913a6a0391d805cddc9\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 13 12:53:51.886523 master-0 kubenswrapper[28149]: I0313 12:53:51.886324 28149 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/3a18cac8a90d6913a6a0391d805cddc9-var-log\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"3a18cac8a90d6913a6a0391d805cddc9\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 13 12:53:51.886523 master-0 kubenswrapper[28149]: I0313 12:53:51.886351 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/9b24fda1c2e55a08607764d7b9b24355-resource-dir\") pod \"kube-controller-manager-master-0\" (UID: \"9b24fda1c2e55a08607764d7b9b24355\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 13 12:53:51.904236 master-0 kubenswrapper[28149]: I0313 12:53:51.904195 28149 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 13 12:53:51.904333 master-0 kubenswrapper[28149]: I0313 12:53:51.904292 28149 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 13 12:53:51.928924 master-0 kubenswrapper[28149]: I0313 12:53:51.928886 28149 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 13 12:53:51.951757 master-0 kubenswrapper[28149]: I0313 12:53:51.951690 28149 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-etcd/etcd-master-0" Mar 13 12:53:51.988493 master-0 kubenswrapper[28149]: I0313 12:53:51.988436 28149 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-etcd/etcd-master-0" Mar 13 12:53:52.092787 master-0 kubenswrapper[28149]: I0313 12:53:52.092737 28149 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-master-0_48512e02022680c9d90092634f0fc146/kube-apiserver-check-endpoints/0.log" Mar 13 12:53:52.094166 master-0 
kubenswrapper[28149]: I0313 12:53:52.094121 28149 generic.go:334] "Generic (PLEG): container finished" podID="48512e02022680c9d90092634f0fc146" containerID="22eebd4722aca51c26c5c5c4b620534c95d14ee25cb5dca7baa2946eaaa18f49" exitCode=255 Mar 13 12:53:52.095601 master-0 kubenswrapper[28149]: I0313 12:53:52.095080 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"48512e02022680c9d90092634f0fc146","Type":"ContainerDied","Data":"22eebd4722aca51c26c5c5c4b620534c95d14ee25cb5dca7baa2946eaaa18f49"} Mar 13 12:53:52.115221 master-0 kubenswrapper[28149]: E0313 12:53:52.114644 28149 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"openshift-kube-scheduler-master-0\" already exists" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" Mar 13 12:53:52.115221 master-0 kubenswrapper[28149]: E0313 12:53:52.114674 28149 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"kube-apiserver-startup-monitor-master-0\" already exists" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 13 12:53:52.115221 master-0 kubenswrapper[28149]: E0313 12:53:52.114789 28149 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-master-0\" already exists" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 13 12:53:52.115221 master-0 kubenswrapper[28149]: E0313 12:53:52.114643 28149 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"etcd-master-0\" already exists" pod="openshift-etcd/etcd-master-0" Mar 13 12:53:52.115221 master-0 kubenswrapper[28149]: I0313 12:53:52.114940 28149 scope.go:117] "RemoveContainer" containerID="22eebd4722aca51c26c5c5c4b620534c95d14ee25cb5dca7baa2946eaaa18f49" Mar 13 12:53:52.115221 master-0 kubenswrapper[28149]: E0313 12:53:52.114986 28149 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"kube-rbac-proxy-crio-master-0\" already exists" 
pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" Mar 13 12:53:52.157365 master-0 kubenswrapper[28149]: I0313 12:53:52.157277 28149 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 13 12:53:52.457128 master-0 kubenswrapper[28149]: I0313 12:53:52.457070 28149 kubelet_node_status.go:115] "Node was previously registered" node="master-0" Mar 13 12:53:52.457342 master-0 kubenswrapper[28149]: I0313 12:53:52.457197 28149 kubelet_node_status.go:79] "Successfully registered node" node="master-0" Mar 13 12:53:52.609805 master-0 kubenswrapper[28149]: I0313 12:53:52.609753 28149 apiserver.go:52] "Watching apiserver" Mar 13 12:53:52.640317 master-0 kubenswrapper[28149]: I0313 12:53:52.640266 28149 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Mar 13 12:53:52.643270 master-0 kubenswrapper[28149]: I0313 12:53:52.643207 28149 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-kube-scheduler/installer-4-master-0","openshift-machine-config-operator/kube-rbac-proxy-crio-master-0","openshift-image-registry/cluster-image-registry-operator-86d6d77c7c-q287n","openshift-ingress-operator/ingress-operator-677db989d6-ckl2j","openshift-kube-controller-manager/installer-3-master-0","openshift-kube-scheduler/installer-3-master-0","openshift-marketplace/certified-operators-p9csk","openshift-dns-operator/dns-operator-589895fbb7-mmwk7","openshift-kube-apiserver-operator/kube-apiserver-operator-68bd585b-qxmnf","openshift-kube-storage-version-migrator/migrator-57ccdf9b5-7pcdp","openshift-machine-api/cluster-autoscaler-operator-69576476f7-sqndx","openshift-route-controller-manager/route-controller-manager-68c48d4f7d-k7drw","assisted-installer/assisted-installer-controller-bqsgz","openshift-cluster-storage-operator/cluster-storage-operator-6fbfc8dc8f-jhtsp","openshift-controller-manager-operator/openshift-controller-manager-operator-8565d84698-hj2wk","openshift-network-diagnostics/network-check-target-pnwsc","openshift-kube-scheduler/installer-5-master-0","openshift-multus/multus-bnn7n","openshift-ovn-kubernetes/ovnkube-node-h8fwp","openshift-service-ca/service-ca-84bfdbbb7f-4pksg","openshift-machine-config-operator/machine-config-server-6crtf","openshift-monitoring/openshift-state-metrics-74cc79fd76-clrbz","openshift-monitoring/prometheus-operator-admission-webhook-8464df8497-pmzkf","openshift-operator-lifecycle-manager/packageserver-5c5f6764b5-96ktp","openshift-apiserver/apiserver-844bc54c88-vznst","openshift-cloud-credential-operator/cloud-credential-operator-55d85b7b47-rvp8c","openshift-kube-controller-manager/installer-1-master-0","openshift-kube-scheduler/openshift-kube-scheduler-master-0","openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-7f65c457f5-hrm82","openshift-machine-api/control-plane-machine-set-operator-6686554ddc-btz8w","openshift-marketplace/community-operators-9x9vk","openshift-market
place/marketplace-operator-64bf9778cb-7qhr4","openshift-controller-manager/controller-manager-54c79cbfcc-cxhmh","openshift-etcd/etcd-master-0","openshift-insights/insights-operator-8f89dfddd-vxk8z","openshift-kube-scheduler/installer-4-retry-1-master-0","openshift-kube-controller-manager/installer-2-master-0","openshift-machine-config-operator/machine-config-daemon-5h8rc","openshift-multus/multus-admission-controller-7769569c45-qz88j","openshift-network-diagnostics/network-check-source-7c67b67d47-5bb88","openshift-cluster-olm-operator/cluster-olm-operator-77899cf6d-7nvbn","openshift-dns/dns-default-m7k6m","openshift-ingress/router-default-79f8cd6fdd-wtf6j","openshift-kube-apiserver/bootstrap-kube-apiserver-master-0","openshift-network-operator/iptables-alerter-qz6pg","openshift-monitoring/metrics-server-567b9cf7f-cxnj2","openshift-operator-lifecycle-manager/catalog-operator-7d9c49f57b-tlnkd","openshift-operator-lifecycle-manager/olm-operator-d64cfc9db-rfqb9","openshift-cluster-samples-operator/cluster-samples-operator-664cb58b85-m5499","openshift-ingress-canary/ingress-canary-h8skx","openshift-kube-apiserver/kube-apiserver-master-0","openshift-monitoring/kube-state-metrics-68b88f8cb5-blvhm","openshift-machine-api/machine-api-operator-84bf6db4f9-mjxcz","openshift-oauth-apiserver/apiserver-787dbf5bb9-5645n","openshift-monitoring/node-exporter-v4hdh","openshift-cluster-node-tuning-operator/tuned-6tlzf","openshift-etcd/installer-2-master-0","openshift-kube-apiserver/installer-1-master-0","openshift-machine-config-operator/machine-config-operator-fdb5c78b5-6g8qj","openshift-network-operator/network-operator-7c649bf6d4-kh6n9","openshift-cluster-machine-approver/machine-approver-754bdc9f9d-cwl2p","openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-66c7586884-cz8pc","openshift-machine-config-operator/machine-config-controller-ff46b7bdf-kmnlv","openshift-marketplace/redhat-marketplace-zh888","openshift-apiserver-operator/openshift-apiserver-operator-799b6db4
d7-xchrj","openshift-etcd/installer-1-master-0","openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0","openshift-monitoring/cluster-monitoring-operator-674cbfbd9d-zwtdz","openshift-operator-lifecycle-manager/package-server-manager-854648ff6d-669qk","openshift-ovn-kubernetes/ovnkube-control-plane-66b55d57d-5cww5","openshift-service-ca-operator/service-ca-operator-69b6fc6b88-vmscz","openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7c8df9b496-x2wlg","openshift-cluster-storage-operator/csi-snapshot-controller-7577d6f48-pjpn2","openshift-dns/node-resolver-xpz47","openshift-kube-apiserver/installer-4-master-0","openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5c74bfc494-m8mqj","openshift-machine-api/cluster-baremetal-operator-5cdb4c5598-l6jp5","openshift-monitoring/prometheus-operator-5ff8674d55-bvmsj","openshift-network-node-identity/network-node-identity-qg8q5","openshift-catalogd/catalogd-controller-manager-7f8b8b6f4c-8fjzg","openshift-config-operator/openshift-config-operator-64488f9d78-t8fb4","openshift-etcd-operator/etcd-operator-5884b9cd56-hjzms","openshift-kube-controller-manager/kube-controller-manager-master-0","openshift-marketplace/redhat-operators-5czx2","openshift-multus/multus-additional-cni-plugins-78p2k","openshift-multus/network-metrics-daemon-r9lmb","openshift-operator-controller/operator-controller-controller-manager-6598bfb6c4-dv8rj","openshift-authentication-operator/authentication-operator-7c6989d6c4-tc4ht","openshift-cluster-storage-operator/csi-snapshot-controller-operator-5685fbc7d-97wkd","openshift-cluster-version/cluster-version-operator-8c9c967c7-98tv2","openshift-kube-controller-manager-operator/kube-controller-manager-operator-86d7cdfdfb-br96g"] Mar 13 12:53:52.643454 master-0 kubenswrapper[28149]: I0313 12:53:52.643430 28149 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="assisted-installer/assisted-installer-controller-bqsgz" Mar 13 12:53:52.646203 master-0 kubenswrapper[28149]: I0313 12:53:52.646146 28149 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Mar 13 12:53:52.647852 master-0 kubenswrapper[28149]: I0313 12:53:52.647820 28149 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Mar 13 12:53:52.647945 master-0 kubenswrapper[28149]: I0313 12:53:52.647926 28149 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Mar 13 12:53:52.650849 master-0 kubenswrapper[28149]: I0313 12:53:52.650803 28149 kubelet.go:2566] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" mirrorPodUID="34fee065-9e14-4e33-accf-5cf37f68d8c0" Mar 13 12:53:52.671342 master-0 kubenswrapper[28149]: I0313 12:53:52.664023 28149 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/installer-1-master-0" Mar 13 12:53:52.671342 master-0 kubenswrapper[28149]: I0313 12:53:52.664985 28149 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/installer-3-master-0" Mar 13 12:53:52.671342 master-0 kubenswrapper[28149]: I0313 12:53:52.666824 28149 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Mar 13 12:53:52.671342 master-0 kubenswrapper[28149]: I0313 12:53:52.667020 28149 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-1-master-0" Mar 13 12:53:52.671342 master-0 kubenswrapper[28149]: I0313 12:53:52.667341 28149 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/installer-1-master-0" Mar 13 12:53:52.671342 master-0 kubenswrapper[28149]: I0313 12:53:52.668395 28149 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Mar 13 12:53:52.671342 master-0 kubenswrapper[28149]: I0313 12:53:52.668795 28149 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Mar 13 12:53:52.680877 master-0 kubenswrapper[28149]: I0313 12:53:52.680250 28149 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Mar 13 12:53:52.680877 master-0 kubenswrapper[28149]: I0313 12:53:52.680437 28149 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Mar 13 12:53:52.680877 master-0 kubenswrapper[28149]: I0313 12:53:52.680542 28149 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Mar 13 12:53:52.680877 master-0 kubenswrapper[28149]: I0313 12:53:52.680609 28149 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Mar 13 12:53:52.680877 master-0 kubenswrapper[28149]: I0313 12:53:52.680633 28149 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Mar 13 12:53:52.680877 master-0 kubenswrapper[28149]: I0313 12:53:52.680684 28149 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Mar 13 12:53:52.680877 master-0 
kubenswrapper[28149]: I0313 12:53:52.680720 28149 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Mar 13 12:53:52.680877 master-0 kubenswrapper[28149]: I0313 12:53:52.680800 28149 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Mar 13 12:53:52.680877 master-0 kubenswrapper[28149]: I0313 12:53:52.680820 28149 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"whereabouts-config" Mar 13 12:53:52.680877 master-0 kubenswrapper[28149]: I0313 12:53:52.680881 28149 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Mar 13 12:53:52.681579 master-0 kubenswrapper[28149]: I0313 12:53:52.681312 28149 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Mar 13 12:53:52.681579 master-0 kubenswrapper[28149]: I0313 12:53:52.681412 28149 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Mar 13 12:53:52.681579 master-0 kubenswrapper[28149]: I0313 12:53:52.681427 28149 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Mar 13 12:53:52.681579 master-0 kubenswrapper[28149]: I0313 12:53:52.681557 28149 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Mar 13 12:53:52.681750 master-0 kubenswrapper[28149]: I0313 12:53:52.681712 28149 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-storage-operator"/"openshift-service-ca.crt" Mar 13 12:53:52.681750 master-0 kubenswrapper[28149]: I0313 12:53:52.681734 28149 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Mar 13 12:53:52.681901 master-0 kubenswrapper[28149]: I0313 12:53:52.681882 28149 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Mar 13 12:53:52.682025 master-0 kubenswrapper[28149]: I0313 12:53:52.681990 28149 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-olm-operator"/"kube-root-ca.crt" Mar 13 12:53:52.682106 master-0 kubenswrapper[28149]: I0313 12:53:52.682070 28149 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Mar 13 12:53:52.682217 master-0 kubenswrapper[28149]: I0313 12:53:52.682193 28149 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Mar 13 12:53:52.682364 master-0 kubenswrapper[28149]: I0313 12:53:52.682341 28149 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Mar 13 12:53:52.682426 master-0 kubenswrapper[28149]: I0313 12:53:52.682198 28149 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Mar 13 12:53:52.687064 master-0 kubenswrapper[28149]: I0313 12:53:52.683208 28149 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Mar 13 12:53:52.687064 master-0 kubenswrapper[28149]: I0313 12:53:52.680439 28149 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-storage-operator"/"kube-root-ca.crt" Mar 13 12:53:52.687064 master-0 kubenswrapper[28149]: I0313 12:53:52.683387 28149 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Mar 13 12:53:52.687064 master-0 kubenswrapper[28149]: I0313 12:53:52.683413 28149 
reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config"
Mar 13 12:53:52.687064 master-0 kubenswrapper[28149]: I0313 12:53:52.683506 28149 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt"
Mar 13 12:53:52.687064 master-0 kubenswrapper[28149]: I0313 12:53:52.683540 28149 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert"
Mar 13 12:53:52.687064 master-0 kubenswrapper[28149]: I0313 12:53:52.683578 28149 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt"
Mar 13 12:53:52.687064 master-0 kubenswrapper[28149]: I0313 12:53:52.683611 28149 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt"
Mar 13 12:53:52.687064 master-0 kubenswrapper[28149]: I0313 12:53:52.683627 28149 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config"
Mar 13 12:53:52.687064 master-0 kubenswrapper[28149]: I0313 12:53:52.683643 28149 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt"
Mar 13 12:53:52.687064 master-0 kubenswrapper[28149]: I0313 12:53:52.683682 28149 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt"
Mar 13 12:53:52.687064 master-0 kubenswrapper[28149]: I0313 12:53:52.683694 28149 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources"
Mar 13 12:53:52.687064 master-0 kubenswrapper[28149]: I0313 12:53:52.683722 28149 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config"
Mar 13 12:53:52.687064 master-0 kubenswrapper[28149]: I0313 12:53:52.683754 28149 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-olm-operator"/"openshift-service-ca.crt"
Mar 13 12:53:52.687064 master-0 kubenswrapper[28149]: I0313 12:53:52.683784 28149 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle"
Mar 13 12:53:52.687064 master-0 kubenswrapper[28149]: I0313 12:53:52.683799 28149 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert"
Mar 13 12:53:52.687064 master-0 kubenswrapper[28149]: I0313 12:53:52.683816 28149 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config"
Mar 13 12:53:52.687064 master-0 kubenswrapper[28149]: I0313 12:53:52.683542 28149 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config"
Mar 13 12:53:52.687064 master-0 kubenswrapper[28149]: I0313 12:53:52.683586 28149 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt"
Mar 13 12:53:52.687064 master-0 kubenswrapper[28149]: I0313 12:53:52.684721 28149 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt"
Mar 13 12:53:52.687064 master-0 kubenswrapper[28149]: I0313 12:53:52.684835 28149 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-node-tuning-operator"/"openshift-service-ca.crt"
Mar 13 12:53:52.687064 master-0 kubenswrapper[28149]: I0313 12:53:52.684923 28149 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"openshift-service-ca.crt"
Mar 13 12:53:52.687064 master-0 kubenswrapper[28149]: I0313 12:53:52.685006 28149 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt"
Mar 13 12:53:52.687064 master-0 kubenswrapper[28149]: I0313 12:53:52.685050 28149 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt"
Mar 13 12:53:52.687064 master-0 kubenswrapper[28149]: I0313 12:53:52.685188 28149 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt"
Mar 13 12:53:52.687064 master-0 kubenswrapper[28149]: I0313 12:53:52.685208 28149 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-node-tuning-operator"/"kube-root-ca.crt"
Mar 13 12:53:52.687064 master-0 kubenswrapper[28149]: I0313 12:53:52.685315 28149 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client"
Mar 13 12:53:52.687064 master-0 kubenswrapper[28149]: I0313 12:53:52.685338 28149 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-node-tuning-operator"/"node-tuning-operator-tls"
Mar 13 12:53:52.687064 master-0 kubenswrapper[28149]: I0313 12:53:52.685455 28149 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1"
Mar 13 12:53:52.687064 master-0 kubenswrapper[28149]: I0313 12:53:52.685473 28149 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-node-tuning-operator"/"performance-addon-operator-webhook-cert"
Mar 13 12:53:52.687064 master-0 kubenswrapper[28149]: I0313 12:53:52.685486 28149 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"cluster-monitoring-operator-tls"
Mar 13 12:53:52.687064 master-0 kubenswrapper[28149]: I0313 12:53:52.685577 28149 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt"
Mar 13 12:53:52.687064 master-0 kubenswrapper[28149]: I0313 12:53:52.685614 28149 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert"
Mar 13 12:53:52.687064 master-0 kubenswrapper[28149]: I0313 12:53:52.685654 28149 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert"
Mar 13 12:53:52.687064 master-0 kubenswrapper[28149]: I0313 12:53:52.685705 28149 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt"
Mar 13 12:53:52.687064 master-0 kubenswrapper[28149]: I0313 12:53:52.685751 28149 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt"
Mar 13 12:53:52.687064 master-0 kubenswrapper[28149]: I0313 12:53:52.685827 28149 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls"
Mar 13 12:53:52.687064 master-0 kubenswrapper[28149]: I0313 12:53:52.685838 28149 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert"
Mar 13 12:53:52.687064 master-0 kubenswrapper[28149]: I0313 12:53:52.685863 28149 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides"
Mar 13 12:53:52.687064 master-0 kubenswrapper[28149]: I0313 12:53:52.685949 28149 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config"
Mar 13 12:53:52.687064 master-0 kubenswrapper[28149]: I0313 12:53:52.686003 28149 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt"
Mar 13 12:53:52.687064 master-0 kubenswrapper[28149]: I0313 12:53:52.686051 28149 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt"
Mar 13 12:53:52.687064 master-0 kubenswrapper[28149]: I0313 12:53:52.686076 28149 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert"
Mar 13 12:53:52.687064 master-0 kubenswrapper[28149]: I0313 12:53:52.686180 28149 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt"
Mar 13 12:53:52.687064 master-0 kubenswrapper[28149]: I0313 12:53:52.686186 28149 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle"
Mar 13 12:53:52.687064 master-0 kubenswrapper[28149]: I0313 12:53:52.686212 28149 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret"
Mar 13 12:53:52.687064 master-0 kubenswrapper[28149]: I0313 12:53:52.686251 28149 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config"
Mar 13 12:53:52.687064 master-0 kubenswrapper[28149]: I0313 12:53:52.685949 28149 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config"
Mar 13 12:53:52.687064 master-0 kubenswrapper[28149]: I0313 12:53:52.686360 28149 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt"
Mar 13 12:53:52.687064 master-0 kubenswrapper[28149]: I0313 12:53:52.686494 28149 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert"
Mar 13 12:53:52.687064 master-0 kubenswrapper[28149]: I0313 12:53:52.686630 28149 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt"
Mar 13 12:53:52.687064 master-0 kubenswrapper[28149]: I0313 12:53:52.686694 28149 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert"
Mar 13 12:53:52.687064 master-0 kubenswrapper[28149]: I0313 12:53:52.686844 28149 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls"
Mar 13 12:53:52.687064 master-0 kubenswrapper[28149]: I0313 12:53:52.686889 28149 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt"
Mar 13 12:53:52.687064 master-0 kubenswrapper[28149]: I0313 12:53:52.687087 28149 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt"
Mar 13 12:53:52.687064 master-0 kubenswrapper[28149]: I0313 12:53:52.687153 28149 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert"
Mar 13 12:53:52.691808 master-0 kubenswrapper[28149]: I0313 12:53:52.689477 28149 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/installer-4-retry-1-master-0"
Mar 13 12:53:52.691808 master-0 kubenswrapper[28149]: I0313 12:53:52.690413 28149 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"telemetry-config"
Mar 13 12:53:52.691808 master-0 kubenswrapper[28149]: I0313 12:53:52.690540 28149 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/installer-2-master-0"
Mar 13 12:53:52.691808 master-0 kubenswrapper[28149]: I0313 12:53:52.690671 28149 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/installer-2-master-0"
Mar 13 12:53:52.691808 master-0 kubenswrapper[28149]: I0313 12:53:52.691084 28149 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/installer-4-master-0"
Mar 13 12:53:52.691808 master-0 kubenswrapper[28149]: I0313 12:53:52.691253 28149 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert"
Mar 13 12:53:52.694667 master-0 kubenswrapper[28149]: I0313 12:53:52.692299 28149 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1"
Mar 13 12:53:52.694667 master-0 kubenswrapper[28149]: I0313 12:53:52.693160 28149 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-olm-operator"/"cluster-olm-operator-serving-cert"
Mar 13 12:53:52.694667 master-0 kubenswrapper[28149]: I0313 12:53:52.693372 28149 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt"
Mar 13 12:53:52.694667 master-0 kubenswrapper[28149]: I0313 12:53:52.693485 28149 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt"
Mar 13 12:53:52.695207 master-0 kubenswrapper[28149]: I0313 12:53:52.695101 28149 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/installer-3-master-0"
Mar 13 12:53:52.696266 master-0 kubenswrapper[28149]: I0313 12:53:52.695731 28149 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls"
Mar 13 12:53:52.696266 master-0 kubenswrapper[28149]: I0313 12:53:52.695840 28149 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/installer-5-master-0"
Mar 13 12:53:52.696266 master-0 kubenswrapper[28149]: I0313 12:53:52.696037 28149 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-4-master-0"
Mar 13 12:53:52.704786 master-0 kubenswrapper[28149]: I0313 12:53:52.704657 28149 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script"
Mar 13 12:53:52.704786 master-0 kubenswrapper[28149]: I0313 12:53:52.704778 28149 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt"
Mar 13 12:53:52.704987 master-0 kubenswrapper[28149]: I0313 12:53:52.704840 28149 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca"
Mar 13 12:53:52.705016 master-0 kubenswrapper[28149]: I0313 12:53:52.705005 28149 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"kube-root-ca.crt"
Mar 13 12:53:52.705176 master-0 kubenswrapper[28149]: I0313 12:53:52.705112 28149 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt"
Mar 13 12:53:52.705984 master-0 kubenswrapper[28149]: I0313 12:53:52.705954 28149 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" podUID=""
Mar 13 12:53:52.716599 master-0 kubenswrapper[28149]: I0313 12:53:52.716563 28149 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle"
Mar 13 12:53:52.716953 master-0 kubenswrapper[28149]: I0313 12:53:52.716591 28149 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-node-tuning-operator"/"trusted-ca"
Mar 13 12:53:52.720663 master-0 kubenswrapper[28149]: I0313 12:53:52.720605 28149 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca"
Mar 13 12:53:52.722293 master-0 kubenswrapper[28149]: I0313 12:53:52.721785 28149 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle"
Mar 13 12:53:52.732248 master-0 kubenswrapper[28149]: I0313 12:53:52.726469 28149 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca"
Mar 13 12:53:52.732248 master-0 kubenswrapper[28149]: I0313 12:53:52.727850 28149 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-catalogd"/"kube-root-ca.crt"
Mar 13 12:53:52.732248 master-0 kubenswrapper[28149]: I0313 12:53:52.727925 28149 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca"
Mar 13 12:53:52.741034 master-0 kubenswrapper[28149]: I0313 12:53:52.740431 28149 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world"
Mar 13 12:53:52.751045 master-0 kubenswrapper[28149]: I0313 12:53:52.748564 28149 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert"
Mar 13 12:53:52.766075 master-0 kubenswrapper[28149]: I0313 12:53:52.766020 28149 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt"
Mar 13 12:53:52.787124 master-0 kubenswrapper[28149]: I0313 12:53:52.787063 28149 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides"
Mar 13 12:53:52.797064 master-0 kubenswrapper[28149]: I0313 12:53:52.796990 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-sysconfig\" (UniqueName: \"kubernetes.io/host-path/f83e0d3e-1f73-4727-8ee3-375cbb9e36f8-etc-sysconfig\") pod \"tuned-6tlzf\" (UID: \"f83e0d3e-1f73-4727-8ee3-375cbb9e36f8\") " pod="openshift-cluster-node-tuning-operator/tuned-6tlzf"
Mar 13 12:53:52.797064 master-0 kubenswrapper[28149]: I0313 12:53:52.797059 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bcf05594-4c10-4b54-a47c-d55e323f1f87-trusted-ca\") pod \"cluster-image-registry-operator-86d6d77c7c-q287n\" (UID: \"bcf05594-4c10-4b54-a47c-d55e323f1f87\") " pod="openshift-image-registry/cluster-image-registry-operator-86d6d77c7c-q287n"
Mar 13 12:53:52.797345 master-0 kubenswrapper[28149]: I0313 12:53:52.797093 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/b12a6f33-70df-4832-ac3b-0d2b94125fbf-machine-approver-tls\") pod \"machine-approver-754bdc9f9d-cwl2p\" (UID: \"b12a6f33-70df-4832-ac3b-0d2b94125fbf\") " pod="openshift-cluster-machine-approver/machine-approver-754bdc9f9d-cwl2p"
Mar 13 12:53:52.797345 master-0 kubenswrapper[28149]: I0313 12:53:52.797117 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7rkc4\" (UniqueName: \"kubernetes.io/projected/00ebdf06-1f44-40cd-87e5-54195188b6d4-kube-api-access-7rkc4\") pod \"catalogd-controller-manager-7f8b8b6f4c-8fjzg\" (UID: \"00ebdf06-1f44-40cd-87e5-54195188b6d4\") " pod="openshift-catalogd/catalogd-controller-manager-7f8b8b6f4c-8fjzg"
Mar 13 12:53:52.797345 master-0 kubenswrapper[28149]: I0313 12:53:52.797164 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-docker\" (UniqueName: \"kubernetes.io/host-path/915aabfe-1071-4bfc-b291-424304dfe7d8-etc-docker\") pod \"operator-controller-controller-manager-6598bfb6c4-dv8rj\" (UID: \"915aabfe-1071-4bfc-b291-424304dfe7d8\") " pod="openshift-operator-controller/operator-controller-controller-manager-6598bfb6c4-dv8rj"
Mar 13 12:53:52.797345 master-0 kubenswrapper[28149]: I0313 12:53:52.797189 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/f83e0d3e-1f73-4727-8ee3-375cbb9e36f8-run\") pod \"tuned-6tlzf\" (UID: \"f83e0d3e-1f73-4727-8ee3-375cbb9e36f8\") " pod="openshift-cluster-node-tuning-operator/tuned-6tlzf"
Mar 13 12:53:52.797345 master-0 kubenswrapper[28149]: I0313 12:53:52.797213 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rvrc7\" (UniqueName: \"kubernetes.io/projected/d39ee5d7-840e-4481-b0b9-baf34da2c7b1-kube-api-access-rvrc7\") pod \"cluster-samples-operator-664cb58b85-m5499\" (UID: \"d39ee5d7-840e-4481-b0b9-baf34da2c7b1\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-664cb58b85-m5499"
Mar 13 12:53:52.797345 master-0 kubenswrapper[28149]: I0313 12:53:52.797234 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d11f8baa-6e8e-4ac0-9b23-1c44efd0ab2a-config\") pod \"authentication-operator-7c6989d6c4-tc4ht\" (UID: \"d11f8baa-6e8e-4ac0-9b23-1c44efd0ab2a\") " pod="openshift-authentication-operator/authentication-operator-7c6989d6c4-tc4ht"
Mar 13 12:53:52.797345 master-0 kubenswrapper[28149]: I0313 12:53:52.797258 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4n75n\" (UniqueName: \"kubernetes.io/projected/f6992fed-b472-4a2d-a376-c5d72aa846d4-kube-api-access-4n75n\") pod \"packageserver-5c5f6764b5-96ktp\" (UID: \"f6992fed-b472-4a2d-a376-c5d72aa846d4\") " pod="openshift-operator-lifecycle-manager/packageserver-5c5f6764b5-96ktp"
Mar 13 12:53:52.797345 master-0 kubenswrapper[28149]: I0313 12:53:52.797280 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/d47a1118-c12f-4234-8c0f-1a2a47fa8a4f-images\") pod \"machine-config-operator-fdb5c78b5-6g8qj\" (UID: \"d47a1118-c12f-4234-8c0f-1a2a47fa8a4f\") " pod="openshift-machine-config-operator/machine-config-operator-fdb5c78b5-6g8qj"
Mar 13 12:53:52.797345 master-0 kubenswrapper[28149]: I0313 12:53:52.797305 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/f6992fed-b472-4a2d-a376-c5d72aa846d4-apiservice-cert\") pod \"packageserver-5c5f6764b5-96ktp\" (UID: \"f6992fed-b472-4a2d-a376-c5d72aa846d4\") " pod="openshift-operator-lifecycle-manager/packageserver-5c5f6764b5-96ktp"
Mar 13 12:53:52.797345 master-0 kubenswrapper[28149]: I0313 12:53:52.797326 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0da84bb7-e936-49a0-96b5-614a1305d6a4-config\") pod \"openshift-kube-scheduler-operator-5c74bfc494-m8mqj\" (UID: \"0da84bb7-e936-49a0-96b5-614a1305d6a4\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5c74bfc494-m8mqj"
Mar 13 12:53:52.797345 master-0 kubenswrapper[28149]: I0313 12:53:52.797347 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-certificates\" (UniqueName: \"kubernetes.io/secret/866b0545-e232-4c80-9fb6-549d313ac3fc-tls-certificates\") pod \"prometheus-operator-admission-webhook-8464df8497-pmzkf\" (UID: \"866b0545-e232-4c80-9fb6-549d313ac3fc\") " pod="openshift-monitoring/prometheus-operator-admission-webhook-8464df8497-pmzkf"
Mar 13 12:53:52.797802 master-0 kubenswrapper[28149]: I0313 12:53:52.797368 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9p9dz\" (UniqueName: \"kubernetes.io/projected/b12a6f33-70df-4832-ac3b-0d2b94125fbf-kube-api-access-9p9dz\") pod \"machine-approver-754bdc9f9d-cwl2p\" (UID: \"b12a6f33-70df-4832-ac3b-0d2b94125fbf\") " pod="openshift-cluster-machine-approver/machine-approver-754bdc9f9d-cwl2p"
Mar 13 12:53:52.797802 master-0 kubenswrapper[28149]: I0313 12:53:52.797390 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/50a2046b-092b-434c-92a2-579f4462c4fb-service-ca-bundle\") pod \"insights-operator-8f89dfddd-vxk8z\" (UID: \"50a2046b-092b-434c-92a2-579f4462c4fb\") " pod="openshift-insights/insights-operator-8f89dfddd-vxk8z"
Mar 13 12:53:52.797802 master-0 kubenswrapper[28149]: I0313 12:53:52.797412 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-custom-resource-state-configmap\" (UniqueName: \"kubernetes.io/configmap/5e4f10ca-6466-4ac0-aeb7-325e40473e04-kube-state-metrics-custom-resource-state-configmap\") pod \"kube-state-metrics-68b88f8cb5-blvhm\" (UID: \"5e4f10ca-6466-4ac0-aeb7-325e40473e04\") " pod="openshift-monitoring/kube-state-metrics-68b88f8cb5-blvhm"
Mar 13 12:53:52.797802 master-0 kubenswrapper[28149]: I0313 12:53:52.797436 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/4dd0fc2f-f2ee-4447-a747-04a178288cf0-metrics-tls\") pod \"network-operator-7c649bf6d4-kh6n9\" (UID: \"4dd0fc2f-f2ee-4447-a747-04a178288cf0\") " pod="openshift-network-operator/network-operator-7c649bf6d4-kh6n9"
Mar 13 12:53:52.797802 master-0 kubenswrapper[28149]: I0313 12:53:52.797461 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/ce3a655a-0684-4bc5-ac36-5878507537c7-multus-cni-dir\") pod \"multus-bnn7n\" (UID: \"ce3a655a-0684-4bc5-ac36-5878507537c7\") " pod="openshift-multus/multus-bnn7n"
Mar 13 12:53:52.797802 master-0 kubenswrapper[28149]: I0313 12:53:52.797487 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/ce3a655a-0684-4bc5-ac36-5878507537c7-host-var-lib-cni-multus\") pod \"multus-bnn7n\" (UID: \"ce3a655a-0684-4bc5-ac36-5878507537c7\") " pod="openshift-multus/multus-bnn7n"
Mar 13 12:53:52.797802 master-0 kubenswrapper[28149]: I0313 12:53:52.797510 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/2f48243b-6b05-4efa-8420-58a4419622bf-etcd-client\") pod \"apiserver-844bc54c88-vznst\" (UID: \"2f48243b-6b05-4efa-8420-58a4419622bf\") " pod="openshift-apiserver/apiserver-844bc54c88-vznst"
Mar 13 12:53:52.797802 master-0 kubenswrapper[28149]: I0313 12:53:52.797533 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloud-controller-manager-operator-tls\" (UniqueName: \"kubernetes.io/secret/00d8a21b-701c-4334-9dda-34c28b417f42-cloud-controller-manager-operator-tls\") pod \"cluster-cloud-controller-manager-operator-7c8df9b496-x2wlg\" (UID: \"00d8a21b-701c-4334-9dda-34c28b417f42\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7c8df9b496-x2wlg"
Mar 13 12:53:52.797802 master-0 kubenswrapper[28149]: I0313 12:53:52.797557 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/d6226325-c4d9-497e-8d19-a71adc66c5ac-run-systemd\") pod \"ovnkube-node-h8fwp\" (UID: \"d6226325-c4d9-497e-8d19-a71adc66c5ac\") " pod="openshift-ovn-kubernetes/ovnkube-node-h8fwp"
Mar 13 12:53:52.797802 master-0 kubenswrapper[28149]: I0313 12:53:52.797580 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d11f8baa-6e8e-4ac0-9b23-1c44efd0ab2a-serving-cert\") pod \"authentication-operator-7c6989d6c4-tc4ht\" (UID: \"d11f8baa-6e8e-4ac0-9b23-1c44efd0ab2a\") " pod="openshift-authentication-operator/authentication-operator-7c6989d6c4-tc4ht"
Mar 13 12:53:52.797802 master-0 kubenswrapper[28149]: I0313 12:53:52.797613 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/ce3a655a-0684-4bc5-ac36-5878507537c7-multus-daemon-config\") pod \"multus-bnn7n\" (UID: \"ce3a655a-0684-4bc5-ac36-5878507537c7\") " pod="openshift-multus/multus-bnn7n"
Mar 13 12:53:52.797802 master-0 kubenswrapper[28149]: I0313 12:53:52.797635 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2f48243b-6b05-4efa-8420-58a4419622bf-trusted-ca-bundle\") pod \"apiserver-844bc54c88-vznst\" (UID: \"2f48243b-6b05-4efa-8420-58a4419622bf\") " pod="openshift-apiserver/apiserver-844bc54c88-vznst"
Mar 13 12:53:52.797802 master-0 kubenswrapper[28149]: I0313 12:53:52.797656 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/5ae41cff-0949-47f8-aae9-ae133191476d-env-overrides\") pod \"ovnkube-control-plane-66b55d57d-5cww5\" (UID: \"5ae41cff-0949-47f8-aae9-ae133191476d\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-66b55d57d-5cww5"
Mar 13 12:53:52.797802 master-0 kubenswrapper[28149]: I0313 12:53:52.797678 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mz927\" (UniqueName: \"kubernetes.io/projected/081a08d6-a4fd-412c-81c3-1364c36f0f15-kube-api-access-mz927\") pod \"machine-config-server-6crtf\" (UID: \"081a08d6-a4fd-412c-81c3-1364c36f0f15\") " pod="openshift-machine-config-operator/machine-config-server-6crtf"
Mar 13 12:53:52.797802 master-0 kubenswrapper[28149]: I0313 12:53:52.797709 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/1f43b4e7-5cd1-46d2-a02e-0d846b2e5182-webhook-cert\") pod \"network-node-identity-qg8q5\" (UID: \"1f43b4e7-5cd1-46d2-a02e-0d846b2e5182\") " pod="openshift-network-node-identity/network-node-identity-qg8q5"
Mar 13 12:53:52.797802 master-0 kubenswrapper[28149]: I0313 12:53:52.797747 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/d6226325-c4d9-497e-8d19-a71adc66c5ac-env-overrides\") pod \"ovnkube-node-h8fwp\" (UID: \"d6226325-c4d9-497e-8d19-a71adc66c5ac\") " pod="openshift-ovn-kubernetes/ovnkube-node-h8fwp"
Mar 13 12:53:52.797802 master-0 kubenswrapper[28149]: I0313 12:53:52.797769 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9kxx9\" (UniqueName: \"kubernetes.io/projected/1cf388b6-e4a7-41db-a350-1b503214efd3-kube-api-access-9kxx9\") pod \"certified-operators-p9csk\" (UID: \"1cf388b6-e4a7-41db-a350-1b503214efd3\") " pod="openshift-marketplace/certified-operators-p9csk"
Mar 13 12:53:52.797802 master-0 kubenswrapper[28149]: I0313 12:53:52.797791 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/08e2bc8e-ca80-454c-81dc-211d122e32e0-host-slash\") pod \"iptables-alerter-qz6pg\" (UID: \"08e2bc8e-ca80-454c-81dc-211d122e32e0\") " pod="openshift-network-operator/iptables-alerter-qz6pg"
Mar 13 12:53:52.797802 master-0 kubenswrapper[28149]: I0313 12:53:52.797814 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/842251bd-238a-44ba-99fc-a356503f5d16-metrics-client-ca\") pod \"node-exporter-v4hdh\" (UID: \"842251bd-238a-44ba-99fc-a356503f5d16\") " pod="openshift-monitoring/node-exporter-v4hdh"
Mar 13 12:53:52.798320 master-0 kubenswrapper[28149]: I0313 12:53:52.797836 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/5e4f10ca-6466-4ac0-aeb7-325e40473e04-kube-state-metrics-kube-rbac-proxy-config\") pod \"kube-state-metrics-68b88f8cb5-blvhm\" (UID: \"5e4f10ca-6466-4ac0-aeb7-325e40473e04\") " pod="openshift-monitoring/kube-state-metrics-68b88f8cb5-blvhm"
Mar 13 12:53:52.798320 master-0 kubenswrapper[28149]: I0313 12:53:52.797860 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/d44112d1-b2a5-4b8d-b74d-1e91638508d5-cert\") pod \"cluster-autoscaler-operator-69576476f7-sqndx\" (UID: \"d44112d1-b2a5-4b8d-b74d-1e91638508d5\") " pod="openshift-machine-api/cluster-autoscaler-operator-69576476f7-sqndx"
Mar 13 12:53:52.798320 master-0 kubenswrapper[28149]: I0313 12:53:52.797892 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/f6992fed-b472-4a2d-a376-c5d72aa846d4-webhook-cert\") pod \"packageserver-5c5f6764b5-96ktp\" (UID: \"f6992fed-b472-4a2d-a376-c5d72aa846d4\") " pod="openshift-operator-lifecycle-manager/packageserver-5c5f6764b5-96ktp"
Mar 13 12:53:52.798320 master-0 kubenswrapper[28149]: I0313 12:53:52.797915 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/50a2046b-092b-434c-92a2-579f4462c4fb-serving-cert\") pod \"insights-operator-8f89dfddd-vxk8z\" (UID: \"50a2046b-092b-434c-92a2-579f4462c4fb\") " pod="openshift-insights/insights-operator-8f89dfddd-vxk8z"
Mar 13 12:53:52.798320 master-0 kubenswrapper[28149]: I0313 12:53:52.797938 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/d44112d1-b2a5-4b8d-b74d-1e91638508d5-auth-proxy-config\") pod \"cluster-autoscaler-operator-69576476f7-sqndx\" (UID: \"d44112d1-b2a5-4b8d-b74d-1e91638508d5\") " pod="openshift-machine-api/cluster-autoscaler-operator-69576476f7-sqndx"
Mar 13 12:53:52.798320 master-0 kubenswrapper[28149]: I0313 12:53:52.797962 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/c4477be6-bcff-407a-8033-b005e19bf5d6-audit-policies\") pod \"apiserver-787dbf5bb9-5645n\" (UID: \"c4477be6-bcff-407a-8033-b005e19bf5d6\") " pod="openshift-oauth-apiserver/apiserver-787dbf5bb9-5645n"
Mar 13 12:53:52.798320 master-0 kubenswrapper[28149]: I0313 12:53:52.797982 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/d47a1118-c12f-4234-8c0f-1a2a47fa8a4f-proxy-tls\") pod \"machine-config-operator-fdb5c78b5-6g8qj\" (UID: \"d47a1118-c12f-4234-8c0f-1a2a47fa8a4f\") " pod="openshift-machine-config-operator/machine-config-operator-fdb5c78b5-6g8qj"
Mar 13 12:53:52.798320 master-0 kubenswrapper[28149]: I0313 12:53:52.798004 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/ce3a655a-0684-4bc5-ac36-5878507537c7-host-var-lib-cni-bin\") pod \"multus-bnn7n\" (UID: \"ce3a655a-0684-4bc5-ac36-5878507537c7\") " pod="openshift-multus/multus-bnn7n"
Mar 13 12:53:52.798320 master-0 kubenswrapper[28149]: I0313 12:53:52.798028 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openshift-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/1081e565-b7d8-4b6e-9d41-5db36cfe094c-openshift-state-metrics-kube-rbac-proxy-config\") pod \"openshift-state-metrics-74cc79fd76-clrbz\" (UID: \"1081e565-b7d8-4b6e-9d41-5db36cfe094c\") " pod="openshift-monitoring/openshift-state-metrics-74cc79fd76-clrbz"
Mar 13 12:53:52.798320 master-0 kubenswrapper[28149]: I0313 12:53:52.798051 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p6h9f\" (UniqueName: \"kubernetes.io/projected/f83e0d3e-1f73-4727-8ee3-375cbb9e36f8-kube-api-access-p6h9f\") pod \"tuned-6tlzf\" (UID: \"f83e0d3e-1f73-4727-8ee3-375cbb9e36f8\") " pod="openshift-cluster-node-tuning-operator/tuned-6tlzf"
Mar 13 12:53:52.798320 master-0 kubenswrapper[28149]: I0313 12:53:52.798077 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/00d8a21b-701c-4334-9dda-34c28b417f42-images\") pod \"cluster-cloud-controller-manager-operator-7c8df9b496-x2wlg\" (UID: \"00d8a21b-701c-4334-9dda-34c28b417f42\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7c8df9b496-x2wlg"
Mar 13 12:53:52.798320 master-0 kubenswrapper[28149]: I0313 12:53:52.798100 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/00d8a21b-701c-4334-9dda-34c28b417f42-host-etc-kube\") pod \"cluster-cloud-controller-manager-operator-7c8df9b496-x2wlg\" (UID: \"00d8a21b-701c-4334-9dda-34c28b417f42\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7c8df9b496-x2wlg"
Mar 13 12:53:52.798320 master-0 kubenswrapper[28149]: I0313 12:53:52.798123 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/d6226325-c4d9-497e-8d19-a71adc66c5ac-systemd-units\") pod \"ovnkube-node-h8fwp\" (UID: \"d6226325-c4d9-497e-8d19-a71adc66c5ac\") " pod="openshift-ovn-kubernetes/ovnkube-node-h8fwp"
Mar 13 12:53:52.798320 master-0 kubenswrapper[28149]: I0313 12:53:52.798164 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/ce3a655a-0684-4bc5-ac36-5878507537c7-system-cni-dir\") pod \"multus-bnn7n\" (UID: \"ce3a655a-0684-4bc5-ac36-5878507537c7\") " pod="openshift-multus/multus-bnn7n"
Mar 13 12:53:52.798320 master-0 kubenswrapper[28149]: I0313 12:53:52.798187 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/ce3a655a-0684-4bc5-ac36-5878507537c7-os-release\") pod \"multus-bnn7n\" (UID: \"ce3a655a-0684-4bc5-ac36-5878507537c7\") " pod="openshift-multus/multus-bnn7n"
Mar 13 12:53:52.798320 master-0 kubenswrapper[28149]: I0313 12:53:52.798209 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/c0f3e81c-f61d-430a-98e8-82e3b283fc73-signing-key\") pod \"service-ca-84bfdbbb7f-4pksg\" (UID: \"c0f3e81c-f61d-430a-98e8-82e3b283fc73\") " pod="openshift-service-ca/service-ca-84bfdbbb7f-4pksg"
Mar 13 12:53:52.798320 master-0 kubenswrapper[28149]: I0313 12:53:52.798231 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/08e2bc8e-ca80-454c-81dc-211d122e32e0-iptables-alerter-script\") pod \"iptables-alerter-qz6pg\" (UID: \"08e2bc8e-ca80-454c-81dc-211d122e32e0\") " pod="openshift-network-operator/iptables-alerter-qz6pg"
Mar 13 12:53:52.798320 master-0 kubenswrapper[28149]: I0313 12:53:52.798256 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/c4477be6-bcff-407a-8033-b005e19bf5d6-audit-dir\") pod \"apiserver-787dbf5bb9-5645n\" (UID: \"c4477be6-bcff-407a-8033-b005e19bf5d6\") " pod="openshift-oauth-apiserver/apiserver-787dbf5bb9-5645n"
Mar 13 12:53:52.798320 master-0 kubenswrapper[28149]: I0313 12:53:52.798278 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/f83e0d3e-1f73-4727-8ee3-375cbb9e36f8-etc-kubernetes\") pod \"tuned-6tlzf\" (UID: \"f83e0d3e-1f73-4727-8ee3-375cbb9e36f8\") " pod="openshift-cluster-node-tuning-operator/tuned-6tlzf"
Mar 13 12:53:52.798320 master-0 kubenswrapper[28149]: I0313 12:53:52.798302 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-btf8q\" (UniqueName: \"kubernetes.io/projected/269aedfd-4274-4998-bd0d-603b67257666-kube-api-access-btf8q\") pod \"network-check-target-pnwsc\" (UID: \"269aedfd-4274-4998-bd0d-603b67257666\") " pod="openshift-network-diagnostics/network-check-target-pnwsc"
Mar 13 12:53:52.798320 master-0 kubenswrapper[28149]: I0313 12:53:52.798326 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7mmbc\" (UniqueName: \"kubernetes.io/projected/6a42098e-4633-456f-ace7-bd3ee3bb6707-kube-api-access-7mmbc\") pod \"network-check-source-7c67b67d47-5bb88\" (UID: \"6a42098e-4633-456f-ace7-bd3ee3bb6707\") " pod="openshift-network-diagnostics/network-check-source-7c67b67d47-5bb88"
Mar 13 12:53:52.798922 master-0 kubenswrapper[28149]: I0313 12:53:52.798350 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/c0f3e81c-f61d-430a-98e8-82e3b283fc73-signing-cabundle\") pod \"service-ca-84bfdbbb7f-4pksg\" (UID: \"c0f3e81c-f61d-430a-98e8-82e3b283fc73\") " pod="openshift-service-ca/service-ca-84bfdbbb7f-4pksg"
Mar 13 12:53:52.798922 master-0 kubenswrapper[28149]: I0313 12:53:52.798373 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/d6226325-c4d9-497e-8d19-a71adc66c5ac-etc-openvswitch\") pod \"ovnkube-node-h8fwp\" (UID: \"d6226325-c4d9-497e-8d19-a71adc66c5ac\") " pod="openshift-ovn-kubernetes/ovnkube-node-h8fwp"
Mar 13 12:53:52.798922 master-0 kubenswrapper[28149]: I0313 12:53:52.798395 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mkvfp\" (UniqueName: \"kubernetes.io/projected/d47a1118-c12f-4234-8c0f-1a2a47fa8a4f-kube-api-access-mkvfp\") pod \"machine-config-operator-fdb5c78b5-6g8qj\" (UID: \"d47a1118-c12f-4234-8c0f-1a2a47fa8a4f\") " pod="openshift-machine-config-operator/machine-config-operator-fdb5c78b5-6g8qj"
Mar 13
12:53:52.798922 master-0 kubenswrapper[28149]: I0313 12:53:52.798420 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operand-assets\" (UniqueName: \"kubernetes.io/empty-dir/887d261f-d07f-4ef0-a230-6568f47acf4d-operand-assets\") pod \"cluster-olm-operator-77899cf6d-7nvbn\" (UID: \"887d261f-d07f-4ef0-a230-6568f47acf4d\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-77899cf6d-7nvbn" Mar 13 12:53:52.798922 master-0 kubenswrapper[28149]: I0313 12:53:52.798445 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/2f48243b-6b05-4efa-8420-58a4419622bf-audit-dir\") pod \"apiserver-844bc54c88-vznst\" (UID: \"2f48243b-6b05-4efa-8420-58a4419622bf\") " pod="openshift-apiserver/apiserver-844bc54c88-vznst" Mar 13 12:53:52.798922 master-0 kubenswrapper[28149]: I0313 12:53:52.798470 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/842251bd-238a-44ba-99fc-a356503f5d16-sys\") pod \"node-exporter-v4hdh\" (UID: \"842251bd-238a-44ba-99fc-a356503f5d16\") " pod="openshift-monitoring/node-exporter-v4hdh" Mar 13 12:53:52.798922 master-0 kubenswrapper[28149]: I0313 12:53:52.798491 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-exporter-wtmp\" (UniqueName: \"kubernetes.io/host-path/842251bd-238a-44ba-99fc-a356503f5d16-node-exporter-wtmp\") pod \"node-exporter-v4hdh\" (UID: \"842251bd-238a-44ba-99fc-a356503f5d16\") " pod="openshift-monitoring/node-exporter-v4hdh" Mar 13 12:53:52.798922 master-0 kubenswrapper[28149]: I0313 12:53:52.798515 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/77ef7e49-eb85-4f5e-94d3-a6a8619a6243-config\") pod \"kube-controller-manager-operator-86d7cdfdfb-br96g\" (UID: 
\"77ef7e49-eb85-4f5e-94d3-a6a8619a6243\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-86d7cdfdfb-br96g" Mar 13 12:53:52.798922 master-0 kubenswrapper[28149]: I0313 12:53:52.798538 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vpfv9\" (UniqueName: \"kubernetes.io/projected/13710582-eac3-42e5-b28a-8b4fd3030af2-kube-api-access-vpfv9\") pod \"node-resolver-xpz47\" (UID: \"13710582-eac3-42e5-b28a-8b4fd3030af2\") " pod="openshift-dns/node-resolver-xpz47" Mar 13 12:53:52.798922 master-0 kubenswrapper[28149]: I0313 12:53:52.798563 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d11f8baa-6e8e-4ac0-9b23-1c44efd0ab2a-trusted-ca-bundle\") pod \"authentication-operator-7c6989d6c4-tc4ht\" (UID: \"d11f8baa-6e8e-4ac0-9b23-1c44efd0ab2a\") " pod="openshift-authentication-operator/authentication-operator-7c6989d6c4-tc4ht" Mar 13 12:53:52.798922 master-0 kubenswrapper[28149]: I0313 12:53:52.798588 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-exporter-textfile\" (UniqueName: \"kubernetes.io/empty-dir/842251bd-238a-44ba-99fc-a356503f5d16-node-exporter-textfile\") pod \"node-exporter-v4hdh\" (UID: \"842251bd-238a-44ba-99fc-a356503f5d16\") " pod="openshift-monitoring/node-exporter-v4hdh" Mar 13 12:53:52.798922 master-0 kubenswrapper[28149]: I0313 12:53:52.798611 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/d5f63b6b-990a-444b-a954-d718036f2f6c-images\") pod \"machine-api-operator-84bf6db4f9-mjxcz\" (UID: \"d5f63b6b-990a-444b-a954-d718036f2f6c\") " pod="openshift-machine-api/machine-api-operator-84bf6db4f9-mjxcz" Mar 13 12:53:52.798922 master-0 kubenswrapper[28149]: I0313 12:53:52.798632 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/1f43b4e7-5cd1-46d2-a02e-0d846b2e5182-ovnkube-identity-cm\") pod \"network-node-identity-qg8q5\" (UID: \"1f43b4e7-5cd1-46d2-a02e-0d846b2e5182\") " pod="openshift-network-node-identity/network-node-identity-qg8q5" Mar 13 12:53:52.798922 master-0 kubenswrapper[28149]: I0313 12:53:52.798654 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/d6226325-c4d9-497e-8d19-a71adc66c5ac-var-lib-openvswitch\") pod \"ovnkube-node-h8fwp\" (UID: \"d6226325-c4d9-497e-8d19-a71adc66c5ac\") " pod="openshift-ovn-kubernetes/ovnkube-node-h8fwp" Mar 13 12:53:52.798922 master-0 kubenswrapper[28149]: I0313 12:53:52.798679 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d11f8baa-6e8e-4ac0-9b23-1c44efd0ab2a-service-ca-bundle\") pod \"authentication-operator-7c6989d6c4-tc4ht\" (UID: \"d11f8baa-6e8e-4ac0-9b23-1c44efd0ab2a\") " pod="openshift-authentication-operator/authentication-operator-7c6989d6c4-tc4ht" Mar 13 12:53:52.798922 master-0 kubenswrapper[28149]: I0313 12:53:52.798705 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6x492\" (UniqueName: \"kubernetes.io/projected/32fe77f9-082d-491c-b3d0-9c10feaf4a8e-kube-api-access-6x492\") pod \"redhat-operators-5czx2\" (UID: \"32fe77f9-082d-491c-b3d0-9c10feaf4a8e\") " pod="openshift-marketplace/redhat-operators-5czx2" Mar 13 12:53:52.798922 master-0 kubenswrapper[28149]: I0313 12:53:52.798726 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-systemd\" (UniqueName: \"kubernetes.io/host-path/f83e0d3e-1f73-4727-8ee3-375cbb9e36f8-etc-systemd\") pod \"tuned-6tlzf\" (UID: \"f83e0d3e-1f73-4727-8ee3-375cbb9e36f8\") " pod="openshift-cluster-node-tuning-operator/tuned-6tlzf" Mar 13 
12:53:52.798922 master-0 kubenswrapper[28149]: I0313 12:53:52.798748 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-exporter-tls\" (UniqueName: \"kubernetes.io/secret/842251bd-238a-44ba-99fc-a356503f5d16-node-exporter-tls\") pod \"node-exporter-v4hdh\" (UID: \"842251bd-238a-44ba-99fc-a356503f5d16\") " pod="openshift-monitoring/node-exporter-v4hdh" Mar 13 12:53:52.798922 master-0 kubenswrapper[28149]: I0313 12:53:52.798773 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/604456a0-4997-43bc-87ef-283a002111fe-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-674cbfbd9d-zwtdz\" (UID: \"604456a0-4997-43bc-87ef-283a002111fe\") " pod="openshift-monitoring/cluster-monitoring-operator-674cbfbd9d-zwtdz" Mar 13 12:53:52.798922 master-0 kubenswrapper[28149]: I0313 12:53:52.798797 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/00d8a21b-701c-4334-9dda-34c28b417f42-auth-proxy-config\") pod \"cluster-cloud-controller-manager-operator-7c8df9b496-x2wlg\" (UID: \"00d8a21b-701c-4334-9dda-34c28b417f42\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7c8df9b496-x2wlg" Mar 13 12:53:52.798922 master-0 kubenswrapper[28149]: I0313 12:53:52.798818 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/d6226325-c4d9-497e-8d19-a71adc66c5ac-host-kubelet\") pod \"ovnkube-node-h8fwp\" (UID: \"d6226325-c4d9-497e-8d19-a71adc66c5ac\") " pod="openshift-ovn-kubernetes/ovnkube-node-h8fwp" Mar 13 12:53:52.798922 master-0 kubenswrapper[28149]: I0313 12:53:52.798840 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: 
\"kubernetes.io/host-path/d6226325-c4d9-497e-8d19-a71adc66c5ac-node-log\") pod \"ovnkube-node-h8fwp\" (UID: \"d6226325-c4d9-497e-8d19-a71adc66c5ac\") " pod="openshift-ovn-kubernetes/ovnkube-node-h8fwp" Mar 13 12:53:52.798922 master-0 kubenswrapper[28149]: I0313 12:53:52.798865 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0da84bb7-e936-49a0-96b5-614a1305d6a4-serving-cert\") pod \"openshift-kube-scheduler-operator-5c74bfc494-m8mqj\" (UID: \"0da84bb7-e936-49a0-96b5-614a1305d6a4\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5c74bfc494-m8mqj" Mar 13 12:53:52.798922 master-0 kubenswrapper[28149]: I0313 12:53:52.798888 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/ce3a655a-0684-4bc5-ac36-5878507537c7-host-run-netns\") pod \"multus-bnn7n\" (UID: \"ce3a655a-0684-4bc5-ac36-5878507537c7\") " pod="openshift-multus/multus-bnn7n" Mar 13 12:53:52.798922 master-0 kubenswrapper[28149]: I0313 12:53:52.798908 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/ce3a655a-0684-4bc5-ac36-5878507537c7-host-var-lib-kubelet\") pod \"multus-bnn7n\" (UID: \"ce3a655a-0684-4bc5-ac36-5878507537c7\") " pod="openshift-multus/multus-bnn7n" Mar 13 12:53:52.798922 master-0 kubenswrapper[28149]: I0313 12:53:52.798931 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-containers\" (UniqueName: \"kubernetes.io/host-path/915aabfe-1071-4bfc-b291-424304dfe7d8-etc-containers\") pod \"operator-controller-controller-manager-6598bfb6c4-dv8rj\" (UID: \"915aabfe-1071-4bfc-b291-424304dfe7d8\") " pod="openshift-operator-controller/operator-controller-controller-manager-6598bfb6c4-dv8rj" Mar 13 12:53:52.799662 master-0 
kubenswrapper[28149]: I0313 12:53:52.798962 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rw27v\" (UniqueName: \"kubernetes.io/projected/d5f63b6b-990a-444b-a954-d718036f2f6c-kube-api-access-rw27v\") pod \"machine-api-operator-84bf6db4f9-mjxcz\" (UID: \"d5f63b6b-990a-444b-a954-d718036f2f6c\") " pod="openshift-machine-api/machine-api-operator-84bf6db4f9-mjxcz" Mar 13 12:53:52.799662 master-0 kubenswrapper[28149]: I0313 12:53:52.798987 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/034aaf8e-95df-4171-bae4-e7abe58d15f7-serving-cert\") pod \"service-ca-operator-69b6fc6b88-vmscz\" (UID: \"034aaf8e-95df-4171-bae4-e7abe58d15f7\") " pod="openshift-service-ca-operator/service-ca-operator-69b6fc6b88-vmscz" Mar 13 12:53:52.799662 master-0 kubenswrapper[28149]: I0313 12:53:52.799021 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/747659a6-4a1e-43ed-bb8e-36da6e63b5a1-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-6686554ddc-btz8w\" (UID: \"747659a6-4a1e-43ed-bb8e-36da6e63b5a1\") " pod="openshift-machine-api/control-plane-machine-set-operator-6686554ddc-btz8w" Mar 13 12:53:52.799662 master-0 kubenswrapper[28149]: I0313 12:53:52.799044 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/81f8a7d8-b6a2-4522-91d3-bb524997ed0a-cert\") pod \"ingress-canary-h8skx\" (UID: \"81f8a7d8-b6a2-4522-91d3-bb524997ed0a\") " pod="openshift-ingress-canary/ingress-canary-h8skx" Mar 13 12:53:52.799662 master-0 kubenswrapper[28149]: I0313 12:53:52.799065 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/1cf388b6-e4a7-41db-a350-1b503214efd3-catalog-content\") pod \"certified-operators-p9csk\" (UID: \"1cf388b6-e4a7-41db-a350-1b503214efd3\") " pod="openshift-marketplace/certified-operators-p9csk" Mar 13 12:53:52.799662 master-0 kubenswrapper[28149]: I0313 12:53:52.799089 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b12a6f33-70df-4832-ac3b-0d2b94125fbf-config\") pod \"machine-approver-754bdc9f9d-cwl2p\" (UID: \"b12a6f33-70df-4832-ac3b-0d2b94125fbf\") " pod="openshift-cluster-machine-approver/machine-approver-754bdc9f9d-cwl2p" Mar 13 12:53:52.799662 master-0 kubenswrapper[28149]: I0313 12:53:52.799113 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c24hd\" (UniqueName: \"kubernetes.io/projected/3020d236-03e0-4916-97dd-f1085632ca43-kube-api-access-c24hd\") pod \"cluster-node-tuning-operator-66c7586884-cz8pc\" (UID: \"3020d236-03e0-4916-97dd-f1085632ca43\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-66c7586884-cz8pc" Mar 13 12:53:52.799662 master-0 kubenswrapper[28149]: I0313 12:53:52.799154 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"root\" (UniqueName: \"kubernetes.io/host-path/842251bd-238a-44ba-99fc-a356503f5d16-root\") pod \"node-exporter-v4hdh\" (UID: \"842251bd-238a-44ba-99fc-a356503f5d16\") " pod="openshift-monitoring/node-exporter-v4hdh" Mar 13 12:53:52.799662 master-0 kubenswrapper[28149]: I0313 12:53:52.799178 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/081a08d6-a4fd-412c-81c3-1364c36f0f15-node-bootstrap-token\") pod \"machine-config-server-6crtf\" (UID: \"081a08d6-a4fd-412c-81c3-1364c36f0f15\") " pod="openshift-machine-config-operator/machine-config-server-6crtf" Mar 13 12:53:52.799662 master-0 
kubenswrapper[28149]: I0313 12:53:52.799202 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloud-credential-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/87a5904a-55ca-416f-8aec-57a2b5194c5a-cloud-credential-operator-serving-cert\") pod \"cloud-credential-operator-55d85b7b47-rvp8c\" (UID: \"87a5904a-55ca-416f-8aec-57a2b5194c5a\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-55d85b7b47-rvp8c" Mar 13 12:53:52.799662 master-0 kubenswrapper[28149]: I0313 12:53:52.799225 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a454234a-6c8e-4916-81e8-c9e66cec9d31-proxy-ca-bundles\") pod \"controller-manager-54c79cbfcc-cxhmh\" (UID: \"a454234a-6c8e-4916-81e8-c9e66cec9d31\") " pod="openshift-controller-manager/controller-manager-54c79cbfcc-cxhmh" Mar 13 12:53:52.799662 master-0 kubenswrapper[28149]: I0313 12:53:52.799254 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8c62b15f-001a-4b64-b85f-348aefde5d1b-serving-cert\") pod \"openshift-controller-manager-operator-8565d84698-hj2wk\" (UID: \"8c62b15f-001a-4b64-b85f-348aefde5d1b\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-8565d84698-hj2wk" Mar 13 12:53:52.799662 master-0 kubenswrapper[28149]: I0313 12:53:52.799279 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/45925a5e-41ae-4c19-b586-3151c7677612-service-ca-bundle\") pod \"router-default-79f8cd6fdd-wtf6j\" (UID: \"45925a5e-41ae-4c19-b586-3151c7677612\") " pod="openshift-ingress/router-default-79f8cd6fdd-wtf6j" Mar 13 12:53:52.799662 master-0 kubenswrapper[28149]: I0313 12:53:52.799300 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/e0ce4c51-2b9f-410f-93e5-9c2ff718dd71-utilities\") pod \"redhat-marketplace-zh888\" (UID: \"e0ce4c51-2b9f-410f-93e5-9c2ff718dd71\") " pod="openshift-marketplace/redhat-marketplace-zh888" Mar 13 12:53:52.799662 master-0 kubenswrapper[28149]: I0313 12:53:52.799322 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/36ad5a83-5c32-4941-94e0-7af86ac5d462-webhook-certs\") pod \"multus-admission-controller-7769569c45-qz88j\" (UID: \"36ad5a83-5c32-4941-94e0-7af86ac5d462\") " pod="openshift-multus/multus-admission-controller-7769569c45-qz88j" Mar 13 12:53:52.799662 master-0 kubenswrapper[28149]: I0313 12:53:52.799345 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/f83e0d3e-1f73-4727-8ee3-375cbb9e36f8-sys\") pod \"tuned-6tlzf\" (UID: \"f83e0d3e-1f73-4727-8ee3-375cbb9e36f8\") " pod="openshift-cluster-node-tuning-operator/tuned-6tlzf" Mar 13 12:53:52.799662 master-0 kubenswrapper[28149]: I0313 12:53:52.799369 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j4hd6\" (UniqueName: \"kubernetes.io/projected/bcf05594-4c10-4b54-a47c-d55e323f1f87-kube-api-access-j4hd6\") pod \"cluster-image-registry-operator-86d6d77c7c-q287n\" (UID: \"bcf05594-4c10-4b54-a47c-d55e323f1f87\") " pod="openshift-image-registry/cluster-image-registry-operator-86d6d77c7c-q287n" Mar 13 12:53:52.799662 master-0 kubenswrapper[28149]: I0313 12:53:52.799393 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gd6q6\" (UniqueName: \"kubernetes.io/projected/81f8a7d8-b6a2-4522-91d3-bb524997ed0a-kube-api-access-gd6q6\") pod \"ingress-canary-h8skx\" (UID: \"81f8a7d8-b6a2-4522-91d3-bb524997ed0a\") " pod="openshift-ingress-canary/ingress-canary-h8skx" Mar 13 12:53:52.799662 master-0 kubenswrapper[28149]: I0313 
12:53:52.799415 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/ce3a655a-0684-4bc5-ac36-5878507537c7-etc-kubernetes\") pod \"multus-bnn7n\" (UID: \"ce3a655a-0684-4bc5-ac36-5878507537c7\") " pod="openshift-multus/multus-bnn7n" Mar 13 12:53:52.799662 master-0 kubenswrapper[28149]: I0313 12:53:52.799437 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-containers\" (UniqueName: \"kubernetes.io/host-path/00ebdf06-1f44-40cd-87e5-54195188b6d4-etc-containers\") pod \"catalogd-controller-manager-7f8b8b6f4c-8fjzg\" (UID: \"00ebdf06-1f44-40cd-87e5-54195188b6d4\") " pod="openshift-catalogd/catalogd-controller-manager-7f8b8b6f4c-8fjzg" Mar 13 12:53:52.799662 master-0 kubenswrapper[28149]: I0313 12:53:52.799462 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-storage-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/d7d67915-d31e-46dc-bb2e-1a6f689dd875-cluster-storage-operator-serving-cert\") pod \"cluster-storage-operator-6fbfc8dc8f-jhtsp\" (UID: \"d7d67915-d31e-46dc-bb2e-1a6f689dd875\") " pod="openshift-cluster-storage-operator/cluster-storage-operator-6fbfc8dc8f-jhtsp" Mar 13 12:53:52.799662 master-0 kubenswrapper[28149]: I0313 12:53:52.799485 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ef42b65e-2d92-46ac-baaf-30e213787781-config-volume\") pod \"dns-default-m7k6m\" (UID: \"ef42b65e-2d92-46ac-baaf-30e213787781\") " pod="openshift-dns/dns-default-m7k6m" Mar 13 12:53:52.799662 master-0 kubenswrapper[28149]: I0313 12:53:52.799509 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/5ae41cff-0949-47f8-aae9-ae133191476d-ovnkube-config\") pod \"ovnkube-control-plane-66b55d57d-5cww5\" 
(UID: \"5ae41cff-0949-47f8-aae9-ae133191476d\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-66b55d57d-5cww5" Mar 13 12:53:52.799662 master-0 kubenswrapper[28149]: I0313 12:53:52.799532 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-operator-tls\" (UniqueName: \"kubernetes.io/secret/be89c006-0c82-4728-9c79-210303e623dc-prometheus-operator-tls\") pod \"prometheus-operator-5ff8674d55-bvmsj\" (UID: \"be89c006-0c82-4728-9c79-210303e623dc\") " pod="openshift-monitoring/prometheus-operator-5ff8674d55-bvmsj" Mar 13 12:53:52.799662 master-0 kubenswrapper[28149]: I0313 12:53:52.799554 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m4tnq\" (UniqueName: \"kubernetes.io/projected/d11f8baa-6e8e-4ac0-9b23-1c44efd0ab2a-kube-api-access-m4tnq\") pod \"authentication-operator-7c6989d6c4-tc4ht\" (UID: \"d11f8baa-6e8e-4ac0-9b23-1c44efd0ab2a\") " pod="openshift-authentication-operator/authentication-operator-7c6989d6c4-tc4ht" Mar 13 12:53:52.799662 master-0 kubenswrapper[28149]: I0313 12:53:52.799580 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/1081e565-b7d8-4b6e-9d41-5db36cfe094c-metrics-client-ca\") pod \"openshift-state-metrics-74cc79fd76-clrbz\" (UID: \"1081e565-b7d8-4b6e-9d41-5db36cfe094c\") " pod="openshift-monitoring/openshift-state-metrics-74cc79fd76-clrbz" Mar 13 12:53:52.799662 master-0 kubenswrapper[28149]: I0313 12:53:52.799606 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mnpds\" (UniqueName: \"kubernetes.io/projected/50a2046b-092b-434c-92a2-579f4462c4fb-kube-api-access-mnpds\") pod \"insights-operator-8f89dfddd-vxk8z\" (UID: \"50a2046b-092b-434c-92a2-579f4462c4fb\") " pod="openshift-insights/insights-operator-8f89dfddd-vxk8z" Mar 13 12:53:52.799662 master-0 kubenswrapper[28149]: I0313 12:53:52.799634 28149 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mddhv\" (UniqueName: \"kubernetes.io/projected/87a5904a-55ca-416f-8aec-57a2b5194c5a-kube-api-access-mddhv\") pod \"cloud-credential-operator-55d85b7b47-rvp8c\" (UID: \"87a5904a-55ca-416f-8aec-57a2b5194c5a\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-55d85b7b47-rvp8c" Mar 13 12:53:52.799662 master-0 kubenswrapper[28149]: I0313 12:53:52.799659 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/089cfabc-9d3d-4260-bb16-8b5eaf73b3fa-serving-cert\") pod \"openshift-apiserver-operator-799b6db4d7-xchrj\" (UID: \"089cfabc-9d3d-4260-bb16-8b5eaf73b3fa\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-799b6db4d7-xchrj" Mar 13 12:53:52.799662 master-0 kubenswrapper[28149]: I0313 12:53:52.799682 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/ce3a655a-0684-4bc5-ac36-5878507537c7-host-run-multus-certs\") pod \"multus-bnn7n\" (UID: \"ce3a655a-0684-4bc5-ac36-5878507537c7\") " pod="openshift-multus/multus-bnn7n" Mar 13 12:53:52.800601 master-0 kubenswrapper[28149]: I0313 12:53:52.799707 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9q2qc\" (UniqueName: \"kubernetes.io/projected/f5775266-5e58-44ed-81cb-dfe3faf38add-kube-api-access-9q2qc\") pod \"kube-storage-version-migrator-operator-7f65c457f5-hrm82\" (UID: \"f5775266-5e58-44ed-81cb-dfe3faf38add\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-7f65c457f5-hrm82" Mar 13 12:53:52.800601 master-0 kubenswrapper[28149]: I0313 12:53:52.799732 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalogserver-certs\" (UniqueName: 
\"kubernetes.io/secret/00ebdf06-1f44-40cd-87e5-54195188b6d4-catalogserver-certs\") pod \"catalogd-controller-manager-7f8b8b6f4c-8fjzg\" (UID: \"00ebdf06-1f44-40cd-87e5-54195188b6d4\") " pod="openshift-catalogd/catalogd-controller-manager-7f8b8b6f4c-8fjzg" Mar 13 12:53:52.800601 master-0 kubenswrapper[28149]: I0313 12:53:52.799774 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cscql\" (UniqueName: \"kubernetes.io/projected/e0ce4c51-2b9f-410f-93e5-9c2ff718dd71-kube-api-access-cscql\") pod \"redhat-marketplace-zh888\" (UID: \"e0ce4c51-2b9f-410f-93e5-9c2ff718dd71\") " pod="openshift-marketplace/redhat-marketplace-zh888" Mar 13 12:53:52.800601 master-0 kubenswrapper[28149]: I0313 12:53:52.799798 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-modprobe-d\" (UniqueName: \"kubernetes.io/host-path/f83e0d3e-1f73-4727-8ee3-375cbb9e36f8-etc-modprobe-d\") pod \"tuned-6tlzf\" (UID: \"f83e0d3e-1f73-4727-8ee3-375cbb9e36f8\") " pod="openshift-cluster-node-tuning-operator/tuned-6tlzf" Mar 13 12:53:52.800601 master-0 kubenswrapper[28149]: I0313 12:53:52.799825 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-config\" (UniqueName: \"kubernetes.io/configmap/604456a0-4997-43bc-87ef-283a002111fe-telemetry-config\") pod \"cluster-monitoring-operator-674cbfbd9d-zwtdz\" (UID: \"604456a0-4997-43bc-87ef-283a002111fe\") " pod="openshift-monitoring/cluster-monitoring-operator-674cbfbd9d-zwtdz" Mar 13 12:53:52.800601 master-0 kubenswrapper[28149]: I0313 12:53:52.799854 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/29b6aa89-0416-4595-9deb-10b290521d86-metrics-certs\") pod \"network-metrics-daemon-r9lmb\" (UID: \"29b6aa89-0416-4595-9deb-10b290521d86\") " pod="openshift-multus/network-metrics-daemon-r9lmb" Mar 13 12:53:52.800601 master-0 
kubenswrapper[28149]: I0313 12:53:52.799879 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/d3d998ee-b26f-4e30-83bc-f94f8c68060a-marketplace-operator-metrics\") pod \"marketplace-operator-64bf9778cb-7qhr4\" (UID: \"d3d998ee-b26f-4e30-83bc-f94f8c68060a\") " pod="openshift-marketplace/marketplace-operator-64bf9778cb-7qhr4" Mar 13 12:53:52.800601 master-0 kubenswrapper[28149]: I0313 12:53:52.799904 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/77ef7e49-eb85-4f5e-94d3-a6a8619a6243-serving-cert\") pod \"kube-controller-manager-operator-86d7cdfdfb-br96g\" (UID: \"77ef7e49-eb85-4f5e-94d3-a6a8619a6243\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-86d7cdfdfb-br96g" Mar 13 12:53:52.800601 master-0 kubenswrapper[28149]: I0313 12:53:52.799927 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/089cfabc-9d3d-4260-bb16-8b5eaf73b3fa-config\") pod \"openshift-apiserver-operator-799b6db4d7-xchrj\" (UID: \"089cfabc-9d3d-4260-bb16-8b5eaf73b3fa\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-799b6db4d7-xchrj" Mar 13 12:53:52.800601 master-0 kubenswrapper[28149]: I0313 12:53:52.799950 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cbtjs\" (UniqueName: \"kubernetes.io/projected/29b6aa89-0416-4595-9deb-10b290521d86-kube-api-access-cbtjs\") pod \"network-metrics-daemon-r9lmb\" (UID: \"29b6aa89-0416-4595-9deb-10b290521d86\") " pod="openshift-multus/network-metrics-daemon-r9lmb" Mar 13 12:53:52.800601 master-0 kubenswrapper[28149]: I0313 12:53:52.799977 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: 
\"kubernetes.io/host-path/d6226325-c4d9-497e-8d19-a71adc66c5ac-host-run-netns\") pod \"ovnkube-node-h8fwp\" (UID: \"d6226325-c4d9-497e-8d19-a71adc66c5ac\") " pod="openshift-ovn-kubernetes/ovnkube-node-h8fwp" Mar 13 12:53:52.800601 master-0 kubenswrapper[28149]: I0313 12:53:52.800007 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tll9d\" (UniqueName: \"kubernetes.io/projected/45925a5e-41ae-4c19-b586-3151c7677612-kube-api-access-tll9d\") pod \"router-default-79f8cd6fdd-wtf6j\" (UID: \"45925a5e-41ae-4c19-b586-3151c7677612\") " pod="openshift-ingress/router-default-79f8cd6fdd-wtf6j" Mar 13 12:53:52.800601 master-0 kubenswrapper[28149]: I0313 12:53:52.800030 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/5ae41cff-0949-47f8-aae9-ae133191476d-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-66b55d57d-5cww5\" (UID: \"5ae41cff-0949-47f8-aae9-ae133191476d\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-66b55d57d-5cww5" Mar 13 12:53:52.800601 master-0 kubenswrapper[28149]: I0313 12:53:52.800055 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/ce3a655a-0684-4bc5-ac36-5878507537c7-hostroot\") pod \"multus-bnn7n\" (UID: \"ce3a655a-0684-4bc5-ac36-5878507537c7\") " pod="openshift-multus/multus-bnn7n" Mar 13 12:53:52.800601 master-0 kubenswrapper[28149]: I0313 12:53:52.800076 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/ce3a655a-0684-4bc5-ac36-5878507537c7-multus-conf-dir\") pod \"multus-bnn7n\" (UID: \"ce3a655a-0684-4bc5-ac36-5878507537c7\") " pod="openshift-multus/multus-bnn7n" Mar 13 12:53:52.800601 master-0 kubenswrapper[28149]: I0313 12:53:52.800105 28149 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-bwjz5\" (UniqueName: \"kubernetes.io/projected/4e279dcc-35e2-4503-babc-978ac208c150-kube-api-access-bwjz5\") pod \"csi-snapshot-controller-operator-5685fbc7d-97wkd\" (UID: \"4e279dcc-35e2-4503-babc-978ac208c150\") " pod="openshift-cluster-storage-operator/csi-snapshot-controller-operator-5685fbc7d-97wkd" Mar 13 12:53:52.800601 master-0 kubenswrapper[28149]: I0313 12:53:52.800131 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/2f48243b-6b05-4efa-8420-58a4419622bf-image-import-ca\") pod \"apiserver-844bc54c88-vznst\" (UID: \"2f48243b-6b05-4efa-8420-58a4419622bf\") " pod="openshift-apiserver/apiserver-844bc54c88-vznst" Mar 13 12:53:52.800601 master-0 kubenswrapper[28149]: I0313 12:53:52.800173 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2f48243b-6b05-4efa-8420-58a4419622bf-serving-cert\") pod \"apiserver-844bc54c88-vznst\" (UID: \"2f48243b-6b05-4efa-8420-58a4419622bf\") " pod="openshift-apiserver/apiserver-844bc54c88-vznst" Mar 13 12:53:52.800601 master-0 kubenswrapper[28149]: I0313 12:53:52.800197 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/2f48243b-6b05-4efa-8420-58a4419622bf-encryption-config\") pod \"apiserver-844bc54c88-vznst\" (UID: \"2f48243b-6b05-4efa-8420-58a4419622bf\") " pod="openshift-apiserver/apiserver-844bc54c88-vznst" Mar 13 12:53:52.800601 master-0 kubenswrapper[28149]: I0313 12:53:52.800444 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/2f48243b-6b05-4efa-8420-58a4419622bf-encryption-config\") pod \"apiserver-844bc54c88-vznst\" (UID: \"2f48243b-6b05-4efa-8420-58a4419622bf\") " 
pod="openshift-apiserver/apiserver-844bc54c88-vznst" Mar 13 12:53:52.800601 master-0 kubenswrapper[28149]: I0313 12:53:52.800564 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-exporter-textfile\" (UniqueName: \"kubernetes.io/empty-dir/842251bd-238a-44ba-99fc-a356503f5d16-node-exporter-textfile\") pod \"node-exporter-v4hdh\" (UID: \"842251bd-238a-44ba-99fc-a356503f5d16\") " pod="openshift-monitoring/node-exporter-v4hdh" Mar 13 12:53:52.800601 master-0 kubenswrapper[28149]: I0313 12:53:52.800581 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d11f8baa-6e8e-4ac0-9b23-1c44efd0ab2a-trusted-ca-bundle\") pod \"authentication-operator-7c6989d6c4-tc4ht\" (UID: \"d11f8baa-6e8e-4ac0-9b23-1c44efd0ab2a\") " pod="openshift-authentication-operator/authentication-operator-7c6989d6c4-tc4ht" Mar 13 12:53:52.801442 master-0 kubenswrapper[28149]: I0313 12:53:52.800993 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d11f8baa-6e8e-4ac0-9b23-1c44efd0ab2a-config\") pod \"authentication-operator-7c6989d6c4-tc4ht\" (UID: \"d11f8baa-6e8e-4ac0-9b23-1c44efd0ab2a\") " pod="openshift-authentication-operator/authentication-operator-7c6989d6c4-tc4ht" Mar 13 12:53:52.801442 master-0 kubenswrapper[28149]: I0313 12:53:52.801109 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/604456a0-4997-43bc-87ef-283a002111fe-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-674cbfbd9d-zwtdz\" (UID: \"604456a0-4997-43bc-87ef-283a002111fe\") " pod="openshift-monitoring/cluster-monitoring-operator-674cbfbd9d-zwtdz" Mar 13 12:53:52.801442 master-0 kubenswrapper[28149]: I0313 12:53:52.801299 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/0da84bb7-e936-49a0-96b5-614a1305d6a4-config\") pod \"openshift-kube-scheduler-operator-5c74bfc494-m8mqj\" (UID: \"0da84bb7-e936-49a0-96b5-614a1305d6a4\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5c74bfc494-m8mqj" Mar 13 12:53:52.801442 master-0 kubenswrapper[28149]: I0313 12:53:52.801416 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0da84bb7-e936-49a0-96b5-614a1305d6a4-serving-cert\") pod \"openshift-kube-scheduler-operator-5c74bfc494-m8mqj\" (UID: \"0da84bb7-e936-49a0-96b5-614a1305d6a4\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5c74bfc494-m8mqj" Mar 13 12:53:52.801671 master-0 kubenswrapper[28149]: I0313 12:53:52.801653 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/4dd0fc2f-f2ee-4447-a747-04a178288cf0-metrics-tls\") pod \"network-operator-7c649bf6d4-kh6n9\" (UID: \"4dd0fc2f-f2ee-4447-a747-04a178288cf0\") " pod="openshift-network-operator/network-operator-7c649bf6d4-kh6n9" Mar 13 12:53:52.801793 master-0 kubenswrapper[28149]: I0313 12:53:52.801767 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/034aaf8e-95df-4171-bae4-e7abe58d15f7-serving-cert\") pod \"service-ca-operator-69b6fc6b88-vmscz\" (UID: \"034aaf8e-95df-4171-bae4-e7abe58d15f7\") " pod="openshift-service-ca-operator/service-ca-operator-69b6fc6b88-vmscz" Mar 13 12:53:52.801942 master-0 kubenswrapper[28149]: I0313 12:53:52.801919 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/2f48243b-6b05-4efa-8420-58a4419622bf-etcd-client\") pod \"apiserver-844bc54c88-vznst\" (UID: \"2f48243b-6b05-4efa-8420-58a4419622bf\") " pod="openshift-apiserver/apiserver-844bc54c88-vznst" Mar 13 12:53:52.801980 master-0 
kubenswrapper[28149]: I0313 12:53:52.801959 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1cf388b6-e4a7-41db-a350-1b503214efd3-catalog-content\") pod \"certified-operators-p9csk\" (UID: \"1cf388b6-e4a7-41db-a350-1b503214efd3\") " pod="openshift-marketplace/certified-operators-p9csk" Mar 13 12:53:52.802260 master-0 kubenswrapper[28149]: I0313 12:53:52.802218 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d11f8baa-6e8e-4ac0-9b23-1c44efd0ab2a-serving-cert\") pod \"authentication-operator-7c6989d6c4-tc4ht\" (UID: \"d11f8baa-6e8e-4ac0-9b23-1c44efd0ab2a\") " pod="openshift-authentication-operator/authentication-operator-7c6989d6c4-tc4ht" Mar 13 12:53:52.802453 master-0 kubenswrapper[28149]: I0313 12:53:52.802414 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8c62b15f-001a-4b64-b85f-348aefde5d1b-serving-cert\") pod \"openshift-controller-manager-operator-8565d84698-hj2wk\" (UID: \"8c62b15f-001a-4b64-b85f-348aefde5d1b\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-8565d84698-hj2wk" Mar 13 12:53:52.802453 master-0 kubenswrapper[28149]: I0313 12:53:52.802434 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/ce3a655a-0684-4bc5-ac36-5878507537c7-multus-daemon-config\") pod \"multus-bnn7n\" (UID: \"ce3a655a-0684-4bc5-ac36-5878507537c7\") " pod="openshift-multus/multus-bnn7n" Mar 13 12:53:52.802557 master-0 kubenswrapper[28149]: I0313 12:53:52.802541 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e0ce4c51-2b9f-410f-93e5-9c2ff718dd71-utilities\") pod \"redhat-marketplace-zh888\" (UID: \"e0ce4c51-2b9f-410f-93e5-9c2ff718dd71\") " 
pod="openshift-marketplace/redhat-marketplace-zh888" Mar 13 12:53:52.802670 master-0 kubenswrapper[28149]: I0313 12:53:52.802656 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/5ae41cff-0949-47f8-aae9-ae133191476d-env-overrides\") pod \"ovnkube-control-plane-66b55d57d-5cww5\" (UID: \"5ae41cff-0949-47f8-aae9-ae133191476d\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-66b55d57d-5cww5" Mar 13 12:53:52.802916 master-0 kubenswrapper[28149]: I0313 12:53:52.802889 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-config\" (UniqueName: \"kubernetes.io/configmap/604456a0-4997-43bc-87ef-283a002111fe-telemetry-config\") pod \"cluster-monitoring-operator-674cbfbd9d-zwtdz\" (UID: \"604456a0-4997-43bc-87ef-283a002111fe\") " pod="openshift-monitoring/cluster-monitoring-operator-674cbfbd9d-zwtdz" Mar 13 12:53:52.802998 master-0 kubenswrapper[28149]: I0313 12:53:52.802975 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2f48243b-6b05-4efa-8420-58a4419622bf-trusted-ca-bundle\") pod \"apiserver-844bc54c88-vznst\" (UID: \"2f48243b-6b05-4efa-8420-58a4419622bf\") " pod="openshift-apiserver/apiserver-844bc54c88-vznst" Mar 13 12:53:52.803248 master-0 kubenswrapper[28149]: I0313 12:53:52.803208 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/5ae41cff-0949-47f8-aae9-ae133191476d-ovnkube-config\") pod \"ovnkube-control-plane-66b55d57d-5cww5\" (UID: \"5ae41cff-0949-47f8-aae9-ae133191476d\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-66b55d57d-5cww5" Mar 13 12:53:52.803323 master-0 kubenswrapper[28149]: I0313 12:53:52.803277 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-key\" (UniqueName: 
\"kubernetes.io/secret/c0f3e81c-f61d-430a-98e8-82e3b283fc73-signing-key\") pod \"service-ca-84bfdbbb7f-4pksg\" (UID: \"c0f3e81c-f61d-430a-98e8-82e3b283fc73\") " pod="openshift-service-ca/service-ca-84bfdbbb7f-4pksg" Mar 13 12:53:52.803482 master-0 kubenswrapper[28149]: I0313 12:53:52.803463 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/5ae41cff-0949-47f8-aae9-ae133191476d-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-66b55d57d-5cww5\" (UID: \"5ae41cff-0949-47f8-aae9-ae133191476d\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-66b55d57d-5cww5" Mar 13 12:53:52.803632 master-0 kubenswrapper[28149]: I0313 12:53:52.803617 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/08e2bc8e-ca80-454c-81dc-211d122e32e0-iptables-alerter-script\") pod \"iptables-alerter-qz6pg\" (UID: \"08e2bc8e-ca80-454c-81dc-211d122e32e0\") " pod="openshift-network-operator/iptables-alerter-qz6pg" Mar 13 12:53:52.803898 master-0 kubenswrapper[28149]: I0313 12:53:52.803863 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d11f8baa-6e8e-4ac0-9b23-1c44efd0ab2a-service-ca-bundle\") pod \"authentication-operator-7c6989d6c4-tc4ht\" (UID: \"d11f8baa-6e8e-4ac0-9b23-1c44efd0ab2a\") " pod="openshift-authentication-operator/authentication-operator-7c6989d6c4-tc4ht" Mar 13 12:53:52.805250 master-0 kubenswrapper[28149]: I0313 12:53:52.803133 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/29b6aa89-0416-4595-9deb-10b290521d86-metrics-certs\") pod \"network-metrics-daemon-r9lmb\" (UID: \"29b6aa89-0416-4595-9deb-10b290521d86\") " pod="openshift-multus/network-metrics-daemon-r9lmb" Mar 13 12:53:52.806029 master-0 kubenswrapper[28149]: I0313 
12:53:52.805972 28149 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Mar 13 12:53:52.807063 master-0 kubenswrapper[28149]: I0313 12:53:52.806306 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/d6226325-c4d9-497e-8d19-a71adc66c5ac-env-overrides\") pod \"ovnkube-node-h8fwp\" (UID: \"d6226325-c4d9-497e-8d19-a71adc66c5ac\") " pod="openshift-ovn-kubernetes/ovnkube-node-h8fwp" Mar 13 12:53:52.807063 master-0 kubenswrapper[28149]: I0313 12:53:52.806300 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/45925a5e-41ae-4c19-b586-3151c7677612-stats-auth\") pod \"router-default-79f8cd6fdd-wtf6j\" (UID: \"45925a5e-41ae-4c19-b586-3151c7677612\") " pod="openshift-ingress/router-default-79f8cd6fdd-wtf6j" Mar 13 12:53:52.807063 master-0 kubenswrapper[28149]: I0313 12:53:52.806362 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/5e4f10ca-6466-4ac0-aeb7-325e40473e04-kube-state-metrics-tls\") pod \"kube-state-metrics-68b88f8cb5-blvhm\" (UID: \"5e4f10ca-6466-4ac0-aeb7-325e40473e04\") " pod="openshift-monitoring/kube-state-metrics-68b88f8cb5-blvhm" Mar 13 12:53:52.807063 master-0 kubenswrapper[28149]: I0313 12:53:52.806395 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/1f43b4e7-5cd1-46d2-a02e-0d846b2e5182-webhook-cert\") pod \"network-node-identity-qg8q5\" (UID: \"1f43b4e7-5cd1-46d2-a02e-0d846b2e5182\") " pod="openshift-network-node-identity/network-node-identity-qg8q5" Mar 13 12:53:52.807063 master-0 kubenswrapper[28149]: I0313 12:53:52.806437 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/77ef7e49-eb85-4f5e-94d3-a6a8619a6243-serving-cert\") pod \"kube-controller-manager-operator-86d7cdfdfb-br96g\" (UID: \"77ef7e49-eb85-4f5e-94d3-a6a8619a6243\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-86d7cdfdfb-br96g" Mar 13 12:53:52.807063 master-0 kubenswrapper[28149]: I0313 12:53:52.806502 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/fc192c03-5aec-4507-a702-56bf98c96e9c-secret-metrics-client-certs\") pod \"metrics-server-567b9cf7f-cxnj2\" (UID: \"fc192c03-5aec-4507-a702-56bf98c96e9c\") " pod="openshift-monitoring/metrics-server-567b9cf7f-cxnj2" Mar 13 12:53:52.807063 master-0 kubenswrapper[28149]: I0313 12:53:52.806622 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operand-assets\" (UniqueName: \"kubernetes.io/empty-dir/887d261f-d07f-4ef0-a230-6568f47acf4d-operand-assets\") pod \"cluster-olm-operator-77899cf6d-7nvbn\" (UID: \"887d261f-d07f-4ef0-a230-6568f47acf4d\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-77899cf6d-7nvbn" Mar 13 12:53:52.807063 master-0 kubenswrapper[28149]: I0313 12:53:52.806645 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-server-audit-profiles\" (UniqueName: \"kubernetes.io/configmap/fc192c03-5aec-4507-a702-56bf98c96e9c-metrics-server-audit-profiles\") pod \"metrics-server-567b9cf7f-cxnj2\" (UID: \"fc192c03-5aec-4507-a702-56bf98c96e9c\") " pod="openshift-monitoring/metrics-server-567b9cf7f-cxnj2" Mar 13 12:53:52.807063 master-0 kubenswrapper[28149]: I0313 12:53:52.806679 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/c0f3e81c-f61d-430a-98e8-82e3b283fc73-signing-cabundle\") pod \"service-ca-84bfdbbb7f-4pksg\" (UID: \"c0f3e81c-f61d-430a-98e8-82e3b283fc73\") " 
pod="openshift-service-ca/service-ca-84bfdbbb7f-4pksg" Mar 13 12:53:52.807063 master-0 kubenswrapper[28149]: I0313 12:53:52.806724 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xstz5\" (UniqueName: \"kubernetes.io/projected/08e2bc8e-ca80-454c-81dc-211d122e32e0-kube-api-access-xstz5\") pod \"iptables-alerter-qz6pg\" (UID: \"08e2bc8e-ca80-454c-81dc-211d122e32e0\") " pod="openshift-network-operator/iptables-alerter-qz6pg" Mar 13 12:53:52.807063 master-0 kubenswrapper[28149]: I0313 12:53:52.806728 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/77ef7e49-eb85-4f5e-94d3-a6a8619a6243-config\") pod \"kube-controller-manager-operator-86d7cdfdfb-br96g\" (UID: \"77ef7e49-eb85-4f5e-94d3-a6a8619a6243\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-86d7cdfdfb-br96g" Mar 13 12:53:52.807063 master-0 kubenswrapper[28149]: I0313 12:53:52.806714 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2f48243b-6b05-4efa-8420-58a4419622bf-serving-cert\") pod \"apiserver-844bc54c88-vznst\" (UID: \"2f48243b-6b05-4efa-8420-58a4419622bf\") " pod="openshift-apiserver/apiserver-844bc54c88-vznst" Mar 13 12:53:52.807063 master-0 kubenswrapper[28149]: I0313 12:53:52.806784 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/152689b1-5875-4a9a-bb25-bee858523168-cnibin\") pod \"multus-additional-cni-plugins-78p2k\" (UID: \"152689b1-5875-4a9a-bb25-bee858523168\") " pod="openshift-multus/multus-additional-cni-plugins-78p2k" Mar 13 12:53:52.807063 master-0 kubenswrapper[28149]: I0313 12:53:52.806872 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: 
\"kubernetes.io/configmap/15b592d6-3c48-45d4-9172-d28632ae8995-etcd-ca\") pod \"etcd-operator-5884b9cd56-hjzms\" (UID: \"15b592d6-3c48-45d4-9172-d28632ae8995\") " pod="openshift-etcd-operator/etcd-operator-5884b9cd56-hjzms" Mar 13 12:53:52.807063 master-0 kubenswrapper[28149]: I0313 12:53:52.806916 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/185a10f7-2a4b-4171-b10d-4614cb8671bd-kubelet-dir\") pod \"installer-4-master-0\" (UID: \"185a10f7-2a4b-4171-b10d-4614cb8671bd\") " pod="openshift-kube-apiserver/installer-4-master-0" Mar 13 12:53:52.807063 master-0 kubenswrapper[28149]: I0313 12:53:52.807067 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-exporter-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/842251bd-238a-44ba-99fc-a356503f5d16-node-exporter-kube-rbac-proxy-config\") pod \"node-exporter-v4hdh\" (UID: \"842251bd-238a-44ba-99fc-a356503f5d16\") " pod="openshift-monitoring/node-exporter-v4hdh" Mar 13 12:53:52.811513 master-0 kubenswrapper[28149]: I0313 12:53:52.807103 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/2f48243b-6b05-4efa-8420-58a4419622bf-image-import-ca\") pod \"apiserver-844bc54c88-vznst\" (UID: \"2f48243b-6b05-4efa-8420-58a4419622bf\") " pod="openshift-apiserver/apiserver-844bc54c88-vznst" Mar 13 12:53:52.811513 master-0 kubenswrapper[28149]: I0313 12:53:52.807113 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/d5f63b6b-990a-444b-a954-d718036f2f6c-machine-api-operator-tls\") pod \"machine-api-operator-84bf6db4f9-mjxcz\" (UID: \"d5f63b6b-990a-444b-a954-d718036f2f6c\") " pod="openshift-machine-api/machine-api-operator-84bf6db4f9-mjxcz" Mar 13 12:53:52.811513 master-0 kubenswrapper[28149]: I0313 
12:53:52.807188 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/15b592d6-3c48-45d4-9172-d28632ae8995-etcd-ca\") pod \"etcd-operator-5884b9cd56-hjzms\" (UID: \"15b592d6-3c48-45d4-9172-d28632ae8995\") " pod="openshift-etcd-operator/etcd-operator-5884b9cd56-hjzms" Mar 13 12:53:52.811513 master-0 kubenswrapper[28149]: I0313 12:53:52.807203 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4xbrx\" (UniqueName: \"kubernetes.io/projected/5e4f10ca-6466-4ac0-aeb7-325e40473e04-kube-api-access-4xbrx\") pod \"kube-state-metrics-68b88f8cb5-blvhm\" (UID: \"5e4f10ca-6466-4ac0-aeb7-325e40473e04\") " pod="openshift-monitoring/kube-state-metrics-68b88f8cb5-blvhm" Mar 13 12:53:52.811513 master-0 kubenswrapper[28149]: I0313 12:53:52.807250 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/13f32761-b386-4f93-b3c0-b16ea53d338a-metrics-tls\") pod \"dns-operator-589895fbb7-mmwk7\" (UID: \"13f32761-b386-4f93-b3c0-b16ea53d338a\") " pod="openshift-dns-operator/dns-operator-589895fbb7-mmwk7" Mar 13 12:53:52.811513 master-0 kubenswrapper[28149]: I0313 12:53:52.807317 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d4q4x\" (UniqueName: \"kubernetes.io/projected/c4477be6-bcff-407a-8033-b005e19bf5d6-kube-api-access-d4q4x\") pod \"apiserver-787dbf5bb9-5645n\" (UID: \"c4477be6-bcff-407a-8033-b005e19bf5d6\") " pod="openshift-oauth-apiserver/apiserver-787dbf5bb9-5645n" Mar 13 12:53:52.811513 master-0 kubenswrapper[28149]: I0313 12:53:52.807401 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/2f79578c-bbfb-4968-893a-730deb4c01f9-metrics-tls\") pod \"ingress-operator-677db989d6-ckl2j\" (UID: \"2f79578c-bbfb-4968-893a-730deb4c01f9\") " 
pod="openshift-ingress-operator/ingress-operator-677db989d6-ckl2j" Mar 13 12:53:52.811513 master-0 kubenswrapper[28149]: I0313 12:53:52.807482 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/13f32761-b386-4f93-b3c0-b16ea53d338a-metrics-tls\") pod \"dns-operator-589895fbb7-mmwk7\" (UID: \"13f32761-b386-4f93-b3c0-b16ea53d338a\") " pod="openshift-dns-operator/dns-operator-589895fbb7-mmwk7" Mar 13 12:53:52.811513 master-0 kubenswrapper[28149]: I0313 12:53:52.807481 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-tuned\" (UniqueName: \"kubernetes.io/empty-dir/f83e0d3e-1f73-4727-8ee3-375cbb9e36f8-etc-tuned\") pod \"tuned-6tlzf\" (UID: \"f83e0d3e-1f73-4727-8ee3-375cbb9e36f8\") " pod="openshift-cluster-node-tuning-operator/tuned-6tlzf" Mar 13 12:53:52.811513 master-0 kubenswrapper[28149]: I0313 12:53:52.807522 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/d6226325-c4d9-497e-8d19-a71adc66c5ac-run-ovn\") pod \"ovnkube-node-h8fwp\" (UID: \"d6226325-c4d9-497e-8d19-a71adc66c5ac\") " pod="openshift-ovn-kubernetes/ovnkube-node-h8fwp" Mar 13 12:53:52.811513 master-0 kubenswrapper[28149]: I0313 12:53:52.807541 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cco-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/87a5904a-55ca-416f-8aec-57a2b5194c5a-cco-trusted-ca\") pod \"cloud-credential-operator-55d85b7b47-rvp8c\" (UID: \"87a5904a-55ca-416f-8aec-57a2b5194c5a\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-55d85b7b47-rvp8c" Mar 13 12:53:52.811513 master-0 kubenswrapper[28149]: I0313 12:53:52.807543 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-tuned\" (UniqueName: \"kubernetes.io/empty-dir/f83e0d3e-1f73-4727-8ee3-375cbb9e36f8-etc-tuned\") pod \"tuned-6tlzf\" (UID: 
\"f83e0d3e-1f73-4727-8ee3-375cbb9e36f8\") " pod="openshift-cluster-node-tuning-operator/tuned-6tlzf" Mar 13 12:53:52.811513 master-0 kubenswrapper[28149]: I0313 12:53:52.807564 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d5f63b6b-990a-444b-a954-d718036f2f6c-config\") pod \"machine-api-operator-84bf6db4f9-mjxcz\" (UID: \"d5f63b6b-990a-444b-a954-d718036f2f6c\") " pod="openshift-machine-api/machine-api-operator-84bf6db4f9-mjxcz" Mar 13 12:53:52.811513 master-0 kubenswrapper[28149]: I0313 12:53:52.807611 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/676b054a-e76f-425d-a6ff-3f1bea8b523e-etc-cvo-updatepayloads\") pod \"cluster-version-operator-8c9c967c7-98tv2\" (UID: \"676b054a-e76f-425d-a6ff-3f1bea8b523e\") " pod="openshift-cluster-version/cluster-version-operator-8c9c967c7-98tv2" Mar 13 12:53:52.811513 master-0 kubenswrapper[28149]: I0313 12:53:52.807644 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ec5ec2e2-f7b3-43a1-87da-fbbe0ee5b118-config\") pod \"kube-apiserver-operator-68bd585b-qxmnf\" (UID: \"ec5ec2e2-f7b3-43a1-87da-fbbe0ee5b118\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-68bd585b-qxmnf" Mar 13 12:53:52.811513 master-0 kubenswrapper[28149]: I0313 12:53:52.807654 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/2f79578c-bbfb-4968-893a-730deb4c01f9-metrics-tls\") pod \"ingress-operator-677db989d6-ckl2j\" (UID: \"2f79578c-bbfb-4968-893a-730deb4c01f9\") " pod="openshift-ingress-operator/ingress-operator-677db989d6-ckl2j" Mar 13 12:53:52.811513 master-0 kubenswrapper[28149]: I0313 12:53:52.807664 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-tr9gm\" (UniqueName: \"kubernetes.io/projected/4f9e6618-62b5-4181-b545-211461811140-kube-api-access-tr9gm\") pod \"community-operators-9x9vk\" (UID: \"4f9e6618-62b5-4181-b545-211461811140\") " pod="openshift-marketplace/community-operators-9x9vk" Mar 13 12:53:52.811513 master-0 kubenswrapper[28149]: I0313 12:53:52.807712 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/317af639-269e-4163-8e24-fcea468b9352-cert\") pod \"cluster-baremetal-operator-5cdb4c5598-l6jp5\" (UID: \"317af639-269e-4163-8e24-fcea468b9352\") " pod="openshift-machine-api/cluster-baremetal-operator-5cdb4c5598-l6jp5" Mar 13 12:53:52.811513 master-0 kubenswrapper[28149]: I0313 12:53:52.807739 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f0803181-4e37-43fa-8ddc-9c76d3f61817-serving-cert\") pod \"openshift-config-operator-64488f9d78-t8fb4\" (UID: \"f0803181-4e37-43fa-8ddc-9c76d3f61817\") " pod="openshift-config-operator/openshift-config-operator-64488f9d78-t8fb4" Mar 13 12:53:52.811513 master-0 kubenswrapper[28149]: I0313 12:53:52.807865 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/15b592d6-3c48-45d4-9172-d28632ae8995-serving-cert\") pod \"etcd-operator-5884b9cd56-hjzms\" (UID: \"15b592d6-3c48-45d4-9172-d28632ae8995\") " pod="openshift-etcd-operator/etcd-operator-5884b9cd56-hjzms" Mar 13 12:53:52.811513 master-0 kubenswrapper[28149]: I0313 12:53:52.807876 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ec5ec2e2-f7b3-43a1-87da-fbbe0ee5b118-config\") pod \"kube-apiserver-operator-68bd585b-qxmnf\" (UID: \"ec5ec2e2-f7b3-43a1-87da-fbbe0ee5b118\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-68bd585b-qxmnf" Mar 13 12:53:52.811513 master-0 
kubenswrapper[28149]: I0313 12:53:52.807891 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/50be3c2b-284b-4f60-b4ed-2cc7b4e528fa-proxy-tls\") pod \"machine-config-daemon-5h8rc\" (UID: \"50be3c2b-284b-4f60-b4ed-2cc7b4e528fa\") " pod="openshift-machine-config-operator/machine-config-daemon-5h8rc" Mar 13 12:53:52.811513 master-0 kubenswrapper[28149]: I0313 12:53:52.807920 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/projected/915aabfe-1071-4bfc-b291-424304dfe7d8-ca-certs\") pod \"operator-controller-controller-manager-6598bfb6c4-dv8rj\" (UID: \"915aabfe-1071-4bfc-b291-424304dfe7d8\") " pod="openshift-operator-controller/operator-controller-controller-manager-6598bfb6c4-dv8rj" Mar 13 12:53:52.811513 master-0 kubenswrapper[28149]: I0313 12:53:52.807949 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-metrics-server-tls\" (UniqueName: \"kubernetes.io/secret/fc192c03-5aec-4507-a702-56bf98c96e9c-secret-metrics-server-tls\") pod \"metrics-server-567b9cf7f-cxnj2\" (UID: \"fc192c03-5aec-4507-a702-56bf98c96e9c\") " pod="openshift-monitoring/metrics-server-567b9cf7f-cxnj2" Mar 13 12:53:52.811513 master-0 kubenswrapper[28149]: I0313 12:53:52.807976 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a454234a-6c8e-4916-81e8-c9e66cec9d31-config\") pod \"controller-manager-54c79cbfcc-cxhmh\" (UID: \"a454234a-6c8e-4916-81e8-c9e66cec9d31\") " pod="openshift-controller-manager/controller-manager-54c79cbfcc-cxhmh" Mar 13 12:53:52.811513 master-0 kubenswrapper[28149]: I0313 12:53:52.808004 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/f0803181-4e37-43fa-8ddc-9c76d3f61817-available-featuregates\") pod 
\"openshift-config-operator-64488f9d78-t8fb4\" (UID: \"f0803181-4e37-43fa-8ddc-9c76d3f61817\") " pod="openshift-config-operator/openshift-config-operator-64488f9d78-t8fb4"
Mar 13 12:53:52.811513 master-0 kubenswrapper[28149]: I0313 12:53:52.808030 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bcf05594-4c10-4b54-a47c-d55e323f1f87-image-registry-operator-tls\") pod \"cluster-image-registry-operator-86d6d77c7c-q287n\" (UID: \"bcf05594-4c10-4b54-a47c-d55e323f1f87\") " pod="openshift-image-registry/cluster-image-registry-operator-86d6d77c7c-q287n"
Mar 13 12:53:52.811513 master-0 kubenswrapper[28149]: I0313 12:53:52.808041 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f0803181-4e37-43fa-8ddc-9c76d3f61817-serving-cert\") pod \"openshift-config-operator-64488f9d78-t8fb4\" (UID: \"f0803181-4e37-43fa-8ddc-9c76d3f61817\") " pod="openshift-config-operator/openshift-config-operator-64488f9d78-t8fb4"
Mar 13 12:53:52.811513 master-0 kubenswrapper[28149]: I0313 12:53:52.808058 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/c4477be6-bcff-407a-8033-b005e19bf5d6-etcd-client\") pod \"apiserver-787dbf5bb9-5645n\" (UID: \"c4477be6-bcff-407a-8033-b005e19bf5d6\") " pod="openshift-oauth-apiserver/apiserver-787dbf5bb9-5645n"
Mar 13 12:53:52.811513 master-0 kubenswrapper[28149]: I0313 12:53:52.808085 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vg8tz\" (UniqueName: \"kubernetes.io/projected/089cfabc-9d3d-4260-bb16-8b5eaf73b3fa-kube-api-access-vg8tz\") pod \"openshift-apiserver-operator-799b6db4d7-xchrj\" (UID: \"089cfabc-9d3d-4260-bb16-8b5eaf73b3fa\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-799b6db4d7-xchrj"
Mar 13 12:53:52.811513 master-0 kubenswrapper[28149]: I0313 12:53:52.808111 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-baremetal-operator-tls\" (UniqueName: \"kubernetes.io/secret/317af639-269e-4163-8e24-fcea468b9352-cluster-baremetal-operator-tls\") pod \"cluster-baremetal-operator-5cdb4c5598-l6jp5\" (UID: \"317af639-269e-4163-8e24-fcea468b9352\") " pod="openshift-machine-api/cluster-baremetal-operator-5cdb4c5598-l6jp5"
Mar 13 12:53:52.811513 master-0 kubenswrapper[28149]: I0313 12:53:52.808172 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/676b054a-e76f-425d-a6ff-3f1bea8b523e-serving-cert\") pod \"cluster-version-operator-8c9c967c7-98tv2\" (UID: \"676b054a-e76f-425d-a6ff-3f1bea8b523e\") " pod="openshift-cluster-version/cluster-version-operator-8c9c967c7-98tv2"
Mar 13 12:53:52.811513 master-0 kubenswrapper[28149]: I0313 12:53:52.808201 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/d6226325-c4d9-497e-8d19-a71adc66c5ac-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-h8fwp\" (UID: \"d6226325-c4d9-497e-8d19-a71adc66c5ac\") " pod="openshift-ovn-kubernetes/ovnkube-node-h8fwp"
Mar 13 12:53:52.811513 master-0 kubenswrapper[28149]: I0313 12:53:52.808230 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c69h2\" (UniqueName: \"kubernetes.io/projected/fc192c03-5aec-4507-a702-56bf98c96e9c-kube-api-access-c69h2\") pod \"metrics-server-567b9cf7f-cxnj2\" (UID: \"fc192c03-5aec-4507-a702-56bf98c96e9c\") " pod="openshift-monitoring/metrics-server-567b9cf7f-cxnj2"
Mar 13 12:53:52.811513 master-0 kubenswrapper[28149]: I0313 12:53:52.808247 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/15b592d6-3c48-45d4-9172-d28632ae8995-serving-cert\") pod \"etcd-operator-5884b9cd56-hjzms\" (UID: \"15b592d6-3c48-45d4-9172-d28632ae8995\") " pod="openshift-etcd-operator/etcd-operator-5884b9cd56-hjzms"
Mar 13 12:53:52.811513 master-0 kubenswrapper[28149]: I0313 12:53:52.808255 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m2p67\" (UniqueName: \"kubernetes.io/projected/13f32761-b386-4f93-b3c0-b16ea53d338a-kube-api-access-m2p67\") pod \"dns-operator-589895fbb7-mmwk7\" (UID: \"13f32761-b386-4f93-b3c0-b16ea53d338a\") " pod="openshift-dns-operator/dns-operator-589895fbb7-mmwk7"
Mar 13 12:53:52.811513 master-0 kubenswrapper[28149]: I0313 12:53:52.808340 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/50be3c2b-284b-4f60-b4ed-2cc7b4e528fa-rootfs\") pod \"machine-config-daemon-5h8rc\" (UID: \"50be3c2b-284b-4f60-b4ed-2cc7b4e528fa\") " pod="openshift-machine-config-operator/machine-config-daemon-5h8rc"
Mar 13 12:53:52.811513 master-0 kubenswrapper[28149]: I0313 12:53:52.808367 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/32fe77f9-082d-491c-b3d0-9c10feaf4a8e-catalog-content\") pod \"redhat-operators-5czx2\" (UID: \"32fe77f9-082d-491c-b3d0-9c10feaf4a8e\") " pod="openshift-marketplace/redhat-operators-5czx2"
Mar 13 12:53:52.811513 master-0 kubenswrapper[28149]: I0313 12:53:52.808382 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/f0803181-4e37-43fa-8ddc-9c76d3f61817-available-featuregates\") pod \"openshift-config-operator-64488f9d78-t8fb4\" (UID: \"f0803181-4e37-43fa-8ddc-9c76d3f61817\") " pod="openshift-config-operator/openshift-config-operator-64488f9d78-t8fb4"
Mar 13 12:53:52.811513 master-0 kubenswrapper[28149]: I0313 12:53:52.808407 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/50a2046b-092b-434c-92a2-579f4462c4fb-trusted-ca-bundle\") pod \"insights-operator-8f89dfddd-vxk8z\" (UID: \"50a2046b-092b-434c-92a2-579f4462c4fb\") " pod="openshift-insights/insights-operator-8f89dfddd-vxk8z"
Mar 13 12:53:52.811513 master-0 kubenswrapper[28149]: I0313 12:53:52.808430 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/d6226325-c4d9-497e-8d19-a71adc66c5ac-ovnkube-script-lib\") pod \"ovnkube-node-h8fwp\" (UID: \"d6226325-c4d9-497e-8d19-a71adc66c5ac\") " pod="openshift-ovn-kubernetes/ovnkube-node-h8fwp"
Mar 13 12:53:52.811513 master-0 kubenswrapper[28149]: I0313 12:53:52.808448 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a454234a-6c8e-4916-81e8-c9e66cec9d31-serving-cert\") pod \"controller-manager-54c79cbfcc-cxhmh\" (UID: \"a454234a-6c8e-4916-81e8-c9e66cec9d31\") " pod="openshift-controller-manager/controller-manager-54c79cbfcc-cxhmh"
Mar 13 12:53:52.811513 master-0 kubenswrapper[28149]: I0313 12:53:52.808472 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2f48243b-6b05-4efa-8420-58a4419622bf-config\") pod \"apiserver-844bc54c88-vznst\" (UID: \"2f48243b-6b05-4efa-8420-58a4419622bf\") " pod="openshift-apiserver/apiserver-844bc54c88-vznst"
Mar 13 12:53:52.811513 master-0 kubenswrapper[28149]: I0313 12:53:52.808495 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/d6226325-c4d9-497e-8d19-a71adc66c5ac-ovn-node-metrics-cert\") pod \"ovnkube-node-h8fwp\" (UID: \"d6226325-c4d9-497e-8d19-a71adc66c5ac\") " pod="openshift-ovn-kubernetes/ovnkube-node-h8fwp"
Mar 13 12:53:52.811513 master-0 kubenswrapper[28149]: I0313 12:53:52.808517 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/676b054a-e76f-425d-a6ff-3f1bea8b523e-service-ca\") pod \"cluster-version-operator-8c9c967c7-98tv2\" (UID: \"676b054a-e76f-425d-a6ff-3f1bea8b523e\") " pod="openshift-cluster-version/cluster-version-operator-8c9c967c7-98tv2"
Mar 13 12:53:52.811513 master-0 kubenswrapper[28149]: I0313 12:53:52.808533 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/d39ee5d7-840e-4481-b0b9-baf34da2c7b1-samples-operator-tls\") pod \"cluster-samples-operator-664cb58b85-m5499\" (UID: \"d39ee5d7-840e-4481-b0b9-baf34da2c7b1\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-664cb58b85-m5499"
Mar 13 12:53:52.811513 master-0 kubenswrapper[28149]: I0313 12:53:52.808555 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fc192c03-5aec-4507-a702-56bf98c96e9c-configmap-kubelet-serving-ca-bundle\") pod \"metrics-server-567b9cf7f-cxnj2\" (UID: \"fc192c03-5aec-4507-a702-56bf98c96e9c\") " pod="openshift-monitoring/metrics-server-567b9cf7f-cxnj2"
Mar 13 12:53:52.811513 master-0 kubenswrapper[28149]: I0313 12:53:52.808588 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bcf05594-4c10-4b54-a47c-d55e323f1f87-image-registry-operator-tls\") pod \"cluster-image-registry-operator-86d6d77c7c-q287n\" (UID: \"bcf05594-4c10-4b54-a47c-d55e323f1f87\") " pod="openshift-image-registry/cluster-image-registry-operator-86d6d77c7c-q287n"
Mar 13 12:53:52.811513 master-0 kubenswrapper[28149]: I0313 12:53:52.808594 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/ce3a655a-0684-4bc5-ac36-5878507537c7-cnibin\") pod \"multus-bnn7n\" (UID: \"ce3a655a-0684-4bc5-ac36-5878507537c7\") " pod="openshift-multus/multus-bnn7n"
Mar 13 12:53:52.811513 master-0 kubenswrapper[28149]: I0313 12:53:52.808718 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/32fe77f9-082d-491c-b3d0-9c10feaf4a8e-catalog-content\") pod \"redhat-operators-5czx2\" (UID: \"32fe77f9-082d-491c-b3d0-9c10feaf4a8e\") " pod="openshift-marketplace/redhat-operators-5czx2"
Mar 13 12:53:52.811513 master-0 kubenswrapper[28149]: I0313 12:53:52.808931 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2f48243b-6b05-4efa-8420-58a4419622bf-config\") pod \"apiserver-844bc54c88-vznst\" (UID: \"2f48243b-6b05-4efa-8420-58a4419622bf\") " pod="openshift-apiserver/apiserver-844bc54c88-vznst"
Mar 13 12:53:52.811513 master-0 kubenswrapper[28149]: I0313 12:53:52.808973 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8sk7j\" (UniqueName: \"kubernetes.io/projected/604456a0-4997-43bc-87ef-283a002111fe-kube-api-access-8sk7j\") pod \"cluster-monitoring-operator-674cbfbd9d-zwtdz\" (UID: \"604456a0-4997-43bc-87ef-283a002111fe\") " pod="openshift-monitoring/cluster-monitoring-operator-674cbfbd9d-zwtdz"
Mar 13 12:53:52.811513 master-0 kubenswrapper[28149]: I0313 12:53:52.808999 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/d6226325-c4d9-497e-8d19-a71adc66c5ac-run-openvswitch\") pod \"ovnkube-node-h8fwp\" (UID: \"d6226325-c4d9-497e-8d19-a71adc66c5ac\") " pod="openshift-ovn-kubernetes/ovnkube-node-h8fwp"
Mar 13 12:53:52.811513 master-0 kubenswrapper[28149]: I0313 12:53:52.809024 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/5e4f10ca-6466-4ac0-aeb7-325e40473e04-metrics-client-ca\") pod \"kube-state-metrics-68b88f8cb5-blvhm\" (UID: \"5e4f10ca-6466-4ac0-aeb7-325e40473e04\") " pod="openshift-monitoring/kube-state-metrics-68b88f8cb5-blvhm"
Mar 13 12:53:52.811513 master-0 kubenswrapper[28149]: I0313 12:53:52.809047 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/185a10f7-2a4b-4171-b10d-4614cb8671bd-kube-api-access\") pod \"installer-4-master-0\" (UID: \"185a10f7-2a4b-4171-b10d-4614cb8671bd\") " pod="openshift-kube-apiserver/installer-4-master-0"
Mar 13 12:53:52.811513 master-0 kubenswrapper[28149]: I0313 12:53:52.809072 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e0ce4c51-2b9f-410f-93e5-9c2ff718dd71-catalog-content\") pod \"redhat-marketplace-zh888\" (UID: \"e0ce4c51-2b9f-410f-93e5-9c2ff718dd71\") " pod="openshift-marketplace/redhat-marketplace-zh888"
Mar 13 12:53:52.811513 master-0 kubenswrapper[28149]: I0313 12:53:52.809097 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/4dd0fc2f-f2ee-4447-a747-04a178288cf0-host-etc-kube\") pod \"network-operator-7c649bf6d4-kh6n9\" (UID: \"4dd0fc2f-f2ee-4447-a747-04a178288cf0\") " pod="openshift-network-operator/network-operator-7c649bf6d4-kh6n9"
Mar 13 12:53:52.811513 master-0 kubenswrapper[28149]: I0313 12:53:52.809122 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5w5r2\" (UniqueName: \"kubernetes.io/projected/034aaf8e-95df-4171-bae4-e7abe58d15f7-kube-api-access-5w5r2\") pod \"service-ca-operator-69b6fc6b88-vmscz\" (UID: \"034aaf8e-95df-4171-bae4-e7abe58d15f7\") " pod="openshift-service-ca-operator/service-ca-operator-69b6fc6b88-vmscz"
Mar 13 12:53:52.811513 master-0 kubenswrapper[28149]: I0313 12:53:52.809164 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fc192c03-5aec-4507-a702-56bf98c96e9c-client-ca-bundle\") pod \"metrics-server-567b9cf7f-cxnj2\" (UID: \"fc192c03-5aec-4507-a702-56bf98c96e9c\") " pod="openshift-monitoring/metrics-server-567b9cf7f-cxnj2"
Mar 13 12:53:52.811513 master-0 kubenswrapper[28149]: I0313 12:53:52.809191 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/50be3c2b-284b-4f60-b4ed-2cc7b4e528fa-mcd-auth-proxy-config\") pod \"machine-config-daemon-5h8rc\" (UID: \"50be3c2b-284b-4f60-b4ed-2cc7b4e528fa\") " pod="openshift-machine-config-operator/machine-config-daemon-5h8rc"
Mar 13 12:53:52.811513 master-0 kubenswrapper[28149]: I0313 12:53:52.809213 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/b12a6f33-70df-4832-ac3b-0d2b94125fbf-auth-proxy-config\") pod \"machine-approver-754bdc9f9d-cwl2p\" (UID: \"b12a6f33-70df-4832-ac3b-0d2b94125fbf\") " pod="openshift-cluster-machine-approver/machine-approver-754bdc9f9d-cwl2p"
Mar 13 12:53:52.811513 master-0 kubenswrapper[28149]: I0313 12:53:52.809213 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e0ce4c51-2b9f-410f-93e5-9c2ff718dd71-catalog-content\") pod \"redhat-marketplace-zh888\" (UID: \"e0ce4c51-2b9f-410f-93e5-9c2ff718dd71\") " pod="openshift-marketplace/redhat-marketplace-zh888"
Mar 13 12:53:52.811513 master-0 kubenswrapper[28149]: I0313 12:53:52.809245 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/18ffa620-dacc-4b09-be04-2c325f860813-serving-cert\") pod \"route-controller-manager-68c48d4f7d-k7drw\" (UID: \"18ffa620-dacc-4b09-be04-2c325f860813\") " pod="openshift-route-controller-manager/route-controller-manager-68c48d4f7d-k7drw"
Mar 13 12:53:52.811513 master-0 kubenswrapper[28149]: I0313 12:53:52.809275 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/2f79578c-bbfb-4968-893a-730deb4c01f9-bound-sa-token\") pod \"ingress-operator-677db989d6-ckl2j\" (UID: \"2f79578c-bbfb-4968-893a-730deb4c01f9\") " pod="openshift-ingress-operator/ingress-operator-677db989d6-ckl2j"
Mar 13 12:53:52.811513 master-0 kubenswrapper[28149]: I0313 12:53:52.809300 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/3020d236-03e0-4916-97dd-f1085632ca43-apiservice-cert\") pod \"cluster-node-tuning-operator-66c7586884-cz8pc\" (UID: \"3020d236-03e0-4916-97dd-f1085632ca43\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-66c7586884-cz8pc"
Mar 13 12:53:52.811513 master-0 kubenswrapper[28149]: I0313 12:53:52.809325 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/f83e0d3e-1f73-4727-8ee3-375cbb9e36f8-var-lib-kubelet\") pod \"tuned-6tlzf\" (UID: \"f83e0d3e-1f73-4727-8ee3-375cbb9e36f8\") " pod="openshift-cluster-node-tuning-operator/tuned-6tlzf"
Mar 13 12:53:52.811513 master-0 kubenswrapper[28149]: I0313 12:53:52.809351 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/be89c006-0c82-4728-9c79-210303e623dc-metrics-client-ca\") pod \"prometheus-operator-5ff8674d55-bvmsj\" (UID: \"be89c006-0c82-4728-9c79-210303e623dc\") " pod="openshift-monitoring/prometheus-operator-5ff8674d55-bvmsj"
Mar 13 12:53:52.811513 master-0 kubenswrapper[28149]: I0313 12:53:52.809382 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/32fe77f9-082d-491c-b3d0-9c10feaf4a8e-utilities\") pod \"redhat-operators-5czx2\" (UID: \"32fe77f9-082d-491c-b3d0-9c10feaf4a8e\") " pod="openshift-marketplace/redhat-operators-5czx2"
Mar 13 12:53:52.811513 master-0 kubenswrapper[28149]: I0313 12:53:52.809408 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b726x\" (UniqueName: \"kubernetes.io/projected/1081e565-b7d8-4b6e-9d41-5db36cfe094c-kube-api-access-b726x\") pod \"openshift-state-metrics-74cc79fd76-clrbz\" (UID: \"1081e565-b7d8-4b6e-9d41-5db36cfe094c\") " pod="openshift-monitoring/openshift-state-metrics-74cc79fd76-clrbz"
Mar 13 12:53:52.811513 master-0 kubenswrapper[28149]: I0313 12:53:52.809490 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bdxqb\" (UniqueName: \"kubernetes.io/projected/00d8a21b-701c-4334-9dda-34c28b417f42-kube-api-access-bdxqb\") pod \"cluster-cloud-controller-manager-operator-7c8df9b496-x2wlg\" (UID: \"00d8a21b-701c-4334-9dda-34c28b417f42\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7c8df9b496-x2wlg"
Mar 13 12:53:52.811513 master-0 kubenswrapper[28149]: I0313 12:53:52.809532 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/152689b1-5875-4a9a-bb25-bee858523168-tuning-conf-dir\") pod \"multus-additional-cni-plugins-78p2k\" (UID: \"152689b1-5875-4a9a-bb25-bee858523168\") " pod="openshift-multus/multus-additional-cni-plugins-78p2k"
Mar 13 12:53:52.811513 master-0 kubenswrapper[28149]: I0313 12:53:52.809553 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/32fe77f9-082d-491c-b3d0-9c10feaf4a8e-utilities\") pod \"redhat-operators-5czx2\" (UID: \"32fe77f9-082d-491c-b3d0-9c10feaf4a8e\") " pod="openshift-marketplace/redhat-operators-5czx2"
Mar 13 12:53:52.811513 master-0 kubenswrapper[28149]: I0313 12:53:52.809560 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c4477be6-bcff-407a-8033-b005e19bf5d6-serving-cert\") pod \"apiserver-787dbf5bb9-5645n\" (UID: \"c4477be6-bcff-407a-8033-b005e19bf5d6\") " pod="openshift-oauth-apiserver/apiserver-787dbf5bb9-5645n"
Mar 13 12:53:52.811513 master-0 kubenswrapper[28149]: I0313 12:53:52.809586 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/3020d236-03e0-4916-97dd-f1085632ca43-apiservice-cert\") pod \"cluster-node-tuning-operator-66c7586884-cz8pc\" (UID: \"3020d236-03e0-4916-97dd-f1085632ca43\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-66c7586884-cz8pc"
Mar 13 12:53:52.811513 master-0 kubenswrapper[28149]: I0313 12:53:52.809610 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/d47a1118-c12f-4234-8c0f-1a2a47fa8a4f-auth-proxy-config\") pod \"machine-config-operator-fdb5c78b5-6g8qj\" (UID: \"d47a1118-c12f-4234-8c0f-1a2a47fa8a4f\") " pod="openshift-machine-config-operator/machine-config-operator-fdb5c78b5-6g8qj"
Mar 13 12:53:52.811513 master-0 kubenswrapper[28149]: I0313 12:53:52.809653 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8c62b15f-001a-4b64-b85f-348aefde5d1b-config\") pod \"openshift-controller-manager-operator-8565d84698-hj2wk\" (UID: \"8c62b15f-001a-4b64-b85f-348aefde5d1b\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-8565d84698-hj2wk"
Mar 13 12:53:52.811513 master-0 kubenswrapper[28149]: I0313 12:53:52.809678 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/ce3a655a-0684-4bc5-ac36-5878507537c7-cni-binary-copy\") pod \"multus-bnn7n\" (UID: \"ce3a655a-0684-4bc5-ac36-5878507537c7\") " pod="openshift-multus/multus-bnn7n"
Mar 13 12:53:52.811513 master-0 kubenswrapper[28149]: I0313 12:53:52.809703 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/081a08d6-a4fd-412c-81c3-1364c36f0f15-certs\") pod \"machine-config-server-6crtf\" (UID: \"081a08d6-a4fd-412c-81c3-1364c36f0f15\") " pod="openshift-machine-config-operator/machine-config-server-6crtf"
Mar 13 12:53:52.811513 master-0 kubenswrapper[28149]: I0313 12:53:52.809779 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/152689b1-5875-4a9a-bb25-bee858523168-cni-binary-copy\") pod \"multus-additional-cni-plugins-78p2k\" (UID: \"152689b1-5875-4a9a-bb25-bee858523168\") " pod="openshift-multus/multus-additional-cni-plugins-78p2k"
Mar 13 12:53:52.811513 master-0 kubenswrapper[28149]: I0313 12:53:52.809806 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4j5fc\" (UniqueName: \"kubernetes.io/projected/d6226325-c4d9-497e-8d19-a71adc66c5ac-kube-api-access-4j5fc\") pod \"ovnkube-node-h8fwp\" (UID: \"d6226325-c4d9-497e-8d19-a71adc66c5ac\") " pod="openshift-ovn-kubernetes/ovnkube-node-h8fwp"
Mar 13 12:53:52.811513 master-0 kubenswrapper[28149]: I0313 12:53:52.809831 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/13710582-eac3-42e5-b28a-8b4fd3030af2-hosts-file\") pod \"node-resolver-xpz47\" (UID: \"13710582-eac3-42e5-b28a-8b4fd3030af2\") " pod="openshift-dns/node-resolver-xpz47"
Mar 13 12:53:52.811513 master-0 kubenswrapper[28149]: I0313 12:53:52.809856 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/45925a5e-41ae-4c19-b586-3151c7677612-metrics-certs\") pod \"router-default-79f8cd6fdd-wtf6j\" (UID: \"45925a5e-41ae-4c19-b586-3151c7677612\") " pod="openshift-ingress/router-default-79f8cd6fdd-wtf6j"
Mar 13 12:53:52.811513 master-0 kubenswrapper[28149]: I0313 12:53:52.809916 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8c62b15f-001a-4b64-b85f-348aefde5d1b-config\") pod \"openshift-controller-manager-operator-8565d84698-hj2wk\" (UID: \"8c62b15f-001a-4b64-b85f-348aefde5d1b\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-8565d84698-hj2wk"
Mar 13 12:53:52.811513 master-0 kubenswrapper[28149]: I0313 12:53:52.809978 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/projected/00ebdf06-1f44-40cd-87e5-54195188b6d4-ca-certs\") pod \"catalogd-controller-manager-7f8b8b6f4c-8fjzg\" (UID: \"00ebdf06-1f44-40cd-87e5-54195188b6d4\") " pod="openshift-catalogd/catalogd-controller-manager-7f8b8b6f4c-8fjzg"
Mar 13 12:53:52.811513 master-0 kubenswrapper[28149]: I0313 12:53:52.810014 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"snapshots\" (UniqueName: \"kubernetes.io/empty-dir/50a2046b-092b-434c-92a2-579f4462c4fb-snapshots\") pod \"insights-operator-8f89dfddd-vxk8z\" (UID: \"50a2046b-092b-434c-92a2-579f4462c4fb\") " pod="openshift-insights/insights-operator-8f89dfddd-vxk8z"
Mar 13 12:53:52.811513 master-0 kubenswrapper[28149]: I0313 12:53:52.810014 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/152689b1-5875-4a9a-bb25-bee858523168-cni-binary-copy\") pod \"multus-additional-cni-plugins-78p2k\" (UID: \"152689b1-5875-4a9a-bb25-bee858523168\") " pod="openshift-multus/multus-additional-cni-plugins-78p2k"
Mar 13 12:53:52.811513 master-0 kubenswrapper[28149]: I0313 12:53:52.810095 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"snapshots\" (UniqueName: \"kubernetes.io/empty-dir/50a2046b-092b-434c-92a2-579f4462c4fb-snapshots\") pod \"insights-operator-8f89dfddd-vxk8z\" (UID: \"50a2046b-092b-434c-92a2-579f4462c4fb\") " pod="openshift-insights/insights-operator-8f89dfddd-vxk8z"
Mar 13 12:53:52.811513 master-0 kubenswrapper[28149]: I0313 12:53:52.810103 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/152689b1-5875-4a9a-bb25-bee858523168-system-cni-dir\") pod \"multus-additional-cni-plugins-78p2k\" (UID: \"152689b1-5875-4a9a-bb25-bee858523168\") " pod="openshift-multus/multus-additional-cni-plugins-78p2k"
Mar 13 12:53:52.811513 master-0 kubenswrapper[28149]: I0313 12:53:52.810157 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mlvjp\" (UniqueName: \"kubernetes.io/projected/5ae41cff-0949-47f8-aae9-ae133191476d-kube-api-access-mlvjp\") pod \"ovnkube-control-plane-66b55d57d-5cww5\" (UID: \"5ae41cff-0949-47f8-aae9-ae133191476d\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-66b55d57d-5cww5"
Mar 13 12:53:52.811513 master-0 kubenswrapper[28149]: I0313 12:53:52.810188 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/d6226325-c4d9-497e-8d19-a71adc66c5ac-host-cni-bin\") pod \"ovnkube-node-h8fwp\" (UID: \"d6226325-c4d9-497e-8d19-a71adc66c5ac\") " pod="openshift-ovn-kubernetes/ovnkube-node-h8fwp"
Mar 13 12:53:52.811513 master-0 kubenswrapper[28149]: I0313 12:53:52.810215 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f5775266-5e58-44ed-81cb-dfe3faf38add-config\") pod \"kube-storage-version-migrator-operator-7f65c457f5-hrm82\" (UID: \"f5775266-5e58-44ed-81cb-dfe3faf38add\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-7f65c457f5-hrm82"
Mar 13 12:53:52.811513 master-0 kubenswrapper[28149]: I0313 12:53:52.810268 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-olm-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/887d261f-d07f-4ef0-a230-6568f47acf4d-cluster-olm-operator-serving-cert\") pod \"cluster-olm-operator-77899cf6d-7nvbn\" (UID: \"887d261f-d07f-4ef0-a230-6568f47acf4d\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-77899cf6d-7nvbn"
Mar 13 12:53:52.811513 master-0 kubenswrapper[28149]: I0313 12:53:52.810302 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n85n6\" (UniqueName: \"kubernetes.io/projected/915aabfe-1071-4bfc-b291-424304dfe7d8-kube-api-access-n85n6\") pod \"operator-controller-controller-manager-6598bfb6c4-dv8rj\" (UID: \"915aabfe-1071-4bfc-b291-424304dfe7d8\") " pod="openshift-operator-controller/operator-controller-controller-manager-6598bfb6c4-dv8rj"
Mar 13 12:53:52.811513 master-0 kubenswrapper[28149]: I0313 12:53:52.810408 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-69hws\" (UniqueName: \"kubernetes.io/projected/d7d67915-d31e-46dc-bb2e-1a6f689dd875-kube-api-access-69hws\") pod \"cluster-storage-operator-6fbfc8dc8f-jhtsp\" (UID: \"d7d67915-d31e-46dc-bb2e-1a6f689dd875\") " pod="openshift-cluster-storage-operator/cluster-storage-operator-6fbfc8dc8f-jhtsp"
Mar 13 12:53:52.811513 master-0 kubenswrapper[28149]: I0313 12:53:52.810453 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f5775266-5e58-44ed-81cb-dfe3faf38add-config\") pod \"kube-storage-version-migrator-operator-7f65c457f5-hrm82\" (UID: \"f5775266-5e58-44ed-81cb-dfe3faf38add\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-7f65c457f5-hrm82"
Mar 13 12:53:52.811513 master-0 kubenswrapper[28149]: I0313 12:53:52.810479 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openshift-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/1081e565-b7d8-4b6e-9d41-5db36cfe094c-openshift-state-metrics-tls\") pod \"openshift-state-metrics-74cc79fd76-clrbz\" (UID: \"1081e565-b7d8-4b6e-9d41-5db36cfe094c\") " pod="openshift-monitoring/openshift-state-metrics-74cc79fd76-clrbz"
Mar 13 12:53:52.811513 master-0 kubenswrapper[28149]: I0313 12:53:52.810513 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fnw9d\" (UniqueName: \"kubernetes.io/projected/4dd0fc2f-f2ee-4447-a747-04a178288cf0-kube-api-access-fnw9d\") pod \"network-operator-7c649bf6d4-kh6n9\" (UID: \"4dd0fc2f-f2ee-4447-a747-04a178288cf0\") " pod="openshift-network-operator/network-operator-7c649bf6d4-kh6n9"
Mar 13 12:53:52.811513 master-0 kubenswrapper[28149]: I0313 12:53:52.810539 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/e25bef76-7020-4f86-8dee-a58ebed537d2-mcc-auth-proxy-config\") pod \"machine-config-controller-ff46b7bdf-kmnlv\" (UID: \"e25bef76-7020-4f86-8dee-a58ebed537d2\") " pod="openshift-machine-config-operator/machine-config-controller-ff46b7bdf-kmnlv"
Mar 13 12:53:52.811513 master-0 kubenswrapper[28149]: I0313 12:53:52.810539 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cluster-olm-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/887d261f-d07f-4ef0-a230-6568f47acf4d-cluster-olm-operator-serving-cert\") pod \"cluster-olm-operator-77899cf6d-7nvbn\" (UID: \"887d261f-d07f-4ef0-a230-6568f47acf4d\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-77899cf6d-7nvbn"
Mar 13 12:53:52.811513 master-0 kubenswrapper[28149]: I0313 12:53:52.810564 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/317af639-269e-4163-8e24-fcea468b9352-config\") pod \"cluster-baremetal-operator-5cdb4c5598-l6jp5\" (UID: \"317af639-269e-4163-8e24-fcea468b9352\") " pod="openshift-machine-api/cluster-baremetal-operator-5cdb4c5598-l6jp5"
Mar 13 12:53:52.811513 master-0 kubenswrapper[28149]: I0313 12:53:52.810594 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/2f79578c-bbfb-4968-893a-730deb4c01f9-trusted-ca\") pod \"ingress-operator-677db989d6-ckl2j\" (UID: \"2f79578c-bbfb-4968-893a-730deb4c01f9\") " pod="openshift-ingress-operator/ingress-operator-677db989d6-ckl2j"
Mar 13 12:53:52.811513 master-0 kubenswrapper[28149]: I0313 12:53:52.810622 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-sysctl-conf\" (UniqueName: \"kubernetes.io/host-path/f83e0d3e-1f73-4727-8ee3-375cbb9e36f8-etc-sysctl-conf\") pod \"tuned-6tlzf\" (UID: \"f83e0d3e-1f73-4727-8ee3-375cbb9e36f8\") " pod="openshift-cluster-node-tuning-operator/tuned-6tlzf"
Mar 13 12:53:52.811513 master-0 kubenswrapper[28149]: I0313 12:53:52.810533 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/ce3a655a-0684-4bc5-ac36-5878507537c7-cni-binary-copy\") pod \"multus-bnn7n\" (UID: \"ce3a655a-0684-4bc5-ac36-5878507537c7\") " pod="openshift-multus/multus-bnn7n"
Mar 13 12:53:52.811513 master-0 kubenswrapper[28149]: I0313 12:53:52.810651 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x5nb7\" (UniqueName: \"kubernetes.io/projected/d3d998ee-b26f-4e30-83bc-f94f8c68060a-kube-api-access-x5nb7\") pod \"marketplace-operator-64bf9778cb-7qhr4\" (UID: \"d3d998ee-b26f-4e30-83bc-f94f8c68060a\") " pod="openshift-marketplace/marketplace-operator-64bf9778cb-7qhr4"
Mar 13 12:53:52.811513 master-0 kubenswrapper[28149]: I0313 12:53:52.810715 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tdlrq\" (UniqueName: \"kubernetes.io/projected/d44112d1-b2a5-4b8d-b74d-1e91638508d5-kube-api-access-tdlrq\") pod \"cluster-autoscaler-operator-69576476f7-sqndx\" (UID: \"d44112d1-b2a5-4b8d-b74d-1e91638508d5\") " pod="openshift-machine-api/cluster-autoscaler-operator-69576476f7-sqndx"
Mar 13 12:53:52.811513 master-0 kubenswrapper[28149]: I0313 12:53:52.810743 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-log\" (UniqueName: \"kubernetes.io/empty-dir/fc192c03-5aec-4507-a702-56bf98c96e9c-audit-log\") pod \"metrics-server-567b9cf7f-cxnj2\" (UID: \"fc192c03-5aec-4507-a702-56bf98c96e9c\") " pod="openshift-monitoring/metrics-server-567b9cf7f-cxnj2"
Mar 13 12:53:52.811513 master-0 kubenswrapper[28149]: I0313 12:53:52.810769 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/18ffa620-dacc-4b09-be04-2c325f860813-client-ca\") pod \"route-controller-manager-68c48d4f7d-k7drw\" (UID: \"18ffa620-dacc-4b09-be04-2c325f860813\") " pod="openshift-route-controller-manager/route-controller-manager-68c48d4f7d-k7drw"
Mar 13 12:53:52.811513 master-0 kubenswrapper[28149]: I0313 12:53:52.810797 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dd4m8\" (UniqueName: \"kubernetes.io/projected/be89c006-0c82-4728-9c79-210303e623dc-kube-api-access-dd4m8\") pod \"prometheus-operator-5ff8674d55-bvmsj\" (UID: \"be89c006-0c82-4728-9c79-210303e623dc\") " pod="openshift-monitoring/prometheus-operator-5ff8674d55-bvmsj"
Mar 13 12:53:52.811513 master-0 kubenswrapper[28149]: I0313 12:53:52.810940 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"volume-directive-shadow\" (UniqueName: \"kubernetes.io/empty-dir/5e4f10ca-6466-4ac0-aeb7-325e40473e04-volume-directive-shadow\") pod \"kube-state-metrics-68b88f8cb5-blvhm\" (UID: \"5e4f10ca-6466-4ac0-aeb7-325e40473e04\") " pod="openshift-monitoring/kube-state-metrics-68b88f8cb5-blvhm"
Mar 13 12:53:52.811513 master-0 kubenswrapper[28149]: I0313 12:53:52.810960 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-log\" (UniqueName: \"kubernetes.io/empty-dir/fc192c03-5aec-4507-a702-56bf98c96e9c-audit-log\") pod \"metrics-server-567b9cf7f-cxnj2\" (UID: \"fc192c03-5aec-4507-a702-56bf98c96e9c\") " pod="openshift-monitoring/metrics-server-567b9cf7f-cxnj2"
Mar 13 12:53:52.811513 master-0 kubenswrapper[28149]: I0313 12:53:52.810974 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/10944f9c-8ce9-44e6-9c36-a0ea19d8cae3-srv-cert\") pod \"catalog-operator-7d9c49f57b-tlnkd\" (UID: \"10944f9c-8ce9-44e6-9c36-a0ea19d8cae3\") " pod="openshift-operator-lifecycle-manager/catalog-operator-7d9c49f57b-tlnkd"
Mar 13 12:53:52.811513 master-0 kubenswrapper[28149]: I0313 12:53:52.811064 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"volume-directive-shadow\" (UniqueName: \"kubernetes.io/empty-dir/5e4f10ca-6466-4ac0-aeb7-325e40473e04-volume-directive-shadow\") pod \"kube-state-metrics-68b88f8cb5-blvhm\" (UID: \"5e4f10ca-6466-4ac0-aeb7-325e40473e04\") " pod="openshift-monitoring/kube-state-metrics-68b88f8cb5-blvhm"
Mar 13 12:53:52.811513 master-0 kubenswrapper[28149]: I0313 12:53:52.811072 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/2f79578c-bbfb-4968-893a-730deb4c01f9-trusted-ca\") pod \"ingress-operator-677db989d6-ckl2j\" (UID: \"2f79578c-bbfb-4968-893a-730deb4c01f9\") " pod="openshift-ingress-operator/ingress-operator-677db989d6-ckl2j"
Mar 13 12:53:52.811513 master-0 kubenswrapper[28149]: I0313 12:53:52.811064 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/18ffa620-dacc-4b09-be04-2c325f860813-config\") pod \"route-controller-manager-68c48d4f7d-k7drw\" (UID: \"18ffa620-dacc-4b09-be04-2c325f860813\") " pod="openshift-route-controller-manager/route-controller-manager-68c48d4f7d-k7drw"
Mar 13 12:53:52.811513 master-0 kubenswrapper[28149]: I0313 12:53:52.811154 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mqsh5\" (UniqueName: \"kubernetes.io/projected/36ad5a83-5c32-4941-94e0-7af86ac5d462-kube-api-access-mqsh5\") pod \"multus-admission-controller-7769569c45-qz88j\" (UID: \"36ad5a83-5c32-4941-94e0-7af86ac5d462\") " pod="openshift-multus/multus-admission-controller-7769569c45-qz88j"
Mar 13 12:53:52.811513 master-0 kubenswrapper[28149]: I0313 12:53:52.811183 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/d5a19b80-d488-46d3-a4a8-0b80361077e1-srv-cert\") pod \"olm-operator-d64cfc9db-rfqb9\" (UID: \"d5a19b80-d488-46d3-a4a8-0b80361077e1\") " pod="openshift-operator-lifecycle-manager/olm-operator-d64cfc9db-rfqb9"
Mar 13 12:53:52.811513 master-0 kubenswrapper[28149]: I0313 12:53:52.811212 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a454234a-6c8e-4916-81e8-c9e66cec9d31-client-ca\") pod \"controller-manager-54c79cbfcc-cxhmh\" (UID: \"a454234a-6c8e-4916-81e8-c9e66cec9d31\") " pod="openshift-controller-manager/controller-manager-54c79cbfcc-cxhmh"
Mar 13 12:53:52.811513 master-0 kubenswrapper[28149]: I0313 12:53:52.811213 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/10944f9c-8ce9-44e6-9c36-a0ea19d8cae3-srv-cert\") pod \"catalog-operator-7d9c49f57b-tlnkd\" (UID: \"10944f9c-8ce9-44e6-9c36-a0ea19d8cae3\") " pod="openshift-operator-lifecycle-manager/catalog-operator-7d9c49f57b-tlnkd"
Mar 13 12:53:52.811513 master-0 kubenswrapper[28149]: I0313 12:53:52.811305 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/15b592d6-3c48-45d4-9172-d28632ae8995-etcd-client\") pod \"etcd-operator-5884b9cd56-hjzms\" (UID: \"15b592d6-3c48-45d4-9172-d28632ae8995\") " pod="openshift-etcd-operator/etcd-operator-5884b9cd56-hjzms"
Mar 13 12:53:52.811513 master-0 kubenswrapper[28149]: I0313 12:53:52.811329 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/915aabfe-1071-4bfc-b291-424304dfe7d8-cache\") pod \"operator-controller-controller-manager-6598bfb6c4-dv8rj\" (UID: \"915aabfe-1071-4bfc-b291-424304dfe7d8\") " pod="openshift-operator-controller/operator-controller-controller-manager-6598bfb6c4-dv8rj"
Mar 13 12:53:52.811513 master-0 kubenswrapper[28149]: I0313 12:53:52.811349 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/f83e0d3e-1f73-4727-8ee3-375cbb9e36f8-host\") pod \"tuned-6tlzf\" (UID: \"f83e0d3e-1f73-4727-8ee3-375cbb9e36f8\") " pod="openshift-cluster-node-tuning-operator/tuned-6tlzf"
Mar 13 12:53:52.811513 master-0 kubenswrapper[28149]: I0313 12:53:52.811365 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: 
\"kubernetes.io/configmap/d6226325-c4d9-497e-8d19-a71adc66c5ac-ovnkube-config\") pod \"ovnkube-node-h8fwp\" (UID: \"d6226325-c4d9-497e-8d19-a71adc66c5ac\") " pod="openshift-ovn-kubernetes/ovnkube-node-h8fwp" Mar 13 12:53:52.811513 master-0 kubenswrapper[28149]: I0313 12:53:52.811384 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/3020d236-03e0-4916-97dd-f1085632ca43-trusted-ca\") pod \"cluster-node-tuning-operator-66c7586884-cz8pc\" (UID: \"3020d236-03e0-4916-97dd-f1085632ca43\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-66c7586884-cz8pc" Mar 13 12:53:52.811513 master-0 kubenswrapper[28149]: I0313 12:53:52.811401 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9v2jm\" (UniqueName: \"kubernetes.io/projected/842251bd-238a-44ba-99fc-a356503f5d16-kube-api-access-9v2jm\") pod \"node-exporter-v4hdh\" (UID: \"842251bd-238a-44ba-99fc-a356503f5d16\") " pod="openshift-monitoring/node-exporter-v4hdh" Mar 13 12:53:52.811513 master-0 kubenswrapper[28149]: I0313 12:53:52.811419 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-km69t\" (UniqueName: \"kubernetes.io/projected/152689b1-5875-4a9a-bb25-bee858523168-kube-api-access-km69t\") pod \"multus-additional-cni-plugins-78p2k\" (UID: \"152689b1-5875-4a9a-bb25-bee858523168\") " pod="openshift-multus/multus-additional-cni-plugins-78p2k" Mar 13 12:53:52.811513 master-0 kubenswrapper[28149]: I0313 12:53:52.811437 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/15b592d6-3c48-45d4-9172-d28632ae8995-etcd-service-ca\") pod \"etcd-operator-5884b9cd56-hjzms\" (UID: \"15b592d6-3c48-45d4-9172-d28632ae8995\") " pod="openshift-etcd-operator/etcd-operator-5884b9cd56-hjzms" Mar 13 12:53:52.811513 master-0 kubenswrapper[28149]: I0313 
12:53:52.811451 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/d5a19b80-d488-46d3-a4a8-0b80361077e1-srv-cert\") pod \"olm-operator-d64cfc9db-rfqb9\" (UID: \"d5a19b80-d488-46d3-a4a8-0b80361077e1\") " pod="openshift-operator-lifecycle-manager/olm-operator-d64cfc9db-rfqb9" Mar 13 12:53:52.811513 master-0 kubenswrapper[28149]: I0313 12:53:52.811518 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/915aabfe-1071-4bfc-b291-424304dfe7d8-cache\") pod \"operator-controller-controller-manager-6598bfb6c4-dv8rj\" (UID: \"915aabfe-1071-4bfc-b291-424304dfe7d8\") " pod="openshift-operator-controller/operator-controller-controller-manager-6598bfb6c4-dv8rj" Mar 13 12:53:52.811513 master-0 kubenswrapper[28149]: I0313 12:53:52.811455 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8cf2v\" (UniqueName: \"kubernetes.io/projected/8c62b15f-001a-4b64-b85f-348aefde5d1b-kube-api-access-8cf2v\") pod \"openshift-controller-manager-operator-8565d84698-hj2wk\" (UID: \"8c62b15f-001a-4b64-b85f-348aefde5d1b\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-8565d84698-hj2wk" Mar 13 12:53:52.819031 master-0 kubenswrapper[28149]: I0313 12:53:52.811546 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/15b592d6-3c48-45d4-9172-d28632ae8995-etcd-client\") pod \"etcd-operator-5884b9cd56-hjzms\" (UID: \"15b592d6-3c48-45d4-9172-d28632ae8995\") " pod="openshift-etcd-operator/etcd-operator-5884b9cd56-hjzms" Mar 13 12:53:52.819031 master-0 kubenswrapper[28149]: I0313 12:53:52.811558 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jbwwp\" (UniqueName: \"kubernetes.io/projected/50be3c2b-284b-4f60-b4ed-2cc7b4e528fa-kube-api-access-jbwwp\") pod 
\"machine-config-daemon-5h8rc\" (UID: \"50be3c2b-284b-4f60-b4ed-2cc7b4e528fa\") " pod="openshift-machine-config-operator/machine-config-daemon-5h8rc" Mar 13 12:53:52.819031 master-0 kubenswrapper[28149]: I0313 12:53:52.811621 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/45925a5e-41ae-4c19-b586-3151c7677612-default-certificate\") pod \"router-default-79f8cd6fdd-wtf6j\" (UID: \"45925a5e-41ae-4c19-b586-3151c7677612\") " pod="openshift-ingress/router-default-79f8cd6fdd-wtf6j" Mar 13 12:53:52.819031 master-0 kubenswrapper[28149]: I0313 12:53:52.811651 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4v66x\" (UniqueName: \"kubernetes.io/projected/317af639-269e-4163-8e24-fcea468b9352-kube-api-access-4v66x\") pod \"cluster-baremetal-operator-5cdb4c5598-l6jp5\" (UID: \"317af639-269e-4163-8e24-fcea468b9352\") " pod="openshift-machine-api/cluster-baremetal-operator-5cdb4c5598-l6jp5" Mar 13 12:53:52.819031 master-0 kubenswrapper[28149]: I0313 12:53:52.811677 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/1f43b4e7-5cd1-46d2-a02e-0d846b2e5182-env-overrides\") pod \"network-node-identity-qg8q5\" (UID: \"1f43b4e7-5cd1-46d2-a02e-0d846b2e5182\") " pod="openshift-network-node-identity/network-node-identity-qg8q5" Mar 13 12:53:52.819031 master-0 kubenswrapper[28149]: I0313 12:53:52.811770 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/2f48243b-6b05-4efa-8420-58a4419622bf-audit\") pod \"apiserver-844bc54c88-vznst\" (UID: \"2f48243b-6b05-4efa-8420-58a4419622bf\") " pod="openshift-apiserver/apiserver-844bc54c88-vznst" Mar 13 12:53:52.819031 master-0 kubenswrapper[28149]: I0313 12:53:52.811804 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bcf05594-4c10-4b54-a47c-d55e323f1f87-bound-sa-token\") pod \"cluster-image-registry-operator-86d6d77c7c-q287n\" (UID: \"bcf05594-4c10-4b54-a47c-d55e323f1f87\") " pod="openshift-image-registry/cluster-image-registry-operator-86d6d77c7c-q287n" Mar 13 12:53:52.819031 master-0 kubenswrapper[28149]: I0313 12:53:52.811813 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/d6226325-c4d9-497e-8d19-a71adc66c5ac-ovnkube-config\") pod \"ovnkube-node-h8fwp\" (UID: \"d6226325-c4d9-497e-8d19-a71adc66c5ac\") " pod="openshift-ovn-kubernetes/ovnkube-node-h8fwp" Mar 13 12:53:52.819031 master-0 kubenswrapper[28149]: I0313 12:53:52.811840 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/676b054a-e76f-425d-a6ff-3f1bea8b523e-etc-ssl-certs\") pod \"cluster-version-operator-8c9c967c7-98tv2\" (UID: \"676b054a-e76f-425d-a6ff-3f1bea8b523e\") " pod="openshift-cluster-version/cluster-version-operator-8c9c967c7-98tv2" Mar 13 12:53:52.819031 master-0 kubenswrapper[28149]: I0313 12:53:52.811875 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qxcvd\" (UniqueName: \"kubernetes.io/projected/747659a6-4a1e-43ed-bb8e-36da6e63b5a1-kube-api-access-qxcvd\") pod \"control-plane-machine-set-operator-6686554ddc-btz8w\" (UID: \"747659a6-4a1e-43ed-bb8e-36da6e63b5a1\") " pod="openshift-machine-api/control-plane-machine-set-operator-6686554ddc-btz8w" Mar 13 12:53:52.819031 master-0 kubenswrapper[28149]: I0313 12:53:52.811936 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/1f43b4e7-5cd1-46d2-a02e-0d846b2e5182-env-overrides\") pod \"network-node-identity-qg8q5\" (UID: \"1f43b4e7-5cd1-46d2-a02e-0d846b2e5182\") " 
pod="openshift-network-node-identity/network-node-identity-qg8q5" Mar 13 12:53:52.819031 master-0 kubenswrapper[28149]: I0313 12:53:52.811951 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/00ebdf06-1f44-40cd-87e5-54195188b6d4-cache\") pod \"catalogd-controller-manager-7f8b8b6f4c-8fjzg\" (UID: \"00ebdf06-1f44-40cd-87e5-54195188b6d4\") " pod="openshift-catalogd/catalogd-controller-manager-7f8b8b6f4c-8fjzg" Mar 13 12:53:52.819031 master-0 kubenswrapper[28149]: I0313 12:53:52.812051 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-brzd4\" (UniqueName: \"kubernetes.io/projected/1f43b4e7-5cd1-46d2-a02e-0d846b2e5182-kube-api-access-brzd4\") pod \"network-node-identity-qg8q5\" (UID: \"1f43b4e7-5cd1-46d2-a02e-0d846b2e5182\") " pod="openshift-network-node-identity/network-node-identity-qg8q5" Mar 13 12:53:52.819031 master-0 kubenswrapper[28149]: I0313 12:53:52.812075 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/3020d236-03e0-4916-97dd-f1085632ca43-trusted-ca\") pod \"cluster-node-tuning-operator-66c7586884-cz8pc\" (UID: \"3020d236-03e0-4916-97dd-f1085632ca43\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-66c7586884-cz8pc" Mar 13 12:53:52.819031 master-0 kubenswrapper[28149]: I0313 12:53:52.812085 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/d6226325-c4d9-497e-8d19-a71adc66c5ac-host-cni-netd\") pod \"ovnkube-node-h8fwp\" (UID: \"d6226325-c4d9-497e-8d19-a71adc66c5ac\") " pod="openshift-ovn-kubernetes/ovnkube-node-h8fwp" Mar 13 12:53:52.819031 master-0 kubenswrapper[28149]: I0313 12:53:52.812107 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: 
\"kubernetes.io/configmap/317af639-269e-4163-8e24-fcea468b9352-images\") pod \"cluster-baremetal-operator-5cdb4c5598-l6jp5\" (UID: \"317af639-269e-4163-8e24-fcea468b9352\") " pod="openshift-machine-api/cluster-baremetal-operator-5cdb4c5598-l6jp5" Mar 13 12:53:52.819031 master-0 kubenswrapper[28149]: I0313 12:53:52.812235 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-65ts9\" (UniqueName: \"kubernetes.io/projected/c0f3e81c-f61d-430a-98e8-82e3b283fc73-kube-api-access-65ts9\") pod \"service-ca-84bfdbbb7f-4pksg\" (UID: \"c0f3e81c-f61d-430a-98e8-82e3b283fc73\") " pod="openshift-service-ca/service-ca-84bfdbbb7f-4pksg" Mar 13 12:53:52.819031 master-0 kubenswrapper[28149]: I0313 12:53:52.812233 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/2f48243b-6b05-4efa-8420-58a4419622bf-audit\") pod \"apiserver-844bc54c88-vznst\" (UID: \"2f48243b-6b05-4efa-8420-58a4419622bf\") " pod="openshift-apiserver/apiserver-844bc54c88-vznst" Mar 13 12:53:52.819031 master-0 kubenswrapper[28149]: I0313 12:53:52.812255 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/ef42b65e-2d92-46ac-baaf-30e213787781-metrics-tls\") pod \"dns-default-m7k6m\" (UID: \"ef42b65e-2d92-46ac-baaf-30e213787781\") " pod="openshift-dns/dns-default-m7k6m" Mar 13 12:53:52.819031 master-0 kubenswrapper[28149]: I0313 12:53:52.812277 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/152689b1-5875-4a9a-bb25-bee858523168-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-78p2k\" (UID: \"152689b1-5875-4a9a-bb25-bee858523168\") " pod="openshift-multus/multus-additional-cni-plugins-78p2k" Mar 13 12:53:52.819031 master-0 kubenswrapper[28149]: I0313 12:53:52.812320 28149 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/00ebdf06-1f44-40cd-87e5-54195188b6d4-cache\") pod \"catalogd-controller-manager-7f8b8b6f4c-8fjzg\" (UID: \"00ebdf06-1f44-40cd-87e5-54195188b6d4\") " pod="openshift-catalogd/catalogd-controller-manager-7f8b8b6f4c-8fjzg" Mar 13 12:53:52.819031 master-0 kubenswrapper[28149]: I0313 12:53:52.812345 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/15b592d6-3c48-45d4-9172-d28632ae8995-etcd-service-ca\") pod \"etcd-operator-5884b9cd56-hjzms\" (UID: \"15b592d6-3c48-45d4-9172-d28632ae8995\") " pod="openshift-etcd-operator/etcd-operator-5884b9cd56-hjzms" Mar 13 12:53:52.819031 master-0 kubenswrapper[28149]: I0313 12:53:52.812367 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"whereabouts-configmap\" (UniqueName: \"kubernetes.io/configmap/152689b1-5875-4a9a-bb25-bee858523168-whereabouts-configmap\") pod \"multus-additional-cni-plugins-78p2k\" (UID: \"152689b1-5875-4a9a-bb25-bee858523168\") " pod="openshift-multus/multus-additional-cni-plugins-78p2k" Mar 13 12:53:52.819031 master-0 kubenswrapper[28149]: I0313 12:53:52.812398 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-clrz7\" (UniqueName: \"kubernetes.io/projected/15b592d6-3c48-45d4-9172-d28632ae8995-kube-api-access-clrz7\") pod \"etcd-operator-5884b9cd56-hjzms\" (UID: \"15b592d6-3c48-45d4-9172-d28632ae8995\") " pod="openshift-etcd-operator/etcd-operator-5884b9cd56-hjzms" Mar 13 12:53:52.819031 master-0 kubenswrapper[28149]: I0313 12:53:52.812428 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/ce3a655a-0684-4bc5-ac36-5878507537c7-multus-socket-dir-parent\") pod \"multus-bnn7n\" (UID: \"ce3a655a-0684-4bc5-ac36-5878507537c7\") " pod="openshift-multus/multus-bnn7n" Mar 
13 12:53:52.819031 master-0 kubenswrapper[28149]: I0313 12:53:52.812494 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8t2jl\" (UniqueName: \"kubernetes.io/projected/c642c18f-f960-4418-bcb7-df884f8f8ad5-kube-api-access-8t2jl\") pod \"csi-snapshot-controller-7577d6f48-pjpn2\" (UID: \"c642c18f-f960-4418-bcb7-df884f8f8ad5\") " pod="openshift-cluster-storage-operator/csi-snapshot-controller-7577d6f48-pjpn2" Mar 13 12:53:52.819031 master-0 kubenswrapper[28149]: I0313 12:53:52.812524 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/f83e0d3e-1f73-4727-8ee3-375cbb9e36f8-tmp\") pod \"tuned-6tlzf\" (UID: \"f83e0d3e-1f73-4727-8ee3-375cbb9e36f8\") " pod="openshift-cluster-node-tuning-operator/tuned-6tlzf" Mar 13 12:53:52.819031 master-0 kubenswrapper[28149]: I0313 12:53:52.812550 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ec5ec2e2-f7b3-43a1-87da-fbbe0ee5b118-serving-cert\") pod \"kube-apiserver-operator-68bd585b-qxmnf\" (UID: \"ec5ec2e2-f7b3-43a1-87da-fbbe0ee5b118\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-68bd585b-qxmnf" Mar 13 12:53:52.819031 master-0 kubenswrapper[28149]: I0313 12:53:52.812575 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4f9e6618-62b5-4181-b545-211461811140-utilities\") pod \"community-operators-9x9vk\" (UID: \"4f9e6618-62b5-4181-b545-211461811140\") " pod="openshift-marketplace/community-operators-9x9vk" Mar 13 12:53:52.819031 master-0 kubenswrapper[28149]: I0313 12:53:52.812582 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"whereabouts-configmap\" (UniqueName: \"kubernetes.io/configmap/152689b1-5875-4a9a-bb25-bee858523168-whereabouts-configmap\") pod \"multus-additional-cni-plugins-78p2k\" 
(UID: \"152689b1-5875-4a9a-bb25-bee858523168\") " pod="openshift-multus/multus-additional-cni-plugins-78p2k" Mar 13 12:53:52.819031 master-0 kubenswrapper[28149]: I0313 12:53:52.812587 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/152689b1-5875-4a9a-bb25-bee858523168-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-78p2k\" (UID: \"152689b1-5875-4a9a-bb25-bee858523168\") " pod="openshift-multus/multus-additional-cni-plugins-78p2k" Mar 13 12:53:52.819031 master-0 kubenswrapper[28149]: I0313 12:53:52.812607 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pmfxj\" (UniqueName: \"kubernetes.io/projected/887d261f-d07f-4ef0-a230-6568f47acf4d-kube-api-access-pmfxj\") pod \"cluster-olm-operator-77899cf6d-7nvbn\" (UID: \"887d261f-d07f-4ef0-a230-6568f47acf4d\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-77899cf6d-7nvbn" Mar 13 12:53:52.819031 master-0 kubenswrapper[28149]: I0313 12:53:52.812706 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4f9e6618-62b5-4181-b545-211461811140-utilities\") pod \"community-operators-9x9vk\" (UID: \"4f9e6618-62b5-4181-b545-211461811140\") " pod="openshift-marketplace/community-operators-9x9vk" Mar 13 12:53:52.819031 master-0 kubenswrapper[28149]: I0313 12:53:52.812749 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/f83e0d3e-1f73-4727-8ee3-375cbb9e36f8-tmp\") pod \"tuned-6tlzf\" (UID: \"f83e0d3e-1f73-4727-8ee3-375cbb9e36f8\") " pod="openshift-cluster-node-tuning-operator/tuned-6tlzf" Mar 13 12:53:52.819031 master-0 kubenswrapper[28149]: I0313 12:53:52.812769 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: 
\"kubernetes.io/host-path/2f48243b-6b05-4efa-8420-58a4419622bf-node-pullsecrets\") pod \"apiserver-844bc54c88-vznst\" (UID: \"2f48243b-6b05-4efa-8420-58a4419622bf\") " pod="openshift-apiserver/apiserver-844bc54c88-vznst" Mar 13 12:53:52.819031 master-0 kubenswrapper[28149]: I0313 12:53:52.812918 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f9hks\" (UniqueName: \"kubernetes.io/projected/2f79578c-bbfb-4968-893a-730deb4c01f9-kube-api-access-f9hks\") pod \"ingress-operator-677db989d6-ckl2j\" (UID: \"2f79578c-bbfb-4968-893a-730deb4c01f9\") " pod="openshift-ingress-operator/ingress-operator-677db989d6-ckl2j" Mar 13 12:53:52.819031 master-0 kubenswrapper[28149]: I0313 12:53:52.812940 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/1f43b4e7-5cd1-46d2-a02e-0d846b2e5182-ovnkube-identity-cm\") pod \"network-node-identity-qg8q5\" (UID: \"1f43b4e7-5cd1-46d2-a02e-0d846b2e5182\") " pod="openshift-network-node-identity/network-node-identity-qg8q5" Mar 13 12:53:52.819031 master-0 kubenswrapper[28149]: I0313 12:53:52.812963 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ec5ec2e2-f7b3-43a1-87da-fbbe0ee5b118-serving-cert\") pod \"kube-apiserver-operator-68bd585b-qxmnf\" (UID: \"ec5ec2e2-f7b3-43a1-87da-fbbe0ee5b118\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-68bd585b-qxmnf" Mar 13 12:53:52.819031 master-0 kubenswrapper[28149]: I0313 12:53:52.812954 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/676b054a-e76f-425d-a6ff-3f1bea8b523e-kube-api-access\") pod \"cluster-version-operator-8c9c967c7-98tv2\" (UID: \"676b054a-e76f-425d-a6ff-3f1bea8b523e\") " pod="openshift-cluster-version/cluster-version-operator-8c9c967c7-98tv2" Mar 13 12:53:52.819031 master-0 
kubenswrapper[28149]: I0313 12:53:52.813154 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ec5ec2e2-f7b3-43a1-87da-fbbe0ee5b118-kube-api-access\") pod \"kube-apiserver-operator-68bd585b-qxmnf\" (UID: \"ec5ec2e2-f7b3-43a1-87da-fbbe0ee5b118\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-68bd585b-qxmnf" Mar 13 12:53:52.819031 master-0 kubenswrapper[28149]: I0313 12:53:52.813191 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/d6226325-c4d9-497e-8d19-a71adc66c5ac-host-run-ovn-kubernetes\") pod \"ovnkube-node-h8fwp\" (UID: \"d6226325-c4d9-497e-8d19-a71adc66c5ac\") " pod="openshift-ovn-kubernetes/ovnkube-node-h8fwp" Mar 13 12:53:52.819031 master-0 kubenswrapper[28149]: I0313 12:53:52.813223 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/15b592d6-3c48-45d4-9172-d28632ae8995-config\") pod \"etcd-operator-5884b9cd56-hjzms\" (UID: \"15b592d6-3c48-45d4-9172-d28632ae8995\") " pod="openshift-etcd-operator/etcd-operator-5884b9cd56-hjzms" Mar 13 12:53:52.819031 master-0 kubenswrapper[28149]: I0313 12:53:52.813265 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zbk4f\" (UniqueName: \"kubernetes.io/projected/10944f9c-8ce9-44e6-9c36-a0ea19d8cae3-kube-api-access-zbk4f\") pod \"catalog-operator-7d9c49f57b-tlnkd\" (UID: \"10944f9c-8ce9-44e6-9c36-a0ea19d8cae3\") " pod="openshift-operator-lifecycle-manager/catalog-operator-7d9c49f57b-tlnkd" Mar 13 12:53:52.819031 master-0 kubenswrapper[28149]: I0313 12:53:52.813293 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vgbvr\" (UniqueName: \"kubernetes.io/projected/ce3a655a-0684-4bc5-ac36-5878507537c7-kube-api-access-vgbvr\") pod 
\"multus-bnn7n\" (UID: \"ce3a655a-0684-4bc5-ac36-5878507537c7\") " pod="openshift-multus/multus-bnn7n" Mar 13 12:53:52.819031 master-0 kubenswrapper[28149]: I0313 12:53:52.813322 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fmzhw\" (UniqueName: \"kubernetes.io/projected/18ffa620-dacc-4b09-be04-2c325f860813-kube-api-access-fmzhw\") pod \"route-controller-manager-68c48d4f7d-k7drw\" (UID: \"18ffa620-dacc-4b09-be04-2c325f860813\") " pod="openshift-route-controller-manager/route-controller-manager-68c48d4f7d-k7drw" Mar 13 12:53:52.819031 master-0 kubenswrapper[28149]: I0313 12:53:52.813357 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lwkdj\" (UniqueName: \"kubernetes.io/projected/f0803181-4e37-43fa-8ddc-9c76d3f61817-kube-api-access-lwkdj\") pod \"openshift-config-operator-64488f9d78-t8fb4\" (UID: \"f0803181-4e37-43fa-8ddc-9c76d3f61817\") " pod="openshift-config-operator/openshift-config-operator-64488f9d78-t8fb4" Mar 13 12:53:52.819031 master-0 kubenswrapper[28149]: I0313 12:53:52.813385 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3d653e1a-5903-4a02-9357-df145f028c0d-package-server-manager-serving-cert\") pod \"package-server-manager-854648ff6d-669qk\" (UID: \"3d653e1a-5903-4a02-9357-df145f028c0d\") " pod="openshift-operator-lifecycle-manager/package-server-manager-854648ff6d-669qk" Mar 13 12:53:52.819031 master-0 kubenswrapper[28149]: I0313 12:53:52.813411 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/c4477be6-bcff-407a-8033-b005e19bf5d6-encryption-config\") pod \"apiserver-787dbf5bb9-5645n\" (UID: \"c4477be6-bcff-407a-8033-b005e19bf5d6\") " pod="openshift-oauth-apiserver/apiserver-787dbf5bb9-5645n" Mar 13 12:53:52.819031 master-0 kubenswrapper[28149]: 
I0313 12:53:52.813436 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f83e0d3e-1f73-4727-8ee3-375cbb9e36f8-lib-modules\") pod \"tuned-6tlzf\" (UID: \"f83e0d3e-1f73-4727-8ee3-375cbb9e36f8\") " pod="openshift-cluster-node-tuning-operator/tuned-6tlzf" Mar 13 12:53:52.819031 master-0 kubenswrapper[28149]: I0313 12:53:52.813458 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/152689b1-5875-4a9a-bb25-bee858523168-os-release\") pod \"multus-additional-cni-plugins-78p2k\" (UID: \"152689b1-5875-4a9a-bb25-bee858523168\") " pod="openshift-multus/multus-additional-cni-plugins-78p2k" Mar 13 12:53:52.819031 master-0 kubenswrapper[28149]: I0313 12:53:52.813485 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/77ef7e49-eb85-4f5e-94d3-a6a8619a6243-kube-api-access\") pod \"kube-controller-manager-operator-86d7cdfdfb-br96g\" (UID: \"77ef7e49-eb85-4f5e-94d3-a6a8619a6243\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-86d7cdfdfb-br96g" Mar 13 12:53:52.819031 master-0 kubenswrapper[28149]: I0313 12:53:52.813512 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6x8kz\" (UniqueName: \"kubernetes.io/projected/3d653e1a-5903-4a02-9357-df145f028c0d-kube-api-access-6x8kz\") pod \"package-server-manager-854648ff6d-669qk\" (UID: \"3d653e1a-5903-4a02-9357-df145f028c0d\") " pod="openshift-operator-lifecycle-manager/package-server-manager-854648ff6d-669qk" Mar 13 12:53:52.819031 master-0 kubenswrapper[28149]: I0313 12:53:52.813538 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: 
\"kubernetes.io/configmap/c4477be6-bcff-407a-8033-b005e19bf5d6-etcd-serving-ca\") pod \"apiserver-787dbf5bb9-5645n\" (UID: \"c4477be6-bcff-407a-8033-b005e19bf5d6\") " pod="openshift-oauth-apiserver/apiserver-787dbf5bb9-5645n" Mar 13 12:53:52.819031 master-0 kubenswrapper[28149]: I0313 12:53:52.813569 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qhddd\" (UniqueName: \"kubernetes.io/projected/2f48243b-6b05-4efa-8420-58a4419622bf-kube-api-access-qhddd\") pod \"apiserver-844bc54c88-vznst\" (UID: \"2f48243b-6b05-4efa-8420-58a4419622bf\") " pod="openshift-apiserver/apiserver-844bc54c88-vznst" Mar 13 12:53:52.819031 master-0 kubenswrapper[28149]: I0313 12:53:52.813593 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-sysctl-d\" (UniqueName: \"kubernetes.io/host-path/f83e0d3e-1f73-4727-8ee3-375cbb9e36f8-etc-sysctl-d\") pod \"tuned-6tlzf\" (UID: \"f83e0d3e-1f73-4727-8ee3-375cbb9e36f8\") " pod="openshift-cluster-node-tuning-operator/tuned-6tlzf" Mar 13 12:53:52.819031 master-0 kubenswrapper[28149]: I0313 12:53:52.813619 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xxjbd\" (UniqueName: \"kubernetes.io/projected/ef42b65e-2d92-46ac-baaf-30e213787781-kube-api-access-xxjbd\") pod \"dns-default-m7k6m\" (UID: \"ef42b65e-2d92-46ac-baaf-30e213787781\") " pod="openshift-dns/dns-default-m7k6m" Mar 13 12:53:52.819031 master-0 kubenswrapper[28149]: I0313 12:53:52.813649 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f5775266-5e58-44ed-81cb-dfe3faf38add-serving-cert\") pod \"kube-storage-version-migrator-operator-7f65c457f5-hrm82\" (UID: \"f5775266-5e58-44ed-81cb-dfe3faf38add\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-7f65c457f5-hrm82" Mar 13 12:53:52.819031 master-0 
kubenswrapper[28149]: I0313 12:53:52.813681 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/2f48243b-6b05-4efa-8420-58a4419622bf-etcd-serving-ca\") pod \"apiserver-844bc54c88-vznst\" (UID: \"2f48243b-6b05-4efa-8420-58a4419622bf\") " pod="openshift-apiserver/apiserver-844bc54c88-vznst" Mar 13 12:53:52.819031 master-0 kubenswrapper[28149]: I0313 12:53:52.813709 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/034aaf8e-95df-4171-bae4-e7abe58d15f7-config\") pod \"service-ca-operator-69b6fc6b88-vmscz\" (UID: \"034aaf8e-95df-4171-bae4-e7abe58d15f7\") " pod="openshift-service-ca-operator/service-ca-operator-69b6fc6b88-vmscz" Mar 13 12:53:52.819031 master-0 kubenswrapper[28149]: I0313 12:53:52.813776 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/e25bef76-7020-4f86-8dee-a58ebed537d2-proxy-tls\") pod \"machine-config-controller-ff46b7bdf-kmnlv\" (UID: \"e25bef76-7020-4f86-8dee-a58ebed537d2\") " pod="openshift-machine-config-operator/machine-config-controller-ff46b7bdf-kmnlv" Mar 13 12:53:52.819031 master-0 kubenswrapper[28149]: I0313 12:53:52.813804 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d6226325-c4d9-497e-8d19-a71adc66c5ac-host-slash\") pod \"ovnkube-node-h8fwp\" (UID: \"d6226325-c4d9-497e-8d19-a71adc66c5ac\") " pod="openshift-ovn-kubernetes/ovnkube-node-h8fwp" Mar 13 12:53:52.819031 master-0 kubenswrapper[28149]: I0313 12:53:52.813823 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3d653e1a-5903-4a02-9357-df145f028c0d-package-server-manager-serving-cert\") pod 
\"package-server-manager-854648ff6d-669qk\" (UID: \"3d653e1a-5903-4a02-9357-df145f028c0d\") " pod="openshift-operator-lifecycle-manager/package-server-manager-854648ff6d-669qk" Mar 13 12:53:52.819031 master-0 kubenswrapper[28149]: I0313 12:53:52.813828 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/d6226325-c4d9-497e-8d19-a71adc66c5ac-log-socket\") pod \"ovnkube-node-h8fwp\" (UID: \"d6226325-c4d9-497e-8d19-a71adc66c5ac\") " pod="openshift-ovn-kubernetes/ovnkube-node-h8fwp" Mar 13 12:53:52.819031 master-0 kubenswrapper[28149]: I0313 12:53:52.814039 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c4477be6-bcff-407a-8033-b005e19bf5d6-trusted-ca-bundle\") pod \"apiserver-787dbf5bb9-5645n\" (UID: \"c4477be6-bcff-407a-8033-b005e19bf5d6\") " pod="openshift-oauth-apiserver/apiserver-787dbf5bb9-5645n" Mar 13 12:53:52.819031 master-0 kubenswrapper[28149]: I0313 12:53:52.814109 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kn8f2\" (UniqueName: \"kubernetes.io/projected/a454234a-6c8e-4916-81e8-c9e66cec9d31-kube-api-access-kn8f2\") pod \"controller-manager-54c79cbfcc-cxhmh\" (UID: \"a454234a-6c8e-4916-81e8-c9e66cec9d31\") " pod="openshift-controller-manager/controller-manager-54c79cbfcc-cxhmh" Mar 13 12:53:52.819031 master-0 kubenswrapper[28149]: I0313 12:53:52.814143 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/f6992fed-b472-4a2d-a376-c5d72aa846d4-tmpfs\") pod \"packageserver-5c5f6764b5-96ktp\" (UID: \"f6992fed-b472-4a2d-a376-c5d72aa846d4\") " pod="openshift-operator-lifecycle-manager/packageserver-5c5f6764b5-96ktp" Mar 13 12:53:52.819031 master-0 kubenswrapper[28149]: I0313 12:53:52.814163 28149 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/ce3a655a-0684-4bc5-ac36-5878507537c7-host-run-k8s-cni-cncf-io\") pod \"multus-bnn7n\" (UID: \"ce3a655a-0684-4bc5-ac36-5878507537c7\") " pod="openshift-multus/multus-bnn7n" Mar 13 12:53:52.819031 master-0 kubenswrapper[28149]: I0313 12:53:52.814181 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/185a10f7-2a4b-4171-b10d-4614cb8671bd-var-lock\") pod \"installer-4-master-0\" (UID: \"185a10f7-2a4b-4171-b10d-4614cb8671bd\") " pod="openshift-kube-apiserver/installer-4-master-0" Mar 13 12:53:52.819031 master-0 kubenswrapper[28149]: I0313 12:53:52.814201 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-docker\" (UniqueName: \"kubernetes.io/host-path/00ebdf06-1f44-40cd-87e5-54195188b6d4-etc-docker\") pod \"catalogd-controller-manager-7f8b8b6f4c-8fjzg\" (UID: \"00ebdf06-1f44-40cd-87e5-54195188b6d4\") " pod="openshift-catalogd/catalogd-controller-manager-7f8b8b6f4c-8fjzg" Mar 13 12:53:52.819031 master-0 kubenswrapper[28149]: I0313 12:53:52.814224 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kwk62\" (UniqueName: \"kubernetes.io/projected/f31565e2-c211-4d28-8bbc-d7a951023a8b-kube-api-access-kwk62\") pod \"migrator-57ccdf9b5-7pcdp\" (UID: \"f31565e2-c211-4d28-8bbc-d7a951023a8b\") " pod="openshift-kube-storage-version-migrator/migrator-57ccdf9b5-7pcdp" Mar 13 12:53:52.819031 master-0 kubenswrapper[28149]: I0313 12:53:52.814247 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/d3d998ee-b26f-4e30-83bc-f94f8c68060a-marketplace-trusted-ca\") pod \"marketplace-operator-64bf9778cb-7qhr4\" (UID: \"d3d998ee-b26f-4e30-83bc-f94f8c68060a\") " 
pod="openshift-marketplace/marketplace-operator-64bf9778cb-7qhr4" Mar 13 12:53:52.819031 master-0 kubenswrapper[28149]: I0313 12:53:52.814271 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r8gcb\" (UniqueName: \"kubernetes.io/projected/e25bef76-7020-4f86-8dee-a58ebed537d2-kube-api-access-r8gcb\") pod \"machine-config-controller-ff46b7bdf-kmnlv\" (UID: \"e25bef76-7020-4f86-8dee-a58ebed537d2\") " pod="openshift-machine-config-operator/machine-config-controller-ff46b7bdf-kmnlv" Mar 13 12:53:52.819031 master-0 kubenswrapper[28149]: I0313 12:53:52.814292 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-operator-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/be89c006-0c82-4728-9c79-210303e623dc-prometheus-operator-kube-rbac-proxy-config\") pod \"prometheus-operator-5ff8674d55-bvmsj\" (UID: \"be89c006-0c82-4728-9c79-210303e623dc\") " pod="openshift-monitoring/prometheus-operator-5ff8674d55-bvmsj" Mar 13 12:53:52.819031 master-0 kubenswrapper[28149]: I0313 12:53:52.814313 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4f9e6618-62b5-4181-b545-211461811140-catalog-content\") pod \"community-operators-9x9vk\" (UID: \"4f9e6618-62b5-4181-b545-211461811140\") " pod="openshift-marketplace/community-operators-9x9vk" Mar 13 12:53:52.819031 master-0 kubenswrapper[28149]: I0313 12:53:52.814332 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1cf388b6-e4a7-41db-a350-1b503214efd3-utilities\") pod \"certified-operators-p9csk\" (UID: \"1cf388b6-e4a7-41db-a350-1b503214efd3\") " pod="openshift-marketplace/certified-operators-p9csk" Mar 13 12:53:52.819031 master-0 kubenswrapper[28149]: I0313 12:53:52.814351 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"node-tuning-operator-tls\" (UniqueName: \"kubernetes.io/secret/3020d236-03e0-4916-97dd-f1085632ca43-node-tuning-operator-tls\") pod \"cluster-node-tuning-operator-66c7586884-cz8pc\" (UID: \"3020d236-03e0-4916-97dd-f1085632ca43\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-66c7586884-cz8pc" Mar 13 12:53:52.819031 master-0 kubenswrapper[28149]: I0313 12:53:52.814366 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/15b592d6-3c48-45d4-9172-d28632ae8995-config\") pod \"etcd-operator-5884b9cd56-hjzms\" (UID: \"15b592d6-3c48-45d4-9172-d28632ae8995\") " pod="openshift-etcd-operator/etcd-operator-5884b9cd56-hjzms" Mar 13 12:53:52.819031 master-0 kubenswrapper[28149]: I0313 12:53:52.814473 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p8hcd\" (UniqueName: \"kubernetes.io/projected/d5a19b80-d488-46d3-a4a8-0b80361077e1-kube-api-access-p8hcd\") pod \"olm-operator-d64cfc9db-rfqb9\" (UID: \"d5a19b80-d488-46d3-a4a8-0b80361077e1\") " pod="openshift-operator-lifecycle-manager/olm-operator-d64cfc9db-rfqb9" Mar 13 12:53:52.819031 master-0 kubenswrapper[28149]: I0313 12:53:52.814541 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0da84bb7-e936-49a0-96b5-614a1305d6a4-kube-api-access\") pod \"openshift-kube-scheduler-operator-5c74bfc494-m8mqj\" (UID: \"0da84bb7-e936-49a0-96b5-614a1305d6a4\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5c74bfc494-m8mqj" Mar 13 12:53:52.819031 master-0 kubenswrapper[28149]: I0313 12:53:52.814549 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1cf388b6-e4a7-41db-a350-1b503214efd3-utilities\") pod \"certified-operators-p9csk\" (UID: \"1cf388b6-e4a7-41db-a350-1b503214efd3\") " 
pod="openshift-marketplace/certified-operators-p9csk" Mar 13 12:53:52.819031 master-0 kubenswrapper[28149]: I0313 12:53:52.814613 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f5775266-5e58-44ed-81cb-dfe3faf38add-serving-cert\") pod \"kube-storage-version-migrator-operator-7f65c457f5-hrm82\" (UID: \"f5775266-5e58-44ed-81cb-dfe3faf38add\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-7f65c457f5-hrm82" Mar 13 12:53:52.819031 master-0 kubenswrapper[28149]: I0313 12:53:52.814766 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-tuning-operator-tls\" (UniqueName: \"kubernetes.io/secret/3020d236-03e0-4916-97dd-f1085632ca43-node-tuning-operator-tls\") pod \"cluster-node-tuning-operator-66c7586884-cz8pc\" (UID: \"3020d236-03e0-4916-97dd-f1085632ca43\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-66c7586884-cz8pc" Mar 13 12:53:52.819031 master-0 kubenswrapper[28149]: I0313 12:53:52.814788 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/2f48243b-6b05-4efa-8420-58a4419622bf-etcd-serving-ca\") pod \"apiserver-844bc54c88-vznst\" (UID: \"2f48243b-6b05-4efa-8420-58a4419622bf\") " pod="openshift-apiserver/apiserver-844bc54c88-vznst" Mar 13 12:53:52.819031 master-0 kubenswrapper[28149]: I0313 12:53:52.814882 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4f9e6618-62b5-4181-b545-211461811140-catalog-content\") pod \"community-operators-9x9vk\" (UID: \"4f9e6618-62b5-4181-b545-211461811140\") " pod="openshift-marketplace/community-operators-9x9vk" Mar 13 12:53:52.819031 master-0 kubenswrapper[28149]: I0313 12:53:52.814952 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: 
\"kubernetes.io/empty-dir/f6992fed-b472-4a2d-a376-c5d72aa846d4-tmpfs\") pod \"packageserver-5c5f6764b5-96ktp\" (UID: \"f6992fed-b472-4a2d-a376-c5d72aa846d4\") " pod="openshift-operator-lifecycle-manager/packageserver-5c5f6764b5-96ktp" Mar 13 12:53:52.819031 master-0 kubenswrapper[28149]: I0313 12:53:52.815031 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/034aaf8e-95df-4171-bae4-e7abe58d15f7-config\") pod \"service-ca-operator-69b6fc6b88-vmscz\" (UID: \"034aaf8e-95df-4171-bae4-e7abe58d15f7\") " pod="openshift-service-ca-operator/service-ca-operator-69b6fc6b88-vmscz" Mar 13 12:53:52.828349 master-0 kubenswrapper[28149]: I0313 12:53:52.821791 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bcf05594-4c10-4b54-a47c-d55e323f1f87-trusted-ca\") pod \"cluster-image-registry-operator-86d6d77c7c-q287n\" (UID: \"bcf05594-4c10-4b54-a47c-d55e323f1f87\") " pod="openshift-image-registry/cluster-image-registry-operator-86d6d77c7c-q287n" Mar 13 12:53:52.828349 master-0 kubenswrapper[28149]: I0313 12:53:52.826114 28149 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Mar 13 12:53:52.850797 master-0 kubenswrapper[28149]: I0313 12:53:52.850753 28149 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Mar 13 12:53:52.865895 master-0 kubenswrapper[28149]: I0313 12:53:52.865745 28149 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-catalogd"/"catalogserver-cert" Mar 13 12:53:52.867415 master-0 kubenswrapper[28149]: I0313 12:53:52.867369 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalogserver-certs\" (UniqueName: \"kubernetes.io/secret/00ebdf06-1f44-40cd-87e5-54195188b6d4-catalogserver-certs\") pod \"catalogd-controller-manager-7f8b8b6f4c-8fjzg\" (UID: 
\"00ebdf06-1f44-40cd-87e5-54195188b6d4\") " pod="openshift-catalogd/catalogd-controller-manager-7f8b8b6f4c-8fjzg" Mar 13 12:53:52.886356 master-0 kubenswrapper[28149]: I0313 12:53:52.886319 28149 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-5ddms" Mar 13 12:53:52.915850 master-0 kubenswrapper[28149]: I0313 12:53:52.915796 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/ce3a655a-0684-4bc5-ac36-5878507537c7-multus-cni-dir\") pod \"multus-bnn7n\" (UID: \"ce3a655a-0684-4bc5-ac36-5878507537c7\") " pod="openshift-multus/multus-bnn7n" Mar 13 12:53:52.915850 master-0 kubenswrapper[28149]: I0313 12:53:52.915841 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/ce3a655a-0684-4bc5-ac36-5878507537c7-host-var-lib-cni-multus\") pod \"multus-bnn7n\" (UID: \"ce3a655a-0684-4bc5-ac36-5878507537c7\") " pod="openshift-multus/multus-bnn7n" Mar 13 12:53:52.916067 master-0 kubenswrapper[28149]: I0313 12:53:52.915864 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/d6226325-c4d9-497e-8d19-a71adc66c5ac-run-systemd\") pod \"ovnkube-node-h8fwp\" (UID: \"d6226325-c4d9-497e-8d19-a71adc66c5ac\") " pod="openshift-ovn-kubernetes/ovnkube-node-h8fwp" Mar 13 12:53:52.916067 master-0 kubenswrapper[28149]: I0313 12:53:52.915893 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/08e2bc8e-ca80-454c-81dc-211d122e32e0-host-slash\") pod \"iptables-alerter-qz6pg\" (UID: \"08e2bc8e-ca80-454c-81dc-211d122e32e0\") " pod="openshift-network-operator/iptables-alerter-qz6pg" Mar 13 12:53:52.916067 master-0 kubenswrapper[28149]: I0313 12:53:52.915946 28149 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/ce3a655a-0684-4bc5-ac36-5878507537c7-host-var-lib-cni-bin\") pod \"multus-bnn7n\" (UID: \"ce3a655a-0684-4bc5-ac36-5878507537c7\") " pod="openshift-multus/multus-bnn7n" Mar 13 12:53:52.916067 master-0 kubenswrapper[28149]: I0313 12:53:52.915979 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/00d8a21b-701c-4334-9dda-34c28b417f42-host-etc-kube\") pod \"cluster-cloud-controller-manager-operator-7c8df9b496-x2wlg\" (UID: \"00d8a21b-701c-4334-9dda-34c28b417f42\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7c8df9b496-x2wlg" Mar 13 12:53:52.916067 master-0 kubenswrapper[28149]: I0313 12:53:52.915998 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/d6226325-c4d9-497e-8d19-a71adc66c5ac-systemd-units\") pod \"ovnkube-node-h8fwp\" (UID: \"d6226325-c4d9-497e-8d19-a71adc66c5ac\") " pod="openshift-ovn-kubernetes/ovnkube-node-h8fwp" Mar 13 12:53:52.916067 master-0 kubenswrapper[28149]: I0313 12:53:52.916014 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/ce3a655a-0684-4bc5-ac36-5878507537c7-system-cni-dir\") pod \"multus-bnn7n\" (UID: \"ce3a655a-0684-4bc5-ac36-5878507537c7\") " pod="openshift-multus/multus-bnn7n" Mar 13 12:53:52.916067 master-0 kubenswrapper[28149]: I0313 12:53:52.916029 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/ce3a655a-0684-4bc5-ac36-5878507537c7-os-release\") pod \"multus-bnn7n\" (UID: \"ce3a655a-0684-4bc5-ac36-5878507537c7\") " pod="openshift-multus/multus-bnn7n" Mar 13 12:53:52.916067 master-0 kubenswrapper[28149]: I0313 12:53:52.916046 
28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/c4477be6-bcff-407a-8033-b005e19bf5d6-audit-dir\") pod \"apiserver-787dbf5bb9-5645n\" (UID: \"c4477be6-bcff-407a-8033-b005e19bf5d6\") " pod="openshift-oauth-apiserver/apiserver-787dbf5bb9-5645n" Mar 13 12:53:52.916307 master-0 kubenswrapper[28149]: I0313 12:53:52.916062 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/f83e0d3e-1f73-4727-8ee3-375cbb9e36f8-etc-kubernetes\") pod \"tuned-6tlzf\" (UID: \"f83e0d3e-1f73-4727-8ee3-375cbb9e36f8\") " pod="openshift-cluster-node-tuning-operator/tuned-6tlzf" Mar 13 12:53:52.916307 master-0 kubenswrapper[28149]: I0313 12:53:52.916103 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/d6226325-c4d9-497e-8d19-a71adc66c5ac-etc-openvswitch\") pod \"ovnkube-node-h8fwp\" (UID: \"d6226325-c4d9-497e-8d19-a71adc66c5ac\") " pod="openshift-ovn-kubernetes/ovnkube-node-h8fwp" Mar 13 12:53:52.916307 master-0 kubenswrapper[28149]: I0313 12:53:52.916126 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/2f48243b-6b05-4efa-8420-58a4419622bf-audit-dir\") pod \"apiserver-844bc54c88-vznst\" (UID: \"2f48243b-6b05-4efa-8420-58a4419622bf\") " pod="openshift-apiserver/apiserver-844bc54c88-vznst" Mar 13 12:53:52.916307 master-0 kubenswrapper[28149]: I0313 12:53:52.916158 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/842251bd-238a-44ba-99fc-a356503f5d16-sys\") pod \"node-exporter-v4hdh\" (UID: \"842251bd-238a-44ba-99fc-a356503f5d16\") " pod="openshift-monitoring/node-exporter-v4hdh" Mar 13 12:53:52.916307 master-0 kubenswrapper[28149]: I0313 12:53:52.916174 28149 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-exporter-wtmp\" (UniqueName: \"kubernetes.io/host-path/842251bd-238a-44ba-99fc-a356503f5d16-node-exporter-wtmp\") pod \"node-exporter-v4hdh\" (UID: \"842251bd-238a-44ba-99fc-a356503f5d16\") " pod="openshift-monitoring/node-exporter-v4hdh" Mar 13 12:53:52.916307 master-0 kubenswrapper[28149]: I0313 12:53:52.916203 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/d6226325-c4d9-497e-8d19-a71adc66c5ac-var-lib-openvswitch\") pod \"ovnkube-node-h8fwp\" (UID: \"d6226325-c4d9-497e-8d19-a71adc66c5ac\") " pod="openshift-ovn-kubernetes/ovnkube-node-h8fwp" Mar 13 12:53:52.916307 master-0 kubenswrapper[28149]: I0313 12:53:52.916226 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-systemd\" (UniqueName: \"kubernetes.io/host-path/f83e0d3e-1f73-4727-8ee3-375cbb9e36f8-etc-systemd\") pod \"tuned-6tlzf\" (UID: \"f83e0d3e-1f73-4727-8ee3-375cbb9e36f8\") " pod="openshift-cluster-node-tuning-operator/tuned-6tlzf" Mar 13 12:53:52.916307 master-0 kubenswrapper[28149]: I0313 12:53:52.916253 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/d6226325-c4d9-497e-8d19-a71adc66c5ac-host-kubelet\") pod \"ovnkube-node-h8fwp\" (UID: \"d6226325-c4d9-497e-8d19-a71adc66c5ac\") " pod="openshift-ovn-kubernetes/ovnkube-node-h8fwp" Mar 13 12:53:52.916307 master-0 kubenswrapper[28149]: I0313 12:53:52.916270 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/d6226325-c4d9-497e-8d19-a71adc66c5ac-node-log\") pod \"ovnkube-node-h8fwp\" (UID: \"d6226325-c4d9-497e-8d19-a71adc66c5ac\") " pod="openshift-ovn-kubernetes/ovnkube-node-h8fwp" Mar 13 12:53:52.916307 master-0 kubenswrapper[28149]: I0313 12:53:52.916285 28149 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/ce3a655a-0684-4bc5-ac36-5878507537c7-host-run-netns\") pod \"multus-bnn7n\" (UID: \"ce3a655a-0684-4bc5-ac36-5878507537c7\") " pod="openshift-multus/multus-bnn7n" Mar 13 12:53:52.916307 master-0 kubenswrapper[28149]: I0313 12:53:52.916302 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/ce3a655a-0684-4bc5-ac36-5878507537c7-host-var-lib-kubelet\") pod \"multus-bnn7n\" (UID: \"ce3a655a-0684-4bc5-ac36-5878507537c7\") " pod="openshift-multus/multus-bnn7n" Mar 13 12:53:52.916618 master-0 kubenswrapper[28149]: I0313 12:53:52.916320 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-containers\" (UniqueName: \"kubernetes.io/host-path/915aabfe-1071-4bfc-b291-424304dfe7d8-etc-containers\") pod \"operator-controller-controller-manager-6598bfb6c4-dv8rj\" (UID: \"915aabfe-1071-4bfc-b291-424304dfe7d8\") " pod="openshift-operator-controller/operator-controller-controller-manager-6598bfb6c4-dv8rj" Mar 13 12:53:52.916618 master-0 kubenswrapper[28149]: I0313 12:53:52.916363 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"root\" (UniqueName: \"kubernetes.io/host-path/842251bd-238a-44ba-99fc-a356503f5d16-root\") pod \"node-exporter-v4hdh\" (UID: \"842251bd-238a-44ba-99fc-a356503f5d16\") " pod="openshift-monitoring/node-exporter-v4hdh" Mar 13 12:53:52.916618 master-0 kubenswrapper[28149]: I0313 12:53:52.916404 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/f83e0d3e-1f73-4727-8ee3-375cbb9e36f8-sys\") pod \"tuned-6tlzf\" (UID: \"f83e0d3e-1f73-4727-8ee3-375cbb9e36f8\") " pod="openshift-cluster-node-tuning-operator/tuned-6tlzf" Mar 13 12:53:52.916618 master-0 kubenswrapper[28149]: I0313 12:53:52.916430 28149 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/ce3a655a-0684-4bc5-ac36-5878507537c7-etc-kubernetes\") pod \"multus-bnn7n\" (UID: \"ce3a655a-0684-4bc5-ac36-5878507537c7\") " pod="openshift-multus/multus-bnn7n" Mar 13 12:53:52.916618 master-0 kubenswrapper[28149]: I0313 12:53:52.916447 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-containers\" (UniqueName: \"kubernetes.io/host-path/00ebdf06-1f44-40cd-87e5-54195188b6d4-etc-containers\") pod \"catalogd-controller-manager-7f8b8b6f4c-8fjzg\" (UID: \"00ebdf06-1f44-40cd-87e5-54195188b6d4\") " pod="openshift-catalogd/catalogd-controller-manager-7f8b8b6f4c-8fjzg" Mar 13 12:53:52.916618 master-0 kubenswrapper[28149]: I0313 12:53:52.916534 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/2f48243b-6b05-4efa-8420-58a4419622bf-audit-dir\") pod \"apiserver-844bc54c88-vznst\" (UID: \"2f48243b-6b05-4efa-8420-58a4419622bf\") " pod="openshift-apiserver/apiserver-844bc54c88-vznst" Mar 13 12:53:52.916618 master-0 kubenswrapper[28149]: I0313 12:53:52.916552 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-exporter-wtmp\" (UniqueName: \"kubernetes.io/host-path/842251bd-238a-44ba-99fc-a356503f5d16-node-exporter-wtmp\") pod \"node-exporter-v4hdh\" (UID: \"842251bd-238a-44ba-99fc-a356503f5d16\") " pod="openshift-monitoring/node-exporter-v4hdh" Mar 13 12:53:52.916618 master-0 kubenswrapper[28149]: I0313 12:53:52.916594 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/d6226325-c4d9-497e-8d19-a71adc66c5ac-host-kubelet\") pod \"ovnkube-node-h8fwp\" (UID: \"d6226325-c4d9-497e-8d19-a71adc66c5ac\") " pod="openshift-ovn-kubernetes/ovnkube-node-h8fwp" Mar 13 12:53:52.916618 master-0 kubenswrapper[28149]: I0313 12:53:52.916618 28149 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/d6226325-c4d9-497e-8d19-a71adc66c5ac-node-log\") pod \"ovnkube-node-h8fwp\" (UID: \"d6226325-c4d9-497e-8d19-a71adc66c5ac\") " pod="openshift-ovn-kubernetes/ovnkube-node-h8fwp" Mar 13 12:53:52.916881 master-0 kubenswrapper[28149]: I0313 12:53:52.916622 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/ce3a655a-0684-4bc5-ac36-5878507537c7-multus-cni-dir\") pod \"multus-bnn7n\" (UID: \"ce3a655a-0684-4bc5-ac36-5878507537c7\") " pod="openshift-multus/multus-bnn7n" Mar 13 12:53:52.916881 master-0 kubenswrapper[28149]: I0313 12:53:52.916659 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/d6226325-c4d9-497e-8d19-a71adc66c5ac-run-systemd\") pod \"ovnkube-node-h8fwp\" (UID: \"d6226325-c4d9-497e-8d19-a71adc66c5ac\") " pod="openshift-ovn-kubernetes/ovnkube-node-h8fwp" Mar 13 12:53:52.916881 master-0 kubenswrapper[28149]: I0313 12:53:52.916663 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"root\" (UniqueName: \"kubernetes.io/host-path/842251bd-238a-44ba-99fc-a356503f5d16-root\") pod \"node-exporter-v4hdh\" (UID: \"842251bd-238a-44ba-99fc-a356503f5d16\") " pod="openshift-monitoring/node-exporter-v4hdh" Mar 13 12:53:52.916881 master-0 kubenswrapper[28149]: I0313 12:53:52.916684 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-containers\" (UniqueName: \"kubernetes.io/host-path/915aabfe-1071-4bfc-b291-424304dfe7d8-etc-containers\") pod \"operator-controller-controller-manager-6598bfb6c4-dv8rj\" (UID: \"915aabfe-1071-4bfc-b291-424304dfe7d8\") " pod="openshift-operator-controller/operator-controller-controller-manager-6598bfb6c4-dv8rj" Mar 13 12:53:52.916881 master-0 kubenswrapper[28149]: I0313 12:53:52.916702 28149 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/08e2bc8e-ca80-454c-81dc-211d122e32e0-host-slash\") pod \"iptables-alerter-qz6pg\" (UID: \"08e2bc8e-ca80-454c-81dc-211d122e32e0\") " pod="openshift-network-operator/iptables-alerter-qz6pg" Mar 13 12:53:52.916881 master-0 kubenswrapper[28149]: I0313 12:53:52.916711 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/f83e0d3e-1f73-4727-8ee3-375cbb9e36f8-sys\") pod \"tuned-6tlzf\" (UID: \"f83e0d3e-1f73-4727-8ee3-375cbb9e36f8\") " pod="openshift-cluster-node-tuning-operator/tuned-6tlzf" Mar 13 12:53:52.916881 master-0 kubenswrapper[28149]: I0313 12:53:52.916727 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/ce3a655a-0684-4bc5-ac36-5878507537c7-host-var-lib-cni-multus\") pod \"multus-bnn7n\" (UID: \"ce3a655a-0684-4bc5-ac36-5878507537c7\") " pod="openshift-multus/multus-bnn7n" Mar 13 12:53:52.916881 master-0 kubenswrapper[28149]: I0313 12:53:52.916733 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/ce3a655a-0684-4bc5-ac36-5878507537c7-etc-kubernetes\") pod \"multus-bnn7n\" (UID: \"ce3a655a-0684-4bc5-ac36-5878507537c7\") " pod="openshift-multus/multus-bnn7n" Mar 13 12:53:52.916881 master-0 kubenswrapper[28149]: I0313 12:53:52.916754 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-containers\" (UniqueName: \"kubernetes.io/host-path/00ebdf06-1f44-40cd-87e5-54195188b6d4-etc-containers\") pod \"catalogd-controller-manager-7f8b8b6f4c-8fjzg\" (UID: \"00ebdf06-1f44-40cd-87e5-54195188b6d4\") " pod="openshift-catalogd/catalogd-controller-manager-7f8b8b6f4c-8fjzg" Mar 13 12:53:52.916881 master-0 kubenswrapper[28149]: I0313 12:53:52.916765 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/ce3a655a-0684-4bc5-ac36-5878507537c7-host-var-lib-cni-bin\") pod \"multus-bnn7n\" (UID: \"ce3a655a-0684-4bc5-ac36-5878507537c7\") " pod="openshift-multus/multus-bnn7n" Mar 13 12:53:52.916881 master-0 kubenswrapper[28149]: I0313 12:53:52.916787 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/f83e0d3e-1f73-4727-8ee3-375cbb9e36f8-etc-kubernetes\") pod \"tuned-6tlzf\" (UID: \"f83e0d3e-1f73-4727-8ee3-375cbb9e36f8\") " pod="openshift-cluster-node-tuning-operator/tuned-6tlzf" Mar 13 12:53:52.916881 master-0 kubenswrapper[28149]: I0313 12:53:52.916797 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/d6226325-c4d9-497e-8d19-a71adc66c5ac-etc-openvswitch\") pod \"ovnkube-node-h8fwp\" (UID: \"d6226325-c4d9-497e-8d19-a71adc66c5ac\") " pod="openshift-ovn-kubernetes/ovnkube-node-h8fwp" Mar 13 12:53:52.916881 master-0 kubenswrapper[28149]: I0313 12:53:52.916805 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/ce3a655a-0684-4bc5-ac36-5878507537c7-host-run-netns\") pod \"multus-bnn7n\" (UID: \"ce3a655a-0684-4bc5-ac36-5878507537c7\") " pod="openshift-multus/multus-bnn7n" Mar 13 12:53:52.916881 master-0 kubenswrapper[28149]: I0313 12:53:52.916818 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/842251bd-238a-44ba-99fc-a356503f5d16-sys\") pod \"node-exporter-v4hdh\" (UID: \"842251bd-238a-44ba-99fc-a356503f5d16\") " pod="openshift-monitoring/node-exporter-v4hdh" Mar 13 12:53:52.916881 master-0 kubenswrapper[28149]: I0313 12:53:52.916823 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-multus-certs\" (UniqueName: 
\"kubernetes.io/host-path/ce3a655a-0684-4bc5-ac36-5878507537c7-host-run-multus-certs\") pod \"multus-bnn7n\" (UID: \"ce3a655a-0684-4bc5-ac36-5878507537c7\") " pod="openshift-multus/multus-bnn7n" Mar 13 12:53:52.916881 master-0 kubenswrapper[28149]: I0313 12:53:52.916838 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/ce3a655a-0684-4bc5-ac36-5878507537c7-host-run-multus-certs\") pod \"multus-bnn7n\" (UID: \"ce3a655a-0684-4bc5-ac36-5878507537c7\") " pod="openshift-multus/multus-bnn7n" Mar 13 12:53:52.916881 master-0 kubenswrapper[28149]: I0313 12:53:52.916870 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/00d8a21b-701c-4334-9dda-34c28b417f42-host-etc-kube\") pod \"cluster-cloud-controller-manager-operator-7c8df9b496-x2wlg\" (UID: \"00d8a21b-701c-4334-9dda-34c28b417f42\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7c8df9b496-x2wlg" Mar 13 12:53:52.916881 master-0 kubenswrapper[28149]: I0313 12:53:52.916873 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/d6226325-c4d9-497e-8d19-a71adc66c5ac-systemd-units\") pod \"ovnkube-node-h8fwp\" (UID: \"d6226325-c4d9-497e-8d19-a71adc66c5ac\") " pod="openshift-ovn-kubernetes/ovnkube-node-h8fwp" Mar 13 12:53:52.917441 master-0 kubenswrapper[28149]: I0313 12:53:52.916907 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/ce3a655a-0684-4bc5-ac36-5878507537c7-os-release\") pod \"multus-bnn7n\" (UID: \"ce3a655a-0684-4bc5-ac36-5878507537c7\") " pod="openshift-multus/multus-bnn7n" Mar 13 12:53:52.917441 master-0 kubenswrapper[28149]: I0313 12:53:52.916919 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: 
\"kubernetes.io/host-path/ce3a655a-0684-4bc5-ac36-5878507537c7-system-cni-dir\") pod \"multus-bnn7n\" (UID: \"ce3a655a-0684-4bc5-ac36-5878507537c7\") " pod="openshift-multus/multus-bnn7n" Mar 13 12:53:52.917441 master-0 kubenswrapper[28149]: I0313 12:53:52.916931 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-systemd\" (UniqueName: \"kubernetes.io/host-path/f83e0d3e-1f73-4727-8ee3-375cbb9e36f8-etc-systemd\") pod \"tuned-6tlzf\" (UID: \"f83e0d3e-1f73-4727-8ee3-375cbb9e36f8\") " pod="openshift-cluster-node-tuning-operator/tuned-6tlzf" Mar 13 12:53:52.917441 master-0 kubenswrapper[28149]: I0313 12:53:52.916944 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/d6226325-c4d9-497e-8d19-a71adc66c5ac-var-lib-openvswitch\") pod \"ovnkube-node-h8fwp\" (UID: \"d6226325-c4d9-497e-8d19-a71adc66c5ac\") " pod="openshift-ovn-kubernetes/ovnkube-node-h8fwp" Mar 13 12:53:52.917441 master-0 kubenswrapper[28149]: I0313 12:53:52.916963 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/c4477be6-bcff-407a-8033-b005e19bf5d6-audit-dir\") pod \"apiserver-787dbf5bb9-5645n\" (UID: \"c4477be6-bcff-407a-8033-b005e19bf5d6\") " pod="openshift-oauth-apiserver/apiserver-787dbf5bb9-5645n" Mar 13 12:53:52.917441 master-0 kubenswrapper[28149]: I0313 12:53:52.917014 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/ce3a655a-0684-4bc5-ac36-5878507537c7-host-var-lib-kubelet\") pod \"multus-bnn7n\" (UID: \"ce3a655a-0684-4bc5-ac36-5878507537c7\") " pod="openshift-multus/multus-bnn7n" Mar 13 12:53:52.917441 master-0 kubenswrapper[28149]: I0313 12:53:52.917041 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-modprobe-d\" (UniqueName: 
\"kubernetes.io/host-path/f83e0d3e-1f73-4727-8ee3-375cbb9e36f8-etc-modprobe-d\") pod \"tuned-6tlzf\" (UID: \"f83e0d3e-1f73-4727-8ee3-375cbb9e36f8\") " pod="openshift-cluster-node-tuning-operator/tuned-6tlzf" Mar 13 12:53:52.917441 master-0 kubenswrapper[28149]: I0313 12:53:52.917087 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/d6226325-c4d9-497e-8d19-a71adc66c5ac-host-run-netns\") pod \"ovnkube-node-h8fwp\" (UID: \"d6226325-c4d9-497e-8d19-a71adc66c5ac\") " pod="openshift-ovn-kubernetes/ovnkube-node-h8fwp" Mar 13 12:53:52.917441 master-0 kubenswrapper[28149]: I0313 12:53:52.917125 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/ce3a655a-0684-4bc5-ac36-5878507537c7-hostroot\") pod \"multus-bnn7n\" (UID: \"ce3a655a-0684-4bc5-ac36-5878507537c7\") " pod="openshift-multus/multus-bnn7n" Mar 13 12:53:52.917441 master-0 kubenswrapper[28149]: I0313 12:53:52.917166 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/ce3a655a-0684-4bc5-ac36-5878507537c7-multus-conf-dir\") pod \"multus-bnn7n\" (UID: \"ce3a655a-0684-4bc5-ac36-5878507537c7\") " pod="openshift-multus/multus-bnn7n" Mar 13 12:53:52.917441 master-0 kubenswrapper[28149]: I0313 12:53:52.917235 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/152689b1-5875-4a9a-bb25-bee858523168-cnibin\") pod \"multus-additional-cni-plugins-78p2k\" (UID: \"152689b1-5875-4a9a-bb25-bee858523168\") " pod="openshift-multus/multus-additional-cni-plugins-78p2k" Mar 13 12:53:52.917441 master-0 kubenswrapper[28149]: I0313 12:53:52.917257 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: 
\"kubernetes.io/host-path/185a10f7-2a4b-4171-b10d-4614cb8671bd-kubelet-dir\") pod \"installer-4-master-0\" (UID: \"185a10f7-2a4b-4171-b10d-4614cb8671bd\") " pod="openshift-kube-apiserver/installer-4-master-0" Mar 13 12:53:52.917441 master-0 kubenswrapper[28149]: I0313 12:53:52.917309 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/d6226325-c4d9-497e-8d19-a71adc66c5ac-run-ovn\") pod \"ovnkube-node-h8fwp\" (UID: \"d6226325-c4d9-497e-8d19-a71adc66c5ac\") " pod="openshift-ovn-kubernetes/ovnkube-node-h8fwp" Mar 13 12:53:52.917441 master-0 kubenswrapper[28149]: I0313 12:53:52.917350 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/676b054a-e76f-425d-a6ff-3f1bea8b523e-etc-cvo-updatepayloads\") pod \"cluster-version-operator-8c9c967c7-98tv2\" (UID: \"676b054a-e76f-425d-a6ff-3f1bea8b523e\") " pod="openshift-cluster-version/cluster-version-operator-8c9c967c7-98tv2" Mar 13 12:53:52.917835 master-0 kubenswrapper[28149]: I0313 12:53:52.917451 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/d6226325-c4d9-497e-8d19-a71adc66c5ac-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-h8fwp\" (UID: \"d6226325-c4d9-497e-8d19-a71adc66c5ac\") " pod="openshift-ovn-kubernetes/ovnkube-node-h8fwp" Mar 13 12:53:52.917835 master-0 kubenswrapper[28149]: I0313 12:53:52.917490 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/50be3c2b-284b-4f60-b4ed-2cc7b4e528fa-rootfs\") pod \"machine-config-daemon-5h8rc\" (UID: \"50be3c2b-284b-4f60-b4ed-2cc7b4e528fa\") " pod="openshift-machine-config-operator/machine-config-daemon-5h8rc" Mar 13 12:53:52.917835 master-0 kubenswrapper[28149]: I0313 12:53:52.917565 28149 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/ce3a655a-0684-4bc5-ac36-5878507537c7-cnibin\") pod \"multus-bnn7n\" (UID: \"ce3a655a-0684-4bc5-ac36-5878507537c7\") " pod="openshift-multus/multus-bnn7n" Mar 13 12:53:52.917835 master-0 kubenswrapper[28149]: I0313 12:53:52.917598 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/d6226325-c4d9-497e-8d19-a71adc66c5ac-run-openvswitch\") pod \"ovnkube-node-h8fwp\" (UID: \"d6226325-c4d9-497e-8d19-a71adc66c5ac\") " pod="openshift-ovn-kubernetes/ovnkube-node-h8fwp" Mar 13 12:53:52.917835 master-0 kubenswrapper[28149]: I0313 12:53:52.917630 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/185a10f7-2a4b-4171-b10d-4614cb8671bd-kube-api-access\") pod \"installer-4-master-0\" (UID: \"185a10f7-2a4b-4171-b10d-4614cb8671bd\") " pod="openshift-kube-apiserver/installer-4-master-0" Mar 13 12:53:52.917835 master-0 kubenswrapper[28149]: I0313 12:53:52.917654 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/4dd0fc2f-f2ee-4447-a747-04a178288cf0-host-etc-kube\") pod \"network-operator-7c649bf6d4-kh6n9\" (UID: \"4dd0fc2f-f2ee-4447-a747-04a178288cf0\") " pod="openshift-network-operator/network-operator-7c649bf6d4-kh6n9" Mar 13 12:53:52.917835 master-0 kubenswrapper[28149]: I0313 12:53:52.917769 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/f83e0d3e-1f73-4727-8ee3-375cbb9e36f8-var-lib-kubelet\") pod \"tuned-6tlzf\" (UID: \"f83e0d3e-1f73-4727-8ee3-375cbb9e36f8\") " pod="openshift-cluster-node-tuning-operator/tuned-6tlzf" Mar 13 12:53:52.917835 master-0 kubenswrapper[28149]: I0313 12:53:52.917821 28149 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/152689b1-5875-4a9a-bb25-bee858523168-tuning-conf-dir\") pod \"multus-additional-cni-plugins-78p2k\" (UID: \"152689b1-5875-4a9a-bb25-bee858523168\") " pod="openshift-multus/multus-additional-cni-plugins-78p2k" Mar 13 12:53:52.919485 master-0 kubenswrapper[28149]: I0313 12:53:52.917878 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/13710582-eac3-42e5-b28a-8b4fd3030af2-hosts-file\") pod \"node-resolver-xpz47\" (UID: \"13710582-eac3-42e5-b28a-8b4fd3030af2\") " pod="openshift-dns/node-resolver-xpz47" Mar 13 12:53:52.919485 master-0 kubenswrapper[28149]: I0313 12:53:52.917914 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/152689b1-5875-4a9a-bb25-bee858523168-system-cni-dir\") pod \"multus-additional-cni-plugins-78p2k\" (UID: \"152689b1-5875-4a9a-bb25-bee858523168\") " pod="openshift-multus/multus-additional-cni-plugins-78p2k" Mar 13 12:53:52.919485 master-0 kubenswrapper[28149]: I0313 12:53:52.917945 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/d6226325-c4d9-497e-8d19-a71adc66c5ac-host-cni-bin\") pod \"ovnkube-node-h8fwp\" (UID: \"d6226325-c4d9-497e-8d19-a71adc66c5ac\") " pod="openshift-ovn-kubernetes/ovnkube-node-h8fwp" Mar 13 12:53:52.919485 master-0 kubenswrapper[28149]: I0313 12:53:52.917968 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/d6226325-c4d9-497e-8d19-a71adc66c5ac-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-h8fwp\" (UID: \"d6226325-c4d9-497e-8d19-a71adc66c5ac\") " pod="openshift-ovn-kubernetes/ovnkube-node-h8fwp" Mar 13 
12:53:52.919485 master-0 kubenswrapper[28149]: I0313 12:53:52.918020 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-sysctl-conf\" (UniqueName: \"kubernetes.io/host-path/f83e0d3e-1f73-4727-8ee3-375cbb9e36f8-etc-sysctl-conf\") pod \"tuned-6tlzf\" (UID: \"f83e0d3e-1f73-4727-8ee3-375cbb9e36f8\") " pod="openshift-cluster-node-tuning-operator/tuned-6tlzf" Mar 13 12:53:52.919485 master-0 kubenswrapper[28149]: I0313 12:53:52.918083 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-modprobe-d\" (UniqueName: \"kubernetes.io/host-path/f83e0d3e-1f73-4727-8ee3-375cbb9e36f8-etc-modprobe-d\") pod \"tuned-6tlzf\" (UID: \"f83e0d3e-1f73-4727-8ee3-375cbb9e36f8\") " pod="openshift-cluster-node-tuning-operator/tuned-6tlzf" Mar 13 12:53:52.919485 master-0 kubenswrapper[28149]: I0313 12:53:52.918106 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/f83e0d3e-1f73-4727-8ee3-375cbb9e36f8-host\") pod \"tuned-6tlzf\" (UID: \"f83e0d3e-1f73-4727-8ee3-375cbb9e36f8\") " pod="openshift-cluster-node-tuning-operator/tuned-6tlzf" Mar 13 12:53:52.919485 master-0 kubenswrapper[28149]: I0313 12:53:52.918119 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/50be3c2b-284b-4f60-b4ed-2cc7b4e528fa-rootfs\") pod \"machine-config-daemon-5h8rc\" (UID: \"50be3c2b-284b-4f60-b4ed-2cc7b4e528fa\") " pod="openshift-machine-config-operator/machine-config-daemon-5h8rc" Mar 13 12:53:52.919485 master-0 kubenswrapper[28149]: I0313 12:53:52.918174 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/ce3a655a-0684-4bc5-ac36-5878507537c7-cnibin\") pod \"multus-bnn7n\" (UID: \"ce3a655a-0684-4bc5-ac36-5878507537c7\") " pod="openshift-multus/multus-bnn7n" Mar 13 12:53:52.919485 master-0 kubenswrapper[28149]: I0313 12:53:52.918197 28149 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/d6226325-c4d9-497e-8d19-a71adc66c5ac-host-run-netns\") pod \"ovnkube-node-h8fwp\" (UID: \"d6226325-c4d9-497e-8d19-a71adc66c5ac\") " pod="openshift-ovn-kubernetes/ovnkube-node-h8fwp" Mar 13 12:53:52.919485 master-0 kubenswrapper[28149]: I0313 12:53:52.918214 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/676b054a-e76f-425d-a6ff-3f1bea8b523e-etc-ssl-certs\") pod \"cluster-version-operator-8c9c967c7-98tv2\" (UID: \"676b054a-e76f-425d-a6ff-3f1bea8b523e\") " pod="openshift-cluster-version/cluster-version-operator-8c9c967c7-98tv2" Mar 13 12:53:52.919485 master-0 kubenswrapper[28149]: I0313 12:53:52.918221 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/ce3a655a-0684-4bc5-ac36-5878507537c7-hostroot\") pod \"multus-bnn7n\" (UID: \"ce3a655a-0684-4bc5-ac36-5878507537c7\") " pod="openshift-multus/multus-bnn7n" Mar 13 12:53:52.919485 master-0 kubenswrapper[28149]: I0313 12:53:52.918248 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/676b054a-e76f-425d-a6ff-3f1bea8b523e-etc-ssl-certs\") pod \"cluster-version-operator-8c9c967c7-98tv2\" (UID: \"676b054a-e76f-425d-a6ff-3f1bea8b523e\") " pod="openshift-cluster-version/cluster-version-operator-8c9c967c7-98tv2" Mar 13 12:53:52.919485 master-0 kubenswrapper[28149]: I0313 12:53:52.918254 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/d6226325-c4d9-497e-8d19-a71adc66c5ac-host-cni-netd\") pod \"ovnkube-node-h8fwp\" (UID: \"d6226325-c4d9-497e-8d19-a71adc66c5ac\") " pod="openshift-ovn-kubernetes/ovnkube-node-h8fwp" Mar 13 12:53:52.919485 master-0 kubenswrapper[28149]: I0313 
12:53:52.918278 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/ce3a655a-0684-4bc5-ac36-5878507537c7-multus-conf-dir\") pod \"multus-bnn7n\" (UID: \"ce3a655a-0684-4bc5-ac36-5878507537c7\") " pod="openshift-multus/multus-bnn7n" Mar 13 12:53:52.919485 master-0 kubenswrapper[28149]: I0313 12:53:52.918277 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/d6226325-c4d9-497e-8d19-a71adc66c5ac-run-openvswitch\") pod \"ovnkube-node-h8fwp\" (UID: \"d6226325-c4d9-497e-8d19-a71adc66c5ac\") " pod="openshift-ovn-kubernetes/ovnkube-node-h8fwp" Mar 13 12:53:52.919485 master-0 kubenswrapper[28149]: I0313 12:53:52.918304 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/152689b1-5875-4a9a-bb25-bee858523168-cnibin\") pod \"multus-additional-cni-plugins-78p2k\" (UID: \"152689b1-5875-4a9a-bb25-bee858523168\") " pod="openshift-multus/multus-additional-cni-plugins-78p2k" Mar 13 12:53:52.919485 master-0 kubenswrapper[28149]: I0313 12:53:52.918335 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/d6226325-c4d9-497e-8d19-a71adc66c5ac-run-ovn\") pod \"ovnkube-node-h8fwp\" (UID: \"d6226325-c4d9-497e-8d19-a71adc66c5ac\") " pod="openshift-ovn-kubernetes/ovnkube-node-h8fwp" Mar 13 12:53:52.919485 master-0 kubenswrapper[28149]: I0313 12:53:52.918355 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/185a10f7-2a4b-4171-b10d-4614cb8671bd-kubelet-dir\") pod \"installer-4-master-0\" (UID: \"185a10f7-2a4b-4171-b10d-4614cb8671bd\") " pod="openshift-kube-apiserver/installer-4-master-0" Mar 13 12:53:52.919485 master-0 kubenswrapper[28149]: I0313 12:53:52.918377 28149 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/676b054a-e76f-425d-a6ff-3f1bea8b523e-etc-cvo-updatepayloads\") pod \"cluster-version-operator-8c9c967c7-98tv2\" (UID: \"676b054a-e76f-425d-a6ff-3f1bea8b523e\") " pod="openshift-cluster-version/cluster-version-operator-8c9c967c7-98tv2" Mar 13 12:53:52.919485 master-0 kubenswrapper[28149]: I0313 12:53:52.918402 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/f83e0d3e-1f73-4727-8ee3-375cbb9e36f8-host\") pod \"tuned-6tlzf\" (UID: \"f83e0d3e-1f73-4727-8ee3-375cbb9e36f8\") " pod="openshift-cluster-node-tuning-operator/tuned-6tlzf" Mar 13 12:53:52.919485 master-0 kubenswrapper[28149]: I0313 12:53:52.918421 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/d6226325-c4d9-497e-8d19-a71adc66c5ac-host-cni-netd\") pod \"ovnkube-node-h8fwp\" (UID: \"d6226325-c4d9-497e-8d19-a71adc66c5ac\") " pod="openshift-ovn-kubernetes/ovnkube-node-h8fwp" Mar 13 12:53:52.919485 master-0 kubenswrapper[28149]: I0313 12:53:52.918441 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/152689b1-5875-4a9a-bb25-bee858523168-system-cni-dir\") pod \"multus-additional-cni-plugins-78p2k\" (UID: \"152689b1-5875-4a9a-bb25-bee858523168\") " pod="openshift-multus/multus-additional-cni-plugins-78p2k" Mar 13 12:53:52.919485 master-0 kubenswrapper[28149]: I0313 12:53:52.918445 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/ce3a655a-0684-4bc5-ac36-5878507537c7-multus-socket-dir-parent\") pod \"multus-bnn7n\" (UID: \"ce3a655a-0684-4bc5-ac36-5878507537c7\") " pod="openshift-multus/multus-bnn7n" Mar 13 12:53:52.919485 master-0 kubenswrapper[28149]: I0313 12:53:52.918468 28149 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/4dd0fc2f-f2ee-4447-a747-04a178288cf0-host-etc-kube\") pod \"network-operator-7c649bf6d4-kh6n9\" (UID: \"4dd0fc2f-f2ee-4447-a747-04a178288cf0\") " pod="openshift-network-operator/network-operator-7c649bf6d4-kh6n9" Mar 13 12:53:52.919485 master-0 kubenswrapper[28149]: I0313 12:53:52.918492 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/2f48243b-6b05-4efa-8420-58a4419622bf-node-pullsecrets\") pod \"apiserver-844bc54c88-vznst\" (UID: \"2f48243b-6b05-4efa-8420-58a4419622bf\") " pod="openshift-apiserver/apiserver-844bc54c88-vznst" Mar 13 12:53:52.919485 master-0 kubenswrapper[28149]: I0313 12:53:52.918501 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/152689b1-5875-4a9a-bb25-bee858523168-tuning-conf-dir\") pod \"multus-additional-cni-plugins-78p2k\" (UID: \"152689b1-5875-4a9a-bb25-bee858523168\") " pod="openshift-multus/multus-additional-cni-plugins-78p2k" Mar 13 12:53:52.919485 master-0 kubenswrapper[28149]: I0313 12:53:52.918529 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/13710582-eac3-42e5-b28a-8b4fd3030af2-hosts-file\") pod \"node-resolver-xpz47\" (UID: \"13710582-eac3-42e5-b28a-8b4fd3030af2\") " pod="openshift-dns/node-resolver-xpz47" Mar 13 12:53:52.919485 master-0 kubenswrapper[28149]: I0313 12:53:52.918534 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/d6226325-c4d9-497e-8d19-a71adc66c5ac-host-cni-bin\") pod \"ovnkube-node-h8fwp\" (UID: \"d6226325-c4d9-497e-8d19-a71adc66c5ac\") " pod="openshift-ovn-kubernetes/ovnkube-node-h8fwp" Mar 13 12:53:52.919485 master-0 kubenswrapper[28149]: I0313 
12:53:52.918574 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/ce3a655a-0684-4bc5-ac36-5878507537c7-multus-socket-dir-parent\") pod \"multus-bnn7n\" (UID: \"ce3a655a-0684-4bc5-ac36-5878507537c7\") " pod="openshift-multus/multus-bnn7n" Mar 13 12:53:52.919485 master-0 kubenswrapper[28149]: I0313 12:53:52.918601 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/f83e0d3e-1f73-4727-8ee3-375cbb9e36f8-var-lib-kubelet\") pod \"tuned-6tlzf\" (UID: \"f83e0d3e-1f73-4727-8ee3-375cbb9e36f8\") " pod="openshift-cluster-node-tuning-operator/tuned-6tlzf" Mar 13 12:53:52.919485 master-0 kubenswrapper[28149]: I0313 12:53:52.918630 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/2f48243b-6b05-4efa-8420-58a4419622bf-node-pullsecrets\") pod \"apiserver-844bc54c88-vznst\" (UID: \"2f48243b-6b05-4efa-8420-58a4419622bf\") " pod="openshift-apiserver/apiserver-844bc54c88-vznst" Mar 13 12:53:52.919485 master-0 kubenswrapper[28149]: I0313 12:53:52.918662 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/d6226325-c4d9-497e-8d19-a71adc66c5ac-host-run-ovn-kubernetes\") pod \"ovnkube-node-h8fwp\" (UID: \"d6226325-c4d9-497e-8d19-a71adc66c5ac\") " pod="openshift-ovn-kubernetes/ovnkube-node-h8fwp" Mar 13 12:53:52.919485 master-0 kubenswrapper[28149]: I0313 12:53:52.918668 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-sysctl-conf\" (UniqueName: \"kubernetes.io/host-path/f83e0d3e-1f73-4727-8ee3-375cbb9e36f8-etc-sysctl-conf\") pod \"tuned-6tlzf\" (UID: \"f83e0d3e-1f73-4727-8ee3-375cbb9e36f8\") " pod="openshift-cluster-node-tuning-operator/tuned-6tlzf" Mar 13 12:53:52.919485 master-0 kubenswrapper[28149]: I0313 
12:53:52.918717 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f83e0d3e-1f73-4727-8ee3-375cbb9e36f8-lib-modules\") pod \"tuned-6tlzf\" (UID: \"f83e0d3e-1f73-4727-8ee3-375cbb9e36f8\") " pod="openshift-cluster-node-tuning-operator/tuned-6tlzf" Mar 13 12:53:52.919485 master-0 kubenswrapper[28149]: I0313 12:53:52.918733 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/152689b1-5875-4a9a-bb25-bee858523168-os-release\") pod \"multus-additional-cni-plugins-78p2k\" (UID: \"152689b1-5875-4a9a-bb25-bee858523168\") " pod="openshift-multus/multus-additional-cni-plugins-78p2k" Mar 13 12:53:52.919485 master-0 kubenswrapper[28149]: I0313 12:53:52.918775 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-sysctl-d\" (UniqueName: \"kubernetes.io/host-path/f83e0d3e-1f73-4727-8ee3-375cbb9e36f8-etc-sysctl-d\") pod \"tuned-6tlzf\" (UID: \"f83e0d3e-1f73-4727-8ee3-375cbb9e36f8\") " pod="openshift-cluster-node-tuning-operator/tuned-6tlzf" Mar 13 12:53:52.919485 master-0 kubenswrapper[28149]: I0313 12:53:52.918790 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/d6226325-c4d9-497e-8d19-a71adc66c5ac-host-run-ovn-kubernetes\") pod \"ovnkube-node-h8fwp\" (UID: \"d6226325-c4d9-497e-8d19-a71adc66c5ac\") " pod="openshift-ovn-kubernetes/ovnkube-node-h8fwp" Mar 13 12:53:52.919485 master-0 kubenswrapper[28149]: I0313 12:53:52.918804 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d6226325-c4d9-497e-8d19-a71adc66c5ac-host-slash\") pod \"ovnkube-node-h8fwp\" (UID: \"d6226325-c4d9-497e-8d19-a71adc66c5ac\") " pod="openshift-ovn-kubernetes/ovnkube-node-h8fwp" Mar 13 12:53:52.919485 master-0 kubenswrapper[28149]: I0313 
12:53:52.918823 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/d6226325-c4d9-497e-8d19-a71adc66c5ac-log-socket\") pod \"ovnkube-node-h8fwp\" (UID: \"d6226325-c4d9-497e-8d19-a71adc66c5ac\") " pod="openshift-ovn-kubernetes/ovnkube-node-h8fwp" Mar 13 12:53:52.919485 master-0 kubenswrapper[28149]: I0313 12:53:52.918862 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/ce3a655a-0684-4bc5-ac36-5878507537c7-host-run-k8s-cni-cncf-io\") pod \"multus-bnn7n\" (UID: \"ce3a655a-0684-4bc5-ac36-5878507537c7\") " pod="openshift-multus/multus-bnn7n" Mar 13 12:53:52.919485 master-0 kubenswrapper[28149]: I0313 12:53:52.918851 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/152689b1-5875-4a9a-bb25-bee858523168-os-release\") pod \"multus-additional-cni-plugins-78p2k\" (UID: \"152689b1-5875-4a9a-bb25-bee858523168\") " pod="openshift-multus/multus-additional-cni-plugins-78p2k" Mar 13 12:53:52.919485 master-0 kubenswrapper[28149]: I0313 12:53:52.918881 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/185a10f7-2a4b-4171-b10d-4614cb8671bd-var-lock\") pod \"installer-4-master-0\" (UID: \"185a10f7-2a4b-4171-b10d-4614cb8671bd\") " pod="openshift-kube-apiserver/installer-4-master-0" Mar 13 12:53:52.919485 master-0 kubenswrapper[28149]: I0313 12:53:52.918902 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-docker\" (UniqueName: \"kubernetes.io/host-path/00ebdf06-1f44-40cd-87e5-54195188b6d4-etc-docker\") pod \"catalogd-controller-manager-7f8b8b6f4c-8fjzg\" (UID: \"00ebdf06-1f44-40cd-87e5-54195188b6d4\") " pod="openshift-catalogd/catalogd-controller-manager-7f8b8b6f4c-8fjzg" Mar 13 12:53:52.919485 master-0 
kubenswrapper[28149]: I0313 12:53:52.918961 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f83e0d3e-1f73-4727-8ee3-375cbb9e36f8-lib-modules\") pod \"tuned-6tlzf\" (UID: \"f83e0d3e-1f73-4727-8ee3-375cbb9e36f8\") " pod="openshift-cluster-node-tuning-operator/tuned-6tlzf" Mar 13 12:53:52.919485 master-0 kubenswrapper[28149]: I0313 12:53:52.918982 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-sysconfig\" (UniqueName: \"kubernetes.io/host-path/f83e0d3e-1f73-4727-8ee3-375cbb9e36f8-etc-sysconfig\") pod \"tuned-6tlzf\" (UID: \"f83e0d3e-1f73-4727-8ee3-375cbb9e36f8\") " pod="openshift-cluster-node-tuning-operator/tuned-6tlzf" Mar 13 12:53:52.919485 master-0 kubenswrapper[28149]: I0313 12:53:52.919007 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/ce3a655a-0684-4bc5-ac36-5878507537c7-host-run-k8s-cni-cncf-io\") pod \"multus-bnn7n\" (UID: \"ce3a655a-0684-4bc5-ac36-5878507537c7\") " pod="openshift-multus/multus-bnn7n" Mar 13 12:53:52.919485 master-0 kubenswrapper[28149]: I0313 12:53:52.919022 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-docker\" (UniqueName: \"kubernetes.io/host-path/915aabfe-1071-4bfc-b291-424304dfe7d8-etc-docker\") pod \"operator-controller-controller-manager-6598bfb6c4-dv8rj\" (UID: \"915aabfe-1071-4bfc-b291-424304dfe7d8\") " pod="openshift-operator-controller/operator-controller-controller-manager-6598bfb6c4-dv8rj" Mar 13 12:53:52.919485 master-0 kubenswrapper[28149]: I0313 12:53:52.919046 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/f83e0d3e-1f73-4727-8ee3-375cbb9e36f8-run\") pod \"tuned-6tlzf\" (UID: \"f83e0d3e-1f73-4727-8ee3-375cbb9e36f8\") " pod="openshift-cluster-node-tuning-operator/tuned-6tlzf" Mar 13 
12:53:52.919485 master-0 kubenswrapper[28149]: I0313 12:53:52.919067 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d6226325-c4d9-497e-8d19-a71adc66c5ac-host-slash\") pod \"ovnkube-node-h8fwp\" (UID: \"d6226325-c4d9-497e-8d19-a71adc66c5ac\") " pod="openshift-ovn-kubernetes/ovnkube-node-h8fwp" Mar 13 12:53:52.919485 master-0 kubenswrapper[28149]: I0313 12:53:52.919182 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-docker\" (UniqueName: \"kubernetes.io/host-path/915aabfe-1071-4bfc-b291-424304dfe7d8-etc-docker\") pod \"operator-controller-controller-manager-6598bfb6c4-dv8rj\" (UID: \"915aabfe-1071-4bfc-b291-424304dfe7d8\") " pod="openshift-operator-controller/operator-controller-controller-manager-6598bfb6c4-dv8rj" Mar 13 12:53:52.919485 master-0 kubenswrapper[28149]: I0313 12:53:52.919182 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-docker\" (UniqueName: \"kubernetes.io/host-path/00ebdf06-1f44-40cd-87e5-54195188b6d4-etc-docker\") pod \"catalogd-controller-manager-7f8b8b6f4c-8fjzg\" (UID: \"00ebdf06-1f44-40cd-87e5-54195188b6d4\") " pod="openshift-catalogd/catalogd-controller-manager-7f8b8b6f4c-8fjzg" Mar 13 12:53:52.919485 master-0 kubenswrapper[28149]: I0313 12:53:52.919201 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/d6226325-c4d9-497e-8d19-a71adc66c5ac-log-socket\") pod \"ovnkube-node-h8fwp\" (UID: \"d6226325-c4d9-497e-8d19-a71adc66c5ac\") " pod="openshift-ovn-kubernetes/ovnkube-node-h8fwp" Mar 13 12:53:52.919485 master-0 kubenswrapper[28149]: I0313 12:53:52.919205 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-sysconfig\" (UniqueName: \"kubernetes.io/host-path/f83e0d3e-1f73-4727-8ee3-375cbb9e36f8-etc-sysconfig\") pod \"tuned-6tlzf\" (UID: \"f83e0d3e-1f73-4727-8ee3-375cbb9e36f8\") " 
pod="openshift-cluster-node-tuning-operator/tuned-6tlzf" Mar 13 12:53:52.919485 master-0 kubenswrapper[28149]: I0313 12:53:52.919047 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-sysctl-d\" (UniqueName: \"kubernetes.io/host-path/f83e0d3e-1f73-4727-8ee3-375cbb9e36f8-etc-sysctl-d\") pod \"tuned-6tlzf\" (UID: \"f83e0d3e-1f73-4727-8ee3-375cbb9e36f8\") " pod="openshift-cluster-node-tuning-operator/tuned-6tlzf" Mar 13 12:53:52.919485 master-0 kubenswrapper[28149]: I0313 12:53:52.919217 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/185a10f7-2a4b-4171-b10d-4614cb8671bd-var-lock\") pod \"installer-4-master-0\" (UID: \"185a10f7-2a4b-4171-b10d-4614cb8671bd\") " pod="openshift-kube-apiserver/installer-4-master-0" Mar 13 12:53:52.919485 master-0 kubenswrapper[28149]: I0313 12:53:52.919244 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run\" (UniqueName: \"kubernetes.io/host-path/f83e0d3e-1f73-4727-8ee3-375cbb9e36f8-run\") pod \"tuned-6tlzf\" (UID: \"f83e0d3e-1f73-4727-8ee3-375cbb9e36f8\") " pod="openshift-cluster-node-tuning-operator/tuned-6tlzf" Mar 13 12:53:52.965936 master-0 kubenswrapper[28149]: I0313 12:53:52.964640 28149 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-controller"/"openshift-service-ca.crt" Mar 13 12:53:52.965936 master-0 kubenswrapper[28149]: I0313 12:53:52.964910 28149 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-controller"/"kube-root-ca.crt" Mar 13 12:53:52.974157 master-0 kubenswrapper[28149]: I0313 12:53:52.968206 28149 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-m9n95" Mar 13 12:53:52.982154 master-0 kubenswrapper[28149]: I0313 12:53:52.979475 28149 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-operator-controller"/"operator-controller-trusted-ca-bundle" Mar 13 12:53:52.995542 master-0 kubenswrapper[28149]: I0313 12:53:52.995495 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-certs\" (UniqueName: \"kubernetes.io/projected/915aabfe-1071-4bfc-b291-424304dfe7d8-ca-certs\") pod \"operator-controller-controller-manager-6598bfb6c4-dv8rj\" (UID: \"915aabfe-1071-4bfc-b291-424304dfe7d8\") " pod="openshift-operator-controller/operator-controller-controller-manager-6598bfb6c4-dv8rj" Mar 13 12:53:52.998484 master-0 kubenswrapper[28149]: I0313 12:53:52.996625 28149 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-catalogd"/"openshift-service-ca.crt" Mar 13 12:53:53.009172 master-0 kubenswrapper[28149]: I0313 12:53:53.008450 28149 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Mar 13 12:53:53.027210 master-0 kubenswrapper[28149]: I0313 12:53:53.025450 28149 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-76p65" Mar 13 12:53:53.048413 master-0 kubenswrapper[28149]: I0313 12:53:53.047519 28149 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Mar 13 12:53:53.048413 master-0 kubenswrapper[28149]: I0313 12:53:53.047723 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/d5f63b6b-990a-444b-a954-d718036f2f6c-machine-api-operator-tls\") pod \"machine-api-operator-84bf6db4f9-mjxcz\" (UID: \"d5f63b6b-990a-444b-a954-d718036f2f6c\") " pod="openshift-machine-api/machine-api-operator-84bf6db4f9-mjxcz" Mar 13 12:53:53.066378 master-0 kubenswrapper[28149]: I0313 12:53:53.066338 28149 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Mar 13 12:53:53.067926 master-0 
kubenswrapper[28149]: I0313 12:53:53.067882 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d5f63b6b-990a-444b-a954-d718036f2f6c-config\") pod \"machine-api-operator-84bf6db4f9-mjxcz\" (UID: \"d5f63b6b-990a-444b-a954-d718036f2f6c\") " pod="openshift-machine-api/machine-api-operator-84bf6db4f9-mjxcz" Mar 13 12:53:53.085353 master-0 kubenswrapper[28149]: I0313 12:53:53.085310 28149 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Mar 13 12:53:53.091162 master-0 kubenswrapper[28149]: I0313 12:53:53.091097 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/d5f63b6b-990a-444b-a954-d718036f2f6c-images\") pod \"machine-api-operator-84bf6db4f9-mjxcz\" (UID: \"d5f63b6b-990a-444b-a954-d718036f2f6c\") " pod="openshift-machine-api/machine-api-operator-84bf6db4f9-mjxcz" Mar 13 12:53:53.108128 master-0 kubenswrapper[28149]: I0313 12:53:53.108091 28149 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-master-0_48512e02022680c9d90092634f0fc146/kube-apiserver-check-endpoints/0.log" Mar 13 12:53:53.111218 master-0 kubenswrapper[28149]: I0313 12:53:53.111190 28149 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-4-master-0" Mar 13 12:53:53.111505 master-0 kubenswrapper[28149]: I0313 12:53:53.111480 28149 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 13 12:53:53.117006 master-0 kubenswrapper[28149]: I0313 12:53:53.116571 28149 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-catalogd"/"catalogd-trusted-ca-bundle" Mar 13 12:53:53.120552 master-0 kubenswrapper[28149]: I0313 12:53:53.120519 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-certs\" (UniqueName: \"kubernetes.io/projected/00ebdf06-1f44-40cd-87e5-54195188b6d4-ca-certs\") pod \"catalogd-controller-manager-7f8b8b6f4c-8fjzg\" (UID: \"00ebdf06-1f44-40cd-87e5-54195188b6d4\") " pod="openshift-catalogd/catalogd-controller-manager-7f8b8b6f4c-8fjzg" Mar 13 12:53:53.127164 master-0 kubenswrapper[28149]: I0313 12:53:53.126578 28149 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Mar 13 12:53:53.133348 master-0 kubenswrapper[28149]: I0313 12:53:53.132773 28149 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-4-master-0" Mar 13 12:53:53.144995 master-0 kubenswrapper[28149]: I0313 12:53:53.144954 28149 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy-cluster-autoscaler-operator" Mar 13 12:53:53.154363 master-0 kubenswrapper[28149]: I0313 12:53:53.154286 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/d44112d1-b2a5-4b8d-b74d-1e91638508d5-auth-proxy-config\") pod \"cluster-autoscaler-operator-69576476f7-sqndx\" (UID: \"d44112d1-b2a5-4b8d-b74d-1e91638508d5\") " pod="openshift-machine-api/cluster-autoscaler-operator-69576476f7-sqndx" Mar 13 12:53:53.165691 master-0 kubenswrapper[28149]: I0313 12:53:53.165643 28149 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-qvxhm" Mar 13 12:53:53.185824 master-0 kubenswrapper[28149]: I0313 12:53:53.185782 28149 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"cluster-autoscaler-operator-dockercfg-4jl9c" Mar 13 12:53:53.206252 master-0 kubenswrapper[28149]: I0313 12:53:53.206192 28149 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"cluster-autoscaler-operator-cert" Mar 13 12:53:53.216899 master-0 kubenswrapper[28149]: I0313 12:53:53.216789 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/d44112d1-b2a5-4b8d-b74d-1e91638508d5-cert\") pod \"cluster-autoscaler-operator-69576476f7-sqndx\" (UID: \"d44112d1-b2a5-4b8d-b74d-1e91638508d5\") " pod="openshift-machine-api/cluster-autoscaler-operator-69576476f7-sqndx" Mar 13 12:53:53.226533 master-0 kubenswrapper[28149]: I0313 12:53:53.226461 28149 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Mar 13 
12:53:53.231567 master-0 kubenswrapper[28149]: I0313 12:53:53.231520 28149 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/185a10f7-2a4b-4171-b10d-4614cb8671bd-kubelet-dir\") pod \"185a10f7-2a4b-4171-b10d-4614cb8671bd\" (UID: \"185a10f7-2a4b-4171-b10d-4614cb8671bd\") " Mar 13 12:53:53.231708 master-0 kubenswrapper[28149]: I0313 12:53:53.231625 28149 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/185a10f7-2a4b-4171-b10d-4614cb8671bd-var-lock\") pod \"185a10f7-2a4b-4171-b10d-4614cb8671bd\" (UID: \"185a10f7-2a4b-4171-b10d-4614cb8671bd\") " Mar 13 12:53:53.232350 master-0 kubenswrapper[28149]: I0313 12:53:53.232303 28149 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/185a10f7-2a4b-4171-b10d-4614cb8671bd-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "185a10f7-2a4b-4171-b10d-4614cb8671bd" (UID: "185a10f7-2a4b-4171-b10d-4614cb8671bd"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 13 12:53:53.232521 master-0 kubenswrapper[28149]: I0313 12:53:53.232493 28149 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/185a10f7-2a4b-4171-b10d-4614cb8671bd-var-lock" (OuterVolumeSpecName: "var-lock") pod "185a10f7-2a4b-4171-b10d-4614cb8671bd" (UID: "185a10f7-2a4b-4171-b10d-4614cb8671bd"). InnerVolumeSpecName "var-lock". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 13 12:53:53.232767 master-0 kubenswrapper[28149]: I0313 12:53:53.232742 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/747659a6-4a1e-43ed-bb8e-36da6e63b5a1-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-6686554ddc-btz8w\" (UID: \"747659a6-4a1e-43ed-bb8e-36da6e63b5a1\") " pod="openshift-machine-api/control-plane-machine-set-operator-6686554ddc-btz8w" Mar 13 12:53:53.233186 master-0 kubenswrapper[28149]: I0313 12:53:53.233166 28149 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/185a10f7-2a4b-4171-b10d-4614cb8671bd-var-lock\") on node \"master-0\" DevicePath \"\"" Mar 13 12:53:53.233222 master-0 kubenswrapper[28149]: I0313 12:53:53.233187 28149 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/185a10f7-2a4b-4171-b10d-4614cb8671bd-kubelet-dir\") on node \"master-0\" DevicePath \"\"" Mar 13 12:53:53.246254 master-0 kubenswrapper[28149]: I0313 12:53:53.246211 28149 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Mar 13 12:53:53.248222 master-0 kubenswrapper[28149]: I0313 12:53:53.248186 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/089cfabc-9d3d-4260-bb16-8b5eaf73b3fa-serving-cert\") pod \"openshift-apiserver-operator-799b6db4d7-xchrj\" (UID: \"089cfabc-9d3d-4260-bb16-8b5eaf73b3fa\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-799b6db4d7-xchrj" Mar 13 12:53:53.265568 master-0 kubenswrapper[28149]: I0313 12:53:53.265490 28149 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-r2lqd" Mar 13 
12:53:53.286079 master-0 kubenswrapper[28149]: I0313 12:53:53.286020 28149 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Mar 13 12:53:53.294656 master-0 kubenswrapper[28149]: I0313 12:53:53.294606 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/089cfabc-9d3d-4260-bb16-8b5eaf73b3fa-config\") pod \"openshift-apiserver-operator-799b6db4d7-xchrj\" (UID: \"089cfabc-9d3d-4260-bb16-8b5eaf73b3fa\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-799b6db4d7-xchrj" Mar 13 12:53:53.305456 master-0 kubenswrapper[28149]: I0313 12:53:53.305393 28149 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Mar 13 12:53:53.326611 master-0 kubenswrapper[28149]: I0313 12:53:53.326551 28149 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Mar 13 12:53:53.351994 master-0 kubenswrapper[28149]: I0313 12:53:53.351935 28149 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Mar 13 12:53:53.355758 master-0 kubenswrapper[28149]: I0313 12:53:53.355707 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/d3d998ee-b26f-4e30-83bc-f94f8c68060a-marketplace-trusted-ca\") pod \"marketplace-operator-64bf9778cb-7qhr4\" (UID: \"d3d998ee-b26f-4e30-83bc-f94f8c68060a\") " pod="openshift-marketplace/marketplace-operator-64bf9778cb-7qhr4" Mar 13 12:53:53.366822 master-0 kubenswrapper[28149]: I0313 12:53:53.366761 28149 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Mar 13 12:53:53.386182 master-0 kubenswrapper[28149]: I0313 12:53:53.386125 28149 reflector.go:368] Caches populated for 
*v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Mar 13 12:53:53.406097 master-0 kubenswrapper[28149]: I0313 12:53:53.406046 28149 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Mar 13 12:53:53.409928 master-0 kubenswrapper[28149]: I0313 12:53:53.409887 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/676b054a-e76f-425d-a6ff-3f1bea8b523e-serving-cert\") pod \"cluster-version-operator-8c9c967c7-98tv2\" (UID: \"676b054a-e76f-425d-a6ff-3f1bea8b523e\") " pod="openshift-cluster-version/cluster-version-operator-8c9c967c7-98tv2" Mar 13 12:53:53.428154 master-0 kubenswrapper[28149]: I0313 12:53:53.428094 28149 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Mar 13 12:53:53.429314 master-0 kubenswrapper[28149]: I0313 12:53:53.429286 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/676b054a-e76f-425d-a6ff-3f1bea8b523e-service-ca\") pod \"cluster-version-operator-8c9c967c7-98tv2\" (UID: \"676b054a-e76f-425d-a6ff-3f1bea8b523e\") " pod="openshift-cluster-version/cluster-version-operator-8c9c967c7-98tv2" Mar 13 12:53:53.445862 master-0 kubenswrapper[28149]: I0313 12:53:53.445800 28149 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Mar 13 12:53:53.465849 master-0 kubenswrapper[28149]: I0313 12:53:53.465805 28149 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Mar 13 12:53:53.473290 master-0 kubenswrapper[28149]: I0313 12:53:53.473190 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/ef42b65e-2d92-46ac-baaf-30e213787781-metrics-tls\") pod \"dns-default-m7k6m\" (UID: 
\"ef42b65e-2d92-46ac-baaf-30e213787781\") " pod="openshift-dns/dns-default-m7k6m" Mar 13 12:53:53.486963 master-0 kubenswrapper[28149]: I0313 12:53:53.486924 28149 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Mar 13 12:53:53.506615 master-0 kubenswrapper[28149]: I0313 12:53:53.506553 28149 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Mar 13 12:53:53.513895 master-0 kubenswrapper[28149]: I0313 12:53:53.513809 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ef42b65e-2d92-46ac-baaf-30e213787781-config-volume\") pod \"dns-default-m7k6m\" (UID: \"ef42b65e-2d92-46ac-baaf-30e213787781\") " pod="openshift-dns/dns-default-m7k6m" Mar 13 12:53:53.525570 master-0 kubenswrapper[28149]: I0313 12:53:53.525509 28149 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Mar 13 12:53:53.527010 master-0 kubenswrapper[28149]: I0313 12:53:53.526976 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/45925a5e-41ae-4c19-b586-3151c7677612-service-ca-bundle\") pod \"router-default-79f8cd6fdd-wtf6j\" (UID: \"45925a5e-41ae-4c19-b586-3151c7677612\") " pod="openshift-ingress/router-default-79f8cd6fdd-wtf6j" Mar 13 12:53:53.545966 master-0 kubenswrapper[28149]: I0313 12:53:53.545926 28149 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Mar 13 12:53:53.565541 master-0 kubenswrapper[28149]: I0313 12:53:53.565483 28149 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Mar 13 12:53:53.572958 master-0 kubenswrapper[28149]: I0313 12:53:53.572914 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-certificate\" (UniqueName: 
\"kubernetes.io/secret/45925a5e-41ae-4c19-b586-3151c7677612-default-certificate\") pod \"router-default-79f8cd6fdd-wtf6j\" (UID: \"45925a5e-41ae-4c19-b586-3151c7677612\") " pod="openshift-ingress/router-default-79f8cd6fdd-wtf6j" Mar 13 12:53:53.585435 master-0 kubenswrapper[28149]: I0313 12:53:53.585386 28149 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Mar 13 12:53:53.586702 master-0 kubenswrapper[28149]: I0313 12:53:53.586674 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/45925a5e-41ae-4c19-b586-3151c7677612-stats-auth\") pod \"router-default-79f8cd6fdd-wtf6j\" (UID: \"45925a5e-41ae-4c19-b586-3151c7677612\") " pod="openshift-ingress/router-default-79f8cd6fdd-wtf6j" Mar 13 12:53:53.606075 master-0 kubenswrapper[28149]: I0313 12:53:53.606004 28149 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Mar 13 12:53:53.611114 master-0 kubenswrapper[28149]: I0313 12:53:53.611059 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/45925a5e-41ae-4c19-b586-3151c7677612-metrics-certs\") pod \"router-default-79f8cd6fdd-wtf6j\" (UID: \"45925a5e-41ae-4c19-b586-3151c7677612\") " pod="openshift-ingress/router-default-79f8cd6fdd-wtf6j" Mar 13 12:53:53.625534 master-0 kubenswrapper[28149]: I0313 12:53:53.625484 28149 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Mar 13 12:53:53.636003 master-0 kubenswrapper[28149]: I0313 12:53:53.635951 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/c4477be6-bcff-407a-8033-b005e19bf5d6-encryption-config\") pod \"apiserver-787dbf5bb9-5645n\" (UID: \"c4477be6-bcff-407a-8033-b005e19bf5d6\") " 
pod="openshift-oauth-apiserver/apiserver-787dbf5bb9-5645n" Mar 13 12:53:53.645361 master-0 kubenswrapper[28149]: I0313 12:53:53.645267 28149 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Mar 13 12:53:53.649415 master-0 kubenswrapper[28149]: I0313 12:53:53.649373 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/d6226325-c4d9-497e-8d19-a71adc66c5ac-ovn-node-metrics-cert\") pod \"ovnkube-node-h8fwp\" (UID: \"d6226325-c4d9-497e-8d19-a71adc66c5ac\") " pod="openshift-ovn-kubernetes/ovnkube-node-h8fwp" Mar 13 12:53:53.665534 master-0 kubenswrapper[28149]: I0313 12:53:53.665476 28149 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-bb7kx" Mar 13 12:53:53.684468 master-0 kubenswrapper[28149]: I0313 12:53:53.684408 28149 request.go:700] Waited for 1.004570187s due to client-side throttling, not priority and fairness, request: GET:https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-oauth-apiserver/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&limit=500&resourceVersion=0 Mar 13 12:53:53.686197 master-0 kubenswrapper[28149]: I0313 12:53:53.686166 28149 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Mar 13 12:53:53.752478 master-0 kubenswrapper[28149]: I0313 12:53:53.752378 28149 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Mar 13 12:53:53.753773 master-0 kubenswrapper[28149]: I0313 12:53:53.753735 28149 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Mar 13 12:53:53.753839 master-0 kubenswrapper[28149]: I0313 12:53:53.753735 28149 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Mar 13 
12:53:53.757294 master-0 kubenswrapper[28149]: I0313 12:53:53.757237 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/c4477be6-bcff-407a-8033-b005e19bf5d6-audit-policies\") pod \"apiserver-787dbf5bb9-5645n\" (UID: \"c4477be6-bcff-407a-8033-b005e19bf5d6\") " pod="openshift-oauth-apiserver/apiserver-787dbf5bb9-5645n" Mar 13 12:53:53.759623 master-0 kubenswrapper[28149]: I0313 12:53:53.759579 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/c4477be6-bcff-407a-8033-b005e19bf5d6-etcd-client\") pod \"apiserver-787dbf5bb9-5645n\" (UID: \"c4477be6-bcff-407a-8033-b005e19bf5d6\") " pod="openshift-oauth-apiserver/apiserver-787dbf5bb9-5645n" Mar 13 12:53:53.768766 master-0 kubenswrapper[28149]: I0313 12:53:53.768714 28149 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Mar 13 12:53:53.775112 master-0 kubenswrapper[28149]: I0313 12:53:53.775078 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/c4477be6-bcff-407a-8033-b005e19bf5d6-etcd-serving-ca\") pod \"apiserver-787dbf5bb9-5645n\" (UID: \"c4477be6-bcff-407a-8033-b005e19bf5d6\") " pod="openshift-oauth-apiserver/apiserver-787dbf5bb9-5645n" Mar 13 12:53:53.785941 master-0 kubenswrapper[28149]: I0313 12:53:53.785898 28149 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Mar 13 12:53:53.795013 master-0 kubenswrapper[28149]: I0313 12:53:53.794967 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c4477be6-bcff-407a-8033-b005e19bf5d6-trusted-ca-bundle\") pod \"apiserver-787dbf5bb9-5645n\" (UID: \"c4477be6-bcff-407a-8033-b005e19bf5d6\") " 
pod="openshift-oauth-apiserver/apiserver-787dbf5bb9-5645n" Mar 13 12:53:53.801902 master-0 kubenswrapper[28149]: E0313 12:53:53.801850 28149 configmap.go:193] Couldn't get configMap openshift-monitoring/kube-state-metrics-custom-resource-state-configmap: failed to sync configmap cache: timed out waiting for the condition Mar 13 12:53:53.801902 master-0 kubenswrapper[28149]: E0313 12:53:53.801884 28149 secret.go:189] Couldn't get secret openshift-monitoring/node-exporter-tls: failed to sync secret cache: timed out waiting for the condition Mar 13 12:53:53.802121 master-0 kubenswrapper[28149]: E0313 12:53:53.801924 28149 configmap.go:193] Couldn't get configMap openshift-machine-config-operator/machine-config-operator-images: failed to sync configmap cache: timed out waiting for the condition Mar 13 12:53:53.802121 master-0 kubenswrapper[28149]: E0313 12:53:53.801977 28149 configmap.go:193] Couldn't get configMap openshift-cloud-controller-manager-operator/kube-rbac-proxy: failed to sync configmap cache: timed out waiting for the condition Mar 13 12:53:53.802121 master-0 kubenswrapper[28149]: E0313 12:53:53.802016 28149 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/packageserver-service-cert: failed to sync secret cache: timed out waiting for the condition Mar 13 12:53:53.802121 master-0 kubenswrapper[28149]: E0313 12:53:53.802018 28149 secret.go:189] Couldn't get secret openshift-monitoring/prometheus-operator-admission-webhook-tls: failed to sync secret cache: timed out waiting for the condition Mar 13 12:53:53.802121 master-0 kubenswrapper[28149]: E0313 12:53:53.802026 28149 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5e4f10ca-6466-4ac0-aeb7-325e40473e04-kube-state-metrics-custom-resource-state-configmap podName:5e4f10ca-6466-4ac0-aeb7-325e40473e04 nodeName:}" failed. No retries permitted until 2026-03-13 12:53:54.301961647 +0000 UTC m=+7.955426906 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-state-metrics-custom-resource-state-configmap" (UniqueName: "kubernetes.io/configmap/5e4f10ca-6466-4ac0-aeb7-325e40473e04-kube-state-metrics-custom-resource-state-configmap") pod "kube-state-metrics-68b88f8cb5-blvhm" (UID: "5e4f10ca-6466-4ac0-aeb7-325e40473e04") : failed to sync configmap cache: timed out waiting for the condition Mar 13 12:53:53.802121 master-0 kubenswrapper[28149]: E0313 12:53:53.801898 28149 secret.go:189] Couldn't get secret openshift-cluster-machine-approver/machine-approver-tls: failed to sync secret cache: timed out waiting for the condition Mar 13 12:53:53.802121 master-0 kubenswrapper[28149]: E0313 12:53:53.802078 28149 secret.go:189] Couldn't get secret openshift-ingress-canary/canary-serving-cert: failed to sync secret cache: timed out waiting for the condition Mar 13 12:53:53.802121 master-0 kubenswrapper[28149]: E0313 12:53:53.801873 28149 configmap.go:193] Couldn't get configMap openshift-insights/service-ca-bundle: failed to sync configmap cache: timed out waiting for the condition Mar 13 12:53:53.802121 master-0 kubenswrapper[28149]: E0313 12:53:53.802105 28149 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/00d8a21b-701c-4334-9dda-34c28b417f42-auth-proxy-config podName:00d8a21b-701c-4334-9dda-34c28b417f42 nodeName:}" failed. No retries permitted until 2026-03-13 12:53:54.30207672 +0000 UTC m=+7.955541949 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "auth-proxy-config" (UniqueName: "kubernetes.io/configmap/00d8a21b-701c-4334-9dda-34c28b417f42-auth-proxy-config") pod "cluster-cloud-controller-manager-operator-7c8df9b496-x2wlg" (UID: "00d8a21b-701c-4334-9dda-34c28b417f42") : failed to sync configmap cache: timed out waiting for the condition Mar 13 12:53:53.802121 master-0 kubenswrapper[28149]: E0313 12:53:53.802007 28149 secret.go:189] Couldn't get secret openshift-cloud-controller-manager-operator/cloud-controller-manager-operator-tls: failed to sync secret cache: timed out waiting for the condition Mar 13 12:53:53.802577 master-0 kubenswrapper[28149]: E0313 12:53:53.802223 28149 secret.go:189] Couldn't get secret openshift-cloud-credential-operator/cloud-credential-operator-serving-cert: failed to sync secret cache: timed out waiting for the condition Mar 13 12:53:53.802577 master-0 kubenswrapper[28149]: E0313 12:53:53.802274 28149 configmap.go:193] Couldn't get configMap openshift-cluster-machine-approver/machine-approver-config: failed to sync configmap cache: timed out waiting for the condition Mar 13 12:53:53.802577 master-0 kubenswrapper[28149]: E0313 12:53:53.802273 28149 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/d47a1118-c12f-4234-8c0f-1a2a47fa8a4f-images podName:d47a1118-c12f-4234-8c0f-1a2a47fa8a4f nodeName:}" failed. No retries permitted until 2026-03-13 12:53:54.302123751 +0000 UTC m=+7.955589030 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "images" (UniqueName: "kubernetes.io/configmap/d47a1118-c12f-4234-8c0f-1a2a47fa8a4f-images") pod "machine-config-operator-fdb5c78b5-6g8qj" (UID: "d47a1118-c12f-4234-8c0f-1a2a47fa8a4f") : failed to sync configmap cache: timed out waiting for the condition Mar 13 12:53:53.802577 master-0 kubenswrapper[28149]: E0313 12:53:53.802276 28149 secret.go:189] Couldn't get secret openshift-machine-config-operator/node-bootstrapper-token: failed to sync secret cache: timed out waiting for the condition Mar 13 12:53:53.802577 master-0 kubenswrapper[28149]: E0313 12:53:53.802320 28149 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/b12a6f33-70df-4832-ac3b-0d2b94125fbf-config podName:b12a6f33-70df-4832-ac3b-0d2b94125fbf nodeName:}" failed. No retries permitted until 2026-03-13 12:53:54.302296666 +0000 UTC m=+7.955761825 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/b12a6f33-70df-4832-ac3b-0d2b94125fbf-config") pod "machine-approver-754bdc9f9d-cwl2p" (UID: "b12a6f33-70df-4832-ac3b-0d2b94125fbf") : failed to sync configmap cache: timed out waiting for the condition Mar 13 12:53:53.802577 master-0 kubenswrapper[28149]: E0313 12:53:53.802330 28149 configmap.go:193] Couldn't get configMap openshift-controller-manager/openshift-global-ca: failed to sync configmap cache: timed out waiting for the condition Mar 13 12:53:53.802577 master-0 kubenswrapper[28149]: E0313 12:53:53.802340 28149 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/081a08d6-a4fd-412c-81c3-1364c36f0f15-node-bootstrap-token podName:081a08d6-a4fd-412c-81c3-1364c36f0f15 nodeName:}" failed. No retries permitted until 2026-03-13 12:53:54.302328288 +0000 UTC m=+7.955793597 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "node-bootstrap-token" (UniqueName: "kubernetes.io/secret/081a08d6-a4fd-412c-81c3-1364c36f0f15-node-bootstrap-token") pod "machine-config-server-6crtf" (UID: "081a08d6-a4fd-412c-81c3-1364c36f0f15") : failed to sync secret cache: timed out waiting for the condition Mar 13 12:53:53.802577 master-0 kubenswrapper[28149]: E0313 12:53:53.802369 28149 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/842251bd-238a-44ba-99fc-a356503f5d16-node-exporter-tls podName:842251bd-238a-44ba-99fc-a356503f5d16 nodeName:}" failed. No retries permitted until 2026-03-13 12:53:54.302352369 +0000 UTC m=+7.955817648 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "node-exporter-tls" (UniqueName: "kubernetes.io/secret/842251bd-238a-44ba-99fc-a356503f5d16-node-exporter-tls") pod "node-exporter-v4hdh" (UID: "842251bd-238a-44ba-99fc-a356503f5d16") : failed to sync secret cache: timed out waiting for the condition Mar 13 12:53:53.802577 master-0 kubenswrapper[28149]: E0313 12:53:53.802400 28149 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f6992fed-b472-4a2d-a376-c5d72aa846d4-apiservice-cert podName:f6992fed-b472-4a2d-a376-c5d72aa846d4 nodeName:}" failed. No retries permitted until 2026-03-13 12:53:54.302378619 +0000 UTC m=+7.955843898 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "apiservice-cert" (UniqueName: "kubernetes.io/secret/f6992fed-b472-4a2d-a376-c5d72aa846d4-apiservice-cert") pod "packageserver-5c5f6764b5-96ktp" (UID: "f6992fed-b472-4a2d-a376-c5d72aa846d4") : failed to sync secret cache: timed out waiting for the condition Mar 13 12:53:53.802577 master-0 kubenswrapper[28149]: E0313 12:53:53.802419 28149 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/866b0545-e232-4c80-9fb6-549d313ac3fc-tls-certificates podName:866b0545-e232-4c80-9fb6-549d313ac3fc nodeName:}" failed. 
No retries permitted until 2026-03-13 12:53:54.30241027 +0000 UTC m=+7.955875539 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "tls-certificates" (UniqueName: "kubernetes.io/secret/866b0545-e232-4c80-9fb6-549d313ac3fc-tls-certificates") pod "prometheus-operator-admission-webhook-8464df8497-pmzkf" (UID: "866b0545-e232-4c80-9fb6-549d313ac3fc") : failed to sync secret cache: timed out waiting for the condition Mar 13 12:53:53.802577 master-0 kubenswrapper[28149]: E0313 12:53:53.802434 28149 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b12a6f33-70df-4832-ac3b-0d2b94125fbf-machine-approver-tls podName:b12a6f33-70df-4832-ac3b-0d2b94125fbf nodeName:}" failed. No retries permitted until 2026-03-13 12:53:54.302426731 +0000 UTC m=+7.955892000 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "machine-approver-tls" (UniqueName: "kubernetes.io/secret/b12a6f33-70df-4832-ac3b-0d2b94125fbf-machine-approver-tls") pod "machine-approver-754bdc9f9d-cwl2p" (UID: "b12a6f33-70df-4832-ac3b-0d2b94125fbf") : failed to sync secret cache: timed out waiting for the condition Mar 13 12:53:53.802577 master-0 kubenswrapper[28149]: E0313 12:53:53.802450 28149 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/81f8a7d8-b6a2-4522-91d3-bb524997ed0a-cert podName:81f8a7d8-b6a2-4522-91d3-bb524997ed0a nodeName:}" failed. No retries permitted until 2026-03-13 12:53:54.302442271 +0000 UTC m=+7.955907560 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/81f8a7d8-b6a2-4522-91d3-bb524997ed0a-cert") pod "ingress-canary-h8skx" (UID: "81f8a7d8-b6a2-4522-91d3-bb524997ed0a") : failed to sync secret cache: timed out waiting for the condition Mar 13 12:53:53.802577 master-0 kubenswrapper[28149]: E0313 12:53:53.802466 28149 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/50a2046b-092b-434c-92a2-579f4462c4fb-service-ca-bundle podName:50a2046b-092b-434c-92a2-579f4462c4fb nodeName:}" failed. No retries permitted until 2026-03-13 12:53:54.302458101 +0000 UTC m=+7.955923400 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "service-ca-bundle" (UniqueName: "kubernetes.io/configmap/50a2046b-092b-434c-92a2-579f4462c4fb-service-ca-bundle") pod "insights-operator-8f89dfddd-vxk8z" (UID: "50a2046b-092b-434c-92a2-579f4462c4fb") : failed to sync configmap cache: timed out waiting for the condition Mar 13 12:53:53.802577 master-0 kubenswrapper[28149]: E0313 12:53:53.802482 28149 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/00d8a21b-701c-4334-9dda-34c28b417f42-cloud-controller-manager-operator-tls podName:00d8a21b-701c-4334-9dda-34c28b417f42 nodeName:}" failed. No retries permitted until 2026-03-13 12:53:54.302474152 +0000 UTC m=+7.955939441 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "cloud-controller-manager-operator-tls" (UniqueName: "kubernetes.io/secret/00d8a21b-701c-4334-9dda-34c28b417f42-cloud-controller-manager-operator-tls") pod "cluster-cloud-controller-manager-operator-7c8df9b496-x2wlg" (UID: "00d8a21b-701c-4334-9dda-34c28b417f42") : failed to sync secret cache: timed out waiting for the condition Mar 13 12:53:53.802577 master-0 kubenswrapper[28149]: E0313 12:53:53.802503 28149 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/87a5904a-55ca-416f-8aec-57a2b5194c5a-cloud-credential-operator-serving-cert podName:87a5904a-55ca-416f-8aec-57a2b5194c5a nodeName:}" failed. No retries permitted until 2026-03-13 12:53:54.302492472 +0000 UTC m=+7.955957761 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cloud-credential-operator-serving-cert" (UniqueName: "kubernetes.io/secret/87a5904a-55ca-416f-8aec-57a2b5194c5a-cloud-credential-operator-serving-cert") pod "cloud-credential-operator-55d85b7b47-rvp8c" (UID: "87a5904a-55ca-416f-8aec-57a2b5194c5a") : failed to sync secret cache: timed out waiting for the condition Mar 13 12:53:53.802577 master-0 kubenswrapper[28149]: E0313 12:53:53.802519 28149 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/a454234a-6c8e-4916-81e8-c9e66cec9d31-proxy-ca-bundles podName:a454234a-6c8e-4916-81e8-c9e66cec9d31 nodeName:}" failed. No retries permitted until 2026-03-13 12:53:54.302511513 +0000 UTC m=+7.955976682 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "proxy-ca-bundles" (UniqueName: "kubernetes.io/configmap/a454234a-6c8e-4916-81e8-c9e66cec9d31-proxy-ca-bundles") pod "controller-manager-54c79cbfcc-cxhmh" (UID: "a454234a-6c8e-4916-81e8-c9e66cec9d31") : failed to sync configmap cache: timed out waiting for the condition Mar 13 12:53:53.803203 master-0 kubenswrapper[28149]: E0313 12:53:53.803182 28149 secret.go:189] Couldn't get secret openshift-multus/multus-admission-controller-secret: failed to sync secret cache: timed out waiting for the condition Mar 13 12:53:53.803344 master-0 kubenswrapper[28149]: E0313 12:53:53.803329 28149 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/36ad5a83-5c32-4941-94e0-7af86ac5d462-webhook-certs podName:36ad5a83-5c32-4941-94e0-7af86ac5d462 nodeName:}" failed. No retries permitted until 2026-03-13 12:53:54.303315836 +0000 UTC m=+7.956781095 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/36ad5a83-5c32-4941-94e0-7af86ac5d462-webhook-certs") pod "multus-admission-controller-7769569c45-qz88j" (UID: "36ad5a83-5c32-4941-94e0-7af86ac5d462") : failed to sync secret cache: timed out waiting for the condition Mar 13 12:53:53.803429 master-0 kubenswrapper[28149]: E0313 12:53:53.803212 28149 secret.go:189] Couldn't get secret openshift-cluster-storage-operator/cluster-storage-operator-serving-cert: failed to sync secret cache: timed out waiting for the condition Mar 13 12:53:53.803573 master-0 kubenswrapper[28149]: E0313 12:53:53.803414 28149 secret.go:189] Couldn't get secret openshift-marketplace/marketplace-operator-metrics: failed to sync secret cache: timed out waiting for the condition Mar 13 12:53:53.803645 master-0 kubenswrapper[28149]: E0313 12:53:53.803572 28149 secret.go:189] Couldn't get secret openshift-monitoring/openshift-state-metrics-kube-rbac-proxy-config: failed to sync secret cache: timed out waiting for the condition Mar 13 
12:53:53.803645 master-0 kubenswrapper[28149]: E0313 12:53:53.803432 28149 secret.go:189] Couldn't get secret openshift-monitoring/prometheus-operator-tls: failed to sync secret cache: timed out waiting for the condition Mar 13 12:53:53.803645 master-0 kubenswrapper[28149]: E0313 12:53:53.803622 28149 configmap.go:193] Couldn't get configMap openshift-monitoring/metrics-client-ca: failed to sync configmap cache: timed out waiting for the condition Mar 13 12:53:53.803645 master-0 kubenswrapper[28149]: E0313 12:53:53.803536 28149 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d7d67915-d31e-46dc-bb2e-1a6f689dd875-cluster-storage-operator-serving-cert podName:d7d67915-d31e-46dc-bb2e-1a6f689dd875 nodeName:}" failed. No retries permitted until 2026-03-13 12:53:54.303523621 +0000 UTC m=+7.956988880 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cluster-storage-operator-serving-cert" (UniqueName: "kubernetes.io/secret/d7d67915-d31e-46dc-bb2e-1a6f689dd875-cluster-storage-operator-serving-cert") pod "cluster-storage-operator-6fbfc8dc8f-jhtsp" (UID: "d7d67915-d31e-46dc-bb2e-1a6f689dd875") : failed to sync secret cache: timed out waiting for the condition Mar 13 12:53:53.803883 master-0 kubenswrapper[28149]: E0313 12:53:53.803662 28149 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d3d998ee-b26f-4e30-83bc-f94f8c68060a-marketplace-operator-metrics podName:d3d998ee-b26f-4e30-83bc-f94f8c68060a nodeName:}" failed. No retries permitted until 2026-03-13 12:53:54.303647175 +0000 UTC m=+7.957112334 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "marketplace-operator-metrics" (UniqueName: "kubernetes.io/secret/d3d998ee-b26f-4e30-83bc-f94f8c68060a-marketplace-operator-metrics") pod "marketplace-operator-64bf9778cb-7qhr4" (UID: "d3d998ee-b26f-4e30-83bc-f94f8c68060a") : failed to sync secret cache: timed out waiting for the condition Mar 13 12:53:53.803883 master-0 kubenswrapper[28149]: E0313 12:53:53.803673 28149 secret.go:189] Couldn't get secret openshift-machine-config-operator/mco-proxy-tls: failed to sync secret cache: timed out waiting for the condition Mar 13 12:53:53.803883 master-0 kubenswrapper[28149]: E0313 12:53:53.803676 28149 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1081e565-b7d8-4b6e-9d41-5db36cfe094c-openshift-state-metrics-kube-rbac-proxy-config podName:1081e565-b7d8-4b6e-9d41-5db36cfe094c nodeName:}" failed. No retries permitted until 2026-03-13 12:53:54.303670176 +0000 UTC m=+7.957135465 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "openshift-state-metrics-kube-rbac-proxy-config" (UniqueName: "kubernetes.io/secret/1081e565-b7d8-4b6e-9d41-5db36cfe094c-openshift-state-metrics-kube-rbac-proxy-config") pod "openshift-state-metrics-74cc79fd76-clrbz" (UID: "1081e565-b7d8-4b6e-9d41-5db36cfe094c") : failed to sync secret cache: timed out waiting for the condition Mar 13 12:53:53.803883 master-0 kubenswrapper[28149]: E0313 12:53:53.803720 28149 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d47a1118-c12f-4234-8c0f-1a2a47fa8a4f-proxy-tls podName:d47a1118-c12f-4234-8c0f-1a2a47fa8a4f nodeName:}" failed. No retries permitted until 2026-03-13 12:53:54.303697886 +0000 UTC m=+7.957163135 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "proxy-tls" (UniqueName: "kubernetes.io/secret/d47a1118-c12f-4234-8c0f-1a2a47fa8a4f-proxy-tls") pod "machine-config-operator-fdb5c78b5-6g8qj" (UID: "d47a1118-c12f-4234-8c0f-1a2a47fa8a4f") : failed to sync secret cache: timed out waiting for the condition Mar 13 12:53:53.803883 master-0 kubenswrapper[28149]: E0313 12:53:53.803751 28149 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/be89c006-0c82-4728-9c79-210303e623dc-prometheus-operator-tls podName:be89c006-0c82-4728-9c79-210303e623dc nodeName:}" failed. No retries permitted until 2026-03-13 12:53:54.303740399 +0000 UTC m=+7.957205698 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "prometheus-operator-tls" (UniqueName: "kubernetes.io/secret/be89c006-0c82-4728-9c79-210303e623dc-prometheus-operator-tls") pod "prometheus-operator-5ff8674d55-bvmsj" (UID: "be89c006-0c82-4728-9c79-210303e623dc") : failed to sync secret cache: timed out waiting for the condition Mar 13 12:53:53.803883 master-0 kubenswrapper[28149]: E0313 12:53:53.803767 28149 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/1081e565-b7d8-4b6e-9d41-5db36cfe094c-metrics-client-ca podName:1081e565-b7d8-4b6e-9d41-5db36cfe094c nodeName:}" failed. No retries permitted until 2026-03-13 12:53:54.303760169 +0000 UTC m=+7.957225458 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-client-ca" (UniqueName: "kubernetes.io/configmap/1081e565-b7d8-4b6e-9d41-5db36cfe094c-metrics-client-ca") pod "openshift-state-metrics-74cc79fd76-clrbz" (UID: "1081e565-b7d8-4b6e-9d41-5db36cfe094c") : failed to sync configmap cache: timed out waiting for the condition Mar 13 12:53:53.804246 master-0 kubenswrapper[28149]: E0313 12:53:53.804222 28149 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/packageserver-service-cert: failed to sync secret cache: timed out waiting for the condition Mar 13 12:53:53.804307 master-0 kubenswrapper[28149]: E0313 12:53:53.804268 28149 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f6992fed-b472-4a2d-a376-c5d72aa846d4-webhook-cert podName:f6992fed-b472-4a2d-a376-c5d72aa846d4 nodeName:}" failed. No retries permitted until 2026-03-13 12:53:54.304258593 +0000 UTC m=+7.957723852 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "webhook-cert" (UniqueName: "kubernetes.io/secret/f6992fed-b472-4a2d-a376-c5d72aa846d4-webhook-cert") pod "packageserver-5c5f6764b5-96ktp" (UID: "f6992fed-b472-4a2d-a376-c5d72aa846d4") : failed to sync secret cache: timed out waiting for the condition Mar 13 12:53:53.804424 master-0 kubenswrapper[28149]: E0313 12:53:53.804406 28149 configmap.go:193] Couldn't get configMap openshift-monitoring/metrics-client-ca: failed to sync configmap cache: timed out waiting for the condition Mar 13 12:53:53.804544 master-0 kubenswrapper[28149]: E0313 12:53:53.804532 28149 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/842251bd-238a-44ba-99fc-a356503f5d16-metrics-client-ca podName:842251bd-238a-44ba-99fc-a356503f5d16 nodeName:}" failed. No retries permitted until 2026-03-13 12:53:54.30451942 +0000 UTC m=+7.957984679 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-client-ca" (UniqueName: "kubernetes.io/configmap/842251bd-238a-44ba-99fc-a356503f5d16-metrics-client-ca") pod "node-exporter-v4hdh" (UID: "842251bd-238a-44ba-99fc-a356503f5d16") : failed to sync configmap cache: timed out waiting for the condition Mar 13 12:53:53.806249 master-0 kubenswrapper[28149]: E0313 12:53:53.806231 28149 secret.go:189] Couldn't get secret openshift-monitoring/kube-state-metrics-kube-rbac-proxy-config: failed to sync secret cache: timed out waiting for the condition Mar 13 12:53:53.806423 master-0 kubenswrapper[28149]: E0313 12:53:53.806410 28149 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5e4f10ca-6466-4ac0-aeb7-325e40473e04-kube-state-metrics-kube-rbac-proxy-config podName:5e4f10ca-6466-4ac0-aeb7-325e40473e04 nodeName:}" failed. No retries permitted until 2026-03-13 12:53:54.306396884 +0000 UTC m=+7.959862103 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-state-metrics-kube-rbac-proxy-config" (UniqueName: "kubernetes.io/secret/5e4f10ca-6466-4ac0-aeb7-325e40473e04-kube-state-metrics-kube-rbac-proxy-config") pod "kube-state-metrics-68b88f8cb5-blvhm" (UID: "5e4f10ca-6466-4ac0-aeb7-325e40473e04") : failed to sync secret cache: timed out waiting for the condition Mar 13 12:53:53.806518 master-0 kubenswrapper[28149]: E0313 12:53:53.806482 28149 secret.go:189] Couldn't get secret openshift-monitoring/kube-state-metrics-tls: failed to sync secret cache: timed out waiting for the condition Mar 13 12:53:53.806563 master-0 kubenswrapper[28149]: E0313 12:53:53.806542 28149 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5e4f10ca-6466-4ac0-aeb7-325e40473e04-kube-state-metrics-tls podName:5e4f10ca-6466-4ac0-aeb7-325e40473e04 nodeName:}" failed. No retries permitted until 2026-03-13 12:53:54.306530029 +0000 UTC m=+7.959995278 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-state-metrics-tls" (UniqueName: "kubernetes.io/secret/5e4f10ca-6466-4ac0-aeb7-325e40473e04-kube-state-metrics-tls") pod "kube-state-metrics-68b88f8cb5-blvhm" (UID: "5e4f10ca-6466-4ac0-aeb7-325e40473e04") : failed to sync secret cache: timed out waiting for the condition Mar 13 12:53:53.806563 master-0 kubenswrapper[28149]: I0313 12:53:53.806311 28149 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Mar 13 12:53:53.806668 master-0 kubenswrapper[28149]: E0313 12:53:53.806655 28149 secret.go:189] Couldn't get secret openshift-monitoring/metrics-client-certs: failed to sync secret cache: timed out waiting for the condition Mar 13 12:53:53.806752 master-0 kubenswrapper[28149]: E0313 12:53:53.806731 28149 configmap.go:193] Couldn't get configMap openshift-monitoring/metrics-server-audit-profiles: failed to sync configmap cache: timed out waiting for the condition Mar 13 12:53:53.806799 master-0 kubenswrapper[28149]: E0313 12:53:53.806781 28149 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/fc192c03-5aec-4507-a702-56bf98c96e9c-metrics-server-audit-profiles podName:fc192c03-5aec-4507-a702-56bf98c96e9c nodeName:}" failed. No retries permitted until 2026-03-13 12:53:54.306767036 +0000 UTC m=+7.960232265 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-server-audit-profiles" (UniqueName: "kubernetes.io/configmap/fc192c03-5aec-4507-a702-56bf98c96e9c-metrics-server-audit-profiles") pod "metrics-server-567b9cf7f-cxnj2" (UID: "fc192c03-5aec-4507-a702-56bf98c96e9c") : failed to sync configmap cache: timed out waiting for the condition Mar 13 12:53:53.806851 master-0 kubenswrapper[28149]: E0313 12:53:53.806694 28149 secret.go:189] Couldn't get secret openshift-insights/openshift-insights-serving-cert: failed to sync secret cache: timed out waiting for the condition Mar 13 12:53:53.806897 master-0 kubenswrapper[28149]: E0313 12:53:53.806880 28149 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/50a2046b-092b-434c-92a2-579f4462c4fb-serving-cert podName:50a2046b-092b-434c-92a2-579f4462c4fb nodeName:}" failed. No retries permitted until 2026-03-13 12:53:54.306868279 +0000 UTC m=+7.960333528 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/50a2046b-092b-434c-92a2-579f4462c4fb-serving-cert") pod "insights-operator-8f89dfddd-vxk8z" (UID: "50a2046b-092b-434c-92a2-579f4462c4fb") : failed to sync secret cache: timed out waiting for the condition Mar 13 12:53:53.806897 master-0 kubenswrapper[28149]: E0313 12:53:53.806828 28149 configmap.go:193] Couldn't get configMap openshift-cloud-controller-manager-operator/cloud-controller-manager-images: failed to sync configmap cache: timed out waiting for the condition Mar 13 12:53:53.806983 master-0 kubenswrapper[28149]: E0313 12:53:53.806925 28149 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/00d8a21b-701c-4334-9dda-34c28b417f42-images podName:00d8a21b-701c-4334-9dda-34c28b417f42 nodeName:}" failed. No retries permitted until 2026-03-13 12:53:54.30691759 +0000 UTC m=+7.960382839 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "images" (UniqueName: "kubernetes.io/configmap/00d8a21b-701c-4334-9dda-34c28b417f42-images") pod "cluster-cloud-controller-manager-operator-7c8df9b496-x2wlg" (UID: "00d8a21b-701c-4334-9dda-34c28b417f42") : failed to sync configmap cache: timed out waiting for the condition Mar 13 12:53:53.807050 master-0 kubenswrapper[28149]: E0313 12:53:53.807038 28149 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/fc192c03-5aec-4507-a702-56bf98c96e9c-secret-metrics-client-certs podName:fc192c03-5aec-4507-a702-56bf98c96e9c nodeName:}" failed. No retries permitted until 2026-03-13 12:53:54.307026563 +0000 UTC m=+7.960491742 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "secret-metrics-client-certs" (UniqueName: "kubernetes.io/secret/fc192c03-5aec-4507-a702-56bf98c96e9c-secret-metrics-client-certs") pod "metrics-server-567b9cf7f-cxnj2" (UID: "fc192c03-5aec-4507-a702-56bf98c96e9c") : failed to sync secret cache: timed out waiting for the condition Mar 13 12:53:53.807364 master-0 kubenswrapper[28149]: E0313 12:53:53.807341 28149 secret.go:189] Couldn't get secret openshift-monitoring/node-exporter-kube-rbac-proxy-config: failed to sync secret cache: timed out waiting for the condition Mar 13 12:53:53.807455 master-0 kubenswrapper[28149]: E0313 12:53:53.807406 28149 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/842251bd-238a-44ba-99fc-a356503f5d16-node-exporter-kube-rbac-proxy-config podName:842251bd-238a-44ba-99fc-a356503f5d16 nodeName:}" failed. No retries permitted until 2026-03-13 12:53:54.307394663 +0000 UTC m=+7.960859922 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "node-exporter-kube-rbac-proxy-config" (UniqueName: "kubernetes.io/secret/842251bd-238a-44ba-99fc-a356503f5d16-node-exporter-kube-rbac-proxy-config") pod "node-exporter-v4hdh" (UID: "842251bd-238a-44ba-99fc-a356503f5d16") : failed to sync secret cache: timed out waiting for the condition Mar 13 12:53:53.807731 master-0 kubenswrapper[28149]: E0313 12:53:53.807712 28149 configmap.go:193] Couldn't get configMap openshift-cloud-credential-operator/cco-trusted-ca: failed to sync configmap cache: timed out waiting for the condition Mar 13 12:53:53.807863 master-0 kubenswrapper[28149]: E0313 12:53:53.807843 28149 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/87a5904a-55ca-416f-8aec-57a2b5194c5a-cco-trusted-ca podName:87a5904a-55ca-416f-8aec-57a2b5194c5a nodeName:}" failed. No retries permitted until 2026-03-13 12:53:54.307829436 +0000 UTC m=+7.961294655 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cco-trusted-ca" (UniqueName: "kubernetes.io/configmap/87a5904a-55ca-416f-8aec-57a2b5194c5a-cco-trusted-ca") pod "cloud-credential-operator-55d85b7b47-rvp8c" (UID: "87a5904a-55ca-416f-8aec-57a2b5194c5a") : failed to sync configmap cache: timed out waiting for the condition Mar 13 12:53:53.807952 master-0 kubenswrapper[28149]: E0313 12:53:53.807884 28149 secret.go:189] Couldn't get secret openshift-machine-api/cluster-baremetal-webhook-server-cert: failed to sync secret cache: timed out waiting for the condition Mar 13 12:53:53.808044 master-0 kubenswrapper[28149]: E0313 12:53:53.808033 28149 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/317af639-269e-4163-8e24-fcea468b9352-cert podName:317af639-269e-4163-8e24-fcea468b9352 nodeName:}" failed. No retries permitted until 2026-03-13 12:53:54.308023392 +0000 UTC m=+7.961488551 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/317af639-269e-4163-8e24-fcea468b9352-cert") pod "cluster-baremetal-operator-5cdb4c5598-l6jp5" (UID: "317af639-269e-4163-8e24-fcea468b9352") : failed to sync secret cache: timed out waiting for the condition Mar 13 12:53:53.808155 master-0 kubenswrapper[28149]: E0313 12:53:53.808126 28149 secret.go:189] Couldn't get secret openshift-monitoring/metrics-server-tls: failed to sync secret cache: timed out waiting for the condition Mar 13 12:53:53.808268 master-0 kubenswrapper[28149]: E0313 12:53:53.808255 28149 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/fc192c03-5aec-4507-a702-56bf98c96e9c-secret-metrics-server-tls podName:fc192c03-5aec-4507-a702-56bf98c96e9c nodeName:}" failed. No retries permitted until 2026-03-13 12:53:54.308243768 +0000 UTC m=+7.961708927 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "secret-metrics-server-tls" (UniqueName: "kubernetes.io/secret/fc192c03-5aec-4507-a702-56bf98c96e9c-secret-metrics-server-tls") pod "metrics-server-567b9cf7f-cxnj2" (UID: "fc192c03-5aec-4507-a702-56bf98c96e9c") : failed to sync secret cache: timed out waiting for the condition Mar 13 12:53:53.808387 master-0 kubenswrapper[28149]: E0313 12:53:53.808365 28149 configmap.go:193] Couldn't get configMap openshift-controller-manager/config: failed to sync configmap cache: timed out waiting for the condition Mar 13 12:53:53.808449 master-0 kubenswrapper[28149]: E0313 12:53:53.808417 28149 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/a454234a-6c8e-4916-81e8-c9e66cec9d31-config podName:a454234a-6c8e-4916-81e8-c9e66cec9d31 nodeName:}" failed. No retries permitted until 2026-03-13 12:53:54.308406413 +0000 UTC m=+7.961871572 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/a454234a-6c8e-4916-81e8-c9e66cec9d31-config") pod "controller-manager-54c79cbfcc-cxhmh" (UID: "a454234a-6c8e-4916-81e8-c9e66cec9d31") : failed to sync configmap cache: timed out waiting for the condition Mar 13 12:53:53.808834 master-0 kubenswrapper[28149]: E0313 12:53:53.808795 28149 secret.go:189] Couldn't get secret openshift-machine-api/cluster-baremetal-operator-tls: failed to sync secret cache: timed out waiting for the condition Mar 13 12:53:53.808882 master-0 kubenswrapper[28149]: E0313 12:53:53.808835 28149 configmap.go:193] Couldn't get configMap openshift-insights/trusted-ca-bundle: failed to sync configmap cache: timed out waiting for the condition Mar 13 12:53:53.808917 master-0 kubenswrapper[28149]: E0313 12:53:53.808878 28149 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/317af639-269e-4163-8e24-fcea468b9352-cluster-baremetal-operator-tls podName:317af639-269e-4163-8e24-fcea468b9352 nodeName:}" failed. No retries permitted until 2026-03-13 12:53:54.308856136 +0000 UTC m=+7.962321335 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "cluster-baremetal-operator-tls" (UniqueName: "kubernetes.io/secret/317af639-269e-4163-8e24-fcea468b9352-cluster-baremetal-operator-tls") pod "cluster-baremetal-operator-5cdb4c5598-l6jp5" (UID: "317af639-269e-4163-8e24-fcea468b9352") : failed to sync secret cache: timed out waiting for the condition Mar 13 12:53:53.808917 master-0 kubenswrapper[28149]: E0313 12:53:53.808909 28149 configmap.go:193] Couldn't get configMap openshift-ovn-kubernetes/ovnkube-script-lib: failed to sync configmap cache: timed out waiting for the condition Mar 13 12:53:53.808972 master-0 kubenswrapper[28149]: E0313 12:53:53.808882 28149 secret.go:189] Couldn't get secret openshift-controller-manager/serving-cert: failed to sync secret cache: timed out waiting for the condition Mar 13 12:53:53.808972 master-0 kubenswrapper[28149]: E0313 12:53:53.808942 28149 secret.go:189] Couldn't get secret openshift-machine-config-operator/proxy-tls: failed to sync secret cache: timed out waiting for the condition Mar 13 12:53:53.808972 master-0 kubenswrapper[28149]: E0313 12:53:53.808912 28149 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/50a2046b-092b-434c-92a2-579f4462c4fb-trusted-ca-bundle podName:50a2046b-092b-434c-92a2-579f4462c4fb nodeName:}" failed. No retries permitted until 2026-03-13 12:53:54.308901257 +0000 UTC m=+7.962366466 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/50a2046b-092b-434c-92a2-579f4462c4fb-trusted-ca-bundle") pod "insights-operator-8f89dfddd-vxk8z" (UID: "50a2046b-092b-434c-92a2-579f4462c4fb") : failed to sync configmap cache: timed out waiting for the condition Mar 13 12:53:53.808972 master-0 kubenswrapper[28149]: E0313 12:53:53.808971 28149 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/d6226325-c4d9-497e-8d19-a71adc66c5ac-ovnkube-script-lib podName:d6226325-c4d9-497e-8d19-a71adc66c5ac nodeName:}" failed. No retries permitted until 2026-03-13 12:53:54.308957728 +0000 UTC m=+7.962422887 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "ovnkube-script-lib" (UniqueName: "kubernetes.io/configmap/d6226325-c4d9-497e-8d19-a71adc66c5ac-ovnkube-script-lib") pod "ovnkube-node-h8fwp" (UID: "d6226325-c4d9-497e-8d19-a71adc66c5ac") : failed to sync configmap cache: timed out waiting for the condition Mar 13 12:53:53.808972 master-0 kubenswrapper[28149]: E0313 12:53:53.808974 28149 configmap.go:193] Couldn't get configMap openshift-monitoring/kubelet-serving-ca-bundle: failed to sync configmap cache: timed out waiting for the condition Mar 13 12:53:53.809205 master-0 kubenswrapper[28149]: E0313 12:53:53.808983 28149 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a454234a-6c8e-4916-81e8-c9e66cec9d31-serving-cert podName:a454234a-6c8e-4916-81e8-c9e66cec9d31 nodeName:}" failed. No retries permitted until 2026-03-13 12:53:54.308978199 +0000 UTC m=+7.962443358 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/a454234a-6c8e-4916-81e8-c9e66cec9d31-serving-cert") pod "controller-manager-54c79cbfcc-cxhmh" (UID: "a454234a-6c8e-4916-81e8-c9e66cec9d31") : failed to sync secret cache: timed out waiting for the condition Mar 13 12:53:53.809205 master-0 kubenswrapper[28149]: E0313 12:53:53.808998 28149 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/50be3c2b-284b-4f60-b4ed-2cc7b4e528fa-proxy-tls podName:50be3c2b-284b-4f60-b4ed-2cc7b4e528fa nodeName:}" failed. No retries permitted until 2026-03-13 12:53:54.308991389 +0000 UTC m=+7.962456548 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "proxy-tls" (UniqueName: "kubernetes.io/secret/50be3c2b-284b-4f60-b4ed-2cc7b4e528fa-proxy-tls") pod "machine-config-daemon-5h8rc" (UID: "50be3c2b-284b-4f60-b4ed-2cc7b4e528fa") : failed to sync secret cache: timed out waiting for the condition Mar 13 12:53:53.809205 master-0 kubenswrapper[28149]: E0313 12:53:53.809010 28149 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/fc192c03-5aec-4507-a702-56bf98c96e9c-configmap-kubelet-serving-ca-bundle podName:fc192c03-5aec-4507-a702-56bf98c96e9c nodeName:}" failed. No retries permitted until 2026-03-13 12:53:54.30900478 +0000 UTC m=+7.962469939 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "configmap-kubelet-serving-ca-bundle" (UniqueName: "kubernetes.io/configmap/fc192c03-5aec-4507-a702-56bf98c96e9c-configmap-kubelet-serving-ca-bundle") pod "metrics-server-567b9cf7f-cxnj2" (UID: "fc192c03-5aec-4507-a702-56bf98c96e9c") : failed to sync configmap cache: timed out waiting for the condition Mar 13 12:53:53.809407 master-0 kubenswrapper[28149]: E0313 12:53:53.809382 28149 configmap.go:193] Couldn't get configMap openshift-cluster-machine-approver/kube-rbac-proxy: failed to sync configmap cache: timed out waiting for the condition Mar 13 12:53:53.809407 master-0 kubenswrapper[28149]: E0313 12:53:53.809389 28149 configmap.go:193] Couldn't get configMap openshift-machine-config-operator/kube-rbac-proxy: failed to sync configmap cache: timed out waiting for the condition Mar 13 12:53:53.809493 master-0 kubenswrapper[28149]: E0313 12:53:53.809409 28149 secret.go:189] Couldn't get secret openshift-cluster-samples-operator/samples-operator-tls: failed to sync secret cache: timed out waiting for the condition Mar 13 12:53:53.809493 master-0 kubenswrapper[28149]: E0313 12:53:53.809394 28149 secret.go:189] Couldn't get secret openshift-route-controller-manager/serving-cert: failed to sync secret cache: timed out waiting for the condition Mar 13 12:53:53.809493 master-0 kubenswrapper[28149]: E0313 12:53:53.809436 28149 configmap.go:193] Couldn't get configMap openshift-monitoring/metrics-client-ca: failed to sync configmap cache: timed out waiting for the condition Mar 13 12:53:53.809493 master-0 kubenswrapper[28149]: E0313 12:53:53.809416 28149 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/b12a6f33-70df-4832-ac3b-0d2b94125fbf-auth-proxy-config podName:b12a6f33-70df-4832-ac3b-0d2b94125fbf nodeName:}" failed. No retries permitted until 2026-03-13 12:53:54.309408932 +0000 UTC m=+7.962874091 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "auth-proxy-config" (UniqueName: "kubernetes.io/configmap/b12a6f33-70df-4832-ac3b-0d2b94125fbf-auth-proxy-config") pod "machine-approver-754bdc9f9d-cwl2p" (UID: "b12a6f33-70df-4832-ac3b-0d2b94125fbf") : failed to sync configmap cache: timed out waiting for the condition Mar 13 12:53:53.809493 master-0 kubenswrapper[28149]: E0313 12:53:53.809487 28149 configmap.go:193] Couldn't get configMap openshift-monitoring/metrics-client-ca: failed to sync configmap cache: timed out waiting for the condition Mar 13 12:53:53.809692 master-0 kubenswrapper[28149]: E0313 12:53:53.809505 28149 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d39ee5d7-840e-4481-b0b9-baf34da2c7b1-samples-operator-tls podName:d39ee5d7-840e-4481-b0b9-baf34da2c7b1 nodeName:}" failed. No retries permitted until 2026-03-13 12:53:54.309493625 +0000 UTC m=+7.962958844 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "samples-operator-tls" (UniqueName: "kubernetes.io/secret/d39ee5d7-840e-4481-b0b9-baf34da2c7b1-samples-operator-tls") pod "cluster-samples-operator-664cb58b85-m5499" (UID: "d39ee5d7-840e-4481-b0b9-baf34da2c7b1") : failed to sync secret cache: timed out waiting for the condition Mar 13 12:53:53.809692 master-0 kubenswrapper[28149]: E0313 12:53:53.809435 28149 secret.go:189] Couldn't get secret openshift-monitoring/metrics-server-a1r15je3eljsi: failed to sync secret cache: timed out waiting for the condition Mar 13 12:53:53.809692 master-0 kubenswrapper[28149]: E0313 12:53:53.809527 28149 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/50be3c2b-284b-4f60-b4ed-2cc7b4e528fa-mcd-auth-proxy-config podName:50be3c2b-284b-4f60-b4ed-2cc7b4e528fa nodeName:}" failed. No retries permitted until 2026-03-13 12:53:54.309520235 +0000 UTC m=+7.962985464 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "mcd-auth-proxy-config" (UniqueName: "kubernetes.io/configmap/50be3c2b-284b-4f60-b4ed-2cc7b4e528fa-mcd-auth-proxy-config") pod "machine-config-daemon-5h8rc" (UID: "50be3c2b-284b-4f60-b4ed-2cc7b4e528fa") : failed to sync configmap cache: timed out waiting for the condition Mar 13 12:53:53.809692 master-0 kubenswrapper[28149]: E0313 12:53:53.809555 28149 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/18ffa620-dacc-4b09-be04-2c325f860813-serving-cert podName:18ffa620-dacc-4b09-be04-2c325f860813 nodeName:}" failed. No retries permitted until 2026-03-13 12:53:54.309549826 +0000 UTC m=+7.963014985 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/18ffa620-dacc-4b09-be04-2c325f860813-serving-cert") pod "route-controller-manager-68c48d4f7d-k7drw" (UID: "18ffa620-dacc-4b09-be04-2c325f860813") : failed to sync secret cache: timed out waiting for the condition Mar 13 12:53:53.809692 master-0 kubenswrapper[28149]: E0313 12:53:53.809582 28149 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/fc192c03-5aec-4507-a702-56bf98c96e9c-client-ca-bundle podName:fc192c03-5aec-4507-a702-56bf98c96e9c nodeName:}" failed. No retries permitted until 2026-03-13 12:53:54.309569117 +0000 UTC m=+7.963034386 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "client-ca-bundle" (UniqueName: "kubernetes.io/secret/fc192c03-5aec-4507-a702-56bf98c96e9c-client-ca-bundle") pod "metrics-server-567b9cf7f-cxnj2" (UID: "fc192c03-5aec-4507-a702-56bf98c96e9c") : failed to sync secret cache: timed out waiting for the condition Mar 13 12:53:53.809692 master-0 kubenswrapper[28149]: E0313 12:53:53.809605 28149 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5e4f10ca-6466-4ac0-aeb7-325e40473e04-metrics-client-ca podName:5e4f10ca-6466-4ac0-aeb7-325e40473e04 nodeName:}" failed. 
No retries permitted until 2026-03-13 12:53:54.309593857 +0000 UTC m=+7.963059016 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-client-ca" (UniqueName: "kubernetes.io/configmap/5e4f10ca-6466-4ac0-aeb7-325e40473e04-metrics-client-ca") pod "kube-state-metrics-68b88f8cb5-blvhm" (UID: "5e4f10ca-6466-4ac0-aeb7-325e40473e04") : failed to sync configmap cache: timed out waiting for the condition
Mar 13 12:53:53.809692 master-0 kubenswrapper[28149]: E0313 12:53:53.809621 28149 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/be89c006-0c82-4728-9c79-210303e623dc-metrics-client-ca podName:be89c006-0c82-4728-9c79-210303e623dc nodeName:}" failed. No retries permitted until 2026-03-13 12:53:54.309614008 +0000 UTC m=+7.963079257 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-client-ca" (UniqueName: "kubernetes.io/configmap/be89c006-0c82-4728-9c79-210303e623dc-metrics-client-ca") pod "prometheus-operator-5ff8674d55-bvmsj" (UID: "be89c006-0c82-4728-9c79-210303e623dc") : failed to sync configmap cache: timed out waiting for the condition
Mar 13 12:53:53.809960 master-0 kubenswrapper[28149]: I0313 12:53:53.809871 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c4477be6-bcff-407a-8033-b005e19bf5d6-serving-cert\") pod \"apiserver-787dbf5bb9-5645n\" (UID: \"c4477be6-bcff-407a-8033-b005e19bf5d6\") " pod="openshift-oauth-apiserver/apiserver-787dbf5bb9-5645n"
Mar 13 12:53:53.809960 master-0 kubenswrapper[28149]: E0313 12:53:53.809944 28149 configmap.go:193] Couldn't get configMap openshift-machine-config-operator/kube-rbac-proxy: failed to sync configmap cache: timed out waiting for the condition
Mar 13 12:53:53.810035 master-0 kubenswrapper[28149]: E0313 12:53:53.809994 28149 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/d47a1118-c12f-4234-8c0f-1a2a47fa8a4f-auth-proxy-config podName:d47a1118-c12f-4234-8c0f-1a2a47fa8a4f nodeName:}" failed. No retries permitted until 2026-03-13 12:53:54.309982428 +0000 UTC m=+7.963447667 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "auth-proxy-config" (UniqueName: "kubernetes.io/configmap/d47a1118-c12f-4234-8c0f-1a2a47fa8a4f-auth-proxy-config") pod "machine-config-operator-fdb5c78b5-6g8qj" (UID: "d47a1118-c12f-4234-8c0f-1a2a47fa8a4f") : failed to sync configmap cache: timed out waiting for the condition
Mar 13 12:53:53.810976 master-0 kubenswrapper[28149]: E0313 12:53:53.810938 28149 configmap.go:193] Couldn't get configMap openshift-machine-api/baremetal-kube-rbac-proxy: failed to sync configmap cache: timed out waiting for the condition
Mar 13 12:53:53.810976 master-0 kubenswrapper[28149]: E0313 12:53:53.810974 28149 configmap.go:193] Couldn't get configMap openshift-route-controller-manager/client-ca: failed to sync configmap cache: timed out waiting for the condition
Mar 13 12:53:53.811292 master-0 kubenswrapper[28149]: E0313 12:53:53.810989 28149 configmap.go:193] Couldn't get configMap openshift-machine-config-operator/kube-rbac-proxy: failed to sync configmap cache: timed out waiting for the condition
Mar 13 12:53:53.811292 master-0 kubenswrapper[28149]: E0313 12:53:53.810997 28149 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/317af639-269e-4163-8e24-fcea468b9352-config podName:317af639-269e-4163-8e24-fcea468b9352 nodeName:}" failed. No retries permitted until 2026-03-13 12:53:54.310986668 +0000 UTC m=+7.964451817 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/317af639-269e-4163-8e24-fcea468b9352-config") pod "cluster-baremetal-operator-5cdb4c5598-l6jp5" (UID: "317af639-269e-4163-8e24-fcea468b9352") : failed to sync configmap cache: timed out waiting for the condition
Mar 13 12:53:53.811292 master-0 kubenswrapper[28149]: E0313 12:53:53.811233 28149 secret.go:189] Couldn't get secret openshift-machine-config-operator/machine-config-server-tls: failed to sync secret cache: timed out waiting for the condition
Mar 13 12:53:53.811292 master-0 kubenswrapper[28149]: E0313 12:53:53.811238 28149 secret.go:189] Couldn't get secret openshift-monitoring/openshift-state-metrics-tls: failed to sync secret cache: timed out waiting for the condition
Mar 13 12:53:53.811292 master-0 kubenswrapper[28149]: E0313 12:53:53.811261 28149 configmap.go:193] Couldn't get configMap openshift-route-controller-manager/config: failed to sync configmap cache: timed out waiting for the condition
Mar 13 12:53:53.811292 master-0 kubenswrapper[28149]: E0313 12:53:53.811245 28149 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/18ffa620-dacc-4b09-be04-2c325f860813-client-ca podName:18ffa620-dacc-4b09-be04-2c325f860813 nodeName:}" failed. No retries permitted until 2026-03-13 12:53:54.311229444 +0000 UTC m=+7.964694603 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/18ffa620-dacc-4b09-be04-2c325f860813-client-ca") pod "route-controller-manager-68c48d4f7d-k7drw" (UID: "18ffa620-dacc-4b09-be04-2c325f860813") : failed to sync configmap cache: timed out waiting for the condition
Mar 13 12:53:53.811554 master-0 kubenswrapper[28149]: E0313 12:53:53.811332 28149 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/e25bef76-7020-4f86-8dee-a58ebed537d2-mcc-auth-proxy-config podName:e25bef76-7020-4f86-8dee-a58ebed537d2 nodeName:}" failed. No retries permitted until 2026-03-13 12:53:54.311307767 +0000 UTC m=+7.964772936 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "mcc-auth-proxy-config" (UniqueName: "kubernetes.io/configmap/e25bef76-7020-4f86-8dee-a58ebed537d2-mcc-auth-proxy-config") pod "machine-config-controller-ff46b7bdf-kmnlv" (UID: "e25bef76-7020-4f86-8dee-a58ebed537d2") : failed to sync configmap cache: timed out waiting for the condition
Mar 13 12:53:53.811554 master-0 kubenswrapper[28149]: E0313 12:53:53.811359 28149 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/081a08d6-a4fd-412c-81c3-1364c36f0f15-certs podName:081a08d6-a4fd-412c-81c3-1364c36f0f15 nodeName:}" failed. No retries permitted until 2026-03-13 12:53:54.311347978 +0000 UTC m=+7.964813247 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "certs" (UniqueName: "kubernetes.io/secret/081a08d6-a4fd-412c-81c3-1364c36f0f15-certs") pod "machine-config-server-6crtf" (UID: "081a08d6-a4fd-412c-81c3-1364c36f0f15") : failed to sync secret cache: timed out waiting for the condition
Mar 13 12:53:53.811554 master-0 kubenswrapper[28149]: E0313 12:53:53.811389 28149 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1081e565-b7d8-4b6e-9d41-5db36cfe094c-openshift-state-metrics-tls podName:1081e565-b7d8-4b6e-9d41-5db36cfe094c nodeName:}" failed. No retries permitted until 2026-03-13 12:53:54.311380889 +0000 UTC m=+7.964846148 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "openshift-state-metrics-tls" (UniqueName: "kubernetes.io/secret/1081e565-b7d8-4b6e-9d41-5db36cfe094c-openshift-state-metrics-tls") pod "openshift-state-metrics-74cc79fd76-clrbz" (UID: "1081e565-b7d8-4b6e-9d41-5db36cfe094c") : failed to sync secret cache: timed out waiting for the condition
Mar 13 12:53:53.811554 master-0 kubenswrapper[28149]: E0313 12:53:53.811408 28149 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/18ffa620-dacc-4b09-be04-2c325f860813-config podName:18ffa620-dacc-4b09-be04-2c325f860813 nodeName:}" failed. No retries permitted until 2026-03-13 12:53:54.311400029 +0000 UTC m=+7.964865308 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/18ffa620-dacc-4b09-be04-2c325f860813-config") pod "route-controller-manager-68c48d4f7d-k7drw" (UID: "18ffa620-dacc-4b09-be04-2c325f860813") : failed to sync configmap cache: timed out waiting for the condition
Mar 13 12:53:53.811554 master-0 kubenswrapper[28149]: E0313 12:53:53.811427 28149 configmap.go:193] Couldn't get configMap openshift-controller-manager/client-ca: failed to sync configmap cache: timed out waiting for the condition
Mar 13 12:53:53.811554 master-0 kubenswrapper[28149]: E0313 12:53:53.811466 28149 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/a454234a-6c8e-4916-81e8-c9e66cec9d31-client-ca podName:a454234a-6c8e-4916-81e8-c9e66cec9d31 nodeName:}" failed. No retries permitted until 2026-03-13 12:53:54.311456961 +0000 UTC m=+7.964922220 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/a454234a-6c8e-4916-81e8-c9e66cec9d31-client-ca") pod "controller-manager-54c79cbfcc-cxhmh" (UID: "a454234a-6c8e-4916-81e8-c9e66cec9d31") : failed to sync configmap cache: timed out waiting for the condition
Mar 13 12:53:53.813798 master-0 kubenswrapper[28149]: E0313 12:53:53.813736 28149 configmap.go:193] Couldn't get configMap openshift-machine-api/cluster-baremetal-operator-images: failed to sync configmap cache: timed out waiting for the condition
Mar 13 12:53:53.813874 master-0 kubenswrapper[28149]: E0313 12:53:53.813863 28149 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/317af639-269e-4163-8e24-fcea468b9352-images podName:317af639-269e-4163-8e24-fcea468b9352 nodeName:}" failed. No retries permitted until 2026-03-13 12:53:54.31384364 +0000 UTC m=+7.967308799 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "images" (UniqueName: "kubernetes.io/configmap/317af639-269e-4163-8e24-fcea468b9352-images") pod "cluster-baremetal-operator-5cdb4c5598-l6jp5" (UID: "317af639-269e-4163-8e24-fcea468b9352") : failed to sync configmap cache: timed out waiting for the condition
Mar 13 12:53:53.815746 master-0 kubenswrapper[28149]: E0313 12:53:53.815505 28149 secret.go:189] Couldn't get secret openshift-machine-config-operator/mcc-proxy-tls: failed to sync secret cache: timed out waiting for the condition
Mar 13 12:53:53.815746 master-0 kubenswrapper[28149]: E0313 12:53:53.815547 28149 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e25bef76-7020-4f86-8dee-a58ebed537d2-proxy-tls podName:e25bef76-7020-4f86-8dee-a58ebed537d2 nodeName:}" failed. No retries permitted until 2026-03-13 12:53:54.315538999 +0000 UTC m=+7.969004158 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "proxy-tls" (UniqueName: "kubernetes.io/secret/e25bef76-7020-4f86-8dee-a58ebed537d2-proxy-tls") pod "machine-config-controller-ff46b7bdf-kmnlv" (UID: "e25bef76-7020-4f86-8dee-a58ebed537d2") : failed to sync secret cache: timed out waiting for the condition
Mar 13 12:53:53.815746 master-0 kubenswrapper[28149]: E0313 12:53:53.815549 28149 secret.go:189] Couldn't get secret openshift-monitoring/prometheus-operator-kube-rbac-proxy-config: failed to sync secret cache: timed out waiting for the condition
Mar 13 12:53:53.815746 master-0 kubenswrapper[28149]: E0313 12:53:53.815613 28149 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/be89c006-0c82-4728-9c79-210303e623dc-prometheus-operator-kube-rbac-proxy-config podName:be89c006-0c82-4728-9c79-210303e623dc nodeName:}" failed. No retries permitted until 2026-03-13 12:53:54.31559965 +0000 UTC m=+7.969064809 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "prometheus-operator-kube-rbac-proxy-config" (UniqueName: "kubernetes.io/secret/be89c006-0c82-4728-9c79-210303e623dc-prometheus-operator-kube-rbac-proxy-config") pod "prometheus-operator-5ff8674d55-bvmsj" (UID: "be89c006-0c82-4728-9c79-210303e623dc") : failed to sync secret cache: timed out waiting for the condition
Mar 13 12:53:53.825682 master-0 kubenswrapper[28149]: I0313 12:53:53.825619 28149 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-storage-operator"/"cluster-storage-operator-serving-cert"
Mar 13 12:53:53.846715 master-0 kubenswrapper[28149]: I0313 12:53:53.846654 28149 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-storage-operator"/"cluster-storage-operator-dockercfg-m2z2f"
Mar 13 12:53:53.865703 master-0 kubenswrapper[28149]: I0313 12:53:53.865642 28149 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt"
Mar 13 12:53:53.885924 master-0 kubenswrapper[28149]: I0313 12:53:53.885854 28149 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-59mr8"
Mar 13 12:53:53.905955 master-0 kubenswrapper[28149]: I0313 12:53:53.905902 28149 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert"
Mar 13 12:53:53.926161 master-0 kubenswrapper[28149]: I0313 12:53:53.926100 28149 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls"
Mar 13 12:53:53.945406 master-0 kubenswrapper[28149]: I0313 12:53:53.945333 28149 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-58w8f"
Mar 13 12:53:53.965687 master-0 kubenswrapper[28149]: I0313 12:53:53.965637 28149 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images"
Mar 13 12:53:53.986381 master-0 kubenswrapper[28149]: I0313 12:53:53.986324 28149 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy"
Mar 13 12:53:54.005098 master-0 kubenswrapper[28149]: I0313 12:53:54.004988 28149 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt"
Mar 13 12:53:54.026015 master-0 kubenswrapper[28149]: I0313 12:53:54.025956 28149 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"baremetal-kube-rbac-proxy"
Mar 13 12:53:54.045673 master-0 kubenswrapper[28149]: I0313 12:53:54.045612 28149 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics"
Mar 13 12:53:54.066191 master-0 kubenswrapper[28149]: I0313 12:53:54.066120 28149 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"cluster-baremetal-operator-dockercfg-99fzl"
Mar 13 12:53:54.085970 master-0 kubenswrapper[28149]: I0313 12:53:54.085896 28149 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"cluster-baremetal-operator-tls"
Mar 13 12:53:54.105916 master-0 kubenswrapper[28149]: I0313 12:53:54.105875 28149 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"cluster-baremetal-webhook-server-cert"
Mar 13 12:53:54.119061 master-0 kubenswrapper[28149]: I0313 12:53:54.118987 28149 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-4-master-0"
Mar 13 12:53:54.126569 master-0 kubenswrapper[28149]: I0313 12:53:54.126505 28149 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"cluster-baremetal-operator-images"
Mar 13 12:53:54.146825 master-0 kubenswrapper[28149]: I0313 12:53:54.146778 28149 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib"
Mar 13 12:53:54.166115 master-0 kubenswrapper[28149]: I0313 12:53:54.166068 28149 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cloud-credential-operator"/"cloud-credential-operator-dockercfg-fs9mz"
Mar 13 12:53:54.191623 master-0 kubenswrapper[28149]: I0313 12:53:54.191574 28149 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-credential-operator"/"cco-trusted-ca"
Mar 13 12:53:54.205157 master-0 kubenswrapper[28149]: I0313 12:53:54.205090 28149 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cloud-credential-operator"/"cloud-credential-operator-serving-cert"
Mar 13 12:53:54.225068 master-0 kubenswrapper[28149]: I0313 12:53:54.224994 28149 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-credential-operator"/"kube-root-ca.crt"
Mar 13 12:53:54.246896 master-0 kubenswrapper[28149]: I0313 12:53:54.246845 28149 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-credential-operator"/"openshift-service-ca.crt"
Mar 13 12:53:54.264970 master-0 kubenswrapper[28149]: I0313 12:53:54.264875 28149 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt"
Mar 13 12:53:54.286069 master-0 kubenswrapper[28149]: I0313 12:53:54.286008 28149 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-8mlcv"
Mar 13 12:53:54.305766 master-0 kubenswrapper[28149]: I0313 12:53:54.305711 28149 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls"
Mar 13 12:53:54.326128 master-0 kubenswrapper[28149]: I0313 12:53:54.326072 28149 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt"
Mar 13 12:53:54.345722 master-0 kubenswrapper[28149]: I0313 12:53:54.345668 28149 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls"
Mar 13 12:53:54.365040 master-0 kubenswrapper[28149]: I0313 12:53:54.364958 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/081a08d6-a4fd-412c-81c3-1364c36f0f15-node-bootstrap-token\") pod \"machine-config-server-6crtf\" (UID: \"081a08d6-a4fd-412c-81c3-1364c36f0f15\") " pod="openshift-machine-config-operator/machine-config-server-6crtf"
Mar 13 12:53:54.365040 master-0 kubenswrapper[28149]: I0313 12:53:54.365038 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloud-credential-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/87a5904a-55ca-416f-8aec-57a2b5194c5a-cloud-credential-operator-serving-cert\") pod \"cloud-credential-operator-55d85b7b47-rvp8c\" (UID: \"87a5904a-55ca-416f-8aec-57a2b5194c5a\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-55d85b7b47-rvp8c"
Mar 13 12:53:54.365263 master-0 kubenswrapper[28149]: I0313 12:53:54.365176 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a454234a-6c8e-4916-81e8-c9e66cec9d31-proxy-ca-bundles\") pod \"controller-manager-54c79cbfcc-cxhmh\" (UID: \"a454234a-6c8e-4916-81e8-c9e66cec9d31\") " pod="openshift-controller-manager/controller-manager-54c79cbfcc-cxhmh"
Mar 13 12:53:54.365441 master-0 kubenswrapper[28149]: I0313 12:53:54.365399 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/36ad5a83-5c32-4941-94e0-7af86ac5d462-webhook-certs\") pod \"multus-admission-controller-7769569c45-qz88j\" (UID: \"36ad5a83-5c32-4941-94e0-7af86ac5d462\") " pod="openshift-multus/multus-admission-controller-7769569c45-qz88j"
Mar 13 12:53:54.365485 master-0 kubenswrapper[28149]: I0313 12:53:54.365463 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-operator-tls\" (UniqueName: \"kubernetes.io/secret/be89c006-0c82-4728-9c79-210303e623dc-prometheus-operator-tls\") pod \"prometheus-operator-5ff8674d55-bvmsj\" (UID: \"be89c006-0c82-4728-9c79-210303e623dc\") " pod="openshift-monitoring/prometheus-operator-5ff8674d55-bvmsj"
Mar 13 12:53:54.365529 master-0 kubenswrapper[28149]: I0313 12:53:54.365506 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-storage-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/d7d67915-d31e-46dc-bb2e-1a6f689dd875-cluster-storage-operator-serving-cert\") pod \"cluster-storage-operator-6fbfc8dc8f-jhtsp\" (UID: \"d7d67915-d31e-46dc-bb2e-1a6f689dd875\") " pod="openshift-cluster-storage-operator/cluster-storage-operator-6fbfc8dc8f-jhtsp"
Mar 13 12:53:54.365577 master-0 kubenswrapper[28149]: I0313 12:53:54.365545 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloud-credential-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/87a5904a-55ca-416f-8aec-57a2b5194c5a-cloud-credential-operator-serving-cert\") pod \"cloud-credential-operator-55d85b7b47-rvp8c\" (UID: \"87a5904a-55ca-416f-8aec-57a2b5194c5a\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-55d85b7b47-rvp8c"
Mar 13 12:53:54.365609 master-0 kubenswrapper[28149]: I0313 12:53:54.365561 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/1081e565-b7d8-4b6e-9d41-5db36cfe094c-metrics-client-ca\") pod \"openshift-state-metrics-74cc79fd76-clrbz\" (UID: \"1081e565-b7d8-4b6e-9d41-5db36cfe094c\") " pod="openshift-monitoring/openshift-state-metrics-74cc79fd76-clrbz"
Mar 13 12:53:54.365688 master-0 kubenswrapper[28149]: I0313 12:53:54.365665 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/d3d998ee-b26f-4e30-83bc-f94f8c68060a-marketplace-operator-metrics\") pod \"marketplace-operator-64bf9778cb-7qhr4\" (UID: \"d3d998ee-b26f-4e30-83bc-f94f8c68060a\") " pod="openshift-marketplace/marketplace-operator-64bf9778cb-7qhr4"
Mar 13 12:53:54.365760 master-0 kubenswrapper[28149]: I0313 12:53:54.365742 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/5e4f10ca-6466-4ac0-aeb7-325e40473e04-kube-state-metrics-tls\") pod \"kube-state-metrics-68b88f8cb5-blvhm\" (UID: \"5e4f10ca-6466-4ac0-aeb7-325e40473e04\") " pod="openshift-monitoring/kube-state-metrics-68b88f8cb5-blvhm"
Mar 13 12:53:54.365811 master-0 kubenswrapper[28149]: I0313 12:53:54.365766 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/fc192c03-5aec-4507-a702-56bf98c96e9c-secret-metrics-client-certs\") pod \"metrics-server-567b9cf7f-cxnj2\" (UID: \"fc192c03-5aec-4507-a702-56bf98c96e9c\") " pod="openshift-monitoring/metrics-server-567b9cf7f-cxnj2"
Mar 13 12:53:54.365882 master-0 kubenswrapper[28149]: I0313 12:53:54.365862 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cluster-storage-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/d7d67915-d31e-46dc-bb2e-1a6f689dd875-cluster-storage-operator-serving-cert\") pod \"cluster-storage-operator-6fbfc8dc8f-jhtsp\" (UID: \"d7d67915-d31e-46dc-bb2e-1a6f689dd875\") " pod="openshift-cluster-storage-operator/cluster-storage-operator-6fbfc8dc8f-jhtsp"
Mar 13 12:53:54.365975 master-0 kubenswrapper[28149]: I0313 12:53:54.365947 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-server-audit-profiles\" (UniqueName: \"kubernetes.io/configmap/fc192c03-5aec-4507-a702-56bf98c96e9c-metrics-server-audit-profiles\") pod \"metrics-server-567b9cf7f-cxnj2\" (UID: \"fc192c03-5aec-4507-a702-56bf98c96e9c\") " pod="openshift-monitoring/metrics-server-567b9cf7f-cxnj2"
Mar 13 12:53:54.366030 master-0 kubenswrapper[28149]: I0313 12:53:54.366016 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-exporter-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/842251bd-238a-44ba-99fc-a356503f5d16-node-exporter-kube-rbac-proxy-config\") pod \"node-exporter-v4hdh\" (UID: \"842251bd-238a-44ba-99fc-a356503f5d16\") " pod="openshift-monitoring/node-exporter-v4hdh"
Mar 13 12:53:54.366082 master-0 kubenswrapper[28149]: I0313 12:53:54.366064 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cco-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/87a5904a-55ca-416f-8aec-57a2b5194c5a-cco-trusted-ca\") pod \"cloud-credential-operator-55d85b7b47-rvp8c\" (UID: \"87a5904a-55ca-416f-8aec-57a2b5194c5a\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-55d85b7b47-rvp8c"
Mar 13 12:53:54.366169 master-0 kubenswrapper[28149]: I0313 12:53:54.366093 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/d3d998ee-b26f-4e30-83bc-f94f8c68060a-marketplace-operator-metrics\") pod \"marketplace-operator-64bf9778cb-7qhr4\" (UID: \"d3d998ee-b26f-4e30-83bc-f94f8c68060a\") " pod="openshift-marketplace/marketplace-operator-64bf9778cb-7qhr4"
Mar 13 12:53:54.366210 master-0 kubenswrapper[28149]: I0313 12:53:54.366195 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/317af639-269e-4163-8e24-fcea468b9352-cert\") pod \"cluster-baremetal-operator-5cdb4c5598-l6jp5\" (UID: \"317af639-269e-4163-8e24-fcea468b9352\") " pod="openshift-machine-api/cluster-baremetal-operator-5cdb4c5598-l6jp5"
Mar 13 12:53:54.366252 master-0 kubenswrapper[28149]: I0313 12:53:54.366233 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/50be3c2b-284b-4f60-b4ed-2cc7b4e528fa-proxy-tls\") pod \"machine-config-daemon-5h8rc\" (UID: \"50be3c2b-284b-4f60-b4ed-2cc7b4e528fa\") " pod="openshift-machine-config-operator/machine-config-daemon-5h8rc"
Mar 13 12:53:54.366292 master-0 kubenswrapper[28149]: I0313 12:53:54.366259 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-metrics-server-tls\" (UniqueName: \"kubernetes.io/secret/fc192c03-5aec-4507-a702-56bf98c96e9c-secret-metrics-server-tls\") pod \"metrics-server-567b9cf7f-cxnj2\" (UID: \"fc192c03-5aec-4507-a702-56bf98c96e9c\") " pod="openshift-monitoring/metrics-server-567b9cf7f-cxnj2"
Mar 13 12:53:54.366292 master-0 kubenswrapper[28149]: I0313 12:53:54.366280 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a454234a-6c8e-4916-81e8-c9e66cec9d31-config\") pod \"controller-manager-54c79cbfcc-cxhmh\" (UID: \"a454234a-6c8e-4916-81e8-c9e66cec9d31\") " pod="openshift-controller-manager/controller-manager-54c79cbfcc-cxhmh"
Mar 13 12:53:54.366353 master-0 kubenswrapper[28149]: I0313 12:53:54.366315 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-baremetal-operator-tls\" (UniqueName: \"kubernetes.io/secret/317af639-269e-4163-8e24-fcea468b9352-cluster-baremetal-operator-tls\") pod \"cluster-baremetal-operator-5cdb4c5598-l6jp5\" (UID: \"317af639-269e-4163-8e24-fcea468b9352\") " pod="openshift-machine-api/cluster-baremetal-operator-5cdb4c5598-l6jp5"
Mar 13 12:53:54.366394 master-0 kubenswrapper[28149]: I0313 12:53:54.366358 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/50a2046b-092b-434c-92a2-579f4462c4fb-trusted-ca-bundle\") pod \"insights-operator-8f89dfddd-vxk8z\" (UID: \"50a2046b-092b-434c-92a2-579f4462c4fb\") " pod="openshift-insights/insights-operator-8f89dfddd-vxk8z"
Mar 13 12:53:54.366394 master-0 kubenswrapper[28149]: I0313 12:53:54.366378 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/d6226325-c4d9-497e-8d19-a71adc66c5ac-ovnkube-script-lib\") pod \"ovnkube-node-h8fwp\" (UID: \"d6226325-c4d9-497e-8d19-a71adc66c5ac\") " pod="openshift-ovn-kubernetes/ovnkube-node-h8fwp"
Mar 13 12:53:54.366481 master-0 kubenswrapper[28149]: I0313 12:53:54.366416 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/317af639-269e-4163-8e24-fcea468b9352-cert\") pod \"cluster-baremetal-operator-5cdb4c5598-l6jp5\" (UID: \"317af639-269e-4163-8e24-fcea468b9352\") " pod="openshift-machine-api/cluster-baremetal-operator-5cdb4c5598-l6jp5"
Mar 13 12:53:54.366481 master-0 kubenswrapper[28149]: I0313 12:53:54.366430 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a454234a-6c8e-4916-81e8-c9e66cec9d31-serving-cert\") pod \"controller-manager-54c79cbfcc-cxhmh\" (UID: \"a454234a-6c8e-4916-81e8-c9e66cec9d31\") " pod="openshift-controller-manager/controller-manager-54c79cbfcc-cxhmh"
Mar 13 12:53:54.366561 master-0 kubenswrapper[28149]: I0313 12:53:54.366476 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/d39ee5d7-840e-4481-b0b9-baf34da2c7b1-samples-operator-tls\") pod \"cluster-samples-operator-664cb58b85-m5499\" (UID: \"d39ee5d7-840e-4481-b0b9-baf34da2c7b1\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-664cb58b85-m5499"
Mar 13 12:53:54.366561 master-0 kubenswrapper[28149]: I0313 12:53:54.366514 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fc192c03-5aec-4507-a702-56bf98c96e9c-configmap-kubelet-serving-ca-bundle\") pod \"metrics-server-567b9cf7f-cxnj2\" (UID: \"fc192c03-5aec-4507-a702-56bf98c96e9c\") " pod="openshift-monitoring/metrics-server-567b9cf7f-cxnj2"
Mar 13 12:53:54.366651 master-0 kubenswrapper[28149]: I0313 12:53:54.366557 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/50be3c2b-284b-4f60-b4ed-2cc7b4e528fa-proxy-tls\") pod \"machine-config-daemon-5h8rc\" (UID: \"50be3c2b-284b-4f60-b4ed-2cc7b4e528fa\") " pod="openshift-machine-config-operator/machine-config-daemon-5h8rc"
Mar 13 12:53:54.366651 master-0 kubenswrapper[28149]: I0313 12:53:54.366628 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cco-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/87a5904a-55ca-416f-8aec-57a2b5194c5a-cco-trusted-ca\") pod \"cloud-credential-operator-55d85b7b47-rvp8c\" (UID: \"87a5904a-55ca-416f-8aec-57a2b5194c5a\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-55d85b7b47-rvp8c"
Mar 13 12:53:54.366651 master-0 kubenswrapper[28149]: I0313 12:53:54.366642 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cluster-baremetal-operator-tls\" (UniqueName: \"kubernetes.io/secret/317af639-269e-4163-8e24-fcea468b9352-cluster-baremetal-operator-tls\") pod \"cluster-baremetal-operator-5cdb4c5598-l6jp5\" (UID: \"317af639-269e-4163-8e24-fcea468b9352\") " pod="openshift-machine-api/cluster-baremetal-operator-5cdb4c5598-l6jp5"
Mar 13 12:53:54.366804 master-0 kubenswrapper[28149]: I0313 12:53:54.366637 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/5e4f10ca-6466-4ac0-aeb7-325e40473e04-metrics-client-ca\") pod \"kube-state-metrics-68b88f8cb5-blvhm\" (UID: \"5e4f10ca-6466-4ac0-aeb7-325e40473e04\") " pod="openshift-monitoring/kube-state-metrics-68b88f8cb5-blvhm"
Mar 13 12:53:54.366804 master-0 kubenswrapper[28149]: I0313 12:53:54.366690 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/d6226325-c4d9-497e-8d19-a71adc66c5ac-ovnkube-script-lib\") pod \"ovnkube-node-h8fwp\" (UID: \"d6226325-c4d9-497e-8d19-a71adc66c5ac\") " pod="openshift-ovn-kubernetes/ovnkube-node-h8fwp"
Mar 13 12:53:54.366804 master-0 kubenswrapper[28149]: I0313 12:53:54.366708 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fc192c03-5aec-4507-a702-56bf98c96e9c-client-ca-bundle\") pod \"metrics-server-567b9cf7f-cxnj2\" (UID: \"fc192c03-5aec-4507-a702-56bf98c96e9c\") " pod="openshift-monitoring/metrics-server-567b9cf7f-cxnj2"
Mar 13 12:53:54.366804 master-0 kubenswrapper[28149]: I0313 12:53:54.366729 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/d39ee5d7-840e-4481-b0b9-baf34da2c7b1-samples-operator-tls\") pod \"cluster-samples-operator-664cb58b85-m5499\" (UID: \"d39ee5d7-840e-4481-b0b9-baf34da2c7b1\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-664cb58b85-m5499"
Mar 13 12:53:54.366804 master-0 kubenswrapper[28149]: I0313 12:53:54.366732 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/50be3c2b-284b-4f60-b4ed-2cc7b4e528fa-mcd-auth-proxy-config\") pod \"machine-config-daemon-5h8rc\" (UID: \"50be3c2b-284b-4f60-b4ed-2cc7b4e528fa\") " pod="openshift-machine-config-operator/machine-config-daemon-5h8rc"
Mar 13 12:53:54.366804 master-0 kubenswrapper[28149]: I0313 12:53:54.366766 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/b12a6f33-70df-4832-ac3b-0d2b94125fbf-auth-proxy-config\") pod \"machine-approver-754bdc9f9d-cwl2p\" (UID: \"b12a6f33-70df-4832-ac3b-0d2b94125fbf\") " pod="openshift-cluster-machine-approver/machine-approver-754bdc9f9d-cwl2p"
Mar 13 12:53:54.366804 master-0 kubenswrapper[28149]: I0313 12:53:54.366800 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/18ffa620-dacc-4b09-be04-2c325f860813-serving-cert\") pod \"route-controller-manager-68c48d4f7d-k7drw\" (UID: \"18ffa620-dacc-4b09-be04-2c325f860813\") " pod="openshift-route-controller-manager/route-controller-manager-68c48d4f7d-k7drw"
Mar 13 12:53:54.367109 master-0 kubenswrapper[28149]: I0313 12:53:54.366823 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/be89c006-0c82-4728-9c79-210303e623dc-metrics-client-ca\") pod \"prometheus-operator-5ff8674d55-bvmsj\" (UID: \"be89c006-0c82-4728-9c79-210303e623dc\") " pod="openshift-monitoring/prometheus-operator-5ff8674d55-bvmsj"
Mar 13 12:53:54.367109 master-0 kubenswrapper[28149]: I0313 12:53:54.366855 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/d47a1118-c12f-4234-8c0f-1a2a47fa8a4f-auth-proxy-config\") pod \"machine-config-operator-fdb5c78b5-6g8qj\" (UID: \"d47a1118-c12f-4234-8c0f-1a2a47fa8a4f\") " pod="openshift-machine-config-operator/machine-config-operator-fdb5c78b5-6g8qj"
Mar 13 12:53:54.367109 master-0 kubenswrapper[28149]: I0313 12:53:54.366877 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/50be3c2b-284b-4f60-b4ed-2cc7b4e528fa-mcd-auth-proxy-config\") pod \"machine-config-daemon-5h8rc\" (UID: \"50be3c2b-284b-4f60-b4ed-2cc7b4e528fa\") " pod="openshift-machine-config-operator/machine-config-daemon-5h8rc"
Mar 13 12:53:54.367109 master-0 kubenswrapper[28149]: I0313 12:53:54.367002 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/081a08d6-a4fd-412c-81c3-1364c36f0f15-certs\") pod \"machine-config-server-6crtf\" (UID: \"081a08d6-a4fd-412c-81c3-1364c36f0f15\") " pod="openshift-machine-config-operator/machine-config-server-6crtf"
Mar 13 12:53:54.367109 master-0 kubenswrapper[28149]: I0313 12:53:54.367041 28149 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-6hdw2"
Mar 13 12:53:54.367109 master-0 kubenswrapper[28149]: I0313 12:53:54.367065 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/e25bef76-7020-4f86-8dee-a58ebed537d2-mcc-auth-proxy-config\") pod \"machine-config-controller-ff46b7bdf-kmnlv\" (UID: \"e25bef76-7020-4f86-8dee-a58ebed537d2\") " pod="openshift-machine-config-operator/machine-config-controller-ff46b7bdf-kmnlv"
Mar 13 12:53:54.367109 master-0 kubenswrapper[28149]: I0313 12:53:54.367087 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/317af639-269e-4163-8e24-fcea468b9352-config\") pod \"cluster-baremetal-operator-5cdb4c5598-l6jp5\" (UID: \"317af639-269e-4163-8e24-fcea468b9352\") " pod="openshift-machine-api/cluster-baremetal-operator-5cdb4c5598-l6jp5"
Mar 13 12:53:54.367109 master-0 kubenswrapper[28149]: I0313 12:53:54.367093 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/d47a1118-c12f-4234-8c0f-1a2a47fa8a4f-auth-proxy-config\") pod \"machine-config-operator-fdb5c78b5-6g8qj\" (UID: \"d47a1118-c12f-4234-8c0f-1a2a47fa8a4f\") " pod="openshift-machine-config-operator/machine-config-operator-fdb5c78b5-6g8qj"
Mar 13 12:53:54.367482 master-0 kubenswrapper[28149]: I0313 12:53:54.367370 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/e25bef76-7020-4f86-8dee-a58ebed537d2-mcc-auth-proxy-config\") pod \"machine-config-controller-ff46b7bdf-kmnlv\" (UID: \"e25bef76-7020-4f86-8dee-a58ebed537d2\") " pod="openshift-machine-config-operator/machine-config-controller-ff46b7bdf-kmnlv"
Mar 13 12:53:54.367482 master-0 kubenswrapper[28149]: I0313 12:53:54.367388 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openshift-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/1081e565-b7d8-4b6e-9d41-5db36cfe094c-openshift-state-metrics-tls\") pod \"openshift-state-metrics-74cc79fd76-clrbz\" (UID: \"1081e565-b7d8-4b6e-9d41-5db36cfe094c\") " pod="openshift-monitoring/openshift-state-metrics-74cc79fd76-clrbz"
Mar 13 12:53:54.367482 master-0 kubenswrapper[28149]: I0313 12:53:54.367412 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/317af639-269e-4163-8e24-fcea468b9352-config\") pod \"cluster-baremetal-operator-5cdb4c5598-l6jp5\" (UID: \"317af639-269e-4163-8e24-fcea468b9352\") " pod="openshift-machine-api/cluster-baremetal-operator-5cdb4c5598-l6jp5"
Mar 13 12:53:54.367482 master-0 kubenswrapper[28149]: I0313 12:53:54.367453 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/18ffa620-dacc-4b09-be04-2c325f860813-client-ca\") pod \"route-controller-manager-68c48d4f7d-k7drw\" (UID: \"18ffa620-dacc-4b09-be04-2c325f860813\") " pod="openshift-route-controller-manager/route-controller-manager-68c48d4f7d-k7drw"
Mar 13 12:53:54.367603 master-0 kubenswrapper[28149]: I0313 12:53:54.367500 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/18ffa620-dacc-4b09-be04-2c325f860813-config\") pod \"route-controller-manager-68c48d4f7d-k7drw\" (UID: \"18ffa620-dacc-4b09-be04-2c325f860813\") " pod="openshift-route-controller-manager/route-controller-manager-68c48d4f7d-k7drw"
Mar 13 12:53:54.367603 master-0 kubenswrapper[28149]: I0313 12:53:54.367544 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a454234a-6c8e-4916-81e8-c9e66cec9d31-client-ca\") pod \"controller-manager-54c79cbfcc-cxhmh\" (UID: \"a454234a-6c8e-4916-81e8-c9e66cec9d31\") " pod="openshift-controller-manager/controller-manager-54c79cbfcc-cxhmh"
Mar 13 12:53:54.367735 master-0 kubenswrapper[28149]: I0313 12:53:54.367671 28149 reconciler_common.go:218]
"operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/317af639-269e-4163-8e24-fcea468b9352-images\") pod \"cluster-baremetal-operator-5cdb4c5598-l6jp5\" (UID: \"317af639-269e-4163-8e24-fcea468b9352\") " pod="openshift-machine-api/cluster-baremetal-operator-5cdb4c5598-l6jp5" Mar 13 12:53:54.367897 master-0 kubenswrapper[28149]: I0313 12:53:54.367869 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/e25bef76-7020-4f86-8dee-a58ebed537d2-proxy-tls\") pod \"machine-config-controller-ff46b7bdf-kmnlv\" (UID: \"e25bef76-7020-4f86-8dee-a58ebed537d2\") " pod="openshift-machine-config-operator/machine-config-controller-ff46b7bdf-kmnlv" Mar 13 12:53:54.367943 master-0 kubenswrapper[28149]: I0313 12:53:54.367870 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/317af639-269e-4163-8e24-fcea468b9352-images\") pod \"cluster-baremetal-operator-5cdb4c5598-l6jp5\" (UID: \"317af639-269e-4163-8e24-fcea468b9352\") " pod="openshift-machine-api/cluster-baremetal-operator-5cdb4c5598-l6jp5" Mar 13 12:53:54.367943 master-0 kubenswrapper[28149]: I0313 12:53:54.367914 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-operator-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/be89c006-0c82-4728-9c79-210303e623dc-prometheus-operator-kube-rbac-proxy-config\") pod \"prometheus-operator-5ff8674d55-bvmsj\" (UID: \"be89c006-0c82-4728-9c79-210303e623dc\") " pod="openshift-monitoring/prometheus-operator-5ff8674d55-bvmsj" Mar 13 12:53:54.368001 master-0 kubenswrapper[28149]: I0313 12:53:54.367975 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/b12a6f33-70df-4832-ac3b-0d2b94125fbf-machine-approver-tls\") pod \"machine-approver-754bdc9f9d-cwl2p\" (UID: 
\"b12a6f33-70df-4832-ac3b-0d2b94125fbf\") " pod="openshift-cluster-machine-approver/machine-approver-754bdc9f9d-cwl2p" Mar 13 12:53:54.368045 master-0 kubenswrapper[28149]: I0313 12:53:54.368016 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-certificates\" (UniqueName: \"kubernetes.io/secret/866b0545-e232-4c80-9fb6-549d313ac3fc-tls-certificates\") pod \"prometheus-operator-admission-webhook-8464df8497-pmzkf\" (UID: \"866b0545-e232-4c80-9fb6-549d313ac3fc\") " pod="openshift-monitoring/prometheus-operator-admission-webhook-8464df8497-pmzkf" Mar 13 12:53:54.368045 master-0 kubenswrapper[28149]: I0313 12:53:54.368038 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/d47a1118-c12f-4234-8c0f-1a2a47fa8a4f-images\") pod \"machine-config-operator-fdb5c78b5-6g8qj\" (UID: \"d47a1118-c12f-4234-8c0f-1a2a47fa8a4f\") " pod="openshift-machine-config-operator/machine-config-operator-fdb5c78b5-6g8qj" Mar 13 12:53:54.368125 master-0 kubenswrapper[28149]: I0313 12:53:54.368101 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/f6992fed-b472-4a2d-a376-c5d72aa846d4-apiservice-cert\") pod \"packageserver-5c5f6764b5-96ktp\" (UID: \"f6992fed-b472-4a2d-a376-c5d72aa846d4\") " pod="openshift-operator-lifecycle-manager/packageserver-5c5f6764b5-96ktp" Mar 13 12:53:54.368125 master-0 kubenswrapper[28149]: I0313 12:53:54.368120 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/50a2046b-092b-434c-92a2-579f4462c4fb-service-ca-bundle\") pod \"insights-operator-8f89dfddd-vxk8z\" (UID: \"50a2046b-092b-434c-92a2-579f4462c4fb\") " pod="openshift-insights/insights-operator-8f89dfddd-vxk8z" Mar 13 12:53:54.368235 master-0 kubenswrapper[28149]: I0313 12:53:54.368167 28149 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-state-metrics-custom-resource-state-configmap\" (UniqueName: \"kubernetes.io/configmap/5e4f10ca-6466-4ac0-aeb7-325e40473e04-kube-state-metrics-custom-resource-state-configmap\") pod \"kube-state-metrics-68b88f8cb5-blvhm\" (UID: \"5e4f10ca-6466-4ac0-aeb7-325e40473e04\") " pod="openshift-monitoring/kube-state-metrics-68b88f8cb5-blvhm" Mar 13 12:53:54.368235 master-0 kubenswrapper[28149]: I0313 12:53:54.368228 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloud-controller-manager-operator-tls\" (UniqueName: \"kubernetes.io/secret/00d8a21b-701c-4334-9dda-34c28b417f42-cloud-controller-manager-operator-tls\") pod \"cluster-cloud-controller-manager-operator-7c8df9b496-x2wlg\" (UID: \"00d8a21b-701c-4334-9dda-34c28b417f42\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7c8df9b496-x2wlg" Mar 13 12:53:54.368291 master-0 kubenswrapper[28149]: I0313 12:53:54.368260 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/f6992fed-b472-4a2d-a376-c5d72aa846d4-webhook-cert\") pod \"packageserver-5c5f6764b5-96ktp\" (UID: \"f6992fed-b472-4a2d-a376-c5d72aa846d4\") " pod="openshift-operator-lifecycle-manager/packageserver-5c5f6764b5-96ktp" Mar 13 12:53:54.368291 master-0 kubenswrapper[28149]: I0313 12:53:54.368267 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/d47a1118-c12f-4234-8c0f-1a2a47fa8a4f-images\") pod \"machine-config-operator-fdb5c78b5-6g8qj\" (UID: \"d47a1118-c12f-4234-8c0f-1a2a47fa8a4f\") " pod="openshift-machine-config-operator/machine-config-operator-fdb5c78b5-6g8qj" Mar 13 12:53:54.368291 master-0 kubenswrapper[28149]: I0313 12:53:54.368278 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: 
\"kubernetes.io/configmap/842251bd-238a-44ba-99fc-a356503f5d16-metrics-client-ca\") pod \"node-exporter-v4hdh\" (UID: \"842251bd-238a-44ba-99fc-a356503f5d16\") " pod="openshift-monitoring/node-exporter-v4hdh" Mar 13 12:53:54.368401 master-0 kubenswrapper[28149]: I0313 12:53:54.368297 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/5e4f10ca-6466-4ac0-aeb7-325e40473e04-kube-state-metrics-kube-rbac-proxy-config\") pod \"kube-state-metrics-68b88f8cb5-blvhm\" (UID: \"5e4f10ca-6466-4ac0-aeb7-325e40473e04\") " pod="openshift-monitoring/kube-state-metrics-68b88f8cb5-blvhm" Mar 13 12:53:54.368401 master-0 kubenswrapper[28149]: I0313 12:53:54.368317 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/d47a1118-c12f-4234-8c0f-1a2a47fa8a4f-proxy-tls\") pod \"machine-config-operator-fdb5c78b5-6g8qj\" (UID: \"d47a1118-c12f-4234-8c0f-1a2a47fa8a4f\") " pod="openshift-machine-config-operator/machine-config-operator-fdb5c78b5-6g8qj" Mar 13 12:53:54.368401 master-0 kubenswrapper[28149]: I0313 12:53:54.368377 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/50a2046b-092b-434c-92a2-579f4462c4fb-serving-cert\") pod \"insights-operator-8f89dfddd-vxk8z\" (UID: \"50a2046b-092b-434c-92a2-579f4462c4fb\") " pod="openshift-insights/insights-operator-8f89dfddd-vxk8z" Mar 13 12:53:54.368517 master-0 kubenswrapper[28149]: I0313 12:53:54.368405 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openshift-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/1081e565-b7d8-4b6e-9d41-5db36cfe094c-openshift-state-metrics-kube-rbac-proxy-config\") pod \"openshift-state-metrics-74cc79fd76-clrbz\" (UID: \"1081e565-b7d8-4b6e-9d41-5db36cfe094c\") " 
pod="openshift-monitoring/openshift-state-metrics-74cc79fd76-clrbz" Mar 13 12:53:54.368517 master-0 kubenswrapper[28149]: I0313 12:53:54.368423 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/f6992fed-b472-4a2d-a376-c5d72aa846d4-apiservice-cert\") pod \"packageserver-5c5f6764b5-96ktp\" (UID: \"f6992fed-b472-4a2d-a376-c5d72aa846d4\") " pod="openshift-operator-lifecycle-manager/packageserver-5c5f6764b5-96ktp" Mar 13 12:53:54.368624 master-0 kubenswrapper[28149]: I0313 12:53:54.368513 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/f6992fed-b472-4a2d-a376-c5d72aa846d4-webhook-cert\") pod \"packageserver-5c5f6764b5-96ktp\" (UID: \"f6992fed-b472-4a2d-a376-c5d72aa846d4\") " pod="openshift-operator-lifecycle-manager/packageserver-5c5f6764b5-96ktp" Mar 13 12:53:54.368624 master-0 kubenswrapper[28149]: I0313 12:53:54.368549 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/00d8a21b-701c-4334-9dda-34c28b417f42-images\") pod \"cluster-cloud-controller-manager-operator-7c8df9b496-x2wlg\" (UID: \"00d8a21b-701c-4334-9dda-34c28b417f42\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7c8df9b496-x2wlg" Mar 13 12:53:54.368710 master-0 kubenswrapper[28149]: I0313 12:53:54.368653 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/d47a1118-c12f-4234-8c0f-1a2a47fa8a4f-proxy-tls\") pod \"machine-config-operator-fdb5c78b5-6g8qj\" (UID: \"d47a1118-c12f-4234-8c0f-1a2a47fa8a4f\") " pod="openshift-machine-config-operator/machine-config-operator-fdb5c78b5-6g8qj" Mar 13 12:53:54.368752 master-0 kubenswrapper[28149]: I0313 12:53:54.368714 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/00d8a21b-701c-4334-9dda-34c28b417f42-auth-proxy-config\") pod \"cluster-cloud-controller-manager-operator-7c8df9b496-x2wlg\" (UID: \"00d8a21b-701c-4334-9dda-34c28b417f42\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7c8df9b496-x2wlg" Mar 13 12:53:54.368752 master-0 kubenswrapper[28149]: I0313 12:53:54.368746 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-exporter-tls\" (UniqueName: \"kubernetes.io/secret/842251bd-238a-44ba-99fc-a356503f5d16-node-exporter-tls\") pod \"node-exporter-v4hdh\" (UID: \"842251bd-238a-44ba-99fc-a356503f5d16\") " pod="openshift-monitoring/node-exporter-v4hdh" Mar 13 12:53:54.368834 master-0 kubenswrapper[28149]: I0313 12:53:54.368778 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/81f8a7d8-b6a2-4522-91d3-bb524997ed0a-cert\") pod \"ingress-canary-h8skx\" (UID: \"81f8a7d8-b6a2-4522-91d3-bb524997ed0a\") " pod="openshift-ingress-canary/ingress-canary-h8skx" Mar 13 12:53:54.368877 master-0 kubenswrapper[28149]: I0313 12:53:54.368866 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b12a6f33-70df-4832-ac3b-0d2b94125fbf-config\") pod \"machine-approver-754bdc9f9d-cwl2p\" (UID: \"b12a6f33-70df-4832-ac3b-0d2b94125fbf\") " pod="openshift-cluster-machine-approver/machine-approver-754bdc9f9d-cwl2p" Mar 13 12:53:54.384970 master-0 kubenswrapper[28149]: I0313 12:53:54.384922 28149 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-admission-webhook-dockercfg-lkhsh" Mar 13 12:53:54.406002 master-0 kubenswrapper[28149]: I0313 12:53:54.405931 28149 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-admission-webhook-tls" Mar 13 
12:53:54.408632 master-0 kubenswrapper[28149]: I0313 12:53:54.408586 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-certificates\" (UniqueName: \"kubernetes.io/secret/866b0545-e232-4c80-9fb6-549d313ac3fc-tls-certificates\") pod \"prometheus-operator-admission-webhook-8464df8497-pmzkf\" (UID: \"866b0545-e232-4c80-9fb6-549d313ac3fc\") " pod="openshift-monitoring/prometheus-operator-admission-webhook-8464df8497-pmzkf" Mar 13 12:53:54.426649 master-0 kubenswrapper[28149]: I0313 12:53:54.426580 28149 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cloud-controller-manager-operator"/"cloud-controller-manager-operator-tls" Mar 13 12:53:54.429017 master-0 kubenswrapper[28149]: I0313 12:53:54.428939 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloud-controller-manager-operator-tls\" (UniqueName: \"kubernetes.io/secret/00d8a21b-701c-4334-9dda-34c28b417f42-cloud-controller-manager-operator-tls\") pod \"cluster-cloud-controller-manager-operator-7c8df9b496-x2wlg\" (UID: \"00d8a21b-701c-4334-9dda-34c28b417f42\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7c8df9b496-x2wlg" Mar 13 12:53:54.447369 master-0 kubenswrapper[28149]: I0313 12:53:54.447318 28149 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-controller-manager-operator"/"cloud-controller-manager-images" Mar 13 12:53:54.449591 master-0 kubenswrapper[28149]: I0313 12:53:54.449546 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/00d8a21b-701c-4334-9dda-34c28b417f42-images\") pod \"cluster-cloud-controller-manager-operator-7c8df9b496-x2wlg\" (UID: \"00d8a21b-701c-4334-9dda-34c28b417f42\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7c8df9b496-x2wlg" Mar 13 12:53:54.465785 master-0 kubenswrapper[28149]: I0313 12:53:54.465723 28149 
reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cloud-controller-manager-operator"/"cluster-cloud-controller-manager-dockercfg-zpmf6" Mar 13 12:53:54.486178 master-0 kubenswrapper[28149]: I0313 12:53:54.486108 28149 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-controller-manager-operator"/"kube-root-ca.crt" Mar 13 12:53:54.506288 master-0 kubenswrapper[28149]: I0313 12:53:54.506102 28149 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-controller-manager-operator"/"openshift-service-ca.crt" Mar 13 12:53:54.526226 master-0 kubenswrapper[28149]: I0313 12:53:54.526119 28149 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Mar 13 12:53:54.528850 master-0 kubenswrapper[28149]: I0313 12:53:54.528787 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/b12a6f33-70df-4832-ac3b-0d2b94125fbf-machine-approver-tls\") pod \"machine-approver-754bdc9f9d-cwl2p\" (UID: \"b12a6f33-70df-4832-ac3b-0d2b94125fbf\") " pod="openshift-cluster-machine-approver/machine-approver-754bdc9f9d-cwl2p" Mar 13 12:53:54.571546 master-0 kubenswrapper[28149]: I0313 12:53:54.571493 28149 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-controller-manager-operator"/"kube-rbac-proxy" Mar 13 12:53:54.571830 master-0 kubenswrapper[28149]: I0313 12:53:54.571798 28149 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-vj5mr" Mar 13 12:53:54.572162 master-0 kubenswrapper[28149]: I0313 12:53:54.572117 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/00d8a21b-701c-4334-9dda-34c28b417f42-auth-proxy-config\") pod 
\"cluster-cloud-controller-manager-operator-7c8df9b496-x2wlg\" (UID: \"00d8a21b-701c-4334-9dda-34c28b417f42\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7c8df9b496-x2wlg" Mar 13 12:53:54.585206 master-0 kubenswrapper[28149]: I0313 12:53:54.585156 28149 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Mar 13 12:53:54.587428 master-0 kubenswrapper[28149]: I0313 12:53:54.587388 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/b12a6f33-70df-4832-ac3b-0d2b94125fbf-auth-proxy-config\") pod \"machine-approver-754bdc9f9d-cwl2p\" (UID: \"b12a6f33-70df-4832-ac3b-0d2b94125fbf\") " pod="openshift-cluster-machine-approver/machine-approver-754bdc9f9d-cwl2p" Mar 13 12:53:54.605734 master-0 kubenswrapper[28149]: I0313 12:53:54.605691 28149 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Mar 13 12:53:54.609329 master-0 kubenswrapper[28149]: I0313 12:53:54.609302 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b12a6f33-70df-4832-ac3b-0d2b94125fbf-config\") pod \"machine-approver-754bdc9f9d-cwl2p\" (UID: \"b12a6f33-70df-4832-ac3b-0d2b94125fbf\") " pod="openshift-cluster-machine-approver/machine-approver-754bdc9f9d-cwl2p" Mar 13 12:53:54.625340 master-0 kubenswrapper[28149]: I0313 12:53:54.625287 28149 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Mar 13 12:53:54.645866 master-0 kubenswrapper[28149]: I0313 12:53:54.645824 28149 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Mar 13 12:53:54.665577 master-0 kubenswrapper[28149]: I0313 12:53:54.665518 28149 reflector.go:368] 
Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Mar 13 12:53:54.685812 master-0 kubenswrapper[28149]: I0313 12:53:54.685770 28149 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Mar 13 12:53:54.688547 master-0 kubenswrapper[28149]: I0313 12:53:54.688507 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/e25bef76-7020-4f86-8dee-a58ebed537d2-proxy-tls\") pod \"machine-config-controller-ff46b7bdf-kmnlv\" (UID: \"e25bef76-7020-4f86-8dee-a58ebed537d2\") " pod="openshift-machine-config-operator/machine-config-controller-ff46b7bdf-kmnlv" Mar 13 12:53:54.704669 master-0 kubenswrapper[28149]: I0313 12:53:54.704609 28149 request.go:700] Waited for 2.017531287s due to client-side throttling, not priority and fairness, request: GET:https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-cluster-machine-approver/secrets?fieldSelector=metadata.name%3Dmachine-approver-sa-dockercfg-lpcnm&limit=500&resourceVersion=0 Mar 13 12:53:54.705875 master-0 kubenswrapper[28149]: I0313 12:53:54.705838 28149 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-lpcnm" Mar 13 12:53:54.726349 master-0 kubenswrapper[28149]: I0313 12:53:54.726288 28149 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Mar 13 12:53:54.729724 master-0 kubenswrapper[28149]: I0313 12:53:54.729672 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/81f8a7d8-b6a2-4522-91d3-bb524997ed0a-cert\") pod \"ingress-canary-h8skx\" (UID: \"81f8a7d8-b6a2-4522-91d3-bb524997ed0a\") " pod="openshift-ingress-canary/ingress-canary-h8skx" Mar 13 12:53:54.745612 master-0 kubenswrapper[28149]: I0313 12:53:54.745559 28149 reflector.go:368] Caches 
populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-lz982" Mar 13 12:53:54.766062 master-0 kubenswrapper[28149]: I0313 12:53:54.766019 28149 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Mar 13 12:53:54.787216 master-0 kubenswrapper[28149]: I0313 12:53:54.787106 28149 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Mar 13 12:53:54.787551 master-0 kubenswrapper[28149]: I0313 12:53:54.787522 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/secret/081a08d6-a4fd-412c-81c3-1364c36f0f15-certs\") pod \"machine-config-server-6crtf\" (UID: \"081a08d6-a4fd-412c-81c3-1364c36f0f15\") " pod="openshift-machine-config-operator/machine-config-server-6crtf" Mar 13 12:53:54.806650 master-0 kubenswrapper[28149]: I0313 12:53:54.806605 28149 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-tqljd" Mar 13 12:53:54.826496 master-0 kubenswrapper[28149]: I0313 12:53:54.826450 28149 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Mar 13 12:53:54.836341 master-0 kubenswrapper[28149]: I0313 12:53:54.836299 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/081a08d6-a4fd-412c-81c3-1364c36f0f15-node-bootstrap-token\") pod \"machine-config-server-6crtf\" (UID: \"081a08d6-a4fd-412c-81c3-1364c36f0f15\") " pod="openshift-machine-config-operator/machine-config-server-6crtf" Mar 13 12:53:54.851637 master-0 kubenswrapper[28149]: I0313 12:53:54.851581 28149 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-insights"/"trusted-ca-bundle" Mar 13 12:53:54.857853 master-0 kubenswrapper[28149]: I0313 
12:53:54.857812 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/50a2046b-092b-434c-92a2-579f4462c4fb-trusted-ca-bundle\") pod \"insights-operator-8f89dfddd-vxk8z\" (UID: \"50a2046b-092b-434c-92a2-579f4462c4fb\") " pod="openshift-insights/insights-operator-8f89dfddd-vxk8z" Mar 13 12:53:54.865454 master-0 kubenswrapper[28149]: I0313 12:53:54.865426 28149 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-insights"/"operator-dockercfg-tw2tq" Mar 13 12:53:54.886610 master-0 kubenswrapper[28149]: I0313 12:53:54.886560 28149 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-insights"/"openshift-insights-serving-cert" Mar 13 12:53:54.889716 master-0 kubenswrapper[28149]: I0313 12:53:54.889676 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/50a2046b-092b-434c-92a2-579f4462c4fb-serving-cert\") pod \"insights-operator-8f89dfddd-vxk8z\" (UID: \"50a2046b-092b-434c-92a2-579f4462c4fb\") " pod="openshift-insights/insights-operator-8f89dfddd-vxk8z" Mar 13 12:53:54.905760 master-0 kubenswrapper[28149]: I0313 12:53:54.905719 28149 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-insights"/"openshift-service-ca.crt" Mar 13 12:53:54.926290 master-0 kubenswrapper[28149]: I0313 12:53:54.926226 28149 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-insights"/"service-ca-bundle" Mar 13 12:53:54.929621 master-0 kubenswrapper[28149]: I0313 12:53:54.929578 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/50a2046b-092b-434c-92a2-579f4462c4fb-service-ca-bundle\") pod \"insights-operator-8f89dfddd-vxk8z\" (UID: \"50a2046b-092b-434c-92a2-579f4462c4fb\") " pod="openshift-insights/insights-operator-8f89dfddd-vxk8z" Mar 13 12:53:54.953384 master-0 
kubenswrapper[28149]: I0313 12:53:54.953339 28149 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-insights"/"kube-root-ca.crt" Mar 13 12:53:54.985437 master-0 kubenswrapper[28149]: I0313 12:53:54.985384 28149 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"openshift-state-metrics-tls" Mar 13 12:53:54.988015 master-0 kubenswrapper[28149]: I0313 12:53:54.987972 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openshift-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/1081e565-b7d8-4b6e-9d41-5db36cfe094c-openshift-state-metrics-tls\") pod \"openshift-state-metrics-74cc79fd76-clrbz\" (UID: \"1081e565-b7d8-4b6e-9d41-5db36cfe094c\") " pod="openshift-monitoring/openshift-state-metrics-74cc79fd76-clrbz" Mar 13 12:53:55.006838 master-0 kubenswrapper[28149]: I0313 12:53:55.006769 28149 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"openshift-state-metrics-kube-rbac-proxy-config" Mar 13 12:53:55.009287 master-0 kubenswrapper[28149]: I0313 12:53:55.009233 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openshift-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/1081e565-b7d8-4b6e-9d41-5db36cfe094c-openshift-state-metrics-kube-rbac-proxy-config\") pod \"openshift-state-metrics-74cc79fd76-clrbz\" (UID: \"1081e565-b7d8-4b6e-9d41-5db36cfe094c\") " pod="openshift-monitoring/openshift-state-metrics-74cc79fd76-clrbz" Mar 13 12:53:55.026352 master-0 kubenswrapper[28149]: I0313 12:53:55.026285 28149 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"kube-state-metrics-dockercfg-zhlhv" Mar 13 12:53:55.049154 master-0 kubenswrapper[28149]: I0313 12:53:55.049017 28149 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"kube-state-metrics-tls" Mar 13 12:53:55.056348 master-0 kubenswrapper[28149]: I0313 12:53:55.056304 28149 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/5e4f10ca-6466-4ac0-aeb7-325e40473e04-kube-state-metrics-tls\") pod \"kube-state-metrics-68b88f8cb5-blvhm\" (UID: \"5e4f10ca-6466-4ac0-aeb7-325e40473e04\") " pod="openshift-monitoring/kube-state-metrics-68b88f8cb5-blvhm" Mar 13 12:53:55.065343 master-0 kubenswrapper[28149]: I0313 12:53:55.065289 28149 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"kube-state-metrics-kube-rbac-proxy-config" Mar 13 12:53:55.069296 master-0 kubenswrapper[28149]: I0313 12:53:55.069251 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/5e4f10ca-6466-4ac0-aeb7-325e40473e04-kube-state-metrics-kube-rbac-proxy-config\") pod \"kube-state-metrics-68b88f8cb5-blvhm\" (UID: \"5e4f10ca-6466-4ac0-aeb7-325e40473e04\") " pod="openshift-monitoring/kube-state-metrics-68b88f8cb5-blvhm" Mar 13 12:53:55.086126 master-0 kubenswrapper[28149]: I0313 12:53:55.086072 28149 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"metrics-client-ca" Mar 13 12:53:55.087445 master-0 kubenswrapper[28149]: I0313 12:53:55.087410 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/be89c006-0c82-4728-9c79-210303e623dc-metrics-client-ca\") pod \"prometheus-operator-5ff8674d55-bvmsj\" (UID: \"be89c006-0c82-4728-9c79-210303e623dc\") " pod="openshift-monitoring/prometheus-operator-5ff8674d55-bvmsj" Mar 13 12:53:55.087669 master-0 kubenswrapper[28149]: I0313 12:53:55.087621 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/5e4f10ca-6466-4ac0-aeb7-325e40473e04-metrics-client-ca\") pod \"kube-state-metrics-68b88f8cb5-blvhm\" (UID: 
\"5e4f10ca-6466-4ac0-aeb7-325e40473e04\") " pod="openshift-monitoring/kube-state-metrics-68b88f8cb5-blvhm" Mar 13 12:53:55.088759 master-0 kubenswrapper[28149]: I0313 12:53:55.088728 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/842251bd-238a-44ba-99fc-a356503f5d16-metrics-client-ca\") pod \"node-exporter-v4hdh\" (UID: \"842251bd-238a-44ba-99fc-a356503f5d16\") " pod="openshift-monitoring/node-exporter-v4hdh" Mar 13 12:53:55.096538 master-0 kubenswrapper[28149]: I0313 12:53:55.096495 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/1081e565-b7d8-4b6e-9d41-5db36cfe094c-metrics-client-ca\") pod \"openshift-state-metrics-74cc79fd76-clrbz\" (UID: \"1081e565-b7d8-4b6e-9d41-5db36cfe094c\") " pod="openshift-monitoring/openshift-state-metrics-74cc79fd76-clrbz" Mar 13 12:53:55.106738 master-0 kubenswrapper[28149]: I0313 12:53:55.106696 28149 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-kube-rbac-proxy-config" Mar 13 12:53:55.108316 master-0 kubenswrapper[28149]: I0313 12:53:55.108283 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-operator-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/be89c006-0c82-4728-9c79-210303e623dc-prometheus-operator-kube-rbac-proxy-config\") pod \"prometheus-operator-5ff8674d55-bvmsj\" (UID: \"be89c006-0c82-4728-9c79-210303e623dc\") " pod="openshift-monitoring/prometheus-operator-5ff8674d55-bvmsj" Mar 13 12:53:55.132345 master-0 kubenswrapper[28149]: I0313 12:53:55.132296 28149 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-dockercfg-gsftw" Mar 13 12:53:55.138560 master-0 kubenswrapper[28149]: I0313 12:53:55.138503 28149 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-ingress-operator_ingress-operator-677db989d6-ckl2j_2f79578c-bbfb-4968-893a-730deb4c01f9/ingress-operator/5.log" Mar 13 12:53:55.139242 master-0 kubenswrapper[28149]: I0313 12:53:55.139200 28149 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress-operator_ingress-operator-677db989d6-ckl2j_2f79578c-bbfb-4968-893a-730deb4c01f9/ingress-operator/4.log" Mar 13 12:53:55.139859 master-0 kubenswrapper[28149]: I0313 12:53:55.139814 28149 generic.go:334] "Generic (PLEG): container finished" podID="2f79578c-bbfb-4968-893a-730deb4c01f9" containerID="062296caf4aa99e0b771a3fc7c5b24a99b64a55a1235aefba1f6f98aec258e8a" exitCode=1 Mar 13 12:53:55.145947 master-0 kubenswrapper[28149]: I0313 12:53:55.145909 28149 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-tls" Mar 13 12:53:55.156847 master-0 kubenswrapper[28149]: I0313 12:53:55.156803 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-operator-tls\" (UniqueName: \"kubernetes.io/secret/be89c006-0c82-4728-9c79-210303e623dc-prometheus-operator-tls\") pod \"prometheus-operator-5ff8674d55-bvmsj\" (UID: \"be89c006-0c82-4728-9c79-210303e623dc\") " pod="openshift-monitoring/prometheus-operator-5ff8674d55-bvmsj" Mar 13 12:53:55.165230 master-0 kubenswrapper[28149]: I0313 12:53:55.165185 28149 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Mar 13 12:53:55.186056 master-0 kubenswrapper[28149]: I0313 12:53:55.185972 28149 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Mar 13 12:53:55.187302 master-0 kubenswrapper[28149]: I0313 12:53:55.187272 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a454234a-6c8e-4916-81e8-c9e66cec9d31-serving-cert\") pod \"controller-manager-54c79cbfcc-cxhmh\" (UID: 
\"a454234a-6c8e-4916-81e8-c9e66cec9d31\") " pod="openshift-controller-manager/controller-manager-54c79cbfcc-cxhmh" Mar 13 12:53:55.205781 master-0 kubenswrapper[28149]: I0313 12:53:55.205719 28149 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Mar 13 12:53:55.208990 master-0 kubenswrapper[28149]: I0313 12:53:55.208942 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a454234a-6c8e-4916-81e8-c9e66cec9d31-client-ca\") pod \"controller-manager-54c79cbfcc-cxhmh\" (UID: \"a454234a-6c8e-4916-81e8-c9e66cec9d31\") " pod="openshift-controller-manager/controller-manager-54c79cbfcc-cxhmh" Mar 13 12:53:55.232025 master-0 kubenswrapper[28149]: I0313 12:53:55.231968 28149 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Mar 13 12:53:55.236743 master-0 kubenswrapper[28149]: I0313 12:53:55.236698 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a454234a-6c8e-4916-81e8-c9e66cec9d31-proxy-ca-bundles\") pod \"controller-manager-54c79cbfcc-cxhmh\" (UID: \"a454234a-6c8e-4916-81e8-c9e66cec9d31\") " pod="openshift-controller-manager/controller-manager-54c79cbfcc-cxhmh" Mar 13 12:53:55.245432 master-0 kubenswrapper[28149]: I0313 12:53:55.245390 28149 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-grrfm" Mar 13 12:53:55.266022 master-0 kubenswrapper[28149]: I0313 12:53:55.265978 28149 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Mar 13 12:53:55.285981 master-0 kubenswrapper[28149]: I0313 12:53:55.285916 28149 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Mar 13 12:53:55.287130 
master-0 kubenswrapper[28149]: I0313 12:53:55.287086 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a454234a-6c8e-4916-81e8-c9e66cec9d31-config\") pod \"controller-manager-54c79cbfcc-cxhmh\" (UID: \"a454234a-6c8e-4916-81e8-c9e66cec9d31\") " pod="openshift-controller-manager/controller-manager-54c79cbfcc-cxhmh" Mar 13 12:53:55.333217 master-0 kubenswrapper[28149]: I0313 12:53:55.333103 28149 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Mar 13 12:53:55.333217 master-0 kubenswrapper[28149]: I0313 12:53:55.333190 28149 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Mar 13 12:53:55.337499 master-0 kubenswrapper[28149]: I0313 12:53:55.337450 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/18ffa620-dacc-4b09-be04-2c325f860813-serving-cert\") pod \"route-controller-manager-68c48d4f7d-k7drw\" (UID: \"18ffa620-dacc-4b09-be04-2c325f860813\") " pod="openshift-route-controller-manager/route-controller-manager-68c48d4f7d-k7drw" Mar 13 12:53:55.338712 master-0 kubenswrapper[28149]: I0313 12:53:55.338672 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/18ffa620-dacc-4b09-be04-2c325f860813-client-ca\") pod \"route-controller-manager-68c48d4f7d-k7drw\" (UID: \"18ffa620-dacc-4b09-be04-2c325f860813\") " pod="openshift-route-controller-manager/route-controller-manager-68c48d4f7d-k7drw" Mar 13 12:53:55.345525 master-0 kubenswrapper[28149]: I0313 12:53:55.345478 28149 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Mar 13 12:53:55.348869 master-0 kubenswrapper[28149]: I0313 12:53:55.348829 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"config\" (UniqueName: \"kubernetes.io/configmap/18ffa620-dacc-4b09-be04-2c325f860813-config\") pod \"route-controller-manager-68c48d4f7d-k7drw\" (UID: \"18ffa620-dacc-4b09-be04-2c325f860813\") " pod="openshift-route-controller-manager/route-controller-manager-68c48d4f7d-k7drw" Mar 13 12:53:55.365938 master-0 kubenswrapper[28149]: E0313 12:53:55.365903 28149 secret.go:189] Couldn't get secret openshift-monitoring/metrics-client-certs: failed to sync secret cache: timed out waiting for the condition Mar 13 12:53:55.366097 master-0 kubenswrapper[28149]: E0313 12:53:55.365920 28149 secret.go:189] Couldn't get secret openshift-multus/multus-admission-controller-secret: failed to sync secret cache: timed out waiting for the condition Mar 13 12:53:55.366097 master-0 kubenswrapper[28149]: E0313 12:53:55.365994 28149 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/fc192c03-5aec-4507-a702-56bf98c96e9c-secret-metrics-client-certs podName:fc192c03-5aec-4507-a702-56bf98c96e9c nodeName:}" failed. No retries permitted until 2026-03-13 12:53:56.365971312 +0000 UTC m=+10.019436471 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "secret-metrics-client-certs" (UniqueName: "kubernetes.io/secret/fc192c03-5aec-4507-a702-56bf98c96e9c-secret-metrics-client-certs") pod "metrics-server-567b9cf7f-cxnj2" (UID: "fc192c03-5aec-4507-a702-56bf98c96e9c") : failed to sync secret cache: timed out waiting for the condition Mar 13 12:53:55.366097 master-0 kubenswrapper[28149]: E0313 12:53:55.366030 28149 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/36ad5a83-5c32-4941-94e0-7af86ac5d462-webhook-certs podName:36ad5a83-5c32-4941-94e0-7af86ac5d462 nodeName:}" failed. No retries permitted until 2026-03-13 12:53:56.366006533 +0000 UTC m=+10.019471692 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/36ad5a83-5c32-4941-94e0-7af86ac5d462-webhook-certs") pod "multus-admission-controller-7769569c45-qz88j" (UID: "36ad5a83-5c32-4941-94e0-7af86ac5d462") : failed to sync secret cache: timed out waiting for the condition Mar 13 12:53:55.366295 master-0 kubenswrapper[28149]: E0313 12:53:55.366277 28149 secret.go:189] Couldn't get secret openshift-monitoring/node-exporter-kube-rbac-proxy-config: failed to sync secret cache: timed out waiting for the condition Mar 13 12:53:55.366327 master-0 kubenswrapper[28149]: E0313 12:53:55.366317 28149 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/842251bd-238a-44ba-99fc-a356503f5d16-node-exporter-kube-rbac-proxy-config podName:842251bd-238a-44ba-99fc-a356503f5d16 nodeName:}" failed. No retries permitted until 2026-03-13 12:53:56.366306333 +0000 UTC m=+10.019771562 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "node-exporter-kube-rbac-proxy-config" (UniqueName: "kubernetes.io/secret/842251bd-238a-44ba-99fc-a356503f5d16-node-exporter-kube-rbac-proxy-config") pod "node-exporter-v4hdh" (UID: "842251bd-238a-44ba-99fc-a356503f5d16") : failed to sync secret cache: timed out waiting for the condition Mar 13 12:53:55.366435 master-0 kubenswrapper[28149]: E0313 12:53:55.366411 28149 secret.go:189] Couldn't get secret openshift-monitoring/metrics-server-tls: failed to sync secret cache: timed out waiting for the condition Mar 13 12:53:55.366478 master-0 kubenswrapper[28149]: E0313 12:53:55.366434 28149 configmap.go:193] Couldn't get configMap openshift-monitoring/metrics-server-audit-profiles: failed to sync configmap cache: timed out waiting for the condition Mar 13 12:53:55.366478 master-0 kubenswrapper[28149]: E0313 12:53:55.366448 28149 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/fc192c03-5aec-4507-a702-56bf98c96e9c-secret-metrics-server-tls 
podName:fc192c03-5aec-4507-a702-56bf98c96e9c nodeName:}" failed. No retries permitted until 2026-03-13 12:53:56.366440727 +0000 UTC m=+10.019905886 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "secret-metrics-server-tls" (UniqueName: "kubernetes.io/secret/fc192c03-5aec-4507-a702-56bf98c96e9c-secret-metrics-server-tls") pod "metrics-server-567b9cf7f-cxnj2" (UID: "fc192c03-5aec-4507-a702-56bf98c96e9c") : failed to sync secret cache: timed out waiting for the condition Mar 13 12:53:55.366478 master-0 kubenswrapper[28149]: E0313 12:53:55.366466 28149 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/fc192c03-5aec-4507-a702-56bf98c96e9c-metrics-server-audit-profiles podName:fc192c03-5aec-4507-a702-56bf98c96e9c nodeName:}" failed. No retries permitted until 2026-03-13 12:53:56.366457977 +0000 UTC m=+10.019923136 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "metrics-server-audit-profiles" (UniqueName: "kubernetes.io/configmap/fc192c03-5aec-4507-a702-56bf98c96e9c-metrics-server-audit-profiles") pod "metrics-server-567b9cf7f-cxnj2" (UID: "fc192c03-5aec-4507-a702-56bf98c96e9c") : failed to sync configmap cache: timed out waiting for the condition Mar 13 12:53:55.366721 master-0 kubenswrapper[28149]: E0313 12:53:55.366689 28149 configmap.go:193] Couldn't get configMap openshift-monitoring/kubelet-serving-ca-bundle: failed to sync configmap cache: timed out waiting for the condition Mar 13 12:53:55.366783 master-0 kubenswrapper[28149]: E0313 12:53:55.366768 28149 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/fc192c03-5aec-4507-a702-56bf98c96e9c-configmap-kubelet-serving-ca-bundle podName:fc192c03-5aec-4507-a702-56bf98c96e9c nodeName:}" failed. No retries permitted until 2026-03-13 12:53:56.366732095 +0000 UTC m=+10.020197244 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "configmap-kubelet-serving-ca-bundle" (UniqueName: "kubernetes.io/configmap/fc192c03-5aec-4507-a702-56bf98c96e9c-configmap-kubelet-serving-ca-bundle") pod "metrics-server-567b9cf7f-cxnj2" (UID: "fc192c03-5aec-4507-a702-56bf98c96e9c") : failed to sync configmap cache: timed out waiting for the condition Mar 13 12:53:55.366830 master-0 kubenswrapper[28149]: E0313 12:53:55.366803 28149 secret.go:189] Couldn't get secret openshift-monitoring/metrics-server-a1r15je3eljsi: failed to sync secret cache: timed out waiting for the condition Mar 13 12:53:55.366830 master-0 kubenswrapper[28149]: I0313 12:53:55.366802 28149 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Mar 13 12:53:55.366887 master-0 kubenswrapper[28149]: E0313 12:53:55.366848 28149 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/fc192c03-5aec-4507-a702-56bf98c96e9c-client-ca-bundle podName:fc192c03-5aec-4507-a702-56bf98c96e9c nodeName:}" failed. No retries permitted until 2026-03-13 12:53:56.366838318 +0000 UTC m=+10.020303547 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "client-ca-bundle" (UniqueName: "kubernetes.io/secret/fc192c03-5aec-4507-a702-56bf98c96e9c-client-ca-bundle") pod "metrics-server-567b9cf7f-cxnj2" (UID: "fc192c03-5aec-4507-a702-56bf98c96e9c") : failed to sync secret cache: timed out waiting for the condition Mar 13 12:53:55.369301 master-0 kubenswrapper[28149]: E0313 12:53:55.369271 28149 secret.go:189] Couldn't get secret openshift-monitoring/node-exporter-tls: failed to sync secret cache: timed out waiting for the condition Mar 13 12:53:55.369360 master-0 kubenswrapper[28149]: E0313 12:53:55.369323 28149 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/842251bd-238a-44ba-99fc-a356503f5d16-node-exporter-tls podName:842251bd-238a-44ba-99fc-a356503f5d16 nodeName:}" failed. 
No retries permitted until 2026-03-13 12:53:56.369312029 +0000 UTC m=+10.022777188 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "node-exporter-tls" (UniqueName: "kubernetes.io/secret/842251bd-238a-44ba-99fc-a356503f5d16-node-exporter-tls") pod "node-exporter-v4hdh" (UID: "842251bd-238a-44ba-99fc-a356503f5d16") : failed to sync secret cache: timed out waiting for the condition Mar 13 12:53:55.369414 master-0 kubenswrapper[28149]: E0313 12:53:55.369372 28149 configmap.go:193] Couldn't get configMap openshift-monitoring/kube-state-metrics-custom-resource-state-configmap: failed to sync configmap cache: timed out waiting for the condition Mar 13 12:53:55.369479 master-0 kubenswrapper[28149]: E0313 12:53:55.369460 28149 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5e4f10ca-6466-4ac0-aeb7-325e40473e04-kube-state-metrics-custom-resource-state-configmap podName:5e4f10ca-6466-4ac0-aeb7-325e40473e04 nodeName:}" failed. No retries permitted until 2026-03-13 12:53:56.369427813 +0000 UTC m=+10.022892982 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-state-metrics-custom-resource-state-configmap" (UniqueName: "kubernetes.io/configmap/5e4f10ca-6466-4ac0-aeb7-325e40473e04-kube-state-metrics-custom-resource-state-configmap") pod "kube-state-metrics-68b88f8cb5-blvhm" (UID: "5e4f10ca-6466-4ac0-aeb7-325e40473e04") : failed to sync configmap cache: timed out waiting for the condition Mar 13 12:53:55.385863 master-0 kubenswrapper[28149]: I0313 12:53:55.385814 28149 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-sk2p7" Mar 13 12:53:55.406296 master-0 kubenswrapper[28149]: I0313 12:53:55.406246 28149 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Mar 13 12:53:55.426476 master-0 kubenswrapper[28149]: I0313 12:53:55.426216 28149 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"metrics-server-audit-profiles" Mar 13 12:53:55.446799 master-0 kubenswrapper[28149]: I0313 12:53:55.446716 28149 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"metrics-server-tls" Mar 13 12:53:55.466048 master-0 kubenswrapper[28149]: I0313 12:53:55.465989 28149 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"metrics-server-a1r15je3eljsi" Mar 13 12:53:55.486037 master-0 kubenswrapper[28149]: I0313 12:53:55.485972 28149 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"metrics-server-dockercfg-lt6vb" Mar 13 12:53:55.505985 master-0 kubenswrapper[28149]: I0313 12:53:55.505923 28149 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"metrics-client-certs" Mar 13 12:53:55.525750 master-0 kubenswrapper[28149]: I0313 12:53:55.525691 28149 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-monitoring"/"node-exporter-kube-rbac-proxy-config" Mar 13 12:53:55.545915 master-0 kubenswrapper[28149]: I0313 12:53:55.545848 28149 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"kubelet-serving-ca-bundle" Mar 13 12:53:55.565781 master-0 kubenswrapper[28149]: I0313 12:53:55.565718 28149 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"node-exporter-dockercfg-7ntw6" Mar 13 12:53:55.585889 master-0 kubenswrapper[28149]: I0313 12:53:55.585773 28149 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"node-exporter-tls" Mar 13 12:53:55.607013 master-0 kubenswrapper[28149]: I0313 12:53:55.606939 28149 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"kube-state-metrics-custom-resource-state-configmap" Mar 13 12:53:55.626731 master-0 kubenswrapper[28149]: I0313 12:53:55.626631 28149 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Mar 13 12:53:55.646081 master-0 kubenswrapper[28149]: I0313 12:53:55.646033 28149 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-b9zx6" Mar 13 12:53:55.666107 master-0 kubenswrapper[28149]: I0313 12:53:55.666062 28149 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"openshift-state-metrics-dockercfg-g2ksc" Mar 13 12:53:55.699908 master-0 kubenswrapper[28149]: I0313 12:53:55.699740 28149 kubelet_pods.go:1320] "Clean up containers for orphaned pod we had not seen before" podUID="5f77c8e18b751d90bc0dfe2d4e304050" killPodOptions="" Mar 13 12:53:55.700273 master-0 kubenswrapper[28149]: E0313 12:53:55.700247 28149 kubelet.go:2526] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="3.005s" Mar 13 12:53:55.704666 master-0 kubenswrapper[28149]: I0313 12:53:55.704627 28149 
request.go:700] Waited for 2.903784439s due to client-side throttling, not priority and fairness, request: POST:https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-marketplace/serviceaccounts/redhat-operators/token Mar 13 12:53:55.710769 master-0 kubenswrapper[28149]: I0313 12:53:55.710686 28149 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5f77c8e18b751d90bc0dfe2d4e304050" path="/var/lib/kubelet/pods/5f77c8e18b751d90bc0dfe2d4e304050/volumes" Mar 13 12:53:55.711228 master-0 kubenswrapper[28149]: I0313 12:53:55.711167 28149 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" podUID="" Mar 13 12:53:55.721130 master-0 kubenswrapper[28149]: I0313 12:53:55.721094 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6x492\" (UniqueName: \"kubernetes.io/projected/32fe77f9-082d-491c-b3d0-9c10feaf4a8e-kube-api-access-6x492\") pod \"redhat-operators-5czx2\" (UID: \"32fe77f9-082d-491c-b3d0-9c10feaf4a8e\") " pod="openshift-marketplace/redhat-operators-5czx2" Mar 13 12:53:55.737346 master-0 kubenswrapper[28149]: I0313 12:53:55.737281 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7rkc4\" (UniqueName: \"kubernetes.io/projected/00ebdf06-1f44-40cd-87e5-54195188b6d4-kube-api-access-7rkc4\") pod \"catalogd-controller-manager-7f8b8b6f4c-8fjzg\" (UID: \"00ebdf06-1f44-40cd-87e5-54195188b6d4\") " pod="openshift-catalogd/catalogd-controller-manager-7f8b8b6f4c-8fjzg" Mar 13 12:53:55.757657 master-0 kubenswrapper[28149]: I0313 12:53:55.757608 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rvrc7\" (UniqueName: \"kubernetes.io/projected/d39ee5d7-840e-4481-b0b9-baf34da2c7b1-kube-api-access-rvrc7\") pod \"cluster-samples-operator-664cb58b85-m5499\" (UID: \"d39ee5d7-840e-4481-b0b9-baf34da2c7b1\") " 
pod="openshift-cluster-samples-operator/cluster-samples-operator-664cb58b85-m5499" Mar 13 12:53:55.778357 master-0 kubenswrapper[28149]: I0313 12:53:55.778299 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4n75n\" (UniqueName: \"kubernetes.io/projected/f6992fed-b472-4a2d-a376-c5d72aa846d4-kube-api-access-4n75n\") pod \"packageserver-5c5f6764b5-96ktp\" (UID: \"f6992fed-b472-4a2d-a376-c5d72aa846d4\") " pod="openshift-operator-lifecycle-manager/packageserver-5c5f6764b5-96ktp" Mar 13 12:53:55.797414 master-0 kubenswrapper[28149]: I0313 12:53:55.797358 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9p9dz\" (UniqueName: \"kubernetes.io/projected/b12a6f33-70df-4832-ac3b-0d2b94125fbf-kube-api-access-9p9dz\") pod \"machine-approver-754bdc9f9d-cwl2p\" (UID: \"b12a6f33-70df-4832-ac3b-0d2b94125fbf\") " pod="openshift-cluster-machine-approver/machine-approver-754bdc9f9d-cwl2p" Mar 13 12:53:55.819592 master-0 kubenswrapper[28149]: I0313 12:53:55.819526 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rw27v\" (UniqueName: \"kubernetes.io/projected/d5f63b6b-990a-444b-a954-d718036f2f6c-kube-api-access-rw27v\") pod \"machine-api-operator-84bf6db4f9-mjxcz\" (UID: \"d5f63b6b-990a-444b-a954-d718036f2f6c\") " pod="openshift-machine-api/machine-api-operator-84bf6db4f9-mjxcz" Mar 13 12:53:55.837075 master-0 kubenswrapper[28149]: I0313 12:53:55.836946 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c24hd\" (UniqueName: \"kubernetes.io/projected/3020d236-03e0-4916-97dd-f1085632ca43-kube-api-access-c24hd\") pod \"cluster-node-tuning-operator-66c7586884-cz8pc\" (UID: \"3020d236-03e0-4916-97dd-f1085632ca43\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-66c7586884-cz8pc" Mar 13 12:53:55.858033 master-0 kubenswrapper[28149]: I0313 12:53:55.857989 28149 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-j4hd6\" (UniqueName: \"kubernetes.io/projected/bcf05594-4c10-4b54-a47c-d55e323f1f87-kube-api-access-j4hd6\") pod \"cluster-image-registry-operator-86d6d77c7c-q287n\" (UID: \"bcf05594-4c10-4b54-a47c-d55e323f1f87\") " pod="openshift-image-registry/cluster-image-registry-operator-86d6d77c7c-q287n" Mar 13 12:53:55.878014 master-0 kubenswrapper[28149]: I0313 12:53:55.877948 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gd6q6\" (UniqueName: \"kubernetes.io/projected/81f8a7d8-b6a2-4522-91d3-bb524997ed0a-kube-api-access-gd6q6\") pod \"ingress-canary-h8skx\" (UID: \"81f8a7d8-b6a2-4522-91d3-bb524997ed0a\") " pod="openshift-ingress-canary/ingress-canary-h8skx" Mar 13 12:53:55.899969 master-0 kubenswrapper[28149]: I0313 12:53:55.899904 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m4tnq\" (UniqueName: \"kubernetes.io/projected/d11f8baa-6e8e-4ac0-9b23-1c44efd0ab2a-kube-api-access-m4tnq\") pod \"authentication-operator-7c6989d6c4-tc4ht\" (UID: \"d11f8baa-6e8e-4ac0-9b23-1c44efd0ab2a\") " pod="openshift-authentication-operator/authentication-operator-7c6989d6c4-tc4ht" Mar 13 12:53:55.935470 master-0 kubenswrapper[28149]: I0313 12:53:55.935391 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9q2qc\" (UniqueName: \"kubernetes.io/projected/f5775266-5e58-44ed-81cb-dfe3faf38add-kube-api-access-9q2qc\") pod \"kube-storage-version-migrator-operator-7f65c457f5-hrm82\" (UID: \"f5775266-5e58-44ed-81cb-dfe3faf38add\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-7f65c457f5-hrm82" Mar 13 12:53:55.938977 master-0 kubenswrapper[28149]: I0313 12:53:55.938914 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9kxx9\" (UniqueName: \"kubernetes.io/projected/1cf388b6-e4a7-41db-a350-1b503214efd3-kube-api-access-9kxx9\") 
pod \"certified-operators-p9csk\" (UID: \"1cf388b6-e4a7-41db-a350-1b503214efd3\") " pod="openshift-marketplace/certified-operators-p9csk" Mar 13 12:53:55.957874 master-0 kubenswrapper[28149]: I0313 12:53:55.957803 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tll9d\" (UniqueName: \"kubernetes.io/projected/45925a5e-41ae-4c19-b586-3151c7677612-kube-api-access-tll9d\") pod \"router-default-79f8cd6fdd-wtf6j\" (UID: \"45925a5e-41ae-4c19-b586-3151c7677612\") " pod="openshift-ingress/router-default-79f8cd6fdd-wtf6j" Mar 13 12:53:55.976596 master-0 kubenswrapper[28149]: I0313 12:53:55.976539 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mddhv\" (UniqueName: \"kubernetes.io/projected/87a5904a-55ca-416f-8aec-57a2b5194c5a-kube-api-access-mddhv\") pod \"cloud-credential-operator-55d85b7b47-rvp8c\" (UID: \"87a5904a-55ca-416f-8aec-57a2b5194c5a\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-55d85b7b47-rvp8c" Mar 13 12:53:55.996855 master-0 kubenswrapper[28149]: I0313 12:53:55.996792 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mnpds\" (UniqueName: \"kubernetes.io/projected/50a2046b-092b-434c-92a2-579f4462c4fb-kube-api-access-mnpds\") pod \"insights-operator-8f89dfddd-vxk8z\" (UID: \"50a2046b-092b-434c-92a2-579f4462c4fb\") " pod="openshift-insights/insights-operator-8f89dfddd-vxk8z" Mar 13 12:53:56.017600 master-0 kubenswrapper[28149]: I0313 12:53:56.017527 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-btf8q\" (UniqueName: \"kubernetes.io/projected/269aedfd-4274-4998-bd0d-603b67257666-kube-api-access-btf8q\") pod \"network-check-target-pnwsc\" (UID: \"269aedfd-4274-4998-bd0d-603b67257666\") " pod="openshift-network-diagnostics/network-check-target-pnwsc" Mar 13 12:53:56.048897 master-0 kubenswrapper[28149]: I0313 12:53:56.048774 28149 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-bwjz5\" (UniqueName: \"kubernetes.io/projected/4e279dcc-35e2-4503-babc-978ac208c150-kube-api-access-bwjz5\") pod \"csi-snapshot-controller-operator-5685fbc7d-97wkd\" (UID: \"4e279dcc-35e2-4503-babc-978ac208c150\") " pod="openshift-cluster-storage-operator/csi-snapshot-controller-operator-5685fbc7d-97wkd" Mar 13 12:53:56.057935 master-0 kubenswrapper[28149]: I0313 12:53:56.057878 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mz927\" (UniqueName: \"kubernetes.io/projected/081a08d6-a4fd-412c-81c3-1364c36f0f15-kube-api-access-mz927\") pod \"machine-config-server-6crtf\" (UID: \"081a08d6-a4fd-412c-81c3-1364c36f0f15\") " pod="openshift-machine-config-operator/machine-config-server-6crtf" Mar 13 12:53:56.078488 master-0 kubenswrapper[28149]: I0313 12:53:56.078431 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7mmbc\" (UniqueName: \"kubernetes.io/projected/6a42098e-4633-456f-ace7-bd3ee3bb6707-kube-api-access-7mmbc\") pod \"network-check-source-7c67b67d47-5bb88\" (UID: \"6a42098e-4633-456f-ace7-bd3ee3bb6707\") " pod="openshift-network-diagnostics/network-check-source-7c67b67d47-5bb88" Mar 13 12:53:56.097252 master-0 kubenswrapper[28149]: I0313 12:53:56.097111 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vpfv9\" (UniqueName: \"kubernetes.io/projected/13710582-eac3-42e5-b28a-8b4fd3030af2-kube-api-access-vpfv9\") pod \"node-resolver-xpz47\" (UID: \"13710582-eac3-42e5-b28a-8b4fd3030af2\") " pod="openshift-dns/node-resolver-xpz47" Mar 13 12:53:56.120490 master-0 kubenswrapper[28149]: I0313 12:53:56.120420 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p6h9f\" (UniqueName: \"kubernetes.io/projected/f83e0d3e-1f73-4727-8ee3-375cbb9e36f8-kube-api-access-p6h9f\") pod \"tuned-6tlzf\" (UID: \"f83e0d3e-1f73-4727-8ee3-375cbb9e36f8\") " 
pod="openshift-cluster-node-tuning-operator/tuned-6tlzf" Mar 13 12:53:56.139024 master-0 kubenswrapper[28149]: I0313 12:53:56.138940 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cscql\" (UniqueName: \"kubernetes.io/projected/e0ce4c51-2b9f-410f-93e5-9c2ff718dd71-kube-api-access-cscql\") pod \"redhat-marketplace-zh888\" (UID: \"e0ce4c51-2b9f-410f-93e5-9c2ff718dd71\") " pod="openshift-marketplace/redhat-marketplace-zh888" Mar 13 12:53:56.158366 master-0 kubenswrapper[28149]: I0313 12:53:56.158299 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cbtjs\" (UniqueName: \"kubernetes.io/projected/29b6aa89-0416-4595-9deb-10b290521d86-kube-api-access-cbtjs\") pod \"network-metrics-daemon-r9lmb\" (UID: \"29b6aa89-0416-4595-9deb-10b290521d86\") " pod="openshift-multus/network-metrics-daemon-r9lmb" Mar 13 12:53:56.178483 master-0 kubenswrapper[28149]: I0313 12:53:56.178419 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mkvfp\" (UniqueName: \"kubernetes.io/projected/d47a1118-c12f-4234-8c0f-1a2a47fa8a4f-kube-api-access-mkvfp\") pod \"machine-config-operator-fdb5c78b5-6g8qj\" (UID: \"d47a1118-c12f-4234-8c0f-1a2a47fa8a4f\") " pod="openshift-machine-config-operator/machine-config-operator-fdb5c78b5-6g8qj" Mar 13 12:53:56.197567 master-0 kubenswrapper[28149]: I0313 12:53:56.197523 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xstz5\" (UniqueName: \"kubernetes.io/projected/08e2bc8e-ca80-454c-81dc-211d122e32e0-kube-api-access-xstz5\") pod \"iptables-alerter-qz6pg\" (UID: \"08e2bc8e-ca80-454c-81dc-211d122e32e0\") " pod="openshift-network-operator/iptables-alerter-qz6pg" Mar 13 12:53:56.218605 master-0 kubenswrapper[28149]: I0313 12:53:56.218558 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4xbrx\" (UniqueName: 
\"kubernetes.io/projected/5e4f10ca-6466-4ac0-aeb7-325e40473e04-kube-api-access-4xbrx\") pod \"kube-state-metrics-68b88f8cb5-blvhm\" (UID: \"5e4f10ca-6466-4ac0-aeb7-325e40473e04\") " pod="openshift-monitoring/kube-state-metrics-68b88f8cb5-blvhm"
Mar 13 12:53:56.239726 master-0 kubenswrapper[28149]: I0313 12:53:56.239667 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d4q4x\" (UniqueName: \"kubernetes.io/projected/c4477be6-bcff-407a-8033-b005e19bf5d6-kube-api-access-d4q4x\") pod \"apiserver-787dbf5bb9-5645n\" (UID: \"c4477be6-bcff-407a-8033-b005e19bf5d6\") " pod="openshift-oauth-apiserver/apiserver-787dbf5bb9-5645n"
Mar 13 12:53:56.257160 master-0 kubenswrapper[28149]: I0313 12:53:56.257089 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tr9gm\" (UniqueName: \"kubernetes.io/projected/4f9e6618-62b5-4181-b545-211461811140-kube-api-access-tr9gm\") pod \"community-operators-9x9vk\" (UID: \"4f9e6618-62b5-4181-b545-211461811140\") " pod="openshift-marketplace/community-operators-9x9vk"
Mar 13 12:53:56.277794 master-0 kubenswrapper[28149]: I0313 12:53:56.277746 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m2p67\" (UniqueName: \"kubernetes.io/projected/13f32761-b386-4f93-b3c0-b16ea53d338a-kube-api-access-m2p67\") pod \"dns-operator-589895fbb7-mmwk7\" (UID: \"13f32761-b386-4f93-b3c0-b16ea53d338a\") " pod="openshift-dns-operator/dns-operator-589895fbb7-mmwk7"
Mar 13 12:53:56.298073 master-0 kubenswrapper[28149]: I0313 12:53:56.297998 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vg8tz\" (UniqueName: \"kubernetes.io/projected/089cfabc-9d3d-4260-bb16-8b5eaf73b3fa-kube-api-access-vg8tz\") pod \"openshift-apiserver-operator-799b6db4d7-xchrj\" (UID: \"089cfabc-9d3d-4260-bb16-8b5eaf73b3fa\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-799b6db4d7-xchrj"
Mar 13 12:53:56.316870 master-0 kubenswrapper[28149]: I0313 12:53:56.316789 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c69h2\" (UniqueName: \"kubernetes.io/projected/fc192c03-5aec-4507-a702-56bf98c96e9c-kube-api-access-c69h2\") pod \"metrics-server-567b9cf7f-cxnj2\" (UID: \"fc192c03-5aec-4507-a702-56bf98c96e9c\") " pod="openshift-monitoring/metrics-server-567b9cf7f-cxnj2"
Mar 13 12:53:56.337336 master-0 kubenswrapper[28149]: I0313 12:53:56.337249 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8sk7j\" (UniqueName: \"kubernetes.io/projected/604456a0-4997-43bc-87ef-283a002111fe-kube-api-access-8sk7j\") pod \"cluster-monitoring-operator-674cbfbd9d-zwtdz\" (UID: \"604456a0-4997-43bc-87ef-283a002111fe\") " pod="openshift-monitoring/cluster-monitoring-operator-674cbfbd9d-zwtdz"
Mar 13 12:53:56.357658 master-0 kubenswrapper[28149]: I0313 12:53:56.357542 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5w5r2\" (UniqueName: \"kubernetes.io/projected/034aaf8e-95df-4171-bae4-e7abe58d15f7-kube-api-access-5w5r2\") pod \"service-ca-operator-69b6fc6b88-vmscz\" (UID: \"034aaf8e-95df-4171-bae4-e7abe58d15f7\") " pod="openshift-service-ca-operator/service-ca-operator-69b6fc6b88-vmscz"
Mar 13 12:53:56.377357 master-0 kubenswrapper[28149]: I0313 12:53:56.377283 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/2f79578c-bbfb-4968-893a-730deb4c01f9-bound-sa-token\") pod \"ingress-operator-677db989d6-ckl2j\" (UID: \"2f79578c-bbfb-4968-893a-730deb4c01f9\") " pod="openshift-ingress-operator/ingress-operator-677db989d6-ckl2j"
Mar 13 12:53:56.403974 master-0 kubenswrapper[28149]: I0313 12:53:56.403899 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-custom-resource-state-configmap\" (UniqueName: \"kubernetes.io/configmap/5e4f10ca-6466-4ac0-aeb7-325e40473e04-kube-state-metrics-custom-resource-state-configmap\") pod \"kube-state-metrics-68b88f8cb5-blvhm\" (UID: \"5e4f10ca-6466-4ac0-aeb7-325e40473e04\") " pod="openshift-monitoring/kube-state-metrics-68b88f8cb5-blvhm"
Mar 13 12:53:56.403974 master-0 kubenswrapper[28149]: I0313 12:53:56.403968 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-exporter-tls\" (UniqueName: \"kubernetes.io/secret/842251bd-238a-44ba-99fc-a356503f5d16-node-exporter-tls\") pod \"node-exporter-v4hdh\" (UID: \"842251bd-238a-44ba-99fc-a356503f5d16\") " pod="openshift-monitoring/node-exporter-v4hdh"
Mar 13 12:53:56.404345 master-0 kubenswrapper[28149]: I0313 12:53:56.403996 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/36ad5a83-5c32-4941-94e0-7af86ac5d462-webhook-certs\") pod \"multus-admission-controller-7769569c45-qz88j\" (UID: \"36ad5a83-5c32-4941-94e0-7af86ac5d462\") " pod="openshift-multus/multus-admission-controller-7769569c45-qz88j"
Mar 13 12:53:56.404345 master-0 kubenswrapper[28149]: I0313 12:53:56.404062 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/fc192c03-5aec-4507-a702-56bf98c96e9c-secret-metrics-client-certs\") pod \"metrics-server-567b9cf7f-cxnj2\" (UID: \"fc192c03-5aec-4507-a702-56bf98c96e9c\") " pod="openshift-monitoring/metrics-server-567b9cf7f-cxnj2"
Mar 13 12:53:56.404345 master-0 kubenswrapper[28149]: I0313 12:53:56.404087 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-server-audit-profiles\" (UniqueName: \"kubernetes.io/configmap/fc192c03-5aec-4507-a702-56bf98c96e9c-metrics-server-audit-profiles\") pod \"metrics-server-567b9cf7f-cxnj2\" (UID: \"fc192c03-5aec-4507-a702-56bf98c96e9c\") " pod="openshift-monitoring/metrics-server-567b9cf7f-cxnj2"
Mar 13 12:53:56.404345 master-0 kubenswrapper[28149]: I0313 12:53:56.404104 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-exporter-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/842251bd-238a-44ba-99fc-a356503f5d16-node-exporter-kube-rbac-proxy-config\") pod \"node-exporter-v4hdh\" (UID: \"842251bd-238a-44ba-99fc-a356503f5d16\") " pod="openshift-monitoring/node-exporter-v4hdh"
Mar 13 12:53:56.404345 master-0 kubenswrapper[28149]: I0313 12:53:56.404125 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-metrics-server-tls\" (UniqueName: \"kubernetes.io/secret/fc192c03-5aec-4507-a702-56bf98c96e9c-secret-metrics-server-tls\") pod \"metrics-server-567b9cf7f-cxnj2\" (UID: \"fc192c03-5aec-4507-a702-56bf98c96e9c\") " pod="openshift-monitoring/metrics-server-567b9cf7f-cxnj2"
Mar 13 12:53:56.404345 master-0 kubenswrapper[28149]: I0313 12:53:56.404165 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fc192c03-5aec-4507-a702-56bf98c96e9c-configmap-kubelet-serving-ca-bundle\") pod \"metrics-server-567b9cf7f-cxnj2\" (UID: \"fc192c03-5aec-4507-a702-56bf98c96e9c\") " pod="openshift-monitoring/metrics-server-567b9cf7f-cxnj2"
Mar 13 12:53:56.404345 master-0 kubenswrapper[28149]: I0313 12:53:56.404191 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fc192c03-5aec-4507-a702-56bf98c96e9c-client-ca-bundle\") pod \"metrics-server-567b9cf7f-cxnj2\" (UID: \"fc192c03-5aec-4507-a702-56bf98c96e9c\") " pod="openshift-monitoring/metrics-server-567b9cf7f-cxnj2"
Mar 13 12:53:56.404731 master-0 kubenswrapper[28149]: I0313 12:53:56.404677 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fc192c03-5aec-4507-a702-56bf98c96e9c-client-ca-bundle\") pod \"metrics-server-567b9cf7f-cxnj2\" (UID: \"fc192c03-5aec-4507-a702-56bf98c96e9c\") " pod="openshift-monitoring/metrics-server-567b9cf7f-cxnj2"
Mar 13 12:53:56.404940 master-0 kubenswrapper[28149]: I0313 12:53:56.404933 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-server-audit-profiles\" (UniqueName: \"kubernetes.io/configmap/fc192c03-5aec-4507-a702-56bf98c96e9c-metrics-server-audit-profiles\") pod \"metrics-server-567b9cf7f-cxnj2\" (UID: \"fc192c03-5aec-4507-a702-56bf98c96e9c\") " pod="openshift-monitoring/metrics-server-567b9cf7f-cxnj2"
Mar 13 12:53:56.404992 master-0 kubenswrapper[28149]: I0313 12:53:56.404945 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/fc192c03-5aec-4507-a702-56bf98c96e9c-secret-metrics-client-certs\") pod \"metrics-server-567b9cf7f-cxnj2\" (UID: \"fc192c03-5aec-4507-a702-56bf98c96e9c\") " pod="openshift-monitoring/metrics-server-567b9cf7f-cxnj2"
Mar 13 12:53:56.405103 master-0 kubenswrapper[28149]: I0313 12:53:56.405079 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-exporter-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/842251bd-238a-44ba-99fc-a356503f5d16-node-exporter-kube-rbac-proxy-config\") pod \"node-exporter-v4hdh\" (UID: \"842251bd-238a-44ba-99fc-a356503f5d16\") " pod="openshift-monitoring/node-exporter-v4hdh"
Mar 13 12:53:56.405580 master-0 kubenswrapper[28149]: I0313 12:53:56.405309 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-metrics-server-tls\" (UniqueName: \"kubernetes.io/secret/fc192c03-5aec-4507-a702-56bf98c96e9c-secret-metrics-server-tls\") pod \"metrics-server-567b9cf7f-cxnj2\" (UID: \"fc192c03-5aec-4507-a702-56bf98c96e9c\") " pod="openshift-monitoring/metrics-server-567b9cf7f-cxnj2"
Mar 13 12:53:56.405580 master-0 kubenswrapper[28149]: I0313 12:53:56.405305 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-custom-resource-state-configmap\" (UniqueName: \"kubernetes.io/configmap/5e4f10ca-6466-4ac0-aeb7-325e40473e04-kube-state-metrics-custom-resource-state-configmap\") pod \"kube-state-metrics-68b88f8cb5-blvhm\" (UID: \"5e4f10ca-6466-4ac0-aeb7-325e40473e04\") " pod="openshift-monitoring/kube-state-metrics-68b88f8cb5-blvhm"
Mar 13 12:53:56.405580 master-0 kubenswrapper[28149]: I0313 12:53:56.405471 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fc192c03-5aec-4507-a702-56bf98c96e9c-configmap-kubelet-serving-ca-bundle\") pod \"metrics-server-567b9cf7f-cxnj2\" (UID: \"fc192c03-5aec-4507-a702-56bf98c96e9c\") " pod="openshift-monitoring/metrics-server-567b9cf7f-cxnj2"
Mar 13 12:53:56.405751 master-0 kubenswrapper[28149]: I0313 12:53:56.405592 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-exporter-tls\" (UniqueName: \"kubernetes.io/secret/842251bd-238a-44ba-99fc-a356503f5d16-node-exporter-tls\") pod \"node-exporter-v4hdh\" (UID: \"842251bd-238a-44ba-99fc-a356503f5d16\") " pod="openshift-monitoring/node-exporter-v4hdh"
Mar 13 12:53:56.405751 master-0 kubenswrapper[28149]: I0313 12:53:56.405615 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/36ad5a83-5c32-4941-94e0-7af86ac5d462-webhook-certs\") pod \"multus-admission-controller-7769569c45-qz88j\" (UID: \"36ad5a83-5c32-4941-94e0-7af86ac5d462\") " pod="openshift-multus/multus-admission-controller-7769569c45-qz88j"
Mar 13 12:53:56.405751 master-0 kubenswrapper[28149]: I0313 12:53:56.405671 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b726x\" (UniqueName: \"kubernetes.io/projected/1081e565-b7d8-4b6e-9d41-5db36cfe094c-kube-api-access-b726x\") pod \"openshift-state-metrics-74cc79fd76-clrbz\" (UID: \"1081e565-b7d8-4b6e-9d41-5db36cfe094c\") " pod="openshift-monitoring/openshift-state-metrics-74cc79fd76-clrbz"
Mar 13 12:53:56.419384 master-0 kubenswrapper[28149]: I0313 12:53:56.419341 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bdxqb\" (UniqueName: \"kubernetes.io/projected/00d8a21b-701c-4334-9dda-34c28b417f42-kube-api-access-bdxqb\") pod \"cluster-cloud-controller-manager-operator-7c8df9b496-x2wlg\" (UID: \"00d8a21b-701c-4334-9dda-34c28b417f42\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7c8df9b496-x2wlg"
Mar 13 12:53:56.441703 master-0 kubenswrapper[28149]: I0313 12:53:56.441644 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4j5fc\" (UniqueName: \"kubernetes.io/projected/d6226325-c4d9-497e-8d19-a71adc66c5ac-kube-api-access-4j5fc\") pod \"ovnkube-node-h8fwp\" (UID: \"d6226325-c4d9-497e-8d19-a71adc66c5ac\") " pod="openshift-ovn-kubernetes/ovnkube-node-h8fwp"
Mar 13 12:53:56.457811 master-0 kubenswrapper[28149]: I0313 12:53:56.457762 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mlvjp\" (UniqueName: \"kubernetes.io/projected/5ae41cff-0949-47f8-aae9-ae133191476d-kube-api-access-mlvjp\") pod \"ovnkube-control-plane-66b55d57d-5cww5\" (UID: \"5ae41cff-0949-47f8-aae9-ae133191476d\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-66b55d57d-5cww5"
Mar 13 12:53:56.477996 master-0 kubenswrapper[28149]: I0313 12:53:56.477930 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n85n6\" (UniqueName: \"kubernetes.io/projected/915aabfe-1071-4bfc-b291-424304dfe7d8-kube-api-access-n85n6\") pod \"operator-controller-controller-manager-6598bfb6c4-dv8rj\" (UID: \"915aabfe-1071-4bfc-b291-424304dfe7d8\") " pod="openshift-operator-controller/operator-controller-controller-manager-6598bfb6c4-dv8rj"
Mar 13 12:53:56.497957 master-0 kubenswrapper[28149]: I0313 12:53:56.497894 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-69hws\" (UniqueName: \"kubernetes.io/projected/d7d67915-d31e-46dc-bb2e-1a6f689dd875-kube-api-access-69hws\") pod \"cluster-storage-operator-6fbfc8dc8f-jhtsp\" (UID: \"d7d67915-d31e-46dc-bb2e-1a6f689dd875\") " pod="openshift-cluster-storage-operator/cluster-storage-operator-6fbfc8dc8f-jhtsp"
Mar 13 12:53:56.517762 master-0 kubenswrapper[28149]: I0313 12:53:56.517699 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fnw9d\" (UniqueName: \"kubernetes.io/projected/4dd0fc2f-f2ee-4447-a747-04a178288cf0-kube-api-access-fnw9d\") pod \"network-operator-7c649bf6d4-kh6n9\" (UID: \"4dd0fc2f-f2ee-4447-a747-04a178288cf0\") " pod="openshift-network-operator/network-operator-7c649bf6d4-kh6n9"
Mar 13 12:53:56.537307 master-0 kubenswrapper[28149]: I0313 12:53:56.537245 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x5nb7\" (UniqueName: \"kubernetes.io/projected/d3d998ee-b26f-4e30-83bc-f94f8c68060a-kube-api-access-x5nb7\") pod \"marketplace-operator-64bf9778cb-7qhr4\" (UID: \"d3d998ee-b26f-4e30-83bc-f94f8c68060a\") " pod="openshift-marketplace/marketplace-operator-64bf9778cb-7qhr4"
Mar 13 12:53:56.563889 master-0 kubenswrapper[28149]: I0313 12:53:56.563821 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tdlrq\" (UniqueName: \"kubernetes.io/projected/d44112d1-b2a5-4b8d-b74d-1e91638508d5-kube-api-access-tdlrq\") pod \"cluster-autoscaler-operator-69576476f7-sqndx\" (UID: \"d44112d1-b2a5-4b8d-b74d-1e91638508d5\") " pod="openshift-machine-api/cluster-autoscaler-operator-69576476f7-sqndx"
Mar 13 12:53:56.576905 master-0 kubenswrapper[28149]: I0313 12:53:56.576850 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dd4m8\" (UniqueName: \"kubernetes.io/projected/be89c006-0c82-4728-9c79-210303e623dc-kube-api-access-dd4m8\") pod \"prometheus-operator-5ff8674d55-bvmsj\" (UID: \"be89c006-0c82-4728-9c79-210303e623dc\") " pod="openshift-monitoring/prometheus-operator-5ff8674d55-bvmsj"
Mar 13 12:53:56.601841 master-0 kubenswrapper[28149]: I0313 12:53:56.601764 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mqsh5\" (UniqueName: \"kubernetes.io/projected/36ad5a83-5c32-4941-94e0-7af86ac5d462-kube-api-access-mqsh5\") pod \"multus-admission-controller-7769569c45-qz88j\" (UID: \"36ad5a83-5c32-4941-94e0-7af86ac5d462\") " pod="openshift-multus/multus-admission-controller-7769569c45-qz88j"
Mar 13 12:53:56.619530 master-0 kubenswrapper[28149]: I0313 12:53:56.619383 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8cf2v\" (UniqueName: \"kubernetes.io/projected/8c62b15f-001a-4b64-b85f-348aefde5d1b-kube-api-access-8cf2v\") pod \"openshift-controller-manager-operator-8565d84698-hj2wk\" (UID: \"8c62b15f-001a-4b64-b85f-348aefde5d1b\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-8565d84698-hj2wk"
Mar 13 12:53:56.637954 master-0 kubenswrapper[28149]: I0313 12:53:56.637890 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-km69t\" (UniqueName: \"kubernetes.io/projected/152689b1-5875-4a9a-bb25-bee858523168-kube-api-access-km69t\") pod \"multus-additional-cni-plugins-78p2k\" (UID: \"152689b1-5875-4a9a-bb25-bee858523168\") " pod="openshift-multus/multus-additional-cni-plugins-78p2k"
Mar 13 12:53:56.657620 master-0 kubenswrapper[28149]: I0313 12:53:56.657544 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9v2jm\" (UniqueName: \"kubernetes.io/projected/842251bd-238a-44ba-99fc-a356503f5d16-kube-api-access-9v2jm\") pod \"node-exporter-v4hdh\" (UID: \"842251bd-238a-44ba-99fc-a356503f5d16\") " pod="openshift-monitoring/node-exporter-v4hdh"
Mar 13 12:53:56.676761 master-0 kubenswrapper[28149]: I0313 12:53:56.676701 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4v66x\" (UniqueName: \"kubernetes.io/projected/317af639-269e-4163-8e24-fcea468b9352-kube-api-access-4v66x\") pod \"cluster-baremetal-operator-5cdb4c5598-l6jp5\" (UID: \"317af639-269e-4163-8e24-fcea468b9352\") " pod="openshift-machine-api/cluster-baremetal-operator-5cdb4c5598-l6jp5"
Mar 13 12:53:56.698067 master-0 kubenswrapper[28149]: I0313 12:53:56.698011 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bcf05594-4c10-4b54-a47c-d55e323f1f87-bound-sa-token\") pod \"cluster-image-registry-operator-86d6d77c7c-q287n\" (UID: \"bcf05594-4c10-4b54-a47c-d55e323f1f87\") " pod="openshift-image-registry/cluster-image-registry-operator-86d6d77c7c-q287n"
Mar 13 12:53:56.718912 master-0 kubenswrapper[28149]: I0313 12:53:56.718845 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-brzd4\" (UniqueName: \"kubernetes.io/projected/1f43b4e7-5cd1-46d2-a02e-0d846b2e5182-kube-api-access-brzd4\") pod \"network-node-identity-qg8q5\" (UID: \"1f43b4e7-5cd1-46d2-a02e-0d846b2e5182\") " pod="openshift-network-node-identity/network-node-identity-qg8q5"
Mar 13 12:53:56.724197 master-0 kubenswrapper[28149]: I0313 12:53:56.724116 28149 request.go:700] Waited for 3.911776605s due to client-side throttling, not priority and fairness, request: POST:https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-machine-api/serviceaccounts/control-plane-machine-set-operator/token
Mar 13 12:53:56.738386 master-0 kubenswrapper[28149]: I0313 12:53:56.738305 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qxcvd\" (UniqueName: \"kubernetes.io/projected/747659a6-4a1e-43ed-bb8e-36da6e63b5a1-kube-api-access-qxcvd\") pod \"control-plane-machine-set-operator-6686554ddc-btz8w\" (UID: \"747659a6-4a1e-43ed-bb8e-36da6e63b5a1\") " pod="openshift-machine-api/control-plane-machine-set-operator-6686554ddc-btz8w"
Mar 13 12:53:56.760955 master-0 kubenswrapper[28149]: I0313 12:53:56.760888 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-65ts9\" (UniqueName: \"kubernetes.io/projected/c0f3e81c-f61d-430a-98e8-82e3b283fc73-kube-api-access-65ts9\") pod \"service-ca-84bfdbbb7f-4pksg\" (UID: \"c0f3e81c-f61d-430a-98e8-82e3b283fc73\") " pod="openshift-service-ca/service-ca-84bfdbbb7f-4pksg"
Mar 13 12:53:56.777522 master-0 kubenswrapper[28149]: I0313 12:53:56.777448 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-clrz7\" (UniqueName: \"kubernetes.io/projected/15b592d6-3c48-45d4-9172-d28632ae8995-kube-api-access-clrz7\") pod \"etcd-operator-5884b9cd56-hjzms\" (UID: \"15b592d6-3c48-45d4-9172-d28632ae8995\") " pod="openshift-etcd-operator/etcd-operator-5884b9cd56-hjzms"
Mar 13 12:53:56.798281 master-0 kubenswrapper[28149]: I0313 12:53:56.798219 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jbwwp\" (UniqueName: \"kubernetes.io/projected/50be3c2b-284b-4f60-b4ed-2cc7b4e528fa-kube-api-access-jbwwp\") pod \"machine-config-daemon-5h8rc\" (UID: \"50be3c2b-284b-4f60-b4ed-2cc7b4e528fa\") " pod="openshift-machine-config-operator/machine-config-daemon-5h8rc"
Mar 13 12:53:56.818521 master-0 kubenswrapper[28149]: I0313 12:53:56.818417 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8t2jl\" (UniqueName: \"kubernetes.io/projected/c642c18f-f960-4418-bcb7-df884f8f8ad5-kube-api-access-8t2jl\") pod \"csi-snapshot-controller-7577d6f48-pjpn2\" (UID: \"c642c18f-f960-4418-bcb7-df884f8f8ad5\") " pod="openshift-cluster-storage-operator/csi-snapshot-controller-7577d6f48-pjpn2"
Mar 13 12:53:56.838376 master-0 kubenswrapper[28149]: I0313 12:53:56.838302 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pmfxj\" (UniqueName: \"kubernetes.io/projected/887d261f-d07f-4ef0-a230-6568f47acf4d-kube-api-access-pmfxj\") pod \"cluster-olm-operator-77899cf6d-7nvbn\" (UID: \"887d261f-d07f-4ef0-a230-6568f47acf4d\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-77899cf6d-7nvbn"
Mar 13 12:53:56.869799 master-0 kubenswrapper[28149]: I0313 12:53:56.862992 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f9hks\" (UniqueName: \"kubernetes.io/projected/2f79578c-bbfb-4968-893a-730deb4c01f9-kube-api-access-f9hks\") pod \"ingress-operator-677db989d6-ckl2j\" (UID: \"2f79578c-bbfb-4968-893a-730deb4c01f9\") " pod="openshift-ingress-operator/ingress-operator-677db989d6-ckl2j"
Mar 13 12:53:56.886731 master-0 kubenswrapper[28149]: I0313 12:53:56.886664 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/676b054a-e76f-425d-a6ff-3f1bea8b523e-kube-api-access\") pod \"cluster-version-operator-8c9c967c7-98tv2\" (UID: \"676b054a-e76f-425d-a6ff-3f1bea8b523e\") " pod="openshift-cluster-version/cluster-version-operator-8c9c967c7-98tv2"
Mar 13 12:53:56.900788 master-0 kubenswrapper[28149]: I0313 12:53:56.900707 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ec5ec2e2-f7b3-43a1-87da-fbbe0ee5b118-kube-api-access\") pod \"kube-apiserver-operator-68bd585b-qxmnf\" (UID: \"ec5ec2e2-f7b3-43a1-87da-fbbe0ee5b118\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-68bd585b-qxmnf"
Mar 13 12:53:56.919619 master-0 kubenswrapper[28149]: I0313 12:53:56.919560 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zbk4f\" (UniqueName: \"kubernetes.io/projected/10944f9c-8ce9-44e6-9c36-a0ea19d8cae3-kube-api-access-zbk4f\") pod \"catalog-operator-7d9c49f57b-tlnkd\" (UID: \"10944f9c-8ce9-44e6-9c36-a0ea19d8cae3\") " pod="openshift-operator-lifecycle-manager/catalog-operator-7d9c49f57b-tlnkd"
Mar 13 12:53:56.940720 master-0 kubenswrapper[28149]: I0313 12:53:56.940680 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/77ef7e49-eb85-4f5e-94d3-a6a8619a6243-kube-api-access\") pod \"kube-controller-manager-operator-86d7cdfdfb-br96g\" (UID: \"77ef7e49-eb85-4f5e-94d3-a6a8619a6243\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-86d7cdfdfb-br96g"
Mar 13 12:53:56.957409 master-0 kubenswrapper[28149]: I0313 12:53:56.957358 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6x8kz\" (UniqueName: \"kubernetes.io/projected/3d653e1a-5903-4a02-9357-df145f028c0d-kube-api-access-6x8kz\") pod \"package-server-manager-854648ff6d-669qk\" (UID: \"3d653e1a-5903-4a02-9357-df145f028c0d\") " pod="openshift-operator-lifecycle-manager/package-server-manager-854648ff6d-669qk"
Mar 13 12:53:56.977062 master-0 kubenswrapper[28149]: I0313 12:53:56.977009 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vgbvr\" (UniqueName: \"kubernetes.io/projected/ce3a655a-0684-4bc5-ac36-5878507537c7-kube-api-access-vgbvr\") pod \"multus-bnn7n\" (UID: \"ce3a655a-0684-4bc5-ac36-5878507537c7\") " pod="openshift-multus/multus-bnn7n"
Mar 13 12:53:56.999217 master-0 kubenswrapper[28149]: I0313 12:53:56.998330 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fmzhw\" (UniqueName: \"kubernetes.io/projected/18ffa620-dacc-4b09-be04-2c325f860813-kube-api-access-fmzhw\") pod \"route-controller-manager-68c48d4f7d-k7drw\" (UID: \"18ffa620-dacc-4b09-be04-2c325f860813\") " pod="openshift-route-controller-manager/route-controller-manager-68c48d4f7d-k7drw"
Mar 13 12:53:57.024096 master-0 kubenswrapper[28149]: I0313 12:53:57.023832 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qhddd\" (UniqueName: \"kubernetes.io/projected/2f48243b-6b05-4efa-8420-58a4419622bf-kube-api-access-qhddd\") pod \"apiserver-844bc54c88-vznst\" (UID: \"2f48243b-6b05-4efa-8420-58a4419622bf\") " pod="openshift-apiserver/apiserver-844bc54c88-vznst"
Mar 13 12:53:57.037284 master-0 kubenswrapper[28149]: I0313 12:53:57.037225 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xxjbd\" (UniqueName: \"kubernetes.io/projected/ef42b65e-2d92-46ac-baaf-30e213787781-kube-api-access-xxjbd\") pod \"dns-default-m7k6m\" (UID: \"ef42b65e-2d92-46ac-baaf-30e213787781\") " pod="openshift-dns/dns-default-m7k6m"
Mar 13 12:53:57.086112 master-0 kubenswrapper[28149]: I0313 12:53:57.086042 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0da84bb7-e936-49a0-96b5-614a1305d6a4-kube-api-access\") pod \"openshift-kube-scheduler-operator-5c74bfc494-m8mqj\" (UID: \"0da84bb7-e936-49a0-96b5-614a1305d6a4\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5c74bfc494-m8mqj"
Mar 13 12:53:57.087035 master-0 kubenswrapper[28149]: I0313 12:53:57.086839 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kn8f2\" (UniqueName: \"kubernetes.io/projected/a454234a-6c8e-4916-81e8-c9e66cec9d31-kube-api-access-kn8f2\") pod \"controller-manager-54c79cbfcc-cxhmh\" (UID: \"a454234a-6c8e-4916-81e8-c9e66cec9d31\") " pod="openshift-controller-manager/controller-manager-54c79cbfcc-cxhmh"
Mar 13 12:53:57.099065 master-0 kubenswrapper[28149]: I0313 12:53:57.098995 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p8hcd\" (UniqueName: \"kubernetes.io/projected/d5a19b80-d488-46d3-a4a8-0b80361077e1-kube-api-access-p8hcd\") pod \"olm-operator-d64cfc9db-rfqb9\" (UID: \"d5a19b80-d488-46d3-a4a8-0b80361077e1\") " pod="openshift-operator-lifecycle-manager/olm-operator-d64cfc9db-rfqb9"
Mar 13 12:53:57.117108 master-0 kubenswrapper[28149]: I0313 12:53:57.117055 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r8gcb\" (UniqueName: \"kubernetes.io/projected/e25bef76-7020-4f86-8dee-a58ebed537d2-kube-api-access-r8gcb\") pod \"machine-config-controller-ff46b7bdf-kmnlv\" (UID: \"e25bef76-7020-4f86-8dee-a58ebed537d2\") " pod="openshift-machine-config-operator/machine-config-controller-ff46b7bdf-kmnlv"
Mar 13 12:53:57.139129 master-0 kubenswrapper[28149]: I0313 12:53:57.138993 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lwkdj\" (UniqueName: \"kubernetes.io/projected/f0803181-4e37-43fa-8ddc-9c76d3f61817-kube-api-access-lwkdj\") pod \"openshift-config-operator-64488f9d78-t8fb4\" (UID: \"f0803181-4e37-43fa-8ddc-9c76d3f61817\") " pod="openshift-config-operator/openshift-config-operator-64488f9d78-t8fb4"
Mar 13 12:53:57.275016 master-0 kubenswrapper[28149]: I0313 12:53:57.161119 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kwk62\" (UniqueName: \"kubernetes.io/projected/f31565e2-c211-4d28-8bbc-d7a951023a8b-kube-api-access-kwk62\") pod \"migrator-57ccdf9b5-7pcdp\" (UID: \"f31565e2-c211-4d28-8bbc-d7a951023a8b\") " pod="openshift-kube-storage-version-migrator/migrator-57ccdf9b5-7pcdp"
Mar 13 12:53:57.295844 master-0 kubenswrapper[28149]: E0313 12:53:57.295802 28149 kubelet.go:2526] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.596s"
Mar 13 12:53:57.296093 master-0 kubenswrapper[28149]: E0313 12:53:57.295933 28149 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-master-0\" already exists" pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Mar 13 12:53:57.296214 master-0 kubenswrapper[28149]: I0313 12:53:57.296197 28149 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-oauth-apiserver/apiserver-787dbf5bb9-5645n"
Mar 13 12:53:57.296981 master-0 kubenswrapper[28149]: I0313 12:53:57.296295 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"48512e02022680c9d90092634f0fc146","Type":"ContainerStarted","Data":"49a3e19d955348f7e8d6cddcf11b1118a4c6f32a3b5d7a34d5989aaa73b1262c"}
Mar 13 12:53:57.297098 master-0 kubenswrapper[28149]: I0313 12:53:57.297075 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-677db989d6-ckl2j" event={"ID":"2f79578c-bbfb-4968-893a-730deb4c01f9","Type":"ContainerDied","Data":"062296caf4aa99e0b771a3fc7c5b24a99b64a55a1235aefba1f6f98aec258e8a"}
Mar 13 12:53:57.297234 master-0 kubenswrapper[28149]: I0313 12:53:57.297218 28149 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-master-0"
Mar 13 12:53:57.297351 master-0 kubenswrapper[28149]: I0313 12:53:57.297304 28149 scope.go:117] "RemoveContainer" containerID="25a4898dab96b21910d2f9f74a6d0f38ac67afd0471454539094f0cdc130c4f5"
Mar 13 12:53:57.297406 master-0 kubenswrapper[28149]: E0313 12:53:57.297237 28149 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"kube-apiserver-master-0\" already exists" pod="openshift-kube-apiserver/kube-apiserver-master-0"
Mar 13 12:53:57.297449 master-0 kubenswrapper[28149]: E0313 12:53:57.297165 28149 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"etcd-master-0\" already exists" pod="openshift-etcd/etcd-master-0"
Mar 13 12:53:57.297551 master-0 kubenswrapper[28149]: I0313 12:53:57.297536 28149 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-79f8cd6fdd-wtf6j"
Mar 13 12:53:57.307920 master-0 kubenswrapper[28149]: E0313 12:53:57.307885 28149 projected.go:288] Couldn't get configMap openshift-kube-apiserver/kube-root-ca.crt: object "openshift-kube-apiserver"/"kube-root-ca.crt" not registered
Mar 13 12:53:57.307920 master-0 kubenswrapper[28149]: E0313 12:53:57.307917 28149 projected.go:194] Error preparing data for projected volume kube-api-access for pod openshift-kube-apiserver/installer-4-master-0: object "openshift-kube-apiserver"/"kube-root-ca.crt" not registered
Mar 13 12:53:57.308162 master-0 kubenswrapper[28149]: E0313 12:53:57.307996 28149 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/185a10f7-2a4b-4171-b10d-4614cb8671bd-kube-api-access podName:185a10f7-2a4b-4171-b10d-4614cb8671bd nodeName:}" failed. No retries permitted until 2026-03-13 12:53:57.807973509 +0000 UTC m=+11.461438668 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access" (UniqueName: "kubernetes.io/projected/185a10f7-2a4b-4171-b10d-4614cb8671bd-kube-api-access") pod "installer-4-master-0" (UID: "185a10f7-2a4b-4171-b10d-4614cb8671bd") : object "openshift-kube-apiserver"/"kube-root-ca.crt" not registered
Mar 13 12:53:57.311712 master-0 kubenswrapper[28149]: I0313 12:53:57.311673 28149 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" podUID=""
Mar 13 12:53:57.346642 master-0 kubenswrapper[28149]: I0313 12:53:57.346601 28149 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-oauth-apiserver/apiserver-787dbf5bb9-5645n"
Mar 13 12:53:57.346826 master-0 kubenswrapper[28149]: I0313 12:53:57.346667 28149 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-oauth-apiserver/apiserver-787dbf5bb9-5645n"
Mar 13 12:53:57.346826 master-0 kubenswrapper[28149]: I0313 12:53:57.346771 28149 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-zh888"
Mar 13 12:53:57.346826 master-0 kubenswrapper[28149]: I0313 12:53:57.346800 28149 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"]
Mar 13 12:53:57.346826 master-0 kubenswrapper[28149]: I0313 12:53:57.346816 28149 kubelet.go:2649] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" mirrorPodUID="34fee065-9e14-4e33-accf-5cf37f68d8c0"
Mar 13 12:53:57.346926 master-0 kubenswrapper[28149]: I0313 12:53:57.346841 28149 scope.go:117] "RemoveContainer" containerID="838f1203bfc2909f5be268d039e5903c4aada457bcd573b0395f4215bfc0c446"
Mar 13 12:53:57.346984 master-0 kubenswrapper[28149]: I0313 12:53:57.346939 28149 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-monitoring/prometheus-operator-admission-webhook-8464df8497-pmzkf"
Mar 13 12:53:57.347512 master-0 kubenswrapper[28149]: I0313 12:53:57.347431 28149 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-master-0"
Mar 13 12:53:57.347512 master-0 kubenswrapper[28149]: I0313 12:53:57.347453 28149 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"]
Mar 13 12:53:57.347512 master-0 kubenswrapper[28149]: I0313 12:53:57.347462 28149 kubelet.go:2673] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" mirrorPodUID="34fee065-9e14-4e33-accf-5cf37f68d8c0"
Mar 13 12:53:57.347512 master-0 kubenswrapper[28149]: I0313 12:53:57.347472 28149 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-monitoring/prometheus-operator-admission-webhook-8464df8497-pmzkf"
Mar 13 12:53:57.347512 master-0 kubenswrapper[28149]: I0313 12:53:57.347512 28149 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-apiserver/apiserver-844bc54c88-vznst"
Mar 13 12:53:57.347753 master-0 kubenswrapper[28149]: I0313 12:53:57.347526 28149 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Mar 13 12:53:57.347753 master-0 kubenswrapper[28149]: I0313 12:53:57.347538 28149 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0"
Mar 13 12:53:57.347753 master-0 kubenswrapper[28149]: I0313 12:53:57.347564 28149 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/catalog-operator-7d9c49f57b-tlnkd"
Mar 13 12:53:57.347753 master-0 kubenswrapper[28149]: I0313 12:53:57.347633 28149 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-64488f9d78-t8fb4"
Mar 13 12:53:57.347753 master-0 kubenswrapper[28149]: I0313 12:53:57.347656 28149 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/catalog-operator-7d9c49f57b-tlnkd"
Mar 13 12:53:57.347753 master-0 kubenswrapper[28149]: I0313 12:53:57.347675 28149 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/olm-operator-d64cfc9db-rfqb9"
Mar 13 12:53:57.347753 master-0 kubenswrapper[28149]: I0313 12:53:57.347703 28149 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/package-server-manager-854648ff6d-669qk"
Mar 13 12:53:57.347753 master-0 kubenswrapper[28149]: I0313 12:53:57.347730 28149 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-apiserver/apiserver-844bc54c88-vznst"
Mar 13 12:53:57.347753 master-0 kubenswrapper[28149]: I0313 12:53:57.347748 28149 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-config-operator/openshift-config-operator-64488f9d78-t8fb4"
Mar 13 12:53:57.348006 master-0 kubenswrapper[28149]: I0313 12:53:57.347762 28149 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/package-server-manager-854648ff6d-669qk"
Mar 13 12:53:57.348006 master-0 kubenswrapper[28149]: I0313 12:53:57.347784 28149 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-dns/dns-default-m7k6m"
Mar 13 12:53:57.348006 master-0 kubenswrapper[28149]: I0313 12:53:57.347813 28149 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-68c48d4f7d-k7drw"
Mar 13 12:53:57.348006 master-0 kubenswrapper[28149]: I0313 12:53:57.347931 28149 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-54c79cbfcc-cxhmh"
Mar 13 12:53:57.348257 master-0 kubenswrapper[28149]: I0313 12:53:57.348023 28149 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/olm-operator-d64cfc9db-rfqb9"
Mar 13 12:53:57.348257 master-0 kubenswrapper[28149]: I0313 12:53:57.348056 28149 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-apiserver/apiserver-844bc54c88-vznst"
Mar 13 12:53:57.348257 master-0 kubenswrapper[28149]: I0313 12:53:57.348079 28149 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-dns/dns-default-m7k6m"
Mar 13 12:53:57.348257 master-0 kubenswrapper[28149]: I0313 12:53:57.348147 28149 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-68c48d4f7d-k7drw"
Mar 13 12:53:57.348257 master-0 kubenswrapper[28149]: I0313 12:53:57.348176 28149 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready"
pod="openshift-controller-manager/controller-manager-54c79cbfcc-cxhmh" Mar 13 12:53:57.372543 master-0 kubenswrapper[28149]: I0313 12:53:57.372509 28149 scope.go:117] "RemoveContainer" containerID="f3be2171b1690f9bafcc889e55d83ff1a441baaed77d90117edebfc3db8ff2b9" Mar 13 12:53:57.392208 master-0 kubenswrapper[28149]: I0313 12:53:57.392168 28149 scope.go:117] "RemoveContainer" containerID="a3279720d4c802c349d222cf1b96260384211d9adc25c84b50972505c95ca211" Mar 13 12:53:57.559663 master-0 kubenswrapper[28149]: I0313 12:53:57.559592 28149 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-etcd/etcd-master-0" Mar 13 12:53:57.572707 master-0 kubenswrapper[28149]: I0313 12:53:57.572632 28149 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-etcd/etcd-master-0" Mar 13 12:53:57.885849 master-0 kubenswrapper[28149]: I0313 12:53:57.885778 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/185a10f7-2a4b-4171-b10d-4614cb8671bd-kube-api-access\") pod \"installer-4-master-0\" (UID: \"185a10f7-2a4b-4171-b10d-4614cb8671bd\") " pod="openshift-kube-apiserver/installer-4-master-0" Mar 13 12:53:57.887077 master-0 kubenswrapper[28149]: E0313 12:53:57.886047 28149 projected.go:288] Couldn't get configMap openshift-kube-apiserver/kube-root-ca.crt: object "openshift-kube-apiserver"/"kube-root-ca.crt" not registered Mar 13 12:53:57.887077 master-0 kubenswrapper[28149]: E0313 12:53:57.886334 28149 projected.go:194] Error preparing data for projected volume kube-api-access for pod openshift-kube-apiserver/installer-4-master-0: object "openshift-kube-apiserver"/"kube-root-ca.crt" not registered Mar 13 12:53:57.887077 master-0 kubenswrapper[28149]: E0313 12:53:57.886435 28149 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/185a10f7-2a4b-4171-b10d-4614cb8671bd-kube-api-access 
podName:185a10f7-2a4b-4171-b10d-4614cb8671bd nodeName:}" failed. No retries permitted until 2026-03-13 12:53:58.886406816 +0000 UTC m=+12.539872015 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access" (UniqueName: "kubernetes.io/projected/185a10f7-2a4b-4171-b10d-4614cb8671bd-kube-api-access") pod "installer-4-master-0" (UID: "185a10f7-2a4b-4171-b10d-4614cb8671bd") : object "openshift-kube-apiserver"/"kube-root-ca.crt" not registered Mar 13 12:53:57.985862 master-0 kubenswrapper[28149]: I0313 12:53:57.985751 28149 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-wtf6j container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 12:53:57.985862 master-0 kubenswrapper[28149]: [-]has-synced failed: reason withheld Mar 13 12:53:57.985862 master-0 kubenswrapper[28149]: [+]process-running ok Mar 13 12:53:57.985862 master-0 kubenswrapper[28149]: healthz check failed Mar 13 12:53:57.986245 master-0 kubenswrapper[28149]: I0313 12:53:57.985864 28149 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-wtf6j" podUID="45925a5e-41ae-4c19-b586-3151c7677612" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 12:53:58.164216 master-0 kubenswrapper[28149]: I0313 12:53:58.164081 28149 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress-operator_ingress-operator-677db989d6-ckl2j_2f79578c-bbfb-4968-893a-730deb4c01f9/ingress-operator/5.log" Mar 13 12:53:58.165158 master-0 kubenswrapper[28149]: I0313 12:53:58.165107 28149 scope.go:117] "RemoveContainer" containerID="062296caf4aa99e0b771a3fc7c5b24a99b64a55a1235aefba1f6f98aec258e8a" Mar 13 12:53:58.167318 master-0 kubenswrapper[28149]: I0313 12:53:58.167278 28149 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 13 
12:53:58.304539 master-0 kubenswrapper[28149]: I0313 12:53:58.304498 28149 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-controller/operator-controller-controller-manager-6598bfb6c4-dv8rj" Mar 13 12:53:58.309630 master-0 kubenswrapper[28149]: I0313 12:53:58.309576 28149 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-controller/operator-controller-controller-manager-6598bfb6c4-dv8rj" Mar 13 12:53:58.962448 master-0 kubenswrapper[28149]: I0313 12:53:58.962400 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/185a10f7-2a4b-4171-b10d-4614cb8671bd-kube-api-access\") pod \"installer-4-master-0\" (UID: \"185a10f7-2a4b-4171-b10d-4614cb8671bd\") " pod="openshift-kube-apiserver/installer-4-master-0" Mar 13 12:53:58.962947 master-0 kubenswrapper[28149]: E0313 12:53:58.962585 28149 projected.go:288] Couldn't get configMap openshift-kube-apiserver/kube-root-ca.crt: object "openshift-kube-apiserver"/"kube-root-ca.crt" not registered Mar 13 12:53:58.962947 master-0 kubenswrapper[28149]: E0313 12:53:58.962607 28149 projected.go:194] Error preparing data for projected volume kube-api-access for pod openshift-kube-apiserver/installer-4-master-0: object "openshift-kube-apiserver"/"kube-root-ca.crt" not registered Mar 13 12:53:58.962947 master-0 kubenswrapper[28149]: E0313 12:53:58.962680 28149 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/185a10f7-2a4b-4171-b10d-4614cb8671bd-kube-api-access podName:185a10f7-2a4b-4171-b10d-4614cb8671bd nodeName:}" failed. No retries permitted until 2026-03-13 12:54:00.962659412 +0000 UTC m=+14.616124571 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access" (UniqueName: "kubernetes.io/projected/185a10f7-2a4b-4171-b10d-4614cb8671bd-kube-api-access") pod "installer-4-master-0" (UID: "185a10f7-2a4b-4171-b10d-4614cb8671bd") : object "openshift-kube-apiserver"/"kube-root-ca.crt" not registered Mar 13 12:53:58.983102 master-0 kubenswrapper[28149]: I0313 12:53:58.983029 28149 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-wtf6j container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 12:53:58.983102 master-0 kubenswrapper[28149]: [-]has-synced failed: reason withheld Mar 13 12:53:58.983102 master-0 kubenswrapper[28149]: [+]process-running ok Mar 13 12:53:58.983102 master-0 kubenswrapper[28149]: healthz check failed Mar 13 12:53:58.983390 master-0 kubenswrapper[28149]: I0313 12:53:58.983121 28149 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-wtf6j" podUID="45925a5e-41ae-4c19-b586-3151c7677612" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 12:53:59.058347 master-0 kubenswrapper[28149]: I0313 12:53:59.058277 28149 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 13 12:53:59.058804 master-0 kubenswrapper[28149]: I0313 12:53:59.058720 28149 patch_prober.go:28] interesting pod/kube-controller-manager-master-0 container/kube-controller-manager namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.32.10:10257/healthz\": dial tcp 192.168.32.10:10257: connect: connection refused" start-of-body= Mar 13 12:53:59.058864 master-0 kubenswrapper[28149]: I0313 12:53:59.058817 28149 prober.go:107] "Probe failed" probeType="Startup" 
pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="9b24fda1c2e55a08607764d7b9b24355" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.32.10:10257/healthz\": dial tcp 192.168.32.10:10257: connect: connection refused" Mar 13 12:53:59.178754 master-0 kubenswrapper[28149]: I0313 12:53:59.178684 28149 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress-operator_ingress-operator-677db989d6-ckl2j_2f79578c-bbfb-4968-893a-730deb4c01f9/ingress-operator/5.log" Mar 13 12:53:59.179309 master-0 kubenswrapper[28149]: I0313 12:53:59.179254 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-677db989d6-ckl2j" event={"ID":"2f79578c-bbfb-4968-893a-730deb4c01f9","Type":"ContainerStarted","Data":"b1c532fbdd98b8b2992d1bbd883cd761f6beba1f990baddccbaab18d2808016e"} Mar 13 12:53:59.179741 master-0 kubenswrapper[28149]: I0313 12:53:59.179707 28149 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 13 12:53:59.217225 master-0 kubenswrapper[28149]: I0313 12:53:59.217096 28149 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 13 12:53:59.488493 master-0 kubenswrapper[28149]: I0313 12:53:59.488368 28149 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/packageserver-5c5f6764b5-96ktp" Mar 13 12:53:59.492701 master-0 kubenswrapper[28149]: I0313 12:53:59.492657 28149 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/packageserver-5c5f6764b5-96ktp" Mar 13 12:53:59.633281 master-0 kubenswrapper[28149]: I0313 12:53:59.633225 28149 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 13 12:53:59.639364 master-0 kubenswrapper[28149]: I0313 
12:53:59.639314 28149 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 13 12:53:59.790863 master-0 kubenswrapper[28149]: I0313 12:53:59.790774 28149 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-master-0" podStartSLOduration=7.790754336 podStartE2EDuration="7.790754336s" podCreationTimestamp="2026-03-13 12:53:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 12:53:59.789824929 +0000 UTC m=+13.443290118" watchObservedRunningTime="2026-03-13 12:53:59.790754336 +0000 UTC m=+13.444219485" Mar 13 12:53:59.926063 master-0 kubenswrapper[28149]: I0313 12:53:59.926001 28149 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-64bf9778cb-7qhr4" Mar 13 12:53:59.927631 master-0 kubenswrapper[28149]: I0313 12:53:59.927593 28149 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-64bf9778cb-7qhr4" Mar 13 12:53:59.986233 master-0 kubenswrapper[28149]: I0313 12:53:59.985195 28149 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-wtf6j container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 12:53:59.986233 master-0 kubenswrapper[28149]: [-]has-synced failed: reason withheld Mar 13 12:53:59.986233 master-0 kubenswrapper[28149]: [+]process-running ok Mar 13 12:53:59.986233 master-0 kubenswrapper[28149]: healthz check failed Mar 13 12:53:59.986233 master-0 kubenswrapper[28149]: I0313 12:53:59.985254 28149 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-wtf6j" podUID="45925a5e-41ae-4c19-b586-3151c7677612" 
containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 12:54:00.189157 master-0 kubenswrapper[28149]: I0313 12:54:00.189036 28149 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 13 12:54:00.244034 master-0 kubenswrapper[28149]: I0313 12:54:00.243963 28149 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-catalogd/catalogd-controller-manager-7f8b8b6f4c-8fjzg" Mar 13 12:54:00.246850 master-0 kubenswrapper[28149]: I0313 12:54:00.246815 28149 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-catalogd/catalogd-controller-manager-7f8b8b6f4c-8fjzg" Mar 13 12:54:00.283398 master-0 kubenswrapper[28149]: I0313 12:54:00.283347 28149 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-5czx2" Mar 13 12:54:00.287770 master-0 kubenswrapper[28149]: I0313 12:54:00.287730 28149 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-monitoring/metrics-server-567b9cf7f-cxnj2" Mar 13 12:54:00.327188 master-0 kubenswrapper[28149]: I0313 12:54:00.326839 28149 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-5czx2" Mar 13 12:54:00.993213 master-0 kubenswrapper[28149]: I0313 12:54:00.984331 28149 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-wtf6j container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 12:54:00.993213 master-0 kubenswrapper[28149]: [-]has-synced failed: reason withheld Mar 13 12:54:00.993213 master-0 kubenswrapper[28149]: [+]process-running ok Mar 13 12:54:00.993213 master-0 kubenswrapper[28149]: healthz check failed Mar 13 12:54:00.993213 master-0 kubenswrapper[28149]: I0313 
12:54:00.984407 28149 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-wtf6j" podUID="45925a5e-41ae-4c19-b586-3151c7677612" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 12:54:00.993213 master-0 kubenswrapper[28149]: I0313 12:54:00.991193 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/185a10f7-2a4b-4171-b10d-4614cb8671bd-kube-api-access\") pod \"installer-4-master-0\" (UID: \"185a10f7-2a4b-4171-b10d-4614cb8671bd\") " pod="openshift-kube-apiserver/installer-4-master-0" Mar 13 12:54:00.993213 master-0 kubenswrapper[28149]: E0313 12:54:00.991395 28149 projected.go:288] Couldn't get configMap openshift-kube-apiserver/kube-root-ca.crt: object "openshift-kube-apiserver"/"kube-root-ca.crt" not registered Mar 13 12:54:00.993213 master-0 kubenswrapper[28149]: E0313 12:54:00.991435 28149 projected.go:194] Error preparing data for projected volume kube-api-access for pod openshift-kube-apiserver/installer-4-master-0: object "openshift-kube-apiserver"/"kube-root-ca.crt" not registered Mar 13 12:54:00.993213 master-0 kubenswrapper[28149]: E0313 12:54:00.991505 28149 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/185a10f7-2a4b-4171-b10d-4614cb8671bd-kube-api-access podName:185a10f7-2a4b-4171-b10d-4614cb8671bd nodeName:}" failed. No retries permitted until 2026-03-13 12:54:04.991480925 +0000 UTC m=+18.644946084 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access" (UniqueName: "kubernetes.io/projected/185a10f7-2a4b-4171-b10d-4614cb8671bd-kube-api-access") pod "installer-4-master-0" (UID: "185a10f7-2a4b-4171-b10d-4614cb8671bd") : object "openshift-kube-apiserver"/"kube-root-ca.crt" not registered Mar 13 12:54:01.285875 master-0 kubenswrapper[28149]: I0313 12:54:01.285841 28149 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-oauth-apiserver/apiserver-787dbf5bb9-5645n" Mar 13 12:54:01.455686 master-0 kubenswrapper[28149]: I0313 12:54:01.455625 28149 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-p9csk" Mar 13 12:54:01.516215 master-0 kubenswrapper[28149]: I0313 12:54:01.515736 28149 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-p9csk" Mar 13 12:54:01.984697 master-0 kubenswrapper[28149]: I0313 12:54:01.984580 28149 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-wtf6j container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 12:54:01.984697 master-0 kubenswrapper[28149]: [-]has-synced failed: reason withheld Mar 13 12:54:01.984697 master-0 kubenswrapper[28149]: [+]process-running ok Mar 13 12:54:01.984697 master-0 kubenswrapper[28149]: healthz check failed Mar 13 12:54:01.984697 master-0 kubenswrapper[28149]: I0313 12:54:01.984692 28149 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-wtf6j" podUID="45925a5e-41ae-4c19-b586-3151c7677612" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 12:54:02.164631 master-0 kubenswrapper[28149]: I0313 12:54:02.164590 28149 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" 
pod="openshift-apiserver/apiserver-844bc54c88-vznst" Mar 13 12:54:02.543244 master-0 kubenswrapper[28149]: I0313 12:54:02.543161 28149 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-h8fwp" Mar 13 12:54:02.672130 master-0 kubenswrapper[28149]: I0313 12:54:02.588742 28149 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-79f8cd6fdd-wtf6j" Mar 13 12:54:02.673833 master-0 kubenswrapper[28149]: I0313 12:54:02.673778 28149 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-h8fwp" Mar 13 12:54:02.747164 master-0 kubenswrapper[28149]: I0313 12:54:02.746662 28149 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-9x9vk" Mar 13 12:54:02.801697 master-0 kubenswrapper[28149]: I0313 12:54:02.801565 28149 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-9x9vk" Mar 13 12:54:02.984956 master-0 kubenswrapper[28149]: I0313 12:54:02.984843 28149 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-wtf6j container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 12:54:02.984956 master-0 kubenswrapper[28149]: [-]has-synced failed: reason withheld Mar 13 12:54:02.984956 master-0 kubenswrapper[28149]: [+]process-running ok Mar 13 12:54:02.984956 master-0 kubenswrapper[28149]: healthz check failed Mar 13 12:54:02.985364 master-0 kubenswrapper[28149]: I0313 12:54:02.984961 28149 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-wtf6j" podUID="45925a5e-41ae-4c19-b586-3151c7677612" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 12:54:03.211240 master-0 
kubenswrapper[28149]: I0313 12:54:03.211102 28149 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-p9csk" Mar 13 12:54:03.211755 master-0 kubenswrapper[28149]: I0313 12:54:03.211654 28149 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 13 12:54:03.211755 master-0 kubenswrapper[28149]: I0313 12:54:03.211668 28149 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 13 12:54:03.353165 master-0 kubenswrapper[28149]: I0313 12:54:03.341776 28149 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-p9csk" Mar 13 12:54:03.820436 master-0 kubenswrapper[28149]: I0313 12:54:03.820379 28149 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-network-diagnostics/network-check-target-pnwsc" Mar 13 12:54:03.822702 master-0 kubenswrapper[28149]: I0313 12:54:03.822654 28149 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-network-diagnostics/network-check-target-pnwsc" Mar 13 12:54:03.984168 master-0 kubenswrapper[28149]: I0313 12:54:03.984093 28149 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-wtf6j container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 12:54:03.984168 master-0 kubenswrapper[28149]: [-]has-synced failed: reason withheld Mar 13 12:54:03.984168 master-0 kubenswrapper[28149]: [+]process-running ok Mar 13 12:54:03.984168 master-0 kubenswrapper[28149]: healthz check failed Mar 13 12:54:03.984507 master-0 kubenswrapper[28149]: I0313 12:54:03.984196 28149 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-wtf6j" podUID="45925a5e-41ae-4c19-b586-3151c7677612" containerName="router" probeResult="failure" output="HTTP probe failed with 
statuscode: 500" Mar 13 12:54:04.293856 master-0 kubenswrapper[28149]: I0313 12:54:04.293222 28149 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-5czx2" Mar 13 12:54:04.333756 master-0 kubenswrapper[28149]: I0313 12:54:04.333722 28149 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-5czx2" Mar 13 12:54:04.982811 master-0 kubenswrapper[28149]: I0313 12:54:04.982750 28149 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-wtf6j container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 12:54:04.982811 master-0 kubenswrapper[28149]: [-]has-synced failed: reason withheld Mar 13 12:54:04.982811 master-0 kubenswrapper[28149]: [+]process-running ok Mar 13 12:54:04.982811 master-0 kubenswrapper[28149]: healthz check failed Mar 13 12:54:04.983087 master-0 kubenswrapper[28149]: I0313 12:54:04.982849 28149 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-wtf6j" podUID="45925a5e-41ae-4c19-b586-3151c7677612" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 12:54:05.011171 master-0 kubenswrapper[28149]: I0313 12:54:05.011102 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/185a10f7-2a4b-4171-b10d-4614cb8671bd-kube-api-access\") pod \"installer-4-master-0\" (UID: \"185a10f7-2a4b-4171-b10d-4614cb8671bd\") " pod="openshift-kube-apiserver/installer-4-master-0" Mar 13 12:54:05.011415 master-0 kubenswrapper[28149]: E0313 12:54:05.011372 28149 projected.go:288] Couldn't get configMap openshift-kube-apiserver/kube-root-ca.crt: object "openshift-kube-apiserver"/"kube-root-ca.crt" not registered Mar 13 12:54:05.011462 master-0 
kubenswrapper[28149]: E0313 12:54:05.011422 28149 projected.go:194] Error preparing data for projected volume kube-api-access for pod openshift-kube-apiserver/installer-4-master-0: object "openshift-kube-apiserver"/"kube-root-ca.crt" not registered Mar 13 12:54:05.011528 master-0 kubenswrapper[28149]: E0313 12:54:05.011506 28149 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/185a10f7-2a4b-4171-b10d-4614cb8671bd-kube-api-access podName:185a10f7-2a4b-4171-b10d-4614cb8671bd nodeName:}" failed. No retries permitted until 2026-03-13 12:54:13.011479264 +0000 UTC m=+26.664944423 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access" (UniqueName: "kubernetes.io/projected/185a10f7-2a4b-4171-b10d-4614cb8671bd-kube-api-access") pod "installer-4-master-0" (UID: "185a10f7-2a4b-4171-b10d-4614cb8671bd") : object "openshift-kube-apiserver"/"kube-root-ca.crt" not registered Mar 13 12:54:05.050271 master-0 kubenswrapper[28149]: I0313 12:54:05.050203 28149 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-9x9vk" Mar 13 12:54:05.085936 master-0 kubenswrapper[28149]: I0313 12:54:05.085885 28149 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-9x9vk" Mar 13 12:54:05.289639 master-0 kubenswrapper[28149]: I0313 12:54:05.289599 28149 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-zh888" Mar 13 12:54:05.982967 master-0 kubenswrapper[28149]: I0313 12:54:05.982906 28149 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-wtf6j container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 12:54:05.982967 master-0 kubenswrapper[28149]: [-]has-synced failed: reason withheld Mar 13 12:54:05.982967 master-0 
kubenswrapper[28149]: [+]process-running ok
Mar 13 12:54:05.982967 master-0 kubenswrapper[28149]: healthz check failed
Mar 13 12:54:05.983623 master-0 kubenswrapper[28149]: I0313 12:54:05.982990 28149 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-wtf6j" podUID="45925a5e-41ae-4c19-b586-3151c7677612" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 13 12:54:05.985349 master-0 kubenswrapper[28149]: I0313 12:54:05.985305 28149 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-h8fwp"
Mar 13 12:54:05.985575 master-0 kubenswrapper[28149]: I0313 12:54:05.985546 28149 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Mar 13 12:54:05.985575 master-0 kubenswrapper[28149]: I0313 12:54:05.985571 28149 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Mar 13 12:54:06.013480 master-0 kubenswrapper[28149]: I0313 12:54:06.013428 28149 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-h8fwp"
Mar 13 12:54:06.272341 master-0 kubenswrapper[28149]: I0313 12:54:06.272287 28149 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Mar 13 12:54:06.826842 master-0 kubenswrapper[28149]: I0313 12:54:06.826808 28149 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-zh888"
Mar 13 12:54:06.874563 master-0 kubenswrapper[28149]: I0313 12:54:06.874528 28149 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-zh888"
Mar 13 12:54:06.983905 master-0 kubenswrapper[28149]: I0313 12:54:06.983830 28149 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-wtf6j container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 13 12:54:06.983905 master-0 kubenswrapper[28149]: [-]has-synced failed: reason withheld
Mar 13 12:54:06.983905 master-0 kubenswrapper[28149]: [+]process-running ok
Mar 13 12:54:06.983905 master-0 kubenswrapper[28149]: healthz check failed
Mar 13 12:54:06.984533 master-0 kubenswrapper[28149]: I0313 12:54:06.983932 28149 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-wtf6j" podUID="45925a5e-41ae-4c19-b586-3151c7677612" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 13 12:54:07.982169 master-0 kubenswrapper[28149]: I0313 12:54:07.982084 28149 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-wtf6j container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 13 12:54:07.982169 master-0 kubenswrapper[28149]: [-]has-synced failed: reason withheld
Mar 13 12:54:07.982169 master-0 kubenswrapper[28149]: [+]process-running ok
Mar 13 12:54:07.982169 master-0 kubenswrapper[28149]: healthz check failed
Mar 13 12:54:07.982169 master-0 kubenswrapper[28149]: I0313 12:54:07.982166 28149 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-wtf6j" podUID="45925a5e-41ae-4c19-b586-3151c7677612" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 13 12:54:08.983224 master-0 kubenswrapper[28149]: I0313 12:54:08.983122 28149 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-wtf6j container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 13 12:54:08.983224 master-0 kubenswrapper[28149]: [-]has-synced failed: reason withheld
Mar 13 12:54:08.983224 master-0 kubenswrapper[28149]: [+]process-running ok
Mar 13 12:54:08.983224 master-0 kubenswrapper[28149]: healthz check failed
Mar 13 12:54:08.983803 master-0 kubenswrapper[28149]: I0313 12:54:08.983243 28149 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-wtf6j" podUID="45925a5e-41ae-4c19-b586-3151c7677612" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 13 12:54:09.058740 master-0 kubenswrapper[28149]: I0313 12:54:09.058648 28149 patch_prober.go:28] interesting pod/kube-controller-manager-master-0 container/kube-controller-manager namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.32.10:10257/healthz\": dial tcp 192.168.32.10:10257: connect: connection refused" start-of-body=
Mar 13 12:54:09.059254 master-0 kubenswrapper[28149]: I0313 12:54:09.058752 28149 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="9b24fda1c2e55a08607764d7b9b24355" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.32.10:10257/healthz\": dial tcp 192.168.32.10:10257: connect: connection refused"
Mar 13 12:54:09.222186 master-0 kubenswrapper[28149]: I0313 12:54:09.222123 28149 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-master-0"
Mar 13 12:54:09.632984 master-0 kubenswrapper[28149]: I0313 12:54:09.632918 28149 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-h8fwp"
Mar 13 12:54:09.633229 master-0 kubenswrapper[28149]: I0313 12:54:09.633127 28149 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Mar 13 12:54:09.655882 master-0 kubenswrapper[28149]: I0313 12:54:09.655833 28149 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-h8fwp"
Mar 13 12:54:09.982954 master-0 kubenswrapper[28149]: I0313 12:54:09.982834 28149 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-wtf6j container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 13 12:54:09.982954 master-0 kubenswrapper[28149]: [-]has-synced failed: reason withheld
Mar 13 12:54:09.982954 master-0 kubenswrapper[28149]: [+]process-running ok
Mar 13 12:54:09.982954 master-0 kubenswrapper[28149]: healthz check failed
Mar 13 12:54:09.982954 master-0 kubenswrapper[28149]: I0313 12:54:09.982908 28149 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-wtf6j" podUID="45925a5e-41ae-4c19-b586-3151c7677612" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 13 12:54:10.983741 master-0 kubenswrapper[28149]: I0313 12:54:10.983674 28149 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-wtf6j container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 13 12:54:10.983741 master-0 kubenswrapper[28149]: [-]has-synced failed: reason withheld
Mar 13 12:54:10.983741 master-0 kubenswrapper[28149]: [+]process-running ok
Mar 13 12:54:10.983741 master-0 kubenswrapper[28149]: healthz check failed
Mar 13 12:54:10.984569 master-0 kubenswrapper[28149]: I0313 12:54:10.983749 28149 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-wtf6j" podUID="45925a5e-41ae-4c19-b586-3151c7677612" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 13 12:54:11.983304 master-0 kubenswrapper[28149]: I0313 12:54:11.983259 28149 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-wtf6j container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 13 12:54:11.983304 master-0 kubenswrapper[28149]: [-]has-synced failed: reason withheld
Mar 13 12:54:11.983304 master-0 kubenswrapper[28149]: [+]process-running ok
Mar 13 12:54:11.983304 master-0 kubenswrapper[28149]: healthz check failed
Mar 13 12:54:11.983579 master-0 kubenswrapper[28149]: I0313 12:54:11.983326 28149 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-wtf6j" podUID="45925a5e-41ae-4c19-b586-3151c7677612" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 13 12:54:12.987871 master-0 kubenswrapper[28149]: I0313 12:54:12.987353 28149 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-wtf6j container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 13 12:54:12.987871 master-0 kubenswrapper[28149]: [-]has-synced failed: reason withheld
Mar 13 12:54:12.987871 master-0 kubenswrapper[28149]: [+]process-running ok
Mar 13 12:54:12.987871 master-0 kubenswrapper[28149]: healthz check failed
Mar 13 12:54:12.987871 master-0 kubenswrapper[28149]: I0313 12:54:12.987453 28149 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-wtf6j" podUID="45925a5e-41ae-4c19-b586-3151c7677612" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 13 12:54:13.071846 master-0 kubenswrapper[28149]: I0313 12:54:13.071800 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/185a10f7-2a4b-4171-b10d-4614cb8671bd-kube-api-access\") pod \"installer-4-master-0\" (UID: \"185a10f7-2a4b-4171-b10d-4614cb8671bd\") " pod="openshift-kube-apiserver/installer-4-master-0"
Mar 13 12:54:13.072068 master-0 kubenswrapper[28149]: E0313 12:54:13.072052 28149 projected.go:288] Couldn't get configMap openshift-kube-apiserver/kube-root-ca.crt: object "openshift-kube-apiserver"/"kube-root-ca.crt" not registered
Mar 13 12:54:13.072108 master-0 kubenswrapper[28149]: E0313 12:54:13.072075 28149 projected.go:194] Error preparing data for projected volume kube-api-access for pod openshift-kube-apiserver/installer-4-master-0: object "openshift-kube-apiserver"/"kube-root-ca.crt" not registered
Mar 13 12:54:13.072187 master-0 kubenswrapper[28149]: E0313 12:54:13.072147 28149 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/185a10f7-2a4b-4171-b10d-4614cb8671bd-kube-api-access podName:185a10f7-2a4b-4171-b10d-4614cb8671bd nodeName:}" failed. No retries permitted until 2026-03-13 12:54:29.072110388 +0000 UTC m=+42.725575547 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access" (UniqueName: "kubernetes.io/projected/185a10f7-2a4b-4171-b10d-4614cb8671bd-kube-api-access") pod "installer-4-master-0" (UID: "185a10f7-2a4b-4171-b10d-4614cb8671bd") : object "openshift-kube-apiserver"/"kube-root-ca.crt" not registered
Mar 13 12:54:13.984243 master-0 kubenswrapper[28149]: I0313 12:54:13.984162 28149 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-wtf6j container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 13 12:54:13.984243 master-0 kubenswrapper[28149]: [-]has-synced failed: reason withheld
Mar 13 12:54:13.984243 master-0 kubenswrapper[28149]: [+]process-running ok
Mar 13 12:54:13.984243 master-0 kubenswrapper[28149]: healthz check failed
Mar 13 12:54:13.984544 master-0 kubenswrapper[28149]: I0313 12:54:13.984294 28149 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-wtf6j" podUID="45925a5e-41ae-4c19-b586-3151c7677612" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 13 12:54:14.404078 master-0 kubenswrapper[28149]: I0313 12:54:14.403979 28149 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-monitoring/metrics-server-567b9cf7f-cxnj2"
Mar 13 12:54:14.499914 master-0 kubenswrapper[28149]: I0313 12:54:14.499852 28149 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"]
Mar 13 12:54:14.500271 master-0 kubenswrapper[28149]: I0313 12:54:14.500074 28149 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" podUID="3a18cac8a90d6913a6a0391d805cddc9" containerName="startup-monitor" containerID="cri-o://4ea900f27c90a68c3b8cd2345d580f77e20ef846c8a749fe70f5724228e5cc04" gracePeriod=5
Mar 13 12:54:14.983235 master-0 kubenswrapper[28149]: I0313 12:54:14.983150 28149 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-wtf6j container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 13 12:54:14.983235 master-0 kubenswrapper[28149]: [-]has-synced failed: reason withheld
Mar 13 12:54:14.983235 master-0 kubenswrapper[28149]: [+]process-running ok
Mar 13 12:54:14.983235 master-0 kubenswrapper[28149]: healthz check failed
Mar 13 12:54:14.983235 master-0 kubenswrapper[28149]: I0313 12:54:14.983215 28149 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-wtf6j" podUID="45925a5e-41ae-4c19-b586-3151c7677612" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 13 12:54:15.983266 master-0 kubenswrapper[28149]: I0313 12:54:15.983217 28149 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-wtf6j container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 13 12:54:15.983266 master-0 kubenswrapper[28149]: [-]has-synced failed: reason withheld
Mar 13 12:54:15.983266 master-0 kubenswrapper[28149]: [+]process-running ok
Mar 13 12:54:15.983266 master-0 kubenswrapper[28149]: healthz check failed
Mar 13 12:54:15.983851 master-0 kubenswrapper[28149]: I0313 12:54:15.983276 28149 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-wtf6j" podUID="45925a5e-41ae-4c19-b586-3151c7677612" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 13 12:54:16.982488 master-0 kubenswrapper[28149]: I0313 12:54:16.982431 28149 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-wtf6j container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 13 12:54:16.982488 master-0 kubenswrapper[28149]: [-]has-synced failed: reason withheld
Mar 13 12:54:16.982488 master-0 kubenswrapper[28149]: [+]process-running ok
Mar 13 12:54:16.982488 master-0 kubenswrapper[28149]: healthz check failed
Mar 13 12:54:16.982854 master-0 kubenswrapper[28149]: I0313 12:54:16.982520 28149 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-wtf6j" podUID="45925a5e-41ae-4c19-b586-3151c7677612" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 13 12:54:17.073591 master-0 kubenswrapper[28149]: I0313 12:54:17.073507 28149 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0"
Mar 13 12:54:17.983647 master-0 kubenswrapper[28149]: I0313 12:54:17.983588 28149 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-wtf6j container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 13 12:54:17.983647 master-0 kubenswrapper[28149]: [-]has-synced failed: reason withheld
Mar 13 12:54:17.983647 master-0 kubenswrapper[28149]: [+]process-running ok
Mar 13 12:54:17.983647 master-0 kubenswrapper[28149]: healthz check failed
Mar 13 12:54:17.984330 master-0 kubenswrapper[28149]: I0313 12:54:17.983671 28149 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-wtf6j" podUID="45925a5e-41ae-4c19-b586-3151c7677612" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 13 12:54:18.984398 master-0 kubenswrapper[28149]: I0313 12:54:18.984321 28149 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-wtf6j container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 13 12:54:18.984398 master-0 kubenswrapper[28149]: [-]has-synced failed: reason withheld
Mar 13 12:54:18.984398 master-0 kubenswrapper[28149]: [+]process-running ok
Mar 13 12:54:18.984398 master-0 kubenswrapper[28149]: healthz check failed
Mar 13 12:54:18.985350 master-0 kubenswrapper[28149]: I0313 12:54:18.984400 28149 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-wtf6j" podUID="45925a5e-41ae-4c19-b586-3151c7677612" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 13 12:54:19.470019 master-0 kubenswrapper[28149]: I0313 12:54:19.469974 28149 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Mar 13 12:54:19.474153 master-0 kubenswrapper[28149]: I0313 12:54:19.474094 28149 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Mar 13 12:54:19.983312 master-0 kubenswrapper[28149]: I0313 12:54:19.983247 28149 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-wtf6j container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 13 12:54:19.983312 master-0 kubenswrapper[28149]: [-]has-synced failed: reason withheld
Mar 13 12:54:19.983312 master-0 kubenswrapper[28149]: [+]process-running ok
Mar 13 12:54:19.983312 master-0 kubenswrapper[28149]: healthz check failed
Mar 13 12:54:19.983312 master-0 kubenswrapper[28149]: I0313 12:54:19.983309 28149 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-wtf6j" podUID="45925a5e-41ae-4c19-b586-3151c7677612" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 13 12:54:20.089491 master-0 kubenswrapper[28149]: I0313 12:54:20.089446 28149 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-master-0_3a18cac8a90d6913a6a0391d805cddc9/startup-monitor/0.log"
Mar 13 12:54:20.090009 master-0 kubenswrapper[28149]: I0313 12:54:20.089553 28149 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Mar 13 12:54:20.244969 master-0 kubenswrapper[28149]: I0313 12:54:20.244848 28149 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/3a18cac8a90d6913a6a0391d805cddc9-var-lock\") pod \"3a18cac8a90d6913a6a0391d805cddc9\" (UID: \"3a18cac8a90d6913a6a0391d805cddc9\") "
Mar 13 12:54:20.244969 master-0 kubenswrapper[28149]: I0313 12:54:20.244950 28149 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/3a18cac8a90d6913a6a0391d805cddc9-var-log\") pod \"3a18cac8a90d6913a6a0391d805cddc9\" (UID: \"3a18cac8a90d6913a6a0391d805cddc9\") "
Mar 13 12:54:20.245205 master-0 kubenswrapper[28149]: I0313 12:54:20.245014 28149 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3a18cac8a90d6913a6a0391d805cddc9-var-lock" (OuterVolumeSpecName: "var-lock") pod "3a18cac8a90d6913a6a0391d805cddc9" (UID: "3a18cac8a90d6913a6a0391d805cddc9"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 13 12:54:20.245205 master-0 kubenswrapper[28149]: I0313 12:54:20.245045 28149 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/3a18cac8a90d6913a6a0391d805cddc9-pod-resource-dir\") pod \"3a18cac8a90d6913a6a0391d805cddc9\" (UID: \"3a18cac8a90d6913a6a0391d805cddc9\") "
Mar 13 12:54:20.245205 master-0 kubenswrapper[28149]: I0313 12:54:20.245108 28149 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/3a18cac8a90d6913a6a0391d805cddc9-manifests\") pod \"3a18cac8a90d6913a6a0391d805cddc9\" (UID: \"3a18cac8a90d6913a6a0391d805cddc9\") "
Mar 13 12:54:20.245205 master-0 kubenswrapper[28149]: I0313 12:54:20.245116 28149 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3a18cac8a90d6913a6a0391d805cddc9-var-log" (OuterVolumeSpecName: "var-log") pod "3a18cac8a90d6913a6a0391d805cddc9" (UID: "3a18cac8a90d6913a6a0391d805cddc9"). InnerVolumeSpecName "var-log". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 13 12:54:20.245205 master-0 kubenswrapper[28149]: I0313 12:54:20.245150 28149 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3a18cac8a90d6913a6a0391d805cddc9-resource-dir\") pod \"3a18cac8a90d6913a6a0391d805cddc9\" (UID: \"3a18cac8a90d6913a6a0391d805cddc9\") "
Mar 13 12:54:20.245205 master-0 kubenswrapper[28149]: I0313 12:54:20.245167 28149 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3a18cac8a90d6913a6a0391d805cddc9-manifests" (OuterVolumeSpecName: "manifests") pod "3a18cac8a90d6913a6a0391d805cddc9" (UID: "3a18cac8a90d6913a6a0391d805cddc9"). InnerVolumeSpecName "manifests". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 13 12:54:20.245399 master-0 kubenswrapper[28149]: I0313 12:54:20.245283 28149 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3a18cac8a90d6913a6a0391d805cddc9-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "3a18cac8a90d6913a6a0391d805cddc9" (UID: "3a18cac8a90d6913a6a0391d805cddc9"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 13 12:54:20.245649 master-0 kubenswrapper[28149]: I0313 12:54:20.245622 28149 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/3a18cac8a90d6913a6a0391d805cddc9-var-lock\") on node \"master-0\" DevicePath \"\""
Mar 13 12:54:20.245710 master-0 kubenswrapper[28149]: I0313 12:54:20.245646 28149 reconciler_common.go:293] "Volume detached for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/3a18cac8a90d6913a6a0391d805cddc9-var-log\") on node \"master-0\" DevicePath \"\""
Mar 13 12:54:20.245710 master-0 kubenswrapper[28149]: I0313 12:54:20.245661 28149 reconciler_common.go:293] "Volume detached for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/3a18cac8a90d6913a6a0391d805cddc9-manifests\") on node \"master-0\" DevicePath \"\""
Mar 13 12:54:20.245710 master-0 kubenswrapper[28149]: I0313 12:54:20.245676 28149 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3a18cac8a90d6913a6a0391d805cddc9-resource-dir\") on node \"master-0\" DevicePath \"\""
Mar 13 12:54:20.249942 master-0 kubenswrapper[28149]: I0313 12:54:20.249890 28149 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3a18cac8a90d6913a6a0391d805cddc9-pod-resource-dir" (OuterVolumeSpecName: "pod-resource-dir") pod "3a18cac8a90d6913a6a0391d805cddc9" (UID: "3a18cac8a90d6913a6a0391d805cddc9"). InnerVolumeSpecName "pod-resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 13 12:54:20.292542 master-0 kubenswrapper[28149]: I0313 12:54:20.292491 28149 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-monitoring/metrics-server-567b9cf7f-cxnj2"
Mar 13 12:54:20.295875 master-0 kubenswrapper[28149]: I0313 12:54:20.295846 28149 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-monitoring/metrics-server-567b9cf7f-cxnj2"
Mar 13 12:54:20.347207 master-0 kubenswrapper[28149]: I0313 12:54:20.347162 28149 reconciler_common.go:293] "Volume detached for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/3a18cac8a90d6913a6a0391d805cddc9-pod-resource-dir\") on node \"master-0\" DevicePath \"\""
Mar 13 12:54:20.373694 master-0 kubenswrapper[28149]: I0313 12:54:20.373636 28149 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-master-0_3a18cac8a90d6913a6a0391d805cddc9/startup-monitor/0.log"
Mar 13 12:54:20.373694 master-0 kubenswrapper[28149]: I0313 12:54:20.373697 28149 generic.go:334] "Generic (PLEG): container finished" podID="3a18cac8a90d6913a6a0391d805cddc9" containerID="4ea900f27c90a68c3b8cd2345d580f77e20ef846c8a749fe70f5724228e5cc04" exitCode=137
Mar 13 12:54:20.374372 master-0 kubenswrapper[28149]: I0313 12:54:20.374342 28149 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Mar 13 12:54:20.380278 master-0 kubenswrapper[28149]: I0313 12:54:20.380245 28149 scope.go:117] "RemoveContainer" containerID="4ea900f27c90a68c3b8cd2345d580f77e20ef846c8a749fe70f5724228e5cc04"
Mar 13 12:54:20.401822 master-0 kubenswrapper[28149]: I0313 12:54:20.400783 28149 scope.go:117] "RemoveContainer" containerID="4ea900f27c90a68c3b8cd2345d580f77e20ef846c8a749fe70f5724228e5cc04"
Mar 13 12:54:20.401822 master-0 kubenswrapper[28149]: E0313 12:54:20.401354 28149 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4ea900f27c90a68c3b8cd2345d580f77e20ef846c8a749fe70f5724228e5cc04\": container with ID starting with 4ea900f27c90a68c3b8cd2345d580f77e20ef846c8a749fe70f5724228e5cc04 not found: ID does not exist" containerID="4ea900f27c90a68c3b8cd2345d580f77e20ef846c8a749fe70f5724228e5cc04"
Mar 13 12:54:20.401822 master-0 kubenswrapper[28149]: I0313 12:54:20.401393 28149 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4ea900f27c90a68c3b8cd2345d580f77e20ef846c8a749fe70f5724228e5cc04"} err="failed to get container status \"4ea900f27c90a68c3b8cd2345d580f77e20ef846c8a749fe70f5724228e5cc04\": rpc error: code = NotFound desc = could not find container \"4ea900f27c90a68c3b8cd2345d580f77e20ef846c8a749fe70f5724228e5cc04\": container with ID starting with 4ea900f27c90a68c3b8cd2345d580f77e20ef846c8a749fe70f5724228e5cc04 not found: ID does not exist"
Mar 13 12:54:20.408891 master-0 kubenswrapper[28149]: I0313 12:54:20.408840 28149 kubelet.go:2706] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" mirrorPodUID="03beae43-edc0-4f67-8d1f-f315994fc97f"
Mar 13 12:54:20.696485 master-0 kubenswrapper[28149]: I0313 12:54:20.696424 28149 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3a18cac8a90d6913a6a0391d805cddc9" path="/var/lib/kubelet/pods/3a18cac8a90d6913a6a0391d805cddc9/volumes"
Mar 13 12:54:20.696851 master-0 kubenswrapper[28149]: I0313 12:54:20.696674 28149 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" podUID=""
Mar 13 12:54:20.721519 master-0 kubenswrapper[28149]: I0313 12:54:20.721469 28149 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"]
Mar 13 12:54:20.721519 master-0 kubenswrapper[28149]: I0313 12:54:20.721509 28149 kubelet.go:2649] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" mirrorPodUID="03beae43-edc0-4f67-8d1f-f315994fc97f"
Mar 13 12:54:20.727542 master-0 kubenswrapper[28149]: I0313 12:54:20.727505 28149 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"]
Mar 13 12:54:20.727640 master-0 kubenswrapper[28149]: I0313 12:54:20.727546 28149 kubelet.go:2673] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" mirrorPodUID="03beae43-edc0-4f67-8d1f-f315994fc97f"
Mar 13 12:54:20.982714 master-0 kubenswrapper[28149]: I0313 12:54:20.982595 28149 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-wtf6j container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 13 12:54:20.982714 master-0 kubenswrapper[28149]: [-]has-synced failed: reason withheld
Mar 13 12:54:20.982714 master-0 kubenswrapper[28149]: [+]process-running ok
Mar 13 12:54:20.982714 master-0 kubenswrapper[28149]: healthz check failed
Mar 13 12:54:20.982714 master-0 kubenswrapper[28149]: I0313 12:54:20.982670 28149 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-wtf6j" podUID="45925a5e-41ae-4c19-b586-3151c7677612" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 13 12:54:21.987949 master-0 kubenswrapper[28149]: I0313 12:54:21.987888 28149 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-wtf6j container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 13 12:54:21.987949 master-0 kubenswrapper[28149]: [-]has-synced failed: reason withheld
Mar 13 12:54:21.987949 master-0 kubenswrapper[28149]: [+]process-running ok
Mar 13 12:54:21.987949 master-0 kubenswrapper[28149]: healthz check failed
Mar 13 12:54:21.988824 master-0 kubenswrapper[28149]: I0313 12:54:21.988005 28149 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-wtf6j" podUID="45925a5e-41ae-4c19-b586-3151c7677612" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 13 12:54:23.010881 master-0 kubenswrapper[28149]: I0313 12:54:23.010790 28149 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-wtf6j container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 13 12:54:23.010881 master-0 kubenswrapper[28149]: [-]has-synced failed: reason withheld
Mar 13 12:54:23.010881 master-0 kubenswrapper[28149]: [+]process-running ok
Mar 13 12:54:23.010881 master-0 kubenswrapper[28149]: healthz check failed
Mar 13 12:54:23.011638 master-0 kubenswrapper[28149]: I0313 12:54:23.010941 28149 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-wtf6j" podUID="45925a5e-41ae-4c19-b586-3151c7677612" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 13 12:54:23.982891 master-0 kubenswrapper[28149]: I0313 12:54:23.982835 28149 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-wtf6j container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 13 12:54:23.982891 master-0 kubenswrapper[28149]: [-]has-synced failed: reason withheld
Mar 13 12:54:23.982891 master-0 kubenswrapper[28149]: [+]process-running ok
Mar 13 12:54:23.982891 master-0 kubenswrapper[28149]: healthz check failed
Mar 13 12:54:23.983194 master-0 kubenswrapper[28149]: I0313 12:54:23.982900 28149 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-wtf6j" podUID="45925a5e-41ae-4c19-b586-3151c7677612" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 13 12:54:24.984398 master-0 kubenswrapper[28149]: I0313 12:54:24.983292 28149 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-wtf6j container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 13 12:54:24.984398 master-0 kubenswrapper[28149]: [-]has-synced failed: reason withheld
Mar 13 12:54:24.984398 master-0 kubenswrapper[28149]: [+]process-running ok
Mar 13 12:54:24.984398 master-0 kubenswrapper[28149]: healthz check failed
Mar 13 12:54:24.984398 master-0 kubenswrapper[28149]: I0313 12:54:24.983371 28149 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-wtf6j" podUID="45925a5e-41ae-4c19-b586-3151c7677612" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 13 12:54:25.984765 master-0 kubenswrapper[28149]: I0313 12:54:25.984711 28149 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-ingress/router-default-79f8cd6fdd-wtf6j"
Mar 13 12:54:25.989296 master-0 kubenswrapper[28149]: I0313 12:54:25.989255 28149 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ingress/router-default-79f8cd6fdd-wtf6j"
Mar 13 12:54:29.078642 master-0 kubenswrapper[28149]: I0313 12:54:29.078568 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/185a10f7-2a4b-4171-b10d-4614cb8671bd-kube-api-access\") pod \"installer-4-master-0\" (UID: \"185a10f7-2a4b-4171-b10d-4614cb8671bd\") " pod="openshift-kube-apiserver/installer-4-master-0"
Mar 13 12:54:29.079368 master-0 kubenswrapper[28149]: E0313 12:54:29.078771 28149 projected.go:288] Couldn't get configMap openshift-kube-apiserver/kube-root-ca.crt: object "openshift-kube-apiserver"/"kube-root-ca.crt" not registered
Mar 13 12:54:29.079368 master-0 kubenswrapper[28149]: E0313 12:54:29.078797 28149 projected.go:194] Error preparing data for projected volume kube-api-access for pod openshift-kube-apiserver/installer-4-master-0: object "openshift-kube-apiserver"/"kube-root-ca.crt" not registered
Mar 13 12:54:29.079368 master-0 kubenswrapper[28149]: E0313 12:54:29.078919 28149 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/185a10f7-2a4b-4171-b10d-4614cb8671bd-kube-api-access podName:185a10f7-2a4b-4171-b10d-4614cb8671bd nodeName:}" failed. No retries permitted until 2026-03-13 12:55:01.078875301 +0000 UTC m=+74.732340470 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access" (UniqueName: "kubernetes.io/projected/185a10f7-2a4b-4171-b10d-4614cb8671bd-kube-api-access") pod "installer-4-master-0" (UID: "185a10f7-2a4b-4171-b10d-4614cb8671bd") : object "openshift-kube-apiserver"/"kube-root-ca.crt" not registered
Mar 13 12:54:37.724160 master-0 kubenswrapper[28149]: I0313 12:54:37.723979 28149 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-86477f577f-glgzr"]
Mar 13 12:54:37.724691 master-0 kubenswrapper[28149]: E0313 12:54:37.724364 28149 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3828446d-a3e3-412f-a0e7-7347b5de523a" containerName="installer"
Mar 13 12:54:37.724691 master-0 kubenswrapper[28149]: I0313 12:54:37.724389 28149 state_mem.go:107] "Deleted CPUSet assignment" podUID="3828446d-a3e3-412f-a0e7-7347b5de523a" containerName="installer"
Mar 13 12:54:37.724691 master-0 kubenswrapper[28149]: E0313 12:54:37.724429 28149 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="03479326-c13f-40bb-9ed2-580bb05917a7" containerName="installer"
Mar 13 12:54:37.724691 master-0 kubenswrapper[28149]: I0313 12:54:37.724437 28149 state_mem.go:107] "Deleted CPUSet assignment" podUID="03479326-c13f-40bb-9ed2-580bb05917a7" containerName="installer"
Mar 13 12:54:37.724691 master-0 kubenswrapper[28149]: E0313 12:54:37.724450 28149 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bc3825c8-8381-4d19-b482-e9499a72a700" containerName="installer"
Mar 13 12:54:37.724691 master-0 kubenswrapper[28149]: I0313 12:54:37.724459 28149 state_mem.go:107] "Deleted CPUSet assignment" podUID="bc3825c8-8381-4d19-b482-e9499a72a700" containerName="installer"
Mar 13 12:54:37.724691 master-0 kubenswrapper[28149]: E0313 12:54:37.724478 28149 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="741a6830aaef63e92194dd05d0b4da3d" containerName="kube-controller-manager"
Mar 13 12:54:37.724691 master-0 kubenswrapper[28149]: I0313 12:54:37.724486 28149 state_mem.go:107] "Deleted CPUSet assignment" podUID="741a6830aaef63e92194dd05d0b4da3d" containerName="kube-controller-manager"
Mar 13 12:54:37.724691 master-0 kubenswrapper[28149]: E0313 12:54:37.724512 28149 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3a18cac8a90d6913a6a0391d805cddc9" containerName="startup-monitor"
Mar 13 12:54:37.724691 master-0 kubenswrapper[28149]: I0313 12:54:37.724519 28149 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a18cac8a90d6913a6a0391d805cddc9" containerName="startup-monitor"
Mar 13 12:54:37.724691 master-0 kubenswrapper[28149]: E0313 12:54:37.724541 28149 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5f77c8e18b751d90bc0dfe2d4e304050" containerName="setup"
Mar 13 12:54:37.724691 master-0 kubenswrapper[28149]: I0313 12:54:37.724549 28149 state_mem.go:107] "Deleted CPUSet assignment" podUID="5f77c8e18b751d90bc0dfe2d4e304050" containerName="setup"
Mar 13 12:54:37.724691 master-0 kubenswrapper[28149]: E0313 12:54:37.724561 28149 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e01de416-3de5-4357-a84e-f8eabb15a500" containerName="installer"
Mar 13 12:54:37.724691 master-0 kubenswrapper[28149]: I0313 12:54:37.724569 28149 state_mem.go:107] "Deleted CPUSet assignment" podUID="e01de416-3de5-4357-a84e-f8eabb15a500" containerName="installer"
Mar 13 12:54:37.724691 master-0 kubenswrapper[28149]: E0313 12:54:37.724582 28149 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="88bf0bf8-c0ee-454e-8d8b-592a6e796cfc" containerName="installer"
Mar 13 12:54:37.724691 master-0 kubenswrapper[28149]: I0313 12:54:37.724589 28149 state_mem.go:107] "Deleted CPUSet assignment" podUID="88bf0bf8-c0ee-454e-8d8b-592a6e796cfc" containerName="installer"
Mar 13 12:54:37.724691 master-0 kubenswrapper[28149]: E0313 12:54:37.724597 28149 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5f77c8e18b751d90bc0dfe2d4e304050" containerName="kube-apiserver-insecure-readyz"
Mar 13 12:54:37.724691 master-0 kubenswrapper[28149]: I0313 12:54:37.724605 28149 state_mem.go:107] "Deleted CPUSet assignment" podUID="5f77c8e18b751d90bc0dfe2d4e304050" containerName="kube-apiserver-insecure-readyz"
Mar 13 12:54:37.724691 master-0 kubenswrapper[28149]: E0313 12:54:37.724619 28149 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bfabb495-1707-4c3d-b00e-2f3b2976fb92" containerName="installer"
Mar 13 12:54:37.724691 master-0 kubenswrapper[28149]: I0313 12:54:37.724627 28149 state_mem.go:107] "Deleted CPUSet assignment" podUID="bfabb495-1707-4c3d-b00e-2f3b2976fb92" containerName="installer"
Mar 13 12:54:37.724691 master-0 kubenswrapper[28149]: E0313 12:54:37.724642 28149 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="185a10f7-2a4b-4171-b10d-4614cb8671bd" containerName="installer"
Mar 13 12:54:37.724691 master-0 kubenswrapper[28149]: I0313 12:54:37.724652 28149 state_mem.go:107] "Deleted CPUSet assignment" podUID="185a10f7-2a4b-4171-b10d-4614cb8671bd" containerName="installer"
Mar 13 12:54:37.724691 master-0 kubenswrapper[28149]: E0313 12:54:37.724666 28149 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5f77c8e18b751d90bc0dfe2d4e304050" containerName="kube-apiserver"
Mar 13 12:54:37.724691 master-0 kubenswrapper[28149]: I0313 12:54:37.724674 28149 state_mem.go:107] "Deleted CPUSet assignment" podUID="5f77c8e18b751d90bc0dfe2d4e304050" containerName="kube-apiserver"
Mar 13 12:54:37.724691 master-0 kubenswrapper[28149]: E0313 12:54:37.724685 28149 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f78c05e1499b533b83f091333d61f045" containerName="cluster-policy-controller"
Mar 13 12:54:37.724691 master-0 kubenswrapper[28149]: I0313 12:54:37.724694 28149 state_mem.go:107] "Deleted CPUSet assignment" podUID="f78c05e1499b533b83f091333d61f045" containerName="cluster-policy-controller"
Mar 13 
12:54:37.724691 master-0 kubenswrapper[28149]: E0313 12:54:37.724705 28149 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f78c05e1499b533b83f091333d61f045" containerName="kube-controller-manager" Mar 13 12:54:37.724691 master-0 kubenswrapper[28149]: I0313 12:54:37.724713 28149 state_mem.go:107] "Deleted CPUSet assignment" podUID="f78c05e1499b533b83f091333d61f045" containerName="kube-controller-manager" Mar 13 12:54:37.725552 master-0 kubenswrapper[28149]: E0313 12:54:37.724727 28149 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="741a6830aaef63e92194dd05d0b4da3d" containerName="kube-controller-manager-recovery-controller" Mar 13 12:54:37.725552 master-0 kubenswrapper[28149]: I0313 12:54:37.724735 28149 state_mem.go:107] "Deleted CPUSet assignment" podUID="741a6830aaef63e92194dd05d0b4da3d" containerName="kube-controller-manager-recovery-controller" Mar 13 12:54:37.725552 master-0 kubenswrapper[28149]: E0313 12:54:37.724748 28149 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="72ba330e-35ca-4d05-8641-a880bf30c0e7" containerName="assisted-installer-controller" Mar 13 12:54:37.725552 master-0 kubenswrapper[28149]: I0313 12:54:37.724756 28149 state_mem.go:107] "Deleted CPUSet assignment" podUID="72ba330e-35ca-4d05-8641-a880bf30c0e7" containerName="assisted-installer-controller" Mar 13 12:54:37.725552 master-0 kubenswrapper[28149]: E0313 12:54:37.724766 28149 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="76fe9cb6-ff3d-4bd9-a26d-dc8c9ce4a8aa" containerName="installer" Mar 13 12:54:37.725552 master-0 kubenswrapper[28149]: I0313 12:54:37.724774 28149 state_mem.go:107] "Deleted CPUSet assignment" podUID="76fe9cb6-ff3d-4bd9-a26d-dc8c9ce4a8aa" containerName="installer" Mar 13 12:54:37.725552 master-0 kubenswrapper[28149]: E0313 12:54:37.724782 28149 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="741a6830aaef63e92194dd05d0b4da3d" containerName="kube-controller-manager-cert-syncer" 
Mar 13 12:54:37.725552 master-0 kubenswrapper[28149]: I0313 12:54:37.724790 28149 state_mem.go:107] "Deleted CPUSet assignment" podUID="741a6830aaef63e92194dd05d0b4da3d" containerName="kube-controller-manager-cert-syncer" Mar 13 12:54:37.725552 master-0 kubenswrapper[28149]: E0313 12:54:37.724803 28149 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="741a6830aaef63e92194dd05d0b4da3d" containerName="cluster-policy-controller" Mar 13 12:54:37.725552 master-0 kubenswrapper[28149]: I0313 12:54:37.724810 28149 state_mem.go:107] "Deleted CPUSet assignment" podUID="741a6830aaef63e92194dd05d0b4da3d" containerName="cluster-policy-controller" Mar 13 12:54:37.725552 master-0 kubenswrapper[28149]: E0313 12:54:37.724820 28149 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1670a1d9-46a3-4d25-9dd1-43a08e2759c7" containerName="installer" Mar 13 12:54:37.725552 master-0 kubenswrapper[28149]: I0313 12:54:37.724828 28149 state_mem.go:107] "Deleted CPUSet assignment" podUID="1670a1d9-46a3-4d25-9dd1-43a08e2759c7" containerName="installer" Mar 13 12:54:37.725552 master-0 kubenswrapper[28149]: E0313 12:54:37.724838 28149 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="00d2e134-62bb-4181-aa0a-22c9b9755b10" containerName="installer" Mar 13 12:54:37.725552 master-0 kubenswrapper[28149]: I0313 12:54:37.724846 28149 state_mem.go:107] "Deleted CPUSet assignment" podUID="00d2e134-62bb-4181-aa0a-22c9b9755b10" containerName="installer" Mar 13 12:54:37.725552 master-0 kubenswrapper[28149]: E0313 12:54:37.724857 28149 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8f8543a5-1639-4140-a18d-8b0c96821bae" containerName="installer" Mar 13 12:54:37.725552 master-0 kubenswrapper[28149]: I0313 12:54:37.724865 28149 state_mem.go:107] "Deleted CPUSet assignment" podUID="8f8543a5-1639-4140-a18d-8b0c96821bae" containerName="installer" Mar 13 12:54:37.725552 master-0 kubenswrapper[28149]: I0313 12:54:37.725054 28149 
memory_manager.go:354] "RemoveStaleState removing state" podUID="3828446d-a3e3-412f-a0e7-7347b5de523a" containerName="installer" Mar 13 12:54:37.725552 master-0 kubenswrapper[28149]: I0313 12:54:37.725088 28149 memory_manager.go:354] "RemoveStaleState removing state" podUID="88bf0bf8-c0ee-454e-8d8b-592a6e796cfc" containerName="installer" Mar 13 12:54:37.725552 master-0 kubenswrapper[28149]: I0313 12:54:37.725103 28149 memory_manager.go:354] "RemoveStaleState removing state" podUID="185a10f7-2a4b-4171-b10d-4614cb8671bd" containerName="installer" Mar 13 12:54:37.725552 master-0 kubenswrapper[28149]: I0313 12:54:37.725118 28149 memory_manager.go:354] "RemoveStaleState removing state" podUID="f78c05e1499b533b83f091333d61f045" containerName="kube-controller-manager" Mar 13 12:54:37.725552 master-0 kubenswrapper[28149]: I0313 12:54:37.725129 28149 memory_manager.go:354] "RemoveStaleState removing state" podUID="5f77c8e18b751d90bc0dfe2d4e304050" containerName="kube-apiserver-insecure-readyz" Mar 13 12:54:37.725552 master-0 kubenswrapper[28149]: I0313 12:54:37.725159 28149 memory_manager.go:354] "RemoveStaleState removing state" podUID="741a6830aaef63e92194dd05d0b4da3d" containerName="kube-controller-manager-cert-syncer" Mar 13 12:54:37.725552 master-0 kubenswrapper[28149]: I0313 12:54:37.725174 28149 memory_manager.go:354] "RemoveStaleState removing state" podUID="bc3825c8-8381-4d19-b482-e9499a72a700" containerName="installer" Mar 13 12:54:37.725552 master-0 kubenswrapper[28149]: I0313 12:54:37.725189 28149 memory_manager.go:354] "RemoveStaleState removing state" podUID="f78c05e1499b533b83f091333d61f045" containerName="cluster-policy-controller" Mar 13 12:54:37.725552 master-0 kubenswrapper[28149]: I0313 12:54:37.725198 28149 memory_manager.go:354] "RemoveStaleState removing state" podUID="bfabb495-1707-4c3d-b00e-2f3b2976fb92" containerName="installer" Mar 13 12:54:37.725552 master-0 kubenswrapper[28149]: I0313 12:54:37.725210 28149 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="5f77c8e18b751d90bc0dfe2d4e304050" containerName="kube-apiserver" Mar 13 12:54:37.725552 master-0 kubenswrapper[28149]: I0313 12:54:37.725227 28149 memory_manager.go:354] "RemoveStaleState removing state" podUID="03479326-c13f-40bb-9ed2-580bb05917a7" containerName="installer" Mar 13 12:54:37.725552 master-0 kubenswrapper[28149]: I0313 12:54:37.725240 28149 memory_manager.go:354] "RemoveStaleState removing state" podUID="741a6830aaef63e92194dd05d0b4da3d" containerName="kube-controller-manager-recovery-controller" Mar 13 12:54:37.725552 master-0 kubenswrapper[28149]: I0313 12:54:37.725252 28149 memory_manager.go:354] "RemoveStaleState removing state" podUID="741a6830aaef63e92194dd05d0b4da3d" containerName="kube-controller-manager" Mar 13 12:54:37.725552 master-0 kubenswrapper[28149]: I0313 12:54:37.725266 28149 memory_manager.go:354] "RemoveStaleState removing state" podUID="5f77c8e18b751d90bc0dfe2d4e304050" containerName="setup" Mar 13 12:54:37.725552 master-0 kubenswrapper[28149]: I0313 12:54:37.725280 28149 memory_manager.go:354] "RemoveStaleState removing state" podUID="741a6830aaef63e92194dd05d0b4da3d" containerName="cluster-policy-controller" Mar 13 12:54:37.725552 master-0 kubenswrapper[28149]: I0313 12:54:37.725295 28149 memory_manager.go:354] "RemoveStaleState removing state" podUID="72ba330e-35ca-4d05-8641-a880bf30c0e7" containerName="assisted-installer-controller" Mar 13 12:54:37.725552 master-0 kubenswrapper[28149]: I0313 12:54:37.725307 28149 memory_manager.go:354] "RemoveStaleState removing state" podUID="3a18cac8a90d6913a6a0391d805cddc9" containerName="startup-monitor" Mar 13 12:54:37.725552 master-0 kubenswrapper[28149]: I0313 12:54:37.725320 28149 memory_manager.go:354] "RemoveStaleState removing state" podUID="e01de416-3de5-4357-a84e-f8eabb15a500" containerName="installer" Mar 13 12:54:37.725552 master-0 kubenswrapper[28149]: I0313 12:54:37.725331 28149 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="8f8543a5-1639-4140-a18d-8b0c96821bae" containerName="installer" Mar 13 12:54:37.725552 master-0 kubenswrapper[28149]: I0313 12:54:37.725338 28149 memory_manager.go:354] "RemoveStaleState removing state" podUID="00d2e134-62bb-4181-aa0a-22c9b9755b10" containerName="installer" Mar 13 12:54:37.725552 master-0 kubenswrapper[28149]: I0313 12:54:37.725352 28149 memory_manager.go:354] "RemoveStaleState removing state" podUID="1670a1d9-46a3-4d25-9dd1-43a08e2759c7" containerName="installer" Mar 13 12:54:37.725552 master-0 kubenswrapper[28149]: I0313 12:54:37.725364 28149 memory_manager.go:354] "RemoveStaleState removing state" podUID="76fe9cb6-ff3d-4bd9-a26d-dc8c9ce4a8aa" containerName="installer" Mar 13 12:54:37.726748 master-0 kubenswrapper[28149]: I0313 12:54:37.725943 28149 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-86477f577f-glgzr" Mar 13 12:54:37.735843 master-0 kubenswrapper[28149]: I0313 12:54:37.735812 28149 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Mar 13 12:54:37.736046 master-0 kubenswrapper[28149]: I0313 12:54:37.736014 28149 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Mar 13 12:54:37.736228 master-0 kubenswrapper[28149]: I0313 12:54:37.735953 28149 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Mar 13 12:54:37.736400 master-0 kubenswrapper[28149]: I0313 12:54:37.736386 28149 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Mar 13 12:54:37.736726 master-0 kubenswrapper[28149]: I0313 12:54:37.736715 28149 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-z42c9" Mar 13 12:54:37.736935 master-0 kubenswrapper[28149]: 
I0313 12:54:37.736923 28149 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Mar 13 12:54:37.737083 master-0 kubenswrapper[28149]: I0313 12:54:37.735988 28149 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Mar 13 12:54:37.737290 master-0 kubenswrapper[28149]: I0313 12:54:37.737278 28149 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Mar 13 12:54:37.737505 master-0 kubenswrapper[28149]: I0313 12:54:37.736213 28149 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Mar 13 12:54:37.747194 master-0 kubenswrapper[28149]: I0313 12:54:37.746080 28149 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Mar 13 12:54:37.747194 master-0 kubenswrapper[28149]: I0313 12:54:37.746856 28149 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Mar 13 12:54:37.747194 master-0 kubenswrapper[28149]: I0313 12:54:37.746978 28149 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Mar 13 12:54:37.758941 master-0 kubenswrapper[28149]: I0313 12:54:37.758904 28149 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-86477f577f-glgzr"] Mar 13 12:54:37.761212 master-0 kubenswrapper[28149]: I0313 12:54:37.761193 28149 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Mar 13 12:54:37.778682 master-0 kubenswrapper[28149]: I0313 12:54:37.777450 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: 
\"kubernetes.io/configmap/28729487-1d7c-4837-961a-6cb084bf543f-audit-policies\") pod \"oauth-openshift-86477f577f-glgzr\" (UID: \"28729487-1d7c-4837-961a-6cb084bf543f\") " pod="openshift-authentication/oauth-openshift-86477f577f-glgzr" Mar 13 12:54:37.778682 master-0 kubenswrapper[28149]: I0313 12:54:37.777531 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f8rk6\" (UniqueName: \"kubernetes.io/projected/28729487-1d7c-4837-961a-6cb084bf543f-kube-api-access-f8rk6\") pod \"oauth-openshift-86477f577f-glgzr\" (UID: \"28729487-1d7c-4837-961a-6cb084bf543f\") " pod="openshift-authentication/oauth-openshift-86477f577f-glgzr" Mar 13 12:54:37.778682 master-0 kubenswrapper[28149]: I0313 12:54:37.777564 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/28729487-1d7c-4837-961a-6cb084bf543f-v4-0-config-user-template-error\") pod \"oauth-openshift-86477f577f-glgzr\" (UID: \"28729487-1d7c-4837-961a-6cb084bf543f\") " pod="openshift-authentication/oauth-openshift-86477f577f-glgzr" Mar 13 12:54:37.778682 master-0 kubenswrapper[28149]: I0313 12:54:37.777596 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/28729487-1d7c-4837-961a-6cb084bf543f-v4-0-config-system-serving-cert\") pod \"oauth-openshift-86477f577f-glgzr\" (UID: \"28729487-1d7c-4837-961a-6cb084bf543f\") " pod="openshift-authentication/oauth-openshift-86477f577f-glgzr" Mar 13 12:54:37.778682 master-0 kubenswrapper[28149]: I0313 12:54:37.777637 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/28729487-1d7c-4837-961a-6cb084bf543f-v4-0-config-system-service-ca\") pod 
\"oauth-openshift-86477f577f-glgzr\" (UID: \"28729487-1d7c-4837-961a-6cb084bf543f\") " pod="openshift-authentication/oauth-openshift-86477f577f-glgzr" Mar 13 12:54:37.778682 master-0 kubenswrapper[28149]: I0313 12:54:37.777674 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/28729487-1d7c-4837-961a-6cb084bf543f-v4-0-config-user-template-login\") pod \"oauth-openshift-86477f577f-glgzr\" (UID: \"28729487-1d7c-4837-961a-6cb084bf543f\") " pod="openshift-authentication/oauth-openshift-86477f577f-glgzr" Mar 13 12:54:37.778682 master-0 kubenswrapper[28149]: I0313 12:54:37.777712 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/28729487-1d7c-4837-961a-6cb084bf543f-audit-dir\") pod \"oauth-openshift-86477f577f-glgzr\" (UID: \"28729487-1d7c-4837-961a-6cb084bf543f\") " pod="openshift-authentication/oauth-openshift-86477f577f-glgzr" Mar 13 12:54:37.778682 master-0 kubenswrapper[28149]: I0313 12:54:37.777750 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/28729487-1d7c-4837-961a-6cb084bf543f-v4-0-config-system-session\") pod \"oauth-openshift-86477f577f-glgzr\" (UID: \"28729487-1d7c-4837-961a-6cb084bf543f\") " pod="openshift-authentication/oauth-openshift-86477f577f-glgzr" Mar 13 12:54:37.778682 master-0 kubenswrapper[28149]: I0313 12:54:37.777780 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/28729487-1d7c-4837-961a-6cb084bf543f-v4-0-config-system-router-certs\") pod \"oauth-openshift-86477f577f-glgzr\" (UID: \"28729487-1d7c-4837-961a-6cb084bf543f\") " 
pod="openshift-authentication/oauth-openshift-86477f577f-glgzr" Mar 13 12:54:37.778682 master-0 kubenswrapper[28149]: I0313 12:54:37.777823 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/28729487-1d7c-4837-961a-6cb084bf543f-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-86477f577f-glgzr\" (UID: \"28729487-1d7c-4837-961a-6cb084bf543f\") " pod="openshift-authentication/oauth-openshift-86477f577f-glgzr" Mar 13 12:54:37.778682 master-0 kubenswrapper[28149]: I0313 12:54:37.777867 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/28729487-1d7c-4837-961a-6cb084bf543f-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-86477f577f-glgzr\" (UID: \"28729487-1d7c-4837-961a-6cb084bf543f\") " pod="openshift-authentication/oauth-openshift-86477f577f-glgzr" Mar 13 12:54:37.778682 master-0 kubenswrapper[28149]: I0313 12:54:37.777897 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/28729487-1d7c-4837-961a-6cb084bf543f-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-86477f577f-glgzr\" (UID: \"28729487-1d7c-4837-961a-6cb084bf543f\") " pod="openshift-authentication/oauth-openshift-86477f577f-glgzr" Mar 13 12:54:37.778682 master-0 kubenswrapper[28149]: I0313 12:54:37.777923 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/28729487-1d7c-4837-961a-6cb084bf543f-v4-0-config-system-cliconfig\") pod \"oauth-openshift-86477f577f-glgzr\" (UID: \"28729487-1d7c-4837-961a-6cb084bf543f\") " 
pod="openshift-authentication/oauth-openshift-86477f577f-glgzr" Mar 13 12:54:37.785513 master-0 kubenswrapper[28149]: I0313 12:54:37.785475 28149 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Mar 13 12:54:37.879221 master-0 kubenswrapper[28149]: I0313 12:54:37.879161 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/28729487-1d7c-4837-961a-6cb084bf543f-v4-0-config-system-session\") pod \"oauth-openshift-86477f577f-glgzr\" (UID: \"28729487-1d7c-4837-961a-6cb084bf543f\") " pod="openshift-authentication/oauth-openshift-86477f577f-glgzr" Mar 13 12:54:37.879440 master-0 kubenswrapper[28149]: I0313 12:54:37.879233 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/28729487-1d7c-4837-961a-6cb084bf543f-v4-0-config-system-router-certs\") pod \"oauth-openshift-86477f577f-glgzr\" (UID: \"28729487-1d7c-4837-961a-6cb084bf543f\") " pod="openshift-authentication/oauth-openshift-86477f577f-glgzr" Mar 13 12:54:37.879440 master-0 kubenswrapper[28149]: I0313 12:54:37.879262 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/28729487-1d7c-4837-961a-6cb084bf543f-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-86477f577f-glgzr\" (UID: \"28729487-1d7c-4837-961a-6cb084bf543f\") " pod="openshift-authentication/oauth-openshift-86477f577f-glgzr" Mar 13 12:54:37.879440 master-0 kubenswrapper[28149]: I0313 12:54:37.879295 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/28729487-1d7c-4837-961a-6cb084bf543f-v4-0-config-system-trusted-ca-bundle\") pod 
\"oauth-openshift-86477f577f-glgzr\" (UID: \"28729487-1d7c-4837-961a-6cb084bf543f\") " pod="openshift-authentication/oauth-openshift-86477f577f-glgzr" Mar 13 12:54:37.879440 master-0 kubenswrapper[28149]: I0313 12:54:37.879321 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/28729487-1d7c-4837-961a-6cb084bf543f-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-86477f577f-glgzr\" (UID: \"28729487-1d7c-4837-961a-6cb084bf543f\") " pod="openshift-authentication/oauth-openshift-86477f577f-glgzr" Mar 13 12:54:37.879440 master-0 kubenswrapper[28149]: I0313 12:54:37.879357 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/28729487-1d7c-4837-961a-6cb084bf543f-v4-0-config-system-cliconfig\") pod \"oauth-openshift-86477f577f-glgzr\" (UID: \"28729487-1d7c-4837-961a-6cb084bf543f\") " pod="openshift-authentication/oauth-openshift-86477f577f-glgzr" Mar 13 12:54:37.879440 master-0 kubenswrapper[28149]: I0313 12:54:37.879387 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/28729487-1d7c-4837-961a-6cb084bf543f-audit-policies\") pod \"oauth-openshift-86477f577f-glgzr\" (UID: \"28729487-1d7c-4837-961a-6cb084bf543f\") " pod="openshift-authentication/oauth-openshift-86477f577f-glgzr" Mar 13 12:54:37.879440 master-0 kubenswrapper[28149]: I0313 12:54:37.879415 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f8rk6\" (UniqueName: \"kubernetes.io/projected/28729487-1d7c-4837-961a-6cb084bf543f-kube-api-access-f8rk6\") pod \"oauth-openshift-86477f577f-glgzr\" (UID: \"28729487-1d7c-4837-961a-6cb084bf543f\") " pod="openshift-authentication/oauth-openshift-86477f577f-glgzr" Mar 13 12:54:37.879440 master-0 
kubenswrapper[28149]: I0313 12:54:37.879438 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/28729487-1d7c-4837-961a-6cb084bf543f-v4-0-config-user-template-error\") pod \"oauth-openshift-86477f577f-glgzr\" (UID: \"28729487-1d7c-4837-961a-6cb084bf543f\") " pod="openshift-authentication/oauth-openshift-86477f577f-glgzr" Mar 13 12:54:37.879693 master-0 kubenswrapper[28149]: I0313 12:54:37.879462 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/28729487-1d7c-4837-961a-6cb084bf543f-v4-0-config-system-serving-cert\") pod \"oauth-openshift-86477f577f-glgzr\" (UID: \"28729487-1d7c-4837-961a-6cb084bf543f\") " pod="openshift-authentication/oauth-openshift-86477f577f-glgzr" Mar 13 12:54:37.879693 master-0 kubenswrapper[28149]: I0313 12:54:37.879482 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/28729487-1d7c-4837-961a-6cb084bf543f-v4-0-config-system-service-ca\") pod \"oauth-openshift-86477f577f-glgzr\" (UID: \"28729487-1d7c-4837-961a-6cb084bf543f\") " pod="openshift-authentication/oauth-openshift-86477f577f-glgzr" Mar 13 12:54:37.879693 master-0 kubenswrapper[28149]: I0313 12:54:37.879516 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/28729487-1d7c-4837-961a-6cb084bf543f-v4-0-config-user-template-login\") pod \"oauth-openshift-86477f577f-glgzr\" (UID: \"28729487-1d7c-4837-961a-6cb084bf543f\") " pod="openshift-authentication/oauth-openshift-86477f577f-glgzr" Mar 13 12:54:37.879693 master-0 kubenswrapper[28149]: I0313 12:54:37.879556 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: 
\"kubernetes.io/host-path/28729487-1d7c-4837-961a-6cb084bf543f-audit-dir\") pod \"oauth-openshift-86477f577f-glgzr\" (UID: \"28729487-1d7c-4837-961a-6cb084bf543f\") " pod="openshift-authentication/oauth-openshift-86477f577f-glgzr" Mar 13 12:54:37.879693 master-0 kubenswrapper[28149]: I0313 12:54:37.879672 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/28729487-1d7c-4837-961a-6cb084bf543f-audit-dir\") pod \"oauth-openshift-86477f577f-glgzr\" (UID: \"28729487-1d7c-4837-961a-6cb084bf543f\") " pod="openshift-authentication/oauth-openshift-86477f577f-glgzr" Mar 13 12:54:37.880341 master-0 kubenswrapper[28149]: I0313 12:54:37.880309 28149 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Mar 13 12:54:37.882066 master-0 kubenswrapper[28149]: I0313 12:54:37.882021 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/28729487-1d7c-4837-961a-6cb084bf543f-v4-0-config-system-service-ca\") pod \"oauth-openshift-86477f577f-glgzr\" (UID: \"28729487-1d7c-4837-961a-6cb084bf543f\") " pod="openshift-authentication/oauth-openshift-86477f577f-glgzr" Mar 13 12:54:37.882315 master-0 kubenswrapper[28149]: I0313 12:54:37.882288 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/28729487-1d7c-4837-961a-6cb084bf543f-audit-policies\") pod \"oauth-openshift-86477f577f-glgzr\" (UID: \"28729487-1d7c-4837-961a-6cb084bf543f\") " pod="openshift-authentication/oauth-openshift-86477f577f-glgzr" Mar 13 12:54:37.882757 master-0 kubenswrapper[28149]: I0313 12:54:37.882725 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: 
\"kubernetes.io/configmap/28729487-1d7c-4837-961a-6cb084bf543f-v4-0-config-system-cliconfig\") pod \"oauth-openshift-86477f577f-glgzr\" (UID: \"28729487-1d7c-4837-961a-6cb084bf543f\") " pod="openshift-authentication/oauth-openshift-86477f577f-glgzr" Mar 13 12:54:37.885629 master-0 kubenswrapper[28149]: I0313 12:54:37.884146 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/28729487-1d7c-4837-961a-6cb084bf543f-v4-0-config-user-template-login\") pod \"oauth-openshift-86477f577f-glgzr\" (UID: \"28729487-1d7c-4837-961a-6cb084bf543f\") " pod="openshift-authentication/oauth-openshift-86477f577f-glgzr" Mar 13 12:54:37.885629 master-0 kubenswrapper[28149]: I0313 12:54:37.884233 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/28729487-1d7c-4837-961a-6cb084bf543f-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-86477f577f-glgzr\" (UID: \"28729487-1d7c-4837-961a-6cb084bf543f\") " pod="openshift-authentication/oauth-openshift-86477f577f-glgzr" Mar 13 12:54:37.885629 master-0 kubenswrapper[28149]: I0313 12:54:37.884251 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/28729487-1d7c-4837-961a-6cb084bf543f-v4-0-config-system-serving-cert\") pod \"oauth-openshift-86477f577f-glgzr\" (UID: \"28729487-1d7c-4837-961a-6cb084bf543f\") " pod="openshift-authentication/oauth-openshift-86477f577f-glgzr" Mar 13 12:54:37.885629 master-0 kubenswrapper[28149]: I0313 12:54:37.884290 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/28729487-1d7c-4837-961a-6cb084bf543f-v4-0-config-user-template-error\") pod \"oauth-openshift-86477f577f-glgzr\" (UID: \"28729487-1d7c-4837-961a-6cb084bf543f\") " 
pod="openshift-authentication/oauth-openshift-86477f577f-glgzr" Mar 13 12:54:37.888181 master-0 kubenswrapper[28149]: I0313 12:54:37.887768 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/28729487-1d7c-4837-961a-6cb084bf543f-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-86477f577f-glgzr\" (UID: \"28729487-1d7c-4837-961a-6cb084bf543f\") " pod="openshift-authentication/oauth-openshift-86477f577f-glgzr" Mar 13 12:54:37.888181 master-0 kubenswrapper[28149]: I0313 12:54:37.887911 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/28729487-1d7c-4837-961a-6cb084bf543f-v4-0-config-system-router-certs\") pod \"oauth-openshift-86477f577f-glgzr\" (UID: \"28729487-1d7c-4837-961a-6cb084bf543f\") " pod="openshift-authentication/oauth-openshift-86477f577f-glgzr" Mar 13 12:54:37.902709 master-0 kubenswrapper[28149]: I0313 12:54:37.902660 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/28729487-1d7c-4837-961a-6cb084bf543f-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-86477f577f-glgzr\" (UID: \"28729487-1d7c-4837-961a-6cb084bf543f\") " pod="openshift-authentication/oauth-openshift-86477f577f-glgzr" Mar 13 12:54:37.902899 master-0 kubenswrapper[28149]: I0313 12:54:37.902783 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/28729487-1d7c-4837-961a-6cb084bf543f-v4-0-config-system-session\") pod \"oauth-openshift-86477f577f-glgzr\" (UID: \"28729487-1d7c-4837-961a-6cb084bf543f\") " pod="openshift-authentication/oauth-openshift-86477f577f-glgzr" Mar 13 12:54:37.906710 master-0 kubenswrapper[28149]: I0313 12:54:37.906672 28149 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-f8rk6\" (UniqueName: \"kubernetes.io/projected/28729487-1d7c-4837-961a-6cb084bf543f-kube-api-access-f8rk6\") pod \"oauth-openshift-86477f577f-glgzr\" (UID: \"28729487-1d7c-4837-961a-6cb084bf543f\") " pod="openshift-authentication/oauth-openshift-86477f577f-glgzr" Mar 13 12:54:38.083281 master-0 kubenswrapper[28149]: I0313 12:54:38.061468 28149 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-86477f577f-glgzr" Mar 13 12:54:39.307393 master-0 kubenswrapper[28149]: I0313 12:54:39.307353 28149 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-86477f577f-glgzr"] Mar 13 12:54:39.316621 master-0 kubenswrapper[28149]: W0313 12:54:39.316567 28149 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod28729487_1d7c_4837_961a_6cb084bf543f.slice/crio-b78ad4cc1e97ab6e52572a4afa95b4ec08ab81f6152613180584943e40583333 WatchSource:0}: Error finding container b78ad4cc1e97ab6e52572a4afa95b4ec08ab81f6152613180584943e40583333: Status 404 returned error can't find the container with id b78ad4cc1e97ab6e52572a4afa95b4ec08ab81f6152613180584943e40583333 Mar 13 12:54:39.319212 master-0 kubenswrapper[28149]: I0313 12:54:39.318880 28149 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Mar 13 12:54:39.597566 master-0 kubenswrapper[28149]: I0313 12:54:39.597430 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-86477f577f-glgzr" event={"ID":"28729487-1d7c-4837-961a-6cb084bf543f","Type":"ContainerStarted","Data":"b78ad4cc1e97ab6e52572a4afa95b4ec08ab81f6152613180584943e40583333"} Mar 13 12:54:42.663330 master-0 kubenswrapper[28149]: I0313 12:54:42.663262 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-authentication/oauth-openshift-86477f577f-glgzr" event={"ID":"28729487-1d7c-4837-961a-6cb084bf543f","Type":"ContainerStarted","Data":"b0611dc8723d0311cc21d4c09c05c710b9bc164a943da5cca522b147cfaa1608"} Mar 13 12:54:42.664342 master-0 kubenswrapper[28149]: I0313 12:54:42.664294 28149 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-86477f577f-glgzr" Mar 13 12:54:42.673476 master-0 kubenswrapper[28149]: I0313 12:54:42.673436 28149 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-86477f577f-glgzr" Mar 13 12:54:42.701062 master-0 kubenswrapper[28149]: I0313 12:54:42.695647 28149 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-86477f577f-glgzr" podStartSLOduration=3.374938288 podStartE2EDuration="5.695610805s" podCreationTimestamp="2026-03-13 12:54:37 +0000 UTC" firstStartedPulling="2026-03-13 12:54:39.31875047 +0000 UTC m=+52.972215629" lastFinishedPulling="2026-03-13 12:54:41.639422987 +0000 UTC m=+55.292888146" observedRunningTime="2026-03-13 12:54:42.69129303 +0000 UTC m=+56.344758219" watchObservedRunningTime="2026-03-13 12:54:42.695610805 +0000 UTC m=+56.349075994" Mar 13 12:54:44.136229 master-0 kubenswrapper[28149]: I0313 12:54:44.136169 28149 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/installer-5-master-0"] Mar 13 12:54:44.137915 master-0 kubenswrapper[28149]: I0313 12:54:44.137330 28149 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-5-master-0" Mar 13 12:54:44.140590 master-0 kubenswrapper[28149]: I0313 12:54:44.140495 28149 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-5-master-0"] Mar 13 12:54:44.140852 master-0 kubenswrapper[28149]: I0313 12:54:44.140542 28149 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Mar 13 12:54:44.143368 master-0 kubenswrapper[28149]: I0313 12:54:44.143323 28149 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-x9hp4" Mar 13 12:54:44.171803 master-0 kubenswrapper[28149]: I0313 12:54:44.171201 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/488104cb-a17e-4a12-824b-25aefadae86c-kube-api-access\") pod \"installer-5-master-0\" (UID: \"488104cb-a17e-4a12-824b-25aefadae86c\") " pod="openshift-kube-apiserver/installer-5-master-0" Mar 13 12:54:44.172091 master-0 kubenswrapper[28149]: I0313 12:54:44.171963 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/488104cb-a17e-4a12-824b-25aefadae86c-var-lock\") pod \"installer-5-master-0\" (UID: \"488104cb-a17e-4a12-824b-25aefadae86c\") " pod="openshift-kube-apiserver/installer-5-master-0" Mar 13 12:54:44.172091 master-0 kubenswrapper[28149]: I0313 12:54:44.172010 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/488104cb-a17e-4a12-824b-25aefadae86c-kubelet-dir\") pod \"installer-5-master-0\" (UID: \"488104cb-a17e-4a12-824b-25aefadae86c\") " pod="openshift-kube-apiserver/installer-5-master-0" Mar 13 12:54:44.272688 master-0 kubenswrapper[28149]: I0313 12:54:44.272625 28149 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/488104cb-a17e-4a12-824b-25aefadae86c-var-lock\") pod \"installer-5-master-0\" (UID: \"488104cb-a17e-4a12-824b-25aefadae86c\") " pod="openshift-kube-apiserver/installer-5-master-0" Mar 13 12:54:44.272688 master-0 kubenswrapper[28149]: I0313 12:54:44.272673 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/488104cb-a17e-4a12-824b-25aefadae86c-kubelet-dir\") pod \"installer-5-master-0\" (UID: \"488104cb-a17e-4a12-824b-25aefadae86c\") " pod="openshift-kube-apiserver/installer-5-master-0" Mar 13 12:54:44.272965 master-0 kubenswrapper[28149]: I0313 12:54:44.272779 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/488104cb-a17e-4a12-824b-25aefadae86c-kube-api-access\") pod \"installer-5-master-0\" (UID: \"488104cb-a17e-4a12-824b-25aefadae86c\") " pod="openshift-kube-apiserver/installer-5-master-0" Mar 13 12:54:44.272965 master-0 kubenswrapper[28149]: I0313 12:54:44.272788 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/488104cb-a17e-4a12-824b-25aefadae86c-var-lock\") pod \"installer-5-master-0\" (UID: \"488104cb-a17e-4a12-824b-25aefadae86c\") " pod="openshift-kube-apiserver/installer-5-master-0" Mar 13 12:54:44.273069 master-0 kubenswrapper[28149]: I0313 12:54:44.272983 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/488104cb-a17e-4a12-824b-25aefadae86c-kubelet-dir\") pod \"installer-5-master-0\" (UID: \"488104cb-a17e-4a12-824b-25aefadae86c\") " pod="openshift-kube-apiserver/installer-5-master-0" Mar 13 12:54:44.289258 master-0 kubenswrapper[28149]: I0313 12:54:44.289204 28149 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/488104cb-a17e-4a12-824b-25aefadae86c-kube-api-access\") pod \"installer-5-master-0\" (UID: \"488104cb-a17e-4a12-824b-25aefadae86c\") " pod="openshift-kube-apiserver/installer-5-master-0" Mar 13 12:54:44.465043 master-0 kubenswrapper[28149]: I0313 12:54:44.464860 28149 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-5-master-0" Mar 13 12:54:45.348545 master-0 kubenswrapper[28149]: I0313 12:54:45.345629 28149 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-5-master-0"] Mar 13 12:54:45.354931 master-0 kubenswrapper[28149]: W0313 12:54:45.354875 28149 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod488104cb_a17e_4a12_824b_25aefadae86c.slice/crio-ac4adefee61a7bdc7107759e85ff59c2b08d2d901f9d5e8c66d9c3b86b8d9018 WatchSource:0}: Error finding container ac4adefee61a7bdc7107759e85ff59c2b08d2d901f9d5e8c66d9c3b86b8d9018: Status 404 returned error can't find the container with id ac4adefee61a7bdc7107759e85ff59c2b08d2d901f9d5e8c66d9c3b86b8d9018 Mar 13 12:54:45.754748 master-0 kubenswrapper[28149]: I0313 12:54:45.754689 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-5-master-0" event={"ID":"488104cb-a17e-4a12-824b-25aefadae86c","Type":"ContainerStarted","Data":"6ae7eb80f1a9eeed6dd3d90b3b67676578f8e9bd553e67bb6f4494ba95b102c6"} Mar 13 12:54:45.755096 master-0 kubenswrapper[28149]: I0313 12:54:45.754762 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-5-master-0" event={"ID":"488104cb-a17e-4a12-824b-25aefadae86c","Type":"ContainerStarted","Data":"ac4adefee61a7bdc7107759e85ff59c2b08d2d901f9d5e8c66d9c3b86b8d9018"} Mar 13 12:54:45.782075 master-0 kubenswrapper[28149]: I0313 12:54:45.781965 28149 pod_startup_latency_tracker.go:104] "Observed pod 
startup duration" pod="openshift-kube-apiserver/installer-5-master-0" podStartSLOduration=1.781940232 podStartE2EDuration="1.781940232s" podCreationTimestamp="2026-03-13 12:54:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 12:54:45.774794586 +0000 UTC m=+59.428259765" watchObservedRunningTime="2026-03-13 12:54:45.781940232 +0000 UTC m=+59.435405411" Mar 13 12:54:59.319065 master-0 kubenswrapper[28149]: I0313 12:54:59.318998 28149 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-apiserver/installer-5-master-0"] Mar 13 12:54:59.324338 master-0 kubenswrapper[28149]: I0313 12:54:59.319435 28149 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/installer-5-master-0" podUID="488104cb-a17e-4a12-824b-25aefadae86c" containerName="installer" containerID="cri-o://6ae7eb80f1a9eeed6dd3d90b3b67676578f8e9bd553e67bb6f4494ba95b102c6" gracePeriod=30 Mar 13 12:55:01.134478 master-0 kubenswrapper[28149]: I0313 12:55:01.134335 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/185a10f7-2a4b-4171-b10d-4614cb8671bd-kube-api-access\") pod \"installer-4-master-0\" (UID: \"185a10f7-2a4b-4171-b10d-4614cb8671bd\") " pod="openshift-kube-apiserver/installer-4-master-0" Mar 13 12:55:01.138689 master-0 kubenswrapper[28149]: I0313 12:55:01.138629 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/185a10f7-2a4b-4171-b10d-4614cb8671bd-kube-api-access\") pod \"installer-4-master-0\" (UID: \"185a10f7-2a4b-4171-b10d-4614cb8671bd\") " pod="openshift-kube-apiserver/installer-4-master-0" Mar 13 12:55:01.236100 master-0 kubenswrapper[28149]: I0313 12:55:01.236001 28149 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" 
(UniqueName: \"kubernetes.io/projected/185a10f7-2a4b-4171-b10d-4614cb8671bd-kube-api-access\") pod \"185a10f7-2a4b-4171-b10d-4614cb8671bd\" (UID: \"185a10f7-2a4b-4171-b10d-4614cb8671bd\") " Mar 13 12:55:01.240959 master-0 kubenswrapper[28149]: I0313 12:55:01.240842 28149 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/185a10f7-2a4b-4171-b10d-4614cb8671bd-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "185a10f7-2a4b-4171-b10d-4614cb8671bd" (UID: "185a10f7-2a4b-4171-b10d-4614cb8671bd"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 12:55:01.337677 master-0 kubenswrapper[28149]: I0313 12:55:01.337590 28149 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/185a10f7-2a4b-4171-b10d-4614cb8671bd-kube-api-access\") on node \"master-0\" DevicePath \"\"" Mar 13 12:55:02.398035 master-0 kubenswrapper[28149]: I0313 12:55:02.394570 28149 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/installer-6-master-0"] Mar 13 12:55:02.398035 master-0 kubenswrapper[28149]: I0313 12:55:02.395716 28149 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-6-master-0" Mar 13 12:55:02.408506 master-0 kubenswrapper[28149]: I0313 12:55:02.405439 28149 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-6-master-0"] Mar 13 12:55:02.587380 master-0 kubenswrapper[28149]: I0313 12:55:02.587297 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/316953f3-5e6c-4aaf-802d-85959f7d7760-kubelet-dir\") pod \"installer-6-master-0\" (UID: \"316953f3-5e6c-4aaf-802d-85959f7d7760\") " pod="openshift-kube-apiserver/installer-6-master-0" Mar 13 12:55:02.587380 master-0 kubenswrapper[28149]: I0313 12:55:02.587368 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/316953f3-5e6c-4aaf-802d-85959f7d7760-kube-api-access\") pod \"installer-6-master-0\" (UID: \"316953f3-5e6c-4aaf-802d-85959f7d7760\") " pod="openshift-kube-apiserver/installer-6-master-0" Mar 13 12:55:02.587665 master-0 kubenswrapper[28149]: I0313 12:55:02.587444 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/316953f3-5e6c-4aaf-802d-85959f7d7760-var-lock\") pod \"installer-6-master-0\" (UID: \"316953f3-5e6c-4aaf-802d-85959f7d7760\") " pod="openshift-kube-apiserver/installer-6-master-0" Mar 13 12:55:02.688801 master-0 kubenswrapper[28149]: I0313 12:55:02.688632 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/316953f3-5e6c-4aaf-802d-85959f7d7760-var-lock\") pod \"installer-6-master-0\" (UID: \"316953f3-5e6c-4aaf-802d-85959f7d7760\") " pod="openshift-kube-apiserver/installer-6-master-0" Mar 13 12:55:02.689049 master-0 kubenswrapper[28149]: I0313 12:55:02.688972 28149 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/316953f3-5e6c-4aaf-802d-85959f7d7760-var-lock\") pod \"installer-6-master-0\" (UID: \"316953f3-5e6c-4aaf-802d-85959f7d7760\") " pod="openshift-kube-apiserver/installer-6-master-0" Mar 13 12:55:02.689208 master-0 kubenswrapper[28149]: I0313 12:55:02.689188 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/316953f3-5e6c-4aaf-802d-85959f7d7760-kubelet-dir\") pod \"installer-6-master-0\" (UID: \"316953f3-5e6c-4aaf-802d-85959f7d7760\") " pod="openshift-kube-apiserver/installer-6-master-0" Mar 13 12:55:02.689304 master-0 kubenswrapper[28149]: I0313 12:55:02.689220 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/316953f3-5e6c-4aaf-802d-85959f7d7760-kube-api-access\") pod \"installer-6-master-0\" (UID: \"316953f3-5e6c-4aaf-802d-85959f7d7760\") " pod="openshift-kube-apiserver/installer-6-master-0" Mar 13 12:55:02.689372 master-0 kubenswrapper[28149]: I0313 12:55:02.689292 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/316953f3-5e6c-4aaf-802d-85959f7d7760-kubelet-dir\") pod \"installer-6-master-0\" (UID: \"316953f3-5e6c-4aaf-802d-85959f7d7760\") " pod="openshift-kube-apiserver/installer-6-master-0" Mar 13 12:55:02.710597 master-0 kubenswrapper[28149]: I0313 12:55:02.710534 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/316953f3-5e6c-4aaf-802d-85959f7d7760-kube-api-access\") pod \"installer-6-master-0\" (UID: \"316953f3-5e6c-4aaf-802d-85959f7d7760\") " pod="openshift-kube-apiserver/installer-6-master-0" Mar 13 12:55:02.716361 master-0 kubenswrapper[28149]: I0313 12:55:02.716310 28149 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-6-master-0" Mar 13 12:55:03.200509 master-0 kubenswrapper[28149]: I0313 12:55:03.200452 28149 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-6-master-0"] Mar 13 12:55:03.904888 master-0 kubenswrapper[28149]: I0313 12:55:03.904738 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-6-master-0" event={"ID":"316953f3-5e6c-4aaf-802d-85959f7d7760","Type":"ContainerStarted","Data":"2a696bbc93be8821608da3fbec10d6e34a22ab7edf48aaab160a9a0deb5f590b"} Mar 13 12:55:03.904888 master-0 kubenswrapper[28149]: I0313 12:55:03.904826 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-6-master-0" event={"ID":"316953f3-5e6c-4aaf-802d-85959f7d7760","Type":"ContainerStarted","Data":"46ebc7ccc75ee9021060034d3cd0402817d0ce910ad88ed144681d72cd2e24ac"} Mar 13 12:55:03.942393 master-0 kubenswrapper[28149]: I0313 12:55:03.939328 28149 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/installer-6-master-0" podStartSLOduration=1.939300318 podStartE2EDuration="1.939300318s" podCreationTimestamp="2026-03-13 12:55:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 12:55:03.933594694 +0000 UTC m=+77.587059873" watchObservedRunningTime="2026-03-13 12:55:03.939300318 +0000 UTC m=+77.592765517" Mar 13 12:55:17.155779 master-0 kubenswrapper[28149]: I0313 12:55:17.155704 28149 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_installer-5-master-0_488104cb-a17e-4a12-824b-25aefadae86c/installer/0.log" Mar 13 12:55:17.156512 master-0 kubenswrapper[28149]: I0313 12:55:17.155861 28149 generic.go:334] "Generic (PLEG): container finished" podID="488104cb-a17e-4a12-824b-25aefadae86c" 
containerID="6ae7eb80f1a9eeed6dd3d90b3b67676578f8e9bd553e67bb6f4494ba95b102c6" exitCode=1 Mar 13 12:55:17.156512 master-0 kubenswrapper[28149]: I0313 12:55:17.155933 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-5-master-0" event={"ID":"488104cb-a17e-4a12-824b-25aefadae86c","Type":"ContainerDied","Data":"6ae7eb80f1a9eeed6dd3d90b3b67676578f8e9bd553e67bb6f4494ba95b102c6"} Mar 13 12:55:17.764794 master-0 kubenswrapper[28149]: I0313 12:55:17.764698 28149 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_installer-5-master-0_488104cb-a17e-4a12-824b-25aefadae86c/installer/0.log" Mar 13 12:55:17.764794 master-0 kubenswrapper[28149]: I0313 12:55:17.764776 28149 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-5-master-0" Mar 13 12:55:17.821966 master-0 kubenswrapper[28149]: I0313 12:55:17.821864 28149 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/488104cb-a17e-4a12-824b-25aefadae86c-kubelet-dir\") pod \"488104cb-a17e-4a12-824b-25aefadae86c\" (UID: \"488104cb-a17e-4a12-824b-25aefadae86c\") " Mar 13 12:55:17.822214 master-0 kubenswrapper[28149]: I0313 12:55:17.821984 28149 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/488104cb-a17e-4a12-824b-25aefadae86c-var-lock\") pod \"488104cb-a17e-4a12-824b-25aefadae86c\" (UID: \"488104cb-a17e-4a12-824b-25aefadae86c\") " Mar 13 12:55:17.822214 master-0 kubenswrapper[28149]: I0313 12:55:17.822013 28149 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/488104cb-a17e-4a12-824b-25aefadae86c-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "488104cb-a17e-4a12-824b-25aefadae86c" (UID: "488104cb-a17e-4a12-824b-25aefadae86c"). InnerVolumeSpecName "kubelet-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 13 12:55:17.822214 master-0 kubenswrapper[28149]: I0313 12:55:17.822084 28149 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/488104cb-a17e-4a12-824b-25aefadae86c-kube-api-access\") pod \"488104cb-a17e-4a12-824b-25aefadae86c\" (UID: \"488104cb-a17e-4a12-824b-25aefadae86c\") " Mar 13 12:55:17.822214 master-0 kubenswrapper[28149]: I0313 12:55:17.822163 28149 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/488104cb-a17e-4a12-824b-25aefadae86c-var-lock" (OuterVolumeSpecName: "var-lock") pod "488104cb-a17e-4a12-824b-25aefadae86c" (UID: "488104cb-a17e-4a12-824b-25aefadae86c"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 13 12:55:17.823865 master-0 kubenswrapper[28149]: I0313 12:55:17.823833 28149 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/488104cb-a17e-4a12-824b-25aefadae86c-kubelet-dir\") on node \"master-0\" DevicePath \"\"" Mar 13 12:55:17.823865 master-0 kubenswrapper[28149]: I0313 12:55:17.823864 28149 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/488104cb-a17e-4a12-824b-25aefadae86c-var-lock\") on node \"master-0\" DevicePath \"\"" Mar 13 12:55:17.825236 master-0 kubenswrapper[28149]: I0313 12:55:17.825200 28149 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/488104cb-a17e-4a12-824b-25aefadae86c-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "488104cb-a17e-4a12-824b-25aefadae86c" (UID: "488104cb-a17e-4a12-824b-25aefadae86c"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 12:55:17.925370 master-0 kubenswrapper[28149]: I0313 12:55:17.925303 28149 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/488104cb-a17e-4a12-824b-25aefadae86c-kube-api-access\") on node \"master-0\" DevicePath \"\"" Mar 13 12:55:18.166842 master-0 kubenswrapper[28149]: I0313 12:55:18.166806 28149 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_installer-5-master-0_488104cb-a17e-4a12-824b-25aefadae86c/installer/0.log" Mar 13 12:55:18.167398 master-0 kubenswrapper[28149]: I0313 12:55:18.166888 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-5-master-0" event={"ID":"488104cb-a17e-4a12-824b-25aefadae86c","Type":"ContainerDied","Data":"ac4adefee61a7bdc7107759e85ff59c2b08d2d901f9d5e8c66d9c3b86b8d9018"} Mar 13 12:55:18.167398 master-0 kubenswrapper[28149]: I0313 12:55:18.166973 28149 scope.go:117] "RemoveContainer" containerID="6ae7eb80f1a9eeed6dd3d90b3b67676578f8e9bd553e67bb6f4494ba95b102c6" Mar 13 12:55:18.167398 master-0 kubenswrapper[28149]: I0313 12:55:18.166986 28149 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-5-master-0" Mar 13 12:55:18.216994 master-0 kubenswrapper[28149]: I0313 12:55:18.216943 28149 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-apiserver/installer-5-master-0"] Mar 13 12:55:18.221591 master-0 kubenswrapper[28149]: I0313 12:55:18.221448 28149 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/installer-5-master-0"] Mar 13 12:55:18.695177 master-0 kubenswrapper[28149]: I0313 12:55:18.695115 28149 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="488104cb-a17e-4a12-824b-25aefadae86c" path="/var/lib/kubelet/pods/488104cb-a17e-4a12-824b-25aefadae86c/volumes" Mar 13 12:55:46.739728 master-0 kubenswrapper[28149]: I0313 12:55:46.739688 28149 scope.go:117] "RemoveContainer" containerID="70ca563a3bda7cc49d130c71d95d6db991e5796cde50a910c3e63400c9e5a03b" Mar 13 12:56:01.758649 master-0 kubenswrapper[28149]: I0313 12:56:01.758579 28149 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-master-0"] Mar 13 12:56:01.759412 master-0 kubenswrapper[28149]: I0313 12:56:01.759053 28149 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="48512e02022680c9d90092634f0fc146" containerName="kube-apiserver-check-endpoints" containerID="cri-o://49a3e19d955348f7e8d6cddcf11b1118a4c6f32a3b5d7a34d5989aaa73b1262c" gracePeriod=15 Mar 13 12:56:01.759412 master-0 kubenswrapper[28149]: I0313 12:56:01.759121 28149 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="48512e02022680c9d90092634f0fc146" containerName="kube-apiserver" containerID="cri-o://c16c28a17a2035273ad3cbe98ed9a765284a80f578c8eb0748ccdf8c0dbcc66a" gracePeriod=15 Mar 13 12:56:01.759412 master-0 kubenswrapper[28149]: I0313 12:56:01.759057 28149 kuberuntime_container.go:808] "Killing 
container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="48512e02022680c9d90092634f0fc146" containerName="kube-apiserver-insecure-readyz" containerID="cri-o://52264b4378a4f3ba83334945450ce98ac9bedab1c6c9485cb885bc9488d52471" gracePeriod=15 Mar 13 12:56:01.759573 master-0 kubenswrapper[28149]: I0313 12:56:01.759055 28149 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="48512e02022680c9d90092634f0fc146" containerName="kube-apiserver-cert-regeneration-controller" containerID="cri-o://8ea4f4f1bc69f85c977580ddac21514a71e7c8a91de12b17cbd00d640490e4d3" gracePeriod=15 Mar 13 12:56:01.760111 master-0 kubenswrapper[28149]: I0313 12:56:01.760072 28149 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-master-0"] Mar 13 12:56:01.760488 master-0 kubenswrapper[28149]: E0313 12:56:01.760453 28149 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="48512e02022680c9d90092634f0fc146" containerName="kube-apiserver-insecure-readyz" Mar 13 12:56:01.760488 master-0 kubenswrapper[28149]: I0313 12:56:01.760478 28149 state_mem.go:107] "Deleted CPUSet assignment" podUID="48512e02022680c9d90092634f0fc146" containerName="kube-apiserver-insecure-readyz" Mar 13 12:56:01.760592 master-0 kubenswrapper[28149]: E0313 12:56:01.760512 28149 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="48512e02022680c9d90092634f0fc146" containerName="kube-apiserver" Mar 13 12:56:01.760592 master-0 kubenswrapper[28149]: I0313 12:56:01.760520 28149 state_mem.go:107] "Deleted CPUSet assignment" podUID="48512e02022680c9d90092634f0fc146" containerName="kube-apiserver" Mar 13 12:56:01.760592 master-0 kubenswrapper[28149]: E0313 12:56:01.760531 28149 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="48512e02022680c9d90092634f0fc146" containerName="setup" Mar 13 12:56:01.760592 master-0 kubenswrapper[28149]: 
I0313 12:56:01.760537 28149 state_mem.go:107] "Deleted CPUSet assignment" podUID="48512e02022680c9d90092634f0fc146" containerName="setup" Mar 13 12:56:01.760592 master-0 kubenswrapper[28149]: E0313 12:56:01.760566 28149 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="48512e02022680c9d90092634f0fc146" containerName="kube-apiserver-check-endpoints" Mar 13 12:56:01.760592 master-0 kubenswrapper[28149]: I0313 12:56:01.760572 28149 state_mem.go:107] "Deleted CPUSet assignment" podUID="48512e02022680c9d90092634f0fc146" containerName="kube-apiserver-check-endpoints" Mar 13 12:56:01.760592 master-0 kubenswrapper[28149]: E0313 12:56:01.760585 28149 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="488104cb-a17e-4a12-824b-25aefadae86c" containerName="installer" Mar 13 12:56:01.760592 master-0 kubenswrapper[28149]: I0313 12:56:01.760592 28149 state_mem.go:107] "Deleted CPUSet assignment" podUID="488104cb-a17e-4a12-824b-25aefadae86c" containerName="installer" Mar 13 12:56:01.760923 master-0 kubenswrapper[28149]: E0313 12:56:01.760619 28149 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="48512e02022680c9d90092634f0fc146" containerName="kube-apiserver-check-endpoints" Mar 13 12:56:01.760923 master-0 kubenswrapper[28149]: I0313 12:56:01.760626 28149 state_mem.go:107] "Deleted CPUSet assignment" podUID="48512e02022680c9d90092634f0fc146" containerName="kube-apiserver-check-endpoints" Mar 13 12:56:01.760923 master-0 kubenswrapper[28149]: E0313 12:56:01.760636 28149 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="48512e02022680c9d90092634f0fc146" containerName="kube-apiserver-cert-regeneration-controller" Mar 13 12:56:01.760923 master-0 kubenswrapper[28149]: I0313 12:56:01.760644 28149 state_mem.go:107] "Deleted CPUSet assignment" podUID="48512e02022680c9d90092634f0fc146" containerName="kube-apiserver-cert-regeneration-controller" Mar 13 12:56:01.760923 master-0 kubenswrapper[28149]: E0313 12:56:01.760658 28149 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="48512e02022680c9d90092634f0fc146" containerName="kube-apiserver-cert-syncer" Mar 13 12:56:01.760923 master-0 kubenswrapper[28149]: I0313 12:56:01.760666 28149 state_mem.go:107] "Deleted CPUSet assignment" podUID="48512e02022680c9d90092634f0fc146" containerName="kube-apiserver-cert-syncer" Mar 13 12:56:01.760923 master-0 kubenswrapper[28149]: I0313 12:56:01.760799 28149 memory_manager.go:354] "RemoveStaleState removing state" podUID="48512e02022680c9d90092634f0fc146" containerName="kube-apiserver" Mar 13 12:56:01.760923 master-0 kubenswrapper[28149]: I0313 12:56:01.760841 28149 memory_manager.go:354] "RemoveStaleState removing state" podUID="48512e02022680c9d90092634f0fc146" containerName="kube-apiserver-insecure-readyz" Mar 13 12:56:01.760923 master-0 kubenswrapper[28149]: I0313 12:56:01.760879 28149 memory_manager.go:354] "RemoveStaleState removing state" podUID="488104cb-a17e-4a12-824b-25aefadae86c" containerName="installer" Mar 13 12:56:01.760923 master-0 kubenswrapper[28149]: I0313 12:56:01.760889 28149 memory_manager.go:354] "RemoveStaleState removing state" podUID="48512e02022680c9d90092634f0fc146" containerName="kube-apiserver-cert-syncer" Mar 13 12:56:01.760923 master-0 kubenswrapper[28149]: I0313 12:56:01.760902 28149 memory_manager.go:354] "RemoveStaleState removing state" podUID="48512e02022680c9d90092634f0fc146" containerName="kube-apiserver-check-endpoints" Mar 13 12:56:01.760923 master-0 kubenswrapper[28149]: I0313 12:56:01.760915 28149 memory_manager.go:354] "RemoveStaleState removing state" podUID="48512e02022680c9d90092634f0fc146" containerName="kube-apiserver-check-endpoints" Mar 13 12:56:01.760923 master-0 kubenswrapper[28149]: I0313 12:56:01.760932 28149 memory_manager.go:354] "RemoveStaleState removing state" podUID="48512e02022680c9d90092634f0fc146" containerName="kube-apiserver-cert-regeneration-controller" Mar 13 12:56:01.763034 master-0 kubenswrapper[28149]: I0313 
12:56:01.762682 28149 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"] Mar 13 12:56:01.763385 master-0 kubenswrapper[28149]: I0313 12:56:01.763337 28149 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 13 12:56:01.763838 master-0 kubenswrapper[28149]: I0313 12:56:01.759726 28149 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="48512e02022680c9d90092634f0fc146" containerName="kube-apiserver-cert-syncer" containerID="cri-o://b498f079133d2a2077770b172efd3507414d1897ced1774403305339c6337d85" gracePeriod=15 Mar 13 12:56:01.764165 master-0 kubenswrapper[28149]: I0313 12:56:01.764101 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/5dbd3d3755bd0f9e4667c2fcf3fcf07d-audit-dir\") pod \"kube-apiserver-master-0\" (UID: \"5dbd3d3755bd0f9e4667c2fcf3fcf07d\") " pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 13 12:56:01.764249 master-0 kubenswrapper[28149]: I0313 12:56:01.764175 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/5dbd3d3755bd0f9e4667c2fcf3fcf07d-cert-dir\") pod \"kube-apiserver-master-0\" (UID: \"5dbd3d3755bd0f9e4667c2fcf3fcf07d\") " pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 13 12:56:01.764299 master-0 kubenswrapper[28149]: I0313 12:56:01.764254 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/5dbd3d3755bd0f9e4667c2fcf3fcf07d-resource-dir\") pod \"kube-apiserver-master-0\" (UID: \"5dbd3d3755bd0f9e4667c2fcf3fcf07d\") " pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 13 
12:56:01.773516 master-0 kubenswrapper[28149]: I0313 12:56:01.773471 28149 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-master-0" oldPodUID="48512e02022680c9d90092634f0fc146" podUID="5dbd3d3755bd0f9e4667c2fcf3fcf07d" Mar 13 12:56:01.817867 master-0 kubenswrapper[28149]: I0313 12:56:01.816597 28149 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"] Mar 13 12:56:01.933209 master-0 kubenswrapper[28149]: I0313 12:56:01.929261 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/5dbd3d3755bd0f9e4667c2fcf3fcf07d-audit-dir\") pod \"kube-apiserver-master-0\" (UID: \"5dbd3d3755bd0f9e4667c2fcf3fcf07d\") " pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 13 12:56:01.933209 master-0 kubenswrapper[28149]: I0313 12:56:01.929353 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/b275ed7e9ce09d69a66613ca3ae3d89e-var-log\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"b275ed7e9ce09d69a66613ca3ae3d89e\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 13 12:56:01.933209 master-0 kubenswrapper[28149]: I0313 12:56:01.929383 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/5dbd3d3755bd0f9e4667c2fcf3fcf07d-cert-dir\") pod \"kube-apiserver-master-0\" (UID: \"5dbd3d3755bd0f9e4667c2fcf3fcf07d\") " pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 13 12:56:01.933209 master-0 kubenswrapper[28149]: I0313 12:56:01.929455 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/b275ed7e9ce09d69a66613ca3ae3d89e-var-lock\") pod 
\"kube-apiserver-startup-monitor-master-0\" (UID: \"b275ed7e9ce09d69a66613ca3ae3d89e\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 13 12:56:01.933209 master-0 kubenswrapper[28149]: I0313 12:56:01.929512 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/5dbd3d3755bd0f9e4667c2fcf3fcf07d-resource-dir\") pod \"kube-apiserver-master-0\" (UID: \"5dbd3d3755bd0f9e4667c2fcf3fcf07d\") " pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 13 12:56:01.933209 master-0 kubenswrapper[28149]: I0313 12:56:01.929583 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/b275ed7e9ce09d69a66613ca3ae3d89e-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"b275ed7e9ce09d69a66613ca3ae3d89e\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 13 12:56:01.933209 master-0 kubenswrapper[28149]: I0313 12:56:01.929628 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/b275ed7e9ce09d69a66613ca3ae3d89e-resource-dir\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"b275ed7e9ce09d69a66613ca3ae3d89e\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 13 12:56:01.933209 master-0 kubenswrapper[28149]: I0313 12:56:01.929669 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/b275ed7e9ce09d69a66613ca3ae3d89e-manifests\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"b275ed7e9ce09d69a66613ca3ae3d89e\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 13 12:56:01.933209 master-0 kubenswrapper[28149]: I0313 12:56:01.929683 28149 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/5dbd3d3755bd0f9e4667c2fcf3fcf07d-cert-dir\") pod \"kube-apiserver-master-0\" (UID: \"5dbd3d3755bd0f9e4667c2fcf3fcf07d\") " pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 13 12:56:01.933209 master-0 kubenswrapper[28149]: I0313 12:56:01.929632 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/5dbd3d3755bd0f9e4667c2fcf3fcf07d-resource-dir\") pod \"kube-apiserver-master-0\" (UID: \"5dbd3d3755bd0f9e4667c2fcf3fcf07d\") " pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 13 12:56:01.933209 master-0 kubenswrapper[28149]: I0313 12:56:01.929752 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/5dbd3d3755bd0f9e4667c2fcf3fcf07d-audit-dir\") pod \"kube-apiserver-master-0\" (UID: \"5dbd3d3755bd0f9e4667c2fcf3fcf07d\") " pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 13 12:56:02.030826 master-0 kubenswrapper[28149]: I0313 12:56:02.030475 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/b275ed7e9ce09d69a66613ca3ae3d89e-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"b275ed7e9ce09d69a66613ca3ae3d89e\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 13 12:56:02.030826 master-0 kubenswrapper[28149]: I0313 12:56:02.030544 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/b275ed7e9ce09d69a66613ca3ae3d89e-resource-dir\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"b275ed7e9ce09d69a66613ca3ae3d89e\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 13 12:56:02.030826 master-0 kubenswrapper[28149]: I0313 
12:56:02.030572 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/b275ed7e9ce09d69a66613ca3ae3d89e-manifests\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"b275ed7e9ce09d69a66613ca3ae3d89e\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 13 12:56:02.030826 master-0 kubenswrapper[28149]: I0313 12:56:02.030612 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/b275ed7e9ce09d69a66613ca3ae3d89e-var-log\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"b275ed7e9ce09d69a66613ca3ae3d89e\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 13 12:56:02.030826 master-0 kubenswrapper[28149]: I0313 12:56:02.030643 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/b275ed7e9ce09d69a66613ca3ae3d89e-var-lock\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"b275ed7e9ce09d69a66613ca3ae3d89e\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 13 12:56:02.030826 master-0 kubenswrapper[28149]: I0313 12:56:02.030729 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/b275ed7e9ce09d69a66613ca3ae3d89e-resource-dir\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"b275ed7e9ce09d69a66613ca3ae3d89e\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 13 12:56:02.030826 master-0 kubenswrapper[28149]: I0313 12:56:02.030781 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/b275ed7e9ce09d69a66613ca3ae3d89e-var-lock\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"b275ed7e9ce09d69a66613ca3ae3d89e\") " 
pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 13 12:56:02.030826 master-0 kubenswrapper[28149]: I0313 12:56:02.030799 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/b275ed7e9ce09d69a66613ca3ae3d89e-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"b275ed7e9ce09d69a66613ca3ae3d89e\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 13 12:56:02.030826 master-0 kubenswrapper[28149]: I0313 12:56:02.030826 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/b275ed7e9ce09d69a66613ca3ae3d89e-var-log\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"b275ed7e9ce09d69a66613ca3ae3d89e\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 13 12:56:02.031584 master-0 kubenswrapper[28149]: I0313 12:56:02.030820 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/b275ed7e9ce09d69a66613ca3ae3d89e-manifests\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"b275ed7e9ce09d69a66613ca3ae3d89e\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 13 12:56:02.115652 master-0 kubenswrapper[28149]: I0313 12:56:02.115526 28149 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 13 12:56:02.147799 master-0 kubenswrapper[28149]: W0313 12:56:02.147734 28149 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb275ed7e9ce09d69a66613ca3ae3d89e.slice/crio-6f5c469d6ea27e3b83a5488c3a051dc9a7b9fe72fe7b499388bad14332d4ea62 WatchSource:0}: Error finding container 6f5c469d6ea27e3b83a5488c3a051dc9a7b9fe72fe7b499388bad14332d4ea62: Status 404 returned error can't find the container with id 6f5c469d6ea27e3b83a5488c3a051dc9a7b9fe72fe7b499388bad14332d4ea62 Mar 13 12:56:02.151612 master-0 kubenswrapper[28149]: E0313 12:56:02.151423 28149 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 192.168.32.10:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-master-0.189c67df535043cf openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-master-0,UID:b275ed7e9ce09d69a66613ca3ae3d89e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5500329ab50804678fb8a90b96bf2a469bca16b620fb6dd2f5f5a17106e94898\" already present on machine,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-13 12:56:02.150269903 +0000 UTC m=+135.803735062,LastTimestamp:2026-03-13 12:56:02.150269903 +0000 UTC m=+135.803735062,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 13 12:56:02.745323 master-0 kubenswrapper[28149]: I0313 12:56:02.744672 28149 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" event={"ID":"b275ed7e9ce09d69a66613ca3ae3d89e","Type":"ContainerStarted","Data":"3308b8ce530f3a4e5f17e073b604a57926c719321024108b3eb887a2ba16cf6e"} Mar 13 12:56:02.745323 master-0 kubenswrapper[28149]: I0313 12:56:02.745255 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" event={"ID":"b275ed7e9ce09d69a66613ca3ae3d89e","Type":"ContainerStarted","Data":"6f5c469d6ea27e3b83a5488c3a051dc9a7b9fe72fe7b499388bad14332d4ea62"} Mar 13 12:56:02.746057 master-0 kubenswrapper[28149]: I0313 12:56:02.746001 28149 status_manager.go:851] "Failed to get status for pod" podUID="b275ed7e9ce09d69a66613ca3ae3d89e" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Mar 13 12:56:02.747407 master-0 kubenswrapper[28149]: I0313 12:56:02.747343 28149 generic.go:334] "Generic (PLEG): container finished" podID="316953f3-5e6c-4aaf-802d-85959f7d7760" containerID="2a696bbc93be8821608da3fbec10d6e34a22ab7edf48aaab160a9a0deb5f590b" exitCode=0 Mar 13 12:56:02.747534 master-0 kubenswrapper[28149]: I0313 12:56:02.747414 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-6-master-0" event={"ID":"316953f3-5e6c-4aaf-802d-85959f7d7760","Type":"ContainerDied","Data":"2a696bbc93be8821608da3fbec10d6e34a22ab7edf48aaab160a9a0deb5f590b"} Mar 13 12:56:02.748441 master-0 kubenswrapper[28149]: I0313 12:56:02.748395 28149 status_manager.go:851] "Failed to get status for pod" podUID="b275ed7e9ce09d69a66613ca3ae3d89e" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" err="Get 
\"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Mar 13 12:56:02.749256 master-0 kubenswrapper[28149]: I0313 12:56:02.749206 28149 status_manager.go:851] "Failed to get status for pod" podUID="316953f3-5e6c-4aaf-802d-85959f7d7760" pod="openshift-kube-apiserver/installer-6-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-6-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Mar 13 12:56:02.749652 master-0 kubenswrapper[28149]: I0313 12:56:02.749630 28149 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-master-0_48512e02022680c9d90092634f0fc146/kube-apiserver-check-endpoints/0.log" Mar 13 12:56:02.750871 master-0 kubenswrapper[28149]: I0313 12:56:02.750837 28149 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-master-0_48512e02022680c9d90092634f0fc146/kube-apiserver-cert-syncer/0.log" Mar 13 12:56:02.751756 master-0 kubenswrapper[28149]: I0313 12:56:02.751728 28149 generic.go:334] "Generic (PLEG): container finished" podID="48512e02022680c9d90092634f0fc146" containerID="49a3e19d955348f7e8d6cddcf11b1118a4c6f32a3b5d7a34d5989aaa73b1262c" exitCode=0 Mar 13 12:56:02.751756 master-0 kubenswrapper[28149]: I0313 12:56:02.751752 28149 generic.go:334] "Generic (PLEG): container finished" podID="48512e02022680c9d90092634f0fc146" containerID="52264b4378a4f3ba83334945450ce98ac9bedab1c6c9485cb885bc9488d52471" exitCode=0 Mar 13 12:56:02.751756 master-0 kubenswrapper[28149]: I0313 12:56:02.751760 28149 generic.go:334] "Generic (PLEG): container finished" podID="48512e02022680c9d90092634f0fc146" containerID="8ea4f4f1bc69f85c977580ddac21514a71e7c8a91de12b17cbd00d640490e4d3" exitCode=0 Mar 13 12:56:02.751971 master-0 kubenswrapper[28149]: I0313 12:56:02.751769 
28149 generic.go:334] "Generic (PLEG): container finished" podID="48512e02022680c9d90092634f0fc146" containerID="b498f079133d2a2077770b172efd3507414d1897ced1774403305339c6337d85" exitCode=2 Mar 13 12:56:02.751971 master-0 kubenswrapper[28149]: I0313 12:56:02.751832 28149 scope.go:117] "RemoveContainer" containerID="22eebd4722aca51c26c5c5c4b620534c95d14ee25cb5dca7baa2946eaaa18f49" Mar 13 12:56:03.763760 master-0 kubenswrapper[28149]: I0313 12:56:03.763710 28149 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-master-0_48512e02022680c9d90092634f0fc146/kube-apiserver-cert-syncer/0.log" Mar 13 12:56:03.831218 master-0 kubenswrapper[28149]: E0313 12:56:03.829404 28149 controller.go:195] "Failed to update lease" err="Put \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" Mar 13 12:56:03.831218 master-0 kubenswrapper[28149]: E0313 12:56:03.829826 28149 controller.go:195] "Failed to update lease" err="Put \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" Mar 13 12:56:03.831218 master-0 kubenswrapper[28149]: E0313 12:56:03.830249 28149 controller.go:195] "Failed to update lease" err="Put \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" Mar 13 12:56:03.831218 master-0 kubenswrapper[28149]: E0313 12:56:03.830807 28149 controller.go:195] "Failed to update lease" err="Put \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" Mar 13 12:56:03.831218 master-0 kubenswrapper[28149]: E0313 12:56:03.831202 
28149 controller.go:195] "Failed to update lease" err="Put \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" Mar 13 12:56:03.831218 master-0 kubenswrapper[28149]: I0313 12:56:03.831236 28149 controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease" Mar 13 12:56:03.832540 master-0 kubenswrapper[28149]: E0313 12:56:03.832498 28149 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" interval="200ms" Mar 13 12:56:04.033640 master-0 kubenswrapper[28149]: E0313 12:56:04.033577 28149 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" interval="400ms" Mar 13 12:56:04.294087 master-0 kubenswrapper[28149]: I0313 12:56:04.293888 28149 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-6-master-0" Mar 13 12:56:04.294922 master-0 kubenswrapper[28149]: I0313 12:56:04.294874 28149 status_manager.go:851] "Failed to get status for pod" podUID="b275ed7e9ce09d69a66613ca3ae3d89e" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Mar 13 12:56:04.295633 master-0 kubenswrapper[28149]: I0313 12:56:04.295589 28149 status_manager.go:851] "Failed to get status for pod" podUID="316953f3-5e6c-4aaf-802d-85959f7d7760" pod="openshift-kube-apiserver/installer-6-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-6-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Mar 13 12:56:04.435917 master-0 kubenswrapper[28149]: E0313 12:56:04.435775 28149 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" interval="800ms" Mar 13 12:56:04.561178 master-0 kubenswrapper[28149]: I0313 12:56:04.561050 28149 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/316953f3-5e6c-4aaf-802d-85959f7d7760-var-lock\") pod \"316953f3-5e6c-4aaf-802d-85959f7d7760\" (UID: \"316953f3-5e6c-4aaf-802d-85959f7d7760\") " Mar 13 12:56:04.561178 master-0 kubenswrapper[28149]: I0313 12:56:04.561154 28149 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/316953f3-5e6c-4aaf-802d-85959f7d7760-kube-api-access\") pod \"316953f3-5e6c-4aaf-802d-85959f7d7760\" (UID: 
\"316953f3-5e6c-4aaf-802d-85959f7d7760\") " Mar 13 12:56:04.561422 master-0 kubenswrapper[28149]: I0313 12:56:04.561233 28149 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/316953f3-5e6c-4aaf-802d-85959f7d7760-kubelet-dir\") pod \"316953f3-5e6c-4aaf-802d-85959f7d7760\" (UID: \"316953f3-5e6c-4aaf-802d-85959f7d7760\") " Mar 13 12:56:04.561422 master-0 kubenswrapper[28149]: I0313 12:56:04.561401 28149 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/316953f3-5e6c-4aaf-802d-85959f7d7760-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "316953f3-5e6c-4aaf-802d-85959f7d7760" (UID: "316953f3-5e6c-4aaf-802d-85959f7d7760"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 13 12:56:04.561515 master-0 kubenswrapper[28149]: I0313 12:56:04.561457 28149 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/316953f3-5e6c-4aaf-802d-85959f7d7760-var-lock" (OuterVolumeSpecName: "var-lock") pod "316953f3-5e6c-4aaf-802d-85959f7d7760" (UID: "316953f3-5e6c-4aaf-802d-85959f7d7760"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 13 12:56:04.561885 master-0 kubenswrapper[28149]: I0313 12:56:04.561852 28149 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/316953f3-5e6c-4aaf-802d-85959f7d7760-var-lock\") on node \"master-0\" DevicePath \"\"" Mar 13 12:56:04.566156 master-0 kubenswrapper[28149]: I0313 12:56:04.564416 28149 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/316953f3-5e6c-4aaf-802d-85959f7d7760-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "316953f3-5e6c-4aaf-802d-85959f7d7760" (UID: "316953f3-5e6c-4aaf-802d-85959f7d7760"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 12:56:04.662705 master-0 kubenswrapper[28149]: I0313 12:56:04.662636 28149 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/316953f3-5e6c-4aaf-802d-85959f7d7760-kubelet-dir\") on node \"master-0\" DevicePath \"\"" Mar 13 12:56:04.662705 master-0 kubenswrapper[28149]: I0313 12:56:04.662690 28149 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/316953f3-5e6c-4aaf-802d-85959f7d7760-kube-api-access\") on node \"master-0\" DevicePath \"\"" Mar 13 12:56:04.748368 master-0 kubenswrapper[28149]: I0313 12:56:04.748282 28149 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-master-0_48512e02022680c9d90092634f0fc146/kube-apiserver-cert-syncer/0.log" Mar 13 12:56:04.749356 master-0 kubenswrapper[28149]: I0313 12:56:04.749317 28149 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 13 12:56:04.750345 master-0 kubenswrapper[28149]: I0313 12:56:04.750298 28149 status_manager.go:851] "Failed to get status for pod" podUID="b275ed7e9ce09d69a66613ca3ae3d89e" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Mar 13 12:56:04.751040 master-0 kubenswrapper[28149]: I0313 12:56:04.751007 28149 status_manager.go:851] "Failed to get status for pod" podUID="316953f3-5e6c-4aaf-802d-85959f7d7760" pod="openshift-kube-apiserver/installer-6-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-6-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Mar 13 12:56:04.751957 master-0 kubenswrapper[28149]: I0313 12:56:04.751899 28149 status_manager.go:851] "Failed to get status for pod" podUID="48512e02022680c9d90092634f0fc146" pod="openshift-kube-apiserver/kube-apiserver-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Mar 13 12:56:04.763128 master-0 kubenswrapper[28149]: I0313 12:56:04.763071 28149 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/48512e02022680c9d90092634f0fc146-audit-dir\") pod \"48512e02022680c9d90092634f0fc146\" (UID: \"48512e02022680c9d90092634f0fc146\") " Mar 13 12:56:04.763319 master-0 kubenswrapper[28149]: I0313 12:56:04.763185 28149 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/48512e02022680c9d90092634f0fc146-audit-dir" (OuterVolumeSpecName: "audit-dir") pod 
"48512e02022680c9d90092634f0fc146" (UID: "48512e02022680c9d90092634f0fc146"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 13 12:56:04.763319 master-0 kubenswrapper[28149]: I0313 12:56:04.763198 28149 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/48512e02022680c9d90092634f0fc146-resource-dir\") pod \"48512e02022680c9d90092634f0fc146\" (UID: \"48512e02022680c9d90092634f0fc146\") " Mar 13 12:56:04.763319 master-0 kubenswrapper[28149]: I0313 12:56:04.763248 28149 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/48512e02022680c9d90092634f0fc146-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "48512e02022680c9d90092634f0fc146" (UID: "48512e02022680c9d90092634f0fc146"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 13 12:56:04.763319 master-0 kubenswrapper[28149]: I0313 12:56:04.763311 28149 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/48512e02022680c9d90092634f0fc146-cert-dir\") pod \"48512e02022680c9d90092634f0fc146\" (UID: \"48512e02022680c9d90092634f0fc146\") " Mar 13 12:56:04.763717 master-0 kubenswrapper[28149]: I0313 12:56:04.763686 28149 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/48512e02022680c9d90092634f0fc146-cert-dir" (OuterVolumeSpecName: "cert-dir") pod "48512e02022680c9d90092634f0fc146" (UID: "48512e02022680c9d90092634f0fc146"). InnerVolumeSpecName "cert-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 13 12:56:04.764603 master-0 kubenswrapper[28149]: I0313 12:56:04.764574 28149 reconciler_common.go:293] "Volume detached for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/48512e02022680c9d90092634f0fc146-cert-dir\") on node \"master-0\" DevicePath \"\"" Mar 13 12:56:04.764603 master-0 kubenswrapper[28149]: I0313 12:56:04.764598 28149 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/48512e02022680c9d90092634f0fc146-audit-dir\") on node \"master-0\" DevicePath \"\"" Mar 13 12:56:04.765232 master-0 kubenswrapper[28149]: I0313 12:56:04.764610 28149 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/48512e02022680c9d90092634f0fc146-resource-dir\") on node \"master-0\" DevicePath \"\"" Mar 13 12:56:04.771664 master-0 kubenswrapper[28149]: I0313 12:56:04.771609 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-6-master-0" event={"ID":"316953f3-5e6c-4aaf-802d-85959f7d7760","Type":"ContainerDied","Data":"46ebc7ccc75ee9021060034d3cd0402817d0ce910ad88ed144681d72cd2e24ac"} Mar 13 12:56:04.771664 master-0 kubenswrapper[28149]: I0313 12:56:04.771660 28149 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="46ebc7ccc75ee9021060034d3cd0402817d0ce910ad88ed144681d72cd2e24ac" Mar 13 12:56:04.771931 master-0 kubenswrapper[28149]: I0313 12:56:04.771905 28149 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-6-master-0"
Mar 13 12:56:04.774774 master-0 kubenswrapper[28149]: I0313 12:56:04.774733 28149 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-master-0_48512e02022680c9d90092634f0fc146/kube-apiserver-cert-syncer/0.log"
Mar 13 12:56:04.775766 master-0 kubenswrapper[28149]: I0313 12:56:04.775716 28149 generic.go:334] "Generic (PLEG): container finished" podID="48512e02022680c9d90092634f0fc146" containerID="c16c28a17a2035273ad3cbe98ed9a765284a80f578c8eb0748ccdf8c0dbcc66a" exitCode=0
Mar 13 12:56:04.775874 master-0 kubenswrapper[28149]: I0313 12:56:04.775793 28149 scope.go:117] "RemoveContainer" containerID="49a3e19d955348f7e8d6cddcf11b1118a4c6f32a3b5d7a34d5989aaa73b1262c"
Mar 13 12:56:04.775924 master-0 kubenswrapper[28149]: I0313 12:56:04.775794 28149 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-master-0"
Mar 13 12:56:04.777667 master-0 kubenswrapper[28149]: I0313 12:56:04.777634 28149 status_manager.go:851] "Failed to get status for pod" podUID="b275ed7e9ce09d69a66613ca3ae3d89e" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused"
Mar 13 12:56:04.778260 master-0 kubenswrapper[28149]: I0313 12:56:04.778212 28149 status_manager.go:851] "Failed to get status for pod" podUID="316953f3-5e6c-4aaf-802d-85959f7d7760" pod="openshift-kube-apiserver/installer-6-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-6-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused"
Mar 13 12:56:04.778977 master-0 kubenswrapper[28149]: I0313 12:56:04.778922 28149 status_manager.go:851] "Failed to get status for pod" podUID="48512e02022680c9d90092634f0fc146" pod="openshift-kube-apiserver/kube-apiserver-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused"
Mar 13 12:56:04.788539 master-0 kubenswrapper[28149]: I0313 12:56:04.788506 28149 scope.go:117] "RemoveContainer" containerID="52264b4378a4f3ba83334945450ce98ac9bedab1c6c9485cb885bc9488d52471"
Mar 13 12:56:04.796798 master-0 kubenswrapper[28149]: I0313 12:56:04.796727 28149 status_manager.go:851] "Failed to get status for pod" podUID="316953f3-5e6c-4aaf-802d-85959f7d7760" pod="openshift-kube-apiserver/installer-6-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-6-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused"
Mar 13 12:56:04.797291 master-0 kubenswrapper[28149]: I0313 12:56:04.797226 28149 status_manager.go:851] "Failed to get status for pod" podUID="48512e02022680c9d90092634f0fc146" pod="openshift-kube-apiserver/kube-apiserver-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused"
Mar 13 12:56:04.797879 master-0 kubenswrapper[28149]: I0313 12:56:04.797787 28149 status_manager.go:851] "Failed to get status for pod" podUID="b275ed7e9ce09d69a66613ca3ae3d89e" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused"
Mar 13 12:56:04.804248 master-0 kubenswrapper[28149]: I0313 12:56:04.804203 28149 scope.go:117] "RemoveContainer" containerID="8ea4f4f1bc69f85c977580ddac21514a71e7c8a91de12b17cbd00d640490e4d3"
Mar 13 12:56:04.820912 master-0 kubenswrapper[28149]: I0313 12:56:04.820874 28149 scope.go:117] "RemoveContainer" containerID="b498f079133d2a2077770b172efd3507414d1897ced1774403305339c6337d85"
Mar 13 12:56:04.842279 master-0 kubenswrapper[28149]: I0313 12:56:04.842251 28149 scope.go:117] "RemoveContainer" containerID="c16c28a17a2035273ad3cbe98ed9a765284a80f578c8eb0748ccdf8c0dbcc66a"
Mar 13 12:56:04.856511 master-0 kubenswrapper[28149]: I0313 12:56:04.856481 28149 scope.go:117] "RemoveContainer" containerID="ba0afcdaf159bdee5cad84caecac2caf230f2beacc241756ab48e77be0ee5ebb"
Mar 13 12:56:04.870905 master-0 kubenswrapper[28149]: I0313 12:56:04.870879 28149 scope.go:117] "RemoveContainer" containerID="49a3e19d955348f7e8d6cddcf11b1118a4c6f32a3b5d7a34d5989aaa73b1262c"
Mar 13 12:56:04.871647 master-0 kubenswrapper[28149]: E0313 12:56:04.871292 28149 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"49a3e19d955348f7e8d6cddcf11b1118a4c6f32a3b5d7a34d5989aaa73b1262c\": container with ID starting with 49a3e19d955348f7e8d6cddcf11b1118a4c6f32a3b5d7a34d5989aaa73b1262c not found: ID does not exist" containerID="49a3e19d955348f7e8d6cddcf11b1118a4c6f32a3b5d7a34d5989aaa73b1262c"
Mar 13 12:56:04.871647 master-0 kubenswrapper[28149]: I0313 12:56:04.871328 28149 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"49a3e19d955348f7e8d6cddcf11b1118a4c6f32a3b5d7a34d5989aaa73b1262c"} err="failed to get container status \"49a3e19d955348f7e8d6cddcf11b1118a4c6f32a3b5d7a34d5989aaa73b1262c\": rpc error: code = NotFound desc = could not find container \"49a3e19d955348f7e8d6cddcf11b1118a4c6f32a3b5d7a34d5989aaa73b1262c\": container with ID starting with 49a3e19d955348f7e8d6cddcf11b1118a4c6f32a3b5d7a34d5989aaa73b1262c not found: ID does not exist"
Mar 13 12:56:04.871647 master-0 kubenswrapper[28149]: I0313 12:56:04.871356 28149 scope.go:117] "RemoveContainer" containerID="52264b4378a4f3ba83334945450ce98ac9bedab1c6c9485cb885bc9488d52471"
Mar 13 12:56:04.871647 master-0 kubenswrapper[28149]: E0313 12:56:04.871570 28149 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"52264b4378a4f3ba83334945450ce98ac9bedab1c6c9485cb885bc9488d52471\": container with ID starting with 52264b4378a4f3ba83334945450ce98ac9bedab1c6c9485cb885bc9488d52471 not found: ID does not exist" containerID="52264b4378a4f3ba83334945450ce98ac9bedab1c6c9485cb885bc9488d52471"
Mar 13 12:56:04.871647 master-0 kubenswrapper[28149]: I0313 12:56:04.871592 28149 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"52264b4378a4f3ba83334945450ce98ac9bedab1c6c9485cb885bc9488d52471"} err="failed to get container status \"52264b4378a4f3ba83334945450ce98ac9bedab1c6c9485cb885bc9488d52471\": rpc error: code = NotFound desc = could not find container \"52264b4378a4f3ba83334945450ce98ac9bedab1c6c9485cb885bc9488d52471\": container with ID starting with 52264b4378a4f3ba83334945450ce98ac9bedab1c6c9485cb885bc9488d52471 not found: ID does not exist"
Mar 13 12:56:04.871647 master-0 kubenswrapper[28149]: I0313 12:56:04.871606 28149 scope.go:117] "RemoveContainer" containerID="8ea4f4f1bc69f85c977580ddac21514a71e7c8a91de12b17cbd00d640490e4d3"
Mar 13 12:56:04.871935 master-0 kubenswrapper[28149]: E0313 12:56:04.871807 28149 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8ea4f4f1bc69f85c977580ddac21514a71e7c8a91de12b17cbd00d640490e4d3\": container with ID starting with 8ea4f4f1bc69f85c977580ddac21514a71e7c8a91de12b17cbd00d640490e4d3 not found: ID does not exist" containerID="8ea4f4f1bc69f85c977580ddac21514a71e7c8a91de12b17cbd00d640490e4d3"
Mar 13 12:56:04.871935 master-0 kubenswrapper[28149]: I0313 12:56:04.871834 28149 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8ea4f4f1bc69f85c977580ddac21514a71e7c8a91de12b17cbd00d640490e4d3"} err="failed to get container status \"8ea4f4f1bc69f85c977580ddac21514a71e7c8a91de12b17cbd00d640490e4d3\": rpc error: code = NotFound desc = could not find container \"8ea4f4f1bc69f85c977580ddac21514a71e7c8a91de12b17cbd00d640490e4d3\": container with ID starting with 8ea4f4f1bc69f85c977580ddac21514a71e7c8a91de12b17cbd00d640490e4d3 not found: ID does not exist"
Mar 13 12:56:04.871935 master-0 kubenswrapper[28149]: I0313 12:56:04.871855 28149 scope.go:117] "RemoveContainer" containerID="b498f079133d2a2077770b172efd3507414d1897ced1774403305339c6337d85"
Mar 13 12:56:04.872301 master-0 kubenswrapper[28149]: E0313 12:56:04.872281 28149 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b498f079133d2a2077770b172efd3507414d1897ced1774403305339c6337d85\": container with ID starting with b498f079133d2a2077770b172efd3507414d1897ced1774403305339c6337d85 not found: ID does not exist" containerID="b498f079133d2a2077770b172efd3507414d1897ced1774403305339c6337d85"
Mar 13 12:56:04.872357 master-0 kubenswrapper[28149]: I0313 12:56:04.872305 28149 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b498f079133d2a2077770b172efd3507414d1897ced1774403305339c6337d85"} err="failed to get container status \"b498f079133d2a2077770b172efd3507414d1897ced1774403305339c6337d85\": rpc error: code = NotFound desc = could not find container \"b498f079133d2a2077770b172efd3507414d1897ced1774403305339c6337d85\": container with ID starting with b498f079133d2a2077770b172efd3507414d1897ced1774403305339c6337d85 not found: ID does not exist"
Mar 13 12:56:04.872357 master-0 kubenswrapper[28149]: I0313 12:56:04.872322 28149 scope.go:117] "RemoveContainer" containerID="c16c28a17a2035273ad3cbe98ed9a765284a80f578c8eb0748ccdf8c0dbcc66a"
Mar 13 12:56:04.872561 master-0 kubenswrapper[28149]: E0313 12:56:04.872528 28149 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c16c28a17a2035273ad3cbe98ed9a765284a80f578c8eb0748ccdf8c0dbcc66a\": container with ID starting with c16c28a17a2035273ad3cbe98ed9a765284a80f578c8eb0748ccdf8c0dbcc66a not found: ID does not exist" containerID="c16c28a17a2035273ad3cbe98ed9a765284a80f578c8eb0748ccdf8c0dbcc66a"
Mar 13 12:56:04.872602 master-0 kubenswrapper[28149]: I0313 12:56:04.872556 28149 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c16c28a17a2035273ad3cbe98ed9a765284a80f578c8eb0748ccdf8c0dbcc66a"} err="failed to get container status \"c16c28a17a2035273ad3cbe98ed9a765284a80f578c8eb0748ccdf8c0dbcc66a\": rpc error: code = NotFound desc = could not find container \"c16c28a17a2035273ad3cbe98ed9a765284a80f578c8eb0748ccdf8c0dbcc66a\": container with ID starting with c16c28a17a2035273ad3cbe98ed9a765284a80f578c8eb0748ccdf8c0dbcc66a not found: ID does not exist"
Mar 13 12:56:04.872602 master-0 kubenswrapper[28149]: I0313 12:56:04.872570 28149 scope.go:117] "RemoveContainer" containerID="ba0afcdaf159bdee5cad84caecac2caf230f2beacc241756ab48e77be0ee5ebb"
Mar 13 12:56:04.872794 master-0 kubenswrapper[28149]: E0313 12:56:04.872760 28149 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ba0afcdaf159bdee5cad84caecac2caf230f2beacc241756ab48e77be0ee5ebb\": container with ID starting with ba0afcdaf159bdee5cad84caecac2caf230f2beacc241756ab48e77be0ee5ebb not found: ID does not exist" containerID="ba0afcdaf159bdee5cad84caecac2caf230f2beacc241756ab48e77be0ee5ebb"
Mar 13 12:56:04.872794 master-0 kubenswrapper[28149]: I0313 12:56:04.872788 28149 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ba0afcdaf159bdee5cad84caecac2caf230f2beacc241756ab48e77be0ee5ebb"} err="failed to get container status \"ba0afcdaf159bdee5cad84caecac2caf230f2beacc241756ab48e77be0ee5ebb\": rpc error: code = NotFound desc = could not find container \"ba0afcdaf159bdee5cad84caecac2caf230f2beacc241756ab48e77be0ee5ebb\": container with ID starting with ba0afcdaf159bdee5cad84caecac2caf230f2beacc241756ab48e77be0ee5ebb not found: ID does not exist"
Mar 13 12:56:05.237779 master-0 kubenswrapper[28149]: E0313 12:56:05.237650 28149 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" interval="1.6s"
Mar 13 12:56:05.250203 master-0 kubenswrapper[28149]: E0313 12:56:05.250071 28149 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 192.168.32.10:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-master-0.189c67df535043cf openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-master-0,UID:b275ed7e9ce09d69a66613ca3ae3d89e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5500329ab50804678fb8a90b96bf2a469bca16b620fb6dd2f5f5a17106e94898\" already present on machine,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-13 12:56:02.150269903 +0000 UTC m=+135.803735062,LastTimestamp:2026-03-13 12:56:02.150269903 +0000 UTC m=+135.803735062,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Mar 13 12:56:06.693537 master-0 kubenswrapper[28149]: I0313 12:56:06.693453 28149 status_manager.go:851] "Failed to get status for pod" podUID="b275ed7e9ce09d69a66613ca3ae3d89e" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused"
Mar 13 12:56:06.694323 master-0 kubenswrapper[28149]: I0313 12:56:06.694264 28149 status_manager.go:851] "Failed to get status for pod" podUID="316953f3-5e6c-4aaf-802d-85959f7d7760" pod="openshift-kube-apiserver/installer-6-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-6-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused"
Mar 13 12:56:06.694905 master-0 kubenswrapper[28149]: I0313 12:56:06.694872 28149 status_manager.go:851] "Failed to get status for pod" podUID="48512e02022680c9d90092634f0fc146" pod="openshift-kube-apiserver/kube-apiserver-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused"
Mar 13 12:56:06.697096 master-0 kubenswrapper[28149]: I0313 12:56:06.697051 28149 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="48512e02022680c9d90092634f0fc146" path="/var/lib/kubelet/pods/48512e02022680c9d90092634f0fc146/volumes"
Mar 13 12:56:06.839394 master-0 kubenswrapper[28149]: E0313 12:56:06.839271 28149 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" interval="3.2s"
Mar 13 12:56:10.041520 master-0 kubenswrapper[28149]: E0313 12:56:10.041419 28149 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" interval="6.4s"
Mar 13 12:56:12.687051 master-0 kubenswrapper[28149]: I0313 12:56:12.686995 28149 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-master-0"
Mar 13 12:56:12.688786 master-0 kubenswrapper[28149]: I0313 12:56:12.688523 28149 status_manager.go:851] "Failed to get status for pod" podUID="b275ed7e9ce09d69a66613ca3ae3d89e" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused"
Mar 13 12:56:12.689488 master-0 kubenswrapper[28149]: I0313 12:56:12.689432 28149 status_manager.go:851] "Failed to get status for pod" podUID="316953f3-5e6c-4aaf-802d-85959f7d7760" pod="openshift-kube-apiserver/installer-6-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-6-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused"
Mar 13 12:56:12.707161 master-0 kubenswrapper[28149]: I0313 12:56:12.707064 28149 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="83419844-81dc-4775-ad6e-7b003dcb70f7"
Mar 13 12:56:12.707334 master-0 kubenswrapper[28149]: I0313 12:56:12.707176 28149 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="83419844-81dc-4775-ad6e-7b003dcb70f7"
Mar 13 12:56:12.708299 master-0 kubenswrapper[28149]: E0313 12:56:12.708244 28149 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-master-0"
Mar 13 12:56:12.708838 master-0 kubenswrapper[28149]: I0313 12:56:12.708804 28149 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-master-0"
Mar 13 12:56:12.732506 master-0 kubenswrapper[28149]: W0313 12:56:12.732449 28149 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5dbd3d3755bd0f9e4667c2fcf3fcf07d.slice/crio-03ceaa6ed5efd76f794d3920c013bade1075973be977b8b9ebb40c3a54eb5c64 WatchSource:0}: Error finding container 03ceaa6ed5efd76f794d3920c013bade1075973be977b8b9ebb40c3a54eb5c64: Status 404 returned error can't find the container with id 03ceaa6ed5efd76f794d3920c013bade1075973be977b8b9ebb40c3a54eb5c64
Mar 13 12:56:13.114948 master-0 kubenswrapper[28149]: I0313 12:56:13.114887 28149 generic.go:334] "Generic (PLEG): container finished" podID="5dbd3d3755bd0f9e4667c2fcf3fcf07d" containerID="0cfa8b6bef5111d81550afc961cc5d40442055df12a5f4d4cba76e309956cd2f" exitCode=0
Mar 13 12:56:13.114948 master-0 kubenswrapper[28149]: I0313 12:56:13.114936 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"5dbd3d3755bd0f9e4667c2fcf3fcf07d","Type":"ContainerDied","Data":"0cfa8b6bef5111d81550afc961cc5d40442055df12a5f4d4cba76e309956cd2f"}
Mar 13 12:56:13.115350 master-0 kubenswrapper[28149]: I0313 12:56:13.115006 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"5dbd3d3755bd0f9e4667c2fcf3fcf07d","Type":"ContainerStarted","Data":"03ceaa6ed5efd76f794d3920c013bade1075973be977b8b9ebb40c3a54eb5c64"}
Mar 13 12:56:13.115454 master-0 kubenswrapper[28149]: I0313 12:56:13.115435 28149 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="83419844-81dc-4775-ad6e-7b003dcb70f7"
Mar 13 12:56:13.115454 master-0 kubenswrapper[28149]: I0313 12:56:13.115453 28149 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="83419844-81dc-4775-ad6e-7b003dcb70f7"
Mar 13 12:56:13.116729 master-0 kubenswrapper[28149]: I0313 12:56:13.116670 28149 status_manager.go:851] "Failed to get status for pod" podUID="b275ed7e9ce09d69a66613ca3ae3d89e" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused"
Mar 13 12:56:13.116810 master-0 kubenswrapper[28149]: E0313 12:56:13.116748 28149 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-master-0"
Mar 13 12:56:13.118263 master-0 kubenswrapper[28149]: I0313 12:56:13.118214 28149 status_manager.go:851] "Failed to get status for pod" podUID="316953f3-5e6c-4aaf-802d-85959f7d7760" pod="openshift-kube-apiserver/installer-6-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-6-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused"
Mar 13 12:56:14.140106 master-0 kubenswrapper[28149]: I0313 12:56:14.138022 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"5dbd3d3755bd0f9e4667c2fcf3fcf07d","Type":"ContainerStarted","Data":"a047e3706b63895f3e74acd0291de041ef1db0b446c78bc57d6193246f560b64"}
Mar 13 12:56:14.140106 master-0 kubenswrapper[28149]: I0313 12:56:14.138078 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"5dbd3d3755bd0f9e4667c2fcf3fcf07d","Type":"ContainerStarted","Data":"6d54d07694b88756eed5ec784544f41afeb177d38fee431672f2f1806830a80a"}
Mar 13 12:56:14.140106 master-0 kubenswrapper[28149]: I0313 12:56:14.138094 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"5dbd3d3755bd0f9e4667c2fcf3fcf07d","Type":"ContainerStarted","Data":"1f931e59520ae0a0dbf75bb0e9306327776f272db6c52dfd1f30b6f5938b689e"}
Mar 13 12:56:14.140106 master-0 kubenswrapper[28149]: I0313 12:56:14.138106 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"5dbd3d3755bd0f9e4667c2fcf3fcf07d","Type":"ContainerStarted","Data":"19ac4bf4b16d8ba245ac07d702ab3dccbc49aaffb1e4e1df5e56e796f2dd609b"}
Mar 13 12:56:15.148309 master-0 kubenswrapper[28149]: I0313 12:56:15.148210 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"5dbd3d3755bd0f9e4667c2fcf3fcf07d","Type":"ContainerStarted","Data":"fc0440e7b9a7c3ad712ea86b4100ddbd9d729fe8b5829716fc709abc25a31f86"}
Mar 13 12:56:15.149008 master-0 kubenswrapper[28149]: I0313 12:56:15.148412 28149 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-master-0"
Mar 13 12:56:15.149008 master-0 kubenswrapper[28149]: I0313 12:56:15.148601 28149 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="83419844-81dc-4775-ad6e-7b003dcb70f7"
Mar 13 12:56:15.149008 master-0 kubenswrapper[28149]: I0313 12:56:15.148638 28149 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="83419844-81dc-4775-ad6e-7b003dcb70f7"
Mar 13 12:56:17.709597 master-0 kubenswrapper[28149]: I0313 12:56:17.709499 28149 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-master-0"
Mar 13 12:56:17.710580 master-0 kubenswrapper[28149]: I0313 12:56:17.709693 28149 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-master-0"
Mar 13 12:56:17.715749 master-0 kubenswrapper[28149]: I0313 12:56:17.715718 28149 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-master-0"
Mar 13 12:56:20.161839 master-0 kubenswrapper[28149]: I0313 12:56:20.161706 28149 kubelet.go:1914] "Deleted mirror pod because it is outdated" pod="openshift-kube-apiserver/kube-apiserver-master-0"
Mar 13 12:56:20.184159 master-0 kubenswrapper[28149]: I0313 12:56:20.184071 28149 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_9b24fda1c2e55a08607764d7b9b24355/kube-controller-manager/0.log"
Mar 13 12:56:20.184366 master-0 kubenswrapper[28149]: I0313 12:56:20.184190 28149 generic.go:334] "Generic (PLEG): container finished" podID="9b24fda1c2e55a08607764d7b9b24355" containerID="9d8c499b649c8b47f8ee879f85d758879da02816a8ef90cde6964dab92a4ae11" exitCode=1
Mar 13 12:56:20.184366 master-0 kubenswrapper[28149]: I0313 12:56:20.184290 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"9b24fda1c2e55a08607764d7b9b24355","Type":"ContainerDied","Data":"9d8c499b649c8b47f8ee879f85d758879da02816a8ef90cde6964dab92a4ae11"}
Mar 13 12:56:20.184665 master-0 kubenswrapper[28149]: I0313 12:56:20.184647 28149 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="83419844-81dc-4775-ad6e-7b003dcb70f7"
Mar 13 12:56:20.184665 master-0 kubenswrapper[28149]: I0313 12:56:20.184663 28149 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="83419844-81dc-4775-ad6e-7b003dcb70f7"
Mar 13 12:56:20.185099 master-0 kubenswrapper[28149]: I0313 12:56:20.185070 28149 scope.go:117] "RemoveContainer" containerID="9d8c499b649c8b47f8ee879f85d758879da02816a8ef90cde6964dab92a4ae11"
Mar 13 12:56:20.191421 master-0 kubenswrapper[28149]: I0313 12:56:20.191382 28149 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-master-0"
Mar 13 12:56:20.257108 master-0 kubenswrapper[28149]: I0313 12:56:20.257052 28149 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-master-0" oldPodUID="5dbd3d3755bd0f9e4667c2fcf3fcf07d" podUID="e873ce95-712e-4eaf-903c-44aaeb8a50a8"
Mar 13 12:56:21.068575 master-0 kubenswrapper[28149]: I0313 12:56:21.068507 28149 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Mar 13 12:56:21.194466 master-0 kubenswrapper[28149]: I0313 12:56:21.194406 28149 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_9b24fda1c2e55a08607764d7b9b24355/kube-controller-manager/0.log"
Mar 13 12:56:21.195067 master-0 kubenswrapper[28149]: I0313 12:56:21.194532 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"9b24fda1c2e55a08607764d7b9b24355","Type":"ContainerStarted","Data":"cfdc65ea2c7ff083c70b5cbafce015ebaef2bec4eb6f5e6ef09dd3d02a87e5b4"}
Mar 13 12:56:21.195164 master-0 kubenswrapper[28149]: I0313 12:56:21.195077 28149 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="83419844-81dc-4775-ad6e-7b003dcb70f7"
Mar 13 12:56:21.195164 master-0 kubenswrapper[28149]: I0313 12:56:21.195101 28149 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="83419844-81dc-4775-ad6e-7b003dcb70f7"
Mar 13 12:56:21.220112 master-0 kubenswrapper[28149]: I0313 12:56:21.220040 28149 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-master-0" oldPodUID="5dbd3d3755bd0f9e4667c2fcf3fcf07d" podUID="e873ce95-712e-4eaf-903c-44aaeb8a50a8"
Mar 13 12:56:26.267486 master-0 kubenswrapper[28149]: I0313 12:56:26.267410 28149 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt"
Mar 13 12:56:26.469676 master-0 kubenswrapper[28149]: I0313 12:56:26.469581 28149 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images"
Mar 13 12:56:26.642519 master-0 kubenswrapper[28149]: I0313 12:56:26.642365 28149 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Mar 13 12:56:26.646117 master-0 kubenswrapper[28149]: I0313 12:56:26.646070 28149 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-r2lqd"
Mar 13 12:56:26.794313 master-0 kubenswrapper[28149]: I0313 12:56:26.794188 28149 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert"
Mar 13 12:56:26.909358 master-0 kubenswrapper[28149]: I0313 12:56:26.909266 28149 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config"
Mar 13 12:56:27.031104 master-0 kubenswrapper[28149]: I0313 12:56:27.031007 28149 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle"
Mar 13 12:56:27.321228 master-0 kubenswrapper[28149]: I0313 12:56:27.321165 28149 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib"
Mar 13 12:56:27.624529 master-0 kubenswrapper[28149]: I0313 12:56:27.624396 28149 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config"
Mar 13 12:56:28.375909 master-0 kubenswrapper[28149]: I0313 12:56:28.375427 28149 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt"
Mar 13 12:56:28.379418 master-0 kubenswrapper[28149]: I0313 12:56:28.379374 28149 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt"
Mar 13 12:56:28.492888 master-0 kubenswrapper[28149]: I0313 12:56:28.492842 28149 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt"
Mar 13 12:56:28.535831 master-0 kubenswrapper[28149]: I0313 12:56:28.535757 28149 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt"
Mar 13 12:56:28.676052 master-0 kubenswrapper[28149]: I0313 12:56:28.675744 28149 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert"
Mar 13 12:56:28.838089 master-0 kubenswrapper[28149]: I0313 12:56:28.838017 28149 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-bb7kx"
Mar 13 12:56:28.894616 master-0 kubenswrapper[28149]: I0313 12:56:28.894529 28149 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt"
Mar 13 12:56:29.058750 master-0 kubenswrapper[28149]: I0313 12:56:29.058614 28149 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Mar 13 12:56:29.071429 master-0 kubenswrapper[28149]: I0313 12:56:29.071375 28149 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Mar 13 12:56:29.166574 master-0 kubenswrapper[28149]: I0313 12:56:29.166533 28149 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle"
Mar 13 12:56:29.326410 master-0 kubenswrapper[28149]: I0313 12:56:29.326249 28149 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert"
Mar 13 12:56:29.424601 master-0 kubenswrapper[28149]: I0313 12:56:29.424562 28149 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config"
Mar 13 12:56:29.439132 master-0 kubenswrapper[28149]: I0313 12:56:29.439063 28149 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160
Mar 13 12:56:29.502685 master-0 kubenswrapper[28149]: I0313 12:56:29.502616 28149 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt"
Mar 13 12:56:29.528509 master-0 kubenswrapper[28149]: I0313 12:56:29.528436 28149 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert"
Mar 13 12:56:29.550153 master-0 kubenswrapper[28149]: I0313 12:56:29.550077 28149 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-8mlcv"
Mar 13 12:56:29.647685 master-0 kubenswrapper[28149]: I0313 12:56:29.647557 28149 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"cluster-monitoring-operator-tls"
Mar 13 12:56:29.678384 master-0 kubenswrapper[28149]: I0313 12:56:29.678333 28149 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt"
Mar 13 12:56:29.896050 master-0 kubenswrapper[28149]: I0313 12:56:29.895982 28149 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"metrics-server-a1r15je3eljsi"
Mar 13 12:56:29.905036 master-0 kubenswrapper[28149]: I0313 12:56:29.904939 28149 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert"
Mar 13 12:56:29.930352 master-0 kubenswrapper[28149]: I0313 12:56:29.930286 28149 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-insights"/"service-ca-bundle"
Mar 13 12:56:30.057302 master-0 kubenswrapper[28149]: I0313 12:56:30.057216 28149 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cloud-controller-manager-operator"/"cluster-cloud-controller-manager-dockercfg-zpmf6"
Mar 13 12:56:30.276506 master-0 kubenswrapper[28149]: I0313 12:56:30.276455 28149 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt"
Mar 13 12:56:30.346525 master-0 kubenswrapper[28149]: I0313 12:56:30.346458 28149 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"baremetal-kube-rbac-proxy"
Mar 13 12:56:30.554210 master-0 kubenswrapper[28149]: I0313 12:56:30.554033 28149 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls"
Mar 13 12:56:30.558196 master-0 kubenswrapper[28149]: I0313 12:56:30.558165 28149 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default"
Mar 13 12:56:30.587403 master-0 kubenswrapper[28149]: I0313 12:56:30.587346 28149 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"metrics-client-ca"
Mar 13 12:56:30.611558 master-0 kubenswrapper[28149]: I0313 12:56:30.611482 28149 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls"
Mar 13 12:56:30.663027 master-0 kubenswrapper[28149]: I0313 12:56:30.662969 28149 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt"
Mar 13 12:56:30.673241 master-0 kubenswrapper[28149]: I0313 12:56:30.673207 28149 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle"
Mar 13 12:56:30.874773 master-0 kubenswrapper[28149]: I0313 12:56:30.874669 28149 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert"
Mar 13 12:56:30.972910 master-0 kubenswrapper[28149]: I0313 12:56:30.972831 28149 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"openshift-service-ca.crt"
Mar 13 12:56:31.033346 master-0 kubenswrapper[28149]: I0313 12:56:31.033278 28149 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160
Mar 13 12:56:31.219302 master-0 kubenswrapper[28149]: I0313 12:56:31.219123 28149 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script"
Mar 13 12:56:31.525168 master-0 kubenswrapper[28149]: I0313 12:56:31.524629 28149 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt"
Mar 13 12:56:31.559550 master-0 kubenswrapper[28149]: I0313 12:56:31.559485 28149 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt"
Mar 13 12:56:31.579295 master-0 kubenswrapper[28149]: I0313 12:56:31.578856 28149 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt"
Mar 13 12:56:31.749501 master-0 kubenswrapper[28149]: I0313 12:56:31.749439 28149 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics"
Mar 13 12:56:31.962209 master-0 kubenswrapper[28149]: I0313 12:56:31.957220 28149 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-controller-manager-operator"/"cloud-controller-manager-images"
Mar 13 12:56:32.081751 master-0 kubenswrapper[28149]: I0313 12:56:32.081701 28149 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt"
Mar 13 12:56:32.178172 master-0 kubenswrapper[28149]: I0313 12:56:32.178107 28149 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt"
Mar 13 12:56:32.450832 master-0 kubenswrapper[28149]: I0313 12:56:32.450769 28149 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert"
Mar 13 12:56:32.664316 master-0 kubenswrapper[28149]: I0313 12:56:32.664268 28149 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login"
Mar 13 12:56:32.791145 master-0 kubenswrapper[28149]: I0313 12:56:32.791082 28149 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"metrics-client-certs"
Mar 13 12:56:33.224525 master-0 kubenswrapper[28149]: I0313 12:56:33.224389 28149 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-59mr8"
Mar 13 12:56:33.297578 master-0 kubenswrapper[28149]: I0313 12:56:33.297533 28149 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert"
Mar 13 12:56:33.298894 master-0 kubenswrapper[28149]: I0313 12:56:33.298841 28149 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt"
Mar 13 12:56:33.402676 master-0 kubenswrapper[28149]: I0313 12:56:33.401913 28149 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config"
Mar 13 12:56:33.637879 master-0 kubenswrapper[28149]: I0313 12:56:33.637820 28149 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt"
Mar 13 12:56:33.800612 master-0 kubenswrapper[28149]: I0313 12:56:33.800533 28149 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-controller"/"openshift-service-ca.crt"
Mar 13 12:56:33.949713 master-0 kubenswrapper[28149]: I0313 12:56:33.949570 28149 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-controller-manager-operator"/"openshift-service-ca.crt"
Mar 13 12:56:34.159889 master-0 kubenswrapper[28149]: I0313 12:56:34.159826 28149 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"telemetry-config"
Mar 13 12:56:34.287622 master-0 kubenswrapper[28149]: I0313 12:56:34.287581 28149 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt"
Mar 13 12:56:34.289049 master-0 kubenswrapper[28149]: I0313 12:56:34.289006 28149 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt"
Mar 13 12:56:34.519040 master-0 kubenswrapper[28149]: I0313 12:56:34.518966 28149 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt"
Mar 13 12:56:34.545649 master-0 kubenswrapper[28149]: I0313 12:56:34.545503 28149 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template"
Mar 13 12:56:34.560206 master-0 kubenswrapper[28149]: I0313 12:56:34.560148 28149 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt"
Mar 13 12:56:34.877932 master-0 kubenswrapper[28149]: I0313 12:56:34.877778 28149 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt"
Mar 13 12:56:35.030310 master-0 kubenswrapper[28149]: I0313 12:56:35.030253 28149 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"kube-state-metrics-dockercfg-zhlhv" Mar 13 12:56:35.131823 master-0 kubenswrapper[28149]: I0313 12:56:35.131714 28149 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-lpcnm" Mar 13 12:56:35.166949 master-0 kubenswrapper[28149]: I0313 12:56:35.166907 28149 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-catalogd"/"catalogserver-cert" Mar 13 12:56:35.186926 master-0 kubenswrapper[28149]: I0313 12:56:35.186860 28149 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Mar 13 12:56:35.300757 master-0 kubenswrapper[28149]: I0313 12:56:35.300700 28149 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-lz982" Mar 13 12:56:35.496673 master-0 kubenswrapper[28149]: I0313 12:56:35.496566 28149 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Mar 13 12:56:35.760763 master-0 kubenswrapper[28149]: I0313 12:56:35.760600 28149 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Mar 13 12:56:35.862564 master-0 kubenswrapper[28149]: I0313 12:56:35.862511 28149 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Mar 13 12:56:36.055051 master-0 kubenswrapper[28149]: I0313 12:56:36.055012 28149 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Mar 13 12:56:36.147114 master-0 kubenswrapper[28149]: I0313 12:56:36.147074 28149 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-etcd-operator"/"kube-root-ca.crt" Mar 13 12:56:36.200421 master-0 kubenswrapper[28149]: I0313 12:56:36.200367 28149 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Mar 13 12:56:36.254705 master-0 kubenswrapper[28149]: I0313 12:56:36.254666 28149 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-sk2p7" Mar 13 12:56:36.306064 master-0 kubenswrapper[28149]: I0313 12:56:36.305962 28149 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-olm-operator"/"kube-root-ca.crt" Mar 13 12:56:36.330267 master-0 kubenswrapper[28149]: I0313 12:56:36.330222 28149 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-58w8f" Mar 13 12:56:36.369987 master-0 kubenswrapper[28149]: I0313 12:56:36.369928 28149 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Mar 13 12:56:36.383364 master-0 kubenswrapper[28149]: I0313 12:56:36.383326 28149 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"kube-state-metrics-tls" Mar 13 12:56:36.415428 master-0 kubenswrapper[28149]: I0313 12:56:36.415397 28149 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-admission-webhook-dockercfg-lkhsh" Mar 13 12:56:36.481467 master-0 kubenswrapper[28149]: I0313 12:56:36.481409 28149 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Mar 13 12:56:36.491314 master-0 kubenswrapper[28149]: I0313 12:56:36.491261 28149 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-6hdw2" Mar 13 12:56:36.540929 master-0 kubenswrapper[28149]: I0313 12:56:36.540882 28149 
reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Mar 13 12:56:36.542859 master-0 kubenswrapper[28149]: I0313 12:56:36.542796 28149 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Mar 13 12:56:36.621516 master-0 kubenswrapper[28149]: I0313 12:56:36.621055 28149 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 13 12:56:36.642335 master-0 kubenswrapper[28149]: I0313 12:56:36.642291 28149 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-controller-manager-operator"/"kube-rbac-proxy" Mar 13 12:56:36.673541 master-0 kubenswrapper[28149]: I0313 12:56:36.673475 28149 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-olm-operator"/"openshift-service-ca.crt" Mar 13 12:56:36.823900 master-0 kubenswrapper[28149]: I0313 12:56:36.823860 28149 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-credential-operator"/"openshift-service-ca.crt" Mar 13 12:56:36.877213 master-0 kubenswrapper[28149]: I0313 12:56:36.877077 28149 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Mar 13 12:56:37.189066 master-0 kubenswrapper[28149]: I0313 12:56:37.188744 28149 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Mar 13 12:56:37.266719 master-0 kubenswrapper[28149]: I0313 12:56:37.266678 28149 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Mar 13 12:56:37.309466 master-0 kubenswrapper[28149]: I0313 12:56:37.309408 28149 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Mar 13 12:56:37.342487 master-0 
kubenswrapper[28149]: I0313 12:56:37.342420 28149 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Mar 13 12:56:37.348707 master-0 kubenswrapper[28149]: I0313 12:56:37.348644 28149 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Mar 13 12:56:37.364858 master-0 kubenswrapper[28149]: I0313 12:56:37.364792 28149 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Mar 13 12:56:37.453579 master-0 kubenswrapper[28149]: I0313 12:56:37.453447 28149 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"cluster-baremetal-operator-dockercfg-99fzl" Mar 13 12:56:37.577156 master-0 kubenswrapper[28149]: I0313 12:56:37.575289 28149 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Mar 13 12:56:37.603534 master-0 kubenswrapper[28149]: I0313 12:56:37.603228 28149 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-credential-operator"/"kube-root-ca.crt" Mar 13 12:56:37.604604 master-0 kubenswrapper[28149]: I0313 12:56:37.604576 28149 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Mar 13 12:56:37.714186 master-0 kubenswrapper[28149]: I0313 12:56:37.714023 28149 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Mar 13 12:56:37.763998 master-0 kubenswrapper[28149]: I0313 12:56:37.763961 28149 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Mar 13 12:56:37.775033 master-0 kubenswrapper[28149]: I0313 12:56:37.774999 28149 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"cluster-autoscaler-operator-cert" Mar 13 
12:56:37.784162 master-0 kubenswrapper[28149]: I0313 12:56:37.783349 28149 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Mar 13 12:56:37.821267 master-0 kubenswrapper[28149]: I0313 12:56:37.821223 28149 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Mar 13 12:56:37.829513 master-0 kubenswrapper[28149]: I0313 12:56:37.829400 28149 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Mar 13 12:56:38.053691 master-0 kubenswrapper[28149]: I0313 12:56:38.053651 28149 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Mar 13 12:56:38.083130 master-0 kubenswrapper[28149]: I0313 12:56:38.083081 28149 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-node-tuning-operator"/"node-tuning-operator-tls" Mar 13 12:56:38.166123 master-0 kubenswrapper[28149]: I0313 12:56:38.166067 28149 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Mar 13 12:56:38.329071 master-0 kubenswrapper[28149]: I0313 12:56:38.328914 28149 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-dockercfg-gsftw" Mar 13 12:56:38.381720 master-0 kubenswrapper[28149]: I0313 12:56:38.381671 28149 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"cluster-baremetal-operator-tls" Mar 13 12:56:38.422617 master-0 kubenswrapper[28149]: I0313 12:56:38.422558 28149 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Mar 13 12:56:38.553105 master-0 kubenswrapper[28149]: I0313 12:56:38.553061 28149 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-5ddms" Mar 13 
12:56:38.641323 master-0 kubenswrapper[28149]: I0313 12:56:38.641211 28149 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Mar 13 12:56:38.692429 master-0 kubenswrapper[28149]: I0313 12:56:38.692386 28149 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Mar 13 12:56:38.766128 master-0 kubenswrapper[28149]: I0313 12:56:38.766063 28149 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Mar 13 12:56:38.791101 master-0 kubenswrapper[28149]: I0313 12:56:38.791041 28149 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Mar 13 12:56:38.814672 master-0 kubenswrapper[28149]: I0313 12:56:38.814615 28149 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-node-tuning-operator"/"openshift-service-ca.crt" Mar 13 12:56:38.863016 master-0 kubenswrapper[28149]: I0313 12:56:38.862831 28149 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Mar 13 12:56:38.900992 master-0 kubenswrapper[28149]: I0313 12:56:38.900890 28149 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Mar 13 12:56:38.946915 master-0 kubenswrapper[28149]: I0313 12:56:38.946816 28149 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Mar 13 12:56:39.003419 master-0 kubenswrapper[28149]: I0313 12:56:39.003362 28149 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Mar 13 12:56:39.096837 master-0 kubenswrapper[28149]: I0313 12:56:39.096792 28149 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Mar 13 
12:56:39.121092 master-0 kubenswrapper[28149]: I0313 12:56:39.121046 28149 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Mar 13 12:56:39.147687 master-0 kubenswrapper[28149]: I0313 12:56:39.147633 28149 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Mar 13 12:56:39.297187 master-0 kubenswrapper[28149]: I0313 12:56:39.297122 28149 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-insights"/"kube-root-ca.crt" Mar 13 12:56:39.495992 master-0 kubenswrapper[28149]: I0313 12:56:39.435692 28149 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Mar 13 12:56:39.495992 master-0 kubenswrapper[28149]: I0313 12:56:39.447024 28149 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"node-exporter-kube-rbac-proxy-config" Mar 13 12:56:39.520223 master-0 kubenswrapper[28149]: I0313 12:56:39.516282 28149 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Mar 13 12:56:39.526159 master-0 kubenswrapper[28149]: I0313 12:56:39.523490 28149 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Mar 13 12:56:39.687219 master-0 kubenswrapper[28149]: I0313 12:56:39.687099 28149 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-z42c9" Mar 13 12:56:39.802557 master-0 kubenswrapper[28149]: I0313 12:56:39.802486 28149 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-m9n95" Mar 13 12:56:39.898686 master-0 kubenswrapper[28149]: I0313 12:56:39.898645 28149 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-cluster-storage-operator"/"cluster-storage-operator-dockercfg-m2z2f" Mar 13 12:56:39.994270 master-0 kubenswrapper[28149]: I0313 12:56:39.993669 28149 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"metrics-server-tls" Mar 13 12:56:40.034411 master-0 kubenswrapper[28149]: I0313 12:56:40.034322 28149 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cloud-controller-manager-operator"/"cloud-controller-manager-operator-tls" Mar 13 12:56:40.039948 master-0 kubenswrapper[28149]: I0313 12:56:40.039918 28149 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Mar 13 12:56:40.047920 master-0 kubenswrapper[28149]: I0313 12:56:40.047888 28149 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-vj5mr" Mar 13 12:56:40.064922 master-0 kubenswrapper[28149]: I0313 12:56:40.064864 28149 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Mar 13 12:56:40.083748 master-0 kubenswrapper[28149]: I0313 12:56:40.083706 28149 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Mar 13 12:56:40.155853 master-0 kubenswrapper[28149]: I0313 12:56:40.155806 28149 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Mar 13 12:56:40.227765 master-0 kubenswrapper[28149]: I0313 12:56:40.227706 28149 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Mar 13 12:56:40.457708 master-0 kubenswrapper[28149]: I0313 12:56:40.457649 28149 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Mar 13 12:56:40.460880 master-0 kubenswrapper[28149]: I0313 12:56:40.460824 28149 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" podStartSLOduration=39.460796242 podStartE2EDuration="39.460796242s" podCreationTimestamp="2026-03-13 12:56:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 12:56:20.175969574 +0000 UTC m=+153.829434733" watchObservedRunningTime="2026-03-13 12:56:40.460796242 +0000 UTC m=+174.114261401" Mar 13 12:56:40.463791 master-0 kubenswrapper[28149]: I0313 12:56:40.463751 28149 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-master-0"] Mar 13 12:56:40.463864 master-0 kubenswrapper[28149]: I0313 12:56:40.463799 28149 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-master-0","openshift-image-registry/node-ca-m4wkf","openshift-monitoring/thanos-querier-7bbcc57d7b-tv2k7"] Mar 13 12:56:40.464099 master-0 kubenswrapper[28149]: E0313 12:56:40.464074 28149 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="316953f3-5e6c-4aaf-802d-85959f7d7760" containerName="installer" Mar 13 12:56:40.464381 master-0 kubenswrapper[28149]: I0313 12:56:40.464353 28149 state_mem.go:107] "Deleted CPUSet assignment" podUID="316953f3-5e6c-4aaf-802d-85959f7d7760" containerName="installer" Mar 13 12:56:40.464535 master-0 kubenswrapper[28149]: I0313 12:56:40.464510 28149 memory_manager.go:354] "RemoveStaleState removing state" podUID="316953f3-5e6c-4aaf-802d-85959f7d7760" containerName="installer" Mar 13 12:56:40.465436 master-0 kubenswrapper[28149]: I0313 12:56:40.465395 28149 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/node-ca-m4wkf" Mar 13 12:56:40.466960 master-0 kubenswrapper[28149]: I0313 12:56:40.466890 28149 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/thanos-querier-7bbcc57d7b-tv2k7" Mar 13 12:56:40.467039 master-0 kubenswrapper[28149]: I0313 12:56:40.466954 28149 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Mar 13 12:56:40.467039 master-0 kubenswrapper[28149]: I0313 12:56:40.466974 28149 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-5vgmk" Mar 13 12:56:40.469296 master-0 kubenswrapper[28149]: I0313 12:56:40.469271 28149 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-tls" Mar 13 12:56:40.469680 master-0 kubenswrapper[28149]: I0313 12:56:40.469656 28149 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-kube-rbac-proxy-web" Mar 13 12:56:40.469771 master-0 kubenswrapper[28149]: I0313 12:56:40.469751 28149 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-grpc-tls-3ka2p05q78a8k" Mar 13 12:56:40.469962 master-0 kubenswrapper[28149]: I0313 12:56:40.469931 28149 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-kube-rbac-proxy" Mar 13 12:56:40.470443 master-0 kubenswrapper[28149]: I0313 12:56:40.470418 28149 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-kube-rbac-proxy-metrics" Mar 13 12:56:40.471645 master-0 kubenswrapper[28149]: I0313 12:56:40.471622 28149 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-kube-rbac-proxy-rules" Mar 13 12:56:40.471840 master-0 kubenswrapper[28149]: I0313 12:56:40.471823 28149 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 13 12:56:40.504498 master-0 kubenswrapper[28149]: I0313 12:56:40.504396 28149 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-master-0" podStartSLOduration=20.504371186 podStartE2EDuration="20.504371186s" podCreationTimestamp="2026-03-13 12:56:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 12:56:40.498099609 +0000 UTC m=+174.151564768" watchObservedRunningTime="2026-03-13 12:56:40.504371186 +0000 UTC m=+174.157836345" Mar 13 12:56:40.509809 master-0 kubenswrapper[28149]: I0313 12:56:40.509774 28149 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Mar 13 12:56:40.533389 master-0 kubenswrapper[28149]: I0313 12:56:40.533316 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g9vrk\" (UniqueName: \"kubernetes.io/projected/fff0844d-5dfa-4c93-bc4d-f01a6f356afe-kube-api-access-g9vrk\") pod \"thanos-querier-7bbcc57d7b-tv2k7\" (UID: \"fff0844d-5dfa-4c93-bc4d-f01a6f356afe\") " pod="openshift-monitoring/thanos-querier-7bbcc57d7b-tv2k7" Mar 13 12:56:40.533389 master-0 kubenswrapper[28149]: I0313 12:56:40.533370 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/fff0844d-5dfa-4c93-bc4d-f01a6f356afe-metrics-client-ca\") pod \"thanos-querier-7bbcc57d7b-tv2k7\" (UID: \"fff0844d-5dfa-4c93-bc4d-f01a6f356afe\") " pod="openshift-monitoring/thanos-querier-7bbcc57d7b-tv2k7" Mar 13 12:56:40.533659 master-0 kubenswrapper[28149]: I0313 12:56:40.533433 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-rules\" (UniqueName: \"kubernetes.io/secret/fff0844d-5dfa-4c93-bc4d-f01a6f356afe-secret-thanos-querier-kube-rbac-proxy-rules\") pod 
\"thanos-querier-7bbcc57d7b-tv2k7\" (UID: \"fff0844d-5dfa-4c93-bc4d-f01a6f356afe\") " pod="openshift-monitoring/thanos-querier-7bbcc57d7b-tv2k7" Mar 13 12:56:40.533659 master-0 kubenswrapper[28149]: I0313 12:56:40.533512 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/fff0844d-5dfa-4c93-bc4d-f01a6f356afe-secret-grpc-tls\") pod \"thanos-querier-7bbcc57d7b-tv2k7\" (UID: \"fff0844d-5dfa-4c93-bc4d-f01a6f356afe\") " pod="openshift-monitoring/thanos-querier-7bbcc57d7b-tv2k7" Mar 13 12:56:40.533659 master-0 kubenswrapper[28149]: I0313 12:56:40.533533 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c8mqv\" (UniqueName: \"kubernetes.io/projected/b6227f22-a46e-43f2-94a1-1920a0391302-kube-api-access-c8mqv\") pod \"node-ca-m4wkf\" (UID: \"b6227f22-a46e-43f2-94a1-1920a0391302\") " pod="openshift-image-registry/node-ca-m4wkf" Mar 13 12:56:40.533659 master-0 kubenswrapper[28149]: I0313 12:56:40.533555 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-thanos-querier-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/fff0844d-5dfa-4c93-bc4d-f01a6f356afe-secret-thanos-querier-kube-rbac-proxy\") pod \"thanos-querier-7bbcc57d7b-tv2k7\" (UID: \"fff0844d-5dfa-4c93-bc4d-f01a6f356afe\") " pod="openshift-monitoring/thanos-querier-7bbcc57d7b-tv2k7" Mar 13 12:56:40.533659 master-0 kubenswrapper[28149]: I0313 12:56:40.533598 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/fff0844d-5dfa-4c93-bc4d-f01a6f356afe-secret-thanos-querier-kube-rbac-proxy-web\") pod \"thanos-querier-7bbcc57d7b-tv2k7\" (UID: \"fff0844d-5dfa-4c93-bc4d-f01a6f356afe\") " pod="openshift-monitoring/thanos-querier-7bbcc57d7b-tv2k7" Mar 13 12:56:40.533659 
master-0 kubenswrapper[28149]: I0313 12:56:40.533630 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/b6227f22-a46e-43f2-94a1-1920a0391302-serviceca\") pod \"node-ca-m4wkf\" (UID: \"b6227f22-a46e-43f2-94a1-1920a0391302\") " pod="openshift-image-registry/node-ca-m4wkf" Mar 13 12:56:40.533659 master-0 kubenswrapper[28149]: I0313 12:56:40.533661 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-thanos-querier-tls\" (UniqueName: \"kubernetes.io/secret/fff0844d-5dfa-4c93-bc4d-f01a6f356afe-secret-thanos-querier-tls\") pod \"thanos-querier-7bbcc57d7b-tv2k7\" (UID: \"fff0844d-5dfa-4c93-bc4d-f01a6f356afe\") " pod="openshift-monitoring/thanos-querier-7bbcc57d7b-tv2k7" Mar 13 12:56:40.533973 master-0 kubenswrapper[28149]: I0313 12:56:40.533688 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-metrics\" (UniqueName: \"kubernetes.io/secret/fff0844d-5dfa-4c93-bc4d-f01a6f356afe-secret-thanos-querier-kube-rbac-proxy-metrics\") pod \"thanos-querier-7bbcc57d7b-tv2k7\" (UID: \"fff0844d-5dfa-4c93-bc4d-f01a6f356afe\") " pod="openshift-monitoring/thanos-querier-7bbcc57d7b-tv2k7" Mar 13 12:56:40.533973 master-0 kubenswrapper[28149]: I0313 12:56:40.533710 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/b6227f22-a46e-43f2-94a1-1920a0391302-host\") pod \"node-ca-m4wkf\" (UID: \"b6227f22-a46e-43f2-94a1-1920a0391302\") " pod="openshift-image-registry/node-ca-m4wkf" Mar 13 12:56:40.553374 master-0 kubenswrapper[28149]: I0313 12:56:40.553157 28149 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Mar 13 12:56:40.560700 master-0 kubenswrapper[28149]: I0313 12:56:40.560565 
28149 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Mar 13 12:56:40.607954 master-0 kubenswrapper[28149]: I0313 12:56:40.607915 28149 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-tls" Mar 13 12:56:40.634680 master-0 kubenswrapper[28149]: I0313 12:56:40.634639 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/b6227f22-a46e-43f2-94a1-1920a0391302-serviceca\") pod \"node-ca-m4wkf\" (UID: \"b6227f22-a46e-43f2-94a1-1920a0391302\") " pod="openshift-image-registry/node-ca-m4wkf" Mar 13 12:56:40.634932 master-0 kubenswrapper[28149]: I0313 12:56:40.634917 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-thanos-querier-tls\" (UniqueName: \"kubernetes.io/secret/fff0844d-5dfa-4c93-bc4d-f01a6f356afe-secret-thanos-querier-tls\") pod \"thanos-querier-7bbcc57d7b-tv2k7\" (UID: \"fff0844d-5dfa-4c93-bc4d-f01a6f356afe\") " pod="openshift-monitoring/thanos-querier-7bbcc57d7b-tv2k7" Mar 13 12:56:40.635028 master-0 kubenswrapper[28149]: I0313 12:56:40.635012 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-metrics\" (UniqueName: \"kubernetes.io/secret/fff0844d-5dfa-4c93-bc4d-f01a6f356afe-secret-thanos-querier-kube-rbac-proxy-metrics\") pod \"thanos-querier-7bbcc57d7b-tv2k7\" (UID: \"fff0844d-5dfa-4c93-bc4d-f01a6f356afe\") " pod="openshift-monitoring/thanos-querier-7bbcc57d7b-tv2k7" Mar 13 12:56:40.635106 master-0 kubenswrapper[28149]: I0313 12:56:40.635095 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/b6227f22-a46e-43f2-94a1-1920a0391302-host\") pod \"node-ca-m4wkf\" (UID: \"b6227f22-a46e-43f2-94a1-1920a0391302\") " pod="openshift-image-registry/node-ca-m4wkf" 
Mar 13 12:56:40.635270 master-0 kubenswrapper[28149]: I0313 12:56:40.635254 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g9vrk\" (UniqueName: \"kubernetes.io/projected/fff0844d-5dfa-4c93-bc4d-f01a6f356afe-kube-api-access-g9vrk\") pod \"thanos-querier-7bbcc57d7b-tv2k7\" (UID: \"fff0844d-5dfa-4c93-bc4d-f01a6f356afe\") " pod="openshift-monitoring/thanos-querier-7bbcc57d7b-tv2k7" Mar 13 12:56:40.635356 master-0 kubenswrapper[28149]: I0313 12:56:40.635345 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/fff0844d-5dfa-4c93-bc4d-f01a6f356afe-metrics-client-ca\") pod \"thanos-querier-7bbcc57d7b-tv2k7\" (UID: \"fff0844d-5dfa-4c93-bc4d-f01a6f356afe\") " pod="openshift-monitoring/thanos-querier-7bbcc57d7b-tv2k7" Mar 13 12:56:40.635429 master-0 kubenswrapper[28149]: I0313 12:56:40.635418 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-rules\" (UniqueName: \"kubernetes.io/secret/fff0844d-5dfa-4c93-bc4d-f01a6f356afe-secret-thanos-querier-kube-rbac-proxy-rules\") pod \"thanos-querier-7bbcc57d7b-tv2k7\" (UID: \"fff0844d-5dfa-4c93-bc4d-f01a6f356afe\") " pod="openshift-monitoring/thanos-querier-7bbcc57d7b-tv2k7" Mar 13 12:56:40.635509 master-0 kubenswrapper[28149]: I0313 12:56:40.635492 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/fff0844d-5dfa-4c93-bc4d-f01a6f356afe-secret-grpc-tls\") pod \"thanos-querier-7bbcc57d7b-tv2k7\" (UID: \"fff0844d-5dfa-4c93-bc4d-f01a6f356afe\") " pod="openshift-monitoring/thanos-querier-7bbcc57d7b-tv2k7" Mar 13 12:56:40.635598 master-0 kubenswrapper[28149]: I0313 12:56:40.635586 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c8mqv\" (UniqueName: 
\"kubernetes.io/projected/b6227f22-a46e-43f2-94a1-1920a0391302-kube-api-access-c8mqv\") pod \"node-ca-m4wkf\" (UID: \"b6227f22-a46e-43f2-94a1-1920a0391302\") " pod="openshift-image-registry/node-ca-m4wkf" Mar 13 12:56:40.635673 master-0 kubenswrapper[28149]: I0313 12:56:40.635662 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-thanos-querier-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/fff0844d-5dfa-4c93-bc4d-f01a6f356afe-secret-thanos-querier-kube-rbac-proxy\") pod \"thanos-querier-7bbcc57d7b-tv2k7\" (UID: \"fff0844d-5dfa-4c93-bc4d-f01a6f356afe\") " pod="openshift-monitoring/thanos-querier-7bbcc57d7b-tv2k7" Mar 13 12:56:40.635760 master-0 kubenswrapper[28149]: I0313 12:56:40.635748 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/fff0844d-5dfa-4c93-bc4d-f01a6f356afe-secret-thanos-querier-kube-rbac-proxy-web\") pod \"thanos-querier-7bbcc57d7b-tv2k7\" (UID: \"fff0844d-5dfa-4c93-bc4d-f01a6f356afe\") " pod="openshift-monitoring/thanos-querier-7bbcc57d7b-tv2k7" Mar 13 12:56:40.643301 master-0 kubenswrapper[28149]: I0313 12:56:40.643269 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/b6227f22-a46e-43f2-94a1-1920a0391302-serviceca\") pod \"node-ca-m4wkf\" (UID: \"b6227f22-a46e-43f2-94a1-1920a0391302\") " pod="openshift-image-registry/node-ca-m4wkf" Mar 13 12:56:40.643374 master-0 kubenswrapper[28149]: I0313 12:56:40.643269 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-thanos-querier-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/fff0844d-5dfa-4c93-bc4d-f01a6f356afe-secret-thanos-querier-kube-rbac-proxy-web\") pod \"thanos-querier-7bbcc57d7b-tv2k7\" (UID: \"fff0844d-5dfa-4c93-bc4d-f01a6f356afe\") " pod="openshift-monitoring/thanos-querier-7bbcc57d7b-tv2k7" Mar 13 12:56:40.643374 
master-0 kubenswrapper[28149]: I0313 12:56:40.643353 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/b6227f22-a46e-43f2-94a1-1920a0391302-host\") pod \"node-ca-m4wkf\" (UID: \"b6227f22-a46e-43f2-94a1-1920a0391302\") " pod="openshift-image-registry/node-ca-m4wkf" Mar 13 12:56:40.643472 master-0 kubenswrapper[28149]: I0313 12:56:40.643456 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/fff0844d-5dfa-4c93-bc4d-f01a6f356afe-metrics-client-ca\") pod \"thanos-querier-7bbcc57d7b-tv2k7\" (UID: \"fff0844d-5dfa-4c93-bc4d-f01a6f356afe\") " pod="openshift-monitoring/thanos-querier-7bbcc57d7b-tv2k7" Mar 13 12:56:40.646103 master-0 kubenswrapper[28149]: I0313 12:56:40.646064 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-thanos-querier-kube-rbac-proxy-metrics\" (UniqueName: \"kubernetes.io/secret/fff0844d-5dfa-4c93-bc4d-f01a6f356afe-secret-thanos-querier-kube-rbac-proxy-metrics\") pod \"thanos-querier-7bbcc57d7b-tv2k7\" (UID: \"fff0844d-5dfa-4c93-bc4d-f01a6f356afe\") " pod="openshift-monitoring/thanos-querier-7bbcc57d7b-tv2k7" Mar 13 12:56:40.646550 master-0 kubenswrapper[28149]: I0313 12:56:40.646522 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/fff0844d-5dfa-4c93-bc4d-f01a6f356afe-secret-grpc-tls\") pod \"thanos-querier-7bbcc57d7b-tv2k7\" (UID: \"fff0844d-5dfa-4c93-bc4d-f01a6f356afe\") " pod="openshift-monitoring/thanos-querier-7bbcc57d7b-tv2k7" Mar 13 12:56:40.646949 master-0 kubenswrapper[28149]: I0313 12:56:40.646931 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-thanos-querier-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/fff0844d-5dfa-4c93-bc4d-f01a6f356afe-secret-thanos-querier-kube-rbac-proxy\") pod \"thanos-querier-7bbcc57d7b-tv2k7\" (UID: 
\"fff0844d-5dfa-4c93-bc4d-f01a6f356afe\") " pod="openshift-monitoring/thanos-querier-7bbcc57d7b-tv2k7" Mar 13 12:56:40.648498 master-0 kubenswrapper[28149]: I0313 12:56:40.648454 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-thanos-querier-kube-rbac-proxy-rules\" (UniqueName: \"kubernetes.io/secret/fff0844d-5dfa-4c93-bc4d-f01a6f356afe-secret-thanos-querier-kube-rbac-proxy-rules\") pod \"thanos-querier-7bbcc57d7b-tv2k7\" (UID: \"fff0844d-5dfa-4c93-bc4d-f01a6f356afe\") " pod="openshift-monitoring/thanos-querier-7bbcc57d7b-tv2k7" Mar 13 12:56:40.649150 master-0 kubenswrapper[28149]: I0313 12:56:40.649098 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-thanos-querier-tls\" (UniqueName: \"kubernetes.io/secret/fff0844d-5dfa-4c93-bc4d-f01a6f356afe-secret-thanos-querier-tls\") pod \"thanos-querier-7bbcc57d7b-tv2k7\" (UID: \"fff0844d-5dfa-4c93-bc4d-f01a6f356afe\") " pod="openshift-monitoring/thanos-querier-7bbcc57d7b-tv2k7" Mar 13 12:56:40.670914 master-0 kubenswrapper[28149]: I0313 12:56:40.670864 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c8mqv\" (UniqueName: \"kubernetes.io/projected/b6227f22-a46e-43f2-94a1-1920a0391302-kube-api-access-c8mqv\") pod \"node-ca-m4wkf\" (UID: \"b6227f22-a46e-43f2-94a1-1920a0391302\") " pod="openshift-image-registry/node-ca-m4wkf" Mar 13 12:56:40.688577 master-0 kubenswrapper[28149]: I0313 12:56:40.688528 28149 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Mar 13 12:56:40.693697 master-0 kubenswrapper[28149]: I0313 12:56:40.693667 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g9vrk\" (UniqueName: \"kubernetes.io/projected/fff0844d-5dfa-4c93-bc4d-f01a6f356afe-kube-api-access-g9vrk\") pod \"thanos-querier-7bbcc57d7b-tv2k7\" (UID: \"fff0844d-5dfa-4c93-bc4d-f01a6f356afe\") " 
pod="openshift-monitoring/thanos-querier-7bbcc57d7b-tv2k7" Mar 13 12:56:40.697810 master-0 kubenswrapper[28149]: I0313 12:56:40.697770 28149 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/thanos-querier-7bbcc57d7b-tv2k7"] Mar 13 12:56:40.748711 master-0 kubenswrapper[28149]: I0313 12:56:40.748591 28149 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Mar 13 12:56:40.758956 master-0 kubenswrapper[28149]: I0313 12:56:40.758923 28149 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"whereabouts-config" Mar 13 12:56:40.792062 master-0 kubenswrapper[28149]: I0313 12:56:40.792025 28149 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/node-ca-m4wkf" Mar 13 12:56:40.810306 master-0 kubenswrapper[28149]: I0313 12:56:40.810269 28149 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/thanos-querier-7bbcc57d7b-tv2k7" Mar 13 12:56:40.846156 master-0 kubenswrapper[28149]: I0313 12:56:40.846042 28149 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Mar 13 12:56:40.871188 master-0 kubenswrapper[28149]: I0313 12:56:40.870655 28149 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Mar 13 12:56:40.877465 master-0 kubenswrapper[28149]: I0313 12:56:40.877430 28149 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Mar 13 12:56:40.882042 master-0 kubenswrapper[28149]: I0313 12:56:40.882005 28149 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-controller"/"kube-root-ca.crt" Mar 13 12:56:40.898563 master-0 kubenswrapper[28149]: I0313 12:56:40.898515 28149 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Mar 13 12:56:40.969516 master-0 kubenswrapper[28149]: I0313 12:56:40.969456 28149 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Mar 13 12:56:40.978156 master-0 kubenswrapper[28149]: I0313 12:56:40.978102 28149 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Mar 13 12:56:40.981266 master-0 kubenswrapper[28149]: I0313 12:56:40.981241 28149 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Mar 13 12:56:41.002947 master-0 kubenswrapper[28149]: I0313 12:56:41.002828 28149 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Mar 13 12:56:41.018133 master-0 kubenswrapper[28149]: I0313 12:56:41.018089 28149 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-node-tuning-operator"/"trusted-ca" Mar 13 12:56:41.139242 master-0 kubenswrapper[28149]: I0313 12:56:41.139191 28149 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Mar 13 12:56:41.235813 master-0 kubenswrapper[28149]: I0313 12:56:41.235763 28149 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Mar 13 12:56:41.241316 master-0 kubenswrapper[28149]: I0313 12:56:41.241276 28149 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Mar 13 12:56:41.273689 master-0 kubenswrapper[28149]: I0313 12:56:41.273643 28149 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-storage-operator"/"openshift-service-ca.crt" Mar 13 12:56:41.374113 master-0 kubenswrapper[28149]: I0313 12:56:41.374068 28149 reflector.go:368] Caches 
populated for *v1.ConfigMap from object-"openshift-monitoring"/"kubelet-serving-ca-bundle" Mar 13 12:56:41.375652 master-0 kubenswrapper[28149]: I0313 12:56:41.375002 28149 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"cluster-baremetal-operator-images" Mar 13 12:56:41.410076 master-0 kubenswrapper[28149]: I0313 12:56:41.409950 28149 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Mar 13 12:56:41.425640 master-0 kubenswrapper[28149]: I0313 12:56:41.425594 28149 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Mar 13 12:56:41.491695 master-0 kubenswrapper[28149]: I0313 12:56:41.491652 28149 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"openshift-state-metrics-kube-rbac-proxy-config" Mar 13 12:56:41.553637 master-0 kubenswrapper[28149]: I0313 12:56:41.553293 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-m4wkf" event={"ID":"b6227f22-a46e-43f2-94a1-1920a0391302","Type":"ContainerStarted","Data":"04e5c0ae872fb7a173509c11e4006d6cc212aa3bee0827f917199e3d05bf60f8"} Mar 13 12:56:41.604606 master-0 kubenswrapper[28149]: I0313 12:56:41.604530 28149 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Mar 13 12:56:41.608945 master-0 kubenswrapper[28149]: I0313 12:56:41.608913 28149 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cloud-credential-operator"/"cloud-credential-operator-serving-cert" Mar 13 12:56:41.656402 master-0 kubenswrapper[28149]: I0313 12:56:41.655930 28149 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/metrics-server-84b66c585b-f7g5r"] Mar 13 12:56:41.657511 master-0 kubenswrapper[28149]: I0313 12:56:41.657044 28149 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/metrics-server-84b66c585b-f7g5r" Mar 13 12:56:41.664800 master-0 kubenswrapper[28149]: I0313 12:56:41.664370 28149 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"metrics-server-bffv53ajdsuh9" Mar 13 12:56:41.679613 master-0 kubenswrapper[28149]: I0313 12:56:41.679556 28149 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-monitoring/metrics-server-567b9cf7f-cxnj2"] Mar 13 12:56:41.679997 master-0 kubenswrapper[28149]: I0313 12:56:41.679961 28149 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-monitoring/metrics-server-567b9cf7f-cxnj2" podUID="fc192c03-5aec-4507-a702-56bf98c96e9c" containerName="metrics-server" containerID="cri-o://6446a8dda38eb9740b431e3cbbce0e66637311ae9d8e6bde203aefb67d8183fd" gracePeriod=170 Mar 13 12:56:41.692422 master-0 kubenswrapper[28149]: I0313 12:56:41.692370 28149 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/metrics-server-84b66c585b-f7g5r"] Mar 13 12:56:42.261898 master-0 kubenswrapper[28149]: I0313 12:56:41.758959 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-metrics-server-tls\" (UniqueName: \"kubernetes.io/secret/a529d528-3bd9-4512-9ae8-8284329c9c4c-secret-metrics-server-tls\") pod \"metrics-server-84b66c585b-f7g5r\" (UID: \"a529d528-3bd9-4512-9ae8-8284329c9c4c\") " pod="openshift-monitoring/metrics-server-84b66c585b-f7g5r" Mar 13 12:56:42.261898 master-0 kubenswrapper[28149]: I0313 12:56:41.759006 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/a529d528-3bd9-4512-9ae8-8284329c9c4c-secret-metrics-client-certs\") pod \"metrics-server-84b66c585b-f7g5r\" (UID: \"a529d528-3bd9-4512-9ae8-8284329c9c4c\") " pod="openshift-monitoring/metrics-server-84b66c585b-f7g5r" Mar 13 12:56:42.261898 
master-0 kubenswrapper[28149]: I0313 12:56:41.759026 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k76nj\" (UniqueName: \"kubernetes.io/projected/a529d528-3bd9-4512-9ae8-8284329c9c4c-kube-api-access-k76nj\") pod \"metrics-server-84b66c585b-f7g5r\" (UID: \"a529d528-3bd9-4512-9ae8-8284329c9c4c\") " pod="openshift-monitoring/metrics-server-84b66c585b-f7g5r" Mar 13 12:56:42.261898 master-0 kubenswrapper[28149]: I0313 12:56:41.759047 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a529d528-3bd9-4512-9ae8-8284329c9c4c-client-ca-bundle\") pod \"metrics-server-84b66c585b-f7g5r\" (UID: \"a529d528-3bd9-4512-9ae8-8284329c9c4c\") " pod="openshift-monitoring/metrics-server-84b66c585b-f7g5r" Mar 13 12:56:42.261898 master-0 kubenswrapper[28149]: I0313 12:56:41.759152 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a529d528-3bd9-4512-9ae8-8284329c9c4c-configmap-kubelet-serving-ca-bundle\") pod \"metrics-server-84b66c585b-f7g5r\" (UID: \"a529d528-3bd9-4512-9ae8-8284329c9c4c\") " pod="openshift-monitoring/metrics-server-84b66c585b-f7g5r" Mar 13 12:56:42.261898 master-0 kubenswrapper[28149]: I0313 12:56:41.759206 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-server-audit-profiles\" (UniqueName: \"kubernetes.io/configmap/a529d528-3bd9-4512-9ae8-8284329c9c4c-metrics-server-audit-profiles\") pod \"metrics-server-84b66c585b-f7g5r\" (UID: \"a529d528-3bd9-4512-9ae8-8284329c9c4c\") " pod="openshift-monitoring/metrics-server-84b66c585b-f7g5r" Mar 13 12:56:42.261898 master-0 kubenswrapper[28149]: I0313 12:56:41.759230 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"audit-log\" (UniqueName: \"kubernetes.io/empty-dir/a529d528-3bd9-4512-9ae8-8284329c9c4c-audit-log\") pod \"metrics-server-84b66c585b-f7g5r\" (UID: \"a529d528-3bd9-4512-9ae8-8284329c9c4c\") " pod="openshift-monitoring/metrics-server-84b66c585b-f7g5r" Mar 13 12:56:42.261898 master-0 kubenswrapper[28149]: I0313 12:56:41.860523 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-metrics-server-tls\" (UniqueName: \"kubernetes.io/secret/a529d528-3bd9-4512-9ae8-8284329c9c4c-secret-metrics-server-tls\") pod \"metrics-server-84b66c585b-f7g5r\" (UID: \"a529d528-3bd9-4512-9ae8-8284329c9c4c\") " pod="openshift-monitoring/metrics-server-84b66c585b-f7g5r" Mar 13 12:56:42.261898 master-0 kubenswrapper[28149]: I0313 12:56:41.860573 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/a529d528-3bd9-4512-9ae8-8284329c9c4c-secret-metrics-client-certs\") pod \"metrics-server-84b66c585b-f7g5r\" (UID: \"a529d528-3bd9-4512-9ae8-8284329c9c4c\") " pod="openshift-monitoring/metrics-server-84b66c585b-f7g5r" Mar 13 12:56:42.261898 master-0 kubenswrapper[28149]: I0313 12:56:41.860595 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k76nj\" (UniqueName: \"kubernetes.io/projected/a529d528-3bd9-4512-9ae8-8284329c9c4c-kube-api-access-k76nj\") pod \"metrics-server-84b66c585b-f7g5r\" (UID: \"a529d528-3bd9-4512-9ae8-8284329c9c4c\") " pod="openshift-monitoring/metrics-server-84b66c585b-f7g5r" Mar 13 12:56:42.261898 master-0 kubenswrapper[28149]: I0313 12:56:41.860612 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a529d528-3bd9-4512-9ae8-8284329c9c4c-client-ca-bundle\") pod \"metrics-server-84b66c585b-f7g5r\" (UID: \"a529d528-3bd9-4512-9ae8-8284329c9c4c\") " pod="openshift-monitoring/metrics-server-84b66c585b-f7g5r" Mar 13 
12:56:42.261898 master-0 kubenswrapper[28149]: I0313 12:56:41.860652 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a529d528-3bd9-4512-9ae8-8284329c9c4c-configmap-kubelet-serving-ca-bundle\") pod \"metrics-server-84b66c585b-f7g5r\" (UID: \"a529d528-3bd9-4512-9ae8-8284329c9c4c\") " pod="openshift-monitoring/metrics-server-84b66c585b-f7g5r" Mar 13 12:56:42.261898 master-0 kubenswrapper[28149]: I0313 12:56:41.860768 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-server-audit-profiles\" (UniqueName: \"kubernetes.io/configmap/a529d528-3bd9-4512-9ae8-8284329c9c4c-metrics-server-audit-profiles\") pod \"metrics-server-84b66c585b-f7g5r\" (UID: \"a529d528-3bd9-4512-9ae8-8284329c9c4c\") " pod="openshift-monitoring/metrics-server-84b66c585b-f7g5r" Mar 13 12:56:42.261898 master-0 kubenswrapper[28149]: I0313 12:56:41.860795 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-log\" (UniqueName: \"kubernetes.io/empty-dir/a529d528-3bd9-4512-9ae8-8284329c9c4c-audit-log\") pod \"metrics-server-84b66c585b-f7g5r\" (UID: \"a529d528-3bd9-4512-9ae8-8284329c9c4c\") " pod="openshift-monitoring/metrics-server-84b66c585b-f7g5r" Mar 13 12:56:42.261898 master-0 kubenswrapper[28149]: I0313 12:56:41.861407 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-log\" (UniqueName: \"kubernetes.io/empty-dir/a529d528-3bd9-4512-9ae8-8284329c9c4c-audit-log\") pod \"metrics-server-84b66c585b-f7g5r\" (UID: \"a529d528-3bd9-4512-9ae8-8284329c9c4c\") " pod="openshift-monitoring/metrics-server-84b66c585b-f7g5r" Mar 13 12:56:42.261898 master-0 kubenswrapper[28149]: I0313 12:56:41.875203 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-metrics-client-certs\" (UniqueName: 
\"kubernetes.io/secret/a529d528-3bd9-4512-9ae8-8284329c9c4c-secret-metrics-client-certs\") pod \"metrics-server-84b66c585b-f7g5r\" (UID: \"a529d528-3bd9-4512-9ae8-8284329c9c4c\") " pod="openshift-monitoring/metrics-server-84b66c585b-f7g5r" Mar 13 12:56:42.261898 master-0 kubenswrapper[28149]: I0313 12:56:41.878791 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a529d528-3bd9-4512-9ae8-8284329c9c4c-client-ca-bundle\") pod \"metrics-server-84b66c585b-f7g5r\" (UID: \"a529d528-3bd9-4512-9ae8-8284329c9c4c\") " pod="openshift-monitoring/metrics-server-84b66c585b-f7g5r" Mar 13 12:56:42.261898 master-0 kubenswrapper[28149]: I0313 12:56:41.880071 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a529d528-3bd9-4512-9ae8-8284329c9c4c-configmap-kubelet-serving-ca-bundle\") pod \"metrics-server-84b66c585b-f7g5r\" (UID: \"a529d528-3bd9-4512-9ae8-8284329c9c4c\") " pod="openshift-monitoring/metrics-server-84b66c585b-f7g5r" Mar 13 12:56:42.261898 master-0 kubenswrapper[28149]: I0313 12:56:41.881942 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-server-audit-profiles\" (UniqueName: \"kubernetes.io/configmap/a529d528-3bd9-4512-9ae8-8284329c9c4c-metrics-server-audit-profiles\") pod \"metrics-server-84b66c585b-f7g5r\" (UID: \"a529d528-3bd9-4512-9ae8-8284329c9c4c\") " pod="openshift-monitoring/metrics-server-84b66c585b-f7g5r" Mar 13 12:56:42.261898 master-0 kubenswrapper[28149]: I0313 12:56:41.882515 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-metrics-server-tls\" (UniqueName: \"kubernetes.io/secret/a529d528-3bd9-4512-9ae8-8284329c9c4c-secret-metrics-server-tls\") pod \"metrics-server-84b66c585b-f7g5r\" (UID: \"a529d528-3bd9-4512-9ae8-8284329c9c4c\") " pod="openshift-monitoring/metrics-server-84b66c585b-f7g5r" Mar 13 
12:56:42.261898 master-0 kubenswrapper[28149]: I0313 12:56:41.970468 28149 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy-cluster-autoscaler-operator" Mar 13 12:56:42.267116 master-0 kubenswrapper[28149]: I0313 12:56:42.267051 28149 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-olm-operator"/"cluster-olm-operator-serving-cert" Mar 13 12:56:42.267480 master-0 kubenswrapper[28149]: I0313 12:56:42.267434 28149 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Mar 13 12:56:42.267743 master-0 kubenswrapper[28149]: I0313 12:56:42.267711 28149 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Mar 13 12:56:42.267902 master-0 kubenswrapper[28149]: I0313 12:56:42.267874 28149 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Mar 13 12:56:42.268092 master-0 kubenswrapper[28149]: I0313 12:56:42.268060 28149 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Mar 13 12:56:42.268320 master-0 kubenswrapper[28149]: I0313 12:56:42.268290 28149 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Mar 13 12:56:42.270004 master-0 kubenswrapper[28149]: I0313 12:56:42.269950 28149 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Mar 13 12:56:42.270984 master-0 kubenswrapper[28149]: I0313 12:56:42.270937 28149 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Mar 13 12:56:42.271736 master-0 kubenswrapper[28149]: I0313 12:56:42.271695 28149 reflector.go:368] Caches populated for *v1.Node from 
k8s.io/client-go/informers/factory.go:160 Mar 13 12:56:42.277043 master-0 kubenswrapper[28149]: I0313 12:56:42.276983 28149 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-catalogd"/"catalogd-trusted-ca-bundle" Mar 13 12:56:42.366392 master-0 kubenswrapper[28149]: I0313 12:56:42.366356 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k76nj\" (UniqueName: \"kubernetes.io/projected/a529d528-3bd9-4512-9ae8-8284329c9c4c-kube-api-access-k76nj\") pod \"metrics-server-84b66c585b-f7g5r\" (UID: \"a529d528-3bd9-4512-9ae8-8284329c9c4c\") " pod="openshift-monitoring/metrics-server-84b66c585b-f7g5r" Mar 13 12:56:42.370802 master-0 kubenswrapper[28149]: I0313 12:56:42.370772 28149 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"kube-state-metrics-custom-resource-state-configmap" Mar 13 12:56:42.417017 master-0 kubenswrapper[28149]: I0313 12:56:42.416973 28149 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Mar 13 12:56:42.488506 master-0 kubenswrapper[28149]: I0313 12:56:42.488457 28149 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/telemeter-client-647df5cfcf-7dtwq"] Mar 13 12:56:42.489940 master-0 kubenswrapper[28149]: I0313 12:56:42.489914 28149 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/telemeter-client-647df5cfcf-7dtwq" Mar 13 12:56:42.498234 master-0 kubenswrapper[28149]: I0313 12:56:42.498178 28149 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"telemeter-client-tls" Mar 13 12:56:42.502226 master-0 kubenswrapper[28149]: I0313 12:56:42.502192 28149 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"telemeter-client" Mar 13 12:56:42.502346 master-0 kubenswrapper[28149]: I0313 12:56:42.502324 28149 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"federate-client-certs" Mar 13 12:56:42.502432 master-0 kubenswrapper[28149]: I0313 12:56:42.502412 28149 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"telemeter-client-kube-rbac-proxy-config" Mar 13 12:56:42.502489 master-0 kubenswrapper[28149]: I0313 12:56:42.502463 28149 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Mar 13 12:56:42.502594 master-0 kubenswrapper[28149]: I0313 12:56:42.502571 28149 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"telemeter-client-serving-certs-ca-bundle" Mar 13 12:56:42.507617 master-0 kubenswrapper[28149]: I0313 12:56:42.507566 28149 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"telemeter-trusted-ca-bundle-8i12ta5c71j38" Mar 13 12:56:42.519333 master-0 kubenswrapper[28149]: I0313 12:56:42.514818 28149 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/telemeter-client-647df5cfcf-7dtwq"] Mar 13 12:56:42.581431 master-0 kubenswrapper[28149]: I0313 12:56:42.581377 28149 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/metrics-server-84b66c585b-f7g5r" Mar 13 12:56:42.587775 master-0 kubenswrapper[28149]: I0313 12:56:42.587705 28149 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Mar 13 12:56:42.656838 master-0 kubenswrapper[28149]: I0313 12:56:42.656668 28149 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"] Mar 13 12:56:42.657024 master-0 kubenswrapper[28149]: I0313 12:56:42.656977 28149 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" podUID="b275ed7e9ce09d69a66613ca3ae3d89e" containerName="startup-monitor" containerID="cri-o://3308b8ce530f3a4e5f17e073b604a57926c719321024108b3eb887a2ba16cf6e" gracePeriod=5 Mar 13 12:56:42.684369 master-0 kubenswrapper[28149]: I0313 12:56:42.684194 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-98lsk\" (UniqueName: \"kubernetes.io/projected/06eeb16c-c683-4bfe-b243-df34da90042b-kube-api-access-98lsk\") pod \"telemeter-client-647df5cfcf-7dtwq\" (UID: \"06eeb16c-c683-4bfe-b243-df34da90042b\") " pod="openshift-monitoring/telemeter-client-647df5cfcf-7dtwq" Mar 13 12:56:42.684369 master-0 kubenswrapper[28149]: I0313 12:56:42.684235 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/06eeb16c-c683-4bfe-b243-df34da90042b-serving-certs-ca-bundle\") pod \"telemeter-client-647df5cfcf-7dtwq\" (UID: \"06eeb16c-c683-4bfe-b243-df34da90042b\") " pod="openshift-monitoring/telemeter-client-647df5cfcf-7dtwq" Mar 13 12:56:42.684369 master-0 kubenswrapper[28149]: I0313 12:56:42.684262 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"secret-telemeter-client\" (UniqueName: \"kubernetes.io/secret/06eeb16c-c683-4bfe-b243-df34da90042b-secret-telemeter-client\") pod \"telemeter-client-647df5cfcf-7dtwq\" (UID: \"06eeb16c-c683-4bfe-b243-df34da90042b\") " pod="openshift-monitoring/telemeter-client-647df5cfcf-7dtwq" Mar 13 12:56:42.684369 master-0 kubenswrapper[28149]: I0313 12:56:42.684299 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemeter-client-tls\" (UniqueName: \"kubernetes.io/secret/06eeb16c-c683-4bfe-b243-df34da90042b-telemeter-client-tls\") pod \"telemeter-client-647df5cfcf-7dtwq\" (UID: \"06eeb16c-c683-4bfe-b243-df34da90042b\") " pod="openshift-monitoring/telemeter-client-647df5cfcf-7dtwq" Mar 13 12:56:42.684369 master-0 kubenswrapper[28149]: I0313 12:56:42.684316 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/06eeb16c-c683-4bfe-b243-df34da90042b-metrics-client-ca\") pod \"telemeter-client-647df5cfcf-7dtwq\" (UID: \"06eeb16c-c683-4bfe-b243-df34da90042b\") " pod="openshift-monitoring/telemeter-client-647df5cfcf-7dtwq" Mar 13 12:56:42.684369 master-0 kubenswrapper[28149]: I0313 12:56:42.684327 28149 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Mar 13 12:56:42.684369 master-0 kubenswrapper[28149]: I0313 12:56:42.684340 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-telemeter-client-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/06eeb16c-c683-4bfe-b243-df34da90042b-secret-telemeter-client-kube-rbac-proxy-config\") pod \"telemeter-client-647df5cfcf-7dtwq\" (UID: \"06eeb16c-c683-4bfe-b243-df34da90042b\") " pod="openshift-monitoring/telemeter-client-647df5cfcf-7dtwq" Mar 13 12:56:42.684729 master-0 kubenswrapper[28149]: I0313 12:56:42.684515 28149 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"telemeter-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/06eeb16c-c683-4bfe-b243-df34da90042b-telemeter-trusted-ca-bundle\") pod \"telemeter-client-647df5cfcf-7dtwq\" (UID: \"06eeb16c-c683-4bfe-b243-df34da90042b\") " pod="openshift-monitoring/telemeter-client-647df5cfcf-7dtwq" Mar 13 12:56:42.684729 master-0 kubenswrapper[28149]: I0313 12:56:42.684672 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"federate-client-tls\" (UniqueName: \"kubernetes.io/secret/06eeb16c-c683-4bfe-b243-df34da90042b-federate-client-tls\") pod \"telemeter-client-647df5cfcf-7dtwq\" (UID: \"06eeb16c-c683-4bfe-b243-df34da90042b\") " pod="openshift-monitoring/telemeter-client-647df5cfcf-7dtwq" Mar 13 12:56:42.705818 master-0 kubenswrapper[28149]: I0313 12:56:42.705781 28149 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Mar 13 12:56:42.735857 master-0 kubenswrapper[28149]: I0313 12:56:42.735790 28149 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Mar 13 12:56:42.877615 master-0 kubenswrapper[28149]: I0313 12:56:42.877481 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-98lsk\" (UniqueName: \"kubernetes.io/projected/06eeb16c-c683-4bfe-b243-df34da90042b-kube-api-access-98lsk\") pod \"telemeter-client-647df5cfcf-7dtwq\" (UID: \"06eeb16c-c683-4bfe-b243-df34da90042b\") " pod="openshift-monitoring/telemeter-client-647df5cfcf-7dtwq" Mar 13 12:56:42.877830 master-0 kubenswrapper[28149]: I0313 12:56:42.877807 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/06eeb16c-c683-4bfe-b243-df34da90042b-serving-certs-ca-bundle\") pod \"telemeter-client-647df5cfcf-7dtwq\" (UID: \"06eeb16c-c683-4bfe-b243-df34da90042b\") " 
pod="openshift-monitoring/telemeter-client-647df5cfcf-7dtwq" Mar 13 12:56:42.878104 master-0 kubenswrapper[28149]: I0313 12:56:42.878065 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-telemeter-client\" (UniqueName: \"kubernetes.io/secret/06eeb16c-c683-4bfe-b243-df34da90042b-secret-telemeter-client\") pod \"telemeter-client-647df5cfcf-7dtwq\" (UID: \"06eeb16c-c683-4bfe-b243-df34da90042b\") " pod="openshift-monitoring/telemeter-client-647df5cfcf-7dtwq" Mar 13 12:56:42.878240 master-0 kubenswrapper[28149]: I0313 12:56:42.878199 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemeter-client-tls\" (UniqueName: \"kubernetes.io/secret/06eeb16c-c683-4bfe-b243-df34da90042b-telemeter-client-tls\") pod \"telemeter-client-647df5cfcf-7dtwq\" (UID: \"06eeb16c-c683-4bfe-b243-df34da90042b\") " pod="openshift-monitoring/telemeter-client-647df5cfcf-7dtwq" Mar 13 12:56:42.878302 master-0 kubenswrapper[28149]: I0313 12:56:42.878241 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/06eeb16c-c683-4bfe-b243-df34da90042b-metrics-client-ca\") pod \"telemeter-client-647df5cfcf-7dtwq\" (UID: \"06eeb16c-c683-4bfe-b243-df34da90042b\") " pod="openshift-monitoring/telemeter-client-647df5cfcf-7dtwq" Mar 13 12:56:42.878302 master-0 kubenswrapper[28149]: I0313 12:56:42.878267 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-telemeter-client-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/06eeb16c-c683-4bfe-b243-df34da90042b-secret-telemeter-client-kube-rbac-proxy-config\") pod \"telemeter-client-647df5cfcf-7dtwq\" (UID: \"06eeb16c-c683-4bfe-b243-df34da90042b\") " pod="openshift-monitoring/telemeter-client-647df5cfcf-7dtwq" Mar 13 12:56:42.878394 master-0 kubenswrapper[28149]: I0313 12:56:42.878344 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"telemeter-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/06eeb16c-c683-4bfe-b243-df34da90042b-telemeter-trusted-ca-bundle\") pod \"telemeter-client-647df5cfcf-7dtwq\" (UID: \"06eeb16c-c683-4bfe-b243-df34da90042b\") " pod="openshift-monitoring/telemeter-client-647df5cfcf-7dtwq" Mar 13 12:56:42.878458 master-0 kubenswrapper[28149]: I0313 12:56:42.878438 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"federate-client-tls\" (UniqueName: \"kubernetes.io/secret/06eeb16c-c683-4bfe-b243-df34da90042b-federate-client-tls\") pod \"telemeter-client-647df5cfcf-7dtwq\" (UID: \"06eeb16c-c683-4bfe-b243-df34da90042b\") " pod="openshift-monitoring/telemeter-client-647df5cfcf-7dtwq" Mar 13 12:56:42.878603 master-0 kubenswrapper[28149]: E0313 12:56:42.878578 28149 secret.go:189] Couldn't get secret openshift-monitoring/telemeter-client-tls: secret "telemeter-client-tls" not found Mar 13 12:56:42.878677 master-0 kubenswrapper[28149]: E0313 12:56:42.878660 28149 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/06eeb16c-c683-4bfe-b243-df34da90042b-telemeter-client-tls podName:06eeb16c-c683-4bfe-b243-df34da90042b nodeName:}" failed. No retries permitted until 2026-03-13 12:56:43.378632481 +0000 UTC m=+177.032097640 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "telemeter-client-tls" (UniqueName: "kubernetes.io/secret/06eeb16c-c683-4bfe-b243-df34da90042b-telemeter-client-tls") pod "telemeter-client-647df5cfcf-7dtwq" (UID: "06eeb16c-c683-4bfe-b243-df34da90042b") : secret "telemeter-client-tls" not found Mar 13 12:56:42.879297 master-0 kubenswrapper[28149]: I0313 12:56:42.879267 28149 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"kube-root-ca.crt" Mar 13 12:56:42.879389 master-0 kubenswrapper[28149]: I0313 12:56:42.879369 28149 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-tqljd" Mar 13 12:56:42.879494 master-0 kubenswrapper[28149]: I0313 12:56:42.879472 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/06eeb16c-c683-4bfe-b243-df34da90042b-serving-certs-ca-bundle\") pod \"telemeter-client-647df5cfcf-7dtwq\" (UID: \"06eeb16c-c683-4bfe-b243-df34da90042b\") " pod="openshift-monitoring/telemeter-client-647df5cfcf-7dtwq" Mar 13 12:56:42.879902 master-0 kubenswrapper[28149]: I0313 12:56:42.879867 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemeter-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/06eeb16c-c683-4bfe-b243-df34da90042b-telemeter-trusted-ca-bundle\") pod \"telemeter-client-647df5cfcf-7dtwq\" (UID: \"06eeb16c-c683-4bfe-b243-df34da90042b\") " pod="openshift-monitoring/telemeter-client-647df5cfcf-7dtwq" Mar 13 12:56:42.879902 master-0 kubenswrapper[28149]: I0313 12:56:42.879880 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/06eeb16c-c683-4bfe-b243-df34da90042b-metrics-client-ca\") pod \"telemeter-client-647df5cfcf-7dtwq\" (UID: \"06eeb16c-c683-4bfe-b243-df34da90042b\") " pod="openshift-monitoring/telemeter-client-647df5cfcf-7dtwq" Mar 
13 12:56:42.887280 master-0 kubenswrapper[28149]: I0313 12:56:42.887240 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-telemeter-client-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/06eeb16c-c683-4bfe-b243-df34da90042b-secret-telemeter-client-kube-rbac-proxy-config\") pod \"telemeter-client-647df5cfcf-7dtwq\" (UID: \"06eeb16c-c683-4bfe-b243-df34da90042b\") " pod="openshift-monitoring/telemeter-client-647df5cfcf-7dtwq" Mar 13 12:56:42.887474 master-0 kubenswrapper[28149]: I0313 12:56:42.887431 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"federate-client-tls\" (UniqueName: \"kubernetes.io/secret/06eeb16c-c683-4bfe-b243-df34da90042b-federate-client-tls\") pod \"telemeter-client-647df5cfcf-7dtwq\" (UID: \"06eeb16c-c683-4bfe-b243-df34da90042b\") " pod="openshift-monitoring/telemeter-client-647df5cfcf-7dtwq" Mar 13 12:56:42.889851 master-0 kubenswrapper[28149]: I0313 12:56:42.889819 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-telemeter-client\" (UniqueName: \"kubernetes.io/secret/06eeb16c-c683-4bfe-b243-df34da90042b-secret-telemeter-client\") pod \"telemeter-client-647df5cfcf-7dtwq\" (UID: \"06eeb16c-c683-4bfe-b243-df34da90042b\") " pod="openshift-monitoring/telemeter-client-647df5cfcf-7dtwq" Mar 13 12:56:42.903810 master-0 kubenswrapper[28149]: I0313 12:56:42.903747 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-98lsk\" (UniqueName: \"kubernetes.io/projected/06eeb16c-c683-4bfe-b243-df34da90042b-kube-api-access-98lsk\") pod \"telemeter-client-647df5cfcf-7dtwq\" (UID: \"06eeb16c-c683-4bfe-b243-df34da90042b\") " pod="openshift-monitoring/telemeter-client-647df5cfcf-7dtwq" Mar 13 12:56:42.977944 master-0 kubenswrapper[28149]: I0313 12:56:42.977901 28149 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-kube-rbac-proxy-config" Mar 13 12:56:43.032112 
master-0 kubenswrapper[28149]: I0313 12:56:43.032070 28149 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Mar 13 12:56:43.078005 master-0 kubenswrapper[28149]: I0313 12:56:43.077847 28149 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Mar 13 12:56:43.103413 master-0 kubenswrapper[28149]: I0313 12:56:43.103357 28149 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-insights"/"trusted-ca-bundle" Mar 13 12:56:43.267764 master-0 kubenswrapper[28149]: I0313 12:56:43.267707 28149 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Mar 13 12:56:43.294077 master-0 kubenswrapper[28149]: I0313 12:56:43.294032 28149 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"metrics-server-dockercfg-lt6vb" Mar 13 12:56:43.300555 master-0 kubenswrapper[28149]: I0313 12:56:43.300411 28149 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Mar 13 12:56:43.358675 master-0 kubenswrapper[28149]: I0313 12:56:43.358638 28149 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-insights"/"operator-dockercfg-tw2tq" Mar 13 12:56:43.398103 master-0 kubenswrapper[28149]: I0313 12:56:43.398012 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemeter-client-tls\" (UniqueName: \"kubernetes.io/secret/06eeb16c-c683-4bfe-b243-df34da90042b-telemeter-client-tls\") pod \"telemeter-client-647df5cfcf-7dtwq\" (UID: \"06eeb16c-c683-4bfe-b243-df34da90042b\") " pod="openshift-monitoring/telemeter-client-647df5cfcf-7dtwq" Mar 13 12:56:43.398230 master-0 kubenswrapper[28149]: E0313 12:56:43.398199 28149 secret.go:189] Couldn't get secret openshift-monitoring/telemeter-client-tls: secret "telemeter-client-tls" not found Mar 13 
12:56:43.398293 master-0 kubenswrapper[28149]: E0313 12:56:43.398264 28149 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/06eeb16c-c683-4bfe-b243-df34da90042b-telemeter-client-tls podName:06eeb16c-c683-4bfe-b243-df34da90042b nodeName:}" failed. No retries permitted until 2026-03-13 12:56:44.398246683 +0000 UTC m=+178.051711852 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "telemeter-client-tls" (UniqueName: "kubernetes.io/secret/06eeb16c-c683-4bfe-b243-df34da90042b-telemeter-client-tls") pod "telemeter-client-647df5cfcf-7dtwq" (UID: "06eeb16c-c683-4bfe-b243-df34da90042b") : secret "telemeter-client-tls" not found Mar 13 12:56:43.609213 master-0 kubenswrapper[28149]: I0313 12:56:43.609069 28149 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-node-tuning-operator"/"performance-addon-operator-webhook-cert" Mar 13 12:56:43.609947 master-0 kubenswrapper[28149]: I0313 12:56:43.609919 28149 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Mar 13 12:56:43.610229 master-0 kubenswrapper[28149]: I0313 12:56:43.610182 28149 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"cluster-autoscaler-operator-dockercfg-4jl9c" Mar 13 12:56:43.706551 master-0 kubenswrapper[28149]: I0313 12:56:43.706488 28149 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Mar 13 12:56:43.752979 master-0 kubenswrapper[28149]: I0313 12:56:43.752923 28149 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-node-tuning-operator"/"kube-root-ca.crt" Mar 13 12:56:43.758623 master-0 kubenswrapper[28149]: I0313 12:56:43.758326 28149 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Mar 13 12:56:43.859946 master-0 kubenswrapper[28149]: I0313 12:56:43.859849 28149 reflector.go:368] 
Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Mar 13 12:56:43.901098 master-0 kubenswrapper[28149]: I0313 12:56:43.901055 28149 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Mar 13 12:56:44.019788 master-0 kubenswrapper[28149]: I0313 12:56:44.019727 28149 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Mar 13 12:56:44.058244 master-0 kubenswrapper[28149]: I0313 12:56:44.058162 28149 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-controller-manager-operator"/"kube-root-ca.crt" Mar 13 12:56:44.096380 master-0 kubenswrapper[28149]: I0313 12:56:44.096317 28149 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Mar 13 12:56:44.215747 master-0 kubenswrapper[28149]: I0313 12:56:44.215623 28149 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-storage-operator"/"cluster-storage-operator-serving-cert" Mar 13 12:56:44.217153 master-0 kubenswrapper[28149]: I0313 12:56:44.217113 28149 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Mar 13 12:56:44.245634 master-0 kubenswrapper[28149]: I0313 12:56:44.245583 28149 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-insights"/"openshift-insights-serving-cert" Mar 13 12:56:44.301778 master-0 kubenswrapper[28149]: I0313 12:56:44.301547 28149 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"cluster-baremetal-webhook-server-cert" Mar 13 12:56:44.332564 master-0 kubenswrapper[28149]: I0313 12:56:44.332514 28149 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"node-exporter-tls" Mar 13 12:56:44.347442 master-0 kubenswrapper[28149]: I0313 
12:56:44.346248 28149 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Mar 13 12:56:44.373104 master-0 kubenswrapper[28149]: E0313 12:56:44.373055 28149 log.go:32] "RunPodSandbox from runtime service failed" err=< Mar 13 12:56:44.373104 master-0 kubenswrapper[28149]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_thanos-querier-7bbcc57d7b-tv2k7_openshift-monitoring_fff0844d-5dfa-4c93-bc4d-f01a6f356afe_0(831c0155d3e341bebb2260fddab2de6e72e8af42dca53a3c3c0ee202b7aaa929): error adding pod openshift-monitoring_thanos-querier-7bbcc57d7b-tv2k7 to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"831c0155d3e341bebb2260fddab2de6e72e8af42dca53a3c3c0ee202b7aaa929" Netns:"/var/run/netns/53617744-69fd-48eb-a572-eea669d6a5a3" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-monitoring;K8S_POD_NAME=thanos-querier-7bbcc57d7b-tv2k7;K8S_POD_INFRA_CONTAINER_ID=831c0155d3e341bebb2260fddab2de6e72e8af42dca53a3c3c0ee202b7aaa929;K8S_POD_UID=fff0844d-5dfa-4c93-bc4d-f01a6f356afe" Path:"" ERRORED: error configuring pod [openshift-monitoring/thanos-querier-7bbcc57d7b-tv2k7] networking: Multus: [openshift-monitoring/thanos-querier-7bbcc57d7b-tv2k7/fff0844d-5dfa-4c93-bc4d-f01a6f356afe]: error setting the networks status, pod was already deleted: SetPodNetworkStatusAnnotation: failed to query the pod thanos-querier-7bbcc57d7b-tv2k7 in out of cluster comm: pod "thanos-querier-7bbcc57d7b-tv2k7" not found Mar 13 12:56:44.373104 master-0 kubenswrapper[28149]: ': StdinData: 
{"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Mar 13 12:56:44.373104 master-0 kubenswrapper[28149]: > Mar 13 12:56:44.373382 master-0 kubenswrapper[28149]: E0313 12:56:44.373190 28149 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err=< Mar 13 12:56:44.373382 master-0 kubenswrapper[28149]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_thanos-querier-7bbcc57d7b-tv2k7_openshift-monitoring_fff0844d-5dfa-4c93-bc4d-f01a6f356afe_0(831c0155d3e341bebb2260fddab2de6e72e8af42dca53a3c3c0ee202b7aaa929): error adding pod openshift-monitoring_thanos-querier-7bbcc57d7b-tv2k7 to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"831c0155d3e341bebb2260fddab2de6e72e8af42dca53a3c3c0ee202b7aaa929" Netns:"/var/run/netns/53617744-69fd-48eb-a572-eea669d6a5a3" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-monitoring;K8S_POD_NAME=thanos-querier-7bbcc57d7b-tv2k7;K8S_POD_INFRA_CONTAINER_ID=831c0155d3e341bebb2260fddab2de6e72e8af42dca53a3c3c0ee202b7aaa929;K8S_POD_UID=fff0844d-5dfa-4c93-bc4d-f01a6f356afe" Path:"" ERRORED: error configuring pod [openshift-monitoring/thanos-querier-7bbcc57d7b-tv2k7] networking: Multus: [openshift-monitoring/thanos-querier-7bbcc57d7b-tv2k7/fff0844d-5dfa-4c93-bc4d-f01a6f356afe]: error setting the networks status, pod was already deleted: SetPodNetworkStatusAnnotation: failed to query the pod thanos-querier-7bbcc57d7b-tv2k7 in out of cluster comm: pod "thanos-querier-7bbcc57d7b-tv2k7" not found Mar 13 12:56:44.373382 master-0 kubenswrapper[28149]: ': StdinData: 
{"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Mar 13 12:56:44.373382 master-0 kubenswrapper[28149]: > pod="openshift-monitoring/thanos-querier-7bbcc57d7b-tv2k7" Mar 13 12:56:44.373382 master-0 kubenswrapper[28149]: E0313 12:56:44.373221 28149 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err=< Mar 13 12:56:44.373382 master-0 kubenswrapper[28149]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_thanos-querier-7bbcc57d7b-tv2k7_openshift-monitoring_fff0844d-5dfa-4c93-bc4d-f01a6f356afe_0(831c0155d3e341bebb2260fddab2de6e72e8af42dca53a3c3c0ee202b7aaa929): error adding pod openshift-monitoring_thanos-querier-7bbcc57d7b-tv2k7 to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"831c0155d3e341bebb2260fddab2de6e72e8af42dca53a3c3c0ee202b7aaa929" Netns:"/var/run/netns/53617744-69fd-48eb-a572-eea669d6a5a3" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-monitoring;K8S_POD_NAME=thanos-querier-7bbcc57d7b-tv2k7;K8S_POD_INFRA_CONTAINER_ID=831c0155d3e341bebb2260fddab2de6e72e8af42dca53a3c3c0ee202b7aaa929;K8S_POD_UID=fff0844d-5dfa-4c93-bc4d-f01a6f356afe" Path:"" ERRORED: error configuring pod [openshift-monitoring/thanos-querier-7bbcc57d7b-tv2k7] networking: Multus: [openshift-monitoring/thanos-querier-7bbcc57d7b-tv2k7/fff0844d-5dfa-4c93-bc4d-f01a6f356afe]: error setting the networks status, pod was already deleted: SetPodNetworkStatusAnnotation: failed to query the pod thanos-querier-7bbcc57d7b-tv2k7 in out of cluster comm: pod "thanos-querier-7bbcc57d7b-tv2k7" not found Mar 13 12:56:44.373382 
master-0 kubenswrapper[28149]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Mar 13 12:56:44.373382 master-0 kubenswrapper[28149]: > pod="openshift-monitoring/thanos-querier-7bbcc57d7b-tv2k7" Mar 13 12:56:44.373382 master-0 kubenswrapper[28149]: E0313 12:56:44.373310 28149 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"thanos-querier-7bbcc57d7b-tv2k7_openshift-monitoring(fff0844d-5dfa-4c93-bc4d-f01a6f356afe)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"thanos-querier-7bbcc57d7b-tv2k7_openshift-monitoring(fff0844d-5dfa-4c93-bc4d-f01a6f356afe)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_thanos-querier-7bbcc57d7b-tv2k7_openshift-monitoring_fff0844d-5dfa-4c93-bc4d-f01a6f356afe_0(831c0155d3e341bebb2260fddab2de6e72e8af42dca53a3c3c0ee202b7aaa929): error adding pod openshift-monitoring_thanos-querier-7bbcc57d7b-tv2k7 to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus-shim\\\" name=\\\"multus-cni-network\\\" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:\\\"831c0155d3e341bebb2260fddab2de6e72e8af42dca53a3c3c0ee202b7aaa929\\\" Netns:\\\"/var/run/netns/53617744-69fd-48eb-a572-eea669d6a5a3\\\" IfName:\\\"eth0\\\" Args:\\\"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-monitoring;K8S_POD_NAME=thanos-querier-7bbcc57d7b-tv2k7;K8S_POD_INFRA_CONTAINER_ID=831c0155d3e341bebb2260fddab2de6e72e8af42dca53a3c3c0ee202b7aaa929;K8S_POD_UID=fff0844d-5dfa-4c93-bc4d-f01a6f356afe\\\" Path:\\\"\\\" ERRORED: error configuring pod [openshift-monitoring/thanos-querier-7bbcc57d7b-tv2k7] networking: Multus: 
[openshift-monitoring/thanos-querier-7bbcc57d7b-tv2k7/fff0844d-5dfa-4c93-bc4d-f01a6f356afe]: error setting the networks status, pod was already deleted: SetPodNetworkStatusAnnotation: failed to query the pod thanos-querier-7bbcc57d7b-tv2k7 in out of cluster comm: pod \\\"thanos-querier-7bbcc57d7b-tv2k7\\\" not found\\n': StdinData: {\\\"binDir\\\":\\\"/var/lib/cni/bin\\\",\\\"clusterNetwork\\\":\\\"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf\\\",\\\"cniVersion\\\":\\\"0.3.1\\\",\\\"daemonSocketDir\\\":\\\"/run/multus/socket\\\",\\\"globalNamespaces\\\":\\\"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv\\\",\\\"logLevel\\\":\\\"verbose\\\",\\\"logToStderr\\\":true,\\\"name\\\":\\\"multus-cni-network\\\",\\\"namespaceIsolation\\\":true,\\\"type\\\":\\\"multus-shim\\\"}\"" pod="openshift-monitoring/thanos-querier-7bbcc57d7b-tv2k7" podUID="fff0844d-5dfa-4c93-bc4d-f01a6f356afe" Mar 13 12:56:44.399078 master-0 kubenswrapper[28149]: I0313 12:56:44.399021 28149 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Mar 13 12:56:44.413636 master-0 kubenswrapper[28149]: I0313 12:56:44.413573 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemeter-client-tls\" (UniqueName: \"kubernetes.io/secret/06eeb16c-c683-4bfe-b243-df34da90042b-telemeter-client-tls\") pod \"telemeter-client-647df5cfcf-7dtwq\" (UID: \"06eeb16c-c683-4bfe-b243-df34da90042b\") " pod="openshift-monitoring/telemeter-client-647df5cfcf-7dtwq" Mar 13 12:56:44.414000 master-0 kubenswrapper[28149]: E0313 12:56:44.413964 28149 secret.go:189] Couldn't get secret openshift-monitoring/telemeter-client-tls: secret "telemeter-client-tls" not found Mar 13 12:56:44.414080 master-0 kubenswrapper[28149]: E0313 12:56:44.414058 28149 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/06eeb16c-c683-4bfe-b243-df34da90042b-telemeter-client-tls 
podName:06eeb16c-c683-4bfe-b243-df34da90042b nodeName:}" failed. No retries permitted until 2026-03-13 12:56:46.414033692 +0000 UTC m=+180.067498881 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "telemeter-client-tls" (UniqueName: "kubernetes.io/secret/06eeb16c-c683-4bfe-b243-df34da90042b-telemeter-client-tls") pod "telemeter-client-647df5cfcf-7dtwq" (UID: "06eeb16c-c683-4bfe-b243-df34da90042b") : secret "telemeter-client-tls" not found Mar 13 12:56:44.461290 master-0 kubenswrapper[28149]: I0313 12:56:44.460520 28149 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Mar 13 12:56:44.490395 master-0 kubenswrapper[28149]: I0313 12:56:44.490320 28149 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Mar 13 12:56:44.514741 master-0 kubenswrapper[28149]: I0313 12:56:44.514685 28149 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-catalogd"/"kube-root-ca.crt" Mar 13 12:56:44.524733 master-0 kubenswrapper[28149]: I0313 12:56:44.524713 28149 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Mar 13 12:56:44.617206 master-0 kubenswrapper[28149]: I0313 12:56:44.617049 28149 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Mar 13 12:56:44.635547 master-0 kubenswrapper[28149]: I0313 12:56:44.626682 28149 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/thanos-querier-7bbcc57d7b-tv2k7" Mar 13 12:56:44.635547 master-0 kubenswrapper[28149]: I0313 12:56:44.627195 28149 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/thanos-querier-7bbcc57d7b-tv2k7" Mar 13 12:56:44.635547 master-0 kubenswrapper[28149]: I0313 12:56:44.627919 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-m4wkf" event={"ID":"b6227f22-a46e-43f2-94a1-1920a0391302","Type":"ContainerStarted","Data":"55b746ae88b09d2010251b09e187b1acd55b558d92138af81692fc839695a6a0"} Mar 13 12:56:44.654304 master-0 kubenswrapper[28149]: I0313 12:56:44.654205 28149 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/node-ca-m4wkf" podStartSLOduration=5.355607816 podStartE2EDuration="7.654181858s" podCreationTimestamp="2026-03-13 12:56:37 +0000 UTC" firstStartedPulling="2026-03-13 12:56:40.830434868 +0000 UTC m=+174.483900017" lastFinishedPulling="2026-03-13 12:56:43.12900891 +0000 UTC m=+176.782474059" observedRunningTime="2026-03-13 12:56:44.651732483 +0000 UTC m=+178.305197642" watchObservedRunningTime="2026-03-13 12:56:44.654181858 +0000 UTC m=+178.307647047" Mar 13 12:56:44.680261 master-0 kubenswrapper[28149]: I0313 12:56:44.679930 28149 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cloud-credential-operator"/"cloud-credential-operator-dockercfg-fs9mz" Mar 13 12:56:44.695467 master-0 kubenswrapper[28149]: I0313 12:56:44.695407 28149 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-b9zx6" Mar 13 12:56:44.739578 master-0 kubenswrapper[28149]: I0313 12:56:44.739513 28149 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"kube-state-metrics-kube-rbac-proxy-config" Mar 13 12:56:44.868519 master-0 kubenswrapper[28149]: I0313 12:56:44.868405 28149 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Mar 13 12:56:44.879565 master-0 kubenswrapper[28149]: I0313 12:56:44.879524 28149 
reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Mar 13 12:56:44.927070 master-0 kubenswrapper[28149]: I0313 12:56:44.927013 28149 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Mar 13 12:56:44.940565 master-0 kubenswrapper[28149]: I0313 12:56:44.940491 28149 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Mar 13 12:56:44.941052 master-0 kubenswrapper[28149]: I0313 12:56:44.941002 28149 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Mar 13 12:56:44.947882 master-0 kubenswrapper[28149]: I0313 12:56:44.947845 28149 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Mar 13 12:56:44.972208 master-0 kubenswrapper[28149]: I0313 12:56:44.963652 28149 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Mar 13 12:56:44.991471 master-0 kubenswrapper[28149]: I0313 12:56:44.991416 28149 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Mar 13 12:56:45.131946 master-0 kubenswrapper[28149]: I0313 12:56:45.131817 28149 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Mar 13 12:56:45.464289 master-0 kubenswrapper[28149]: I0313 12:56:45.464066 28149 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Mar 13 12:56:45.640864 master-0 kubenswrapper[28149]: I0313 12:56:45.640700 28149 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Mar 13 12:56:45.671322 master-0 kubenswrapper[28149]: I0313 12:56:45.670881 
28149 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls"
Mar 13 12:56:45.706881 master-0 kubenswrapper[28149]: I0313 12:56:45.706837 28149 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/thanos-querier-7bbcc57d7b-tv2k7"]
Mar 13 12:56:45.748048 master-0 kubenswrapper[28149]: I0313 12:56:45.748006 28149 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/metrics-server-84b66c585b-f7g5r"]
Mar 13 12:56:45.761257 master-0 kubenswrapper[28149]: W0313 12:56:45.760980 28149 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda529d528_3bd9_4512_9ae8_8284329c9c4c.slice/crio-9c8707b04f224989255903164255e86f321bab58e4199f480bced5aba684f5a7 WatchSource:0}: Error finding container 9c8707b04f224989255903164255e86f321bab58e4199f480bced5aba684f5a7: Status 404 returned error can't find the container with id 9c8707b04f224989255903164255e86f321bab58e4199f480bced5aba684f5a7
Mar 13 12:56:45.967284 master-0 kubenswrapper[28149]: I0313 12:56:45.967237 28149 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt"
Mar 13 12:56:45.991027 master-0 kubenswrapper[28149]: I0313 12:56:45.990998 28149 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-grrfm"
Mar 13 12:56:46.038920 master-0 kubenswrapper[28149]: I0313 12:56:46.038784 28149 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca"
Mar 13 12:56:46.291632 master-0 kubenswrapper[28149]: I0313 12:56:46.291602 28149 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert"
Mar 13 12:56:46.368821 master-0 kubenswrapper[28149]: I0313 12:56:46.368790 28149 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-catalogd"/"openshift-service-ca.crt"
Mar 13 12:56:46.416747 master-0 kubenswrapper[28149]: I0313 12:56:46.416710 28149 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca"
Mar 13 12:56:46.446237 master-0 kubenswrapper[28149]: I0313 12:56:46.446198 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemeter-client-tls\" (UniqueName: \"kubernetes.io/secret/06eeb16c-c683-4bfe-b243-df34da90042b-telemeter-client-tls\") pod \"telemeter-client-647df5cfcf-7dtwq\" (UID: \"06eeb16c-c683-4bfe-b243-df34da90042b\") " pod="openshift-monitoring/telemeter-client-647df5cfcf-7dtwq"
Mar 13 12:56:46.446587 master-0 kubenswrapper[28149]: E0313 12:56:46.446571 28149 secret.go:189] Couldn't get secret openshift-monitoring/telemeter-client-tls: secret "telemeter-client-tls" not found
Mar 13 12:56:46.446703 master-0 kubenswrapper[28149]: E0313 12:56:46.446693 28149 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/06eeb16c-c683-4bfe-b243-df34da90042b-telemeter-client-tls podName:06eeb16c-c683-4bfe-b243-df34da90042b nodeName:}" failed. No retries permitted until 2026-03-13 12:56:50.446673969 +0000 UTC m=+184.100139128 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "telemeter-client-tls" (UniqueName: "kubernetes.io/secret/06eeb16c-c683-4bfe-b243-df34da90042b-telemeter-client-tls") pod "telemeter-client-647df5cfcf-7dtwq" (UID: "06eeb16c-c683-4bfe-b243-df34da90042b") : secret "telemeter-client-tls" not found
Mar 13 12:56:46.509430 master-0 kubenswrapper[28149]: I0313 12:56:46.509395 28149 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt"
Mar 13 12:56:46.591878 master-0 kubenswrapper[28149]: I0313 12:56:46.591767 28149 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-qvxhm"
Mar 13 12:56:46.649155 master-0 kubenswrapper[28149]: I0313 12:56:46.649089 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-7bbcc57d7b-tv2k7" event={"ID":"fff0844d-5dfa-4c93-bc4d-f01a6f356afe","Type":"ContainerStarted","Data":"9aff6d304a7e571cfffca4fbfa77d286becc4c450d84526e75c3fb4177bf0fff"}
Mar 13 12:56:46.656971 master-0 kubenswrapper[28149]: I0313 12:56:46.656538 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/metrics-server-84b66c585b-f7g5r" event={"ID":"a529d528-3bd9-4512-9ae8-8284329c9c4c","Type":"ContainerStarted","Data":"b3ebc1c98831318fb9a32cbb04fa75518544c6a497978d67636ddabd4743a436"}
Mar 13 12:56:46.656971 master-0 kubenswrapper[28149]: I0313 12:56:46.656619 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/metrics-server-84b66c585b-f7g5r" event={"ID":"a529d528-3bd9-4512-9ae8-8284329c9c4c","Type":"ContainerStarted","Data":"9c8707b04f224989255903164255e86f321bab58e4199f480bced5aba684f5a7"}
Mar 13 12:56:46.658898 master-0 kubenswrapper[28149]: I0313 12:56:46.658533 28149 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"openshift-state-metrics-tls"
Mar 13 12:56:46.680695 master-0 kubenswrapper[28149]: I0313 12:56:46.679393 28149 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/metrics-server-84b66c585b-f7g5r" podStartSLOduration=5.679372476 podStartE2EDuration="5.679372476s" podCreationTimestamp="2026-03-13 12:56:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 12:56:46.675164193 +0000 UTC m=+180.328629362" watchObservedRunningTime="2026-03-13 12:56:46.679372476 +0000 UTC m=+180.332837645"
Mar 13 12:56:46.866867 master-0 kubenswrapper[28149]: I0313 12:56:46.866759 28149 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-storage-operator"/"kube-root-ca.crt"
Mar 13 12:56:46.936311 master-0 kubenswrapper[28149]: I0313 12:56:46.936251 28149 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1"
Mar 13 12:56:46.990195 master-0 kubenswrapper[28149]: I0313 12:56:46.990131 28149 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-76p65"
Mar 13 12:56:47.027962 master-0 kubenswrapper[28149]: I0313 12:56:47.027923 28149 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config"
Mar 13 12:56:47.036623 master-0 kubenswrapper[28149]: I0313 12:56:47.036580 28149 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca"
Mar 13 12:56:47.171882 master-0 kubenswrapper[28149]: I0313 12:56:47.171781 28149 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1"
Mar 13 12:56:47.309854 master-0 kubenswrapper[28149]: I0313 12:56:47.309760 28149 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls"
Mar 13 12:56:47.454534 master-0 kubenswrapper[28149]: I0313 12:56:47.454365 28149 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-admission-webhook-tls"
Mar 13 12:56:47.588771 master-0 kubenswrapper[28149]: I0313 12:56:47.588682 28149 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt"
Mar 13 12:56:47.627463 master-0 kubenswrapper[28149]: I0313 12:56:47.627418 28149 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-insights"/"openshift-service-ca.crt"
Mar 13 12:56:47.666012 master-0 kubenswrapper[28149]: I0313 12:56:47.665974 28149 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt"
Mar 13 12:56:47.768059 master-0 kubenswrapper[28149]: I0313 12:56:47.768031 28149 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert"
Mar 13 12:56:47.790166 master-0 kubenswrapper[28149]: I0313 12:56:47.790072 28149 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt"
Mar 13 12:56:47.861943 master-0 kubenswrapper[28149]: I0313 12:56:47.861907 28149 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"metrics-server-audit-profiles"
Mar 13 12:56:47.863357 master-0 kubenswrapper[28149]: I0313 12:56:47.863332 28149 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default"
Mar 13 12:56:47.923437 master-0 kubenswrapper[28149]: I0313 12:56:47.923331 28149 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy"
Mar 13 12:56:47.925037 master-0 kubenswrapper[28149]: I0313 12:56:47.925001 28149 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-credential-operator"/"cco-trusted-ca"
Mar 13 12:56:48.098183 master-0 kubenswrapper[28149]: I0313 12:56:48.098029 28149 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt"
Mar 13 12:56:48.180243 master-0 kubenswrapper[28149]: I0313 12:56:48.180155 28149 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls"
Mar 13 12:56:48.268839 master-0 kubenswrapper[28149]: I0313 12:56:48.268783 28149 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-master-0_b275ed7e9ce09d69a66613ca3ae3d89e/startup-monitor/0.log"
Mar 13 12:56:48.269033 master-0 kubenswrapper[28149]: I0313 12:56:48.268949 28149 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Mar 13 12:56:48.433226 master-0 kubenswrapper[28149]: I0313 12:56:48.433178 28149 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"node-exporter-dockercfg-7ntw6"
Mar 13 12:56:48.473243 master-0 kubenswrapper[28149]: I0313 12:56:48.473089 28149 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/b275ed7e9ce09d69a66613ca3ae3d89e-pod-resource-dir\") pod \"b275ed7e9ce09d69a66613ca3ae3d89e\" (UID: \"b275ed7e9ce09d69a66613ca3ae3d89e\") "
Mar 13 12:56:48.473243 master-0 kubenswrapper[28149]: I0313 12:56:48.473177 28149 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/b275ed7e9ce09d69a66613ca3ae3d89e-manifests\") pod \"b275ed7e9ce09d69a66613ca3ae3d89e\" (UID: \"b275ed7e9ce09d69a66613ca3ae3d89e\") "
Mar 13 12:56:48.473863 master-0 kubenswrapper[28149]: I0313 12:56:48.473300 28149 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/b275ed7e9ce09d69a66613ca3ae3d89e-var-log\") pod \"b275ed7e9ce09d69a66613ca3ae3d89e\" (UID: \"b275ed7e9ce09d69a66613ca3ae3d89e\") "
Mar 13 12:56:48.473863 master-0 kubenswrapper[28149]: I0313 12:56:48.473344 28149 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/b275ed7e9ce09d69a66613ca3ae3d89e-var-lock\") pod \"b275ed7e9ce09d69a66613ca3ae3d89e\" (UID: \"b275ed7e9ce09d69a66613ca3ae3d89e\") "
Mar 13 12:56:48.473863 master-0 kubenswrapper[28149]: I0313 12:56:48.473386 28149 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/b275ed7e9ce09d69a66613ca3ae3d89e-resource-dir\") pod \"b275ed7e9ce09d69a66613ca3ae3d89e\" (UID: \"b275ed7e9ce09d69a66613ca3ae3d89e\") "
Mar 13 12:56:48.473863 master-0 kubenswrapper[28149]: I0313 12:56:48.473581 28149 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b275ed7e9ce09d69a66613ca3ae3d89e-var-log" (OuterVolumeSpecName: "var-log") pod "b275ed7e9ce09d69a66613ca3ae3d89e" (UID: "b275ed7e9ce09d69a66613ca3ae3d89e"). InnerVolumeSpecName "var-log". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 13 12:56:48.473863 master-0 kubenswrapper[28149]: I0313 12:56:48.473862 28149 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b275ed7e9ce09d69a66613ca3ae3d89e-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "b275ed7e9ce09d69a66613ca3ae3d89e" (UID: "b275ed7e9ce09d69a66613ca3ae3d89e"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 13 12:56:48.475067 master-0 kubenswrapper[28149]: I0313 12:56:48.473885 28149 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b275ed7e9ce09d69a66613ca3ae3d89e-var-lock" (OuterVolumeSpecName: "var-lock") pod "b275ed7e9ce09d69a66613ca3ae3d89e" (UID: "b275ed7e9ce09d69a66613ca3ae3d89e"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 13 12:56:48.475067 master-0 kubenswrapper[28149]: I0313 12:56:48.473905 28149 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b275ed7e9ce09d69a66613ca3ae3d89e-manifests" (OuterVolumeSpecName: "manifests") pod "b275ed7e9ce09d69a66613ca3ae3d89e" (UID: "b275ed7e9ce09d69a66613ca3ae3d89e"). InnerVolumeSpecName "manifests". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 13 12:56:48.475067 master-0 kubenswrapper[28149]: I0313 12:56:48.474006 28149 reconciler_common.go:293] "Volume detached for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/b275ed7e9ce09d69a66613ca3ae3d89e-var-log\") on node \"master-0\" DevicePath \"\""
Mar 13 12:56:48.475067 master-0 kubenswrapper[28149]: I0313 12:56:48.474021 28149 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/b275ed7e9ce09d69a66613ca3ae3d89e-var-lock\") on node \"master-0\" DevicePath \"\""
Mar 13 12:56:48.475067 master-0 kubenswrapper[28149]: I0313 12:56:48.474031 28149 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/b275ed7e9ce09d69a66613ca3ae3d89e-resource-dir\") on node \"master-0\" DevicePath \"\""
Mar 13 12:56:48.481067 master-0 kubenswrapper[28149]: I0313 12:56:48.481002 28149 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b275ed7e9ce09d69a66613ca3ae3d89e-pod-resource-dir" (OuterVolumeSpecName: "pod-resource-dir") pod "b275ed7e9ce09d69a66613ca3ae3d89e" (UID: "b275ed7e9ce09d69a66613ca3ae3d89e"). InnerVolumeSpecName "pod-resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 13 12:56:48.493241 master-0 kubenswrapper[28149]: I0313 12:56:48.493154 28149 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy"
Mar 13 12:56:48.575429 master-0 kubenswrapper[28149]: I0313 12:56:48.575375 28149 reconciler_common.go:293] "Volume detached for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/b275ed7e9ce09d69a66613ca3ae3d89e-pod-resource-dir\") on node \"master-0\" DevicePath \"\""
Mar 13 12:56:48.575429 master-0 kubenswrapper[28149]: I0313 12:56:48.575423 28149 reconciler_common.go:293] "Volume detached for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/b275ed7e9ce09d69a66613ca3ae3d89e-manifests\") on node \"master-0\" DevicePath \"\""
Mar 13 12:56:48.677734 master-0 kubenswrapper[28149]: I0313 12:56:48.677682 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-7bbcc57d7b-tv2k7" event={"ID":"fff0844d-5dfa-4c93-bc4d-f01a6f356afe","Type":"ContainerStarted","Data":"d16437df54d609d4654cafbbcfd7fcfd290efe4b0103e73d0601e9852ce86ea6"}
Mar 13 12:56:48.678682 master-0 kubenswrapper[28149]: I0313 12:56:48.678650 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-7bbcc57d7b-tv2k7" event={"ID":"fff0844d-5dfa-4c93-bc4d-f01a6f356afe","Type":"ContainerStarted","Data":"f1667fa07b0a56794ed129714927f882d4f2e41a787e7393cb441a3f3320af7f"}
Mar 13 12:56:48.678755 master-0 kubenswrapper[28149]: I0313 12:56:48.678689 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-7bbcc57d7b-tv2k7" event={"ID":"fff0844d-5dfa-4c93-bc4d-f01a6f356afe","Type":"ContainerStarted","Data":"03cb6840ec6b6feb84659429123dc31b42337eadf0fc766fc16558c5ca330170"}
Mar 13 12:56:48.682073 master-0 kubenswrapper[28149]: I0313 12:56:48.681647 28149 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-master-0_b275ed7e9ce09d69a66613ca3ae3d89e/startup-monitor/0.log"
Mar 13 12:56:48.682073 master-0 kubenswrapper[28149]: I0313 12:56:48.681743 28149 generic.go:334] "Generic (PLEG): container finished" podID="b275ed7e9ce09d69a66613ca3ae3d89e" containerID="3308b8ce530f3a4e5f17e073b604a57926c719321024108b3eb887a2ba16cf6e" exitCode=137
Mar 13 12:56:48.682073 master-0 kubenswrapper[28149]: I0313 12:56:48.681804 28149 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Mar 13 12:56:48.682382 master-0 kubenswrapper[28149]: I0313 12:56:48.681810 28149 scope.go:117] "RemoveContainer" containerID="3308b8ce530f3a4e5f17e073b604a57926c719321024108b3eb887a2ba16cf6e"
Mar 13 12:56:48.696625 master-0 kubenswrapper[28149]: I0313 12:56:48.696586 28149 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b275ed7e9ce09d69a66613ca3ae3d89e" path="/var/lib/kubelet/pods/b275ed7e9ce09d69a66613ca3ae3d89e/volumes"
Mar 13 12:56:48.696894 master-0 kubenswrapper[28149]: I0313 12:56:48.696878 28149 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" podUID=""
Mar 13 12:56:48.700677 master-0 kubenswrapper[28149]: I0313 12:56:48.700636 28149 scope.go:117] "RemoveContainer" containerID="3308b8ce530f3a4e5f17e073b604a57926c719321024108b3eb887a2ba16cf6e"
Mar 13 12:56:48.701059 master-0 kubenswrapper[28149]: E0313 12:56:48.701026 28149 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3308b8ce530f3a4e5f17e073b604a57926c719321024108b3eb887a2ba16cf6e\": container with ID starting with 3308b8ce530f3a4e5f17e073b604a57926c719321024108b3eb887a2ba16cf6e not found: ID does not exist" containerID="3308b8ce530f3a4e5f17e073b604a57926c719321024108b3eb887a2ba16cf6e"
Mar 13 12:56:48.701103 master-0 kubenswrapper[28149]: I0313 12:56:48.701058 28149 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3308b8ce530f3a4e5f17e073b604a57926c719321024108b3eb887a2ba16cf6e"} err="failed to get container status \"3308b8ce530f3a4e5f17e073b604a57926c719321024108b3eb887a2ba16cf6e\": rpc error: code = NotFound desc = could not find container \"3308b8ce530f3a4e5f17e073b604a57926c719321024108b3eb887a2ba16cf6e\": container with ID starting with 3308b8ce530f3a4e5f17e073b604a57926c719321024108b3eb887a2ba16cf6e not found: ID does not exist"
Mar 13 12:56:48.717346 master-0 kubenswrapper[28149]: I0313 12:56:48.717301 28149 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"]
Mar 13 12:56:48.717346 master-0 kubenswrapper[28149]: I0313 12:56:48.717333 28149 kubelet.go:2649] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" mirrorPodUID="5356c3e3-6829-4105-b857-99548d0454d9"
Mar 13 12:56:48.722156 master-0 kubenswrapper[28149]: I0313 12:56:48.722106 28149 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"]
Mar 13 12:56:48.722237 master-0 kubenswrapper[28149]: I0313 12:56:48.722157 28149 kubelet.go:2673] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" mirrorPodUID="5356c3e3-6829-4105-b857-99548d0454d9"
Mar 13 12:56:48.827781 master-0 kubenswrapper[28149]: I0313 12:56:48.827689 28149 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy"
Mar 13 12:56:48.923890 master-0 kubenswrapper[28149]: I0313 12:56:48.923747 28149 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"openshift-state-metrics-dockercfg-g2ksc"
Mar 13 12:56:49.284749 master-0 kubenswrapper[28149]: I0313 12:56:49.284688 28149 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-controller"/"operator-controller-trusted-ca-bundle"
Mar 13 12:56:49.697646 master-0 kubenswrapper[28149]: I0313 12:56:49.697564 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-7bbcc57d7b-tv2k7" event={"ID":"fff0844d-5dfa-4c93-bc4d-f01a6f356afe","Type":"ContainerStarted","Data":"0f8164655aac5db5f9c104cf66f37c00f6303fee8faf8a06091dc25858f4b75d"}
Mar 13 12:56:50.502016 master-0 kubenswrapper[28149]: I0313 12:56:50.501878 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemeter-client-tls\" (UniqueName: \"kubernetes.io/secret/06eeb16c-c683-4bfe-b243-df34da90042b-telemeter-client-tls\") pod \"telemeter-client-647df5cfcf-7dtwq\" (UID: \"06eeb16c-c683-4bfe-b243-df34da90042b\") " pod="openshift-monitoring/telemeter-client-647df5cfcf-7dtwq"
Mar 13 12:56:50.502240 master-0 kubenswrapper[28149]: E0313 12:56:50.502111 28149 secret.go:189] Couldn't get secret openshift-monitoring/telemeter-client-tls: secret "telemeter-client-tls" not found
Mar 13 12:56:50.502310 master-0 kubenswrapper[28149]: E0313 12:56:50.502271 28149 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/06eeb16c-c683-4bfe-b243-df34da90042b-telemeter-client-tls podName:06eeb16c-c683-4bfe-b243-df34da90042b nodeName:}" failed. No retries permitted until 2026-03-13 12:56:58.502240333 +0000 UTC m=+192.155705532 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "telemeter-client-tls" (UniqueName: "kubernetes.io/secret/06eeb16c-c683-4bfe-b243-df34da90042b-telemeter-client-tls") pod "telemeter-client-647df5cfcf-7dtwq" (UID: "06eeb16c-c683-4bfe-b243-df34da90042b") : secret "telemeter-client-tls" not found
Mar 13 12:56:50.714237 master-0 kubenswrapper[28149]: I0313 12:56:50.714172 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-7bbcc57d7b-tv2k7" event={"ID":"fff0844d-5dfa-4c93-bc4d-f01a6f356afe","Type":"ContainerStarted","Data":"9cbf9b1381fe12e0f6e57249412a77e856b4acf4c86b3c228e745bcb53220f0b"}
Mar 13 12:56:50.714237 master-0 kubenswrapper[28149]: I0313 12:56:50.714229 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-7bbcc57d7b-tv2k7" event={"ID":"fff0844d-5dfa-4c93-bc4d-f01a6f356afe","Type":"ContainerStarted","Data":"d8c298558840c2d1a0394408ba32864e9fedc1548a6cd088497d8dae89b1d14b"}
Mar 13 12:56:50.714976 master-0 kubenswrapper[28149]: I0313 12:56:50.714514 28149 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-monitoring/thanos-querier-7bbcc57d7b-tv2k7"
Mar 13 12:56:50.754777 master-0 kubenswrapper[28149]: I0313 12:56:50.754620 28149 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/thanos-querier-7bbcc57d7b-tv2k7" podStartSLOduration=8.91721716 podStartE2EDuration="12.754595675s" podCreationTimestamp="2026-03-13 12:56:38 +0000 UTC" firstStartedPulling="2026-03-13 12:56:45.706491393 +0000 UTC m=+179.359956552" lastFinishedPulling="2026-03-13 12:56:49.543869908 +0000 UTC m=+183.197335067" observedRunningTime="2026-03-13 12:56:50.747841985 +0000 UTC m=+184.401307144" watchObservedRunningTime="2026-03-13 12:56:50.754595675 +0000 UTC m=+184.408060844"
Mar 13 12:56:55.819503 master-0 kubenswrapper[28149]: I0313 12:56:55.819419 28149 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-monitoring/thanos-querier-7bbcc57d7b-tv2k7"
Mar 13 12:56:58.311879 master-0 kubenswrapper[28149]: I0313 12:56:58.311803 28149 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console-operator/console-operator-6c7fb6b958-5lklz"]
Mar 13 12:56:58.312548 master-0 kubenswrapper[28149]: E0313 12:56:58.312198 28149 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b275ed7e9ce09d69a66613ca3ae3d89e" containerName="startup-monitor"
Mar 13 12:56:58.312548 master-0 kubenswrapper[28149]: I0313 12:56:58.312213 28149 state_mem.go:107] "Deleted CPUSet assignment" podUID="b275ed7e9ce09d69a66613ca3ae3d89e" containerName="startup-monitor"
Mar 13 12:56:58.312548 master-0 kubenswrapper[28149]: I0313 12:56:58.312408 28149 memory_manager.go:354] "RemoveStaleState removing state" podUID="b275ed7e9ce09d69a66613ca3ae3d89e" containerName="startup-monitor"
Mar 13 12:56:58.313030 master-0 kubenswrapper[28149]: I0313 12:56:58.312993 28149 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-6c7fb6b958-5lklz"
Mar 13 12:56:58.334121 master-0 kubenswrapper[28149]: I0313 12:56:58.333932 28149 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config"
Mar 13 12:56:58.348552 master-0 kubenswrapper[28149]: I0313 12:56:58.348475 28149 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt"
Mar 13 12:56:58.349371 master-0 kubenswrapper[28149]: I0313 12:56:58.349339 28149 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert"
Mar 13 12:56:58.356741 master-0 kubenswrapper[28149]: I0313 12:56:58.356702 28149 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt"
Mar 13 12:56:58.364691 master-0 kubenswrapper[28149]: I0313 12:56:58.364651 28149 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca"
Mar 13 12:56:58.371073 master-0 kubenswrapper[28149]: I0313 12:56:58.371033 28149 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-6c7fb6b958-5lklz"]
Mar 13 12:56:58.390096 master-0 kubenswrapper[28149]: I0313 12:56:58.390056 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tkwzx\" (UniqueName: \"kubernetes.io/projected/226f8faa-22b5-465f-892c-3a7541177046-kube-api-access-tkwzx\") pod \"console-operator-6c7fb6b958-5lklz\" (UID: \"226f8faa-22b5-465f-892c-3a7541177046\") " pod="openshift-console-operator/console-operator-6c7fb6b958-5lklz"
Mar 13 12:56:58.390381 master-0 kubenswrapper[28149]: I0313 12:56:58.390365 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/226f8faa-22b5-465f-892c-3a7541177046-serving-cert\") pod \"console-operator-6c7fb6b958-5lklz\" (UID: \"226f8faa-22b5-465f-892c-3a7541177046\") " pod="openshift-console-operator/console-operator-6c7fb6b958-5lklz"
Mar 13 12:56:58.390506 master-0 kubenswrapper[28149]: I0313 12:56:58.390491 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/226f8faa-22b5-465f-892c-3a7541177046-config\") pod \"console-operator-6c7fb6b958-5lklz\" (UID: \"226f8faa-22b5-465f-892c-3a7541177046\") " pod="openshift-console-operator/console-operator-6c7fb6b958-5lklz"
Mar 13 12:56:58.390587 master-0 kubenswrapper[28149]: I0313 12:56:58.390567 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/226f8faa-22b5-465f-892c-3a7541177046-trusted-ca\") pod \"console-operator-6c7fb6b958-5lklz\" (UID: \"226f8faa-22b5-465f-892c-3a7541177046\") " pod="openshift-console-operator/console-operator-6c7fb6b958-5lklz"
Mar 13 12:56:58.491504 master-0 kubenswrapper[28149]: I0313 12:56:58.491461 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/226f8faa-22b5-465f-892c-3a7541177046-config\") pod \"console-operator-6c7fb6b958-5lklz\" (UID: \"226f8faa-22b5-465f-892c-3a7541177046\") " pod="openshift-console-operator/console-operator-6c7fb6b958-5lklz"
Mar 13 12:56:58.491750 master-0 kubenswrapper[28149]: I0313 12:56:58.491735 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/226f8faa-22b5-465f-892c-3a7541177046-trusted-ca\") pod \"console-operator-6c7fb6b958-5lklz\" (UID: \"226f8faa-22b5-465f-892c-3a7541177046\") " pod="openshift-console-operator/console-operator-6c7fb6b958-5lklz"
Mar 13 12:56:58.491876 master-0 kubenswrapper[28149]: I0313 12:56:58.491861 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tkwzx\" (UniqueName: \"kubernetes.io/projected/226f8faa-22b5-465f-892c-3a7541177046-kube-api-access-tkwzx\") pod \"console-operator-6c7fb6b958-5lklz\" (UID: \"226f8faa-22b5-465f-892c-3a7541177046\") " pod="openshift-console-operator/console-operator-6c7fb6b958-5lklz"
Mar 13 12:56:58.491989 master-0 kubenswrapper[28149]: I0313 12:56:58.491976 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/226f8faa-22b5-465f-892c-3a7541177046-serving-cert\") pod \"console-operator-6c7fb6b958-5lklz\" (UID: \"226f8faa-22b5-465f-892c-3a7541177046\") " pod="openshift-console-operator/console-operator-6c7fb6b958-5lklz"
Mar 13 12:56:58.492776 master-0 kubenswrapper[28149]: I0313 12:56:58.492574 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/226f8faa-22b5-465f-892c-3a7541177046-config\") pod \"console-operator-6c7fb6b958-5lklz\" (UID: \"226f8faa-22b5-465f-892c-3a7541177046\") " pod="openshift-console-operator/console-operator-6c7fb6b958-5lklz"
Mar 13 12:56:58.494097 master-0 kubenswrapper[28149]: I0313 12:56:58.493875 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/226f8faa-22b5-465f-892c-3a7541177046-trusted-ca\") pod \"console-operator-6c7fb6b958-5lklz\" (UID: \"226f8faa-22b5-465f-892c-3a7541177046\") " pod="openshift-console-operator/console-operator-6c7fb6b958-5lklz"
Mar 13 12:56:58.494991 master-0 kubenswrapper[28149]: I0313 12:56:58.494960 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/226f8faa-22b5-465f-892c-3a7541177046-serving-cert\") pod \"console-operator-6c7fb6b958-5lklz\" (UID: \"226f8faa-22b5-465f-892c-3a7541177046\") " pod="openshift-console-operator/console-operator-6c7fb6b958-5lklz"
Mar 13 12:56:58.507110 master-0 kubenswrapper[28149]: I0313 12:56:58.507065 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tkwzx\" (UniqueName: \"kubernetes.io/projected/226f8faa-22b5-465f-892c-3a7541177046-kube-api-access-tkwzx\") pod \"console-operator-6c7fb6b958-5lklz\" (UID: \"226f8faa-22b5-465f-892c-3a7541177046\") " pod="openshift-console-operator/console-operator-6c7fb6b958-5lklz"
Mar 13 12:56:58.593174 master-0 kubenswrapper[28149]: I0313 12:56:58.592969 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemeter-client-tls\" (UniqueName: \"kubernetes.io/secret/06eeb16c-c683-4bfe-b243-df34da90042b-telemeter-client-tls\") pod \"telemeter-client-647df5cfcf-7dtwq\" (UID: \"06eeb16c-c683-4bfe-b243-df34da90042b\") " pod="openshift-monitoring/telemeter-client-647df5cfcf-7dtwq"
Mar 13 12:56:58.593433 master-0 kubenswrapper[28149]: E0313 12:56:58.593212 28149 secret.go:189] Couldn't get secret openshift-monitoring/telemeter-client-tls: secret "telemeter-client-tls" not found
Mar 13 12:56:58.593433 master-0 kubenswrapper[28149]: E0313 12:56:58.593317 28149 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/06eeb16c-c683-4bfe-b243-df34da90042b-telemeter-client-tls podName:06eeb16c-c683-4bfe-b243-df34da90042b nodeName:}" failed. No retries permitted until 2026-03-13 12:57:14.593294595 +0000 UTC m=+208.246759754 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "telemeter-client-tls" (UniqueName: "kubernetes.io/secret/06eeb16c-c683-4bfe-b243-df34da90042b-telemeter-client-tls") pod "telemeter-client-647df5cfcf-7dtwq" (UID: "06eeb16c-c683-4bfe-b243-df34da90042b") : secret "telemeter-client-tls" not found
Mar 13 12:56:58.640983 master-0 kubenswrapper[28149]: I0313 12:56:58.640886 28149 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-6c7fb6b958-5lklz"
Mar 13 12:56:59.397629 master-0 kubenswrapper[28149]: I0313 12:56:59.397498 28149 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-6c7fb6b958-5lklz"]
Mar 13 12:56:59.405551 master-0 kubenswrapper[28149]: W0313 12:56:59.405501 28149 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod226f8faa_22b5_465f_892c_3a7541177046.slice/crio-0f9120f7acc2dde94f2924eed20bce5155872194bf71bf425e9916e223f45fd5 WatchSource:0}: Error finding container 0f9120f7acc2dde94f2924eed20bce5155872194bf71bf425e9916e223f45fd5: Status 404 returned error can't find the container with id 0f9120f7acc2dde94f2924eed20bce5155872194bf71bf425e9916e223f45fd5
Mar 13 12:56:59.784389 master-0 kubenswrapper[28149]: I0313 12:56:59.784255 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-6c7fb6b958-5lklz" event={"ID":"226f8faa-22b5-465f-892c-3a7541177046","Type":"ContainerStarted","Data":"0f9120f7acc2dde94f2924eed20bce5155872194bf71bf425e9916e223f45fd5"}
Mar 13 12:57:01.801588 master-0 kubenswrapper[28149]: I0313 12:57:01.801527 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-6c7fb6b958-5lklz" event={"ID":"226f8faa-22b5-465f-892c-3a7541177046","Type":"ContainerStarted","Data":"a5c0680b3a8fbd84bba0d79fc61b17151671f366d5fb103f1a0609b633c6798a"}
Mar 13 12:57:01.802052 master-0 kubenswrapper[28149]: I0313 12:57:01.801911 28149 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console-operator/console-operator-6c7fb6b958-5lklz"
Mar 13 12:57:01.803664 master-0 kubenswrapper[28149]: I0313 12:57:01.803606 28149 patch_prober.go:28] interesting pod/console-operator-6c7fb6b958-5lklz container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.128.0.91:8443/readyz\": dial tcp 10.128.0.91:8443: connect: connection refused" start-of-body=
Mar 13 12:57:01.803741 master-0 kubenswrapper[28149]: I0313 12:57:01.803695 28149 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-6c7fb6b958-5lklz" podUID="226f8faa-22b5-465f-892c-3a7541177046" containerName="console-operator" probeResult="failure" output="Get \"https://10.128.0.91:8443/readyz\": dial tcp 10.128.0.91:8443: connect: connection refused"
Mar 13 12:57:02.582159 master-0 kubenswrapper[28149]: I0313 12:57:02.582088 28149 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-monitoring/metrics-server-84b66c585b-f7g5r"
Mar 13 12:57:02.582159 master-0 kubenswrapper[28149]: I0313 12:57:02.582157 28149 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-monitoring/metrics-server-84b66c585b-f7g5r"
Mar 13 12:57:02.810638 master-0 kubenswrapper[28149]: I0313 12:57:02.810578 28149 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console-operator_console-operator-6c7fb6b958-5lklz_226f8faa-22b5-465f-892c-3a7541177046/console-operator/0.log"
Mar 13 12:57:02.810638 master-0 kubenswrapper[28149]: I0313 12:57:02.810636 28149 generic.go:334] "Generic (PLEG): container finished" podID="226f8faa-22b5-465f-892c-3a7541177046" containerID="a5c0680b3a8fbd84bba0d79fc61b17151671f366d5fb103f1a0609b633c6798a" exitCode=255
Mar 13 12:57:02.811462 master-0 kubenswrapper[28149]: I0313 12:57:02.810671 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-6c7fb6b958-5lklz" event={"ID":"226f8faa-22b5-465f-892c-3a7541177046","Type":"ContainerDied","Data":"a5c0680b3a8fbd84bba0d79fc61b17151671f366d5fb103f1a0609b633c6798a"}
Mar 13 12:57:02.811462 master-0 kubenswrapper[28149]: I0313 12:57:02.811176 28149 scope.go:117] "RemoveContainer" containerID="a5c0680b3a8fbd84bba0d79fc61b17151671f366d5fb103f1a0609b633c6798a"
Mar 13 12:57:03.821634 master-0 kubenswrapper[28149]: I0313 12:57:03.821564 28149 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console-operator_console-operator-6c7fb6b958-5lklz_226f8faa-22b5-465f-892c-3a7541177046/console-operator/1.log"
Mar 13 12:57:03.822824 master-0 kubenswrapper[28149]: I0313 12:57:03.822775 28149 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console-operator_console-operator-6c7fb6b958-5lklz_226f8faa-22b5-465f-892c-3a7541177046/console-operator/0.log"
Mar 13 12:57:03.822941 master-0 kubenswrapper[28149]: I0313 12:57:03.822865 28149 generic.go:334] "Generic (PLEG): container finished" podID="226f8faa-22b5-465f-892c-3a7541177046" containerID="2e8e6dea332cce5fafb3b3c2fe6ba7e483f4f60db9bce41369e6a273c1e228e6" exitCode=255
Mar 13 12:57:03.823049 master-0 kubenswrapper[28149]: I0313 12:57:03.822995 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-6c7fb6b958-5lklz" event={"ID":"226f8faa-22b5-465f-892c-3a7541177046","Type":"ContainerDied","Data":"2e8e6dea332cce5fafb3b3c2fe6ba7e483f4f60db9bce41369e6a273c1e228e6"}
Mar 13 12:57:03.823189 master-0 kubenswrapper[28149]: I0313 12:57:03.823166 28149 scope.go:117] "RemoveContainer" containerID="a5c0680b3a8fbd84bba0d79fc61b17151671f366d5fb103f1a0609b633c6798a"
Mar 13 12:57:03.823743 master-0 kubenswrapper[28149]: I0313 12:57:03.823706 28149 scope.go:117] "RemoveContainer" containerID="2e8e6dea332cce5fafb3b3c2fe6ba7e483f4f60db9bce41369e6a273c1e228e6"
Mar 13 12:57:03.824159 master-0 kubenswrapper[28149]: E0313 12:57:03.824109 28149 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"console-operator\" with CrashLoopBackOff: \"back-off 10s restarting failed container=console-operator pod=console-operator-6c7fb6b958-5lklz_openshift-console-operator(226f8faa-22b5-465f-892c-3a7541177046)\""
pod="openshift-console-operator/console-operator-6c7fb6b958-5lklz" podUID="226f8faa-22b5-465f-892c-3a7541177046" Mar 13 12:57:04.836403 master-0 kubenswrapper[28149]: I0313 12:57:04.836331 28149 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console-operator_console-operator-6c7fb6b958-5lklz_226f8faa-22b5-465f-892c-3a7541177046/console-operator/1.log" Mar 13 12:57:04.837397 master-0 kubenswrapper[28149]: I0313 12:57:04.837355 28149 scope.go:117] "RemoveContainer" containerID="2e8e6dea332cce5fafb3b3c2fe6ba7e483f4f60db9bce41369e6a273c1e228e6" Mar 13 12:57:04.837930 master-0 kubenswrapper[28149]: E0313 12:57:04.837859 28149 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"console-operator\" with CrashLoopBackOff: \"back-off 10s restarting failed container=console-operator pod=console-operator-6c7fb6b958-5lklz_openshift-console-operator(226f8faa-22b5-465f-892c-3a7541177046)\"" pod="openshift-console-operator/console-operator-6c7fb6b958-5lklz" podUID="226f8faa-22b5-465f-892c-3a7541177046" Mar 13 12:57:07.211643 master-0 kubenswrapper[28149]: I0313 12:57:07.211556 28149 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/prometheus-k8s-0"] Mar 13 12:57:07.215389 master-0 kubenswrapper[28149]: I0313 12:57:07.215340 28149 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/prometheus-k8s-0" Mar 13 12:57:07.222771 master-0 kubenswrapper[28149]: I0313 12:57:07.222695 28149 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-thanos-sidecar-tls" Mar 13 12:57:07.223304 master-0 kubenswrapper[28149]: I0313 12:57:07.223263 28149 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-thanos-prometheus-http-client-file" Mar 13 12:57:07.223384 master-0 kubenswrapper[28149]: I0313 12:57:07.223292 28149 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-web-config" Mar 13 12:57:07.224395 master-0 kubenswrapper[28149]: I0313 12:57:07.224373 28149 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-tls-assets-0" Mar 13 12:57:07.224660 master-0 kubenswrapper[28149]: I0313 12:57:07.224645 28149 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-tls" Mar 13 12:57:07.224908 master-0 kubenswrapper[28149]: I0313 12:57:07.224893 28149 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"kube-rbac-proxy" Mar 13 12:57:07.225506 master-0 kubenswrapper[28149]: I0313 12:57:07.225454 28149 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-grpc-tls-6lsjijes9o2i6" Mar 13 12:57:07.227439 master-0 kubenswrapper[28149]: I0313 12:57:07.227399 28149 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-kube-rbac-proxy-web" Mar 13 12:57:07.227627 master-0 kubenswrapper[28149]: I0313 12:57:07.227600 28149 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s" Mar 13 12:57:07.227690 master-0 kubenswrapper[28149]: I0313 12:57:07.227657 28149 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-monitoring"/"prometheus-trusted-ca-bundle" Mar 13 12:57:07.230216 master-0 kubenswrapper[28149]: I0313 12:57:07.230047 28149 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"serving-certs-ca-bundle" Mar 13 12:57:07.244663 master-0 kubenswrapper[28149]: I0313 12:57:07.243549 28149 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"prometheus-k8s-rulefiles-0" Mar 13 12:57:07.263223 master-0 kubenswrapper[28149]: I0313 12:57:07.263156 28149 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/prometheus-k8s-0"] Mar 13 12:57:07.275166 master-0 kubenswrapper[28149]: I0313 12:57:07.272662 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/4e6b4a1c-92b3-4b72-9f4b-1fd7814125b5-secret-kube-rbac-proxy\") pod \"prometheus-k8s-0\" (UID: \"4e6b4a1c-92b3-4b72-9f4b-1fd7814125b5\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 13 12:57:07.275166 master-0 kubenswrapper[28149]: I0313 12:57:07.272715 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"configmap-serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4e6b4a1c-92b3-4b72-9f4b-1fd7814125b5-configmap-serving-certs-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"4e6b4a1c-92b3-4b72-9f4b-1fd7814125b5\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 13 12:57:07.275166 master-0 kubenswrapper[28149]: I0313 12:57:07.272773 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/4e6b4a1c-92b3-4b72-9f4b-1fd7814125b5-web-config\") pod \"prometheus-k8s-0\" (UID: \"4e6b4a1c-92b3-4b72-9f4b-1fd7814125b5\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 13 12:57:07.275166 master-0 kubenswrapper[28149]: I0313 12:57:07.272797 28149 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/4e6b4a1c-92b3-4b72-9f4b-1fd7814125b5-tls-assets\") pod \"prometheus-k8s-0\" (UID: \"4e6b4a1c-92b3-4b72-9f4b-1fd7814125b5\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 13 12:57:07.275166 master-0 kubenswrapper[28149]: I0313 12:57:07.272827 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/4e6b4a1c-92b3-4b72-9f4b-1fd7814125b5-config-out\") pod \"prometheus-k8s-0\" (UID: \"4e6b4a1c-92b3-4b72-9f4b-1fd7814125b5\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 13 12:57:07.275166 master-0 kubenswrapper[28149]: I0313 12:57:07.272853 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4e6b4a1c-92b3-4b72-9f4b-1fd7814125b5-configmap-kubelet-serving-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"4e6b4a1c-92b3-4b72-9f4b-1fd7814125b5\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 13 12:57:07.275166 master-0 kubenswrapper[28149]: I0313 12:57:07.272880 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/4e6b4a1c-92b3-4b72-9f4b-1fd7814125b5-thanos-prometheus-http-client-file\") pod \"prometheus-k8s-0\" (UID: \"4e6b4a1c-92b3-4b72-9f4b-1fd7814125b5\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 13 12:57:07.275166 master-0 kubenswrapper[28149]: I0313 12:57:07.272906 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/4e6b4a1c-92b3-4b72-9f4b-1fd7814125b5-secret-metrics-client-certs\") pod \"prometheus-k8s-0\" (UID: 
\"4e6b4a1c-92b3-4b72-9f4b-1fd7814125b5\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 13 12:57:07.275166 master-0 kubenswrapper[28149]: I0313 12:57:07.272946 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-prometheus-k8s-tls\" (UniqueName: \"kubernetes.io/secret/4e6b4a1c-92b3-4b72-9f4b-1fd7814125b5-secret-prometheus-k8s-tls\") pod \"prometheus-k8s-0\" (UID: \"4e6b4a1c-92b3-4b72-9f4b-1fd7814125b5\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 13 12:57:07.275166 master-0 kubenswrapper[28149]: I0313 12:57:07.272969 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4e6b4a1c-92b3-4b72-9f4b-1fd7814125b5-prometheus-trusted-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"4e6b4a1c-92b3-4b72-9f4b-1fd7814125b5\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 13 12:57:07.275166 master-0 kubenswrapper[28149]: I0313 12:57:07.272990 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-k8s-db\" (UniqueName: \"kubernetes.io/empty-dir/4e6b4a1c-92b3-4b72-9f4b-1fd7814125b5-prometheus-k8s-db\") pod \"prometheus-k8s-0\" (UID: \"4e6b4a1c-92b3-4b72-9f4b-1fd7814125b5\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 13 12:57:07.275166 master-0 kubenswrapper[28149]: I0313 12:57:07.273015 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-k8s-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/4e6b4a1c-92b3-4b72-9f4b-1fd7814125b5-prometheus-k8s-rulefiles-0\") pod \"prometheus-k8s-0\" (UID: \"4e6b4a1c-92b3-4b72-9f4b-1fd7814125b5\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 13 12:57:07.275166 master-0 kubenswrapper[28149]: I0313 12:57:07.273043 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"secret-prometheus-k8s-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/4e6b4a1c-92b3-4b72-9f4b-1fd7814125b5-secret-prometheus-k8s-kube-rbac-proxy-web\") pod \"prometheus-k8s-0\" (UID: \"4e6b4a1c-92b3-4b72-9f4b-1fd7814125b5\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 13 12:57:07.275166 master-0 kubenswrapper[28149]: I0313 12:57:07.273091 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"configmap-metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/4e6b4a1c-92b3-4b72-9f4b-1fd7814125b5-configmap-metrics-client-ca\") pod \"prometheus-k8s-0\" (UID: \"4e6b4a1c-92b3-4b72-9f4b-1fd7814125b5\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 13 12:57:07.275166 master-0 kubenswrapper[28149]: I0313 12:57:07.273113 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/4e6b4a1c-92b3-4b72-9f4b-1fd7814125b5-config\") pod \"prometheus-k8s-0\" (UID: \"4e6b4a1c-92b3-4b72-9f4b-1fd7814125b5\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 13 12:57:07.275166 master-0 kubenswrapper[28149]: I0313 12:57:07.273162 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/4e6b4a1c-92b3-4b72-9f4b-1fd7814125b5-secret-grpc-tls\") pod \"prometheus-k8s-0\" (UID: \"4e6b4a1c-92b3-4b72-9f4b-1fd7814125b5\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 13 12:57:07.275166 master-0 kubenswrapper[28149]: I0313 12:57:07.273188 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-prometheus-k8s-thanos-sidecar-tls\" (UniqueName: \"kubernetes.io/secret/4e6b4a1c-92b3-4b72-9f4b-1fd7814125b5-secret-prometheus-k8s-thanos-sidecar-tls\") pod \"prometheus-k8s-0\" (UID: \"4e6b4a1c-92b3-4b72-9f4b-1fd7814125b5\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 13 12:57:07.275166 
master-0 kubenswrapper[28149]: I0313 12:57:07.273219 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zqsmk\" (UniqueName: \"kubernetes.io/projected/4e6b4a1c-92b3-4b72-9f4b-1fd7814125b5-kube-api-access-zqsmk\") pod \"prometheus-k8s-0\" (UID: \"4e6b4a1c-92b3-4b72-9f4b-1fd7814125b5\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 13 12:57:07.374164 master-0 kubenswrapper[28149]: I0313 12:57:07.374097 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/4e6b4a1c-92b3-4b72-9f4b-1fd7814125b5-web-config\") pod \"prometheus-k8s-0\" (UID: \"4e6b4a1c-92b3-4b72-9f4b-1fd7814125b5\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 13 12:57:07.374356 master-0 kubenswrapper[28149]: I0313 12:57:07.374178 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/4e6b4a1c-92b3-4b72-9f4b-1fd7814125b5-tls-assets\") pod \"prometheus-k8s-0\" (UID: \"4e6b4a1c-92b3-4b72-9f4b-1fd7814125b5\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 13 12:57:07.374356 master-0 kubenswrapper[28149]: I0313 12:57:07.374204 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/4e6b4a1c-92b3-4b72-9f4b-1fd7814125b5-config-out\") pod \"prometheus-k8s-0\" (UID: \"4e6b4a1c-92b3-4b72-9f4b-1fd7814125b5\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 13 12:57:07.374356 master-0 kubenswrapper[28149]: I0313 12:57:07.374238 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4e6b4a1c-92b3-4b72-9f4b-1fd7814125b5-configmap-kubelet-serving-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"4e6b4a1c-92b3-4b72-9f4b-1fd7814125b5\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 13 12:57:07.374356 
master-0 kubenswrapper[28149]: I0313 12:57:07.374269 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/4e6b4a1c-92b3-4b72-9f4b-1fd7814125b5-thanos-prometheus-http-client-file\") pod \"prometheus-k8s-0\" (UID: \"4e6b4a1c-92b3-4b72-9f4b-1fd7814125b5\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 13 12:57:07.374356 master-0 kubenswrapper[28149]: I0313 12:57:07.374297 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/4e6b4a1c-92b3-4b72-9f4b-1fd7814125b5-secret-metrics-client-certs\") pod \"prometheus-k8s-0\" (UID: \"4e6b4a1c-92b3-4b72-9f4b-1fd7814125b5\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 13 12:57:07.374356 master-0 kubenswrapper[28149]: I0313 12:57:07.374332 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-prometheus-k8s-tls\" (UniqueName: \"kubernetes.io/secret/4e6b4a1c-92b3-4b72-9f4b-1fd7814125b5-secret-prometheus-k8s-tls\") pod \"prometheus-k8s-0\" (UID: \"4e6b4a1c-92b3-4b72-9f4b-1fd7814125b5\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 13 12:57:07.374356 master-0 kubenswrapper[28149]: I0313 12:57:07.374352 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-k8s-db\" (UniqueName: \"kubernetes.io/empty-dir/4e6b4a1c-92b3-4b72-9f4b-1fd7814125b5-prometheus-k8s-db\") pod \"prometheus-k8s-0\" (UID: \"4e6b4a1c-92b3-4b72-9f4b-1fd7814125b5\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 13 12:57:07.374582 master-0 kubenswrapper[28149]: I0313 12:57:07.374372 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4e6b4a1c-92b3-4b72-9f4b-1fd7814125b5-prometheus-trusted-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"4e6b4a1c-92b3-4b72-9f4b-1fd7814125b5\") " 
pod="openshift-monitoring/prometheus-k8s-0" Mar 13 12:57:07.374582 master-0 kubenswrapper[28149]: I0313 12:57:07.374396 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-k8s-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/4e6b4a1c-92b3-4b72-9f4b-1fd7814125b5-prometheus-k8s-rulefiles-0\") pod \"prometheus-k8s-0\" (UID: \"4e6b4a1c-92b3-4b72-9f4b-1fd7814125b5\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 13 12:57:07.374582 master-0 kubenswrapper[28149]: I0313 12:57:07.374425 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-prometheus-k8s-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/4e6b4a1c-92b3-4b72-9f4b-1fd7814125b5-secret-prometheus-k8s-kube-rbac-proxy-web\") pod \"prometheus-k8s-0\" (UID: \"4e6b4a1c-92b3-4b72-9f4b-1fd7814125b5\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 13 12:57:07.374582 master-0 kubenswrapper[28149]: I0313 12:57:07.374475 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"configmap-metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/4e6b4a1c-92b3-4b72-9f4b-1fd7814125b5-configmap-metrics-client-ca\") pod \"prometheus-k8s-0\" (UID: \"4e6b4a1c-92b3-4b72-9f4b-1fd7814125b5\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 13 12:57:07.374582 master-0 kubenswrapper[28149]: I0313 12:57:07.374498 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/4e6b4a1c-92b3-4b72-9f4b-1fd7814125b5-config\") pod \"prometheus-k8s-0\" (UID: \"4e6b4a1c-92b3-4b72-9f4b-1fd7814125b5\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 13 12:57:07.374582 master-0 kubenswrapper[28149]: I0313 12:57:07.374517 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/4e6b4a1c-92b3-4b72-9f4b-1fd7814125b5-secret-grpc-tls\") pod \"prometheus-k8s-0\" (UID: 
\"4e6b4a1c-92b3-4b72-9f4b-1fd7814125b5\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 13 12:57:07.374582 master-0 kubenswrapper[28149]: I0313 12:57:07.374540 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-prometheus-k8s-thanos-sidecar-tls\" (UniqueName: \"kubernetes.io/secret/4e6b4a1c-92b3-4b72-9f4b-1fd7814125b5-secret-prometheus-k8s-thanos-sidecar-tls\") pod \"prometheus-k8s-0\" (UID: \"4e6b4a1c-92b3-4b72-9f4b-1fd7814125b5\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 13 12:57:07.374798 master-0 kubenswrapper[28149]: I0313 12:57:07.374602 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zqsmk\" (UniqueName: \"kubernetes.io/projected/4e6b4a1c-92b3-4b72-9f4b-1fd7814125b5-kube-api-access-zqsmk\") pod \"prometheus-k8s-0\" (UID: \"4e6b4a1c-92b3-4b72-9f4b-1fd7814125b5\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 13 12:57:07.374798 master-0 kubenswrapper[28149]: I0313 12:57:07.374634 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/4e6b4a1c-92b3-4b72-9f4b-1fd7814125b5-secret-kube-rbac-proxy\") pod \"prometheus-k8s-0\" (UID: \"4e6b4a1c-92b3-4b72-9f4b-1fd7814125b5\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 13 12:57:07.374798 master-0 kubenswrapper[28149]: I0313 12:57:07.374656 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"configmap-serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4e6b4a1c-92b3-4b72-9f4b-1fd7814125b5-configmap-serving-certs-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"4e6b4a1c-92b3-4b72-9f4b-1fd7814125b5\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 13 12:57:07.375637 master-0 kubenswrapper[28149]: I0313 12:57:07.375608 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-k8s-db\" (UniqueName: 
\"kubernetes.io/empty-dir/4e6b4a1c-92b3-4b72-9f4b-1fd7814125b5-prometheus-k8s-db\") pod \"prometheus-k8s-0\" (UID: \"4e6b4a1c-92b3-4b72-9f4b-1fd7814125b5\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 13 12:57:07.375637 master-0 kubenswrapper[28149]: I0313 12:57:07.375623 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"configmap-serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4e6b4a1c-92b3-4b72-9f4b-1fd7814125b5-configmap-serving-certs-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"4e6b4a1c-92b3-4b72-9f4b-1fd7814125b5\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 13 12:57:07.375973 master-0 kubenswrapper[28149]: E0313 12:57:07.375945 28149 secret.go:189] Couldn't get secret openshift-monitoring/prometheus-k8s-tls: secret "prometheus-k8s-tls" not found Mar 13 12:57:07.376111 master-0 kubenswrapper[28149]: E0313 12:57:07.376097 28149 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4e6b4a1c-92b3-4b72-9f4b-1fd7814125b5-secret-prometheus-k8s-tls podName:4e6b4a1c-92b3-4b72-9f4b-1fd7814125b5 nodeName:}" failed. No retries permitted until 2026-03-13 12:57:07.876079767 +0000 UTC m=+201.529544986 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "secret-prometheus-k8s-tls" (UniqueName: "kubernetes.io/secret/4e6b4a1c-92b3-4b72-9f4b-1fd7814125b5-secret-prometheus-k8s-tls") pod "prometheus-k8s-0" (UID: "4e6b4a1c-92b3-4b72-9f4b-1fd7814125b5") : secret "prometheus-k8s-tls" not found Mar 13 12:57:07.376752 master-0 kubenswrapper[28149]: I0313 12:57:07.376704 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4e6b4a1c-92b3-4b72-9f4b-1fd7814125b5-configmap-kubelet-serving-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"4e6b4a1c-92b3-4b72-9f4b-1fd7814125b5\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 13 12:57:07.376899 master-0 kubenswrapper[28149]: I0313 12:57:07.376869 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"configmap-metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/4e6b4a1c-92b3-4b72-9f4b-1fd7814125b5-configmap-metrics-client-ca\") pod \"prometheus-k8s-0\" (UID: \"4e6b4a1c-92b3-4b72-9f4b-1fd7814125b5\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 13 12:57:07.377267 master-0 kubenswrapper[28149]: E0313 12:57:07.377248 28149 secret.go:189] Couldn't get secret openshift-monitoring/prometheus-k8s-thanos-sidecar-tls: secret "prometheus-k8s-thanos-sidecar-tls" not found Mar 13 12:57:07.377267 master-0 kubenswrapper[28149]: I0313 12:57:07.377256 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4e6b4a1c-92b3-4b72-9f4b-1fd7814125b5-prometheus-trusted-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"4e6b4a1c-92b3-4b72-9f4b-1fd7814125b5\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 13 12:57:07.377426 master-0 kubenswrapper[28149]: E0313 12:57:07.377293 28149 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4e6b4a1c-92b3-4b72-9f4b-1fd7814125b5-secret-prometheus-k8s-thanos-sidecar-tls 
podName:4e6b4a1c-92b3-4b72-9f4b-1fd7814125b5 nodeName:}" failed. No retries permitted until 2026-03-13 12:57:07.877278809 +0000 UTC m=+201.530744048 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "secret-prometheus-k8s-thanos-sidecar-tls" (UniqueName: "kubernetes.io/secret/4e6b4a1c-92b3-4b72-9f4b-1fd7814125b5-secret-prometheus-k8s-thanos-sidecar-tls") pod "prometheus-k8s-0" (UID: "4e6b4a1c-92b3-4b72-9f4b-1fd7814125b5") : secret "prometheus-k8s-thanos-sidecar-tls" not found Mar 13 12:57:07.379638 master-0 kubenswrapper[28149]: I0313 12:57:07.379599 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/4e6b4a1c-92b3-4b72-9f4b-1fd7814125b5-thanos-prometheus-http-client-file\") pod \"prometheus-k8s-0\" (UID: \"4e6b4a1c-92b3-4b72-9f4b-1fd7814125b5\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 13 12:57:07.379913 master-0 kubenswrapper[28149]: I0313 12:57:07.379889 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-prometheus-k8s-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/4e6b4a1c-92b3-4b72-9f4b-1fd7814125b5-secret-prometheus-k8s-kube-rbac-proxy-web\") pod \"prometheus-k8s-0\" (UID: \"4e6b4a1c-92b3-4b72-9f4b-1fd7814125b5\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 13 12:57:07.380116 master-0 kubenswrapper[28149]: I0313 12:57:07.380085 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/4e6b4a1c-92b3-4b72-9f4b-1fd7814125b5-web-config\") pod \"prometheus-k8s-0\" (UID: \"4e6b4a1c-92b3-4b72-9f4b-1fd7814125b5\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 13 12:57:07.380357 master-0 kubenswrapper[28149]: I0313 12:57:07.380326 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/4e6b4a1c-92b3-4b72-9f4b-1fd7814125b5-secret-grpc-tls\") pod 
\"prometheus-k8s-0\" (UID: \"4e6b4a1c-92b3-4b72-9f4b-1fd7814125b5\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 13 12:57:07.380585 master-0 kubenswrapper[28149]: I0313 12:57:07.380558 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/4e6b4a1c-92b3-4b72-9f4b-1fd7814125b5-config\") pod \"prometheus-k8s-0\" (UID: \"4e6b4a1c-92b3-4b72-9f4b-1fd7814125b5\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 13 12:57:07.380757 master-0 kubenswrapper[28149]: I0313 12:57:07.380732 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/4e6b4a1c-92b3-4b72-9f4b-1fd7814125b5-config-out\") pod \"prometheus-k8s-0\" (UID: \"4e6b4a1c-92b3-4b72-9f4b-1fd7814125b5\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 13 12:57:07.383063 master-0 kubenswrapper[28149]: I0313 12:57:07.383024 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-k8s-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/4e6b4a1c-92b3-4b72-9f4b-1fd7814125b5-prometheus-k8s-rulefiles-0\") pod \"prometheus-k8s-0\" (UID: \"4e6b4a1c-92b3-4b72-9f4b-1fd7814125b5\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 13 12:57:07.387639 master-0 kubenswrapper[28149]: I0313 12:57:07.387579 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/4e6b4a1c-92b3-4b72-9f4b-1fd7814125b5-tls-assets\") pod \"prometheus-k8s-0\" (UID: \"4e6b4a1c-92b3-4b72-9f4b-1fd7814125b5\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 13 12:57:07.388557 master-0 kubenswrapper[28149]: I0313 12:57:07.388178 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/4e6b4a1c-92b3-4b72-9f4b-1fd7814125b5-secret-kube-rbac-proxy\") pod \"prometheus-k8s-0\" (UID: \"4e6b4a1c-92b3-4b72-9f4b-1fd7814125b5\") " 
pod="openshift-monitoring/prometheus-k8s-0" Mar 13 12:57:07.388557 master-0 kubenswrapper[28149]: I0313 12:57:07.388391 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/4e6b4a1c-92b3-4b72-9f4b-1fd7814125b5-secret-metrics-client-certs\") pod \"prometheus-k8s-0\" (UID: \"4e6b4a1c-92b3-4b72-9f4b-1fd7814125b5\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 13 12:57:07.394626 master-0 kubenswrapper[28149]: I0313 12:57:07.394573 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zqsmk\" (UniqueName: \"kubernetes.io/projected/4e6b4a1c-92b3-4b72-9f4b-1fd7814125b5-kube-api-access-zqsmk\") pod \"prometheus-k8s-0\" (UID: \"4e6b4a1c-92b3-4b72-9f4b-1fd7814125b5\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 13 12:57:07.960412 master-0 kubenswrapper[28149]: I0313 12:57:07.960356 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-prometheus-k8s-tls\" (UniqueName: \"kubernetes.io/secret/4e6b4a1c-92b3-4b72-9f4b-1fd7814125b5-secret-prometheus-k8s-tls\") pod \"prometheus-k8s-0\" (UID: \"4e6b4a1c-92b3-4b72-9f4b-1fd7814125b5\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 13 12:57:07.960633 master-0 kubenswrapper[28149]: E0313 12:57:07.960576 28149 secret.go:189] Couldn't get secret openshift-monitoring/prometheus-k8s-tls: secret "prometheus-k8s-tls" not found Mar 13 12:57:07.960692 master-0 kubenswrapper[28149]: I0313 12:57:07.960654 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-prometheus-k8s-thanos-sidecar-tls\" (UniqueName: \"kubernetes.io/secret/4e6b4a1c-92b3-4b72-9f4b-1fd7814125b5-secret-prometheus-k8s-thanos-sidecar-tls\") pod \"prometheus-k8s-0\" (UID: \"4e6b4a1c-92b3-4b72-9f4b-1fd7814125b5\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 13 12:57:07.960805 master-0 kubenswrapper[28149]: E0313 12:57:07.960678 28149 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4e6b4a1c-92b3-4b72-9f4b-1fd7814125b5-secret-prometheus-k8s-tls podName:4e6b4a1c-92b3-4b72-9f4b-1fd7814125b5 nodeName:}" failed. No retries permitted until 2026-03-13 12:57:08.960655236 +0000 UTC m=+202.614120465 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "secret-prometheus-k8s-tls" (UniqueName: "kubernetes.io/secret/4e6b4a1c-92b3-4b72-9f4b-1fd7814125b5-secret-prometheus-k8s-tls") pod "prometheus-k8s-0" (UID: "4e6b4a1c-92b3-4b72-9f4b-1fd7814125b5") : secret "prometheus-k8s-tls" not found Mar 13 12:57:07.960870 master-0 kubenswrapper[28149]: E0313 12:57:07.960811 28149 secret.go:189] Couldn't get secret openshift-monitoring/prometheus-k8s-thanos-sidecar-tls: secret "prometheus-k8s-thanos-sidecar-tls" not found Mar 13 12:57:07.960905 master-0 kubenswrapper[28149]: E0313 12:57:07.960876 28149 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4e6b4a1c-92b3-4b72-9f4b-1fd7814125b5-secret-prometheus-k8s-thanos-sidecar-tls podName:4e6b4a1c-92b3-4b72-9f4b-1fd7814125b5 nodeName:}" failed. No retries permitted until 2026-03-13 12:57:08.960856701 +0000 UTC m=+202.614321860 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "secret-prometheus-k8s-thanos-sidecar-tls" (UniqueName: "kubernetes.io/secret/4e6b4a1c-92b3-4b72-9f4b-1fd7814125b5-secret-prometheus-k8s-thanos-sidecar-tls") pod "prometheus-k8s-0" (UID: "4e6b4a1c-92b3-4b72-9f4b-1fd7814125b5") : secret "prometheus-k8s-thanos-sidecar-tls" not found Mar 13 12:57:08.642105 master-0 kubenswrapper[28149]: I0313 12:57:08.642022 28149 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-console-operator/console-operator-6c7fb6b958-5lklz" Mar 13 12:57:08.642105 master-0 kubenswrapper[28149]: I0313 12:57:08.642084 28149 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console-operator/console-operator-6c7fb6b958-5lklz" Mar 13 12:57:08.642863 master-0 kubenswrapper[28149]: I0313 12:57:08.642756 28149 scope.go:117] "RemoveContainer" containerID="2e8e6dea332cce5fafb3b3c2fe6ba7e483f4f60db9bce41369e6a273c1e228e6" Mar 13 12:57:08.643040 master-0 kubenswrapper[28149]: E0313 12:57:08.643004 28149 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"console-operator\" with CrashLoopBackOff: \"back-off 10s restarting failed container=console-operator pod=console-operator-6c7fb6b958-5lklz_openshift-console-operator(226f8faa-22b5-465f-892c-3a7541177046)\"" pod="openshift-console-operator/console-operator-6c7fb6b958-5lklz" podUID="226f8faa-22b5-465f-892c-3a7541177046" Mar 13 12:57:09.014693 master-0 kubenswrapper[28149]: I0313 12:57:09.014582 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-prometheus-k8s-thanos-sidecar-tls\" (UniqueName: \"kubernetes.io/secret/4e6b4a1c-92b3-4b72-9f4b-1fd7814125b5-secret-prometheus-k8s-thanos-sidecar-tls\") pod \"prometheus-k8s-0\" (UID: \"4e6b4a1c-92b3-4b72-9f4b-1fd7814125b5\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 13 12:57:09.014870 master-0 kubenswrapper[28149]: I0313 12:57:09.014694 28149 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"secret-prometheus-k8s-tls\" (UniqueName: \"kubernetes.io/secret/4e6b4a1c-92b3-4b72-9f4b-1fd7814125b5-secret-prometheus-k8s-tls\") pod \"prometheus-k8s-0\" (UID: \"4e6b4a1c-92b3-4b72-9f4b-1fd7814125b5\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 13 12:57:09.014912 master-0 kubenswrapper[28149]: E0313 12:57:09.014862 28149 secret.go:189] Couldn't get secret openshift-monitoring/prometheus-k8s-tls: secret "prometheus-k8s-tls" not found Mar 13 12:57:09.014962 master-0 kubenswrapper[28149]: E0313 12:57:09.014938 28149 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4e6b4a1c-92b3-4b72-9f4b-1fd7814125b5-secret-prometheus-k8s-tls podName:4e6b4a1c-92b3-4b72-9f4b-1fd7814125b5 nodeName:}" failed. No retries permitted until 2026-03-13 12:57:11.014921562 +0000 UTC m=+204.668386721 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "secret-prometheus-k8s-tls" (UniqueName: "kubernetes.io/secret/4e6b4a1c-92b3-4b72-9f4b-1fd7814125b5-secret-prometheus-k8s-tls") pod "prometheus-k8s-0" (UID: "4e6b4a1c-92b3-4b72-9f4b-1fd7814125b5") : secret "prometheus-k8s-tls" not found Mar 13 12:57:09.015160 master-0 kubenswrapper[28149]: E0313 12:57:09.015102 28149 secret.go:189] Couldn't get secret openshift-monitoring/prometheus-k8s-thanos-sidecar-tls: secret "prometheus-k8s-thanos-sidecar-tls" not found Mar 13 12:57:09.015246 master-0 kubenswrapper[28149]: E0313 12:57:09.015223 28149 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4e6b4a1c-92b3-4b72-9f4b-1fd7814125b5-secret-prometheus-k8s-thanos-sidecar-tls podName:4e6b4a1c-92b3-4b72-9f4b-1fd7814125b5 nodeName:}" failed. No retries permitted until 2026-03-13 12:57:11.015197679 +0000 UTC m=+204.668662918 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "secret-prometheus-k8s-thanos-sidecar-tls" (UniqueName: "kubernetes.io/secret/4e6b4a1c-92b3-4b72-9f4b-1fd7814125b5-secret-prometheus-k8s-thanos-sidecar-tls") pod "prometheus-k8s-0" (UID: "4e6b4a1c-92b3-4b72-9f4b-1fd7814125b5") : secret "prometheus-k8s-thanos-sidecar-tls" not found Mar 13 12:57:11.138249 master-0 kubenswrapper[28149]: I0313 12:57:11.135324 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-prometheus-k8s-thanos-sidecar-tls\" (UniqueName: \"kubernetes.io/secret/4e6b4a1c-92b3-4b72-9f4b-1fd7814125b5-secret-prometheus-k8s-thanos-sidecar-tls\") pod \"prometheus-k8s-0\" (UID: \"4e6b4a1c-92b3-4b72-9f4b-1fd7814125b5\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 13 12:57:11.138249 master-0 kubenswrapper[28149]: I0313 12:57:11.135512 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-prometheus-k8s-tls\" (UniqueName: \"kubernetes.io/secret/4e6b4a1c-92b3-4b72-9f4b-1fd7814125b5-secret-prometheus-k8s-tls\") pod \"prometheus-k8s-0\" (UID: \"4e6b4a1c-92b3-4b72-9f4b-1fd7814125b5\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 13 12:57:11.138249 master-0 kubenswrapper[28149]: E0313 12:57:11.135755 28149 secret.go:189] Couldn't get secret openshift-monitoring/prometheus-k8s-thanos-sidecar-tls: secret "prometheus-k8s-thanos-sidecar-tls" not found Mar 13 12:57:11.138249 master-0 kubenswrapper[28149]: E0313 12:57:11.135884 28149 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4e6b4a1c-92b3-4b72-9f4b-1fd7814125b5-secret-prometheus-k8s-thanos-sidecar-tls podName:4e6b4a1c-92b3-4b72-9f4b-1fd7814125b5 nodeName:}" failed. No retries permitted until 2026-03-13 12:57:15.135850188 +0000 UTC m=+208.789315347 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "secret-prometheus-k8s-thanos-sidecar-tls" (UniqueName: "kubernetes.io/secret/4e6b4a1c-92b3-4b72-9f4b-1fd7814125b5-secret-prometheus-k8s-thanos-sidecar-tls") pod "prometheus-k8s-0" (UID: "4e6b4a1c-92b3-4b72-9f4b-1fd7814125b5") : secret "prometheus-k8s-thanos-sidecar-tls" not found Mar 13 12:57:11.138249 master-0 kubenswrapper[28149]: E0313 12:57:11.136341 28149 secret.go:189] Couldn't get secret openshift-monitoring/prometheus-k8s-tls: secret "prometheus-k8s-tls" not found Mar 13 12:57:11.138249 master-0 kubenswrapper[28149]: E0313 12:57:11.136565 28149 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4e6b4a1c-92b3-4b72-9f4b-1fd7814125b5-secret-prometheus-k8s-tls podName:4e6b4a1c-92b3-4b72-9f4b-1fd7814125b5 nodeName:}" failed. No retries permitted until 2026-03-13 12:57:15.136434034 +0000 UTC m=+208.789899193 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "secret-prometheus-k8s-tls" (UniqueName: "kubernetes.io/secret/4e6b4a1c-92b3-4b72-9f4b-1fd7814125b5-secret-prometheus-k8s-tls") pod "prometheus-k8s-0" (UID: "4e6b4a1c-92b3-4b72-9f4b-1fd7814125b5") : secret "prometheus-k8s-tls" not found Mar 13 12:57:14.686642 master-0 kubenswrapper[28149]: I0313 12:57:14.686586 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemeter-client-tls\" (UniqueName: \"kubernetes.io/secret/06eeb16c-c683-4bfe-b243-df34da90042b-telemeter-client-tls\") pod \"telemeter-client-647df5cfcf-7dtwq\" (UID: \"06eeb16c-c683-4bfe-b243-df34da90042b\") " pod="openshift-monitoring/telemeter-client-647df5cfcf-7dtwq" Mar 13 12:57:14.687259 master-0 kubenswrapper[28149]: E0313 12:57:14.686755 28149 secret.go:189] Couldn't get secret openshift-monitoring/telemeter-client-tls: secret "telemeter-client-tls" not found Mar 13 12:57:14.687259 master-0 kubenswrapper[28149]: E0313 12:57:14.686838 28149 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/secret/06eeb16c-c683-4bfe-b243-df34da90042b-telemeter-client-tls podName:06eeb16c-c683-4bfe-b243-df34da90042b nodeName:}" failed. No retries permitted until 2026-03-13 12:57:46.686819381 +0000 UTC m=+240.340284540 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "telemeter-client-tls" (UniqueName: "kubernetes.io/secret/06eeb16c-c683-4bfe-b243-df34da90042b-telemeter-client-tls") pod "telemeter-client-647df5cfcf-7dtwq" (UID: "06eeb16c-c683-4bfe-b243-df34da90042b") : secret "telemeter-client-tls" not found Mar 13 12:57:15.194703 master-0 kubenswrapper[28149]: I0313 12:57:15.194624 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-prometheus-k8s-tls\" (UniqueName: \"kubernetes.io/secret/4e6b4a1c-92b3-4b72-9f4b-1fd7814125b5-secret-prometheus-k8s-tls\") pod \"prometheus-k8s-0\" (UID: \"4e6b4a1c-92b3-4b72-9f4b-1fd7814125b5\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 13 12:57:15.195267 master-0 kubenswrapper[28149]: I0313 12:57:15.194881 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-prometheus-k8s-thanos-sidecar-tls\" (UniqueName: \"kubernetes.io/secret/4e6b4a1c-92b3-4b72-9f4b-1fd7814125b5-secret-prometheus-k8s-thanos-sidecar-tls\") pod \"prometheus-k8s-0\" (UID: \"4e6b4a1c-92b3-4b72-9f4b-1fd7814125b5\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 13 12:57:15.195267 master-0 kubenswrapper[28149]: E0313 12:57:15.195068 28149 secret.go:189] Couldn't get secret openshift-monitoring/prometheus-k8s-thanos-sidecar-tls: secret "prometheus-k8s-thanos-sidecar-tls" not found Mar 13 12:57:15.195267 master-0 kubenswrapper[28149]: E0313 12:57:15.195092 28149 secret.go:189] Couldn't get secret openshift-monitoring/prometheus-k8s-tls: secret "prometheus-k8s-tls" not found Mar 13 12:57:15.195400 master-0 kubenswrapper[28149]: E0313 12:57:15.195195 28149 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/secret/4e6b4a1c-92b3-4b72-9f4b-1fd7814125b5-secret-prometheus-k8s-thanos-sidecar-tls podName:4e6b4a1c-92b3-4b72-9f4b-1fd7814125b5 nodeName:}" failed. No retries permitted until 2026-03-13 12:57:23.195168943 +0000 UTC m=+216.848634102 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "secret-prometheus-k8s-thanos-sidecar-tls" (UniqueName: "kubernetes.io/secret/4e6b4a1c-92b3-4b72-9f4b-1fd7814125b5-secret-prometheus-k8s-thanos-sidecar-tls") pod "prometheus-k8s-0" (UID: "4e6b4a1c-92b3-4b72-9f4b-1fd7814125b5") : secret "prometheus-k8s-thanos-sidecar-tls" not found Mar 13 12:57:15.195451 master-0 kubenswrapper[28149]: E0313 12:57:15.195405 28149 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4e6b4a1c-92b3-4b72-9f4b-1fd7814125b5-secret-prometheus-k8s-tls podName:4e6b4a1c-92b3-4b72-9f4b-1fd7814125b5 nodeName:}" failed. No retries permitted until 2026-03-13 12:57:23.195375428 +0000 UTC m=+216.848840697 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "secret-prometheus-k8s-tls" (UniqueName: "kubernetes.io/secret/4e6b4a1c-92b3-4b72-9f4b-1fd7814125b5-secret-prometheus-k8s-tls") pod "prometheus-k8s-0" (UID: "4e6b4a1c-92b3-4b72-9f4b-1fd7814125b5") : secret "prometheus-k8s-tls" not found Mar 13 12:57:19.686769 master-0 kubenswrapper[28149]: I0313 12:57:19.686724 28149 scope.go:117] "RemoveContainer" containerID="2e8e6dea332cce5fafb3b3c2fe6ba7e483f4f60db9bce41369e6a273c1e228e6" Mar 13 12:57:20.599278 master-0 kubenswrapper[28149]: I0313 12:57:20.599240 28149 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console-operator_console-operator-6c7fb6b958-5lklz_226f8faa-22b5-465f-892c-3a7541177046/console-operator/2.log" Mar 13 12:57:20.599823 master-0 kubenswrapper[28149]: I0313 12:57:20.599779 28149 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console-operator_console-operator-6c7fb6b958-5lklz_226f8faa-22b5-465f-892c-3a7541177046/console-operator/1.log" Mar 13 12:57:20.599892 master-0 kubenswrapper[28149]: I0313 12:57:20.599846 28149 generic.go:334] "Generic (PLEG): container finished" podID="226f8faa-22b5-465f-892c-3a7541177046" containerID="5fc79ba0490f14b8e73c2ed3e225d974f4078337597a48c21833c5e5acd642a1" exitCode=255 Mar 13 12:57:20.599935 master-0 kubenswrapper[28149]: I0313 12:57:20.599885 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-6c7fb6b958-5lklz" event={"ID":"226f8faa-22b5-465f-892c-3a7541177046","Type":"ContainerDied","Data":"5fc79ba0490f14b8e73c2ed3e225d974f4078337597a48c21833c5e5acd642a1"} Mar 13 12:57:20.599935 master-0 kubenswrapper[28149]: I0313 12:57:20.599928 28149 scope.go:117] "RemoveContainer" containerID="2e8e6dea332cce5fafb3b3c2fe6ba7e483f4f60db9bce41369e6a273c1e228e6" Mar 13 12:57:20.600518 master-0 kubenswrapper[28149]: I0313 12:57:20.600489 28149 scope.go:117] "RemoveContainer" 
containerID="5fc79ba0490f14b8e73c2ed3e225d974f4078337597a48c21833c5e5acd642a1" Mar 13 12:57:20.600747 master-0 kubenswrapper[28149]: E0313 12:57:20.600714 28149 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"console-operator\" with CrashLoopBackOff: \"back-off 20s restarting failed container=console-operator pod=console-operator-6c7fb6b958-5lklz_openshift-console-operator(226f8faa-22b5-465f-892c-3a7541177046)\"" pod="openshift-console-operator/console-operator-6c7fb6b958-5lklz" podUID="226f8faa-22b5-465f-892c-3a7541177046" Mar 13 12:57:21.621047 master-0 kubenswrapper[28149]: I0313 12:57:21.620979 28149 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console-operator_console-operator-6c7fb6b958-5lklz_226f8faa-22b5-465f-892c-3a7541177046/console-operator/2.log" Mar 13 12:57:22.597357 master-0 kubenswrapper[28149]: I0313 12:57:22.597309 28149 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-monitoring/metrics-server-84b66c585b-f7g5r" Mar 13 12:57:22.601273 master-0 kubenswrapper[28149]: I0313 12:57:22.601233 28149 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-monitoring/metrics-server-84b66c585b-f7g5r" Mar 13 12:57:22.651724 master-0 kubenswrapper[28149]: I0313 12:57:22.651674 28149 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/alertmanager-main-0"] Mar 13 12:57:22.654381 master-0 kubenswrapper[28149]: I0313 12:57:22.654342 28149 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/alertmanager-main-0" Mar 13 12:57:22.657629 master-0 kubenswrapper[28149]: I0313 12:57:22.657534 28149 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-kube-rbac-proxy-web" Mar 13 12:57:22.657835 master-0 kubenswrapper[28149]: I0313 12:57:22.657667 28149 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-main-tls" Mar 13 12:57:22.657835 master-0 kubenswrapper[28149]: I0313 12:57:22.657751 28149 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-kube-rbac-proxy" Mar 13 12:57:22.657835 master-0 kubenswrapper[28149]: I0313 12:57:22.657765 28149 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-main-generated" Mar 13 12:57:22.658163 master-0 kubenswrapper[28149]: I0313 12:57:22.658120 28149 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-main-tls-assets-0" Mar 13 12:57:22.658256 master-0 kubenswrapper[28149]: I0313 12:57:22.658184 28149 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-main-web-config" Mar 13 12:57:22.658304 master-0 kubenswrapper[28149]: I0313 12:57:22.658265 28149 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-kube-rbac-proxy-metric" Mar 13 12:57:22.813693 master-0 kubenswrapper[28149]: I0313 12:57:22.812809 28149 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-main-dockercfg-rfzgn" Mar 13 12:57:22.814518 master-0 kubenswrapper[28149]: I0313 12:57:22.814315 28149 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"alertmanager-trusted-ca-bundle" Mar 13 12:57:22.901541 master-0 kubenswrapper[28149]: I0313 12:57:22.901427 28149 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-monitoring/alertmanager-main-0"] Mar 13 12:57:22.916393 master-0 kubenswrapper[28149]: I0313 12:57:22.915619 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/2f3bb9a1-578c-424d-8610-272c76bf0a31-config-out\") pod \"alertmanager-main-0\" (UID: \"2f3bb9a1-578c-424d-8610-272c76bf0a31\") " pod="openshift-monitoring/alertmanager-main-0" Mar 13 12:57:22.916393 master-0 kubenswrapper[28149]: I0313 12:57:22.915678 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/2f3bb9a1-578c-424d-8610-272c76bf0a31-config-volume\") pod \"alertmanager-main-0\" (UID: \"2f3bb9a1-578c-424d-8610-272c76bf0a31\") " pod="openshift-monitoring/alertmanager-main-0" Mar 13 12:57:22.916393 master-0 kubenswrapper[28149]: I0313 12:57:22.915700 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/2f3bb9a1-578c-424d-8610-272c76bf0a31-web-config\") pod \"alertmanager-main-0\" (UID: \"2f3bb9a1-578c-424d-8610-272c76bf0a31\") " pod="openshift-monitoring/alertmanager-main-0" Mar 13 12:57:22.916393 master-0 kubenswrapper[28149]: I0313 12:57:22.915717 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-alertmanager-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/2f3bb9a1-578c-424d-8610-272c76bf0a31-secret-alertmanager-kube-rbac-proxy\") pod \"alertmanager-main-0\" (UID: \"2f3bb9a1-578c-424d-8610-272c76bf0a31\") " pod="openshift-monitoring/alertmanager-main-0" Mar 13 12:57:22.916393 master-0 kubenswrapper[28149]: I0313 12:57:22.916165 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-alertmanager-kube-rbac-proxy-web\" (UniqueName: 
\"kubernetes.io/secret/2f3bb9a1-578c-424d-8610-272c76bf0a31-secret-alertmanager-kube-rbac-proxy-web\") pod \"alertmanager-main-0\" (UID: \"2f3bb9a1-578c-424d-8610-272c76bf0a31\") " pod="openshift-monitoring/alertmanager-main-0" Mar 13 12:57:22.916786 master-0 kubenswrapper[28149]: I0313 12:57:22.916539 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-alertmanager-main-tls\" (UniqueName: \"kubernetes.io/secret/2f3bb9a1-578c-424d-8610-272c76bf0a31-secret-alertmanager-main-tls\") pod \"alertmanager-main-0\" (UID: \"2f3bb9a1-578c-424d-8610-272c76bf0a31\") " pod="openshift-monitoring/alertmanager-main-0" Mar 13 12:57:22.916786 master-0 kubenswrapper[28149]: I0313 12:57:22.916604 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/2f3bb9a1-578c-424d-8610-272c76bf0a31-metrics-client-ca\") pod \"alertmanager-main-0\" (UID: \"2f3bb9a1-578c-424d-8610-272c76bf0a31\") " pod="openshift-monitoring/alertmanager-main-0" Mar 13 12:57:22.916786 master-0 kubenswrapper[28149]: I0313 12:57:22.916625 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pmxgv\" (UniqueName: \"kubernetes.io/projected/2f3bb9a1-578c-424d-8610-272c76bf0a31-kube-api-access-pmxgv\") pod \"alertmanager-main-0\" (UID: \"2f3bb9a1-578c-424d-8610-272c76bf0a31\") " pod="openshift-monitoring/alertmanager-main-0" Mar 13 12:57:22.921153 master-0 kubenswrapper[28149]: I0313 12:57:22.918352 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/2f3bb9a1-578c-424d-8610-272c76bf0a31-tls-assets\") pod \"alertmanager-main-0\" (UID: \"2f3bb9a1-578c-424d-8610-272c76bf0a31\") " pod="openshift-monitoring/alertmanager-main-0" Mar 13 12:57:22.921153 master-0 kubenswrapper[28149]: I0313 
12:57:22.918422 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"alertmanager-main-db\" (UniqueName: \"kubernetes.io/empty-dir/2f3bb9a1-578c-424d-8610-272c76bf0a31-alertmanager-main-db\") pod \"alertmanager-main-0\" (UID: \"2f3bb9a1-578c-424d-8610-272c76bf0a31\") " pod="openshift-monitoring/alertmanager-main-0" Mar 13 12:57:22.921153 master-0 kubenswrapper[28149]: I0313 12:57:22.918504 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"alertmanager-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2f3bb9a1-578c-424d-8610-272c76bf0a31-alertmanager-trusted-ca-bundle\") pod \"alertmanager-main-0\" (UID: \"2f3bb9a1-578c-424d-8610-272c76bf0a31\") " pod="openshift-monitoring/alertmanager-main-0" Mar 13 12:57:22.921153 master-0 kubenswrapper[28149]: I0313 12:57:22.918566 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-alertmanager-kube-rbac-proxy-metric\" (UniqueName: \"kubernetes.io/secret/2f3bb9a1-578c-424d-8610-272c76bf0a31-secret-alertmanager-kube-rbac-proxy-metric\") pod \"alertmanager-main-0\" (UID: \"2f3bb9a1-578c-424d-8610-272c76bf0a31\") " pod="openshift-monitoring/alertmanager-main-0" Mar 13 12:57:23.020388 master-0 kubenswrapper[28149]: I0313 12:57:23.020330 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-alertmanager-kube-rbac-proxy-metric\" (UniqueName: \"kubernetes.io/secret/2f3bb9a1-578c-424d-8610-272c76bf0a31-secret-alertmanager-kube-rbac-proxy-metric\") pod \"alertmanager-main-0\" (UID: \"2f3bb9a1-578c-424d-8610-272c76bf0a31\") " pod="openshift-monitoring/alertmanager-main-0" Mar 13 12:57:23.020600 master-0 kubenswrapper[28149]: I0313 12:57:23.020398 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/2f3bb9a1-578c-424d-8610-272c76bf0a31-config-out\") pod 
\"alertmanager-main-0\" (UID: \"2f3bb9a1-578c-424d-8610-272c76bf0a31\") " pod="openshift-monitoring/alertmanager-main-0" Mar 13 12:57:23.021091 master-0 kubenswrapper[28149]: I0313 12:57:23.021026 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/2f3bb9a1-578c-424d-8610-272c76bf0a31-config-volume\") pod \"alertmanager-main-0\" (UID: \"2f3bb9a1-578c-424d-8610-272c76bf0a31\") " pod="openshift-monitoring/alertmanager-main-0" Mar 13 12:57:23.021181 master-0 kubenswrapper[28149]: I0313 12:57:23.021094 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/2f3bb9a1-578c-424d-8610-272c76bf0a31-web-config\") pod \"alertmanager-main-0\" (UID: \"2f3bb9a1-578c-424d-8610-272c76bf0a31\") " pod="openshift-monitoring/alertmanager-main-0" Mar 13 12:57:23.021181 master-0 kubenswrapper[28149]: I0313 12:57:23.021116 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-alertmanager-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/2f3bb9a1-578c-424d-8610-272c76bf0a31-secret-alertmanager-kube-rbac-proxy\") pod \"alertmanager-main-0\" (UID: \"2f3bb9a1-578c-424d-8610-272c76bf0a31\") " pod="openshift-monitoring/alertmanager-main-0" Mar 13 12:57:23.021276 master-0 kubenswrapper[28149]: I0313 12:57:23.021182 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-alertmanager-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/2f3bb9a1-578c-424d-8610-272c76bf0a31-secret-alertmanager-kube-rbac-proxy-web\") pod \"alertmanager-main-0\" (UID: \"2f3bb9a1-578c-424d-8610-272c76bf0a31\") " pod="openshift-monitoring/alertmanager-main-0" Mar 13 12:57:23.021276 master-0 kubenswrapper[28149]: I0313 12:57:23.021225 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-alertmanager-main-tls\" (UniqueName: 
\"kubernetes.io/secret/2f3bb9a1-578c-424d-8610-272c76bf0a31-secret-alertmanager-main-tls\") pod \"alertmanager-main-0\" (UID: \"2f3bb9a1-578c-424d-8610-272c76bf0a31\") " pod="openshift-monitoring/alertmanager-main-0" Mar 13 12:57:23.021276 master-0 kubenswrapper[28149]: I0313 12:57:23.021257 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/2f3bb9a1-578c-424d-8610-272c76bf0a31-metrics-client-ca\") pod \"alertmanager-main-0\" (UID: \"2f3bb9a1-578c-424d-8610-272c76bf0a31\") " pod="openshift-monitoring/alertmanager-main-0" Mar 13 12:57:23.021409 master-0 kubenswrapper[28149]: I0313 12:57:23.021284 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pmxgv\" (UniqueName: \"kubernetes.io/projected/2f3bb9a1-578c-424d-8610-272c76bf0a31-kube-api-access-pmxgv\") pod \"alertmanager-main-0\" (UID: \"2f3bb9a1-578c-424d-8610-272c76bf0a31\") " pod="openshift-monitoring/alertmanager-main-0" Mar 13 12:57:23.021409 master-0 kubenswrapper[28149]: I0313 12:57:23.021320 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/2f3bb9a1-578c-424d-8610-272c76bf0a31-tls-assets\") pod \"alertmanager-main-0\" (UID: \"2f3bb9a1-578c-424d-8610-272c76bf0a31\") " pod="openshift-monitoring/alertmanager-main-0" Mar 13 12:57:23.021409 master-0 kubenswrapper[28149]: I0313 12:57:23.021375 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"alertmanager-main-db\" (UniqueName: \"kubernetes.io/empty-dir/2f3bb9a1-578c-424d-8610-272c76bf0a31-alertmanager-main-db\") pod \"alertmanager-main-0\" (UID: \"2f3bb9a1-578c-424d-8610-272c76bf0a31\") " pod="openshift-monitoring/alertmanager-main-0" Mar 13 12:57:23.021555 master-0 kubenswrapper[28149]: I0313 12:57:23.021432 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"alertmanager-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2f3bb9a1-578c-424d-8610-272c76bf0a31-alertmanager-trusted-ca-bundle\") pod \"alertmanager-main-0\" (UID: \"2f3bb9a1-578c-424d-8610-272c76bf0a31\") " pod="openshift-monitoring/alertmanager-main-0" Mar 13 12:57:23.022206 master-0 kubenswrapper[28149]: I0313 12:57:23.022092 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"alertmanager-main-db\" (UniqueName: \"kubernetes.io/empty-dir/2f3bb9a1-578c-424d-8610-272c76bf0a31-alertmanager-main-db\") pod \"alertmanager-main-0\" (UID: \"2f3bb9a1-578c-424d-8610-272c76bf0a31\") " pod="openshift-monitoring/alertmanager-main-0" Mar 13 12:57:23.022283 master-0 kubenswrapper[28149]: E0313 12:57:23.022235 28149 secret.go:189] Couldn't get secret openshift-monitoring/alertmanager-main-tls: secret "alertmanager-main-tls" not found Mar 13 12:57:23.022349 master-0 kubenswrapper[28149]: E0313 12:57:23.022327 28149 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2f3bb9a1-578c-424d-8610-272c76bf0a31-secret-alertmanager-main-tls podName:2f3bb9a1-578c-424d-8610-272c76bf0a31 nodeName:}" failed. No retries permitted until 2026-03-13 12:57:23.522290093 +0000 UTC m=+217.175755322 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "secret-alertmanager-main-tls" (UniqueName: "kubernetes.io/secret/2f3bb9a1-578c-424d-8610-272c76bf0a31-secret-alertmanager-main-tls") pod "alertmanager-main-0" (UID: "2f3bb9a1-578c-424d-8610-272c76bf0a31") : secret "alertmanager-main-tls" not found Mar 13 12:57:23.023065 master-0 kubenswrapper[28149]: I0313 12:57:23.022670 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"alertmanager-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2f3bb9a1-578c-424d-8610-272c76bf0a31-alertmanager-trusted-ca-bundle\") pod \"alertmanager-main-0\" (UID: \"2f3bb9a1-578c-424d-8610-272c76bf0a31\") " pod="openshift-monitoring/alertmanager-main-0" Mar 13 12:57:23.023787 master-0 kubenswrapper[28149]: I0313 12:57:23.023764 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/2f3bb9a1-578c-424d-8610-272c76bf0a31-metrics-client-ca\") pod \"alertmanager-main-0\" (UID: \"2f3bb9a1-578c-424d-8610-272c76bf0a31\") " pod="openshift-monitoring/alertmanager-main-0" Mar 13 12:57:23.024485 master-0 kubenswrapper[28149]: I0313 12:57:23.024455 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-alertmanager-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/2f3bb9a1-578c-424d-8610-272c76bf0a31-secret-alertmanager-kube-rbac-proxy-web\") pod \"alertmanager-main-0\" (UID: \"2f3bb9a1-578c-424d-8610-272c76bf0a31\") " pod="openshift-monitoring/alertmanager-main-0" Mar 13 12:57:23.024611 master-0 kubenswrapper[28149]: I0313 12:57:23.024582 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-alertmanager-kube-rbac-proxy-metric\" (UniqueName: \"kubernetes.io/secret/2f3bb9a1-578c-424d-8610-272c76bf0a31-secret-alertmanager-kube-rbac-proxy-metric\") pod \"alertmanager-main-0\" (UID: \"2f3bb9a1-578c-424d-8610-272c76bf0a31\") " pod="openshift-monitoring/alertmanager-main-0" Mar 13 
12:57:23.025184 master-0 kubenswrapper[28149]: I0313 12:57:23.025115 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/2f3bb9a1-578c-424d-8610-272c76bf0a31-config-out\") pod \"alertmanager-main-0\" (UID: \"2f3bb9a1-578c-424d-8610-272c76bf0a31\") " pod="openshift-monitoring/alertmanager-main-0"
Mar 13 12:57:23.025274 master-0 kubenswrapper[28149]: I0313 12:57:23.025229 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-alertmanager-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/2f3bb9a1-578c-424d-8610-272c76bf0a31-secret-alertmanager-kube-rbac-proxy\") pod \"alertmanager-main-0\" (UID: \"2f3bb9a1-578c-424d-8610-272c76bf0a31\") " pod="openshift-monitoring/alertmanager-main-0"
Mar 13 12:57:23.025535 master-0 kubenswrapper[28149]: I0313 12:57:23.025497 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/2f3bb9a1-578c-424d-8610-272c76bf0a31-config-volume\") pod \"alertmanager-main-0\" (UID: \"2f3bb9a1-578c-424d-8610-272c76bf0a31\") " pod="openshift-monitoring/alertmanager-main-0"
Mar 13 12:57:23.025619 master-0 kubenswrapper[28149]: I0313 12:57:23.025592 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/2f3bb9a1-578c-424d-8610-272c76bf0a31-tls-assets\") pod \"alertmanager-main-0\" (UID: \"2f3bb9a1-578c-424d-8610-272c76bf0a31\") " pod="openshift-monitoring/alertmanager-main-0"
Mar 13 12:57:23.026648 master-0 kubenswrapper[28149]: I0313 12:57:23.026599 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/2f3bb9a1-578c-424d-8610-272c76bf0a31-web-config\") pod \"alertmanager-main-0\" (UID: \"2f3bb9a1-578c-424d-8610-272c76bf0a31\") " pod="openshift-monitoring/alertmanager-main-0"
Mar 13 12:57:23.041517 master-0 kubenswrapper[28149]: I0313 12:57:23.041422 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pmxgv\" (UniqueName: \"kubernetes.io/projected/2f3bb9a1-578c-424d-8610-272c76bf0a31-kube-api-access-pmxgv\") pod \"alertmanager-main-0\" (UID: \"2f3bb9a1-578c-424d-8610-272c76bf0a31\") " pod="openshift-monitoring/alertmanager-main-0"
Mar 13 12:57:23.224870 master-0 kubenswrapper[28149]: I0313 12:57:23.224718 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-prometheus-k8s-tls\" (UniqueName: \"kubernetes.io/secret/4e6b4a1c-92b3-4b72-9f4b-1fd7814125b5-secret-prometheus-k8s-tls\") pod \"prometheus-k8s-0\" (UID: \"4e6b4a1c-92b3-4b72-9f4b-1fd7814125b5\") " pod="openshift-monitoring/prometheus-k8s-0"
Mar 13 12:57:23.224870 master-0 kubenswrapper[28149]: I0313 12:57:23.224838 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-prometheus-k8s-thanos-sidecar-tls\" (UniqueName: \"kubernetes.io/secret/4e6b4a1c-92b3-4b72-9f4b-1fd7814125b5-secret-prometheus-k8s-thanos-sidecar-tls\") pod \"prometheus-k8s-0\" (UID: \"4e6b4a1c-92b3-4b72-9f4b-1fd7814125b5\") " pod="openshift-monitoring/prometheus-k8s-0"
Mar 13 12:57:23.225189 master-0 kubenswrapper[28149]: E0313 12:57:23.224888 28149 secret.go:189] Couldn't get secret openshift-monitoring/prometheus-k8s-tls: secret "prometheus-k8s-tls" not found
Mar 13 12:57:23.225189 master-0 kubenswrapper[28149]: E0313 12:57:23.224957 28149 secret.go:189] Couldn't get secret openshift-monitoring/prometheus-k8s-thanos-sidecar-tls: secret "prometheus-k8s-thanos-sidecar-tls" not found
Mar 13 12:57:23.225189 master-0 kubenswrapper[28149]: E0313 12:57:23.224973 28149 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4e6b4a1c-92b3-4b72-9f4b-1fd7814125b5-secret-prometheus-k8s-tls podName:4e6b4a1c-92b3-4b72-9f4b-1fd7814125b5 nodeName:}" failed. No retries permitted until 2026-03-13 12:57:39.224938807 +0000 UTC m=+232.878403966 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "secret-prometheus-k8s-tls" (UniqueName: "kubernetes.io/secret/4e6b4a1c-92b3-4b72-9f4b-1fd7814125b5-secret-prometheus-k8s-tls") pod "prometheus-k8s-0" (UID: "4e6b4a1c-92b3-4b72-9f4b-1fd7814125b5") : secret "prometheus-k8s-tls" not found
Mar 13 12:57:23.225189 master-0 kubenswrapper[28149]: E0313 12:57:23.224996 28149 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4e6b4a1c-92b3-4b72-9f4b-1fd7814125b5-secret-prometheus-k8s-thanos-sidecar-tls podName:4e6b4a1c-92b3-4b72-9f4b-1fd7814125b5 nodeName:}" failed. No retries permitted until 2026-03-13 12:57:39.224983108 +0000 UTC m=+232.878448267 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "secret-prometheus-k8s-thanos-sidecar-tls" (UniqueName: "kubernetes.io/secret/4e6b4a1c-92b3-4b72-9f4b-1fd7814125b5-secret-prometheus-k8s-thanos-sidecar-tls") pod "prometheus-k8s-0" (UID: "4e6b4a1c-92b3-4b72-9f4b-1fd7814125b5") : secret "prometheus-k8s-thanos-sidecar-tls" not found
Mar 13 12:57:23.528539 master-0 kubenswrapper[28149]: I0313 12:57:23.528456 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-alertmanager-main-tls\" (UniqueName: \"kubernetes.io/secret/2f3bb9a1-578c-424d-8610-272c76bf0a31-secret-alertmanager-main-tls\") pod \"alertmanager-main-0\" (UID: \"2f3bb9a1-578c-424d-8610-272c76bf0a31\") " pod="openshift-monitoring/alertmanager-main-0"
Mar 13 12:57:23.528786 master-0 kubenswrapper[28149]: E0313 12:57:23.528722 28149 secret.go:189] Couldn't get secret openshift-monitoring/alertmanager-main-tls: secret "alertmanager-main-tls" not found
Mar 13 12:57:23.528845 master-0 kubenswrapper[28149]: E0313 12:57:23.528818 28149 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2f3bb9a1-578c-424d-8610-272c76bf0a31-secret-alertmanager-main-tls podName:2f3bb9a1-578c-424d-8610-272c76bf0a31 nodeName:}" failed. No retries permitted until 2026-03-13 12:57:24.528799366 +0000 UTC m=+218.182264525 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "secret-alertmanager-main-tls" (UniqueName: "kubernetes.io/secret/2f3bb9a1-578c-424d-8610-272c76bf0a31-secret-alertmanager-main-tls") pod "alertmanager-main-0" (UID: "2f3bb9a1-578c-424d-8610-272c76bf0a31") : secret "alertmanager-main-tls" not found
Mar 13 12:57:24.627089 master-0 kubenswrapper[28149]: I0313 12:57:24.627011 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-alertmanager-main-tls\" (UniqueName: \"kubernetes.io/secret/2f3bb9a1-578c-424d-8610-272c76bf0a31-secret-alertmanager-main-tls\") pod \"alertmanager-main-0\" (UID: \"2f3bb9a1-578c-424d-8610-272c76bf0a31\") " pod="openshift-monitoring/alertmanager-main-0"
Mar 13 12:57:24.627793 master-0 kubenswrapper[28149]: E0313 12:57:24.627274 28149 secret.go:189] Couldn't get secret openshift-monitoring/alertmanager-main-tls: secret "alertmanager-main-tls" not found
Mar 13 12:57:24.627793 master-0 kubenswrapper[28149]: E0313 12:57:24.627371 28149 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2f3bb9a1-578c-424d-8610-272c76bf0a31-secret-alertmanager-main-tls podName:2f3bb9a1-578c-424d-8610-272c76bf0a31 nodeName:}" failed. No retries permitted until 2026-03-13 12:57:26.627349016 +0000 UTC m=+220.280814175 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "secret-alertmanager-main-tls" (UniqueName: "kubernetes.io/secret/2f3bb9a1-578c-424d-8610-272c76bf0a31-secret-alertmanager-main-tls") pod "alertmanager-main-0" (UID: "2f3bb9a1-578c-424d-8610-272c76bf0a31") : secret "alertmanager-main-tls" not found
Mar 13 12:57:26.708416 master-0 kubenswrapper[28149]: I0313 12:57:26.708332 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-alertmanager-main-tls\" (UniqueName: \"kubernetes.io/secret/2f3bb9a1-578c-424d-8610-272c76bf0a31-secret-alertmanager-main-tls\") pod \"alertmanager-main-0\" (UID: \"2f3bb9a1-578c-424d-8610-272c76bf0a31\") " pod="openshift-monitoring/alertmanager-main-0"
Mar 13 12:57:26.708929 master-0 kubenswrapper[28149]: E0313 12:57:26.708508 28149 secret.go:189] Couldn't get secret openshift-monitoring/alertmanager-main-tls: secret "alertmanager-main-tls" not found
Mar 13 12:57:26.708929 master-0 kubenswrapper[28149]: E0313 12:57:26.708571 28149 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2f3bb9a1-578c-424d-8610-272c76bf0a31-secret-alertmanager-main-tls podName:2f3bb9a1-578c-424d-8610-272c76bf0a31 nodeName:}" failed. No retries permitted until 2026-03-13 12:57:30.708555541 +0000 UTC m=+224.362020700 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "secret-alertmanager-main-tls" (UniqueName: "kubernetes.io/secret/2f3bb9a1-578c-424d-8610-272c76bf0a31-secret-alertmanager-main-tls") pod "alertmanager-main-0" (UID: "2f3bb9a1-578c-424d-8610-272c76bf0a31") : secret "alertmanager-main-tls" not found
Mar 13 12:57:28.688192 master-0 kubenswrapper[28149]: I0313 12:57:28.642235 28149 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-console-operator/console-operator-6c7fb6b958-5lklz"
Mar 13 12:57:28.688192 master-0 kubenswrapper[28149]: I0313 12:57:28.687888 28149 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console-operator/console-operator-6c7fb6b958-5lklz"
Mar 13 12:57:28.688779 master-0 kubenswrapper[28149]: I0313 12:57:28.688738 28149 scope.go:117] "RemoveContainer" containerID="5fc79ba0490f14b8e73c2ed3e225d974f4078337597a48c21833c5e5acd642a1"
Mar 13 12:57:28.689097 master-0 kubenswrapper[28149]: E0313 12:57:28.689038 28149 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"console-operator\" with CrashLoopBackOff: \"back-off 20s restarting failed container=console-operator pod=console-operator-6c7fb6b958-5lklz_openshift-console-operator(226f8faa-22b5-465f-892c-3a7541177046)\"" pod="openshift-console-operator/console-operator-6c7fb6b958-5lklz" podUID="226f8faa-22b5-465f-892c-3a7541177046"
Mar 13 12:57:30.734573 master-0 kubenswrapper[28149]: I0313 12:57:30.734486 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-alertmanager-main-tls\" (UniqueName: \"kubernetes.io/secret/2f3bb9a1-578c-424d-8610-272c76bf0a31-secret-alertmanager-main-tls\") pod \"alertmanager-main-0\" (UID: \"2f3bb9a1-578c-424d-8610-272c76bf0a31\") " pod="openshift-monitoring/alertmanager-main-0"
Mar 13 12:57:30.735239 master-0 kubenswrapper[28149]: E0313 12:57:30.734580 28149 secret.go:189] Couldn't get secret openshift-monitoring/alertmanager-main-tls: secret "alertmanager-main-tls" not found
Mar 13 12:57:30.735239 master-0 kubenswrapper[28149]: E0313 12:57:30.734640 28149 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2f3bb9a1-578c-424d-8610-272c76bf0a31-secret-alertmanager-main-tls podName:2f3bb9a1-578c-424d-8610-272c76bf0a31 nodeName:}" failed. No retries permitted until 2026-03-13 12:57:38.734620056 +0000 UTC m=+232.388085285 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "secret-alertmanager-main-tls" (UniqueName: "kubernetes.io/secret/2f3bb9a1-578c-424d-8610-272c76bf0a31-secret-alertmanager-main-tls") pod "alertmanager-main-0" (UID: "2f3bb9a1-578c-424d-8610-272c76bf0a31") : secret "alertmanager-main-tls" not found
Mar 13 12:57:36.580955 master-0 kubenswrapper[28149]: I0313 12:57:36.580898 28149 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/monitoring-plugin-78dfd69c87-v4skr"]
Mar 13 12:57:36.581948 master-0 kubenswrapper[28149]: I0313 12:57:36.581911 28149 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/monitoring-plugin-78dfd69c87-v4skr"
Mar 13 12:57:36.584896 master-0 kubenswrapper[28149]: I0313 12:57:36.584855 28149 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"monitoring-plugin-cert"
Mar 13 12:57:36.587101 master-0 kubenswrapper[28149]: I0313 12:57:36.587068 28149 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"default-dockercfg-6r8cg"
Mar 13 12:57:36.602920 master-0 kubenswrapper[28149]: I0313 12:57:36.602859 28149 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/monitoring-plugin-78dfd69c87-v4skr"]
Mar 13 12:57:36.816164 master-0 kubenswrapper[28149]: I0313 12:57:36.815961 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"monitoring-plugin-cert\" (UniqueName: \"kubernetes.io/secret/9874121b-8070-49dc-b0cb-5f1a2c82e526-monitoring-plugin-cert\") pod \"monitoring-plugin-78dfd69c87-v4skr\" (UID: \"9874121b-8070-49dc-b0cb-5f1a2c82e526\") " pod="openshift-monitoring/monitoring-plugin-78dfd69c87-v4skr"
Mar 13 12:57:36.919304 master-0 kubenswrapper[28149]: I0313 12:57:36.919187 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"monitoring-plugin-cert\" (UniqueName: \"kubernetes.io/secret/9874121b-8070-49dc-b0cb-5f1a2c82e526-monitoring-plugin-cert\") pod \"monitoring-plugin-78dfd69c87-v4skr\" (UID: \"9874121b-8070-49dc-b0cb-5f1a2c82e526\") " pod="openshift-monitoring/monitoring-plugin-78dfd69c87-v4skr"
Mar 13 12:57:36.922950 master-0 kubenswrapper[28149]: I0313 12:57:36.922911 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"monitoring-plugin-cert\" (UniqueName: \"kubernetes.io/secret/9874121b-8070-49dc-b0cb-5f1a2c82e526-monitoring-plugin-cert\") pod \"monitoring-plugin-78dfd69c87-v4skr\" (UID: \"9874121b-8070-49dc-b0cb-5f1a2c82e526\") " pod="openshift-monitoring/monitoring-plugin-78dfd69c87-v4skr"
Mar 13 12:57:37.203451 master-0 kubenswrapper[28149]: I0313 12:57:37.203325 28149 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/monitoring-plugin-78dfd69c87-v4skr"
Mar 13 12:57:37.732589 master-0 kubenswrapper[28149]: I0313 12:57:37.732538 28149 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/monitoring-plugin-78dfd69c87-v4skr"]
Mar 13 12:57:37.752175 master-0 kubenswrapper[28149]: W0313 12:57:37.752113 28149 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9874121b_8070_49dc_b0cb_5f1a2c82e526.slice/crio-12b64bb910cdeb4f3cb7ad3f118f192e781ab55e277c55038c61ea3cc513033e WatchSource:0}: Error finding container 12b64bb910cdeb4f3cb7ad3f118f192e781ab55e277c55038c61ea3cc513033e: Status 404 returned error can't find the container with id 12b64bb910cdeb4f3cb7ad3f118f192e781ab55e277c55038c61ea3cc513033e
Mar 13 12:57:38.185459 master-0 kubenswrapper[28149]: I0313 12:57:38.185397 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/monitoring-plugin-78dfd69c87-v4skr" event={"ID":"9874121b-8070-49dc-b0cb-5f1a2c82e526","Type":"ContainerStarted","Data":"12b64bb910cdeb4f3cb7ad3f118f192e781ab55e277c55038c61ea3cc513033e"}
Mar 13 12:57:38.786385 master-0 kubenswrapper[28149]: I0313 12:57:38.786300 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-alertmanager-main-tls\" (UniqueName: \"kubernetes.io/secret/2f3bb9a1-578c-424d-8610-272c76bf0a31-secret-alertmanager-main-tls\") pod \"alertmanager-main-0\" (UID: \"2f3bb9a1-578c-424d-8610-272c76bf0a31\") " pod="openshift-monitoring/alertmanager-main-0"
Mar 13 12:57:38.789495 master-0 kubenswrapper[28149]: I0313 12:57:38.789457 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-alertmanager-main-tls\" (UniqueName: \"kubernetes.io/secret/2f3bb9a1-578c-424d-8610-272c76bf0a31-secret-alertmanager-main-tls\") pod \"alertmanager-main-0\" (UID: \"2f3bb9a1-578c-424d-8610-272c76bf0a31\") " pod="openshift-monitoring/alertmanager-main-0"
Mar 13 12:57:39.100377 master-0 kubenswrapper[28149]: I0313 12:57:39.099411 28149 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/alertmanager-main-0"
Mar 13 12:57:39.302719 master-0 kubenswrapper[28149]: I0313 12:57:39.302648 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-prometheus-k8s-tls\" (UniqueName: \"kubernetes.io/secret/4e6b4a1c-92b3-4b72-9f4b-1fd7814125b5-secret-prometheus-k8s-tls\") pod \"prometheus-k8s-0\" (UID: \"4e6b4a1c-92b3-4b72-9f4b-1fd7814125b5\") " pod="openshift-monitoring/prometheus-k8s-0"
Mar 13 12:57:39.302962 master-0 kubenswrapper[28149]: I0313 12:57:39.302761 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-prometheus-k8s-thanos-sidecar-tls\" (UniqueName: \"kubernetes.io/secret/4e6b4a1c-92b3-4b72-9f4b-1fd7814125b5-secret-prometheus-k8s-thanos-sidecar-tls\") pod \"prometheus-k8s-0\" (UID: \"4e6b4a1c-92b3-4b72-9f4b-1fd7814125b5\") " pod="openshift-monitoring/prometheus-k8s-0"
Mar 13 12:57:39.307943 master-0 kubenswrapper[28149]: I0313 12:57:39.307639 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-prometheus-k8s-thanos-sidecar-tls\" (UniqueName: \"kubernetes.io/secret/4e6b4a1c-92b3-4b72-9f4b-1fd7814125b5-secret-prometheus-k8s-thanos-sidecar-tls\") pod \"prometheus-k8s-0\" (UID: \"4e6b4a1c-92b3-4b72-9f4b-1fd7814125b5\") " pod="openshift-monitoring/prometheus-k8s-0"
Mar 13 12:57:39.308106 master-0 kubenswrapper[28149]: I0313 12:57:39.308039 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-prometheus-k8s-tls\" (UniqueName: \"kubernetes.io/secret/4e6b4a1c-92b3-4b72-9f4b-1fd7814125b5-secret-prometheus-k8s-tls\") pod \"prometheus-k8s-0\" (UID: \"4e6b4a1c-92b3-4b72-9f4b-1fd7814125b5\") " pod="openshift-monitoring/prometheus-k8s-0"
Mar 13 12:57:39.349357 master-0 kubenswrapper[28149]: I0313 12:57:39.349295 28149 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/prometheus-k8s-0"
Mar 13 12:57:40.913535 master-0 kubenswrapper[28149]: I0313 12:57:40.913486 28149 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/alertmanager-main-0"]
Mar 13 12:57:40.918113 master-0 kubenswrapper[28149]: W0313 12:57:40.918066 28149 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2f3bb9a1_578c_424d_8610_272c76bf0a31.slice/crio-ef4f096478c1fc09ae7c83c34b1ef6ada9eca20d18d42d3cc9925213aa1f6553 WatchSource:0}: Error finding container ef4f096478c1fc09ae7c83c34b1ef6ada9eca20d18d42d3cc9925213aa1f6553: Status 404 returned error can't find the container with id ef4f096478c1fc09ae7c83c34b1ef6ada9eca20d18d42d3cc9925213aa1f6553
Mar 13 12:57:40.987201 master-0 kubenswrapper[28149]: I0313 12:57:40.987123 28149 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/prometheus-k8s-0"]
Mar 13 12:57:40.990780 master-0 kubenswrapper[28149]: W0313 12:57:40.990716 28149 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4e6b4a1c_92b3_4b72_9f4b_1fd7814125b5.slice/crio-25e9965d9cd9530fd6c5da4faa88c4c90076637d46295cad4ad34912005995a9 WatchSource:0}: Error finding container 25e9965d9cd9530fd6c5da4faa88c4c90076637d46295cad4ad34912005995a9: Status 404 returned error can't find the container with id 25e9965d9cd9530fd6c5da4faa88c4c90076637d46295cad4ad34912005995a9
Mar 13 12:57:41.213327 master-0 kubenswrapper[28149]: I0313 12:57:41.213206 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"2f3bb9a1-578c-424d-8610-272c76bf0a31","Type":"ContainerStarted","Data":"ef4f096478c1fc09ae7c83c34b1ef6ada9eca20d18d42d3cc9925213aa1f6553"}
Mar 13 12:57:41.214807 master-0 kubenswrapper[28149]: I0313 12:57:41.214755 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/monitoring-plugin-78dfd69c87-v4skr" event={"ID":"9874121b-8070-49dc-b0cb-5f1a2c82e526","Type":"ContainerStarted","Data":"11cabee8b80d4cd8de72816caa902b78e2e58731b64a9eede0262cd54936e298"}
Mar 13 12:57:41.215085 master-0 kubenswrapper[28149]: I0313 12:57:41.215048 28149 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-monitoring/monitoring-plugin-78dfd69c87-v4skr"
Mar 13 12:57:41.215816 master-0 kubenswrapper[28149]: I0313 12:57:41.215789 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"4e6b4a1c-92b3-4b72-9f4b-1fd7814125b5","Type":"ContainerStarted","Data":"25e9965d9cd9530fd6c5da4faa88c4c90076637d46295cad4ad34912005995a9"}
Mar 13 12:57:41.221495 master-0 kubenswrapper[28149]: I0313 12:57:41.221451 28149 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-monitoring/monitoring-plugin-78dfd69c87-v4skr"
Mar 13 12:57:41.235613 master-0 kubenswrapper[28149]: I0313 12:57:41.235542 28149 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/monitoring-plugin-78dfd69c87-v4skr" podStartSLOduration=2.439153282 podStartE2EDuration="5.235517283s" podCreationTimestamp="2026-03-13 12:57:36 +0000 UTC" firstStartedPulling="2026-03-13 12:57:37.756124443 +0000 UTC m=+231.409589602" lastFinishedPulling="2026-03-13 12:57:40.552488444 +0000 UTC m=+234.205953603" observedRunningTime="2026-03-13 12:57:41.232876842 +0000 UTC m=+234.886342001" watchObservedRunningTime="2026-03-13 12:57:41.235517283 +0000 UTC m=+234.888982442"
Mar 13 12:57:41.687726 master-0 kubenswrapper[28149]: I0313 12:57:41.687663 28149 scope.go:117] "RemoveContainer" containerID="5fc79ba0490f14b8e73c2ed3e225d974f4078337597a48c21833c5e5acd642a1"
Mar 13 12:57:42.229667 master-0 kubenswrapper[28149]: I0313 12:57:42.229614 28149 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console-operator_console-operator-6c7fb6b958-5lklz_226f8faa-22b5-465f-892c-3a7541177046/console-operator/2.log"
Mar 13 12:57:42.230218 master-0 kubenswrapper[28149]: I0313 12:57:42.229747 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-6c7fb6b958-5lklz" event={"ID":"226f8faa-22b5-465f-892c-3a7541177046","Type":"ContainerStarted","Data":"cb4e6450e7159d3aad016c41282886d967d0e3ff16308c554c09f1c62d35c95c"}
Mar 13 12:57:42.230276 master-0 kubenswrapper[28149]: I0313 12:57:42.230224 28149 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console-operator/console-operator-6c7fb6b958-5lklz"
Mar 13 12:57:42.233974 master-0 kubenswrapper[28149]: I0313 12:57:42.233751 28149 patch_prober.go:28] interesting pod/console-operator-6c7fb6b958-5lklz container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.128.0.91:8443/readyz\": dial tcp 10.128.0.91:8443: connect: connection refused" start-of-body=
Mar 13 12:57:42.233974 master-0 kubenswrapper[28149]: I0313 12:57:42.233863 28149 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-6c7fb6b958-5lklz" podUID="226f8faa-22b5-465f-892c-3a7541177046" containerName="console-operator" probeResult="failure" output="Get \"https://10.128.0.91:8443/readyz\": dial tcp 10.128.0.91:8443: connect: connection refused"
Mar 13 12:57:43.085166 master-0 kubenswrapper[28149]: I0313 12:57:43.071271 28149 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console-operator/console-operator-6c7fb6b958-5lklz" podStartSLOduration=42.835854516 podStartE2EDuration="45.071246729s" podCreationTimestamp="2026-03-13 12:56:58 +0000 UTC" firstStartedPulling="2026-03-13 12:56:59.408436764 +0000 UTC m=+193.061901923" lastFinishedPulling="2026-03-13 12:57:01.643828977 +0000 UTC m=+195.297294136" observedRunningTime="2026-03-13 12:57:01.823729104 +0000 UTC m=+195.477194263" watchObservedRunningTime="2026-03-13 12:57:43.071246729 +0000 UTC m=+236.724711908"
Mar 13 12:57:43.448856 master-0 kubenswrapper[28149]: I0313 12:57:43.432543 28149 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console-operator/console-operator-6c7fb6b958-5lklz"
Mar 13 12:57:44.051171 master-0 kubenswrapper[28149]: I0313 12:57:44.049379 28149 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/downloads-84f57b9877-x5rrn"]
Mar 13 12:57:44.051948 master-0 kubenswrapper[28149]: I0313 12:57:44.051885 28149 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-84f57b9877-x5rrn"
Mar 13 12:57:44.059988 master-0 kubenswrapper[28149]: I0313 12:57:44.059832 28149 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-tp927"
Mar 13 12:57:44.077693 master-0 kubenswrapper[28149]: I0313 12:57:44.077623 28149 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt"
Mar 13 12:57:44.078091 master-0 kubenswrapper[28149]: I0313 12:57:44.078026 28149 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt"
Mar 13 12:57:44.078291 master-0 kubenswrapper[28149]: I0313 12:57:44.078226 28149 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-84f57b9877-x5rrn"]
Mar 13 12:57:44.362796 master-0 kubenswrapper[28149]: I0313 12:57:44.362731 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wbv7j\" (UniqueName: \"kubernetes.io/projected/08bedcd0-df1f-4f21-9cf5-9481959dd4fb-kube-api-access-wbv7j\") pod \"downloads-84f57b9877-x5rrn\" (UID: \"08bedcd0-df1f-4f21-9cf5-9481959dd4fb\") " pod="openshift-console/downloads-84f57b9877-x5rrn"
Mar 13 12:57:44.464107 master-0 kubenswrapper[28149]: I0313 12:57:44.464057 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wbv7j\" (UniqueName: \"kubernetes.io/projected/08bedcd0-df1f-4f21-9cf5-9481959dd4fb-kube-api-access-wbv7j\") pod \"downloads-84f57b9877-x5rrn\" (UID: \"08bedcd0-df1f-4f21-9cf5-9481959dd4fb\") " pod="openshift-console/downloads-84f57b9877-x5rrn"
Mar 13 12:57:44.482092 master-0 kubenswrapper[28149]: I0313 12:57:44.482043 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wbv7j\" (UniqueName: \"kubernetes.io/projected/08bedcd0-df1f-4f21-9cf5-9481959dd4fb-kube-api-access-wbv7j\") pod \"downloads-84f57b9877-x5rrn\" (UID: \"08bedcd0-df1f-4f21-9cf5-9481959dd4fb\") " pod="openshift-console/downloads-84f57b9877-x5rrn"
Mar 13 12:57:44.636546 master-0 kubenswrapper[28149]: I0313 12:57:44.636499 28149 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-84f57b9877-x5rrn"
Mar 13 12:57:45.171648 master-0 kubenswrapper[28149]: I0313 12:57:45.171600 28149 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-84f57b9877-x5rrn"]
Mar 13 12:57:45.224576 master-0 kubenswrapper[28149]: W0313 12:57:45.224518 28149 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod08bedcd0_df1f_4f21_9cf5_9481959dd4fb.slice/crio-aeaeac91dc5dcdbfbb7dccb1a88c6cfc0b99141eb9ab2b618ced7b3db545ed42 WatchSource:0}: Error finding container aeaeac91dc5dcdbfbb7dccb1a88c6cfc0b99141eb9ab2b618ced7b3db545ed42: Status 404 returned error can't find the container with id aeaeac91dc5dcdbfbb7dccb1a88c6cfc0b99141eb9ab2b618ced7b3db545ed42
Mar 13 12:57:45.333208 master-0 kubenswrapper[28149]: I0313 12:57:45.332084 28149 generic.go:334] "Generic (PLEG): container finished" podID="2f3bb9a1-578c-424d-8610-272c76bf0a31" containerID="ec8e449acecac0f53e0b800c3ddfa08c19e2c04ece00900fd4a2012da0acf2c4" exitCode=0
Mar 13 12:57:45.333208 master-0 kubenswrapper[28149]: I0313 12:57:45.332200 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"2f3bb9a1-578c-424d-8610-272c76bf0a31","Type":"ContainerDied","Data":"ec8e449acecac0f53e0b800c3ddfa08c19e2c04ece00900fd4a2012da0acf2c4"}
Mar 13 12:57:45.334693 master-0 kubenswrapper[28149]: I0313 12:57:45.334674 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-84f57b9877-x5rrn" event={"ID":"08bedcd0-df1f-4f21-9cf5-9481959dd4fb","Type":"ContainerStarted","Data":"aeaeac91dc5dcdbfbb7dccb1a88c6cfc0b99141eb9ab2b618ced7b3db545ed42"}
Mar 13 12:57:45.338361 master-0 kubenswrapper[28149]: I0313 12:57:45.338243 28149 generic.go:334] "Generic (PLEG): container finished" podID="4e6b4a1c-92b3-4b72-9f4b-1fd7814125b5" containerID="00f686895227a890c5dc3f1b90164c1cae4d30c4ee1ff237e7887f18ad31bb8d" exitCode=0
Mar 13 12:57:45.338596 master-0 kubenswrapper[28149]: I0313 12:57:45.338580 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"4e6b4a1c-92b3-4b72-9f4b-1fd7814125b5","Type":"ContainerDied","Data":"00f686895227a890c5dc3f1b90164c1cae4d30c4ee1ff237e7887f18ad31bb8d"}
Mar 13 12:57:46.768226 master-0 kubenswrapper[28149]: I0313 12:57:46.715582 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemeter-client-tls\" (UniqueName: \"kubernetes.io/secret/06eeb16c-c683-4bfe-b243-df34da90042b-telemeter-client-tls\") pod \"telemeter-client-647df5cfcf-7dtwq\" (UID: \"06eeb16c-c683-4bfe-b243-df34da90042b\") " pod="openshift-monitoring/telemeter-client-647df5cfcf-7dtwq"
Mar 13 12:57:46.768226 master-0 kubenswrapper[28149]: I0313 12:57:46.719522 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemeter-client-tls\" (UniqueName: \"kubernetes.io/secret/06eeb16c-c683-4bfe-b243-df34da90042b-telemeter-client-tls\") pod \"telemeter-client-647df5cfcf-7dtwq\" (UID: \"06eeb16c-c683-4bfe-b243-df34da90042b\") " pod="openshift-monitoring/telemeter-client-647df5cfcf-7dtwq"
Mar 13 12:57:46.839309 master-0 kubenswrapper[28149]: I0313 12:57:46.838527 28149 scope.go:117] "RemoveContainer" containerID="c01d9a99bd192d1dcec1d6b82d10b0a4d0e1e32477c6f2dee5d3e54b144ca2b7"
Mar 13 12:57:47.006445 master-0 kubenswrapper[28149]: I0313 12:57:47.006386 28149 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/telemeter-client-647df5cfcf-7dtwq"
Mar 13 12:57:49.485953 master-0 kubenswrapper[28149]: I0313 12:57:49.485891 28149 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/telemeter-client-647df5cfcf-7dtwq"]
Mar 13 12:57:52.961164 master-0 kubenswrapper[28149]: I0313 12:57:52.961089 28149 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-5b845478f4-2dqdf"]
Mar 13 12:57:52.965198 master-0 kubenswrapper[28149]: I0313 12:57:52.962465 28149 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-5b845478f4-2dqdf"
Mar 13 12:57:52.965198 master-0 kubenswrapper[28149]: I0313 12:57:52.964972 28149 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-5d95w"
Mar 13 12:57:52.965339 master-0 kubenswrapper[28149]: I0313 12:57:52.965228 28149 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert"
Mar 13 12:57:52.965504 master-0 kubenswrapper[28149]: I0313 12:57:52.965478 28149 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config"
Mar 13 12:57:52.967593 master-0 kubenswrapper[28149]: I0313 12:57:52.967531 28149 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca"
Mar 13 12:57:52.967677 master-0 kubenswrapper[28149]: I0313 12:57:52.967613 28149 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config"
Mar 13 12:57:52.967800 master-0 kubenswrapper[28149]: I0313 12:57:52.967776 28149 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert"
Mar 13 12:57:53.007621 master-0 kubenswrapper[28149]: I0313 12:57:52.993769 28149 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-5b845478f4-2dqdf"]
Mar 13 12:57:53.020350 master-0 kubenswrapper[28149]: I0313 12:57:53.020274 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jtrd5\" (UniqueName: \"kubernetes.io/projected/49abaf10-6497-4c58-8a80-1a598caa2999-kube-api-access-jtrd5\") pod \"console-5b845478f4-2dqdf\" (UID: \"49abaf10-6497-4c58-8a80-1a598caa2999\") " pod="openshift-console/console-5b845478f4-2dqdf"
Mar 13 12:57:53.020543 master-0 kubenswrapper[28149]: I0313 12:57:53.020403 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/49abaf10-6497-4c58-8a80-1a598caa2999-oauth-serving-cert\") pod \"console-5b845478f4-2dqdf\" (UID: \"49abaf10-6497-4c58-8a80-1a598caa2999\") " pod="openshift-console/console-5b845478f4-2dqdf"
Mar 13 12:57:53.020587 master-0 kubenswrapper[28149]: I0313 12:57:53.020557 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/49abaf10-6497-4c58-8a80-1a598caa2999-console-config\") pod \"console-5b845478f4-2dqdf\" (UID: \"49abaf10-6497-4c58-8a80-1a598caa2999\") " pod="openshift-console/console-5b845478f4-2dqdf"
Mar 13 12:57:53.020619 master-0 kubenswrapper[28149]: I0313 12:57:53.020600 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/49abaf10-6497-4c58-8a80-1a598caa2999-console-serving-cert\") pod \"console-5b845478f4-2dqdf\" (UID: \"49abaf10-6497-4c58-8a80-1a598caa2999\") " pod="openshift-console/console-5b845478f4-2dqdf"
Mar 13 12:57:53.020654 master-0 kubenswrapper[28149]: I0313 12:57:53.020633 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/49abaf10-6497-4c58-8a80-1a598caa2999-console-oauth-config\") pod \"console-5b845478f4-2dqdf\" (UID: \"49abaf10-6497-4c58-8a80-1a598caa2999\") " pod="openshift-console/console-5b845478f4-2dqdf"
Mar 13 12:57:53.020743 master-0 kubenswrapper[28149]: I0313 12:57:53.020713 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/49abaf10-6497-4c58-8a80-1a598caa2999-service-ca\") pod \"console-5b845478f4-2dqdf\" (UID: \"49abaf10-6497-4c58-8a80-1a598caa2999\") " pod="openshift-console/console-5b845478f4-2dqdf"
Mar 13 12:57:53.123671 master-0 kubenswrapper[28149]: I0313 12:57:53.123348 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/49abaf10-6497-4c58-8a80-1a598caa2999-service-ca\") pod \"console-5b845478f4-2dqdf\" (UID: \"49abaf10-6497-4c58-8a80-1a598caa2999\") " pod="openshift-console/console-5b845478f4-2dqdf"
Mar 13 12:57:53.123671 master-0 kubenswrapper[28149]: I0313 12:57:53.123434 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jtrd5\" (UniqueName: \"kubernetes.io/projected/49abaf10-6497-4c58-8a80-1a598caa2999-kube-api-access-jtrd5\") pod \"console-5b845478f4-2dqdf\" (UID: \"49abaf10-6497-4c58-8a80-1a598caa2999\") " pod="openshift-console/console-5b845478f4-2dqdf"
Mar 13 12:57:53.123671 master-0 kubenswrapper[28149]: I0313 12:57:53.123490 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/49abaf10-6497-4c58-8a80-1a598caa2999-oauth-serving-cert\") pod \"console-5b845478f4-2dqdf\" (UID: \"49abaf10-6497-4c58-8a80-1a598caa2999\") " pod="openshift-console/console-5b845478f4-2dqdf"
Mar 13 12:57:53.123998 master-0 kubenswrapper[28149]: I0313 12:57:53.123714 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/49abaf10-6497-4c58-8a80-1a598caa2999-console-config\") pod \"console-5b845478f4-2dqdf\" (UID: \"49abaf10-6497-4c58-8a80-1a598caa2999\") " pod="openshift-console/console-5b845478f4-2dqdf"
Mar 13 12:57:53.123998 master-0 kubenswrapper[28149]: I0313 12:57:53.123805 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/49abaf10-6497-4c58-8a80-1a598caa2999-console-serving-cert\") pod \"console-5b845478f4-2dqdf\" (UID: \"49abaf10-6497-4c58-8a80-1a598caa2999\") " pod="openshift-console/console-5b845478f4-2dqdf"
Mar 13 12:57:53.123998 master-0 kubenswrapper[28149]: I0313 12:57:53.123843 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/49abaf10-6497-4c58-8a80-1a598caa2999-console-oauth-config\") pod \"console-5b845478f4-2dqdf\" (UID: \"49abaf10-6497-4c58-8a80-1a598caa2999\") " pod="openshift-console/console-5b845478f4-2dqdf"
Mar 13 12:57:53.124795 master-0 kubenswrapper[28149]: I0313 12:57:53.124388 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/49abaf10-6497-4c58-8a80-1a598caa2999-service-ca\") pod \"console-5b845478f4-2dqdf\" (UID: \"49abaf10-6497-4c58-8a80-1a598caa2999\") " pod="openshift-console/console-5b845478f4-2dqdf"
Mar 13 12:57:53.125392 master-0 kubenswrapper[28149]: I0313 12:57:53.124899 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/49abaf10-6497-4c58-8a80-1a598caa2999-oauth-serving-cert\") pod \"console-5b845478f4-2dqdf\" (UID: \"49abaf10-6497-4c58-8a80-1a598caa2999\") " pod="openshift-console/console-5b845478f4-2dqdf"
Mar 13 12:57:53.125392 master-0 kubenswrapper[28149]: I0313 12:57:53.125295 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\"
(UniqueName: \"kubernetes.io/configmap/49abaf10-6497-4c58-8a80-1a598caa2999-console-config\") pod \"console-5b845478f4-2dqdf\" (UID: \"49abaf10-6497-4c58-8a80-1a598caa2999\") " pod="openshift-console/console-5b845478f4-2dqdf" Mar 13 12:57:53.128976 master-0 kubenswrapper[28149]: I0313 12:57:53.128945 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/49abaf10-6497-4c58-8a80-1a598caa2999-console-oauth-config\") pod \"console-5b845478f4-2dqdf\" (UID: \"49abaf10-6497-4c58-8a80-1a598caa2999\") " pod="openshift-console/console-5b845478f4-2dqdf" Mar 13 12:57:53.148097 master-0 kubenswrapper[28149]: I0313 12:57:53.148053 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jtrd5\" (UniqueName: \"kubernetes.io/projected/49abaf10-6497-4c58-8a80-1a598caa2999-kube-api-access-jtrd5\") pod \"console-5b845478f4-2dqdf\" (UID: \"49abaf10-6497-4c58-8a80-1a598caa2999\") " pod="openshift-console/console-5b845478f4-2dqdf" Mar 13 12:57:53.151857 master-0 kubenswrapper[28149]: I0313 12:57:53.151809 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/49abaf10-6497-4c58-8a80-1a598caa2999-console-serving-cert\") pod \"console-5b845478f4-2dqdf\" (UID: \"49abaf10-6497-4c58-8a80-1a598caa2999\") " pod="openshift-console/console-5b845478f4-2dqdf" Mar 13 12:57:53.303337 master-0 kubenswrapper[28149]: I0313 12:57:53.303245 28149 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-5b845478f4-2dqdf" Mar 13 12:57:53.949929 master-0 kubenswrapper[28149]: W0313 12:57:53.949875 28149 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod06eeb16c_c683_4bfe_b243_df34da90042b.slice/crio-f3a8cbfbcb1205660840b4b8ab80c43d709b280eed96076458c3c025903278b6 WatchSource:0}: Error finding container f3a8cbfbcb1205660840b4b8ab80c43d709b280eed96076458c3c025903278b6: Status 404 returned error can't find the container with id f3a8cbfbcb1205660840b4b8ab80c43d709b280eed96076458c3c025903278b6 Mar 13 12:57:54.868835 master-0 kubenswrapper[28149]: I0313 12:57:54.865548 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/telemeter-client-647df5cfcf-7dtwq" event={"ID":"06eeb16c-c683-4bfe-b243-df34da90042b","Type":"ContainerStarted","Data":"f3a8cbfbcb1205660840b4b8ab80c43d709b280eed96076458c3c025903278b6"} Mar 13 12:57:54.868835 master-0 kubenswrapper[28149]: I0313 12:57:54.868788 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"2f3bb9a1-578c-424d-8610-272c76bf0a31","Type":"ContainerStarted","Data":"f74af22c0ca8f8e819d7400ce092f1739fd376dfac5a59c6965272f819826259"} Mar 13 12:57:55.578215 master-0 kubenswrapper[28149]: I0313 12:57:55.576052 28149 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-5b845478f4-2dqdf"] Mar 13 12:57:55.578215 master-0 kubenswrapper[28149]: W0313 12:57:55.577479 28149 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod49abaf10_6497_4c58_8a80_1a598caa2999.slice/crio-555bfeabecc2bad4f5343cf44549a480303cf45b0854c49d5053857a5b72fb72 WatchSource:0}: Error finding container 555bfeabecc2bad4f5343cf44549a480303cf45b0854c49d5053857a5b72fb72: Status 404 returned error can't find the container with id 
555bfeabecc2bad4f5343cf44549a480303cf45b0854c49d5053857a5b72fb72 Mar 13 12:57:56.017638 master-0 kubenswrapper[28149]: I0313 12:57:56.015479 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-5b845478f4-2dqdf" event={"ID":"49abaf10-6497-4c58-8a80-1a598caa2999","Type":"ContainerStarted","Data":"555bfeabecc2bad4f5343cf44549a480303cf45b0854c49d5053857a5b72fb72"} Mar 13 12:57:56.025789 master-0 kubenswrapper[28149]: I0313 12:57:56.022816 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"4e6b4a1c-92b3-4b72-9f4b-1fd7814125b5","Type":"ContainerStarted","Data":"979968ca203da3fdb2f2540df77bd6489021a783195cb39241718daa02ab6141"} Mar 13 12:57:56.025789 master-0 kubenswrapper[28149]: I0313 12:57:56.022874 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"4e6b4a1c-92b3-4b72-9f4b-1fd7814125b5","Type":"ContainerStarted","Data":"4a5a6846548f318a60129c0ddbfaffa9fe75d590cd409fdca3fd562ed975c4a4"} Mar 13 12:57:56.044878 master-0 kubenswrapper[28149]: I0313 12:57:56.044831 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"2f3bb9a1-578c-424d-8610-272c76bf0a31","Type":"ContainerStarted","Data":"5a811aa709e7e6cab65974d4b7d6cfc5df6270e7d4beaeb77dc07477461527b3"} Mar 13 12:57:56.044878 master-0 kubenswrapper[28149]: I0313 12:57:56.044875 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"2f3bb9a1-578c-424d-8610-272c76bf0a31","Type":"ContainerStarted","Data":"97b78ee43c5c89a2d71b74ef2ecb91a65f6a6755b1c17459c639707d4fb210e0"} Mar 13 12:57:57.316208 master-0 kubenswrapper[28149]: I0313 12:57:57.316161 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" 
event={"ID":"2f3bb9a1-578c-424d-8610-272c76bf0a31","Type":"ContainerStarted","Data":"7fac4b13da0c9218a2d360a15d1b9a121be7df83e76ca77102bae92375944047"} Mar 13 12:57:57.351157 master-0 kubenswrapper[28149]: I0313 12:57:57.337890 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"4e6b4a1c-92b3-4b72-9f4b-1fd7814125b5","Type":"ContainerStarted","Data":"8c257806ee764144b97286097976faa619d91e8aaf5fd5e5f387ea61f8a61b3d"} Mar 13 12:57:57.351157 master-0 kubenswrapper[28149]: I0313 12:57:57.337971 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"4e6b4a1c-92b3-4b72-9f4b-1fd7814125b5","Type":"ContainerStarted","Data":"45c005ee02d97cdd3fe7139faccbd052b06c1eec52072769631e675eb071ba1c"} Mar 13 12:57:58.352005 master-0 kubenswrapper[28149]: I0313 12:57:58.351944 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"2f3bb9a1-578c-424d-8610-272c76bf0a31","Type":"ContainerStarted","Data":"5cc9e7441e740fff5f741be6daad5d138ffea5e2e64c5940499b9608e8f77091"} Mar 13 12:57:58.352005 master-0 kubenswrapper[28149]: I0313 12:57:58.352009 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"2f3bb9a1-578c-424d-8610-272c76bf0a31","Type":"ContainerStarted","Data":"7dd10e13896d38c7749f73eefc059837470346b27a38f805670f771fbf9f3a5b"} Mar 13 12:57:58.359950 master-0 kubenswrapper[28149]: I0313 12:57:58.359880 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"4e6b4a1c-92b3-4b72-9f4b-1fd7814125b5","Type":"ContainerStarted","Data":"11577e2fa27649ff57c812aa1e5f415f0df82cf38672121d4fd2104f2ddb2c74"} Mar 13 12:57:58.360236 master-0 kubenswrapper[28149]: I0313 12:57:58.360095 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" 
event={"ID":"4e6b4a1c-92b3-4b72-9f4b-1fd7814125b5","Type":"ContainerStarted","Data":"f4d159b63ea08ed149a2147a2b0c14196fd73509b5490955c8649e50998058a8"} Mar 13 12:57:58.404228 master-0 kubenswrapper[28149]: I0313 12:57:58.396510 28149 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/alertmanager-main-0" podStartSLOduration=28.586194369 podStartE2EDuration="36.39648938s" podCreationTimestamp="2026-03-13 12:57:22 +0000 UTC" firstStartedPulling="2026-03-13 12:57:40.920395734 +0000 UTC m=+234.573860893" lastFinishedPulling="2026-03-13 12:57:48.730690745 +0000 UTC m=+242.384155904" observedRunningTime="2026-03-13 12:57:58.383469243 +0000 UTC m=+252.036934412" watchObservedRunningTime="2026-03-13 12:57:58.39648938 +0000 UTC m=+252.049954539" Mar 13 12:57:58.669504 master-0 kubenswrapper[28149]: I0313 12:57:58.669117 28149 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/prometheus-k8s-0" podStartSLOduration=38.533778588 podStartE2EDuration="51.669098382s" podCreationTimestamp="2026-03-13 12:57:07 +0000 UTC" firstStartedPulling="2026-03-13 12:57:40.992894081 +0000 UTC m=+234.646359240" lastFinishedPulling="2026-03-13 12:57:54.128213875 +0000 UTC m=+247.781679034" observedRunningTime="2026-03-13 12:57:58.666568115 +0000 UTC m=+252.320033264" watchObservedRunningTime="2026-03-13 12:57:58.669098382 +0000 UTC m=+252.322563541" Mar 13 12:57:59.349986 master-0 kubenswrapper[28149]: I0313 12:57:59.349885 28149 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-monitoring/prometheus-k8s-0" Mar 13 12:58:01.403843 master-0 kubenswrapper[28149]: I0313 12:58:01.403618 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/telemeter-client-647df5cfcf-7dtwq" event={"ID":"06eeb16c-c683-4bfe-b243-df34da90042b","Type":"ContainerStarted","Data":"73a781b95bbf4812868aceb68064d00f305bd9de875473d20946dfdbef19a3f4"} Mar 13 12:58:02.093744 master-0 
kubenswrapper[28149]: I0313 12:58:02.093678 28149 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-f74cdccbf-t88kk"] Mar 13 12:58:02.094917 master-0 kubenswrapper[28149]: I0313 12:58:02.094881 28149 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f74cdccbf-t88kk" Mar 13 12:58:02.105715 master-0 kubenswrapper[28149]: I0313 12:58:02.105597 28149 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Mar 13 12:58:02.261008 master-0 kubenswrapper[28149]: I0313 12:58:02.258293 28149 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f74cdccbf-t88kk"] Mar 13 12:58:02.261008 master-0 kubenswrapper[28149]: I0313 12:58:02.260044 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/17c9d2eb-bc27-40f5-85b1-171256776322-console-config\") pod \"console-f74cdccbf-t88kk\" (UID: \"17c9d2eb-bc27-40f5-85b1-171256776322\") " pod="openshift-console/console-f74cdccbf-t88kk" Mar 13 12:58:02.261008 master-0 kubenswrapper[28149]: I0313 12:58:02.260156 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/17c9d2eb-bc27-40f5-85b1-171256776322-service-ca\") pod \"console-f74cdccbf-t88kk\" (UID: \"17c9d2eb-bc27-40f5-85b1-171256776322\") " pod="openshift-console/console-f74cdccbf-t88kk" Mar 13 12:58:02.261008 master-0 kubenswrapper[28149]: I0313 12:58:02.260188 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/17c9d2eb-bc27-40f5-85b1-171256776322-oauth-serving-cert\") pod \"console-f74cdccbf-t88kk\" (UID: \"17c9d2eb-bc27-40f5-85b1-171256776322\") " pod="openshift-console/console-f74cdccbf-t88kk" Mar 13 12:58:02.261008 
master-0 kubenswrapper[28149]: I0313 12:58:02.260215 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5wdl6\" (UniqueName: \"kubernetes.io/projected/17c9d2eb-bc27-40f5-85b1-171256776322-kube-api-access-5wdl6\") pod \"console-f74cdccbf-t88kk\" (UID: \"17c9d2eb-bc27-40f5-85b1-171256776322\") " pod="openshift-console/console-f74cdccbf-t88kk" Mar 13 12:58:02.261008 master-0 kubenswrapper[28149]: I0313 12:58:02.260257 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/17c9d2eb-bc27-40f5-85b1-171256776322-trusted-ca-bundle\") pod \"console-f74cdccbf-t88kk\" (UID: \"17c9d2eb-bc27-40f5-85b1-171256776322\") " pod="openshift-console/console-f74cdccbf-t88kk" Mar 13 12:58:02.261008 master-0 kubenswrapper[28149]: I0313 12:58:02.260288 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/17c9d2eb-bc27-40f5-85b1-171256776322-console-serving-cert\") pod \"console-f74cdccbf-t88kk\" (UID: \"17c9d2eb-bc27-40f5-85b1-171256776322\") " pod="openshift-console/console-f74cdccbf-t88kk" Mar 13 12:58:02.261008 master-0 kubenswrapper[28149]: I0313 12:58:02.260310 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/17c9d2eb-bc27-40f5-85b1-171256776322-console-oauth-config\") pod \"console-f74cdccbf-t88kk\" (UID: \"17c9d2eb-bc27-40f5-85b1-171256776322\") " pod="openshift-console/console-f74cdccbf-t88kk" Mar 13 12:58:02.363313 master-0 kubenswrapper[28149]: I0313 12:58:02.363085 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/17c9d2eb-bc27-40f5-85b1-171256776322-console-config\") pod \"console-f74cdccbf-t88kk\" 
(UID: \"17c9d2eb-bc27-40f5-85b1-171256776322\") " pod="openshift-console/console-f74cdccbf-t88kk" Mar 13 12:58:02.363313 master-0 kubenswrapper[28149]: I0313 12:58:02.363294 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/17c9d2eb-bc27-40f5-85b1-171256776322-service-ca\") pod \"console-f74cdccbf-t88kk\" (UID: \"17c9d2eb-bc27-40f5-85b1-171256776322\") " pod="openshift-console/console-f74cdccbf-t88kk" Mar 13 12:58:02.363573 master-0 kubenswrapper[28149]: I0313 12:58:02.363339 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/17c9d2eb-bc27-40f5-85b1-171256776322-oauth-serving-cert\") pod \"console-f74cdccbf-t88kk\" (UID: \"17c9d2eb-bc27-40f5-85b1-171256776322\") " pod="openshift-console/console-f74cdccbf-t88kk" Mar 13 12:58:02.363676 master-0 kubenswrapper[28149]: I0313 12:58:02.363630 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5wdl6\" (UniqueName: \"kubernetes.io/projected/17c9d2eb-bc27-40f5-85b1-171256776322-kube-api-access-5wdl6\") pod \"console-f74cdccbf-t88kk\" (UID: \"17c9d2eb-bc27-40f5-85b1-171256776322\") " pod="openshift-console/console-f74cdccbf-t88kk" Mar 13 12:58:02.363808 master-0 kubenswrapper[28149]: I0313 12:58:02.363785 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/17c9d2eb-bc27-40f5-85b1-171256776322-trusted-ca-bundle\") pod \"console-f74cdccbf-t88kk\" (UID: \"17c9d2eb-bc27-40f5-85b1-171256776322\") " pod="openshift-console/console-f74cdccbf-t88kk" Mar 13 12:58:02.364435 master-0 kubenswrapper[28149]: I0313 12:58:02.364402 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/17c9d2eb-bc27-40f5-85b1-171256776322-service-ca\") pod 
\"console-f74cdccbf-t88kk\" (UID: \"17c9d2eb-bc27-40f5-85b1-171256776322\") " pod="openshift-console/console-f74cdccbf-t88kk" Mar 13 12:58:02.364499 master-0 kubenswrapper[28149]: I0313 12:58:02.364463 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/17c9d2eb-bc27-40f5-85b1-171256776322-console-config\") pod \"console-f74cdccbf-t88kk\" (UID: \"17c9d2eb-bc27-40f5-85b1-171256776322\") " pod="openshift-console/console-f74cdccbf-t88kk" Mar 13 12:58:02.367338 master-0 kubenswrapper[28149]: I0313 12:58:02.367293 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/17c9d2eb-bc27-40f5-85b1-171256776322-oauth-serving-cert\") pod \"console-f74cdccbf-t88kk\" (UID: \"17c9d2eb-bc27-40f5-85b1-171256776322\") " pod="openshift-console/console-f74cdccbf-t88kk" Mar 13 12:58:02.368885 master-0 kubenswrapper[28149]: I0313 12:58:02.368838 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/17c9d2eb-bc27-40f5-85b1-171256776322-trusted-ca-bundle\") pod \"console-f74cdccbf-t88kk\" (UID: \"17c9d2eb-bc27-40f5-85b1-171256776322\") " pod="openshift-console/console-f74cdccbf-t88kk" Mar 13 12:58:02.368966 master-0 kubenswrapper[28149]: I0313 12:58:02.363816 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/17c9d2eb-bc27-40f5-85b1-171256776322-console-serving-cert\") pod \"console-f74cdccbf-t88kk\" (UID: \"17c9d2eb-bc27-40f5-85b1-171256776322\") " pod="openshift-console/console-f74cdccbf-t88kk" Mar 13 12:58:02.368966 master-0 kubenswrapper[28149]: I0313 12:58:02.368958 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: 
\"kubernetes.io/secret/17c9d2eb-bc27-40f5-85b1-171256776322-console-oauth-config\") pod \"console-f74cdccbf-t88kk\" (UID: \"17c9d2eb-bc27-40f5-85b1-171256776322\") " pod="openshift-console/console-f74cdccbf-t88kk" Mar 13 12:58:02.370538 master-0 kubenswrapper[28149]: I0313 12:58:02.370499 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/17c9d2eb-bc27-40f5-85b1-171256776322-console-serving-cert\") pod \"console-f74cdccbf-t88kk\" (UID: \"17c9d2eb-bc27-40f5-85b1-171256776322\") " pod="openshift-console/console-f74cdccbf-t88kk" Mar 13 12:58:02.373874 master-0 kubenswrapper[28149]: I0313 12:58:02.373847 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/17c9d2eb-bc27-40f5-85b1-171256776322-console-oauth-config\") pod \"console-f74cdccbf-t88kk\" (UID: \"17c9d2eb-bc27-40f5-85b1-171256776322\") " pod="openshift-console/console-f74cdccbf-t88kk" Mar 13 12:58:02.382821 master-0 kubenswrapper[28149]: I0313 12:58:02.382778 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5wdl6\" (UniqueName: \"kubernetes.io/projected/17c9d2eb-bc27-40f5-85b1-171256776322-kube-api-access-5wdl6\") pod \"console-f74cdccbf-t88kk\" (UID: \"17c9d2eb-bc27-40f5-85b1-171256776322\") " pod="openshift-console/console-f74cdccbf-t88kk" Mar 13 12:58:02.621155 master-0 kubenswrapper[28149]: I0313 12:58:02.621005 28149 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-f74cdccbf-t88kk" Mar 13 12:58:06.759261 master-0 kubenswrapper[28149]: I0313 12:58:06.690029 28149 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_telemeter-client-647df5cfcf-7dtwq_06eeb16c-c683-4bfe-b243-df34da90042b/telemeter-client/0.log" Mar 13 12:58:06.759261 master-0 kubenswrapper[28149]: I0313 12:58:06.690115 28149 generic.go:334] "Generic (PLEG): container finished" podID="06eeb16c-c683-4bfe-b243-df34da90042b" containerID="73a781b95bbf4812868aceb68064d00f305bd9de875473d20946dfdbef19a3f4" exitCode=1 Mar 13 12:58:06.759261 master-0 kubenswrapper[28149]: I0313 12:58:06.696104 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/telemeter-client-647df5cfcf-7dtwq" event={"ID":"06eeb16c-c683-4bfe-b243-df34da90042b","Type":"ContainerDied","Data":"73a781b95bbf4812868aceb68064d00f305bd9de875473d20946dfdbef19a3f4"} Mar 13 12:58:07.714248 master-0 kubenswrapper[28149]: I0313 12:58:07.714119 28149 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_telemeter-client-647df5cfcf-7dtwq_06eeb16c-c683-4bfe-b243-df34da90042b/telemeter-client/0.log" Mar 13 12:58:07.714248 master-0 kubenswrapper[28149]: I0313 12:58:07.714217 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/telemeter-client-647df5cfcf-7dtwq" event={"ID":"06eeb16c-c683-4bfe-b243-df34da90042b","Type":"ContainerStarted","Data":"e42263d511b68605347d60b2d66cdb28274c813d0bea0e9a58135282b660a35b"} Mar 13 12:58:07.930170 master-0 kubenswrapper[28149]: I0313 12:58:07.930007 28149 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f74cdccbf-t88kk"] Mar 13 12:58:08.725778 master-0 kubenswrapper[28149]: I0313 12:58:08.725715 28149 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_telemeter-client-647df5cfcf-7dtwq_06eeb16c-c683-4bfe-b243-df34da90042b/telemeter-client/0.log" Mar 13 
12:58:08.726065 master-0 kubenswrapper[28149]: I0313 12:58:08.726041 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/telemeter-client-647df5cfcf-7dtwq" event={"ID":"06eeb16c-c683-4bfe-b243-df34da90042b","Type":"ContainerStarted","Data":"07c6924fc819dc89e680e09f00f91a6323b00b365f02c6687a2847bc50fe105f"} Mar 13 12:58:08.726426 master-0 kubenswrapper[28149]: I0313 12:58:08.726389 28149 scope.go:117] "RemoveContainer" containerID="73a781b95bbf4812868aceb68064d00f305bd9de875473d20946dfdbef19a3f4" Mar 13 12:58:08.736305 master-0 kubenswrapper[28149]: I0313 12:58:08.736254 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f74cdccbf-t88kk" event={"ID":"17c9d2eb-bc27-40f5-85b1-171256776322","Type":"ContainerStarted","Data":"a365c2570eec0d1efd2f95df921ea91a62e8a7bec82ee722c2c420e4e4f9a961"} Mar 13 12:58:08.736305 master-0 kubenswrapper[28149]: I0313 12:58:08.736314 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f74cdccbf-t88kk" event={"ID":"17c9d2eb-bc27-40f5-85b1-171256776322","Type":"ContainerStarted","Data":"d8fb8b02eb6d9db0a21e4330a1e01bee3ea983e054f7f8f14f4cd2246785f26a"} Mar 13 12:58:08.787538 master-0 kubenswrapper[28149]: I0313 12:58:08.787427 28149 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-f74cdccbf-t88kk" podStartSLOduration=6.787403325 podStartE2EDuration="6.787403325s" podCreationTimestamp="2026-03-13 12:58:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 12:58:08.778849987 +0000 UTC m=+262.432315156" watchObservedRunningTime="2026-03-13 12:58:08.787403325 +0000 UTC m=+262.440868514" Mar 13 12:58:09.442631 master-0 kubenswrapper[28149]: I0313 12:58:09.440902 28149 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-86477f577f-glgzr"] Mar 13 12:58:09.807983 
master-0 kubenswrapper[28149]: I0313 12:58:09.807939 28149 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_telemeter-client-647df5cfcf-7dtwq_06eeb16c-c683-4bfe-b243-df34da90042b/telemeter-client/0.log" Mar 13 12:58:09.808283 master-0 kubenswrapper[28149]: I0313 12:58:09.808008 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/telemeter-client-647df5cfcf-7dtwq" event={"ID":"06eeb16c-c683-4bfe-b243-df34da90042b","Type":"ContainerStarted","Data":"b6a4eae8407176225b13373549564b20ee927762bb94302ed1c84ab444b19d52"} Mar 13 12:58:09.812161 master-0 kubenswrapper[28149]: I0313 12:58:09.810842 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-5b845478f4-2dqdf" event={"ID":"49abaf10-6497-4c58-8a80-1a598caa2999","Type":"ContainerStarted","Data":"e76bf3fa14ab2ec6806e9549a1b7607ccea467b27926fd4c96d6982a827c0188"} Mar 13 12:58:10.188523 master-0 kubenswrapper[28149]: I0313 12:58:10.188351 28149 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/telemeter-client-647df5cfcf-7dtwq" podStartSLOduration=81.520934795 podStartE2EDuration="1m28.188323546s" podCreationTimestamp="2026-03-13 12:56:42 +0000 UTC" firstStartedPulling="2026-03-13 12:57:54.076544105 +0000 UTC m=+247.730009264" lastFinishedPulling="2026-03-13 12:58:00.743932856 +0000 UTC m=+254.397398015" observedRunningTime="2026-03-13 12:58:10.179423989 +0000 UTC m=+263.832889158" watchObservedRunningTime="2026-03-13 12:58:10.188323546 +0000 UTC m=+263.841788715" Mar 13 12:58:10.220159 master-0 kubenswrapper[28149]: I0313 12:58:10.217052 28149 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-5b845478f4-2dqdf" podStartSLOduration=5.247299133 podStartE2EDuration="18.217030114s" podCreationTimestamp="2026-03-13 12:57:52 +0000 UTC" firstStartedPulling="2026-03-13 12:57:55.61049263 +0000 UTC m=+249.263957789" lastFinishedPulling="2026-03-13 
12:58:08.580223611 +0000 UTC m=+262.233688770" observedRunningTime="2026-03-13 12:58:10.213118099 +0000 UTC m=+263.866583278" watchObservedRunningTime="2026-03-13 12:58:10.217030114 +0000 UTC m=+263.870495273" Mar 13 12:58:11.804891 master-0 kubenswrapper[28149]: I0313 12:58:11.804819 28149 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-5b845478f4-2dqdf"] Mar 13 12:58:11.845755 master-0 kubenswrapper[28149]: I0313 12:58:11.845674 28149 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-59dc574b9f-z4gvv"] Mar 13 12:58:11.847842 master-0 kubenswrapper[28149]: I0313 12:58:11.847196 28149 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-59dc574b9f-z4gvv" Mar 13 12:58:11.894829 master-0 kubenswrapper[28149]: I0313 12:58:11.894731 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/ff344520-bb09-4f16-82be-273378ab0663-oauth-serving-cert\") pod \"console-59dc574b9f-z4gvv\" (UID: \"ff344520-bb09-4f16-82be-273378ab0663\") " pod="openshift-console/console-59dc574b9f-z4gvv" Mar 13 12:58:11.895255 master-0 kubenswrapper[28149]: I0313 12:58:11.894867 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/ff344520-bb09-4f16-82be-273378ab0663-service-ca\") pod \"console-59dc574b9f-z4gvv\" (UID: \"ff344520-bb09-4f16-82be-273378ab0663\") " pod="openshift-console/console-59dc574b9f-z4gvv" Mar 13 12:58:11.895255 master-0 kubenswrapper[28149]: I0313 12:58:11.894979 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/ff344520-bb09-4f16-82be-273378ab0663-console-config\") pod \"console-59dc574b9f-z4gvv\" (UID: \"ff344520-bb09-4f16-82be-273378ab0663\") " 
pod="openshift-console/console-59dc574b9f-z4gvv" Mar 13 12:58:11.895255 master-0 kubenswrapper[28149]: I0313 12:58:11.895165 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ff344520-bb09-4f16-82be-273378ab0663-trusted-ca-bundle\") pod \"console-59dc574b9f-z4gvv\" (UID: \"ff344520-bb09-4f16-82be-273378ab0663\") " pod="openshift-console/console-59dc574b9f-z4gvv" Mar 13 12:58:11.895655 master-0 kubenswrapper[28149]: I0313 12:58:11.895301 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/ff344520-bb09-4f16-82be-273378ab0663-console-serving-cert\") pod \"console-59dc574b9f-z4gvv\" (UID: \"ff344520-bb09-4f16-82be-273378ab0663\") " pod="openshift-console/console-59dc574b9f-z4gvv" Mar 13 12:58:11.895655 master-0 kubenswrapper[28149]: I0313 12:58:11.895521 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/ff344520-bb09-4f16-82be-273378ab0663-console-oauth-config\") pod \"console-59dc574b9f-z4gvv\" (UID: \"ff344520-bb09-4f16-82be-273378ab0663\") " pod="openshift-console/console-59dc574b9f-z4gvv" Mar 13 12:58:11.895655 master-0 kubenswrapper[28149]: I0313 12:58:11.895618 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xwxpz\" (UniqueName: \"kubernetes.io/projected/ff344520-bb09-4f16-82be-273378ab0663-kube-api-access-xwxpz\") pod \"console-59dc574b9f-z4gvv\" (UID: \"ff344520-bb09-4f16-82be-273378ab0663\") " pod="openshift-console/console-59dc574b9f-z4gvv" Mar 13 12:58:11.997051 master-0 kubenswrapper[28149]: I0313 12:58:11.996968 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: 
\"kubernetes.io/configmap/ff344520-bb09-4f16-82be-273378ab0663-console-config\") pod \"console-59dc574b9f-z4gvv\" (UID: \"ff344520-bb09-4f16-82be-273378ab0663\") " pod="openshift-console/console-59dc574b9f-z4gvv" Mar 13 12:58:11.997366 master-0 kubenswrapper[28149]: I0313 12:58:11.997213 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ff344520-bb09-4f16-82be-273378ab0663-trusted-ca-bundle\") pod \"console-59dc574b9f-z4gvv\" (UID: \"ff344520-bb09-4f16-82be-273378ab0663\") " pod="openshift-console/console-59dc574b9f-z4gvv" Mar 13 12:58:11.997467 master-0 kubenswrapper[28149]: I0313 12:58:11.997423 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/ff344520-bb09-4f16-82be-273378ab0663-console-serving-cert\") pod \"console-59dc574b9f-z4gvv\" (UID: \"ff344520-bb09-4f16-82be-273378ab0663\") " pod="openshift-console/console-59dc574b9f-z4gvv" Mar 13 12:58:11.997573 master-0 kubenswrapper[28149]: I0313 12:58:11.997525 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/ff344520-bb09-4f16-82be-273378ab0663-console-oauth-config\") pod \"console-59dc574b9f-z4gvv\" (UID: \"ff344520-bb09-4f16-82be-273378ab0663\") " pod="openshift-console/console-59dc574b9f-z4gvv" Mar 13 12:58:11.997650 master-0 kubenswrapper[28149]: I0313 12:58:11.997626 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xwxpz\" (UniqueName: \"kubernetes.io/projected/ff344520-bb09-4f16-82be-273378ab0663-kube-api-access-xwxpz\") pod \"console-59dc574b9f-z4gvv\" (UID: \"ff344520-bb09-4f16-82be-273378ab0663\") " pod="openshift-console/console-59dc574b9f-z4gvv" Mar 13 12:58:11.997762 master-0 kubenswrapper[28149]: I0313 12:58:11.997707 28149 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/ff344520-bb09-4f16-82be-273378ab0663-oauth-serving-cert\") pod \"console-59dc574b9f-z4gvv\" (UID: \"ff344520-bb09-4f16-82be-273378ab0663\") " pod="openshift-console/console-59dc574b9f-z4gvv" Mar 13 12:58:11.997818 master-0 kubenswrapper[28149]: I0313 12:58:11.997778 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/ff344520-bb09-4f16-82be-273378ab0663-service-ca\") pod \"console-59dc574b9f-z4gvv\" (UID: \"ff344520-bb09-4f16-82be-273378ab0663\") " pod="openshift-console/console-59dc574b9f-z4gvv" Mar 13 12:58:11.998765 master-0 kubenswrapper[28149]: I0313 12:58:11.998727 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/ff344520-bb09-4f16-82be-273378ab0663-console-config\") pod \"console-59dc574b9f-z4gvv\" (UID: \"ff344520-bb09-4f16-82be-273378ab0663\") " pod="openshift-console/console-59dc574b9f-z4gvv" Mar 13 12:58:12.002623 master-0 kubenswrapper[28149]: I0313 12:58:12.001395 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/ff344520-bb09-4f16-82be-273378ab0663-oauth-serving-cert\") pod \"console-59dc574b9f-z4gvv\" (UID: \"ff344520-bb09-4f16-82be-273378ab0663\") " pod="openshift-console/console-59dc574b9f-z4gvv" Mar 13 12:58:12.002623 master-0 kubenswrapper[28149]: I0313 12:58:12.001529 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/ff344520-bb09-4f16-82be-273378ab0663-service-ca\") pod \"console-59dc574b9f-z4gvv\" (UID: \"ff344520-bb09-4f16-82be-273378ab0663\") " pod="openshift-console/console-59dc574b9f-z4gvv" Mar 13 12:58:12.002623 master-0 kubenswrapper[28149]: I0313 12:58:12.001562 28149 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ff344520-bb09-4f16-82be-273378ab0663-trusted-ca-bundle\") pod \"console-59dc574b9f-z4gvv\" (UID: \"ff344520-bb09-4f16-82be-273378ab0663\") " pod="openshift-console/console-59dc574b9f-z4gvv" Mar 13 12:58:12.015218 master-0 kubenswrapper[28149]: I0313 12:58:12.015013 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/ff344520-bb09-4f16-82be-273378ab0663-console-oauth-config\") pod \"console-59dc574b9f-z4gvv\" (UID: \"ff344520-bb09-4f16-82be-273378ab0663\") " pod="openshift-console/console-59dc574b9f-z4gvv" Mar 13 12:58:12.047005 master-0 kubenswrapper[28149]: I0313 12:58:12.046926 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/ff344520-bb09-4f16-82be-273378ab0663-console-serving-cert\") pod \"console-59dc574b9f-z4gvv\" (UID: \"ff344520-bb09-4f16-82be-273378ab0663\") " pod="openshift-console/console-59dc574b9f-z4gvv" Mar 13 12:58:12.069481 master-0 kubenswrapper[28149]: I0313 12:58:12.059523 28149 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-59dc574b9f-z4gvv"] Mar 13 12:58:12.076962 master-0 kubenswrapper[28149]: I0313 12:58:12.076401 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xwxpz\" (UniqueName: \"kubernetes.io/projected/ff344520-bb09-4f16-82be-273378ab0663-kube-api-access-xwxpz\") pod \"console-59dc574b9f-z4gvv\" (UID: \"ff344520-bb09-4f16-82be-273378ab0663\") " pod="openshift-console/console-59dc574b9f-z4gvv" Mar 13 12:58:12.249603 master-0 kubenswrapper[28149]: I0313 12:58:12.249522 28149 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-59dc574b9f-z4gvv" Mar 13 12:58:12.795616 master-0 kubenswrapper[28149]: I0313 12:58:12.795566 28149 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-f74cdccbf-t88kk" Mar 13 12:58:12.795870 master-0 kubenswrapper[28149]: I0313 12:58:12.795664 28149 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-f74cdccbf-t88kk" Mar 13 12:58:12.795870 master-0 kubenswrapper[28149]: I0313 12:58:12.795682 28149 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-f74cdccbf-t88kk" Mar 13 12:58:12.799992 master-0 kubenswrapper[28149]: I0313 12:58:12.799913 28149 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-f74cdccbf-t88kk" Mar 13 12:58:13.122259 master-0 kubenswrapper[28149]: I0313 12:58:13.122201 28149 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-59dc574b9f-z4gvv"] Mar 13 12:58:13.125260 master-0 kubenswrapper[28149]: W0313 12:58:13.125197 28149 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podff344520_bb09_4f16_82be_273378ab0663.slice/crio-e4cedbaf997be2d29739b8b50897aba2f0cbdfec64d433a6178ddfcc5a309295 WatchSource:0}: Error finding container e4cedbaf997be2d29739b8b50897aba2f0cbdfec64d433a6178ddfcc5a309295: Status 404 returned error can't find the container with id e4cedbaf997be2d29739b8b50897aba2f0cbdfec64d433a6178ddfcc5a309295 Mar 13 12:58:13.303699 master-0 kubenswrapper[28149]: I0313 12:58:13.303654 28149 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-5b845478f4-2dqdf" Mar 13 12:58:13.861807 master-0 kubenswrapper[28149]: I0313 12:58:13.861721 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-59dc574b9f-z4gvv" 
event={"ID":"ff344520-bb09-4f16-82be-273378ab0663","Type":"ContainerStarted","Data":"733ed7fb689426e60593f6b467db5361babda5ddfa6ef2c0e3d7cdd1f7a25f7f"} Mar 13 12:58:13.861807 master-0 kubenswrapper[28149]: I0313 12:58:13.861814 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-59dc574b9f-z4gvv" event={"ID":"ff344520-bb09-4f16-82be-273378ab0663","Type":"ContainerStarted","Data":"e4cedbaf997be2d29739b8b50897aba2f0cbdfec64d433a6178ddfcc5a309295"} Mar 13 12:58:13.889398 master-0 kubenswrapper[28149]: I0313 12:58:13.889294 28149 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-59dc574b9f-z4gvv" podStartSLOduration=2.889275428 podStartE2EDuration="2.889275428s" podCreationTimestamp="2026-03-13 12:58:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 12:58:13.882665691 +0000 UTC m=+267.536130870" watchObservedRunningTime="2026-03-13 12:58:13.889275428 +0000 UTC m=+267.542740587" Mar 13 12:58:15.874155 master-0 kubenswrapper[28149]: I0313 12:58:15.874076 28149 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-59dc574b9f-z4gvv"] Mar 13 12:58:15.915212 master-0 kubenswrapper[28149]: I0313 12:58:15.915158 28149 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-675489948b-wtbzr"] Mar 13 12:58:15.916185 master-0 kubenswrapper[28149]: I0313 12:58:15.916100 28149 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-675489948b-wtbzr" Mar 13 12:58:15.931511 master-0 kubenswrapper[28149]: I0313 12:58:15.931445 28149 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-675489948b-wtbzr"] Mar 13 12:58:16.078294 master-0 kubenswrapper[28149]: I0313 12:58:16.078234 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/3f8a5f1b-3890-40cb-9c51-72d9b40142de-service-ca\") pod \"console-675489948b-wtbzr\" (UID: \"3f8a5f1b-3890-40cb-9c51-72d9b40142de\") " pod="openshift-console/console-675489948b-wtbzr" Mar 13 12:58:16.078520 master-0 kubenswrapper[28149]: I0313 12:58:16.078306 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3f8a5f1b-3890-40cb-9c51-72d9b40142de-trusted-ca-bundle\") pod \"console-675489948b-wtbzr\" (UID: \"3f8a5f1b-3890-40cb-9c51-72d9b40142de\") " pod="openshift-console/console-675489948b-wtbzr" Mar 13 12:58:16.078520 master-0 kubenswrapper[28149]: I0313 12:58:16.078331 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/3f8a5f1b-3890-40cb-9c51-72d9b40142de-oauth-serving-cert\") pod \"console-675489948b-wtbzr\" (UID: \"3f8a5f1b-3890-40cb-9c51-72d9b40142de\") " pod="openshift-console/console-675489948b-wtbzr" Mar 13 12:58:16.078520 master-0 kubenswrapper[28149]: I0313 12:58:16.078360 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2h4jl\" (UniqueName: \"kubernetes.io/projected/3f8a5f1b-3890-40cb-9c51-72d9b40142de-kube-api-access-2h4jl\") pod \"console-675489948b-wtbzr\" (UID: \"3f8a5f1b-3890-40cb-9c51-72d9b40142de\") " pod="openshift-console/console-675489948b-wtbzr" Mar 13 12:58:16.078520 master-0 
kubenswrapper[28149]: I0313 12:58:16.078385 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/3f8a5f1b-3890-40cb-9c51-72d9b40142de-console-config\") pod \"console-675489948b-wtbzr\" (UID: \"3f8a5f1b-3890-40cb-9c51-72d9b40142de\") " pod="openshift-console/console-675489948b-wtbzr" Mar 13 12:58:16.078520 master-0 kubenswrapper[28149]: I0313 12:58:16.078408 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/3f8a5f1b-3890-40cb-9c51-72d9b40142de-console-oauth-config\") pod \"console-675489948b-wtbzr\" (UID: \"3f8a5f1b-3890-40cb-9c51-72d9b40142de\") " pod="openshift-console/console-675489948b-wtbzr" Mar 13 12:58:16.078520 master-0 kubenswrapper[28149]: I0313 12:58:16.078429 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/3f8a5f1b-3890-40cb-9c51-72d9b40142de-console-serving-cert\") pod \"console-675489948b-wtbzr\" (UID: \"3f8a5f1b-3890-40cb-9c51-72d9b40142de\") " pod="openshift-console/console-675489948b-wtbzr" Mar 13 12:58:16.180242 master-0 kubenswrapper[28149]: I0313 12:58:16.179867 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/3f8a5f1b-3890-40cb-9c51-72d9b40142de-service-ca\") pod \"console-675489948b-wtbzr\" (UID: \"3f8a5f1b-3890-40cb-9c51-72d9b40142de\") " pod="openshift-console/console-675489948b-wtbzr" Mar 13 12:58:16.180242 master-0 kubenswrapper[28149]: I0313 12:58:16.179954 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3f8a5f1b-3890-40cb-9c51-72d9b40142de-trusted-ca-bundle\") pod \"console-675489948b-wtbzr\" (UID: 
\"3f8a5f1b-3890-40cb-9c51-72d9b40142de\") " pod="openshift-console/console-675489948b-wtbzr" Mar 13 12:58:16.180242 master-0 kubenswrapper[28149]: I0313 12:58:16.179978 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/3f8a5f1b-3890-40cb-9c51-72d9b40142de-oauth-serving-cert\") pod \"console-675489948b-wtbzr\" (UID: \"3f8a5f1b-3890-40cb-9c51-72d9b40142de\") " pod="openshift-console/console-675489948b-wtbzr" Mar 13 12:58:16.180242 master-0 kubenswrapper[28149]: I0313 12:58:16.180007 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2h4jl\" (UniqueName: \"kubernetes.io/projected/3f8a5f1b-3890-40cb-9c51-72d9b40142de-kube-api-access-2h4jl\") pod \"console-675489948b-wtbzr\" (UID: \"3f8a5f1b-3890-40cb-9c51-72d9b40142de\") " pod="openshift-console/console-675489948b-wtbzr" Mar 13 12:58:16.180242 master-0 kubenswrapper[28149]: I0313 12:58:16.180037 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/3f8a5f1b-3890-40cb-9c51-72d9b40142de-console-config\") pod \"console-675489948b-wtbzr\" (UID: \"3f8a5f1b-3890-40cb-9c51-72d9b40142de\") " pod="openshift-console/console-675489948b-wtbzr" Mar 13 12:58:16.180242 master-0 kubenswrapper[28149]: I0313 12:58:16.180063 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/3f8a5f1b-3890-40cb-9c51-72d9b40142de-console-oauth-config\") pod \"console-675489948b-wtbzr\" (UID: \"3f8a5f1b-3890-40cb-9c51-72d9b40142de\") " pod="openshift-console/console-675489948b-wtbzr" Mar 13 12:58:16.180242 master-0 kubenswrapper[28149]: I0313 12:58:16.180081 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: 
\"kubernetes.io/secret/3f8a5f1b-3890-40cb-9c51-72d9b40142de-console-serving-cert\") pod \"console-675489948b-wtbzr\" (UID: \"3f8a5f1b-3890-40cb-9c51-72d9b40142de\") " pod="openshift-console/console-675489948b-wtbzr" Mar 13 12:58:16.188217 master-0 kubenswrapper[28149]: I0313 12:58:16.181705 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/3f8a5f1b-3890-40cb-9c51-72d9b40142de-oauth-serving-cert\") pod \"console-675489948b-wtbzr\" (UID: \"3f8a5f1b-3890-40cb-9c51-72d9b40142de\") " pod="openshift-console/console-675489948b-wtbzr" Mar 13 12:58:16.188217 master-0 kubenswrapper[28149]: I0313 12:58:16.181715 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/3f8a5f1b-3890-40cb-9c51-72d9b40142de-console-config\") pod \"console-675489948b-wtbzr\" (UID: \"3f8a5f1b-3890-40cb-9c51-72d9b40142de\") " pod="openshift-console/console-675489948b-wtbzr" Mar 13 12:58:16.188217 master-0 kubenswrapper[28149]: I0313 12:58:16.183690 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/3f8a5f1b-3890-40cb-9c51-72d9b40142de-service-ca\") pod \"console-675489948b-wtbzr\" (UID: \"3f8a5f1b-3890-40cb-9c51-72d9b40142de\") " pod="openshift-console/console-675489948b-wtbzr" Mar 13 12:58:16.188217 master-0 kubenswrapper[28149]: I0313 12:58:16.183814 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/3f8a5f1b-3890-40cb-9c51-72d9b40142de-console-serving-cert\") pod \"console-675489948b-wtbzr\" (UID: \"3f8a5f1b-3890-40cb-9c51-72d9b40142de\") " pod="openshift-console/console-675489948b-wtbzr" Mar 13 12:58:16.188217 master-0 kubenswrapper[28149]: I0313 12:58:16.184376 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/3f8a5f1b-3890-40cb-9c51-72d9b40142de-trusted-ca-bundle\") pod \"console-675489948b-wtbzr\" (UID: \"3f8a5f1b-3890-40cb-9c51-72d9b40142de\") " pod="openshift-console/console-675489948b-wtbzr" Mar 13 12:58:16.188217 master-0 kubenswrapper[28149]: I0313 12:58:16.185967 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/3f8a5f1b-3890-40cb-9c51-72d9b40142de-console-oauth-config\") pod \"console-675489948b-wtbzr\" (UID: \"3f8a5f1b-3890-40cb-9c51-72d9b40142de\") " pod="openshift-console/console-675489948b-wtbzr" Mar 13 12:58:16.200989 master-0 kubenswrapper[28149]: I0313 12:58:16.200915 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2h4jl\" (UniqueName: \"kubernetes.io/projected/3f8a5f1b-3890-40cb-9c51-72d9b40142de-kube-api-access-2h4jl\") pod \"console-675489948b-wtbzr\" (UID: \"3f8a5f1b-3890-40cb-9c51-72d9b40142de\") " pod="openshift-console/console-675489948b-wtbzr" Mar 13 12:58:16.245165 master-0 kubenswrapper[28149]: I0313 12:58:16.245085 28149 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-675489948b-wtbzr" Mar 13 12:58:17.248060 master-0 kubenswrapper[28149]: I0313 12:58:17.247776 28149 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-675489948b-wtbzr"] Mar 13 12:58:17.917962 master-0 kubenswrapper[28149]: I0313 12:58:17.917874 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-675489948b-wtbzr" event={"ID":"3f8a5f1b-3890-40cb-9c51-72d9b40142de","Type":"ContainerStarted","Data":"64b6f3a87fabc7ac034f8baffba41d793ff43b949fb9a245536d3be2fe4fe012"} Mar 13 12:58:17.917962 master-0 kubenswrapper[28149]: I0313 12:58:17.917936 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-675489948b-wtbzr" event={"ID":"3f8a5f1b-3890-40cb-9c51-72d9b40142de","Type":"ContainerStarted","Data":"b29acac6f7c6baac69f815e3eb7b78ced1d1992717ad6bf43b68f0f6df3ee3f8"} Mar 13 12:58:17.958984 master-0 kubenswrapper[28149]: I0313 12:58:17.958870 28149 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-675489948b-wtbzr" podStartSLOduration=2.958841634 podStartE2EDuration="2.958841634s" podCreationTimestamp="2026-03-13 12:58:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 12:58:17.938547363 +0000 UTC m=+271.592012552" watchObservedRunningTime="2026-03-13 12:58:17.958841634 +0000 UTC m=+271.612306873" Mar 13 12:58:22.250440 master-0 kubenswrapper[28149]: I0313 12:58:22.250147 28149 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-59dc574b9f-z4gvv" Mar 13 12:58:25.835282 master-0 kubenswrapper[28149]: I0313 12:58:25.835217 28149 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-54c79cbfcc-cxhmh"] Mar 13 12:58:25.835978 master-0 kubenswrapper[28149]: I0313 12:58:25.835540 28149 
kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-54c79cbfcc-cxhmh" podUID="a454234a-6c8e-4916-81e8-c9e66cec9d31" containerName="controller-manager" containerID="cri-o://338937b0ebb757bdee738361c73af8d323aeef4fa0eb7edfc9e3a14cb3dcc3f8" gracePeriod=30 Mar 13 12:58:25.852540 master-0 kubenswrapper[28149]: I0313 12:58:25.852296 28149 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-68c48d4f7d-k7drw"] Mar 13 12:58:25.860743 master-0 kubenswrapper[28149]: I0313 12:58:25.860481 28149 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-68c48d4f7d-k7drw" podUID="18ffa620-dacc-4b09-be04-2c325f860813" containerName="route-controller-manager" containerID="cri-o://e08ebb9b72b3d839ad590a0420d611fa422a407a310320bdb128182aa8a60b33" gracePeriod=30 Mar 13 12:58:26.245917 master-0 kubenswrapper[28149]: I0313 12:58:26.245798 28149 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-675489948b-wtbzr" Mar 13 12:58:26.246293 master-0 kubenswrapper[28149]: I0313 12:58:26.246246 28149 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-675489948b-wtbzr" Mar 13 12:58:26.247321 master-0 kubenswrapper[28149]: I0313 12:58:26.247274 28149 patch_prober.go:28] interesting pod/console-675489948b-wtbzr container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.128.0.99:8443/health\": dial tcp 10.128.0.99:8443: connect: connection refused" start-of-body= Mar 13 12:58:26.247388 master-0 kubenswrapper[28149]: I0313 12:58:26.247361 28149 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-675489948b-wtbzr" podUID="3f8a5f1b-3890-40cb-9c51-72d9b40142de" containerName="console" probeResult="failure" output="Get 
\"https://10.128.0.99:8443/health\": dial tcp 10.128.0.99:8443: connect: connection refused" Mar 13 12:58:27.278065 master-0 kubenswrapper[28149]: I0313 12:58:27.277965 28149 patch_prober.go:28] interesting pod/route-controller-manager-68c48d4f7d-k7drw container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.128.0.75:8443/healthz\": dial tcp 10.128.0.75:8443: connect: connection refused" start-of-body= Mar 13 12:58:27.278749 master-0 kubenswrapper[28149]: I0313 12:58:27.278084 28149 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-68c48d4f7d-k7drw" podUID="18ffa620-dacc-4b09-be04-2c325f860813" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.128.0.75:8443/healthz\": dial tcp 10.128.0.75:8443: connect: connection refused" Mar 13 12:58:27.278749 master-0 kubenswrapper[28149]: I0313 12:58:27.278392 28149 patch_prober.go:28] interesting pod/controller-manager-54c79cbfcc-cxhmh container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.128.0.74:8443/healthz\": dial tcp 10.128.0.74:8443: connect: connection refused" start-of-body= Mar 13 12:58:27.278749 master-0 kubenswrapper[28149]: I0313 12:58:27.278423 28149 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-54c79cbfcc-cxhmh" podUID="a454234a-6c8e-4916-81e8-c9e66cec9d31" containerName="controller-manager" probeResult="failure" output="Get \"https://10.128.0.74:8443/healthz\": dial tcp 10.128.0.74:8443: connect: connection refused" Mar 13 12:58:34.944579 master-0 kubenswrapper[28149]: I0313 12:58:34.944237 28149 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-authentication/oauth-openshift-86477f577f-glgzr" podUID="28729487-1d7c-4837-961a-6cb084bf543f" 
containerName="oauth-openshift" containerID="cri-o://b0611dc8723d0311cc21d4c09c05c710b9bc164a943da5cca522b147cfaa1608" gracePeriod=15 Mar 13 12:58:36.246613 master-0 kubenswrapper[28149]: I0313 12:58:36.246553 28149 patch_prober.go:28] interesting pod/console-675489948b-wtbzr container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.128.0.99:8443/health\": dial tcp 10.128.0.99:8443: connect: connection refused" start-of-body= Mar 13 12:58:36.247192 master-0 kubenswrapper[28149]: I0313 12:58:36.246619 28149 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-675489948b-wtbzr" podUID="3f8a5f1b-3890-40cb-9c51-72d9b40142de" containerName="console" probeResult="failure" output="Get \"https://10.128.0.99:8443/health\": dial tcp 10.128.0.99:8443: connect: connection refused" Mar 13 12:58:37.190793 master-0 kubenswrapper[28149]: I0313 12:58:37.190729 28149 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-5b845478f4-2dqdf" podUID="49abaf10-6497-4c58-8a80-1a598caa2999" containerName="console" containerID="cri-o://e76bf3fa14ab2ec6806e9549a1b7607ccea467b27926fd4c96d6982a827c0188" gracePeriod=15 Mar 13 12:58:37.279034 master-0 kubenswrapper[28149]: I0313 12:58:37.278341 28149 patch_prober.go:28] interesting pod/route-controller-manager-68c48d4f7d-k7drw container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.128.0.75:8443/healthz\": dial tcp 10.128.0.75:8443: connect: connection refused" start-of-body= Mar 13 12:58:37.279034 master-0 kubenswrapper[28149]: I0313 12:58:37.278509 28149 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-68c48d4f7d-k7drw" podUID="18ffa620-dacc-4b09-be04-2c325f860813" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.128.0.75:8443/healthz\": dial 
tcp 10.128.0.75:8443: connect: connection refused" Mar 13 12:58:37.279034 master-0 kubenswrapper[28149]: I0313 12:58:37.278779 28149 patch_prober.go:28] interesting pod/controller-manager-54c79cbfcc-cxhmh container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.128.0.74:8443/healthz\": dial tcp 10.128.0.74:8443: connect: connection refused" start-of-body= Mar 13 12:58:37.279034 master-0 kubenswrapper[28149]: I0313 12:58:37.278900 28149 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-54c79cbfcc-cxhmh" podUID="a454234a-6c8e-4916-81e8-c9e66cec9d31" containerName="controller-manager" probeResult="failure" output="Get \"https://10.128.0.74:8443/healthz\": dial tcp 10.128.0.74:8443: connect: connection refused" Mar 13 12:58:38.063804 master-0 kubenswrapper[28149]: I0313 12:58:38.063614 28149 patch_prober.go:28] interesting pod/oauth-openshift-86477f577f-glgzr container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.128.0.85:6443/healthz\": dial tcp 10.128.0.85:6443: connect: connection refused" start-of-body= Mar 13 12:58:38.063804 master-0 kubenswrapper[28149]: I0313 12:58:38.063779 28149 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-86477f577f-glgzr" podUID="28729487-1d7c-4837-961a-6cb084bf543f" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.128.0.85:6443/healthz\": dial tcp 10.128.0.85:6443: connect: connection refused" Mar 13 12:58:38.176006 master-0 kubenswrapper[28149]: I0313 12:58:38.175937 28149 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-network-console/networking-console-plugin-5cbd49d755-mpvnf"] Mar 13 12:58:38.177491 master-0 kubenswrapper[28149]: I0313 12:58:38.177460 28149 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-5cbd49d755-mpvnf" Mar 13 12:58:38.184700 master-0 kubenswrapper[28149]: I0313 12:58:38.182464 28149 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Mar 13 12:58:38.184700 master-0 kubenswrapper[28149]: I0313 12:58:38.182685 28149 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Mar 13 12:58:38.185816 master-0 kubenswrapper[28149]: I0313 12:58:38.185223 28149 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-network-console/networking-console-plugin-5cbd49d755-mpvnf"] Mar 13 12:58:38.222186 master-0 kubenswrapper[28149]: I0313 12:58:38.209070 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/cddccab4-91b6-4bcf-a5d3-fd8014dffda6-nginx-conf\") pod \"networking-console-plugin-5cbd49d755-mpvnf\" (UID: \"cddccab4-91b6-4bcf-a5d3-fd8014dffda6\") " pod="openshift-network-console/networking-console-plugin-5cbd49d755-mpvnf" Mar 13 12:58:38.222186 master-0 kubenswrapper[28149]: I0313 12:58:38.209321 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/cddccab4-91b6-4bcf-a5d3-fd8014dffda6-networking-console-plugin-cert\") pod \"networking-console-plugin-5cbd49d755-mpvnf\" (UID: \"cddccab4-91b6-4bcf-a5d3-fd8014dffda6\") " pod="openshift-network-console/networking-console-plugin-5cbd49d755-mpvnf" Mar 13 12:58:38.312177 master-0 kubenswrapper[28149]: I0313 12:58:38.310637 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/cddccab4-91b6-4bcf-a5d3-fd8014dffda6-networking-console-plugin-cert\") pod 
\"networking-console-plugin-5cbd49d755-mpvnf\" (UID: \"cddccab4-91b6-4bcf-a5d3-fd8014dffda6\") " pod="openshift-network-console/networking-console-plugin-5cbd49d755-mpvnf" Mar 13 12:58:38.312177 master-0 kubenswrapper[28149]: I0313 12:58:38.310717 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/cddccab4-91b6-4bcf-a5d3-fd8014dffda6-nginx-conf\") pod \"networking-console-plugin-5cbd49d755-mpvnf\" (UID: \"cddccab4-91b6-4bcf-a5d3-fd8014dffda6\") " pod="openshift-network-console/networking-console-plugin-5cbd49d755-mpvnf" Mar 13 12:58:38.312177 master-0 kubenswrapper[28149]: I0313 12:58:38.311711 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/cddccab4-91b6-4bcf-a5d3-fd8014dffda6-nginx-conf\") pod \"networking-console-plugin-5cbd49d755-mpvnf\" (UID: \"cddccab4-91b6-4bcf-a5d3-fd8014dffda6\") " pod="openshift-network-console/networking-console-plugin-5cbd49d755-mpvnf" Mar 13 12:58:38.312863 master-0 kubenswrapper[28149]: E0313 12:58:38.312429 28149 secret.go:189] Couldn't get secret openshift-network-console/networking-console-plugin-cert: secret "networking-console-plugin-cert" not found Mar 13 12:58:38.312863 master-0 kubenswrapper[28149]: E0313 12:58:38.312575 28149 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/cddccab4-91b6-4bcf-a5d3-fd8014dffda6-networking-console-plugin-cert podName:cddccab4-91b6-4bcf-a5d3-fd8014dffda6 nodeName:}" failed. No retries permitted until 2026-03-13 12:58:38.812528786 +0000 UTC m=+292.465994005 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/cddccab4-91b6-4bcf-a5d3-fd8014dffda6-networking-console-plugin-cert") pod "networking-console-plugin-5cbd49d755-mpvnf" (UID: "cddccab4-91b6-4bcf-a5d3-fd8014dffda6") : secret "networking-console-plugin-cert" not found Mar 13 12:58:38.834208 master-0 kubenswrapper[28149]: I0313 12:58:38.834106 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/cddccab4-91b6-4bcf-a5d3-fd8014dffda6-networking-console-plugin-cert\") pod \"networking-console-plugin-5cbd49d755-mpvnf\" (UID: \"cddccab4-91b6-4bcf-a5d3-fd8014dffda6\") " pod="openshift-network-console/networking-console-plugin-5cbd49d755-mpvnf" Mar 13 12:58:38.839302 master-0 kubenswrapper[28149]: I0313 12:58:38.839237 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/cddccab4-91b6-4bcf-a5d3-fd8014dffda6-networking-console-plugin-cert\") pod \"networking-console-plugin-5cbd49d755-mpvnf\" (UID: \"cddccab4-91b6-4bcf-a5d3-fd8014dffda6\") " pod="openshift-network-console/networking-console-plugin-5cbd49d755-mpvnf" Mar 13 12:58:39.115439 master-0 kubenswrapper[28149]: I0313 12:58:39.115294 28149 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5cbd49d755-mpvnf" Mar 13 12:58:39.193673 master-0 kubenswrapper[28149]: I0313 12:58:39.193607 28149 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-675489948b-wtbzr"] Mar 13 12:58:39.241620 master-0 kubenswrapper[28149]: I0313 12:58:39.241561 28149 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-bc56f484d-2sbtm"] Mar 13 12:58:39.242518 master-0 kubenswrapper[28149]: I0313 12:58:39.242488 28149 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-bc56f484d-2sbtm" Mar 13 12:58:39.266014 master-0 kubenswrapper[28149]: I0313 12:58:39.265218 28149 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-bc56f484d-2sbtm"] Mar 13 12:58:39.288665 master-0 kubenswrapper[28149]: I0313 12:58:39.288609 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/14305d97-7b24-4321-bf3a-3ec79e52f6ea-console-config\") pod \"console-bc56f484d-2sbtm\" (UID: \"14305d97-7b24-4321-bf3a-3ec79e52f6ea\") " pod="openshift-console/console-bc56f484d-2sbtm" Mar 13 12:58:39.289335 master-0 kubenswrapper[28149]: I0313 12:58:39.288848 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/14305d97-7b24-4321-bf3a-3ec79e52f6ea-trusted-ca-bundle\") pod \"console-bc56f484d-2sbtm\" (UID: \"14305d97-7b24-4321-bf3a-3ec79e52f6ea\") " pod="openshift-console/console-bc56f484d-2sbtm" Mar 13 12:58:39.289335 master-0 kubenswrapper[28149]: I0313 12:58:39.288987 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/14305d97-7b24-4321-bf3a-3ec79e52f6ea-service-ca\") pod \"console-bc56f484d-2sbtm\" (UID: \"14305d97-7b24-4321-bf3a-3ec79e52f6ea\") " pod="openshift-console/console-bc56f484d-2sbtm" Mar 13 12:58:39.289335 master-0 kubenswrapper[28149]: I0313 12:58:39.289128 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/14305d97-7b24-4321-bf3a-3ec79e52f6ea-console-serving-cert\") pod \"console-bc56f484d-2sbtm\" (UID: \"14305d97-7b24-4321-bf3a-3ec79e52f6ea\") " pod="openshift-console/console-bc56f484d-2sbtm" Mar 13 12:58:39.289335 master-0 kubenswrapper[28149]: 
I0313 12:58:39.289178 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/14305d97-7b24-4321-bf3a-3ec79e52f6ea-console-oauth-config\") pod \"console-bc56f484d-2sbtm\" (UID: \"14305d97-7b24-4321-bf3a-3ec79e52f6ea\") " pod="openshift-console/console-bc56f484d-2sbtm" Mar 13 12:58:39.289335 master-0 kubenswrapper[28149]: I0313 12:58:39.289240 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/14305d97-7b24-4321-bf3a-3ec79e52f6ea-oauth-serving-cert\") pod \"console-bc56f484d-2sbtm\" (UID: \"14305d97-7b24-4321-bf3a-3ec79e52f6ea\") " pod="openshift-console/console-bc56f484d-2sbtm" Mar 13 12:58:39.289335 master-0 kubenswrapper[28149]: I0313 12:58:39.289273 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8sscj\" (UniqueName: \"kubernetes.io/projected/14305d97-7b24-4321-bf3a-3ec79e52f6ea-kube-api-access-8sscj\") pod \"console-bc56f484d-2sbtm\" (UID: \"14305d97-7b24-4321-bf3a-3ec79e52f6ea\") " pod="openshift-console/console-bc56f484d-2sbtm" Mar 13 12:58:39.349826 master-0 kubenswrapper[28149]: I0313 12:58:39.349667 28149 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-monitoring/prometheus-k8s-0" Mar 13 12:58:39.391035 master-0 kubenswrapper[28149]: I0313 12:58:39.390915 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/14305d97-7b24-4321-bf3a-3ec79e52f6ea-trusted-ca-bundle\") pod \"console-bc56f484d-2sbtm\" (UID: \"14305d97-7b24-4321-bf3a-3ec79e52f6ea\") " pod="openshift-console/console-bc56f484d-2sbtm" Mar 13 12:58:39.391463 master-0 kubenswrapper[28149]: I0313 12:58:39.391417 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"service-ca\" (UniqueName: \"kubernetes.io/configmap/14305d97-7b24-4321-bf3a-3ec79e52f6ea-service-ca\") pod \"console-bc56f484d-2sbtm\" (UID: \"14305d97-7b24-4321-bf3a-3ec79e52f6ea\") " pod="openshift-console/console-bc56f484d-2sbtm" Mar 13 12:58:39.391789 master-0 kubenswrapper[28149]: I0313 12:58:39.391749 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/14305d97-7b24-4321-bf3a-3ec79e52f6ea-console-serving-cert\") pod \"console-bc56f484d-2sbtm\" (UID: \"14305d97-7b24-4321-bf3a-3ec79e52f6ea\") " pod="openshift-console/console-bc56f484d-2sbtm" Mar 13 12:58:39.391862 master-0 kubenswrapper[28149]: I0313 12:58:39.391810 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/14305d97-7b24-4321-bf3a-3ec79e52f6ea-console-oauth-config\") pod \"console-bc56f484d-2sbtm\" (UID: \"14305d97-7b24-4321-bf3a-3ec79e52f6ea\") " pod="openshift-console/console-bc56f484d-2sbtm" Mar 13 12:58:39.391909 master-0 kubenswrapper[28149]: I0313 12:58:39.391885 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/14305d97-7b24-4321-bf3a-3ec79e52f6ea-oauth-serving-cert\") pod \"console-bc56f484d-2sbtm\" (UID: \"14305d97-7b24-4321-bf3a-3ec79e52f6ea\") " pod="openshift-console/console-bc56f484d-2sbtm" Mar 13 12:58:39.391953 master-0 kubenswrapper[28149]: I0313 12:58:39.391938 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8sscj\" (UniqueName: \"kubernetes.io/projected/14305d97-7b24-4321-bf3a-3ec79e52f6ea-kube-api-access-8sscj\") pod \"console-bc56f484d-2sbtm\" (UID: \"14305d97-7b24-4321-bf3a-3ec79e52f6ea\") " pod="openshift-console/console-bc56f484d-2sbtm" Mar 13 12:58:39.392400 master-0 kubenswrapper[28149]: I0313 12:58:39.392372 28149 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/14305d97-7b24-4321-bf3a-3ec79e52f6ea-trusted-ca-bundle\") pod \"console-bc56f484d-2sbtm\" (UID: \"14305d97-7b24-4321-bf3a-3ec79e52f6ea\") " pod="openshift-console/console-bc56f484d-2sbtm" Mar 13 12:58:39.393474 master-0 kubenswrapper[28149]: I0313 12:58:39.392468 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/14305d97-7b24-4321-bf3a-3ec79e52f6ea-console-config\") pod \"console-bc56f484d-2sbtm\" (UID: \"14305d97-7b24-4321-bf3a-3ec79e52f6ea\") " pod="openshift-console/console-bc56f484d-2sbtm" Mar 13 12:58:39.393474 master-0 kubenswrapper[28149]: I0313 12:58:39.392602 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/14305d97-7b24-4321-bf3a-3ec79e52f6ea-service-ca\") pod \"console-bc56f484d-2sbtm\" (UID: \"14305d97-7b24-4321-bf3a-3ec79e52f6ea\") " pod="openshift-console/console-bc56f484d-2sbtm" Mar 13 12:58:39.393474 master-0 kubenswrapper[28149]: I0313 12:58:39.393066 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/14305d97-7b24-4321-bf3a-3ec79e52f6ea-oauth-serving-cert\") pod \"console-bc56f484d-2sbtm\" (UID: \"14305d97-7b24-4321-bf3a-3ec79e52f6ea\") " pod="openshift-console/console-bc56f484d-2sbtm" Mar 13 12:58:39.393474 master-0 kubenswrapper[28149]: I0313 12:58:39.393207 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/14305d97-7b24-4321-bf3a-3ec79e52f6ea-console-config\") pod \"console-bc56f484d-2sbtm\" (UID: \"14305d97-7b24-4321-bf3a-3ec79e52f6ea\") " pod="openshift-console/console-bc56f484d-2sbtm" Mar 13 12:58:39.424693 master-0 kubenswrapper[28149]: I0313 12:58:39.416662 28149 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/14305d97-7b24-4321-bf3a-3ec79e52f6ea-console-serving-cert\") pod \"console-bc56f484d-2sbtm\" (UID: \"14305d97-7b24-4321-bf3a-3ec79e52f6ea\") " pod="openshift-console/console-bc56f484d-2sbtm" Mar 13 12:58:39.424693 master-0 kubenswrapper[28149]: I0313 12:58:39.420721 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8sscj\" (UniqueName: \"kubernetes.io/projected/14305d97-7b24-4321-bf3a-3ec79e52f6ea-kube-api-access-8sscj\") pod \"console-bc56f484d-2sbtm\" (UID: \"14305d97-7b24-4321-bf3a-3ec79e52f6ea\") " pod="openshift-console/console-bc56f484d-2sbtm" Mar 13 12:58:39.457403 master-0 kubenswrapper[28149]: I0313 12:58:39.457351 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/14305d97-7b24-4321-bf3a-3ec79e52f6ea-console-oauth-config\") pod \"console-bc56f484d-2sbtm\" (UID: \"14305d97-7b24-4321-bf3a-3ec79e52f6ea\") " pod="openshift-console/console-bc56f484d-2sbtm" Mar 13 12:58:39.458423 master-0 kubenswrapper[28149]: I0313 12:58:39.458371 28149 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-monitoring/prometheus-k8s-0" Mar 13 12:58:39.586806 master-0 kubenswrapper[28149]: I0313 12:58:39.586736 28149 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-bc56f484d-2sbtm" Mar 13 12:58:40.007283 master-0 kubenswrapper[28149]: I0313 12:58:40.006185 28149 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-monitoring/prometheus-k8s-0" Mar 13 12:58:40.977482 master-0 kubenswrapper[28149]: I0313 12:58:40.977296 28149 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-59dc574b9f-z4gvv" podUID="ff344520-bb09-4f16-82be-273378ab0663" containerName="console" containerID="cri-o://733ed7fb689426e60593f6b467db5361babda5ddfa6ef2c0e3d7cdd1f7a25f7f" gracePeriod=15 Mar 13 12:58:44.750845 master-0 kubenswrapper[28149]: I0313 12:58:44.748740 28149 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-bc56f484d-2sbtm"] Mar 13 12:58:44.779666 master-0 kubenswrapper[28149]: I0313 12:58:44.779561 28149 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-575b4c697b-kjnzx"] Mar 13 12:58:44.781868 master-0 kubenswrapper[28149]: I0313 12:58:44.781839 28149 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-575b4c697b-kjnzx" Mar 13 12:58:44.797211 master-0 kubenswrapper[28149]: I0313 12:58:44.797015 28149 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-575b4c697b-kjnzx"] Mar 13 12:58:44.912644 master-0 kubenswrapper[28149]: I0313 12:58:44.912586 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/3c426507-418f-4258-bef6-4206640beb3d-console-serving-cert\") pod \"console-575b4c697b-kjnzx\" (UID: \"3c426507-418f-4258-bef6-4206640beb3d\") " pod="openshift-console/console-575b4c697b-kjnzx" Mar 13 12:58:44.912974 master-0 kubenswrapper[28149]: I0313 12:58:44.912949 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/3c426507-418f-4258-bef6-4206640beb3d-console-oauth-config\") pod \"console-575b4c697b-kjnzx\" (UID: \"3c426507-418f-4258-bef6-4206640beb3d\") " pod="openshift-console/console-575b4c697b-kjnzx" Mar 13 12:58:44.913357 master-0 kubenswrapper[28149]: I0313 12:58:44.913285 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/3c426507-418f-4258-bef6-4206640beb3d-oauth-serving-cert\") pod \"console-575b4c697b-kjnzx\" (UID: \"3c426507-418f-4258-bef6-4206640beb3d\") " pod="openshift-console/console-575b4c697b-kjnzx" Mar 13 12:58:44.913503 master-0 kubenswrapper[28149]: I0313 12:58:44.913484 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/3c426507-418f-4258-bef6-4206640beb3d-console-config\") pod \"console-575b4c697b-kjnzx\" (UID: \"3c426507-418f-4258-bef6-4206640beb3d\") " pod="openshift-console/console-575b4c697b-kjnzx" Mar 13 12:58:44.913687 
master-0 kubenswrapper[28149]: I0313 12:58:44.913661 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3c426507-418f-4258-bef6-4206640beb3d-trusted-ca-bundle\") pod \"console-575b4c697b-kjnzx\" (UID: \"3c426507-418f-4258-bef6-4206640beb3d\") " pod="openshift-console/console-575b4c697b-kjnzx" Mar 13 12:58:44.913805 master-0 kubenswrapper[28149]: I0313 12:58:44.913790 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/3c426507-418f-4258-bef6-4206640beb3d-service-ca\") pod \"console-575b4c697b-kjnzx\" (UID: \"3c426507-418f-4258-bef6-4206640beb3d\") " pod="openshift-console/console-575b4c697b-kjnzx" Mar 13 12:58:44.913994 master-0 kubenswrapper[28149]: I0313 12:58:44.913976 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sxc85\" (UniqueName: \"kubernetes.io/projected/3c426507-418f-4258-bef6-4206640beb3d-kube-api-access-sxc85\") pod \"console-575b4c697b-kjnzx\" (UID: \"3c426507-418f-4258-bef6-4206640beb3d\") " pod="openshift-console/console-575b4c697b-kjnzx" Mar 13 12:58:45.016358 master-0 kubenswrapper[28149]: I0313 12:58:45.016137 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sxc85\" (UniqueName: \"kubernetes.io/projected/3c426507-418f-4258-bef6-4206640beb3d-kube-api-access-sxc85\") pod \"console-575b4c697b-kjnzx\" (UID: \"3c426507-418f-4258-bef6-4206640beb3d\") " pod="openshift-console/console-575b4c697b-kjnzx" Mar 13 12:58:45.016358 master-0 kubenswrapper[28149]: I0313 12:58:45.016330 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/3c426507-418f-4258-bef6-4206640beb3d-console-serving-cert\") pod \"console-575b4c697b-kjnzx\" (UID: 
\"3c426507-418f-4258-bef6-4206640beb3d\") " pod="openshift-console/console-575b4c697b-kjnzx" Mar 13 12:58:45.016358 master-0 kubenswrapper[28149]: I0313 12:58:45.016358 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/3c426507-418f-4258-bef6-4206640beb3d-console-oauth-config\") pod \"console-575b4c697b-kjnzx\" (UID: \"3c426507-418f-4258-bef6-4206640beb3d\") " pod="openshift-console/console-575b4c697b-kjnzx" Mar 13 12:58:45.016673 master-0 kubenswrapper[28149]: I0313 12:58:45.016395 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/3c426507-418f-4258-bef6-4206640beb3d-oauth-serving-cert\") pod \"console-575b4c697b-kjnzx\" (UID: \"3c426507-418f-4258-bef6-4206640beb3d\") " pod="openshift-console/console-575b4c697b-kjnzx" Mar 13 12:58:45.016673 master-0 kubenswrapper[28149]: I0313 12:58:45.016444 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/3c426507-418f-4258-bef6-4206640beb3d-console-config\") pod \"console-575b4c697b-kjnzx\" (UID: \"3c426507-418f-4258-bef6-4206640beb3d\") " pod="openshift-console/console-575b4c697b-kjnzx" Mar 13 12:58:45.016673 master-0 kubenswrapper[28149]: I0313 12:58:45.016512 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3c426507-418f-4258-bef6-4206640beb3d-trusted-ca-bundle\") pod \"console-575b4c697b-kjnzx\" (UID: \"3c426507-418f-4258-bef6-4206640beb3d\") " pod="openshift-console/console-575b4c697b-kjnzx" Mar 13 12:58:45.016673 master-0 kubenswrapper[28149]: I0313 12:58:45.016537 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/3c426507-418f-4258-bef6-4206640beb3d-service-ca\") pod 
\"console-575b4c697b-kjnzx\" (UID: \"3c426507-418f-4258-bef6-4206640beb3d\") " pod="openshift-console/console-575b4c697b-kjnzx" Mar 13 12:58:45.017742 master-0 kubenswrapper[28149]: I0313 12:58:45.017607 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/3c426507-418f-4258-bef6-4206640beb3d-service-ca\") pod \"console-575b4c697b-kjnzx\" (UID: \"3c426507-418f-4258-bef6-4206640beb3d\") " pod="openshift-console/console-575b4c697b-kjnzx" Mar 13 12:58:45.017809 master-0 kubenswrapper[28149]: I0313 12:58:45.017783 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/3c426507-418f-4258-bef6-4206640beb3d-oauth-serving-cert\") pod \"console-575b4c697b-kjnzx\" (UID: \"3c426507-418f-4258-bef6-4206640beb3d\") " pod="openshift-console/console-575b4c697b-kjnzx" Mar 13 12:58:45.018215 master-0 kubenswrapper[28149]: I0313 12:58:45.018055 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/3c426507-418f-4258-bef6-4206640beb3d-console-config\") pod \"console-575b4c697b-kjnzx\" (UID: \"3c426507-418f-4258-bef6-4206640beb3d\") " pod="openshift-console/console-575b4c697b-kjnzx" Mar 13 12:58:45.018723 master-0 kubenswrapper[28149]: I0313 12:58:45.018670 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3c426507-418f-4258-bef6-4206640beb3d-trusted-ca-bundle\") pod \"console-575b4c697b-kjnzx\" (UID: \"3c426507-418f-4258-bef6-4206640beb3d\") " pod="openshift-console/console-575b4c697b-kjnzx" Mar 13 12:58:45.020679 master-0 kubenswrapper[28149]: I0313 12:58:45.020639 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/3c426507-418f-4258-bef6-4206640beb3d-console-serving-cert\") pod 
\"console-575b4c697b-kjnzx\" (UID: \"3c426507-418f-4258-bef6-4206640beb3d\") " pod="openshift-console/console-575b4c697b-kjnzx" Mar 13 12:58:45.037677 master-0 kubenswrapper[28149]: I0313 12:58:45.037316 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/3c426507-418f-4258-bef6-4206640beb3d-console-oauth-config\") pod \"console-575b4c697b-kjnzx\" (UID: \"3c426507-418f-4258-bef6-4206640beb3d\") " pod="openshift-console/console-575b4c697b-kjnzx" Mar 13 12:58:45.037677 master-0 kubenswrapper[28149]: I0313 12:58:45.037582 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sxc85\" (UniqueName: \"kubernetes.io/projected/3c426507-418f-4258-bef6-4206640beb3d-kube-api-access-sxc85\") pod \"console-575b4c697b-kjnzx\" (UID: \"3c426507-418f-4258-bef6-4206640beb3d\") " pod="openshift-console/console-575b4c697b-kjnzx" Mar 13 12:58:45.159499 master-0 kubenswrapper[28149]: I0313 12:58:45.131475 28149 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-575b4c697b-kjnzx" Mar 13 12:58:47.278476 master-0 kubenswrapper[28149]: I0313 12:58:47.278387 28149 patch_prober.go:28] interesting pod/controller-manager-54c79cbfcc-cxhmh container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.128.0.74:8443/healthz\": dial tcp 10.128.0.74:8443: connect: connection refused" start-of-body= Mar 13 12:58:47.279443 master-0 kubenswrapper[28149]: I0313 12:58:47.278490 28149 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-54c79cbfcc-cxhmh" podUID="a454234a-6c8e-4916-81e8-c9e66cec9d31" containerName="controller-manager" probeResult="failure" output="Get \"https://10.128.0.74:8443/healthz\": dial tcp 10.128.0.74:8443: connect: connection refused" Mar 13 12:58:47.279443 master-0 kubenswrapper[28149]: I0313 12:58:47.278500 28149 patch_prober.go:28] interesting pod/route-controller-manager-68c48d4f7d-k7drw container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.128.0.75:8443/healthz\": dial tcp 10.128.0.75:8443: connect: connection refused" start-of-body= Mar 13 12:58:47.279443 master-0 kubenswrapper[28149]: I0313 12:58:47.279388 28149 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-68c48d4f7d-k7drw" podUID="18ffa620-dacc-4b09-be04-2c325f860813" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.128.0.75:8443/healthz\": dial tcp 10.128.0.75:8443: connect: connection refused" Mar 13 12:58:48.063706 master-0 kubenswrapper[28149]: I0313 12:58:48.063604 28149 patch_prober.go:28] interesting pod/oauth-openshift-86477f577f-glgzr container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.128.0.85:6443/healthz\": dial tcp 
10.128.0.85:6443: connect: connection refused" start-of-body= Mar 13 12:58:48.064552 master-0 kubenswrapper[28149]: I0313 12:58:48.063745 28149 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-86477f577f-glgzr" podUID="28729487-1d7c-4837-961a-6cb084bf543f" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.128.0.85:6443/healthz\": dial tcp 10.128.0.85:6443: connect: connection refused" Mar 13 12:58:48.315703 master-0 kubenswrapper[28149]: I0313 12:58:48.315552 28149 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-monitoring/alertmanager-main-0"] Mar 13 12:58:48.316246 master-0 kubenswrapper[28149]: I0313 12:58:48.315927 28149 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-monitoring/alertmanager-main-0" podUID="2f3bb9a1-578c-424d-8610-272c76bf0a31" containerName="alertmanager" containerID="cri-o://f74af22c0ca8f8e819d7400ce092f1739fd376dfac5a59c6965272f819826259" gracePeriod=120 Mar 13 12:58:48.316246 master-0 kubenswrapper[28149]: I0313 12:58:48.316061 28149 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-monitoring/alertmanager-main-0" podUID="2f3bb9a1-578c-424d-8610-272c76bf0a31" containerName="kube-rbac-proxy-metric" containerID="cri-o://7dd10e13896d38c7749f73eefc059837470346b27a38f805670f771fbf9f3a5b" gracePeriod=120 Mar 13 12:58:48.316246 master-0 kubenswrapper[28149]: I0313 12:58:48.316069 28149 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-monitoring/alertmanager-main-0" podUID="2f3bb9a1-578c-424d-8610-272c76bf0a31" containerName="config-reloader" containerID="cri-o://97b78ee43c5c89a2d71b74ef2ecb91a65f6a6755b1c17459c639707d4fb210e0" gracePeriod=120 Mar 13 12:58:48.316246 master-0 kubenswrapper[28149]: I0313 12:58:48.316085 28149 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-monitoring/alertmanager-main-0" 
podUID="2f3bb9a1-578c-424d-8610-272c76bf0a31" containerName="kube-rbac-proxy" containerID="cri-o://7fac4b13da0c9218a2d360a15d1b9a121be7df83e76ca77102bae92375944047" gracePeriod=120 Mar 13 12:58:48.316402 master-0 kubenswrapper[28149]: I0313 12:58:48.316195 28149 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-monitoring/alertmanager-main-0" podUID="2f3bb9a1-578c-424d-8610-272c76bf0a31" containerName="prom-label-proxy" containerID="cri-o://5cc9e7441e740fff5f741be6daad5d138ffea5e2e64c5940499b9608e8f77091" gracePeriod=120 Mar 13 12:58:48.316402 master-0 kubenswrapper[28149]: I0313 12:58:48.316069 28149 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-monitoring/alertmanager-main-0" podUID="2f3bb9a1-578c-424d-8610-272c76bf0a31" containerName="kube-rbac-proxy-web" containerID="cri-o://5a811aa709e7e6cab65974d4b7d6cfc5df6270e7d4beaeb77dc07477461527b3" gracePeriod=120 Mar 13 12:58:48.734453 master-0 kubenswrapper[28149]: I0313 12:58:48.734339 28149 scope.go:117] "RemoveContainer" containerID="45b191ee613240af89dae5f40970afaf7896448c3e2a3a3165bd85645b5d7288" Mar 13 12:58:50.049551 master-0 kubenswrapper[28149]: I0313 12:58:50.049514 28149 scope.go:117] "RemoveContainer" containerID="ad6b6be249a4b35bc319cc0c698c9b937c8df08adaedc5da969d7d3c63154f97" Mar 13 12:58:50.367939 master-0 kubenswrapper[28149]: I0313 12:58:50.367890 28149 generic.go:334] "Generic (PLEG): container finished" podID="18ffa620-dacc-4b09-be04-2c325f860813" containerID="e08ebb9b72b3d839ad590a0420d611fa422a407a310320bdb128182aa8a60b33" exitCode=0 Mar 13 12:58:50.368059 master-0 kubenswrapper[28149]: I0313 12:58:50.367983 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-68c48d4f7d-k7drw" event={"ID":"18ffa620-dacc-4b09-be04-2c325f860813","Type":"ContainerDied","Data":"e08ebb9b72b3d839ad590a0420d611fa422a407a310320bdb128182aa8a60b33"} Mar 13 12:58:50.368059 master-0 
kubenswrapper[28149]: I0313 12:58:50.368027 28149 scope.go:117] "RemoveContainer" containerID="bf5764c3d8fba8c40cba1931dc4f8b36f32584d349bb0fa8f02b7c483a7626de" Mar 13 12:58:50.387078 master-0 kubenswrapper[28149]: I0313 12:58:50.386998 28149 generic.go:334] "Generic (PLEG): container finished" podID="2f3bb9a1-578c-424d-8610-272c76bf0a31" containerID="5cc9e7441e740fff5f741be6daad5d138ffea5e2e64c5940499b9608e8f77091" exitCode=0 Mar 13 12:58:50.387078 master-0 kubenswrapper[28149]: I0313 12:58:50.387059 28149 generic.go:334] "Generic (PLEG): container finished" podID="2f3bb9a1-578c-424d-8610-272c76bf0a31" containerID="7fac4b13da0c9218a2d360a15d1b9a121be7df83e76ca77102bae92375944047" exitCode=0 Mar 13 12:58:50.387078 master-0 kubenswrapper[28149]: I0313 12:58:50.387068 28149 generic.go:334] "Generic (PLEG): container finished" podID="2f3bb9a1-578c-424d-8610-272c76bf0a31" containerID="5a811aa709e7e6cab65974d4b7d6cfc5df6270e7d4beaeb77dc07477461527b3" exitCode=0 Mar 13 12:58:50.387078 master-0 kubenswrapper[28149]: I0313 12:58:50.387077 28149 generic.go:334] "Generic (PLEG): container finished" podID="2f3bb9a1-578c-424d-8610-272c76bf0a31" containerID="97b78ee43c5c89a2d71b74ef2ecb91a65f6a6755b1c17459c639707d4fb210e0" exitCode=0 Mar 13 12:58:50.387515 master-0 kubenswrapper[28149]: I0313 12:58:50.387149 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"2f3bb9a1-578c-424d-8610-272c76bf0a31","Type":"ContainerDied","Data":"5cc9e7441e740fff5f741be6daad5d138ffea5e2e64c5940499b9608e8f77091"} Mar 13 12:58:50.387515 master-0 kubenswrapper[28149]: I0313 12:58:50.387193 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"2f3bb9a1-578c-424d-8610-272c76bf0a31","Type":"ContainerDied","Data":"7fac4b13da0c9218a2d360a15d1b9a121be7df83e76ca77102bae92375944047"} Mar 13 12:58:50.387515 master-0 kubenswrapper[28149]: I0313 12:58:50.387208 28149 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"2f3bb9a1-578c-424d-8610-272c76bf0a31","Type":"ContainerDied","Data":"5a811aa709e7e6cab65974d4b7d6cfc5df6270e7d4beaeb77dc07477461527b3"} Mar 13 12:58:50.387515 master-0 kubenswrapper[28149]: I0313 12:58:50.387218 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"2f3bb9a1-578c-424d-8610-272c76bf0a31","Type":"ContainerDied","Data":"97b78ee43c5c89a2d71b74ef2ecb91a65f6a6755b1c17459c639707d4fb210e0"} Mar 13 12:58:50.389346 master-0 kubenswrapper[28149]: I0313 12:58:50.389266 28149 scope.go:117] "RemoveContainer" containerID="52372f90f3e518110cf1e64b9ff43ecce31d8c11b62d3766c284ad38e957707b" Mar 13 12:58:50.391291 master-0 kubenswrapper[28149]: I0313 12:58:50.391267 28149 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-5b845478f4-2dqdf_49abaf10-6497-4c58-8a80-1a598caa2999/console/0.log" Mar 13 12:58:50.391402 master-0 kubenswrapper[28149]: I0313 12:58:50.391311 28149 generic.go:334] "Generic (PLEG): container finished" podID="49abaf10-6497-4c58-8a80-1a598caa2999" containerID="e76bf3fa14ab2ec6806e9549a1b7607ccea467b27926fd4c96d6982a827c0188" exitCode=2 Mar 13 12:58:50.391402 master-0 kubenswrapper[28149]: I0313 12:58:50.391364 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-5b845478f4-2dqdf" event={"ID":"49abaf10-6497-4c58-8a80-1a598caa2999","Type":"ContainerDied","Data":"e76bf3fa14ab2ec6806e9549a1b7607ccea467b27926fd4c96d6982a827c0188"} Mar 13 12:58:50.394500 master-0 kubenswrapper[28149]: I0313 12:58:50.394458 28149 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-59dc574b9f-z4gvv_ff344520-bb09-4f16-82be-273378ab0663/console/0.log" Mar 13 12:58:50.394500 master-0 kubenswrapper[28149]: I0313 12:58:50.394497 28149 generic.go:334] "Generic (PLEG): container finished" 
podID="ff344520-bb09-4f16-82be-273378ab0663" containerID="733ed7fb689426e60593f6b467db5361babda5ddfa6ef2c0e3d7cdd1f7a25f7f" exitCode=2
Mar 13 12:58:50.394672 master-0 kubenswrapper[28149]: I0313 12:58:50.394614 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-59dc574b9f-z4gvv" event={"ID":"ff344520-bb09-4f16-82be-273378ab0663","Type":"ContainerDied","Data":"733ed7fb689426e60593f6b467db5361babda5ddfa6ef2c0e3d7cdd1f7a25f7f"}
Mar 13 12:58:50.397582 master-0 kubenswrapper[28149]: I0313 12:58:50.397402 28149 generic.go:334] "Generic (PLEG): container finished" podID="28729487-1d7c-4837-961a-6cb084bf543f" containerID="b0611dc8723d0311cc21d4c09c05c710b9bc164a943da5cca522b147cfaa1608" exitCode=0
Mar 13 12:58:50.397582 master-0 kubenswrapper[28149]: I0313 12:58:50.397457 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-86477f577f-glgzr" event={"ID":"28729487-1d7c-4837-961a-6cb084bf543f","Type":"ContainerDied","Data":"b0611dc8723d0311cc21d4c09c05c710b9bc164a943da5cca522b147cfaa1608"}
Mar 13 12:58:50.402202 master-0 kubenswrapper[28149]: I0313 12:58:50.401684 28149 generic.go:334] "Generic (PLEG): container finished" podID="a454234a-6c8e-4916-81e8-c9e66cec9d31" containerID="338937b0ebb757bdee738361c73af8d323aeef4fa0eb7edfc9e3a14cb3dcc3f8" exitCode=0
Mar 13 12:58:50.402202 master-0 kubenswrapper[28149]: I0313 12:58:50.401774 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-54c79cbfcc-cxhmh" event={"ID":"a454234a-6c8e-4916-81e8-c9e66cec9d31","Type":"ContainerDied","Data":"338937b0ebb757bdee738361c73af8d323aeef4fa0eb7edfc9e3a14cb3dcc3f8"}
Mar 13 12:58:50.483999 master-0 kubenswrapper[28149]: I0313 12:58:50.483929 28149 scope.go:117] "RemoveContainer" containerID="1b406ee46971e490792a19b63a98c585c578548f473b720d5b7cd5c729eda7ae"
Mar 13 12:58:50.484249 master-0 kubenswrapper[28149]: I0313 12:58:50.484076 28149 scope.go:117] "RemoveContainer" containerID="f12fef74127c1c2b2f8ceb210e754cc92619ab36c1f145fe9d244f8d84cfb88c"
Mar 13 12:58:50.521869 master-0 kubenswrapper[28149]: I0313 12:58:50.517838 28149 kubelet.go:1505] "Image garbage collection succeeded"
Mar 13 12:58:50.933460 master-0 kubenswrapper[28149]: I0313 12:58:50.931443 28149 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-5b845478f4-2dqdf_49abaf10-6497-4c58-8a80-1a598caa2999/console/0.log"
Mar 13 12:58:50.933460 master-0 kubenswrapper[28149]: I0313 12:58:50.931540 28149 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-5b845478f4-2dqdf"
Mar 13 12:58:51.083261 master-0 kubenswrapper[28149]: I0313 12:58:51.083203 28149 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/49abaf10-6497-4c58-8a80-1a598caa2999-service-ca\") pod \"49abaf10-6497-4c58-8a80-1a598caa2999\" (UID: \"49abaf10-6497-4c58-8a80-1a598caa2999\") "
Mar 13 12:58:51.083795 master-0 kubenswrapper[28149]: I0313 12:58:51.083284 28149 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/49abaf10-6497-4c58-8a80-1a598caa2999-console-serving-cert\") pod \"49abaf10-6497-4c58-8a80-1a598caa2999\" (UID: \"49abaf10-6497-4c58-8a80-1a598caa2999\") "
Mar 13 12:58:51.083795 master-0 kubenswrapper[28149]: I0313 12:58:51.083400 28149 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/49abaf10-6497-4c58-8a80-1a598caa2999-console-oauth-config\") pod \"49abaf10-6497-4c58-8a80-1a598caa2999\" (UID: \"49abaf10-6497-4c58-8a80-1a598caa2999\") "
Mar 13 12:58:51.083795 master-0 kubenswrapper[28149]: I0313 12:58:51.083441 28149 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jtrd5\" (UniqueName: \"kubernetes.io/projected/49abaf10-6497-4c58-8a80-1a598caa2999-kube-api-access-jtrd5\") pod \"49abaf10-6497-4c58-8a80-1a598caa2999\" (UID: \"49abaf10-6497-4c58-8a80-1a598caa2999\") "
Mar 13 12:58:51.083795 master-0 kubenswrapper[28149]: I0313 12:58:51.083483 28149 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/49abaf10-6497-4c58-8a80-1a598caa2999-console-config\") pod \"49abaf10-6497-4c58-8a80-1a598caa2999\" (UID: \"49abaf10-6497-4c58-8a80-1a598caa2999\") "
Mar 13 12:58:51.083795 master-0 kubenswrapper[28149]: I0313 12:58:51.083501 28149 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/49abaf10-6497-4c58-8a80-1a598caa2999-oauth-serving-cert\") pod \"49abaf10-6497-4c58-8a80-1a598caa2999\" (UID: \"49abaf10-6497-4c58-8a80-1a598caa2999\") "
Mar 13 12:58:51.086162 master-0 kubenswrapper[28149]: I0313 12:58:51.086095 28149 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49abaf10-6497-4c58-8a80-1a598caa2999-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "49abaf10-6497-4c58-8a80-1a598caa2999" (UID: "49abaf10-6497-4c58-8a80-1a598caa2999"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 13 12:58:51.086632 master-0 kubenswrapper[28149]: I0313 12:58:51.086596 28149 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49abaf10-6497-4c58-8a80-1a598caa2999-console-config" (OuterVolumeSpecName: "console-config") pod "49abaf10-6497-4c58-8a80-1a598caa2999" (UID: "49abaf10-6497-4c58-8a80-1a598caa2999"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 13 12:58:51.087025 master-0 kubenswrapper[28149]: I0313 12:58:51.086968 28149 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49abaf10-6497-4c58-8a80-1a598caa2999-service-ca" (OuterVolumeSpecName: "service-ca") pod "49abaf10-6497-4c58-8a80-1a598caa2999" (UID: "49abaf10-6497-4c58-8a80-1a598caa2999"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 13 12:58:51.089829 master-0 kubenswrapper[28149]: I0313 12:58:51.089575 28149 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49abaf10-6497-4c58-8a80-1a598caa2999-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "49abaf10-6497-4c58-8a80-1a598caa2999" (UID: "49abaf10-6497-4c58-8a80-1a598caa2999"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 13 12:58:51.095403 master-0 kubenswrapper[28149]: I0313 12:58:51.095346 28149 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49abaf10-6497-4c58-8a80-1a598caa2999-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "49abaf10-6497-4c58-8a80-1a598caa2999" (UID: "49abaf10-6497-4c58-8a80-1a598caa2999"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 13 12:58:51.097527 master-0 kubenswrapper[28149]: I0313 12:58:51.097496 28149 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49abaf10-6497-4c58-8a80-1a598caa2999-kube-api-access-jtrd5" (OuterVolumeSpecName: "kube-api-access-jtrd5") pod "49abaf10-6497-4c58-8a80-1a598caa2999" (UID: "49abaf10-6497-4c58-8a80-1a598caa2999"). InnerVolumeSpecName "kube-api-access-jtrd5". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 13 12:58:51.185085 master-0 kubenswrapper[28149]: I0313 12:58:51.185011 28149 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/49abaf10-6497-4c58-8a80-1a598caa2999-service-ca\") on node \"master-0\" DevicePath \"\""
Mar 13 12:58:51.185085 master-0 kubenswrapper[28149]: I0313 12:58:51.185066 28149 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/49abaf10-6497-4c58-8a80-1a598caa2999-console-serving-cert\") on node \"master-0\" DevicePath \"\""
Mar 13 12:58:51.185085 master-0 kubenswrapper[28149]: I0313 12:58:51.185083 28149 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/49abaf10-6497-4c58-8a80-1a598caa2999-console-oauth-config\") on node \"master-0\" DevicePath \"\""
Mar 13 12:58:51.185085 master-0 kubenswrapper[28149]: I0313 12:58:51.185096 28149 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jtrd5\" (UniqueName: \"kubernetes.io/projected/49abaf10-6497-4c58-8a80-1a598caa2999-kube-api-access-jtrd5\") on node \"master-0\" DevicePath \"\""
Mar 13 12:58:51.185524 master-0 kubenswrapper[28149]: I0313 12:58:51.185109 28149 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/49abaf10-6497-4c58-8a80-1a598caa2999-console-config\") on node \"master-0\" DevicePath \"\""
Mar 13 12:58:51.185524 master-0 kubenswrapper[28149]: I0313 12:58:51.185121 28149 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/49abaf10-6497-4c58-8a80-1a598caa2999-oauth-serving-cert\") on node \"master-0\" DevicePath \"\""
Mar 13 12:58:51.323849 master-0 kubenswrapper[28149]: I0313 12:58:51.323822 28149 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-54c79cbfcc-cxhmh"
Mar 13 12:58:51.333216 master-0 kubenswrapper[28149]: I0313 12:58:51.331612 28149 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-68c48d4f7d-k7drw"
Mar 13 12:58:51.341966 master-0 kubenswrapper[28149]: I0313 12:58:51.341849 28149 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-86477f577f-glgzr"
Mar 13 12:58:51.386956 master-0 kubenswrapper[28149]: I0313 12:58:51.386897 28149 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/28729487-1d7c-4837-961a-6cb084bf543f-v4-0-config-system-service-ca\") pod \"28729487-1d7c-4837-961a-6cb084bf543f\" (UID: \"28729487-1d7c-4837-961a-6cb084bf543f\") "
Mar 13 12:58:51.387287 master-0 kubenswrapper[28149]: I0313 12:58:51.387260 28149 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/28729487-1d7c-4837-961a-6cb084bf543f-v4-0-config-system-trusted-ca-bundle\") pod \"28729487-1d7c-4837-961a-6cb084bf543f\" (UID: \"28729487-1d7c-4837-961a-6cb084bf543f\") "
Mar 13 12:58:51.387506 master-0 kubenswrapper[28149]: I0313 12:58:51.387490 28149 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/28729487-1d7c-4837-961a-6cb084bf543f-v4-0-config-system-router-certs\") pod \"28729487-1d7c-4837-961a-6cb084bf543f\" (UID: \"28729487-1d7c-4837-961a-6cb084bf543f\") "
Mar 13 12:58:51.388191 master-0 kubenswrapper[28149]: I0313 12:58:51.387655 28149 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kn8f2\" (UniqueName: \"kubernetes.io/projected/a454234a-6c8e-4916-81e8-c9e66cec9d31-kube-api-access-kn8f2\") pod \"a454234a-6c8e-4916-81e8-c9e66cec9d31\" (UID: \"a454234a-6c8e-4916-81e8-c9e66cec9d31\") "
Mar 13 12:58:51.388350 master-0 kubenswrapper[28149]: I0313 12:58:51.388319 28149 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f8rk6\" (UniqueName: \"kubernetes.io/projected/28729487-1d7c-4837-961a-6cb084bf543f-kube-api-access-f8rk6\") pod \"28729487-1d7c-4837-961a-6cb084bf543f\" (UID: \"28729487-1d7c-4837-961a-6cb084bf543f\") "
Mar 13 12:58:51.388472 master-0 kubenswrapper[28149]: I0313 12:58:51.388457 28149 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/28729487-1d7c-4837-961a-6cb084bf543f-v4-0-config-user-template-login\") pod \"28729487-1d7c-4837-961a-6cb084bf543f\" (UID: \"28729487-1d7c-4837-961a-6cb084bf543f\") "
Mar 13 12:58:51.388570 master-0 kubenswrapper[28149]: I0313 12:58:51.388557 28149 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/18ffa620-dacc-4b09-be04-2c325f860813-client-ca\") pod \"18ffa620-dacc-4b09-be04-2c325f860813\" (UID: \"18ffa620-dacc-4b09-be04-2c325f860813\") "
Mar 13 12:58:51.388677 master-0 kubenswrapper[28149]: I0313 12:58:51.388664 28149 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/18ffa620-dacc-4b09-be04-2c325f860813-serving-cert\") pod \"18ffa620-dacc-4b09-be04-2c325f860813\" (UID: \"18ffa620-dacc-4b09-be04-2c325f860813\") "
Mar 13 12:58:51.388798 master-0 kubenswrapper[28149]: I0313 12:58:51.388783 28149 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/28729487-1d7c-4837-961a-6cb084bf543f-v4-0-config-user-template-provider-selection\") pod \"28729487-1d7c-4837-961a-6cb084bf543f\" (UID: \"28729487-1d7c-4837-961a-6cb084bf543f\") "
Mar 13 12:58:51.388894 master-0 kubenswrapper[28149]: I0313 12:58:51.387508 28149 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/28729487-1d7c-4837-961a-6cb084bf543f-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "28729487-1d7c-4837-961a-6cb084bf543f" (UID: "28729487-1d7c-4837-961a-6cb084bf543f"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 13 12:58:51.388969 master-0 kubenswrapper[28149]: I0313 12:58:51.388434 28149 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/28729487-1d7c-4837-961a-6cb084bf543f-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "28729487-1d7c-4837-961a-6cb084bf543f" (UID: "28729487-1d7c-4837-961a-6cb084bf543f"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 13 12:58:51.389093 master-0 kubenswrapper[28149]: I0313 12:58:51.388876 28149 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/28729487-1d7c-4837-961a-6cb084bf543f-v4-0-config-user-template-error\") pod \"28729487-1d7c-4837-961a-6cb084bf543f\" (UID: \"28729487-1d7c-4837-961a-6cb084bf543f\") "
Mar 13 12:58:51.389325 master-0 kubenswrapper[28149]: I0313 12:58:51.389303 28149 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a454234a-6c8e-4916-81e8-c9e66cec9d31-proxy-ca-bundles\") pod \"a454234a-6c8e-4916-81e8-c9e66cec9d31\" (UID: \"a454234a-6c8e-4916-81e8-c9e66cec9d31\") "
Mar 13 12:58:51.389463 master-0 kubenswrapper[28149]: I0313 12:58:51.389444 28149 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/28729487-1d7c-4837-961a-6cb084bf543f-v4-0-config-system-session\") pod \"28729487-1d7c-4837-961a-6cb084bf543f\" (UID: \"28729487-1d7c-4837-961a-6cb084bf543f\") "
Mar 13 12:58:51.389587 master-0 kubenswrapper[28149]: I0313 12:58:51.389555 28149 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/18ffa620-dacc-4b09-be04-2c325f860813-client-ca" (OuterVolumeSpecName: "client-ca") pod "18ffa620-dacc-4b09-be04-2c325f860813" (UID: "18ffa620-dacc-4b09-be04-2c325f860813"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 13 12:58:51.389684 master-0 kubenswrapper[28149]: I0313 12:58:51.389668 28149 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a454234a-6c8e-4916-81e8-c9e66cec9d31-config\") pod \"a454234a-6c8e-4916-81e8-c9e66cec9d31\" (UID: \"a454234a-6c8e-4916-81e8-c9e66cec9d31\") "
Mar 13 12:58:51.389794 master-0 kubenswrapper[28149]: I0313 12:58:51.389782 28149 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/28729487-1d7c-4837-961a-6cb084bf543f-audit-policies\") pod \"28729487-1d7c-4837-961a-6cb084bf543f\" (UID: \"28729487-1d7c-4837-961a-6cb084bf543f\") "
Mar 13 12:58:51.389905 master-0 kubenswrapper[28149]: I0313 12:58:51.389892 28149 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/28729487-1d7c-4837-961a-6cb084bf543f-v4-0-config-system-ocp-branding-template\") pod \"28729487-1d7c-4837-961a-6cb084bf543f\" (UID: \"28729487-1d7c-4837-961a-6cb084bf543f\") "
Mar 13 12:58:51.390133 master-0 kubenswrapper[28149]: I0313 12:58:51.390115 28149 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fmzhw\" (UniqueName: \"kubernetes.io/projected/18ffa620-dacc-4b09-be04-2c325f860813-kube-api-access-fmzhw\") pod \"18ffa620-dacc-4b09-be04-2c325f860813\" (UID: \"18ffa620-dacc-4b09-be04-2c325f860813\") "
Mar 13 12:58:51.390417 master-0 kubenswrapper[28149]: I0313 12:58:51.390400 28149 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/28729487-1d7c-4837-961a-6cb084bf543f-v4-0-config-system-serving-cert\") pod \"28729487-1d7c-4837-961a-6cb084bf543f\" (UID: \"28729487-1d7c-4837-961a-6cb084bf543f\") "
Mar 13 12:58:51.390547 master-0 kubenswrapper[28149]: I0313 12:58:51.390527 28149 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a454234a-6c8e-4916-81e8-c9e66cec9d31-client-ca\") pod \"a454234a-6c8e-4916-81e8-c9e66cec9d31\" (UID: \"a454234a-6c8e-4916-81e8-c9e66cec9d31\") "
Mar 13 12:58:51.390646 master-0 kubenswrapper[28149]: I0313 12:58:51.390631 28149 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/28729487-1d7c-4837-961a-6cb084bf543f-audit-dir\") pod \"28729487-1d7c-4837-961a-6cb084bf543f\" (UID: \"28729487-1d7c-4837-961a-6cb084bf543f\") "
Mar 13 12:58:51.390740 master-0 kubenswrapper[28149]: I0313 12:58:51.390728 28149 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/18ffa620-dacc-4b09-be04-2c325f860813-config\") pod \"18ffa620-dacc-4b09-be04-2c325f860813\" (UID: \"18ffa620-dacc-4b09-be04-2c325f860813\") "
Mar 13 12:58:51.390906 master-0 kubenswrapper[28149]: I0313 12:58:51.390877 28149 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/28729487-1d7c-4837-961a-6cb084bf543f-v4-0-config-system-cliconfig\") pod \"28729487-1d7c-4837-961a-6cb084bf543f\" (UID: \"28729487-1d7c-4837-961a-6cb084bf543f\") "
Mar 13 12:58:51.391059 master-0 kubenswrapper[28149]: I0313 12:58:51.391036 28149 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a454234a-6c8e-4916-81e8-c9e66cec9d31-serving-cert\") pod \"a454234a-6c8e-4916-81e8-c9e66cec9d31\" (UID: \"a454234a-6c8e-4916-81e8-c9e66cec9d31\") "
Mar 13 12:58:51.391602 master-0 kubenswrapper[28149]: I0313 12:58:51.391582 28149 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/28729487-1d7c-4837-961a-6cb084bf543f-v4-0-config-system-service-ca\") on node \"master-0\" DevicePath \"\""
Mar 13 12:58:51.391693 master-0 kubenswrapper[28149]: I0313 12:58:51.391682 28149 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/28729487-1d7c-4837-961a-6cb084bf543f-v4-0-config-system-trusted-ca-bundle\") on node \"master-0\" DevicePath \"\""
Mar 13 12:58:51.391769 master-0 kubenswrapper[28149]: I0313 12:58:51.391756 28149 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/18ffa620-dacc-4b09-be04-2c325f860813-client-ca\") on node \"master-0\" DevicePath \"\""
Mar 13 12:58:51.394883 master-0 kubenswrapper[28149]: I0313 12:58:51.390336 28149 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a454234a-6c8e-4916-81e8-c9e66cec9d31-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "a454234a-6c8e-4916-81e8-c9e66cec9d31" (UID: "a454234a-6c8e-4916-81e8-c9e66cec9d31"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 13 12:58:51.394883 master-0 kubenswrapper[28149]: I0313 12:58:51.390373 28149 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a454234a-6c8e-4916-81e8-c9e66cec9d31-config" (OuterVolumeSpecName: "config") pod "a454234a-6c8e-4916-81e8-c9e66cec9d31" (UID: "a454234a-6c8e-4916-81e8-c9e66cec9d31"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 13 12:58:51.394883 master-0 kubenswrapper[28149]: I0313 12:58:51.390765 28149 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/28729487-1d7c-4837-961a-6cb084bf543f-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "28729487-1d7c-4837-961a-6cb084bf543f" (UID: "28729487-1d7c-4837-961a-6cb084bf543f"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 13 12:58:51.394883 master-0 kubenswrapper[28149]: I0313 12:58:51.393407 28149 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a454234a-6c8e-4916-81e8-c9e66cec9d31-kube-api-access-kn8f2" (OuterVolumeSpecName: "kube-api-access-kn8f2") pod "a454234a-6c8e-4916-81e8-c9e66cec9d31" (UID: "a454234a-6c8e-4916-81e8-c9e66cec9d31"). InnerVolumeSpecName "kube-api-access-kn8f2". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 13 12:58:51.394883 master-0 kubenswrapper[28149]: I0313 12:58:51.393448 28149 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/28729487-1d7c-4837-961a-6cb084bf543f-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "28729487-1d7c-4837-961a-6cb084bf543f" (UID: "28729487-1d7c-4837-961a-6cb084bf543f"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 13 12:58:51.394883 master-0 kubenswrapper[28149]: I0313 12:58:51.393764 28149 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a454234a-6c8e-4916-81e8-c9e66cec9d31-client-ca" (OuterVolumeSpecName: "client-ca") pod "a454234a-6c8e-4916-81e8-c9e66cec9d31" (UID: "a454234a-6c8e-4916-81e8-c9e66cec9d31"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 13 12:58:51.394883 master-0 kubenswrapper[28149]: I0313 12:58:51.394381 28149 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/18ffa620-dacc-4b09-be04-2c325f860813-config" (OuterVolumeSpecName: "config") pod "18ffa620-dacc-4b09-be04-2c325f860813" (UID: "18ffa620-dacc-4b09-be04-2c325f860813"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 13 12:58:51.395706 master-0 kubenswrapper[28149]: I0313 12:58:51.395499 28149 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/28729487-1d7c-4837-961a-6cb084bf543f-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "28729487-1d7c-4837-961a-6cb084bf543f" (UID: "28729487-1d7c-4837-961a-6cb084bf543f"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 13 12:58:51.395706 master-0 kubenswrapper[28149]: I0313 12:58:51.395612 28149 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/28729487-1d7c-4837-961a-6cb084bf543f-kube-api-access-f8rk6" (OuterVolumeSpecName: "kube-api-access-f8rk6") pod "28729487-1d7c-4837-961a-6cb084bf543f" (UID: "28729487-1d7c-4837-961a-6cb084bf543f"). InnerVolumeSpecName "kube-api-access-f8rk6". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 13 12:58:51.414173 master-0 kubenswrapper[28149]: I0313 12:58:51.400571 28149 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/18ffa620-dacc-4b09-be04-2c325f860813-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "18ffa620-dacc-4b09-be04-2c325f860813" (UID: "18ffa620-dacc-4b09-be04-2c325f860813"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 13 12:58:51.414173 master-0 kubenswrapper[28149]: I0313 12:58:51.402333 28149 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/28729487-1d7c-4837-961a-6cb084bf543f-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "28729487-1d7c-4837-961a-6cb084bf543f" (UID: "28729487-1d7c-4837-961a-6cb084bf543f"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 13 12:58:51.414173 master-0 kubenswrapper[28149]: I0313 12:58:51.412863 28149 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/28729487-1d7c-4837-961a-6cb084bf543f-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "28729487-1d7c-4837-961a-6cb084bf543f" (UID: "28729487-1d7c-4837-961a-6cb084bf543f"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 13 12:58:51.414173 master-0 kubenswrapper[28149]: I0313 12:58:51.413349 28149 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/28729487-1d7c-4837-961a-6cb084bf543f-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "28729487-1d7c-4837-961a-6cb084bf543f" (UID: "28729487-1d7c-4837-961a-6cb084bf543f"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 13 12:58:51.418619 master-0 kubenswrapper[28149]: I0313 12:58:51.418302 28149 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/28729487-1d7c-4837-961a-6cb084bf543f-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "28729487-1d7c-4837-961a-6cb084bf543f" (UID: "28729487-1d7c-4837-961a-6cb084bf543f"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 13 12:58:51.425488 master-0 kubenswrapper[28149]: I0313 12:58:51.425082 28149 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/28729487-1d7c-4837-961a-6cb084bf543f-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "28729487-1d7c-4837-961a-6cb084bf543f" (UID: "28729487-1d7c-4837-961a-6cb084bf543f"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 13 12:58:51.431876 master-0 kubenswrapper[28149]: I0313 12:58:51.426358 28149 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/18ffa620-dacc-4b09-be04-2c325f860813-kube-api-access-fmzhw" (OuterVolumeSpecName: "kube-api-access-fmzhw") pod "18ffa620-dacc-4b09-be04-2c325f860813" (UID: "18ffa620-dacc-4b09-be04-2c325f860813"). InnerVolumeSpecName "kube-api-access-fmzhw". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 13 12:58:51.431876 master-0 kubenswrapper[28149]: I0313 12:58:51.431591 28149 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/28729487-1d7c-4837-961a-6cb084bf543f-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "28729487-1d7c-4837-961a-6cb084bf543f" (UID: "28729487-1d7c-4837-961a-6cb084bf543f"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 13 12:58:51.432528 master-0 kubenswrapper[28149]: I0313 12:58:51.432485 28149 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a454234a-6c8e-4916-81e8-c9e66cec9d31-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "a454234a-6c8e-4916-81e8-c9e66cec9d31" (UID: "a454234a-6c8e-4916-81e8-c9e66cec9d31"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 13 12:58:51.444245 master-0 kubenswrapper[28149]: I0313 12:58:51.441389 28149 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/28729487-1d7c-4837-961a-6cb084bf543f-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "28729487-1d7c-4837-961a-6cb084bf543f" (UID: "28729487-1d7c-4837-961a-6cb084bf543f"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 13 12:58:51.447276 master-0 kubenswrapper[28149]: I0313 12:58:51.447205 28149 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/alertmanager-main-0"
Mar 13 12:58:51.448873 master-0 kubenswrapper[28149]: I0313 12:58:51.448813 28149 generic.go:334] "Generic (PLEG): container finished" podID="2f3bb9a1-578c-424d-8610-272c76bf0a31" containerID="7dd10e13896d38c7749f73eefc059837470346b27a38f805670f771fbf9f3a5b" exitCode=0
Mar 13 12:58:51.448873 master-0 kubenswrapper[28149]: I0313 12:58:51.448860 28149 generic.go:334] "Generic (PLEG): container finished" podID="2f3bb9a1-578c-424d-8610-272c76bf0a31" containerID="f74af22c0ca8f8e819d7400ce092f1739fd376dfac5a59c6965272f819826259" exitCode=0
Mar 13 12:58:51.449002 master-0 kubenswrapper[28149]: I0313 12:58:51.448923 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"2f3bb9a1-578c-424d-8610-272c76bf0a31","Type":"ContainerDied","Data":"7dd10e13896d38c7749f73eefc059837470346b27a38f805670f771fbf9f3a5b"}
Mar 13 12:58:51.449002 master-0 kubenswrapper[28149]: I0313 12:58:51.448972 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"2f3bb9a1-578c-424d-8610-272c76bf0a31","Type":"ContainerDied","Data":"f74af22c0ca8f8e819d7400ce092f1739fd376dfac5a59c6965272f819826259"}
Mar 13 12:58:51.449002 master-0 kubenswrapper[28149]: I0313 12:58:51.448985 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"2f3bb9a1-578c-424d-8610-272c76bf0a31","Type":"ContainerDied","Data":"ef4f096478c1fc09ae7c83c34b1ef6ada9eca20d18d42d3cc9925213aa1f6553"}
Mar 13 12:58:51.449002 master-0 kubenswrapper[28149]: I0313 12:58:51.449001 28149 scope.go:117] "RemoveContainer" containerID="5cc9e7441e740fff5f741be6daad5d138ffea5e2e64c5940499b9608e8f77091"
Mar 13 12:58:51.450948 master-0 kubenswrapper[28149]: I0313 12:58:51.450912 28149 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-5b845478f4-2dqdf_49abaf10-6497-4c58-8a80-1a598caa2999/console/0.log"
Mar 13 12:58:51.451017 master-0 kubenswrapper[28149]: I0313 12:58:51.450997 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-5b845478f4-2dqdf" event={"ID":"49abaf10-6497-4c58-8a80-1a598caa2999","Type":"ContainerDied","Data":"555bfeabecc2bad4f5343cf44549a480303cf45b0854c49d5053857a5b72fb72"}
Mar 13 12:58:51.451077 master-0 kubenswrapper[28149]: I0313 12:58:51.451050 28149 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-5b845478f4-2dqdf"
Mar 13 12:58:51.456315 master-0 kubenswrapper[28149]: I0313 12:58:51.452576 28149 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-86477f577f-glgzr"
Mar 13 12:58:51.456315 master-0 kubenswrapper[28149]: I0313 12:58:51.452575 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-86477f577f-glgzr" event={"ID":"28729487-1d7c-4837-961a-6cb084bf543f","Type":"ContainerDied","Data":"b78ad4cc1e97ab6e52572a4afa95b4ec08ab81f6152613180584943e40583333"}
Mar 13 12:58:51.456315 master-0 kubenswrapper[28149]: I0313 12:58:51.454232 28149 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-54c79cbfcc-cxhmh"
Mar 13 12:58:51.456315 master-0 kubenswrapper[28149]: I0313 12:58:51.454381 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-54c79cbfcc-cxhmh" event={"ID":"a454234a-6c8e-4916-81e8-c9e66cec9d31","Type":"ContainerDied","Data":"02db34ef289b2a257fb361c5e1190f74ebf2b35e8d2ff6177192f08616db19aa"}
Mar 13 12:58:51.457277 master-0 kubenswrapper[28149]: I0313 12:58:51.457218 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-84f57b9877-x5rrn" event={"ID":"08bedcd0-df1f-4f21-9cf5-9481959dd4fb","Type":"ContainerStarted","Data":"b438aceb451546c310aac223a943949ad39c42f57b8d8e734616ac574db98f38"}
Mar 13 12:58:51.459745 master-0 kubenswrapper[28149]: I0313 12:58:51.459712 28149 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/downloads-84f57b9877-x5rrn"
Mar 13 12:58:51.460695 master-0 kubenswrapper[28149]: I0313 12:58:51.460662 28149 patch_prober.go:28] interesting pod/downloads-84f57b9877-x5rrn container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.128.0.95:8080/\": dial tcp 10.128.0.95:8080: connect: connection refused" start-of-body=
Mar 13 12:58:51.460761 master-0 kubenswrapper[28149]: I0313 12:58:51.460717 28149 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-84f57b9877-x5rrn" podUID="08bedcd0-df1f-4f21-9cf5-9481959dd4fb" containerName="download-server" probeResult="failure" output="Get \"http://10.128.0.95:8080/\": dial tcp 10.128.0.95:8080: connect: connection refused"
Mar 13 12:58:51.462012 master-0 kubenswrapper[28149]: I0313 12:58:51.461988 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-68c48d4f7d-k7drw" event={"ID":"18ffa620-dacc-4b09-be04-2c325f860813","Type":"ContainerDied","Data":"4923fdf0bf7675fa9b87a52fcb37d82a429121c63cdefd19c58f0e547211a622"}
Mar 13 12:58:51.462079 master-0 kubenswrapper[28149]: I0313 12:58:51.462031 28149 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-68c48d4f7d-k7drw"
Mar 13 12:58:51.493486 master-0 kubenswrapper[28149]: I0313 12:58:51.493423 28149 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-alertmanager-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/2f3bb9a1-578c-424d-8610-272c76bf0a31-secret-alertmanager-kube-rbac-proxy\") pod \"2f3bb9a1-578c-424d-8610-272c76bf0a31\" (UID: \"2f3bb9a1-578c-424d-8610-272c76bf0a31\") "
Mar 13 12:58:51.493590 master-0 kubenswrapper[28149]: I0313 12:58:51.493519 28149 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-alertmanager-main-tls\" (UniqueName: \"kubernetes.io/secret/2f3bb9a1-578c-424d-8610-272c76bf0a31-secret-alertmanager-main-tls\") pod \"2f3bb9a1-578c-424d-8610-272c76bf0a31\" (UID: \"2f3bb9a1-578c-424d-8610-272c76bf0a31\") "
Mar 13 12:58:51.493590 master-0 kubenswrapper[28149]: I0313 12:58:51.493561 28149 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/2f3bb9a1-578c-424d-8610-272c76bf0a31-metrics-client-ca\") pod \"2f3bb9a1-578c-424d-8610-272c76bf0a31\" (UID: \"2f3bb9a1-578c-424d-8610-272c76bf0a31\") "
Mar 13 12:58:51.493671 master-0 kubenswrapper[28149]: I0313 12:58:51.493627 28149 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-alertmanager-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/2f3bb9a1-578c-424d-8610-272c76bf0a31-secret-alertmanager-kube-rbac-proxy-web\") pod \"2f3bb9a1-578c-424d-8610-272c76bf0a31\" (UID: \"2f3bb9a1-578c-424d-8610-272c76bf0a31\") "
Mar 13 12:58:51.493793 master-0 kubenswrapper[28149]: I0313 12:58:51.493679 28149 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"alertmanager-main-db\" (UniqueName: \"kubernetes.io/empty-dir/2f3bb9a1-578c-424d-8610-272c76bf0a31-alertmanager-main-db\") pod \"2f3bb9a1-578c-424d-8610-272c76bf0a31\" (UID: \"2f3bb9a1-578c-424d-8610-272c76bf0a31\") "
Mar 13 12:58:51.493793 master-0 kubenswrapper[28149]: I0313 12:58:51.493782 28149 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/2f3bb9a1-578c-424d-8610-272c76bf0a31-config-out\") pod \"2f3bb9a1-578c-424d-8610-272c76bf0a31\" (UID: \"2f3bb9a1-578c-424d-8610-272c76bf0a31\") "
Mar 13 12:58:51.493962 master-0 kubenswrapper[28149]: I0313 12:58:51.493827 28149 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"alertmanager-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2f3bb9a1-578c-424d-8610-272c76bf0a31-alertmanager-trusted-ca-bundle\") pod \"2f3bb9a1-578c-424d-8610-272c76bf0a31\" (UID: \"2f3bb9a1-578c-424d-8610-272c76bf0a31\") "
Mar 13 12:58:51.493962 master-0 kubenswrapper[28149]: I0313 12:58:51.493866 28149 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-alertmanager-kube-rbac-proxy-metric\" (UniqueName: \"kubernetes.io/secret/2f3bb9a1-578c-424d-8610-272c76bf0a31-secret-alertmanager-kube-rbac-proxy-metric\") pod \"2f3bb9a1-578c-424d-8610-272c76bf0a31\" (UID: \"2f3bb9a1-578c-424d-8610-272c76bf0a31\") "
Mar 13 12:58:51.494028 master-0 kubenswrapper[28149]: I0313 12:58:51.493902 28149 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/2f3bb9a1-578c-424d-8610-272c76bf0a31-config-volume\") pod \"2f3bb9a1-578c-424d-8610-272c76bf0a31\" (UID: \"2f3bb9a1-578c-424d-8610-272c76bf0a31\") "
Mar 13 12:58:51.494061 master-0 kubenswrapper[28149]: I0313 12:58:51.494040 28149
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pmxgv\" (UniqueName: \"kubernetes.io/projected/2f3bb9a1-578c-424d-8610-272c76bf0a31-kube-api-access-pmxgv\") pod \"2f3bb9a1-578c-424d-8610-272c76bf0a31\" (UID: \"2f3bb9a1-578c-424d-8610-272c76bf0a31\") " Mar 13 12:58:51.494115 master-0 kubenswrapper[28149]: I0313 12:58:51.494068 28149 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/2f3bb9a1-578c-424d-8610-272c76bf0a31-tls-assets\") pod \"2f3bb9a1-578c-424d-8610-272c76bf0a31\" (UID: \"2f3bb9a1-578c-424d-8610-272c76bf0a31\") " Mar 13 12:58:51.494218 master-0 kubenswrapper[28149]: I0313 12:58:51.494128 28149 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/2f3bb9a1-578c-424d-8610-272c76bf0a31-web-config\") pod \"2f3bb9a1-578c-424d-8610-272c76bf0a31\" (UID: \"2f3bb9a1-578c-424d-8610-272c76bf0a31\") " Mar 13 12:58:51.495947 master-0 kubenswrapper[28149]: I0313 12:58:51.495405 28149 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2f3bb9a1-578c-424d-8610-272c76bf0a31-alertmanager-trusted-ca-bundle" (OuterVolumeSpecName: "alertmanager-trusted-ca-bundle") pod "2f3bb9a1-578c-424d-8610-272c76bf0a31" (UID: "2f3bb9a1-578c-424d-8610-272c76bf0a31"). InnerVolumeSpecName "alertmanager-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 13 12:58:51.495947 master-0 kubenswrapper[28149]: I0313 12:58:51.495887 28149 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2f3bb9a1-578c-424d-8610-272c76bf0a31-metrics-client-ca" (OuterVolumeSpecName: "metrics-client-ca") pod "2f3bb9a1-578c-424d-8610-272c76bf0a31" (UID: "2f3bb9a1-578c-424d-8610-272c76bf0a31"). InnerVolumeSpecName "metrics-client-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 13 12:58:51.499823 master-0 kubenswrapper[28149]: I0313 12:58:51.498916 28149 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/28729487-1d7c-4837-961a-6cb084bf543f-v4-0-config-system-ocp-branding-template\") on node \"master-0\" DevicePath \"\"" Mar 13 12:58:51.499823 master-0 kubenswrapper[28149]: I0313 12:58:51.499696 28149 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fmzhw\" (UniqueName: \"kubernetes.io/projected/18ffa620-dacc-4b09-be04-2c325f860813-kube-api-access-fmzhw\") on node \"master-0\" DevicePath \"\"" Mar 13 12:58:51.499823 master-0 kubenswrapper[28149]: I0313 12:58:51.499716 28149 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/28729487-1d7c-4837-961a-6cb084bf543f-v4-0-config-system-serving-cert\") on node \"master-0\" DevicePath \"\"" Mar 13 12:58:51.499823 master-0 kubenswrapper[28149]: I0313 12:58:51.499733 28149 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a454234a-6c8e-4916-81e8-c9e66cec9d31-client-ca\") on node \"master-0\" DevicePath \"\"" Mar 13 12:58:51.499823 master-0 kubenswrapper[28149]: I0313 12:58:51.499749 28149 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/28729487-1d7c-4837-961a-6cb084bf543f-audit-dir\") on node \"master-0\" DevicePath \"\"" Mar 13 12:58:51.499823 master-0 kubenswrapper[28149]: I0313 12:58:51.499764 28149 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/18ffa620-dacc-4b09-be04-2c325f860813-config\") on node \"master-0\" DevicePath \"\"" Mar 13 12:58:51.499823 master-0 kubenswrapper[28149]: I0313 12:58:51.499784 28149 reconciler_common.go:293] "Volume detached for volume 
\"alertmanager-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2f3bb9a1-578c-424d-8610-272c76bf0a31-alertmanager-trusted-ca-bundle\") on node \"master-0\" DevicePath \"\"" Mar 13 12:58:51.499823 master-0 kubenswrapper[28149]: I0313 12:58:51.499795 28149 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/28729487-1d7c-4837-961a-6cb084bf543f-v4-0-config-system-cliconfig\") on node \"master-0\" DevicePath \"\"" Mar 13 12:58:51.499823 master-0 kubenswrapper[28149]: I0313 12:58:51.499806 28149 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a454234a-6c8e-4916-81e8-c9e66cec9d31-serving-cert\") on node \"master-0\" DevicePath \"\"" Mar 13 12:58:51.499823 master-0 kubenswrapper[28149]: I0313 12:58:51.499817 28149 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/28729487-1d7c-4837-961a-6cb084bf543f-v4-0-config-system-router-certs\") on node \"master-0\" DevicePath \"\"" Mar 13 12:58:51.499823 master-0 kubenswrapper[28149]: I0313 12:58:51.499827 28149 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kn8f2\" (UniqueName: \"kubernetes.io/projected/a454234a-6c8e-4916-81e8-c9e66cec9d31-kube-api-access-kn8f2\") on node \"master-0\" DevicePath \"\"" Mar 13 12:58:51.500343 master-0 kubenswrapper[28149]: I0313 12:58:51.499843 28149 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f8rk6\" (UniqueName: \"kubernetes.io/projected/28729487-1d7c-4837-961a-6cb084bf543f-kube-api-access-f8rk6\") on node \"master-0\" DevicePath \"\"" Mar 13 12:58:51.500343 master-0 kubenswrapper[28149]: I0313 12:58:51.499856 28149 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/28729487-1d7c-4837-961a-6cb084bf543f-v4-0-config-user-template-login\") on node \"master-0\" 
DevicePath \"\"" Mar 13 12:58:51.500343 master-0 kubenswrapper[28149]: I0313 12:58:51.499869 28149 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/18ffa620-dacc-4b09-be04-2c325f860813-serving-cert\") on node \"master-0\" DevicePath \"\"" Mar 13 12:58:51.500343 master-0 kubenswrapper[28149]: I0313 12:58:51.499882 28149 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/28729487-1d7c-4837-961a-6cb084bf543f-v4-0-config-user-template-provider-selection\") on node \"master-0\" DevicePath \"\"" Mar 13 12:58:51.500343 master-0 kubenswrapper[28149]: I0313 12:58:51.499894 28149 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/28729487-1d7c-4837-961a-6cb084bf543f-v4-0-config-user-template-error\") on node \"master-0\" DevicePath \"\"" Mar 13 12:58:51.500343 master-0 kubenswrapper[28149]: I0313 12:58:51.499906 28149 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a454234a-6c8e-4916-81e8-c9e66cec9d31-proxy-ca-bundles\") on node \"master-0\" DevicePath \"\"" Mar 13 12:58:51.500343 master-0 kubenswrapper[28149]: I0313 12:58:51.499918 28149 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/28729487-1d7c-4837-961a-6cb084bf543f-v4-0-config-system-session\") on node \"master-0\" DevicePath \"\"" Mar 13 12:58:51.500343 master-0 kubenswrapper[28149]: I0313 12:58:51.499951 28149 reconciler_common.go:293] "Volume detached for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/2f3bb9a1-578c-424d-8610-272c76bf0a31-metrics-client-ca\") on node \"master-0\" DevicePath \"\"" Mar 13 12:58:51.500343 master-0 kubenswrapper[28149]: I0313 12:58:51.499966 28149 reconciler_common.go:293] "Volume detached for volume \"config\" 
(UniqueName: \"kubernetes.io/configmap/a454234a-6c8e-4916-81e8-c9e66cec9d31-config\") on node \"master-0\" DevicePath \"\"" Mar 13 12:58:51.500343 master-0 kubenswrapper[28149]: I0313 12:58:51.499978 28149 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/28729487-1d7c-4837-961a-6cb084bf543f-audit-policies\") on node \"master-0\" DevicePath \"\"" Mar 13 12:58:51.502534 master-0 kubenswrapper[28149]: I0313 12:58:51.502490 28149 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2f3bb9a1-578c-424d-8610-272c76bf0a31-alertmanager-main-db" (OuterVolumeSpecName: "alertmanager-main-db") pod "2f3bb9a1-578c-424d-8610-272c76bf0a31" (UID: "2f3bb9a1-578c-424d-8610-272c76bf0a31"). InnerVolumeSpecName "alertmanager-main-db". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 13 12:58:51.502637 master-0 kubenswrapper[28149]: I0313 12:58:51.502597 28149 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-59dc574b9f-z4gvv_ff344520-bb09-4f16-82be-273378ab0663/console/0.log" Mar 13 12:58:51.502703 master-0 kubenswrapper[28149]: I0313 12:58:51.502678 28149 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-59dc574b9f-z4gvv" Mar 13 12:58:51.508967 master-0 kubenswrapper[28149]: I0313 12:58:51.508927 28149 scope.go:117] "RemoveContainer" containerID="7dd10e13896d38c7749f73eefc059837470346b27a38f805670f771fbf9f3a5b" Mar 13 12:58:51.516983 master-0 kubenswrapper[28149]: I0313 12:58:51.516912 28149 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2f3bb9a1-578c-424d-8610-272c76bf0a31-config-volume" (OuterVolumeSpecName: "config-volume") pod "2f3bb9a1-578c-424d-8610-272c76bf0a31" (UID: "2f3bb9a1-578c-424d-8610-272c76bf0a31"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 12:58:51.517130 master-0 kubenswrapper[28149]: I0313 12:58:51.517035 28149 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2f3bb9a1-578c-424d-8610-272c76bf0a31-secret-alertmanager-main-tls" (OuterVolumeSpecName: "secret-alertmanager-main-tls") pod "2f3bb9a1-578c-424d-8610-272c76bf0a31" (UID: "2f3bb9a1-578c-424d-8610-272c76bf0a31"). InnerVolumeSpecName "secret-alertmanager-main-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 12:58:51.517367 master-0 kubenswrapper[28149]: I0313 12:58:51.517277 28149 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2f3bb9a1-578c-424d-8610-272c76bf0a31-secret-alertmanager-kube-rbac-proxy" (OuterVolumeSpecName: "secret-alertmanager-kube-rbac-proxy") pod "2f3bb9a1-578c-424d-8610-272c76bf0a31" (UID: "2f3bb9a1-578c-424d-8610-272c76bf0a31"). InnerVolumeSpecName "secret-alertmanager-kube-rbac-proxy". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 12:58:51.517967 master-0 kubenswrapper[28149]: I0313 12:58:51.517915 28149 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2f3bb9a1-578c-424d-8610-272c76bf0a31-secret-alertmanager-kube-rbac-proxy-web" (OuterVolumeSpecName: "secret-alertmanager-kube-rbac-proxy-web") pod "2f3bb9a1-578c-424d-8610-272c76bf0a31" (UID: "2f3bb9a1-578c-424d-8610-272c76bf0a31"). InnerVolumeSpecName "secret-alertmanager-kube-rbac-proxy-web". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 12:58:51.521116 master-0 kubenswrapper[28149]: I0313 12:58:51.521035 28149 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2f3bb9a1-578c-424d-8610-272c76bf0a31-kube-api-access-pmxgv" (OuterVolumeSpecName: "kube-api-access-pmxgv") pod "2f3bb9a1-578c-424d-8610-272c76bf0a31" (UID: "2f3bb9a1-578c-424d-8610-272c76bf0a31"). 
InnerVolumeSpecName "kube-api-access-pmxgv". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 12:58:51.525216 master-0 kubenswrapper[28149]: I0313 12:58:51.525183 28149 scope.go:117] "RemoveContainer" containerID="7fac4b13da0c9218a2d360a15d1b9a121be7df83e76ca77102bae92375944047" Mar 13 12:58:51.530913 master-0 kubenswrapper[28149]: I0313 12:58:51.530883 28149 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2f3bb9a1-578c-424d-8610-272c76bf0a31-config-out" (OuterVolumeSpecName: "config-out") pod "2f3bb9a1-578c-424d-8610-272c76bf0a31" (UID: "2f3bb9a1-578c-424d-8610-272c76bf0a31"). InnerVolumeSpecName "config-out". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 13 12:58:51.531403 master-0 kubenswrapper[28149]: I0313 12:58:51.531019 28149 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2f3bb9a1-578c-424d-8610-272c76bf0a31-tls-assets" (OuterVolumeSpecName: "tls-assets") pod "2f3bb9a1-578c-424d-8610-272c76bf0a31" (UID: "2f3bb9a1-578c-424d-8610-272c76bf0a31"). InnerVolumeSpecName "tls-assets". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 12:58:51.531403 master-0 kubenswrapper[28149]: I0313 12:58:51.531336 28149 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2f3bb9a1-578c-424d-8610-272c76bf0a31-secret-alertmanager-kube-rbac-proxy-metric" (OuterVolumeSpecName: "secret-alertmanager-kube-rbac-proxy-metric") pod "2f3bb9a1-578c-424d-8610-272c76bf0a31" (UID: "2f3bb9a1-578c-424d-8610-272c76bf0a31"). InnerVolumeSpecName "secret-alertmanager-kube-rbac-proxy-metric". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 12:58:51.544915 master-0 kubenswrapper[28149]: I0313 12:58:51.544868 28149 scope.go:117] "RemoveContainer" containerID="5a811aa709e7e6cab65974d4b7d6cfc5df6270e7d4beaeb77dc07477461527b3" Mar 13 12:58:51.561041 master-0 kubenswrapper[28149]: I0313 12:58:51.560924 28149 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2f3bb9a1-578c-424d-8610-272c76bf0a31-web-config" (OuterVolumeSpecName: "web-config") pod "2f3bb9a1-578c-424d-8610-272c76bf0a31" (UID: "2f3bb9a1-578c-424d-8610-272c76bf0a31"). InnerVolumeSpecName "web-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 12:58:51.565683 master-0 kubenswrapper[28149]: I0313 12:58:51.565650 28149 scope.go:117] "RemoveContainer" containerID="97b78ee43c5c89a2d71b74ef2ecb91a65f6a6755b1c17459c639707d4fb210e0" Mar 13 12:58:51.586323 master-0 kubenswrapper[28149]: I0313 12:58:51.586297 28149 scope.go:117] "RemoveContainer" containerID="f74af22c0ca8f8e819d7400ce092f1739fd376dfac5a59c6965272f819826259" Mar 13 12:58:51.598820 master-0 kubenswrapper[28149]: I0313 12:58:51.598789 28149 scope.go:117] "RemoveContainer" containerID="ec8e449acecac0f53e0b800c3ddfa08c19e2c04ece00900fd4a2012da0acf2c4" Mar 13 12:58:51.601519 master-0 kubenswrapper[28149]: I0313 12:58:51.601481 28149 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/ff344520-bb09-4f16-82be-273378ab0663-console-serving-cert\") pod \"ff344520-bb09-4f16-82be-273378ab0663\" (UID: \"ff344520-bb09-4f16-82be-273378ab0663\") " Mar 13 12:58:51.601580 master-0 kubenswrapper[28149]: I0313 12:58:51.601541 28149 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/ff344520-bb09-4f16-82be-273378ab0663-oauth-serving-cert\") pod \"ff344520-bb09-4f16-82be-273378ab0663\" (UID: 
\"ff344520-bb09-4f16-82be-273378ab0663\") " Mar 13 12:58:51.601580 master-0 kubenswrapper[28149]: I0313 12:58:51.601572 28149 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/ff344520-bb09-4f16-82be-273378ab0663-console-oauth-config\") pod \"ff344520-bb09-4f16-82be-273378ab0663\" (UID: \"ff344520-bb09-4f16-82be-273378ab0663\") " Mar 13 12:58:51.601788 master-0 kubenswrapper[28149]: I0313 12:58:51.601650 28149 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ff344520-bb09-4f16-82be-273378ab0663-trusted-ca-bundle\") pod \"ff344520-bb09-4f16-82be-273378ab0663\" (UID: \"ff344520-bb09-4f16-82be-273378ab0663\") " Mar 13 12:58:51.601788 master-0 kubenswrapper[28149]: I0313 12:58:51.601703 28149 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xwxpz\" (UniqueName: \"kubernetes.io/projected/ff344520-bb09-4f16-82be-273378ab0663-kube-api-access-xwxpz\") pod \"ff344520-bb09-4f16-82be-273378ab0663\" (UID: \"ff344520-bb09-4f16-82be-273378ab0663\") " Mar 13 12:58:51.601788 master-0 kubenswrapper[28149]: I0313 12:58:51.601779 28149 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/ff344520-bb09-4f16-82be-273378ab0663-console-config\") pod \"ff344520-bb09-4f16-82be-273378ab0663\" (UID: \"ff344520-bb09-4f16-82be-273378ab0663\") " Mar 13 12:58:51.601913 master-0 kubenswrapper[28149]: I0313 12:58:51.601855 28149 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/ff344520-bb09-4f16-82be-273378ab0663-service-ca\") pod \"ff344520-bb09-4f16-82be-273378ab0663\" (UID: \"ff344520-bb09-4f16-82be-273378ab0663\") " Mar 13 12:58:51.602354 master-0 kubenswrapper[28149]: I0313 12:58:51.602187 28149 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ff344520-bb09-4f16-82be-273378ab0663-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "ff344520-bb09-4f16-82be-273378ab0663" (UID: "ff344520-bb09-4f16-82be-273378ab0663"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 13 12:58:51.602661 master-0 kubenswrapper[28149]: I0313 12:58:51.602466 28149 reconciler_common.go:293] "Volume detached for volume \"secret-alertmanager-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/2f3bb9a1-578c-424d-8610-272c76bf0a31-secret-alertmanager-kube-rbac-proxy-web\") on node \"master-0\" DevicePath \"\"" Mar 13 12:58:51.602661 master-0 kubenswrapper[28149]: I0313 12:58:51.602496 28149 reconciler_common.go:293] "Volume detached for volume \"alertmanager-main-db\" (UniqueName: \"kubernetes.io/empty-dir/2f3bb9a1-578c-424d-8610-272c76bf0a31-alertmanager-main-db\") on node \"master-0\" DevicePath \"\"" Mar 13 12:58:51.602661 master-0 kubenswrapper[28149]: I0313 12:58:51.602547 28149 reconciler_common.go:293] "Volume detached for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/2f3bb9a1-578c-424d-8610-272c76bf0a31-config-out\") on node \"master-0\" DevicePath \"\"" Mar 13 12:58:51.602661 master-0 kubenswrapper[28149]: I0313 12:58:51.602470 28149 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ff344520-bb09-4f16-82be-273378ab0663-console-config" (OuterVolumeSpecName: "console-config") pod "ff344520-bb09-4f16-82be-273378ab0663" (UID: "ff344520-bb09-4f16-82be-273378ab0663"). InnerVolumeSpecName "console-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 13 12:58:51.602661 master-0 kubenswrapper[28149]: I0313 12:58:51.602561 28149 reconciler_common.go:293] "Volume detached for volume \"secret-alertmanager-kube-rbac-proxy-metric\" (UniqueName: \"kubernetes.io/secret/2f3bb9a1-578c-424d-8610-272c76bf0a31-secret-alertmanager-kube-rbac-proxy-metric\") on node \"master-0\" DevicePath \"\"" Mar 13 12:58:51.602661 master-0 kubenswrapper[28149]: I0313 12:58:51.602606 28149 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/2f3bb9a1-578c-424d-8610-272c76bf0a31-config-volume\") on node \"master-0\" DevicePath \"\"" Mar 13 12:58:51.602661 master-0 kubenswrapper[28149]: I0313 12:58:51.602625 28149 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pmxgv\" (UniqueName: \"kubernetes.io/projected/2f3bb9a1-578c-424d-8610-272c76bf0a31-kube-api-access-pmxgv\") on node \"master-0\" DevicePath \"\"" Mar 13 12:58:51.602661 master-0 kubenswrapper[28149]: I0313 12:58:51.602641 28149 reconciler_common.go:293] "Volume detached for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/2f3bb9a1-578c-424d-8610-272c76bf0a31-tls-assets\") on node \"master-0\" DevicePath \"\"" Mar 13 12:58:51.602661 master-0 kubenswrapper[28149]: I0313 12:58:51.602657 28149 reconciler_common.go:293] "Volume detached for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/2f3bb9a1-578c-424d-8610-272c76bf0a31-web-config\") on node \"master-0\" DevicePath \"\"" Mar 13 12:58:51.603033 master-0 kubenswrapper[28149]: I0313 12:58:51.602671 28149 reconciler_common.go:293] "Volume detached for volume \"secret-alertmanager-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/2f3bb9a1-578c-424d-8610-272c76bf0a31-secret-alertmanager-kube-rbac-proxy\") on node \"master-0\" DevicePath \"\"" Mar 13 12:58:51.603033 master-0 kubenswrapper[28149]: I0313 12:58:51.602689 28149 reconciler_common.go:293] "Volume detached for volume 
\"secret-alertmanager-main-tls\" (UniqueName: \"kubernetes.io/secret/2f3bb9a1-578c-424d-8610-272c76bf0a31-secret-alertmanager-main-tls\") on node \"master-0\" DevicePath \"\"" Mar 13 12:58:51.603033 master-0 kubenswrapper[28149]: I0313 12:58:51.602705 28149 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/ff344520-bb09-4f16-82be-273378ab0663-oauth-serving-cert\") on node \"master-0\" DevicePath \"\"" Mar 13 12:58:51.603033 master-0 kubenswrapper[28149]: I0313 12:58:51.602638 28149 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ff344520-bb09-4f16-82be-273378ab0663-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "ff344520-bb09-4f16-82be-273378ab0663" (UID: "ff344520-bb09-4f16-82be-273378ab0663"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 13 12:58:51.603367 master-0 kubenswrapper[28149]: I0313 12:58:51.603322 28149 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ff344520-bb09-4f16-82be-273378ab0663-service-ca" (OuterVolumeSpecName: "service-ca") pod "ff344520-bb09-4f16-82be-273378ab0663" (UID: "ff344520-bb09-4f16-82be-273378ab0663"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 13 12:58:51.604640 master-0 kubenswrapper[28149]: I0313 12:58:51.604606 28149 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ff344520-bb09-4f16-82be-273378ab0663-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "ff344520-bb09-4f16-82be-273378ab0663" (UID: "ff344520-bb09-4f16-82be-273378ab0663"). InnerVolumeSpecName "console-oauth-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 12:58:51.604995 master-0 kubenswrapper[28149]: I0313 12:58:51.604967 28149 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ff344520-bb09-4f16-82be-273378ab0663-kube-api-access-xwxpz" (OuterVolumeSpecName: "kube-api-access-xwxpz") pod "ff344520-bb09-4f16-82be-273378ab0663" (UID: "ff344520-bb09-4f16-82be-273378ab0663"). InnerVolumeSpecName "kube-api-access-xwxpz". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 12:58:51.605645 master-0 kubenswrapper[28149]: I0313 12:58:51.605606 28149 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ff344520-bb09-4f16-82be-273378ab0663-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "ff344520-bb09-4f16-82be-273378ab0663" (UID: "ff344520-bb09-4f16-82be-273378ab0663"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 12:58:51.611347 master-0 kubenswrapper[28149]: I0313 12:58:51.611317 28149 scope.go:117] "RemoveContainer" containerID="5cc9e7441e740fff5f741be6daad5d138ffea5e2e64c5940499b9608e8f77091" Mar 13 12:58:51.611717 master-0 kubenswrapper[28149]: E0313 12:58:51.611680 28149 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5cc9e7441e740fff5f741be6daad5d138ffea5e2e64c5940499b9608e8f77091\": container with ID starting with 5cc9e7441e740fff5f741be6daad5d138ffea5e2e64c5940499b9608e8f77091 not found: ID does not exist" containerID="5cc9e7441e740fff5f741be6daad5d138ffea5e2e64c5940499b9608e8f77091" Mar 13 12:58:51.611768 master-0 kubenswrapper[28149]: I0313 12:58:51.611733 28149 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5cc9e7441e740fff5f741be6daad5d138ffea5e2e64c5940499b9608e8f77091"} err="failed to get container status 
\"5cc9e7441e740fff5f741be6daad5d138ffea5e2e64c5940499b9608e8f77091\": rpc error: code = NotFound desc = could not find container \"5cc9e7441e740fff5f741be6daad5d138ffea5e2e64c5940499b9608e8f77091\": container with ID starting with 5cc9e7441e740fff5f741be6daad5d138ffea5e2e64c5940499b9608e8f77091 not found: ID does not exist" Mar 13 12:58:51.611823 master-0 kubenswrapper[28149]: I0313 12:58:51.611768 28149 scope.go:117] "RemoveContainer" containerID="7dd10e13896d38c7749f73eefc059837470346b27a38f805670f771fbf9f3a5b" Mar 13 12:58:51.612046 master-0 kubenswrapper[28149]: E0313 12:58:51.612016 28149 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7dd10e13896d38c7749f73eefc059837470346b27a38f805670f771fbf9f3a5b\": container with ID starting with 7dd10e13896d38c7749f73eefc059837470346b27a38f805670f771fbf9f3a5b not found: ID does not exist" containerID="7dd10e13896d38c7749f73eefc059837470346b27a38f805670f771fbf9f3a5b" Mar 13 12:58:51.612100 master-0 kubenswrapper[28149]: I0313 12:58:51.612046 28149 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7dd10e13896d38c7749f73eefc059837470346b27a38f805670f771fbf9f3a5b"} err="failed to get container status \"7dd10e13896d38c7749f73eefc059837470346b27a38f805670f771fbf9f3a5b\": rpc error: code = NotFound desc = could not find container \"7dd10e13896d38c7749f73eefc059837470346b27a38f805670f771fbf9f3a5b\": container with ID starting with 7dd10e13896d38c7749f73eefc059837470346b27a38f805670f771fbf9f3a5b not found: ID does not exist" Mar 13 12:58:51.612100 master-0 kubenswrapper[28149]: I0313 12:58:51.612068 28149 scope.go:117] "RemoveContainer" containerID="7fac4b13da0c9218a2d360a15d1b9a121be7df83e76ca77102bae92375944047" Mar 13 12:58:51.612358 master-0 kubenswrapper[28149]: E0313 12:58:51.612329 28149 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find 
container \"7fac4b13da0c9218a2d360a15d1b9a121be7df83e76ca77102bae92375944047\": container with ID starting with 7fac4b13da0c9218a2d360a15d1b9a121be7df83e76ca77102bae92375944047 not found: ID does not exist" containerID="7fac4b13da0c9218a2d360a15d1b9a121be7df83e76ca77102bae92375944047" Mar 13 12:58:51.612358 master-0 kubenswrapper[28149]: I0313 12:58:51.612347 28149 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7fac4b13da0c9218a2d360a15d1b9a121be7df83e76ca77102bae92375944047"} err="failed to get container status \"7fac4b13da0c9218a2d360a15d1b9a121be7df83e76ca77102bae92375944047\": rpc error: code = NotFound desc = could not find container \"7fac4b13da0c9218a2d360a15d1b9a121be7df83e76ca77102bae92375944047\": container with ID starting with 7fac4b13da0c9218a2d360a15d1b9a121be7df83e76ca77102bae92375944047 not found: ID does not exist" Mar 13 12:58:51.612465 master-0 kubenswrapper[28149]: I0313 12:58:51.612358 28149 scope.go:117] "RemoveContainer" containerID="5a811aa709e7e6cab65974d4b7d6cfc5df6270e7d4beaeb77dc07477461527b3" Mar 13 12:58:51.612676 master-0 kubenswrapper[28149]: E0313 12:58:51.612655 28149 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5a811aa709e7e6cab65974d4b7d6cfc5df6270e7d4beaeb77dc07477461527b3\": container with ID starting with 5a811aa709e7e6cab65974d4b7d6cfc5df6270e7d4beaeb77dc07477461527b3 not found: ID does not exist" containerID="5a811aa709e7e6cab65974d4b7d6cfc5df6270e7d4beaeb77dc07477461527b3" Mar 13 12:58:51.612749 master-0 kubenswrapper[28149]: I0313 12:58:51.612671 28149 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5a811aa709e7e6cab65974d4b7d6cfc5df6270e7d4beaeb77dc07477461527b3"} err="failed to get container status \"5a811aa709e7e6cab65974d4b7d6cfc5df6270e7d4beaeb77dc07477461527b3\": rpc error: code = NotFound desc = could not find container 
\"5a811aa709e7e6cab65974d4b7d6cfc5df6270e7d4beaeb77dc07477461527b3\": container with ID starting with 5a811aa709e7e6cab65974d4b7d6cfc5df6270e7d4beaeb77dc07477461527b3 not found: ID does not exist" Mar 13 12:58:51.612749 master-0 kubenswrapper[28149]: I0313 12:58:51.612721 28149 scope.go:117] "RemoveContainer" containerID="97b78ee43c5c89a2d71b74ef2ecb91a65f6a6755b1c17459c639707d4fb210e0" Mar 13 12:58:51.612964 master-0 kubenswrapper[28149]: E0313 12:58:51.612925 28149 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"97b78ee43c5c89a2d71b74ef2ecb91a65f6a6755b1c17459c639707d4fb210e0\": container with ID starting with 97b78ee43c5c89a2d71b74ef2ecb91a65f6a6755b1c17459c639707d4fb210e0 not found: ID does not exist" containerID="97b78ee43c5c89a2d71b74ef2ecb91a65f6a6755b1c17459c639707d4fb210e0" Mar 13 12:58:51.612964 master-0 kubenswrapper[28149]: I0313 12:58:51.612946 28149 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"97b78ee43c5c89a2d71b74ef2ecb91a65f6a6755b1c17459c639707d4fb210e0"} err="failed to get container status \"97b78ee43c5c89a2d71b74ef2ecb91a65f6a6755b1c17459c639707d4fb210e0\": rpc error: code = NotFound desc = could not find container \"97b78ee43c5c89a2d71b74ef2ecb91a65f6a6755b1c17459c639707d4fb210e0\": container with ID starting with 97b78ee43c5c89a2d71b74ef2ecb91a65f6a6755b1c17459c639707d4fb210e0 not found: ID does not exist" Mar 13 12:58:51.612964 master-0 kubenswrapper[28149]: I0313 12:58:51.612958 28149 scope.go:117] "RemoveContainer" containerID="f74af22c0ca8f8e819d7400ce092f1739fd376dfac5a59c6965272f819826259" Mar 13 12:58:51.613189 master-0 kubenswrapper[28149]: E0313 12:58:51.613167 28149 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f74af22c0ca8f8e819d7400ce092f1739fd376dfac5a59c6965272f819826259\": container with ID starting with 
f74af22c0ca8f8e819d7400ce092f1739fd376dfac5a59c6965272f819826259 not found: ID does not exist" containerID="f74af22c0ca8f8e819d7400ce092f1739fd376dfac5a59c6965272f819826259" Mar 13 12:58:51.613256 master-0 kubenswrapper[28149]: I0313 12:58:51.613187 28149 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f74af22c0ca8f8e819d7400ce092f1739fd376dfac5a59c6965272f819826259"} err="failed to get container status \"f74af22c0ca8f8e819d7400ce092f1739fd376dfac5a59c6965272f819826259\": rpc error: code = NotFound desc = could not find container \"f74af22c0ca8f8e819d7400ce092f1739fd376dfac5a59c6965272f819826259\": container with ID starting with f74af22c0ca8f8e819d7400ce092f1739fd376dfac5a59c6965272f819826259 not found: ID does not exist" Mar 13 12:58:51.613256 master-0 kubenswrapper[28149]: I0313 12:58:51.613199 28149 scope.go:117] "RemoveContainer" containerID="ec8e449acecac0f53e0b800c3ddfa08c19e2c04ece00900fd4a2012da0acf2c4" Mar 13 12:58:51.613418 master-0 kubenswrapper[28149]: E0313 12:58:51.613396 28149 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ec8e449acecac0f53e0b800c3ddfa08c19e2c04ece00900fd4a2012da0acf2c4\": container with ID starting with ec8e449acecac0f53e0b800c3ddfa08c19e2c04ece00900fd4a2012da0acf2c4 not found: ID does not exist" containerID="ec8e449acecac0f53e0b800c3ddfa08c19e2c04ece00900fd4a2012da0acf2c4" Mar 13 12:58:51.613418 master-0 kubenswrapper[28149]: I0313 12:58:51.613415 28149 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ec8e449acecac0f53e0b800c3ddfa08c19e2c04ece00900fd4a2012da0acf2c4"} err="failed to get container status \"ec8e449acecac0f53e0b800c3ddfa08c19e2c04ece00900fd4a2012da0acf2c4\": rpc error: code = NotFound desc = could not find container \"ec8e449acecac0f53e0b800c3ddfa08c19e2c04ece00900fd4a2012da0acf2c4\": container with ID starting with 
ec8e449acecac0f53e0b800c3ddfa08c19e2c04ece00900fd4a2012da0acf2c4 not found: ID does not exist" Mar 13 12:58:51.613512 master-0 kubenswrapper[28149]: I0313 12:58:51.613426 28149 scope.go:117] "RemoveContainer" containerID="5cc9e7441e740fff5f741be6daad5d138ffea5e2e64c5940499b9608e8f77091" Mar 13 12:58:51.613726 master-0 kubenswrapper[28149]: I0313 12:58:51.613703 28149 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5cc9e7441e740fff5f741be6daad5d138ffea5e2e64c5940499b9608e8f77091"} err="failed to get container status \"5cc9e7441e740fff5f741be6daad5d138ffea5e2e64c5940499b9608e8f77091\": rpc error: code = NotFound desc = could not find container \"5cc9e7441e740fff5f741be6daad5d138ffea5e2e64c5940499b9608e8f77091\": container with ID starting with 5cc9e7441e740fff5f741be6daad5d138ffea5e2e64c5940499b9608e8f77091 not found: ID does not exist" Mar 13 12:58:51.613726 master-0 kubenswrapper[28149]: I0313 12:58:51.613719 28149 scope.go:117] "RemoveContainer" containerID="7dd10e13896d38c7749f73eefc059837470346b27a38f805670f771fbf9f3a5b" Mar 13 12:58:51.613958 master-0 kubenswrapper[28149]: I0313 12:58:51.613888 28149 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7dd10e13896d38c7749f73eefc059837470346b27a38f805670f771fbf9f3a5b"} err="failed to get container status \"7dd10e13896d38c7749f73eefc059837470346b27a38f805670f771fbf9f3a5b\": rpc error: code = NotFound desc = could not find container \"7dd10e13896d38c7749f73eefc059837470346b27a38f805670f771fbf9f3a5b\": container with ID starting with 7dd10e13896d38c7749f73eefc059837470346b27a38f805670f771fbf9f3a5b not found: ID does not exist" Mar 13 12:58:51.613958 master-0 kubenswrapper[28149]: I0313 12:58:51.613955 28149 scope.go:117] "RemoveContainer" containerID="7fac4b13da0c9218a2d360a15d1b9a121be7df83e76ca77102bae92375944047" Mar 13 12:58:51.614254 master-0 kubenswrapper[28149]: I0313 12:58:51.614228 28149 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7fac4b13da0c9218a2d360a15d1b9a121be7df83e76ca77102bae92375944047"} err="failed to get container status \"7fac4b13da0c9218a2d360a15d1b9a121be7df83e76ca77102bae92375944047\": rpc error: code = NotFound desc = could not find container \"7fac4b13da0c9218a2d360a15d1b9a121be7df83e76ca77102bae92375944047\": container with ID starting with 7fac4b13da0c9218a2d360a15d1b9a121be7df83e76ca77102bae92375944047 not found: ID does not exist" Mar 13 12:58:51.614254 master-0 kubenswrapper[28149]: I0313 12:58:51.614250 28149 scope.go:117] "RemoveContainer" containerID="5a811aa709e7e6cab65974d4b7d6cfc5df6270e7d4beaeb77dc07477461527b3" Mar 13 12:58:51.614458 master-0 kubenswrapper[28149]: I0313 12:58:51.614416 28149 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5a811aa709e7e6cab65974d4b7d6cfc5df6270e7d4beaeb77dc07477461527b3"} err="failed to get container status \"5a811aa709e7e6cab65974d4b7d6cfc5df6270e7d4beaeb77dc07477461527b3\": rpc error: code = NotFound desc = could not find container \"5a811aa709e7e6cab65974d4b7d6cfc5df6270e7d4beaeb77dc07477461527b3\": container with ID starting with 5a811aa709e7e6cab65974d4b7d6cfc5df6270e7d4beaeb77dc07477461527b3 not found: ID does not exist" Mar 13 12:58:51.614458 master-0 kubenswrapper[28149]: I0313 12:58:51.614433 28149 scope.go:117] "RemoveContainer" containerID="97b78ee43c5c89a2d71b74ef2ecb91a65f6a6755b1c17459c639707d4fb210e0" Mar 13 12:58:51.614638 master-0 kubenswrapper[28149]: I0313 12:58:51.614607 28149 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"97b78ee43c5c89a2d71b74ef2ecb91a65f6a6755b1c17459c639707d4fb210e0"} err="failed to get container status \"97b78ee43c5c89a2d71b74ef2ecb91a65f6a6755b1c17459c639707d4fb210e0\": rpc error: code = NotFound desc = could not find container \"97b78ee43c5c89a2d71b74ef2ecb91a65f6a6755b1c17459c639707d4fb210e0\": container with ID starting 
with 97b78ee43c5c89a2d71b74ef2ecb91a65f6a6755b1c17459c639707d4fb210e0 not found: ID does not exist" Mar 13 12:58:51.614638 master-0 kubenswrapper[28149]: I0313 12:58:51.614623 28149 scope.go:117] "RemoveContainer" containerID="f74af22c0ca8f8e819d7400ce092f1739fd376dfac5a59c6965272f819826259" Mar 13 12:58:51.621262 master-0 kubenswrapper[28149]: I0313 12:58:51.614760 28149 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f74af22c0ca8f8e819d7400ce092f1739fd376dfac5a59c6965272f819826259"} err="failed to get container status \"f74af22c0ca8f8e819d7400ce092f1739fd376dfac5a59c6965272f819826259\": rpc error: code = NotFound desc = could not find container \"f74af22c0ca8f8e819d7400ce092f1739fd376dfac5a59c6965272f819826259\": container with ID starting with f74af22c0ca8f8e819d7400ce092f1739fd376dfac5a59c6965272f819826259 not found: ID does not exist" Mar 13 12:58:51.621262 master-0 kubenswrapper[28149]: I0313 12:58:51.614775 28149 scope.go:117] "RemoveContainer" containerID="ec8e449acecac0f53e0b800c3ddfa08c19e2c04ece00900fd4a2012da0acf2c4" Mar 13 12:58:51.621365 master-0 kubenswrapper[28149]: I0313 12:58:51.621319 28149 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ec8e449acecac0f53e0b800c3ddfa08c19e2c04ece00900fd4a2012da0acf2c4"} err="failed to get container status \"ec8e449acecac0f53e0b800c3ddfa08c19e2c04ece00900fd4a2012da0acf2c4\": rpc error: code = NotFound desc = could not find container \"ec8e449acecac0f53e0b800c3ddfa08c19e2c04ece00900fd4a2012da0acf2c4\": container with ID starting with ec8e449acecac0f53e0b800c3ddfa08c19e2c04ece00900fd4a2012da0acf2c4 not found: ID does not exist" Mar 13 12:58:51.621365 master-0 kubenswrapper[28149]: I0313 12:58:51.621344 28149 scope.go:117] "RemoveContainer" containerID="e76bf3fa14ab2ec6806e9549a1b7607ccea467b27926fd4c96d6982a827c0188" Mar 13 12:58:51.633617 master-0 kubenswrapper[28149]: I0313 12:58:51.633589 28149 scope.go:117] 
"RemoveContainer" containerID="b0611dc8723d0311cc21d4c09c05c710b9bc164a943da5cca522b147cfaa1608" Mar 13 12:58:51.647695 master-0 kubenswrapper[28149]: I0313 12:58:51.647659 28149 scope.go:117] "RemoveContainer" containerID="338937b0ebb757bdee738361c73af8d323aeef4fa0eb7edfc9e3a14cb3dcc3f8" Mar 13 12:58:51.659355 master-0 kubenswrapper[28149]: I0313 12:58:51.659293 28149 scope.go:117] "RemoveContainer" containerID="e08ebb9b72b3d839ad590a0420d611fa422a407a310320bdb128182aa8a60b33" Mar 13 12:58:51.704045 master-0 kubenswrapper[28149]: I0313 12:58:51.703910 28149 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/ff344520-bb09-4f16-82be-273378ab0663-console-config\") on node \"master-0\" DevicePath \"\"" Mar 13 12:58:51.704045 master-0 kubenswrapper[28149]: I0313 12:58:51.703954 28149 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/ff344520-bb09-4f16-82be-273378ab0663-service-ca\") on node \"master-0\" DevicePath \"\"" Mar 13 12:58:51.704045 master-0 kubenswrapper[28149]: I0313 12:58:51.703969 28149 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/ff344520-bb09-4f16-82be-273378ab0663-console-serving-cert\") on node \"master-0\" DevicePath \"\"" Mar 13 12:58:51.704045 master-0 kubenswrapper[28149]: I0313 12:58:51.703982 28149 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/ff344520-bb09-4f16-82be-273378ab0663-console-oauth-config\") on node \"master-0\" DevicePath \"\"" Mar 13 12:58:51.704045 master-0 kubenswrapper[28149]: I0313 12:58:51.703998 28149 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ff344520-bb09-4f16-82be-273378ab0663-trusted-ca-bundle\") on node \"master-0\" DevicePath \"\"" Mar 13 12:58:51.704045 master-0 kubenswrapper[28149]: I0313 
12:58:51.704011 28149 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xwxpz\" (UniqueName: \"kubernetes.io/projected/ff344520-bb09-4f16-82be-273378ab0663-kube-api-access-xwxpz\") on node \"master-0\" DevicePath \"\"" Mar 13 12:58:52.484184 master-0 kubenswrapper[28149]: I0313 12:58:52.483996 28149 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/alertmanager-main-0" Mar 13 12:58:52.495665 master-0 kubenswrapper[28149]: I0313 12:58:52.495614 28149 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-59dc574b9f-z4gvv_ff344520-bb09-4f16-82be-273378ab0663/console/0.log" Mar 13 12:58:52.497305 master-0 kubenswrapper[28149]: I0313 12:58:52.497272 28149 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-59dc574b9f-z4gvv" Mar 13 12:58:52.497572 master-0 kubenswrapper[28149]: I0313 12:58:52.497488 28149 patch_prober.go:28] interesting pod/downloads-84f57b9877-x5rrn container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.128.0.95:8080/\": dial tcp 10.128.0.95:8080: connect: connection refused" start-of-body= Mar 13 12:58:52.497720 master-0 kubenswrapper[28149]: I0313 12:58:52.497643 28149 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-84f57b9877-x5rrn" podUID="08bedcd0-df1f-4f21-9cf5-9481959dd4fb" containerName="download-server" probeResult="failure" output="Get \"http://10.128.0.95:8080/\": dial tcp 10.128.0.95:8080: connect: connection refused" Mar 13 12:58:52.497840 master-0 kubenswrapper[28149]: I0313 12:58:52.497794 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-59dc574b9f-z4gvv" event={"ID":"ff344520-bb09-4f16-82be-273378ab0663","Type":"ContainerDied","Data":"e4cedbaf997be2d29739b8b50897aba2f0cbdfec64d433a6178ddfcc5a309295"} Mar 13 12:58:52.497916 master-0 
kubenswrapper[28149]: I0313 12:58:52.497892 28149 scope.go:117] "RemoveContainer" containerID="733ed7fb689426e60593f6b467db5361babda5ddfa6ef2c0e3d7cdd1f7a25f7f" Mar 13 12:58:52.603789 master-0 kubenswrapper[28149]: I0313 12:58:52.603698 28149 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-bc56f484d-2sbtm"] Mar 13 12:58:53.551651 master-0 kubenswrapper[28149]: I0313 12:58:53.551483 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-bc56f484d-2sbtm" event={"ID":"14305d97-7b24-4321-bf3a-3ec79e52f6ea","Type":"ContainerStarted","Data":"c15743de492ca8ddc6fa3ae187515d8740ee624870648540fd9ae0e9eebfea32"} Mar 13 12:58:53.551651 master-0 kubenswrapper[28149]: I0313 12:58:53.551517 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-bc56f484d-2sbtm" event={"ID":"14305d97-7b24-4321-bf3a-3ec79e52f6ea","Type":"ContainerStarted","Data":"d8b671961d0f43b9ff143b56b51b8894a873e7f964dcf6259604ae93bbc70a93"} Mar 13 12:58:53.551651 master-0 kubenswrapper[28149]: I0313 12:58:53.551561 28149 patch_prober.go:28] interesting pod/downloads-84f57b9877-x5rrn container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.128.0.95:8080/\": dial tcp 10.128.0.95:8080: connect: connection refused" start-of-body= Mar 13 12:58:53.551651 master-0 kubenswrapper[28149]: I0313 12:58:53.551603 28149 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-84f57b9877-x5rrn" podUID="08bedcd0-df1f-4f21-9cf5-9481959dd4fb" containerName="download-server" probeResult="failure" output="Get \"http://10.128.0.95:8080/\": dial tcp 10.128.0.95:8080: connect: connection refused" Mar 13 12:58:53.804499 master-0 kubenswrapper[28149]: I0313 12:58:53.804307 28149 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-network-console/networking-console-plugin-5cbd49d755-mpvnf"] Mar 13 12:58:53.804939 master-0 kubenswrapper[28149]: 
I0313 12:58:53.804888 28149 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-575b4c697b-kjnzx"] Mar 13 12:58:54.560522 master-0 kubenswrapper[28149]: I0313 12:58:54.560464 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-5cbd49d755-mpvnf" event={"ID":"cddccab4-91b6-4bcf-a5d3-fd8014dffda6","Type":"ContainerStarted","Data":"942cb5eccf6d3a41f9692e3b42d190fcec28a54d750d28bd0b3ce57573065d96"} Mar 13 12:58:54.562117 master-0 kubenswrapper[28149]: I0313 12:58:54.562082 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-575b4c697b-kjnzx" event={"ID":"3c426507-418f-4258-bef6-4206640beb3d","Type":"ContainerStarted","Data":"62a39b62dd321a9a78aa93cc0dbace3d5275bb08e7d86c7913fc8df6b17cff3f"} Mar 13 12:58:54.562224 master-0 kubenswrapper[28149]: I0313 12:58:54.562129 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-575b4c697b-kjnzx" event={"ID":"3c426507-418f-4258-bef6-4206640beb3d","Type":"ContainerStarted","Data":"39d9010d8cd763ece37db8db6ba3978604453d407277d0be25bc7d9eee9120d5"} Mar 13 12:58:54.637788 master-0 kubenswrapper[28149]: I0313 12:58:54.637741 28149 patch_prober.go:28] interesting pod/downloads-84f57b9877-x5rrn container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.128.0.95:8080/\": dial tcp 10.128.0.95:8080: connect: connection refused" start-of-body= Mar 13 12:58:54.638019 master-0 kubenswrapper[28149]: I0313 12:58:54.637803 28149 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-84f57b9877-x5rrn" podUID="08bedcd0-df1f-4f21-9cf5-9481959dd4fb" containerName="download-server" probeResult="failure" output="Get \"http://10.128.0.95:8080/\": dial tcp 10.128.0.95:8080: connect: connection refused" Mar 13 12:58:54.638311 master-0 kubenswrapper[28149]: I0313 12:58:54.638165 28149 patch_prober.go:28] interesting 
pod/downloads-84f57b9877-x5rrn container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.128.0.95:8080/\": dial tcp 10.128.0.95:8080: connect: connection refused" start-of-body= Mar 13 12:58:54.638311 master-0 kubenswrapper[28149]: I0313 12:58:54.638237 28149 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-84f57b9877-x5rrn" podUID="08bedcd0-df1f-4f21-9cf5-9481959dd4fb" containerName="download-server" probeResult="failure" output="Get \"http://10.128.0.95:8080/\": dial tcp 10.128.0.95:8080: connect: connection refused" Mar 13 12:58:54.907999 master-0 kubenswrapper[28149]: I0313 12:58:54.907950 28149 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-6b77fb6479-77kfk"] Mar 13 12:58:54.908336 master-0 kubenswrapper[28149]: E0313 12:58:54.908314 28149 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2f3bb9a1-578c-424d-8610-272c76bf0a31" containerName="kube-rbac-proxy-web" Mar 13 12:58:54.908432 master-0 kubenswrapper[28149]: I0313 12:58:54.908340 28149 state_mem.go:107] "Deleted CPUSet assignment" podUID="2f3bb9a1-578c-424d-8610-272c76bf0a31" containerName="kube-rbac-proxy-web" Mar 13 12:58:54.908432 master-0 kubenswrapper[28149]: E0313 12:58:54.908373 28149 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a454234a-6c8e-4916-81e8-c9e66cec9d31" containerName="controller-manager" Mar 13 12:58:54.908432 master-0 kubenswrapper[28149]: I0313 12:58:54.908383 28149 state_mem.go:107] "Deleted CPUSet assignment" podUID="a454234a-6c8e-4916-81e8-c9e66cec9d31" containerName="controller-manager" Mar 13 12:58:54.908432 master-0 kubenswrapper[28149]: E0313 12:58:54.908394 28149 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2f3bb9a1-578c-424d-8610-272c76bf0a31" containerName="config-reloader" Mar 13 12:58:54.908432 master-0 kubenswrapper[28149]: I0313 12:58:54.908410 28149 state_mem.go:107] 
"Deleted CPUSet assignment" podUID="2f3bb9a1-578c-424d-8610-272c76bf0a31" containerName="config-reloader" Mar 13 12:58:54.908432 master-0 kubenswrapper[28149]: E0313 12:58:54.908421 28149 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ff344520-bb09-4f16-82be-273378ab0663" containerName="console" Mar 13 12:58:54.908432 master-0 kubenswrapper[28149]: I0313 12:58:54.908428 28149 state_mem.go:107] "Deleted CPUSet assignment" podUID="ff344520-bb09-4f16-82be-273378ab0663" containerName="console" Mar 13 12:58:54.908705 master-0 kubenswrapper[28149]: E0313 12:58:54.908454 28149 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="28729487-1d7c-4837-961a-6cb084bf543f" containerName="oauth-openshift" Mar 13 12:58:54.908705 master-0 kubenswrapper[28149]: I0313 12:58:54.908462 28149 state_mem.go:107] "Deleted CPUSet assignment" podUID="28729487-1d7c-4837-961a-6cb084bf543f" containerName="oauth-openshift" Mar 13 12:58:54.908705 master-0 kubenswrapper[28149]: E0313 12:58:54.908472 28149 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2f3bb9a1-578c-424d-8610-272c76bf0a31" containerName="alertmanager" Mar 13 12:58:54.908705 master-0 kubenswrapper[28149]: I0313 12:58:54.908482 28149 state_mem.go:107] "Deleted CPUSet assignment" podUID="2f3bb9a1-578c-424d-8610-272c76bf0a31" containerName="alertmanager" Mar 13 12:58:54.908705 master-0 kubenswrapper[28149]: E0313 12:58:54.908497 28149 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2f3bb9a1-578c-424d-8610-272c76bf0a31" containerName="prom-label-proxy" Mar 13 12:58:54.908705 master-0 kubenswrapper[28149]: I0313 12:58:54.908506 28149 state_mem.go:107] "Deleted CPUSet assignment" podUID="2f3bb9a1-578c-424d-8610-272c76bf0a31" containerName="prom-label-proxy" Mar 13 12:58:54.908705 master-0 kubenswrapper[28149]: E0313 12:58:54.908525 28149 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="18ffa620-dacc-4b09-be04-2c325f860813" 
containerName="route-controller-manager" Mar 13 12:58:54.908705 master-0 kubenswrapper[28149]: I0313 12:58:54.908533 28149 state_mem.go:107] "Deleted CPUSet assignment" podUID="18ffa620-dacc-4b09-be04-2c325f860813" containerName="route-controller-manager" Mar 13 12:58:54.908705 master-0 kubenswrapper[28149]: E0313 12:58:54.908543 28149 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2f3bb9a1-578c-424d-8610-272c76bf0a31" containerName="kube-rbac-proxy-metric" Mar 13 12:58:54.908705 master-0 kubenswrapper[28149]: I0313 12:58:54.908552 28149 state_mem.go:107] "Deleted CPUSet assignment" podUID="2f3bb9a1-578c-424d-8610-272c76bf0a31" containerName="kube-rbac-proxy-metric" Mar 13 12:58:54.908705 master-0 kubenswrapper[28149]: E0313 12:58:54.908573 28149 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2f3bb9a1-578c-424d-8610-272c76bf0a31" containerName="kube-rbac-proxy" Mar 13 12:58:54.908705 master-0 kubenswrapper[28149]: I0313 12:58:54.908582 28149 state_mem.go:107] "Deleted CPUSet assignment" podUID="2f3bb9a1-578c-424d-8610-272c76bf0a31" containerName="kube-rbac-proxy" Mar 13 12:58:54.908705 master-0 kubenswrapper[28149]: E0313 12:58:54.908594 28149 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2f3bb9a1-578c-424d-8610-272c76bf0a31" containerName="init-config-reloader" Mar 13 12:58:54.908705 master-0 kubenswrapper[28149]: I0313 12:58:54.908602 28149 state_mem.go:107] "Deleted CPUSet assignment" podUID="2f3bb9a1-578c-424d-8610-272c76bf0a31" containerName="init-config-reloader" Mar 13 12:58:54.908705 master-0 kubenswrapper[28149]: E0313 12:58:54.908634 28149 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="18ffa620-dacc-4b09-be04-2c325f860813" containerName="route-controller-manager" Mar 13 12:58:54.908705 master-0 kubenswrapper[28149]: I0313 12:58:54.908644 28149 state_mem.go:107] "Deleted CPUSet assignment" podUID="18ffa620-dacc-4b09-be04-2c325f860813" containerName="route-controller-manager" 
Mar 13 12:58:54.908705 master-0 kubenswrapper[28149]: E0313 12:58:54.908656 28149 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="49abaf10-6497-4c58-8a80-1a598caa2999" containerName="console" Mar 13 12:58:54.908705 master-0 kubenswrapper[28149]: I0313 12:58:54.908664 28149 state_mem.go:107] "Deleted CPUSet assignment" podUID="49abaf10-6497-4c58-8a80-1a598caa2999" containerName="console" Mar 13 12:58:54.909491 master-0 kubenswrapper[28149]: I0313 12:58:54.908838 28149 memory_manager.go:354] "RemoveStaleState removing state" podUID="18ffa620-dacc-4b09-be04-2c325f860813" containerName="route-controller-manager" Mar 13 12:58:54.909491 master-0 kubenswrapper[28149]: I0313 12:58:54.908856 28149 memory_manager.go:354] "RemoveStaleState removing state" podUID="ff344520-bb09-4f16-82be-273378ab0663" containerName="console" Mar 13 12:58:54.909491 master-0 kubenswrapper[28149]: I0313 12:58:54.908879 28149 memory_manager.go:354] "RemoveStaleState removing state" podUID="18ffa620-dacc-4b09-be04-2c325f860813" containerName="route-controller-manager" Mar 13 12:58:54.909491 master-0 kubenswrapper[28149]: I0313 12:58:54.908901 28149 memory_manager.go:354] "RemoveStaleState removing state" podUID="2f3bb9a1-578c-424d-8610-272c76bf0a31" containerName="kube-rbac-proxy" Mar 13 12:58:54.909491 master-0 kubenswrapper[28149]: I0313 12:58:54.908917 28149 memory_manager.go:354] "RemoveStaleState removing state" podUID="2f3bb9a1-578c-424d-8610-272c76bf0a31" containerName="config-reloader" Mar 13 12:58:54.909491 master-0 kubenswrapper[28149]: I0313 12:58:54.908936 28149 memory_manager.go:354] "RemoveStaleState removing state" podUID="a454234a-6c8e-4916-81e8-c9e66cec9d31" containerName="controller-manager" Mar 13 12:58:54.909491 master-0 kubenswrapper[28149]: I0313 12:58:54.908949 28149 memory_manager.go:354] "RemoveStaleState removing state" podUID="a454234a-6c8e-4916-81e8-c9e66cec9d31" containerName="controller-manager" Mar 13 12:58:54.909491 master-0 kubenswrapper[28149]: 
I0313 12:58:54.908961 28149 memory_manager.go:354] "RemoveStaleState removing state" podUID="2f3bb9a1-578c-424d-8610-272c76bf0a31" containerName="alertmanager" Mar 13 12:58:54.909491 master-0 kubenswrapper[28149]: I0313 12:58:54.908976 28149 memory_manager.go:354] "RemoveStaleState removing state" podUID="2f3bb9a1-578c-424d-8610-272c76bf0a31" containerName="kube-rbac-proxy-web" Mar 13 12:58:54.909491 master-0 kubenswrapper[28149]: I0313 12:58:54.908987 28149 memory_manager.go:354] "RemoveStaleState removing state" podUID="28729487-1d7c-4837-961a-6cb084bf543f" containerName="oauth-openshift" Mar 13 12:58:54.909491 master-0 kubenswrapper[28149]: I0313 12:58:54.909000 28149 memory_manager.go:354] "RemoveStaleState removing state" podUID="2f3bb9a1-578c-424d-8610-272c76bf0a31" containerName="prom-label-proxy" Mar 13 12:58:54.909491 master-0 kubenswrapper[28149]: I0313 12:58:54.909013 28149 memory_manager.go:354] "RemoveStaleState removing state" podUID="2f3bb9a1-578c-424d-8610-272c76bf0a31" containerName="kube-rbac-proxy-metric" Mar 13 12:58:54.909491 master-0 kubenswrapper[28149]: I0313 12:58:54.909026 28149 memory_manager.go:354] "RemoveStaleState removing state" podUID="49abaf10-6497-4c58-8a80-1a598caa2999" containerName="console" Mar 13 12:58:54.910000 master-0 kubenswrapper[28149]: I0313 12:58:54.909621 28149 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-6b77fb6479-77kfk" Mar 13 12:58:54.920170 master-0 kubenswrapper[28149]: I0313 12:58:54.911731 28149 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-7bcd698648-pfnwr"] Mar 13 12:58:54.920170 master-0 kubenswrapper[28149]: E0313 12:58:54.912063 28149 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a454234a-6c8e-4916-81e8-c9e66cec9d31" containerName="controller-manager" Mar 13 12:58:54.920170 master-0 kubenswrapper[28149]: I0313 12:58:54.912078 28149 state_mem.go:107] "Deleted CPUSet assignment" podUID="a454234a-6c8e-4916-81e8-c9e66cec9d31" containerName="controller-manager" Mar 13 12:58:54.920170 master-0 kubenswrapper[28149]: I0313 12:58:54.912690 28149 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-7bcd698648-pfnwr" Mar 13 12:58:54.920170 master-0 kubenswrapper[28149]: I0313 12:58:54.918057 28149 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Mar 13 12:58:54.924685 master-0 kubenswrapper[28149]: I0313 12:58:54.924641 28149 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-86477f577f-glgzr"] Mar 13 12:58:54.935169 master-0 kubenswrapper[28149]: I0313 12:58:54.935084 28149 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Mar 13 12:58:54.940484 master-0 kubenswrapper[28149]: I0313 12:58:54.935576 28149 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-grrfm" Mar 13 12:58:54.940863 master-0 kubenswrapper[28149]: I0313 12:58:54.935633 28149 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Mar 13 12:58:54.947273 master-0 
kubenswrapper[28149]: I0313 12:58:54.935757 28149 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Mar 13 12:58:54.947273 master-0 kubenswrapper[28149]: I0313 12:58:54.935816 28149 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Mar 13 12:58:54.947273 master-0 kubenswrapper[28149]: I0313 12:58:54.935948 28149 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Mar 13 12:58:54.947273 master-0 kubenswrapper[28149]: I0313 12:58:54.935996 28149 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Mar 13 12:58:54.947273 master-0 kubenswrapper[28149]: I0313 12:58:54.936089 28149 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Mar 13 12:58:54.947273 master-0 kubenswrapper[28149]: I0313 12:58:54.936229 28149 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Mar 13 12:58:54.947273 master-0 kubenswrapper[28149]: I0313 12:58:54.936367 28149 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Mar 13 12:58:54.947273 master-0 kubenswrapper[28149]: I0313 12:58:54.936455 28149 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Mar 13 12:58:54.947273 master-0 kubenswrapper[28149]: I0313 12:58:54.936515 28149 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Mar 13 12:58:54.947273 master-0 kubenswrapper[28149]: I0313 12:58:54.936605 28149 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-z42c9" Mar 13 12:58:54.947273 master-0 
kubenswrapper[28149]: I0313 12:58:54.936696 28149 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Mar 13 12:58:54.947273 master-0 kubenswrapper[28149]: I0313 12:58:54.936974 28149 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Mar 13 12:58:54.947273 master-0 kubenswrapper[28149]: I0313 12:58:54.938185 28149 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Mar 13 12:58:55.026977 master-0 kubenswrapper[28149]: I0313 12:58:55.026929 28149 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Mar 13 12:58:55.038953 master-0 kubenswrapper[28149]: I0313 12:58:55.038903 28149 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Mar 13 12:58:55.042669 master-0 kubenswrapper[28149]: I0313 12:58:55.042640 28149 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Mar 13 12:58:55.048739 master-0 kubenswrapper[28149]: I0313 12:58:55.048715 28149 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Mar 13 12:58:55.130010 master-0 kubenswrapper[28149]: I0313 12:58:55.129932 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/0cc3e582-adba-4520-aa5a-c999333ae0fc-v4-0-config-user-template-error\") pod \"oauth-openshift-7bcd698648-pfnwr\" (UID: \"0cc3e582-adba-4520-aa5a-c999333ae0fc\") " pod="openshift-authentication/oauth-openshift-7bcd698648-pfnwr" Mar 13 12:58:55.130292 master-0 kubenswrapper[28149]: I0313 12:58:55.130269 28149 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/0cc3e582-adba-4520-aa5a-c999333ae0fc-audit-policies\") pod \"oauth-openshift-7bcd698648-pfnwr\" (UID: \"0cc3e582-adba-4520-aa5a-c999333ae0fc\") " pod="openshift-authentication/oauth-openshift-7bcd698648-pfnwr" Mar 13 12:58:55.130445 master-0 kubenswrapper[28149]: I0313 12:58:55.130430 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/0cc3e582-adba-4520-aa5a-c999333ae0fc-audit-dir\") pod \"oauth-openshift-7bcd698648-pfnwr\" (UID: \"0cc3e582-adba-4520-aa5a-c999333ae0fc\") " pod="openshift-authentication/oauth-openshift-7bcd698648-pfnwr" Mar 13 12:58:55.130542 master-0 kubenswrapper[28149]: I0313 12:58:55.130525 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rgs5d\" (UniqueName: \"kubernetes.io/projected/774a19fc-e49b-48fa-ada0-294ef034d30a-kube-api-access-rgs5d\") pod \"controller-manager-6b77fb6479-77kfk\" (UID: \"774a19fc-e49b-48fa-ada0-294ef034d30a\") " pod="openshift-controller-manager/controller-manager-6b77fb6479-77kfk" Mar 13 12:58:55.130629 master-0 kubenswrapper[28149]: I0313 12:58:55.130614 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/774a19fc-e49b-48fa-ada0-294ef034d30a-proxy-ca-bundles\") pod \"controller-manager-6b77fb6479-77kfk\" (UID: \"774a19fc-e49b-48fa-ada0-294ef034d30a\") " pod="openshift-controller-manager/controller-manager-6b77fb6479-77kfk" Mar 13 12:58:55.130696 master-0 kubenswrapper[28149]: I0313 12:58:55.130685 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: 
\"kubernetes.io/configmap/0cc3e582-adba-4520-aa5a-c999333ae0fc-v4-0-config-system-cliconfig\") pod \"oauth-openshift-7bcd698648-pfnwr\" (UID: \"0cc3e582-adba-4520-aa5a-c999333ae0fc\") " pod="openshift-authentication/oauth-openshift-7bcd698648-pfnwr" Mar 13 12:58:55.130779 master-0 kubenswrapper[28149]: I0313 12:58:55.130762 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/0cc3e582-adba-4520-aa5a-c999333ae0fc-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-7bcd698648-pfnwr\" (UID: \"0cc3e582-adba-4520-aa5a-c999333ae0fc\") " pod="openshift-authentication/oauth-openshift-7bcd698648-pfnwr" Mar 13 12:58:55.130853 master-0 kubenswrapper[28149]: I0313 12:58:55.130841 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/0cc3e582-adba-4520-aa5a-c999333ae0fc-v4-0-config-system-router-certs\") pod \"oauth-openshift-7bcd698648-pfnwr\" (UID: \"0cc3e582-adba-4520-aa5a-c999333ae0fc\") " pod="openshift-authentication/oauth-openshift-7bcd698648-pfnwr" Mar 13 12:58:55.130938 master-0 kubenswrapper[28149]: I0313 12:58:55.130924 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fp6kx\" (UniqueName: \"kubernetes.io/projected/0cc3e582-adba-4520-aa5a-c999333ae0fc-kube-api-access-fp6kx\") pod \"oauth-openshift-7bcd698648-pfnwr\" (UID: \"0cc3e582-adba-4520-aa5a-c999333ae0fc\") " pod="openshift-authentication/oauth-openshift-7bcd698648-pfnwr" Mar 13 12:58:55.131200 master-0 kubenswrapper[28149]: I0313 12:58:55.131183 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: 
\"kubernetes.io/secret/0cc3e582-adba-4520-aa5a-c999333ae0fc-v4-0-config-user-template-login\") pod \"oauth-openshift-7bcd698648-pfnwr\" (UID: \"0cc3e582-adba-4520-aa5a-c999333ae0fc\") " pod="openshift-authentication/oauth-openshift-7bcd698648-pfnwr" Mar 13 12:58:55.131400 master-0 kubenswrapper[28149]: I0313 12:58:55.131383 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/0cc3e582-adba-4520-aa5a-c999333ae0fc-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-7bcd698648-pfnwr\" (UID: \"0cc3e582-adba-4520-aa5a-c999333ae0fc\") " pod="openshift-authentication/oauth-openshift-7bcd698648-pfnwr" Mar 13 12:58:55.131557 master-0 kubenswrapper[28149]: I0313 12:58:55.131534 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/774a19fc-e49b-48fa-ada0-294ef034d30a-serving-cert\") pod \"controller-manager-6b77fb6479-77kfk\" (UID: \"774a19fc-e49b-48fa-ada0-294ef034d30a\") " pod="openshift-controller-manager/controller-manager-6b77fb6479-77kfk" Mar 13 12:58:55.131703 master-0 kubenswrapper[28149]: I0313 12:58:55.131685 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/774a19fc-e49b-48fa-ada0-294ef034d30a-client-ca\") pod \"controller-manager-6b77fb6479-77kfk\" (UID: \"774a19fc-e49b-48fa-ada0-294ef034d30a\") " pod="openshift-controller-manager/controller-manager-6b77fb6479-77kfk" Mar 13 12:58:55.131847 master-0 kubenswrapper[28149]: I0313 12:58:55.131834 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/0cc3e582-adba-4520-aa5a-c999333ae0fc-v4-0-config-system-session\") pod \"oauth-openshift-7bcd698648-pfnwr\" (UID: 
\"0cc3e582-adba-4520-aa5a-c999333ae0fc\") " pod="openshift-authentication/oauth-openshift-7bcd698648-pfnwr" Mar 13 12:58:55.131953 master-0 kubenswrapper[28149]: I0313 12:58:55.131938 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/774a19fc-e49b-48fa-ada0-294ef034d30a-config\") pod \"controller-manager-6b77fb6479-77kfk\" (UID: \"774a19fc-e49b-48fa-ada0-294ef034d30a\") " pod="openshift-controller-manager/controller-manager-6b77fb6479-77kfk" Mar 13 12:58:55.132031 master-0 kubenswrapper[28149]: I0313 12:58:55.132018 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/0cc3e582-adba-4520-aa5a-c999333ae0fc-v4-0-config-system-service-ca\") pod \"oauth-openshift-7bcd698648-pfnwr\" (UID: \"0cc3e582-adba-4520-aa5a-c999333ae0fc\") " pod="openshift-authentication/oauth-openshift-7bcd698648-pfnwr" Mar 13 12:58:55.132125 master-0 kubenswrapper[28149]: I0313 12:58:55.132109 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/0cc3e582-adba-4520-aa5a-c999333ae0fc-v4-0-config-system-serving-cert\") pod \"oauth-openshift-7bcd698648-pfnwr\" (UID: \"0cc3e582-adba-4520-aa5a-c999333ae0fc\") " pod="openshift-authentication/oauth-openshift-7bcd698648-pfnwr" Mar 13 12:58:55.132288 master-0 kubenswrapper[28149]: I0313 12:58:55.132268 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0cc3e582-adba-4520-aa5a-c999333ae0fc-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-7bcd698648-pfnwr\" (UID: \"0cc3e582-adba-4520-aa5a-c999333ae0fc\") " pod="openshift-authentication/oauth-openshift-7bcd698648-pfnwr" Mar 13 
12:58:55.233801 master-0 kubenswrapper[28149]: I0313 12:58:55.233603 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0cc3e582-adba-4520-aa5a-c999333ae0fc-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-7bcd698648-pfnwr\" (UID: \"0cc3e582-adba-4520-aa5a-c999333ae0fc\") " pod="openshift-authentication/oauth-openshift-7bcd698648-pfnwr" Mar 13 12:58:55.233801 master-0 kubenswrapper[28149]: I0313 12:58:55.233667 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/0cc3e582-adba-4520-aa5a-c999333ae0fc-v4-0-config-user-template-error\") pod \"oauth-openshift-7bcd698648-pfnwr\" (UID: \"0cc3e582-adba-4520-aa5a-c999333ae0fc\") " pod="openshift-authentication/oauth-openshift-7bcd698648-pfnwr" Mar 13 12:58:55.233801 master-0 kubenswrapper[28149]: I0313 12:58:55.233691 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/0cc3e582-adba-4520-aa5a-c999333ae0fc-audit-policies\") pod \"oauth-openshift-7bcd698648-pfnwr\" (UID: \"0cc3e582-adba-4520-aa5a-c999333ae0fc\") " pod="openshift-authentication/oauth-openshift-7bcd698648-pfnwr" Mar 13 12:58:55.233801 master-0 kubenswrapper[28149]: I0313 12:58:55.233716 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/0cc3e582-adba-4520-aa5a-c999333ae0fc-audit-dir\") pod \"oauth-openshift-7bcd698648-pfnwr\" (UID: \"0cc3e582-adba-4520-aa5a-c999333ae0fc\") " pod="openshift-authentication/oauth-openshift-7bcd698648-pfnwr" Mar 13 12:58:55.233801 master-0 kubenswrapper[28149]: I0313 12:58:55.233773 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rgs5d\" (UniqueName: 
\"kubernetes.io/projected/774a19fc-e49b-48fa-ada0-294ef034d30a-kube-api-access-rgs5d\") pod \"controller-manager-6b77fb6479-77kfk\" (UID: \"774a19fc-e49b-48fa-ada0-294ef034d30a\") " pod="openshift-controller-manager/controller-manager-6b77fb6479-77kfk" Mar 13 12:58:55.233801 master-0 kubenswrapper[28149]: I0313 12:58:55.233800 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/774a19fc-e49b-48fa-ada0-294ef034d30a-proxy-ca-bundles\") pod \"controller-manager-6b77fb6479-77kfk\" (UID: \"774a19fc-e49b-48fa-ada0-294ef034d30a\") " pod="openshift-controller-manager/controller-manager-6b77fb6479-77kfk" Mar 13 12:58:55.234191 master-0 kubenswrapper[28149]: I0313 12:58:55.233822 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/0cc3e582-adba-4520-aa5a-c999333ae0fc-v4-0-config-system-cliconfig\") pod \"oauth-openshift-7bcd698648-pfnwr\" (UID: \"0cc3e582-adba-4520-aa5a-c999333ae0fc\") " pod="openshift-authentication/oauth-openshift-7bcd698648-pfnwr" Mar 13 12:58:55.234191 master-0 kubenswrapper[28149]: I0313 12:58:55.233842 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/0cc3e582-adba-4520-aa5a-c999333ae0fc-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-7bcd698648-pfnwr\" (UID: \"0cc3e582-adba-4520-aa5a-c999333ae0fc\") " pod="openshift-authentication/oauth-openshift-7bcd698648-pfnwr" Mar 13 12:58:55.234191 master-0 kubenswrapper[28149]: I0313 12:58:55.233863 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/0cc3e582-adba-4520-aa5a-c999333ae0fc-v4-0-config-system-router-certs\") pod \"oauth-openshift-7bcd698648-pfnwr\" (UID: 
\"0cc3e582-adba-4520-aa5a-c999333ae0fc\") " pod="openshift-authentication/oauth-openshift-7bcd698648-pfnwr" Mar 13 12:58:55.234191 master-0 kubenswrapper[28149]: I0313 12:58:55.233903 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fp6kx\" (UniqueName: \"kubernetes.io/projected/0cc3e582-adba-4520-aa5a-c999333ae0fc-kube-api-access-fp6kx\") pod \"oauth-openshift-7bcd698648-pfnwr\" (UID: \"0cc3e582-adba-4520-aa5a-c999333ae0fc\") " pod="openshift-authentication/oauth-openshift-7bcd698648-pfnwr" Mar 13 12:58:55.234191 master-0 kubenswrapper[28149]: I0313 12:58:55.233932 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/0cc3e582-adba-4520-aa5a-c999333ae0fc-v4-0-config-user-template-login\") pod \"oauth-openshift-7bcd698648-pfnwr\" (UID: \"0cc3e582-adba-4520-aa5a-c999333ae0fc\") " pod="openshift-authentication/oauth-openshift-7bcd698648-pfnwr" Mar 13 12:58:55.234191 master-0 kubenswrapper[28149]: I0313 12:58:55.233961 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/0cc3e582-adba-4520-aa5a-c999333ae0fc-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-7bcd698648-pfnwr\" (UID: \"0cc3e582-adba-4520-aa5a-c999333ae0fc\") " pod="openshift-authentication/oauth-openshift-7bcd698648-pfnwr" Mar 13 12:58:55.234191 master-0 kubenswrapper[28149]: I0313 12:58:55.233996 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/774a19fc-e49b-48fa-ada0-294ef034d30a-serving-cert\") pod \"controller-manager-6b77fb6479-77kfk\" (UID: \"774a19fc-e49b-48fa-ada0-294ef034d30a\") " pod="openshift-controller-manager/controller-manager-6b77fb6479-77kfk" Mar 13 12:58:55.234191 master-0 kubenswrapper[28149]: I0313 12:58:55.234020 28149 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/774a19fc-e49b-48fa-ada0-294ef034d30a-client-ca\") pod \"controller-manager-6b77fb6479-77kfk\" (UID: \"774a19fc-e49b-48fa-ada0-294ef034d30a\") " pod="openshift-controller-manager/controller-manager-6b77fb6479-77kfk" Mar 13 12:58:55.234191 master-0 kubenswrapper[28149]: I0313 12:58:55.234042 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/0cc3e582-adba-4520-aa5a-c999333ae0fc-v4-0-config-system-session\") pod \"oauth-openshift-7bcd698648-pfnwr\" (UID: \"0cc3e582-adba-4520-aa5a-c999333ae0fc\") " pod="openshift-authentication/oauth-openshift-7bcd698648-pfnwr" Mar 13 12:58:55.234191 master-0 kubenswrapper[28149]: I0313 12:58:55.234073 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/774a19fc-e49b-48fa-ada0-294ef034d30a-config\") pod \"controller-manager-6b77fb6479-77kfk\" (UID: \"774a19fc-e49b-48fa-ada0-294ef034d30a\") " pod="openshift-controller-manager/controller-manager-6b77fb6479-77kfk" Mar 13 12:58:55.234191 master-0 kubenswrapper[28149]: I0313 12:58:55.234088 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/0cc3e582-adba-4520-aa5a-c999333ae0fc-v4-0-config-system-service-ca\") pod \"oauth-openshift-7bcd698648-pfnwr\" (UID: \"0cc3e582-adba-4520-aa5a-c999333ae0fc\") " pod="openshift-authentication/oauth-openshift-7bcd698648-pfnwr" Mar 13 12:58:55.234191 master-0 kubenswrapper[28149]: I0313 12:58:55.234116 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/0cc3e582-adba-4520-aa5a-c999333ae0fc-v4-0-config-system-serving-cert\") pod 
\"oauth-openshift-7bcd698648-pfnwr\" (UID: \"0cc3e582-adba-4520-aa5a-c999333ae0fc\") " pod="openshift-authentication/oauth-openshift-7bcd698648-pfnwr" Mar 13 12:58:55.236319 master-0 kubenswrapper[28149]: I0313 12:58:55.236288 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/774a19fc-e49b-48fa-ada0-294ef034d30a-client-ca\") pod \"controller-manager-6b77fb6479-77kfk\" (UID: \"774a19fc-e49b-48fa-ada0-294ef034d30a\") " pod="openshift-controller-manager/controller-manager-6b77fb6479-77kfk" Mar 13 12:58:55.237304 master-0 kubenswrapper[28149]: I0313 12:58:55.237277 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/0cc3e582-adba-4520-aa5a-c999333ae0fc-v4-0-config-system-serving-cert\") pod \"oauth-openshift-7bcd698648-pfnwr\" (UID: \"0cc3e582-adba-4520-aa5a-c999333ae0fc\") " pod="openshift-authentication/oauth-openshift-7bcd698648-pfnwr" Mar 13 12:58:55.239628 master-0 kubenswrapper[28149]: I0313 12:58:55.239579 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/774a19fc-e49b-48fa-ada0-294ef034d30a-config\") pod \"controller-manager-6b77fb6479-77kfk\" (UID: \"774a19fc-e49b-48fa-ada0-294ef034d30a\") " pod="openshift-controller-manager/controller-manager-6b77fb6479-77kfk" Mar 13 12:58:55.240075 master-0 kubenswrapper[28149]: I0313 12:58:55.240047 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/0cc3e582-adba-4520-aa5a-c999333ae0fc-v4-0-config-system-service-ca\") pod \"oauth-openshift-7bcd698648-pfnwr\" (UID: \"0cc3e582-adba-4520-aa5a-c999333ae0fc\") " pod="openshift-authentication/oauth-openshift-7bcd698648-pfnwr" Mar 13 12:58:55.240229 master-0 kubenswrapper[28149]: I0313 12:58:55.240103 28149 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/0cc3e582-adba-4520-aa5a-c999333ae0fc-audit-dir\") pod \"oauth-openshift-7bcd698648-pfnwr\" (UID: \"0cc3e582-adba-4520-aa5a-c999333ae0fc\") " pod="openshift-authentication/oauth-openshift-7bcd698648-pfnwr" Mar 13 12:58:55.240692 master-0 kubenswrapper[28149]: I0313 12:58:55.240668 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/0cc3e582-adba-4520-aa5a-c999333ae0fc-audit-policies\") pod \"oauth-openshift-7bcd698648-pfnwr\" (UID: \"0cc3e582-adba-4520-aa5a-c999333ae0fc\") " pod="openshift-authentication/oauth-openshift-7bcd698648-pfnwr" Mar 13 12:58:55.241203 master-0 kubenswrapper[28149]: I0313 12:58:55.241174 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/0cc3e582-adba-4520-aa5a-c999333ae0fc-v4-0-config-system-cliconfig\") pod \"oauth-openshift-7bcd698648-pfnwr\" (UID: \"0cc3e582-adba-4520-aa5a-c999333ae0fc\") " pod="openshift-authentication/oauth-openshift-7bcd698648-pfnwr" Mar 13 12:58:55.267538 master-0 kubenswrapper[28149]: I0313 12:58:55.265769 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0cc3e582-adba-4520-aa5a-c999333ae0fc-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-7bcd698648-pfnwr\" (UID: \"0cc3e582-adba-4520-aa5a-c999333ae0fc\") " pod="openshift-authentication/oauth-openshift-7bcd698648-pfnwr" Mar 13 12:58:55.268460 master-0 kubenswrapper[28149]: I0313 12:58:55.268390 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/0cc3e582-adba-4520-aa5a-c999333ae0fc-v4-0-config-user-template-login\") pod \"oauth-openshift-7bcd698648-pfnwr\" (UID: 
\"0cc3e582-adba-4520-aa5a-c999333ae0fc\") " pod="openshift-authentication/oauth-openshift-7bcd698648-pfnwr" Mar 13 12:58:55.268554 master-0 kubenswrapper[28149]: I0313 12:58:55.268486 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/0cc3e582-adba-4520-aa5a-c999333ae0fc-v4-0-config-system-router-certs\") pod \"oauth-openshift-7bcd698648-pfnwr\" (UID: \"0cc3e582-adba-4520-aa5a-c999333ae0fc\") " pod="openshift-authentication/oauth-openshift-7bcd698648-pfnwr" Mar 13 12:58:55.268990 master-0 kubenswrapper[28149]: I0313 12:58:55.268676 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/774a19fc-e49b-48fa-ada0-294ef034d30a-proxy-ca-bundles\") pod \"controller-manager-6b77fb6479-77kfk\" (UID: \"774a19fc-e49b-48fa-ada0-294ef034d30a\") " pod="openshift-controller-manager/controller-manager-6b77fb6479-77kfk" Mar 13 12:58:55.268990 master-0 kubenswrapper[28149]: I0313 12:58:55.268810 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/774a19fc-e49b-48fa-ada0-294ef034d30a-serving-cert\") pod \"controller-manager-6b77fb6479-77kfk\" (UID: \"774a19fc-e49b-48fa-ada0-294ef034d30a\") " pod="openshift-controller-manager/controller-manager-6b77fb6479-77kfk" Mar 13 12:58:55.268990 master-0 kubenswrapper[28149]: I0313 12:58:55.268953 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/0cc3e582-adba-4520-aa5a-c999333ae0fc-v4-0-config-system-session\") pod \"oauth-openshift-7bcd698648-pfnwr\" (UID: \"0cc3e582-adba-4520-aa5a-c999333ae0fc\") " pod="openshift-authentication/oauth-openshift-7bcd698648-pfnwr" Mar 13 12:58:55.268990 master-0 kubenswrapper[28149]: I0313 12:58:55.268978 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/0cc3e582-adba-4520-aa5a-c999333ae0fc-v4-0-config-user-template-error\") pod \"oauth-openshift-7bcd698648-pfnwr\" (UID: \"0cc3e582-adba-4520-aa5a-c999333ae0fc\") " pod="openshift-authentication/oauth-openshift-7bcd698648-pfnwr" Mar 13 12:58:55.270833 master-0 kubenswrapper[28149]: I0313 12:58:55.270788 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/0cc3e582-adba-4520-aa5a-c999333ae0fc-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-7bcd698648-pfnwr\" (UID: \"0cc3e582-adba-4520-aa5a-c999333ae0fc\") " pod="openshift-authentication/oauth-openshift-7bcd698648-pfnwr" Mar 13 12:58:55.272366 master-0 kubenswrapper[28149]: I0313 12:58:55.272313 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/0cc3e582-adba-4520-aa5a-c999333ae0fc-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-7bcd698648-pfnwr\" (UID: \"0cc3e582-adba-4520-aa5a-c999333ae0fc\") " pod="openshift-authentication/oauth-openshift-7bcd698648-pfnwr" Mar 13 12:58:55.831702 master-0 kubenswrapper[28149]: I0313 12:58:55.830322 28149 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-6b77fb6479-77kfk"] Mar 13 12:58:55.841238 master-0 kubenswrapper[28149]: I0313 12:58:55.840428 28149 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-7bcd698648-pfnwr"] Mar 13 12:58:56.639795 master-0 kubenswrapper[28149]: I0313 12:58:56.639716 28149 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-authentication/oauth-openshift-86477f577f-glgzr"] Mar 13 12:58:56.701348 master-0 kubenswrapper[28149]: I0313 12:58:56.701272 28149 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="28729487-1d7c-4837-961a-6cb084bf543f" path="/var/lib/kubelet/pods/28729487-1d7c-4837-961a-6cb084bf543f/volumes" Mar 13 12:58:57.562051 master-0 kubenswrapper[28149]: I0313 12:58:57.561976 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fp6kx\" (UniqueName: \"kubernetes.io/projected/0cc3e582-adba-4520-aa5a-c999333ae0fc-kube-api-access-fp6kx\") pod \"oauth-openshift-7bcd698648-pfnwr\" (UID: \"0cc3e582-adba-4520-aa5a-c999333ae0fc\") " pod="openshift-authentication/oauth-openshift-7bcd698648-pfnwr" Mar 13 12:58:57.569086 master-0 kubenswrapper[28149]: I0313 12:58:57.569000 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rgs5d\" (UniqueName: \"kubernetes.io/projected/774a19fc-e49b-48fa-ada0-294ef034d30a-kube-api-access-rgs5d\") pod \"controller-manager-6b77fb6479-77kfk\" (UID: \"774a19fc-e49b-48fa-ada0-294ef034d30a\") " pod="openshift-controller-manager/controller-manager-6b77fb6479-77kfk" Mar 13 12:58:57.774163 master-0 kubenswrapper[28149]: I0313 12:58:57.762384 28149 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6b77fb6479-77kfk" Mar 13 12:58:57.999912 master-0 kubenswrapper[28149]: I0313 12:58:57.999704 28149 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-7bcd698648-pfnwr" Mar 13 12:58:59.081222 master-0 kubenswrapper[28149]: I0313 12:58:59.081123 28149 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-5b845478f4-2dqdf"] Mar 13 12:58:59.587868 master-0 kubenswrapper[28149]: I0313 12:58:59.587745 28149 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-bc56f484d-2sbtm" Mar 13 12:58:59.744440 master-0 kubenswrapper[28149]: I0313 12:58:59.744353 28149 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-5b845478f4-2dqdf"] Mar 13 12:59:00.832380 master-0 kubenswrapper[28149]: I0313 12:59:00.832312 28149 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49abaf10-6497-4c58-8a80-1a598caa2999" path="/var/lib/kubelet/pods/49abaf10-6497-4c58-8a80-1a598caa2999/volumes" Mar 13 12:59:01.529635 master-0 kubenswrapper[28149]: I0313 12:59:01.529519 28149 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-7bcd698648-pfnwr"] Mar 13 12:59:01.537518 master-0 kubenswrapper[28149]: I0313 12:59:01.537464 28149 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-6b77fb6479-77kfk"] Mar 13 12:59:02.774321 master-0 kubenswrapper[28149]: I0313 12:59:02.774230 28149 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/downloads-84f57b9877-x5rrn" podStartSLOduration=13.227097784 podStartE2EDuration="1m18.774186492s" podCreationTimestamp="2026-03-13 12:57:44 +0000 UTC" firstStartedPulling="2026-03-13 12:57:45.227077756 +0000 UTC m=+238.880542915" lastFinishedPulling="2026-03-13 12:58:50.774166464 +0000 UTC m=+304.427631623" observedRunningTime="2026-03-13 12:59:02.773053431 +0000 UTC m=+316.426518600" watchObservedRunningTime="2026-03-13 12:59:02.774186492 +0000 UTC m=+316.427651691" Mar 13 12:59:02.899419 master-0 
kubenswrapper[28149]: I0313 12:59:02.896671 28149 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-68c48d4f7d-k7drw"] Mar 13 12:59:02.913412 master-0 kubenswrapper[28149]: I0313 12:59:02.911441 28149 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-68c48d4f7d-k7drw"] Mar 13 12:59:02.928111 master-0 kubenswrapper[28149]: I0313 12:59:02.928055 28149 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-54c79cbfcc-cxhmh"] Mar 13 12:59:02.941033 master-0 kubenswrapper[28149]: I0313 12:59:02.940954 28149 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-54c79cbfcc-cxhmh"] Mar 13 12:59:02.941236 master-0 kubenswrapper[28149]: I0313 12:59:02.941173 28149 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-bc56f484d-2sbtm" podStartSLOduration=23.941147121 podStartE2EDuration="23.941147121s" podCreationTimestamp="2026-03-13 12:58:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 12:59:02.925711779 +0000 UTC m=+316.579176938" watchObservedRunningTime="2026-03-13 12:59:02.941147121 +0000 UTC m=+316.594612280" Mar 13 12:59:02.990213 master-0 kubenswrapper[28149]: I0313 12:59:02.987344 28149 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-monitoring/alertmanager-main-0"] Mar 13 12:59:03.011282 master-0 kubenswrapper[28149]: I0313 12:59:03.009532 28149 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-monitoring/alertmanager-main-0"] Mar 13 12:59:03.012296 master-0 kubenswrapper[28149]: I0313 12:59:03.009712 28149 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-575b4c697b-kjnzx" podStartSLOduration=19.009689532 
podStartE2EDuration="19.009689532s" podCreationTimestamp="2026-03-13 12:58:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 12:59:02.977136163 +0000 UTC m=+316.630601322" watchObservedRunningTime="2026-03-13 12:59:03.009689532 +0000 UTC m=+316.663154691" Mar 13 12:59:03.040216 master-0 kubenswrapper[28149]: I0313 12:59:03.037548 28149 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-monitoring/prometheus-k8s-0"] Mar 13 12:59:03.040216 master-0 kubenswrapper[28149]: I0313 12:59:03.037929 28149 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-monitoring/prometheus-k8s-0" podUID="4e6b4a1c-92b3-4b72-9f4b-1fd7814125b5" containerName="prometheus" containerID="cri-o://979968ca203da3fdb2f2540df77bd6489021a783195cb39241718daa02ab6141" gracePeriod=600 Mar 13 12:59:03.040216 master-0 kubenswrapper[28149]: I0313 12:59:03.038071 28149 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-monitoring/prometheus-k8s-0" podUID="4e6b4a1c-92b3-4b72-9f4b-1fd7814125b5" containerName="kube-rbac-proxy-thanos" containerID="cri-o://11577e2fa27649ff57c812aa1e5f415f0df82cf38672121d4fd2104f2ddb2c74" gracePeriod=600 Mar 13 12:59:03.040216 master-0 kubenswrapper[28149]: I0313 12:59:03.038117 28149 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-monitoring/prometheus-k8s-0" podUID="4e6b4a1c-92b3-4b72-9f4b-1fd7814125b5" containerName="kube-rbac-proxy" containerID="cri-o://f4d159b63ea08ed149a2147a2b0c14196fd73509b5490955c8649e50998058a8" gracePeriod=600 Mar 13 12:59:03.040216 master-0 kubenswrapper[28149]: I0313 12:59:03.038178 28149 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-monitoring/prometheus-k8s-0" podUID="4e6b4a1c-92b3-4b72-9f4b-1fd7814125b5" containerName="kube-rbac-proxy-web" 
containerID="cri-o://8c257806ee764144b97286097976faa619d91e8aaf5fd5e5f387ea61f8a61b3d" gracePeriod=600 Mar 13 12:59:03.040216 master-0 kubenswrapper[28149]: I0313 12:59:03.038214 28149 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-monitoring/prometheus-k8s-0" podUID="4e6b4a1c-92b3-4b72-9f4b-1fd7814125b5" containerName="thanos-sidecar" containerID="cri-o://45c005ee02d97cdd3fe7139faccbd052b06c1eec52072769631e675eb071ba1c" gracePeriod=600 Mar 13 12:59:03.040216 master-0 kubenswrapper[28149]: I0313 12:59:03.038249 28149 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-monitoring/prometheus-k8s-0" podUID="4e6b4a1c-92b3-4b72-9f4b-1fd7814125b5" containerName="config-reloader" containerID="cri-o://4a5a6846548f318a60129c0ddbfaffa9fe75d590cd409fdca3fd562ed975c4a4" gracePeriod=600 Mar 13 12:59:03.043207 master-0 kubenswrapper[28149]: I0313 12:59:03.041899 28149 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-59dc574b9f-z4gvv"] Mar 13 12:59:03.043207 master-0 kubenswrapper[28149]: I0313 12:59:03.042123 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-7bcd698648-pfnwr" event={"ID":"0cc3e582-adba-4520-aa5a-c999333ae0fc","Type":"ContainerStarted","Data":"bf32d09ef9179774f92d417e78ecca06c9a30be60ce1f0738d6f0dd7eb19cb67"} Mar 13 12:59:03.061417 master-0 kubenswrapper[28149]: I0313 12:59:03.060230 28149 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-59dc574b9f-z4gvv"] Mar 13 12:59:03.065521 master-0 kubenswrapper[28149]: I0313 12:59:03.064842 28149 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/alertmanager-main-0"] Mar 13 12:59:03.069076 master-0 kubenswrapper[28149]: I0313 12:59:03.068574 28149 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/alertmanager-main-0" Mar 13 12:59:03.077885 master-0 kubenswrapper[28149]: I0313 12:59:03.073270 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6b77fb6479-77kfk" event={"ID":"774a19fc-e49b-48fa-ada0-294ef034d30a","Type":"ContainerStarted","Data":"1267d15b21e450ee7ba78f2164ad031c530eba1775bc073536da17fe79ea8e8d"} Mar 13 12:59:03.077885 master-0 kubenswrapper[28149]: I0313 12:59:03.073565 28149 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-kube-rbac-proxy-metric" Mar 13 12:59:03.077885 master-0 kubenswrapper[28149]: I0313 12:59:03.073844 28149 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-main-web-config" Mar 13 12:59:03.077885 master-0 kubenswrapper[28149]: I0313 12:59:03.074114 28149 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-kube-rbac-proxy-web" Mar 13 12:59:03.077885 master-0 kubenswrapper[28149]: I0313 12:59:03.074218 28149 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-main-tls-assets-0" Mar 13 12:59:03.077885 master-0 kubenswrapper[28149]: I0313 12:59:03.074381 28149 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-main-dockercfg-rfzgn" Mar 13 12:59:03.077885 master-0 kubenswrapper[28149]: I0313 12:59:03.074832 28149 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-main-generated" Mar 13 12:59:03.077885 master-0 kubenswrapper[28149]: I0313 12:59:03.075176 28149 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-main-tls" Mar 13 12:59:03.077885 master-0 kubenswrapper[28149]: I0313 12:59:03.075347 28149 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-monitoring"/"alertmanager-kube-rbac-proxy" Mar 13 12:59:03.394863 master-0 kubenswrapper[28149]: I0313 12:59:03.394427 28149 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"alertmanager-trusted-ca-bundle" Mar 13 12:59:03.675575 master-0 kubenswrapper[28149]: I0313 12:59:03.488940 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/51320e4b-751c-4bd4-8a4d-765f43db1d4e-tls-assets\") pod \"alertmanager-main-0\" (UID: \"51320e4b-751c-4bd4-8a4d-765f43db1d4e\") " pod="openshift-monitoring/alertmanager-main-0" Mar 13 12:59:03.675575 master-0 kubenswrapper[28149]: I0313 12:59:03.489028 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/51320e4b-751c-4bd4-8a4d-765f43db1d4e-config-volume\") pod \"alertmanager-main-0\" (UID: \"51320e4b-751c-4bd4-8a4d-765f43db1d4e\") " pod="openshift-monitoring/alertmanager-main-0" Mar 13 12:59:03.675575 master-0 kubenswrapper[28149]: I0313 12:59:03.489155 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/51320e4b-751c-4bd4-8a4d-765f43db1d4e-metrics-client-ca\") pod \"alertmanager-main-0\" (UID: \"51320e4b-751c-4bd4-8a4d-765f43db1d4e\") " pod="openshift-monitoring/alertmanager-main-0" Mar 13 12:59:03.675575 master-0 kubenswrapper[28149]: I0313 12:59:03.489222 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"alertmanager-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/51320e4b-751c-4bd4-8a4d-765f43db1d4e-alertmanager-trusted-ca-bundle\") pod \"alertmanager-main-0\" (UID: \"51320e4b-751c-4bd4-8a4d-765f43db1d4e\") " pod="openshift-monitoring/alertmanager-main-0" Mar 13 12:59:03.675575 master-0 
kubenswrapper[28149]: I0313 12:59:03.489305 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/51320e4b-751c-4bd4-8a4d-765f43db1d4e-web-config\") pod \"alertmanager-main-0\" (UID: \"51320e4b-751c-4bd4-8a4d-765f43db1d4e\") " pod="openshift-monitoring/alertmanager-main-0" Mar 13 12:59:03.675575 master-0 kubenswrapper[28149]: I0313 12:59:03.489365 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/51320e4b-751c-4bd4-8a4d-765f43db1d4e-config-out\") pod \"alertmanager-main-0\" (UID: \"51320e4b-751c-4bd4-8a4d-765f43db1d4e\") " pod="openshift-monitoring/alertmanager-main-0" Mar 13 12:59:03.675575 master-0 kubenswrapper[28149]: I0313 12:59:03.489412 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wf82z\" (UniqueName: \"kubernetes.io/projected/51320e4b-751c-4bd4-8a4d-765f43db1d4e-kube-api-access-wf82z\") pod \"alertmanager-main-0\" (UID: \"51320e4b-751c-4bd4-8a4d-765f43db1d4e\") " pod="openshift-monitoring/alertmanager-main-0" Mar 13 12:59:03.675575 master-0 kubenswrapper[28149]: I0313 12:59:03.489458 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-alertmanager-main-tls\" (UniqueName: \"kubernetes.io/secret/51320e4b-751c-4bd4-8a4d-765f43db1d4e-secret-alertmanager-main-tls\") pod \"alertmanager-main-0\" (UID: \"51320e4b-751c-4bd4-8a4d-765f43db1d4e\") " pod="openshift-monitoring/alertmanager-main-0" Mar 13 12:59:03.675575 master-0 kubenswrapper[28149]: I0313 12:59:03.489513 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-alertmanager-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/51320e4b-751c-4bd4-8a4d-765f43db1d4e-secret-alertmanager-kube-rbac-proxy\") pod 
\"alertmanager-main-0\" (UID: \"51320e4b-751c-4bd4-8a4d-765f43db1d4e\") " pod="openshift-monitoring/alertmanager-main-0" Mar 13 12:59:03.675575 master-0 kubenswrapper[28149]: I0313 12:59:03.489568 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-alertmanager-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/51320e4b-751c-4bd4-8a4d-765f43db1d4e-secret-alertmanager-kube-rbac-proxy-web\") pod \"alertmanager-main-0\" (UID: \"51320e4b-751c-4bd4-8a4d-765f43db1d4e\") " pod="openshift-monitoring/alertmanager-main-0" Mar 13 12:59:03.675575 master-0 kubenswrapper[28149]: I0313 12:59:03.489629 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-alertmanager-kube-rbac-proxy-metric\" (UniqueName: \"kubernetes.io/secret/51320e4b-751c-4bd4-8a4d-765f43db1d4e-secret-alertmanager-kube-rbac-proxy-metric\") pod \"alertmanager-main-0\" (UID: \"51320e4b-751c-4bd4-8a4d-765f43db1d4e\") " pod="openshift-monitoring/alertmanager-main-0" Mar 13 12:59:03.675575 master-0 kubenswrapper[28149]: I0313 12:59:03.489676 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"alertmanager-main-db\" (UniqueName: \"kubernetes.io/empty-dir/51320e4b-751c-4bd4-8a4d-765f43db1d4e-alertmanager-main-db\") pod \"alertmanager-main-0\" (UID: \"51320e4b-751c-4bd4-8a4d-765f43db1d4e\") " pod="openshift-monitoring/alertmanager-main-0" Mar 13 12:59:03.679039 master-0 kubenswrapper[28149]: I0313 12:59:03.668703 28149 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/alertmanager-main-0"] Mar 13 12:59:03.766178 master-0 kubenswrapper[28149]: I0313 12:59:03.766116 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-alertmanager-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/51320e4b-751c-4bd4-8a4d-765f43db1d4e-secret-alertmanager-kube-rbac-proxy-web\") pod 
\"alertmanager-main-0\" (UID: \"51320e4b-751c-4bd4-8a4d-765f43db1d4e\") " pod="openshift-monitoring/alertmanager-main-0" Mar 13 12:59:03.766178 master-0 kubenswrapper[28149]: I0313 12:59:03.766173 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-alertmanager-kube-rbac-proxy-metric\" (UniqueName: \"kubernetes.io/secret/51320e4b-751c-4bd4-8a4d-765f43db1d4e-secret-alertmanager-kube-rbac-proxy-metric\") pod \"alertmanager-main-0\" (UID: \"51320e4b-751c-4bd4-8a4d-765f43db1d4e\") " pod="openshift-monitoring/alertmanager-main-0" Mar 13 12:59:03.766486 master-0 kubenswrapper[28149]: I0313 12:59:03.766198 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"alertmanager-main-db\" (UniqueName: \"kubernetes.io/empty-dir/51320e4b-751c-4bd4-8a4d-765f43db1d4e-alertmanager-main-db\") pod \"alertmanager-main-0\" (UID: \"51320e4b-751c-4bd4-8a4d-765f43db1d4e\") " pod="openshift-monitoring/alertmanager-main-0" Mar 13 12:59:03.766486 master-0 kubenswrapper[28149]: I0313 12:59:03.766244 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/51320e4b-751c-4bd4-8a4d-765f43db1d4e-tls-assets\") pod \"alertmanager-main-0\" (UID: \"51320e4b-751c-4bd4-8a4d-765f43db1d4e\") " pod="openshift-monitoring/alertmanager-main-0" Mar 13 12:59:03.766486 master-0 kubenswrapper[28149]: I0313 12:59:03.766266 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/51320e4b-751c-4bd4-8a4d-765f43db1d4e-config-volume\") pod \"alertmanager-main-0\" (UID: \"51320e4b-751c-4bd4-8a4d-765f43db1d4e\") " pod="openshift-monitoring/alertmanager-main-0" Mar 13 12:59:03.766486 master-0 kubenswrapper[28149]: I0313 12:59:03.766301 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: 
\"kubernetes.io/configmap/51320e4b-751c-4bd4-8a4d-765f43db1d4e-metrics-client-ca\") pod \"alertmanager-main-0\" (UID: \"51320e4b-751c-4bd4-8a4d-765f43db1d4e\") " pod="openshift-monitoring/alertmanager-main-0" Mar 13 12:59:03.766486 master-0 kubenswrapper[28149]: I0313 12:59:03.766326 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"alertmanager-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/51320e4b-751c-4bd4-8a4d-765f43db1d4e-alertmanager-trusted-ca-bundle\") pod \"alertmanager-main-0\" (UID: \"51320e4b-751c-4bd4-8a4d-765f43db1d4e\") " pod="openshift-monitoring/alertmanager-main-0" Mar 13 12:59:03.766486 master-0 kubenswrapper[28149]: I0313 12:59:03.766357 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/51320e4b-751c-4bd4-8a4d-765f43db1d4e-web-config\") pod \"alertmanager-main-0\" (UID: \"51320e4b-751c-4bd4-8a4d-765f43db1d4e\") " pod="openshift-monitoring/alertmanager-main-0" Mar 13 12:59:03.767072 master-0 kubenswrapper[28149]: I0313 12:59:03.766807 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"alertmanager-main-db\" (UniqueName: \"kubernetes.io/empty-dir/51320e4b-751c-4bd4-8a4d-765f43db1d4e-alertmanager-main-db\") pod \"alertmanager-main-0\" (UID: \"51320e4b-751c-4bd4-8a4d-765f43db1d4e\") " pod="openshift-monitoring/alertmanager-main-0" Mar 13 12:59:03.769507 master-0 kubenswrapper[28149]: I0313 12:59:03.769459 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/51320e4b-751c-4bd4-8a4d-765f43db1d4e-metrics-client-ca\") pod \"alertmanager-main-0\" (UID: \"51320e4b-751c-4bd4-8a4d-765f43db1d4e\") " pod="openshift-monitoring/alertmanager-main-0" Mar 13 12:59:03.769623 master-0 kubenswrapper[28149]: I0313 12:59:03.769523 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wf82z\" 
(UniqueName: \"kubernetes.io/projected/51320e4b-751c-4bd4-8a4d-765f43db1d4e-kube-api-access-wf82z\") pod \"alertmanager-main-0\" (UID: \"51320e4b-751c-4bd4-8a4d-765f43db1d4e\") " pod="openshift-monitoring/alertmanager-main-0" Mar 13 12:59:03.769623 master-0 kubenswrapper[28149]: I0313 12:59:03.769546 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/51320e4b-751c-4bd4-8a4d-765f43db1d4e-config-out\") pod \"alertmanager-main-0\" (UID: \"51320e4b-751c-4bd4-8a4d-765f43db1d4e\") " pod="openshift-monitoring/alertmanager-main-0" Mar 13 12:59:03.769623 master-0 kubenswrapper[28149]: I0313 12:59:03.769574 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-alertmanager-main-tls\" (UniqueName: \"kubernetes.io/secret/51320e4b-751c-4bd4-8a4d-765f43db1d4e-secret-alertmanager-main-tls\") pod \"alertmanager-main-0\" (UID: \"51320e4b-751c-4bd4-8a4d-765f43db1d4e\") " pod="openshift-monitoring/alertmanager-main-0" Mar 13 12:59:03.769623 master-0 kubenswrapper[28149]: I0313 12:59:03.769603 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-alertmanager-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/51320e4b-751c-4bd4-8a4d-765f43db1d4e-secret-alertmanager-kube-rbac-proxy\") pod \"alertmanager-main-0\" (UID: \"51320e4b-751c-4bd4-8a4d-765f43db1d4e\") " pod="openshift-monitoring/alertmanager-main-0" Mar 13 12:59:03.769955 master-0 kubenswrapper[28149]: I0313 12:59:03.769677 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"alertmanager-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/51320e4b-751c-4bd4-8a4d-765f43db1d4e-alertmanager-trusted-ca-bundle\") pod \"alertmanager-main-0\" (UID: \"51320e4b-751c-4bd4-8a4d-765f43db1d4e\") " pod="openshift-monitoring/alertmanager-main-0" Mar 13 12:59:03.770041 master-0 kubenswrapper[28149]: I0313 12:59:03.769958 28149 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/51320e4b-751c-4bd4-8a4d-765f43db1d4e-tls-assets\") pod \"alertmanager-main-0\" (UID: \"51320e4b-751c-4bd4-8a4d-765f43db1d4e\") " pod="openshift-monitoring/alertmanager-main-0" Mar 13 12:59:03.770735 master-0 kubenswrapper[28149]: I0313 12:59:03.770685 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/51320e4b-751c-4bd4-8a4d-765f43db1d4e-web-config\") pod \"alertmanager-main-0\" (UID: \"51320e4b-751c-4bd4-8a4d-765f43db1d4e\") " pod="openshift-monitoring/alertmanager-main-0" Mar 13 12:59:03.773970 master-0 kubenswrapper[28149]: I0313 12:59:03.772553 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/51320e4b-751c-4bd4-8a4d-765f43db1d4e-config-volume\") pod \"alertmanager-main-0\" (UID: \"51320e4b-751c-4bd4-8a4d-765f43db1d4e\") " pod="openshift-monitoring/alertmanager-main-0" Mar 13 12:59:03.773970 master-0 kubenswrapper[28149]: I0313 12:59:03.772797 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-alertmanager-kube-rbac-proxy-metric\" (UniqueName: \"kubernetes.io/secret/51320e4b-751c-4bd4-8a4d-765f43db1d4e-secret-alertmanager-kube-rbac-proxy-metric\") pod \"alertmanager-main-0\" (UID: \"51320e4b-751c-4bd4-8a4d-765f43db1d4e\") " pod="openshift-monitoring/alertmanager-main-0" Mar 13 12:59:03.780726 master-0 kubenswrapper[28149]: I0313 12:59:03.780693 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/51320e4b-751c-4bd4-8a4d-765f43db1d4e-config-out\") pod \"alertmanager-main-0\" (UID: \"51320e4b-751c-4bd4-8a4d-765f43db1d4e\") " pod="openshift-monitoring/alertmanager-main-0" Mar 13 12:59:03.780945 master-0 kubenswrapper[28149]: I0313 12:59:03.780905 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"secret-alertmanager-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/51320e4b-751c-4bd4-8a4d-765f43db1d4e-secret-alertmanager-kube-rbac-proxy\") pod \"alertmanager-main-0\" (UID: \"51320e4b-751c-4bd4-8a4d-765f43db1d4e\") " pod="openshift-monitoring/alertmanager-main-0" Mar 13 12:59:03.781129 master-0 kubenswrapper[28149]: I0313 12:59:03.781094 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-alertmanager-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/51320e4b-751c-4bd4-8a4d-765f43db1d4e-secret-alertmanager-kube-rbac-proxy-web\") pod \"alertmanager-main-0\" (UID: \"51320e4b-751c-4bd4-8a4d-765f43db1d4e\") " pod="openshift-monitoring/alertmanager-main-0" Mar 13 12:59:03.783844 master-0 kubenswrapper[28149]: I0313 12:59:03.783823 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-alertmanager-main-tls\" (UniqueName: \"kubernetes.io/secret/51320e4b-751c-4bd4-8a4d-765f43db1d4e-secret-alertmanager-main-tls\") pod \"alertmanager-main-0\" (UID: \"51320e4b-751c-4bd4-8a4d-765f43db1d4e\") " pod="openshift-monitoring/alertmanager-main-0" Mar 13 12:59:03.795211 master-0 kubenswrapper[28149]: I0313 12:59:03.792957 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wf82z\" (UniqueName: \"kubernetes.io/projected/51320e4b-751c-4bd4-8a4d-765f43db1d4e-kube-api-access-wf82z\") pod \"alertmanager-main-0\" (UID: \"51320e4b-751c-4bd4-8a4d-765f43db1d4e\") " pod="openshift-monitoring/alertmanager-main-0" Mar 13 12:59:04.077436 master-0 kubenswrapper[28149]: I0313 12:59:04.075839 28149 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/alertmanager-main-0" Mar 13 12:59:04.108349 master-0 kubenswrapper[28149]: I0313 12:59:04.103234 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6b77fb6479-77kfk" event={"ID":"774a19fc-e49b-48fa-ada0-294ef034d30a","Type":"ContainerStarted","Data":"ec69255bc584d0e5bf953dfaede483f2691de9641aa582aa420b928d291dc028"} Mar 13 12:59:04.108349 master-0 kubenswrapper[28149]: I0313 12:59:04.104447 28149 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-6b77fb6479-77kfk" Mar 13 12:59:04.124617 master-0 kubenswrapper[28149]: I0313 12:59:04.124575 28149 generic.go:334] "Generic (PLEG): container finished" podID="4e6b4a1c-92b3-4b72-9f4b-1fd7814125b5" containerID="11577e2fa27649ff57c812aa1e5f415f0df82cf38672121d4fd2104f2ddb2c74" exitCode=0 Mar 13 12:59:04.124617 master-0 kubenswrapper[28149]: I0313 12:59:04.124610 28149 generic.go:334] "Generic (PLEG): container finished" podID="4e6b4a1c-92b3-4b72-9f4b-1fd7814125b5" containerID="f4d159b63ea08ed149a2147a2b0c14196fd73509b5490955c8649e50998058a8" exitCode=0 Mar 13 12:59:04.124617 master-0 kubenswrapper[28149]: I0313 12:59:04.124618 28149 generic.go:334] "Generic (PLEG): container finished" podID="4e6b4a1c-92b3-4b72-9f4b-1fd7814125b5" containerID="8c257806ee764144b97286097976faa619d91e8aaf5fd5e5f387ea61f8a61b3d" exitCode=0 Mar 13 12:59:04.124617 master-0 kubenswrapper[28149]: I0313 12:59:04.124625 28149 generic.go:334] "Generic (PLEG): container finished" podID="4e6b4a1c-92b3-4b72-9f4b-1fd7814125b5" containerID="45c005ee02d97cdd3fe7139faccbd052b06c1eec52072769631e675eb071ba1c" exitCode=0 Mar 13 12:59:04.124853 master-0 kubenswrapper[28149]: I0313 12:59:04.124632 28149 generic.go:334] "Generic (PLEG): container finished" podID="4e6b4a1c-92b3-4b72-9f4b-1fd7814125b5" containerID="4a5a6846548f318a60129c0ddbfaffa9fe75d590cd409fdca3fd562ed975c4a4" exitCode=0 Mar 
13 12:59:04.124853 master-0 kubenswrapper[28149]: I0313 12:59:04.124638 28149 generic.go:334] "Generic (PLEG): container finished" podID="4e6b4a1c-92b3-4b72-9f4b-1fd7814125b5" containerID="979968ca203da3fdb2f2540df77bd6489021a783195cb39241718daa02ab6141" exitCode=0 Mar 13 12:59:04.124853 master-0 kubenswrapper[28149]: I0313 12:59:04.124676 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"4e6b4a1c-92b3-4b72-9f4b-1fd7814125b5","Type":"ContainerDied","Data":"11577e2fa27649ff57c812aa1e5f415f0df82cf38672121d4fd2104f2ddb2c74"} Mar 13 12:59:04.124853 master-0 kubenswrapper[28149]: I0313 12:59:04.124704 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"4e6b4a1c-92b3-4b72-9f4b-1fd7814125b5","Type":"ContainerDied","Data":"f4d159b63ea08ed149a2147a2b0c14196fd73509b5490955c8649e50998058a8"} Mar 13 12:59:04.124853 master-0 kubenswrapper[28149]: I0313 12:59:04.124716 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"4e6b4a1c-92b3-4b72-9f4b-1fd7814125b5","Type":"ContainerDied","Data":"8c257806ee764144b97286097976faa619d91e8aaf5fd5e5f387ea61f8a61b3d"} Mar 13 12:59:04.124853 master-0 kubenswrapper[28149]: I0313 12:59:04.124738 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"4e6b4a1c-92b3-4b72-9f4b-1fd7814125b5","Type":"ContainerDied","Data":"45c005ee02d97cdd3fe7139faccbd052b06c1eec52072769631e675eb071ba1c"} Mar 13 12:59:04.124853 master-0 kubenswrapper[28149]: I0313 12:59:04.124750 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"4e6b4a1c-92b3-4b72-9f4b-1fd7814125b5","Type":"ContainerDied","Data":"4a5a6846548f318a60129c0ddbfaffa9fe75d590cd409fdca3fd562ed975c4a4"} Mar 13 12:59:04.124853 master-0 kubenswrapper[28149]: I0313 12:59:04.124759 28149 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"4e6b4a1c-92b3-4b72-9f4b-1fd7814125b5","Type":"ContainerDied","Data":"979968ca203da3fdb2f2540df77bd6489021a783195cb39241718daa02ab6141"} Mar 13 12:59:04.130420 master-0 kubenswrapper[28149]: I0313 12:59:04.126311 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-5cbd49d755-mpvnf" event={"ID":"cddccab4-91b6-4bcf-a5d3-fd8014dffda6","Type":"ContainerStarted","Data":"2e971663292859aa185cbdfab9fdc386cacda85bf10ae39839b744a594426289"} Mar 13 12:59:04.130420 master-0 kubenswrapper[28149]: I0313 12:59:04.127806 28149 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-6b77fb6479-77kfk" Mar 13 12:59:04.136100 master-0 kubenswrapper[28149]: I0313 12:59:04.135895 28149 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-6b77fb6479-77kfk" podStartSLOduration=39.135868085 podStartE2EDuration="39.135868085s" podCreationTimestamp="2026-03-13 12:58:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 12:59:04.130700697 +0000 UTC m=+317.784165856" watchObservedRunningTime="2026-03-13 12:59:04.135868085 +0000 UTC m=+317.789333264" Mar 13 12:59:04.203835 master-0 kubenswrapper[28149]: I0313 12:59:04.203772 28149 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-network-console/networking-console-plugin-5cbd49d755-mpvnf" podStartSLOduration=18.111238038 podStartE2EDuration="27.203740838s" podCreationTimestamp="2026-03-13 12:58:37 +0000 UTC" firstStartedPulling="2026-03-13 12:58:53.786604254 +0000 UTC m=+307.440069423" lastFinishedPulling="2026-03-13 12:59:02.879107064 +0000 UTC m=+316.532572223" observedRunningTime="2026-03-13 12:59:04.193465264 +0000 UTC m=+317.846930423" 
watchObservedRunningTime="2026-03-13 12:59:04.203740838 +0000 UTC m=+317.857206007" Mar 13 12:59:04.473410 master-0 kubenswrapper[28149]: E0313 12:59:04.473255 28149 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 979968ca203da3fdb2f2540df77bd6489021a783195cb39241718daa02ab6141 is running failed: container process not found" containerID="979968ca203da3fdb2f2540df77bd6489021a783195cb39241718daa02ab6141" cmd=["sh","-c","if [ -x \"$(command -v curl)\" ]; then exec curl --fail http://localhost:9090/-/ready; elif [ -x \"$(command -v wget)\" ]; then exec wget -q -O /dev/null http://localhost:9090/-/ready; else exit 1; fi"] Mar 13 12:59:04.475263 master-0 kubenswrapper[28149]: E0313 12:59:04.475220 28149 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 979968ca203da3fdb2f2540df77bd6489021a783195cb39241718daa02ab6141 is running failed: container process not found" containerID="979968ca203da3fdb2f2540df77bd6489021a783195cb39241718daa02ab6141" cmd=["sh","-c","if [ -x \"$(command -v curl)\" ]; then exec curl --fail http://localhost:9090/-/ready; elif [ -x \"$(command -v wget)\" ]; then exec wget -q -O /dev/null http://localhost:9090/-/ready; else exit 1; fi"] Mar 13 12:59:04.476508 master-0 kubenswrapper[28149]: E0313 12:59:04.476471 28149 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 979968ca203da3fdb2f2540df77bd6489021a783195cb39241718daa02ab6141 is running failed: container process not found" containerID="979968ca203da3fdb2f2540df77bd6489021a783195cb39241718daa02ab6141" cmd=["sh","-c","if [ -x \"$(command -v curl)\" ]; then exec curl --fail http://localhost:9090/-/ready; elif [ -x \"$(command -v wget)\" ]; then exec wget -q -O /dev/null http://localhost:9090/-/ready; else 
exit 1; fi"] Mar 13 12:59:04.476668 master-0 kubenswrapper[28149]: E0313 12:59:04.476639 28149 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 979968ca203da3fdb2f2540df77bd6489021a783195cb39241718daa02ab6141 is running failed: container process not found" probeType="Readiness" pod="openshift-monitoring/prometheus-k8s-0" podUID="4e6b4a1c-92b3-4b72-9f4b-1fd7814125b5" containerName="prometheus" Mar 13 12:59:04.491095 master-0 kubenswrapper[28149]: I0313 12:59:04.485874 28149 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-675489948b-wtbzr" podUID="3f8a5f1b-3890-40cb-9c51-72d9b40142de" containerName="console" containerID="cri-o://64b6f3a87fabc7ac034f8baffba41d793ff43b949fb9a245536d3be2fe4fe012" gracePeriod=15 Mar 13 12:59:05.141003 master-0 kubenswrapper[28149]: I0313 12:59:05.140953 28149 patch_prober.go:28] interesting pod/console-575b4c697b-kjnzx container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.128.0.102:8443/health\": dial tcp 10.128.0.102:8443: connect: connection refused" start-of-body= Mar 13 12:59:05.141530 master-0 kubenswrapper[28149]: I0313 12:59:05.141019 28149 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-575b4c697b-kjnzx" podUID="3c426507-418f-4258-bef6-4206640beb3d" containerName="console" probeResult="failure" output="Get \"https://10.128.0.102:8443/health\": dial tcp 10.128.0.102:8443: connect: connection refused" Mar 13 12:59:05.163319 master-0 kubenswrapper[28149]: I0313 12:59:05.153720 28149 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="18ffa620-dacc-4b09-be04-2c325f860813" path="/var/lib/kubelet/pods/18ffa620-dacc-4b09-be04-2c325f860813/volumes" Mar 13 12:59:05.163319 master-0 kubenswrapper[28149]: I0313 12:59:05.155722 28149 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="2f3bb9a1-578c-424d-8610-272c76bf0a31" path="/var/lib/kubelet/pods/2f3bb9a1-578c-424d-8610-272c76bf0a31/volumes" Mar 13 12:59:05.163319 master-0 kubenswrapper[28149]: I0313 12:59:05.158874 28149 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a454234a-6c8e-4916-81e8-c9e66cec9d31" path="/var/lib/kubelet/pods/a454234a-6c8e-4916-81e8-c9e66cec9d31/volumes" Mar 13 12:59:05.163319 master-0 kubenswrapper[28149]: I0313 12:59:05.159737 28149 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ff344520-bb09-4f16-82be-273378ab0663" path="/var/lib/kubelet/pods/ff344520-bb09-4f16-82be-273378ab0663/volumes" Mar 13 12:59:05.164856 master-0 kubenswrapper[28149]: I0313 12:59:05.164825 28149 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-575b4c697b-kjnzx" Mar 13 12:59:05.164949 master-0 kubenswrapper[28149]: I0313 12:59:05.164868 28149 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-575b4c697b-kjnzx" Mar 13 12:59:05.164949 master-0 kubenswrapper[28149]: I0313 12:59:05.164909 28149 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/downloads-84f57b9877-x5rrn" Mar 13 12:59:05.177057 master-0 kubenswrapper[28149]: I0313 12:59:05.177007 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-7bcd698648-pfnwr" event={"ID":"0cc3e582-adba-4520-aa5a-c999333ae0fc","Type":"ContainerStarted","Data":"3c9b9e7637020b529749be0b9cf6bc5abf4ffe725a207f78f03e6ad9e9959002"} Mar 13 12:59:05.180896 master-0 kubenswrapper[28149]: I0313 12:59:05.179689 28149 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-7bcd698648-pfnwr" Mar 13 12:59:05.180896 master-0 kubenswrapper[28149]: I0313 12:59:05.180277 28149 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-console_console-675489948b-wtbzr_3f8a5f1b-3890-40cb-9c51-72d9b40142de/console/0.log" Mar 13 12:59:05.180896 master-0 kubenswrapper[28149]: I0313 12:59:05.180304 28149 generic.go:334] "Generic (PLEG): container finished" podID="3f8a5f1b-3890-40cb-9c51-72d9b40142de" containerID="64b6f3a87fabc7ac034f8baffba41d793ff43b949fb9a245536d3be2fe4fe012" exitCode=2 Mar 13 12:59:05.180896 master-0 kubenswrapper[28149]: I0313 12:59:05.180396 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-675489948b-wtbzr" event={"ID":"3f8a5f1b-3890-40cb-9c51-72d9b40142de","Type":"ContainerDied","Data":"64b6f3a87fabc7ac034f8baffba41d793ff43b949fb9a245536d3be2fe4fe012"} Mar 13 12:59:05.423343 master-0 kubenswrapper[28149]: I0313 12:59:05.376033 28149 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-7bcd698648-pfnwr" podStartSLOduration=56.375997972 podStartE2EDuration="56.375997972s" podCreationTimestamp="2026-03-13 12:58:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 12:59:05.331801552 +0000 UTC m=+318.985266711" watchObservedRunningTime="2026-03-13 12:59:05.375997972 +0000 UTC m=+319.029463131" Mar 13 12:59:05.759180 master-0 kubenswrapper[28149]: I0313 12:59:05.757331 28149 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/prometheus-k8s-0" Mar 13 12:59:05.842549 master-0 kubenswrapper[28149]: I0313 12:59:05.836390 28149 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/alertmanager-main-0"] Mar 13 12:59:05.901870 master-0 kubenswrapper[28149]: I0313 12:59:05.900424 28149 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-prometheus-k8s-tls\" (UniqueName: \"kubernetes.io/secret/4e6b4a1c-92b3-4b72-9f4b-1fd7814125b5-secret-prometheus-k8s-tls\") pod \"4e6b4a1c-92b3-4b72-9f4b-1fd7814125b5\" (UID: \"4e6b4a1c-92b3-4b72-9f4b-1fd7814125b5\") " Mar 13 12:59:05.901870 master-0 kubenswrapper[28149]: I0313 12:59:05.900484 28149 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/4e6b4a1c-92b3-4b72-9f4b-1fd7814125b5-secret-kube-rbac-proxy\") pod \"4e6b4a1c-92b3-4b72-9f4b-1fd7814125b5\" (UID: \"4e6b4a1c-92b3-4b72-9f4b-1fd7814125b5\") " Mar 13 12:59:05.901870 master-0 kubenswrapper[28149]: I0313 12:59:05.900505 28149 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4e6b4a1c-92b3-4b72-9f4b-1fd7814125b5-configmap-kubelet-serving-ca-bundle\") pod \"4e6b4a1c-92b3-4b72-9f4b-1fd7814125b5\" (UID: \"4e6b4a1c-92b3-4b72-9f4b-1fd7814125b5\") " Mar 13 12:59:05.901870 master-0 kubenswrapper[28149]: I0313 12:59:05.900530 28149 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/4e6b4a1c-92b3-4b72-9f4b-1fd7814125b5-secret-grpc-tls\") pod \"4e6b4a1c-92b3-4b72-9f4b-1fd7814125b5\" (UID: \"4e6b4a1c-92b3-4b72-9f4b-1fd7814125b5\") " Mar 13 12:59:05.901870 master-0 kubenswrapper[28149]: I0313 12:59:05.900549 28149 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-metrics-client-certs\" (UniqueName: 
\"kubernetes.io/secret/4e6b4a1c-92b3-4b72-9f4b-1fd7814125b5-secret-metrics-client-certs\") pod \"4e6b4a1c-92b3-4b72-9f4b-1fd7814125b5\" (UID: \"4e6b4a1c-92b3-4b72-9f4b-1fd7814125b5\") "
Mar 13 12:59:05.901870 master-0 kubenswrapper[28149]: I0313 12:59:05.900576 28149 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-k8s-db\" (UniqueName: \"kubernetes.io/empty-dir/4e6b4a1c-92b3-4b72-9f4b-1fd7814125b5-prometheus-k8s-db\") pod \"4e6b4a1c-92b3-4b72-9f4b-1fd7814125b5\" (UID: \"4e6b4a1c-92b3-4b72-9f4b-1fd7814125b5\") "
Mar 13 12:59:05.901870 master-0 kubenswrapper[28149]: I0313 12:59:05.900597 28149 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/4e6b4a1c-92b3-4b72-9f4b-1fd7814125b5-tls-assets\") pod \"4e6b4a1c-92b3-4b72-9f4b-1fd7814125b5\" (UID: \"4e6b4a1c-92b3-4b72-9f4b-1fd7814125b5\") "
Mar 13 12:59:05.901870 master-0 kubenswrapper[28149]: I0313 12:59:05.900611 28149 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/4e6b4a1c-92b3-4b72-9f4b-1fd7814125b5-thanos-prometheus-http-client-file\") pod \"4e6b4a1c-92b3-4b72-9f4b-1fd7814125b5\" (UID: \"4e6b4a1c-92b3-4b72-9f4b-1fd7814125b5\") "
Mar 13 12:59:05.901870 master-0 kubenswrapper[28149]: I0313 12:59:05.900654 28149 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/4e6b4a1c-92b3-4b72-9f4b-1fd7814125b5-config-out\") pod \"4e6b4a1c-92b3-4b72-9f4b-1fd7814125b5\" (UID: \"4e6b4a1c-92b3-4b72-9f4b-1fd7814125b5\") "
Mar 13 12:59:05.901870 master-0 kubenswrapper[28149]: I0313 12:59:05.900685 28149 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"configmap-serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4e6b4a1c-92b3-4b72-9f4b-1fd7814125b5-configmap-serving-certs-ca-bundle\") pod \"4e6b4a1c-92b3-4b72-9f4b-1fd7814125b5\" (UID: \"4e6b4a1c-92b3-4b72-9f4b-1fd7814125b5\") "
Mar 13 12:59:05.901870 master-0 kubenswrapper[28149]: I0313 12:59:05.900709 28149 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/4e6b4a1c-92b3-4b72-9f4b-1fd7814125b5-web-config\") pod \"4e6b4a1c-92b3-4b72-9f4b-1fd7814125b5\" (UID: \"4e6b4a1c-92b3-4b72-9f4b-1fd7814125b5\") "
Mar 13 12:59:05.901870 master-0 kubenswrapper[28149]: I0313 12:59:05.900730 28149 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-prometheus-k8s-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/4e6b4a1c-92b3-4b72-9f4b-1fd7814125b5-secret-prometheus-k8s-kube-rbac-proxy-web\") pod \"4e6b4a1c-92b3-4b72-9f4b-1fd7814125b5\" (UID: \"4e6b4a1c-92b3-4b72-9f4b-1fd7814125b5\") "
Mar 13 12:59:05.901870 master-0 kubenswrapper[28149]: I0313 12:59:05.900750 28149 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-k8s-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/4e6b4a1c-92b3-4b72-9f4b-1fd7814125b5-prometheus-k8s-rulefiles-0\") pod \"4e6b4a1c-92b3-4b72-9f4b-1fd7814125b5\" (UID: \"4e6b4a1c-92b3-4b72-9f4b-1fd7814125b5\") "
Mar 13 12:59:05.901870 master-0 kubenswrapper[28149]: I0313 12:59:05.900784 28149 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-prometheus-k8s-thanos-sidecar-tls\" (UniqueName: \"kubernetes.io/secret/4e6b4a1c-92b3-4b72-9f4b-1fd7814125b5-secret-prometheus-k8s-thanos-sidecar-tls\") pod \"4e6b4a1c-92b3-4b72-9f4b-1fd7814125b5\" (UID: \"4e6b4a1c-92b3-4b72-9f4b-1fd7814125b5\") "
Mar 13 12:59:05.901870 master-0 kubenswrapper[28149]: I0313 12:59:05.900807 28149 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4e6b4a1c-92b3-4b72-9f4b-1fd7814125b5-prometheus-trusted-ca-bundle\") pod \"4e6b4a1c-92b3-4b72-9f4b-1fd7814125b5\" (UID: \"4e6b4a1c-92b3-4b72-9f4b-1fd7814125b5\") "
Mar 13 12:59:05.901870 master-0 kubenswrapper[28149]: I0313 12:59:05.900828 28149 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"configmap-metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/4e6b4a1c-92b3-4b72-9f4b-1fd7814125b5-configmap-metrics-client-ca\") pod \"4e6b4a1c-92b3-4b72-9f4b-1fd7814125b5\" (UID: \"4e6b4a1c-92b3-4b72-9f4b-1fd7814125b5\") "
Mar 13 12:59:05.901870 master-0 kubenswrapper[28149]: I0313 12:59:05.900864 28149 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/4e6b4a1c-92b3-4b72-9f4b-1fd7814125b5-config\") pod \"4e6b4a1c-92b3-4b72-9f4b-1fd7814125b5\" (UID: \"4e6b4a1c-92b3-4b72-9f4b-1fd7814125b5\") "
Mar 13 12:59:05.901870 master-0 kubenswrapper[28149]: I0313 12:59:05.900882 28149 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zqsmk\" (UniqueName: \"kubernetes.io/projected/4e6b4a1c-92b3-4b72-9f4b-1fd7814125b5-kube-api-access-zqsmk\") pod \"4e6b4a1c-92b3-4b72-9f4b-1fd7814125b5\" (UID: \"4e6b4a1c-92b3-4b72-9f4b-1fd7814125b5\") "
Mar 13 12:59:05.903232 master-0 kubenswrapper[28149]: I0313 12:59:05.902986 28149 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4e6b4a1c-92b3-4b72-9f4b-1fd7814125b5-configmap-metrics-client-ca" (OuterVolumeSpecName: "configmap-metrics-client-ca") pod "4e6b4a1c-92b3-4b72-9f4b-1fd7814125b5" (UID: "4e6b4a1c-92b3-4b72-9f4b-1fd7814125b5"). InnerVolumeSpecName "configmap-metrics-client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 13 12:59:05.903667 master-0 kubenswrapper[28149]: I0313 12:59:05.903440 28149 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4e6b4a1c-92b3-4b72-9f4b-1fd7814125b5-configmap-serving-certs-ca-bundle" (OuterVolumeSpecName: "configmap-serving-certs-ca-bundle") pod "4e6b4a1c-92b3-4b72-9f4b-1fd7814125b5" (UID: "4e6b4a1c-92b3-4b72-9f4b-1fd7814125b5"). InnerVolumeSpecName "configmap-serving-certs-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 13 12:59:05.920194 master-0 kubenswrapper[28149]: I0313 12:59:05.908763 28149 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4e6b4a1c-92b3-4b72-9f4b-1fd7814125b5-prometheus-k8s-rulefiles-0" (OuterVolumeSpecName: "prometheus-k8s-rulefiles-0") pod "4e6b4a1c-92b3-4b72-9f4b-1fd7814125b5" (UID: "4e6b4a1c-92b3-4b72-9f4b-1fd7814125b5"). InnerVolumeSpecName "prometheus-k8s-rulefiles-0". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 13 12:59:05.920194 master-0 kubenswrapper[28149]: I0313 12:59:05.910294 28149 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4e6b4a1c-92b3-4b72-9f4b-1fd7814125b5-prometheus-trusted-ca-bundle" (OuterVolumeSpecName: "prometheus-trusted-ca-bundle") pod "4e6b4a1c-92b3-4b72-9f4b-1fd7814125b5" (UID: "4e6b4a1c-92b3-4b72-9f4b-1fd7814125b5"). InnerVolumeSpecName "prometheus-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 13 12:59:05.920194 master-0 kubenswrapper[28149]: I0313 12:59:05.912444 28149 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4e6b4a1c-92b3-4b72-9f4b-1fd7814125b5-configmap-kubelet-serving-ca-bundle" (OuterVolumeSpecName: "configmap-kubelet-serving-ca-bundle") pod "4e6b4a1c-92b3-4b72-9f4b-1fd7814125b5" (UID: "4e6b4a1c-92b3-4b72-9f4b-1fd7814125b5"). InnerVolumeSpecName "configmap-kubelet-serving-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 13 12:59:05.920194 master-0 kubenswrapper[28149]: I0313 12:59:05.915819 28149 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4e6b4a1c-92b3-4b72-9f4b-1fd7814125b5-prometheus-k8s-db" (OuterVolumeSpecName: "prometheus-k8s-db") pod "4e6b4a1c-92b3-4b72-9f4b-1fd7814125b5" (UID: "4e6b4a1c-92b3-4b72-9f4b-1fd7814125b5"). InnerVolumeSpecName "prometheus-k8s-db". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 13 12:59:05.920194 master-0 kubenswrapper[28149]: I0313 12:59:05.918333 28149 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4e6b4a1c-92b3-4b72-9f4b-1fd7814125b5-tls-assets" (OuterVolumeSpecName: "tls-assets") pod "4e6b4a1c-92b3-4b72-9f4b-1fd7814125b5" (UID: "4e6b4a1c-92b3-4b72-9f4b-1fd7814125b5"). InnerVolumeSpecName "tls-assets". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 13 12:59:05.927448 master-0 kubenswrapper[28149]: I0313 12:59:05.921229 28149 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4e6b4a1c-92b3-4b72-9f4b-1fd7814125b5-config-out" (OuterVolumeSpecName: "config-out") pod "4e6b4a1c-92b3-4b72-9f4b-1fd7814125b5" (UID: "4e6b4a1c-92b3-4b72-9f4b-1fd7814125b5"). InnerVolumeSpecName "config-out". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 13 12:59:05.927448 master-0 kubenswrapper[28149]: I0313 12:59:05.925534 28149 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4e6b4a1c-92b3-4b72-9f4b-1fd7814125b5-secret-prometheus-k8s-thanos-sidecar-tls" (OuterVolumeSpecName: "secret-prometheus-k8s-thanos-sidecar-tls") pod "4e6b4a1c-92b3-4b72-9f4b-1fd7814125b5" (UID: "4e6b4a1c-92b3-4b72-9f4b-1fd7814125b5"). InnerVolumeSpecName "secret-prometheus-k8s-thanos-sidecar-tls". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 13 12:59:05.927448 master-0 kubenswrapper[28149]: I0313 12:59:05.925680 28149 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4e6b4a1c-92b3-4b72-9f4b-1fd7814125b5-secret-grpc-tls" (OuterVolumeSpecName: "secret-grpc-tls") pod "4e6b4a1c-92b3-4b72-9f4b-1fd7814125b5" (UID: "4e6b4a1c-92b3-4b72-9f4b-1fd7814125b5"). InnerVolumeSpecName "secret-grpc-tls". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 13 12:59:05.927448 master-0 kubenswrapper[28149]: I0313 12:59:05.927368 28149 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4e6b4a1c-92b3-4b72-9f4b-1fd7814125b5-secret-prometheus-k8s-kube-rbac-proxy-web" (OuterVolumeSpecName: "secret-prometheus-k8s-kube-rbac-proxy-web") pod "4e6b4a1c-92b3-4b72-9f4b-1fd7814125b5" (UID: "4e6b4a1c-92b3-4b72-9f4b-1fd7814125b5"). InnerVolumeSpecName "secret-prometheus-k8s-kube-rbac-proxy-web". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 13 12:59:05.943723 master-0 kubenswrapper[28149]: I0313 12:59:05.932759 28149 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4e6b4a1c-92b3-4b72-9f4b-1fd7814125b5-secret-kube-rbac-proxy" (OuterVolumeSpecName: "secret-kube-rbac-proxy") pod "4e6b4a1c-92b3-4b72-9f4b-1fd7814125b5" (UID: "4e6b4a1c-92b3-4b72-9f4b-1fd7814125b5"). InnerVolumeSpecName "secret-kube-rbac-proxy". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 13 12:59:05.943723 master-0 kubenswrapper[28149]: I0313 12:59:05.935276 28149 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4e6b4a1c-92b3-4b72-9f4b-1fd7814125b5-secret-metrics-client-certs" (OuterVolumeSpecName: "secret-metrics-client-certs") pod "4e6b4a1c-92b3-4b72-9f4b-1fd7814125b5" (UID: "4e6b4a1c-92b3-4b72-9f4b-1fd7814125b5"). InnerVolumeSpecName "secret-metrics-client-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 13 12:59:05.943723 master-0 kubenswrapper[28149]: I0313 12:59:05.936372 28149 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4e6b4a1c-92b3-4b72-9f4b-1fd7814125b5-secret-prometheus-k8s-tls" (OuterVolumeSpecName: "secret-prometheus-k8s-tls") pod "4e6b4a1c-92b3-4b72-9f4b-1fd7814125b5" (UID: "4e6b4a1c-92b3-4b72-9f4b-1fd7814125b5"). InnerVolumeSpecName "secret-prometheus-k8s-tls". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 13 12:59:05.943723 master-0 kubenswrapper[28149]: I0313 12:59:05.936479 28149 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4e6b4a1c-92b3-4b72-9f4b-1fd7814125b5-kube-api-access-zqsmk" (OuterVolumeSpecName: "kube-api-access-zqsmk") pod "4e6b4a1c-92b3-4b72-9f4b-1fd7814125b5" (UID: "4e6b4a1c-92b3-4b72-9f4b-1fd7814125b5"). InnerVolumeSpecName "kube-api-access-zqsmk". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 13 12:59:05.943723 master-0 kubenswrapper[28149]: I0313 12:59:05.936683 28149 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4e6b4a1c-92b3-4b72-9f4b-1fd7814125b5-thanos-prometheus-http-client-file" (OuterVolumeSpecName: "thanos-prometheus-http-client-file") pod "4e6b4a1c-92b3-4b72-9f4b-1fd7814125b5" (UID: "4e6b4a1c-92b3-4b72-9f4b-1fd7814125b5"). InnerVolumeSpecName "thanos-prometheus-http-client-file". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 13 12:59:05.944304 master-0 kubenswrapper[28149]: I0313 12:59:05.944251 28149 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4e6b4a1c-92b3-4b72-9f4b-1fd7814125b5-config" (OuterVolumeSpecName: "config") pod "4e6b4a1c-92b3-4b72-9f4b-1fd7814125b5" (UID: "4e6b4a1c-92b3-4b72-9f4b-1fd7814125b5"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 13 12:59:05.996559 master-0 kubenswrapper[28149]: I0313 12:59:05.996435 28149 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4e6b4a1c-92b3-4b72-9f4b-1fd7814125b5-web-config" (OuterVolumeSpecName: "web-config") pod "4e6b4a1c-92b3-4b72-9f4b-1fd7814125b5" (UID: "4e6b4a1c-92b3-4b72-9f4b-1fd7814125b5"). InnerVolumeSpecName "web-config". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 13 12:59:06.008228 master-0 kubenswrapper[28149]: I0313 12:59:06.007324 28149 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-7bcd698648-pfnwr"
Mar 13 12:59:06.010185 master-0 kubenswrapper[28149]: I0313 12:59:06.009031 28149 reconciler_common.go:293] "Volume detached for volume \"secret-prometheus-k8s-tls\" (UniqueName: \"kubernetes.io/secret/4e6b4a1c-92b3-4b72-9f4b-1fd7814125b5-secret-prometheus-k8s-tls\") on node \"master-0\" DevicePath \"\""
Mar 13 12:59:06.010185 master-0 kubenswrapper[28149]: I0313 12:59:06.009075 28149 reconciler_common.go:293] "Volume detached for volume \"secret-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/4e6b4a1c-92b3-4b72-9f4b-1fd7814125b5-secret-kube-rbac-proxy\") on node \"master-0\" DevicePath \"\""
Mar 13 12:59:06.010185 master-0 kubenswrapper[28149]: I0313 12:59:06.009091 28149 reconciler_common.go:293] "Volume detached for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4e6b4a1c-92b3-4b72-9f4b-1fd7814125b5-configmap-kubelet-serving-ca-bundle\") on node \"master-0\" DevicePath \"\""
Mar 13 12:59:06.010185 master-0 kubenswrapper[28149]: I0313 12:59:06.009107 28149 reconciler_common.go:293] "Volume detached for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/4e6b4a1c-92b3-4b72-9f4b-1fd7814125b5-secret-grpc-tls\") on node \"master-0\" DevicePath \"\""
Mar 13 12:59:06.010185 master-0 kubenswrapper[28149]: I0313 12:59:06.009121 28149 reconciler_common.go:293] "Volume detached for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/4e6b4a1c-92b3-4b72-9f4b-1fd7814125b5-secret-metrics-client-certs\") on node \"master-0\" DevicePath \"\""
Mar 13 12:59:06.010185 master-0 kubenswrapper[28149]: I0313 12:59:06.009157 28149 reconciler_common.go:293] "Volume detached for volume \"prometheus-k8s-db\" (UniqueName: \"kubernetes.io/empty-dir/4e6b4a1c-92b3-4b72-9f4b-1fd7814125b5-prometheus-k8s-db\") on node \"master-0\" DevicePath \"\""
Mar 13 12:59:06.010185 master-0 kubenswrapper[28149]: I0313 12:59:06.009171 28149 reconciler_common.go:293] "Volume detached for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/4e6b4a1c-92b3-4b72-9f4b-1fd7814125b5-tls-assets\") on node \"master-0\" DevicePath \"\""
Mar 13 12:59:06.010185 master-0 kubenswrapper[28149]: I0313 12:59:06.009181 28149 reconciler_common.go:293] "Volume detached for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/4e6b4a1c-92b3-4b72-9f4b-1fd7814125b5-thanos-prometheus-http-client-file\") on node \"master-0\" DevicePath \"\""
Mar 13 12:59:06.010185 master-0 kubenswrapper[28149]: I0313 12:59:06.009191 28149 reconciler_common.go:293] "Volume detached for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/4e6b4a1c-92b3-4b72-9f4b-1fd7814125b5-config-out\") on node \"master-0\" DevicePath \"\""
Mar 13 12:59:06.010185 master-0 kubenswrapper[28149]: I0313 12:59:06.009203 28149 reconciler_common.go:293] "Volume detached for volume \"configmap-serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4e6b4a1c-92b3-4b72-9f4b-1fd7814125b5-configmap-serving-certs-ca-bundle\") on node \"master-0\" DevicePath \"\""
Mar 13 12:59:06.010185 master-0 kubenswrapper[28149]: I0313 12:59:06.009228 28149 reconciler_common.go:293] "Volume detached for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/4e6b4a1c-92b3-4b72-9f4b-1fd7814125b5-web-config\") on node \"master-0\" DevicePath \"\""
Mar 13 12:59:06.010185 master-0 kubenswrapper[28149]: I0313 12:59:06.009240 28149 reconciler_common.go:293] "Volume detached for volume \"secret-prometheus-k8s-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/4e6b4a1c-92b3-4b72-9f4b-1fd7814125b5-secret-prometheus-k8s-kube-rbac-proxy-web\") on node \"master-0\" DevicePath \"\""
Mar 13 12:59:06.010185 master-0 kubenswrapper[28149]: I0313 12:59:06.009253 28149 reconciler_common.go:293] "Volume detached for volume \"prometheus-k8s-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/4e6b4a1c-92b3-4b72-9f4b-1fd7814125b5-prometheus-k8s-rulefiles-0\") on node \"master-0\" DevicePath \"\""
Mar 13 12:59:06.010185 master-0 kubenswrapper[28149]: I0313 12:59:06.009263 28149 reconciler_common.go:293] "Volume detached for volume \"secret-prometheus-k8s-thanos-sidecar-tls\" (UniqueName: \"kubernetes.io/secret/4e6b4a1c-92b3-4b72-9f4b-1fd7814125b5-secret-prometheus-k8s-thanos-sidecar-tls\") on node \"master-0\" DevicePath \"\""
Mar 13 12:59:06.010185 master-0 kubenswrapper[28149]: I0313 12:59:06.009284 28149 reconciler_common.go:293] "Volume detached for volume \"prometheus-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4e6b4a1c-92b3-4b72-9f4b-1fd7814125b5-prometheus-trusted-ca-bundle\") on node \"master-0\" DevicePath \"\""
Mar 13 12:59:06.010185 master-0 kubenswrapper[28149]: I0313 12:59:06.009295 28149 reconciler_common.go:293] "Volume detached for volume \"configmap-metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/4e6b4a1c-92b3-4b72-9f4b-1fd7814125b5-configmap-metrics-client-ca\") on node \"master-0\" DevicePath \"\""
Mar 13 12:59:06.010185 master-0 kubenswrapper[28149]: I0313 12:59:06.009306 28149 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/4e6b4a1c-92b3-4b72-9f4b-1fd7814125b5-config\") on node \"master-0\" DevicePath \"\""
Mar 13 12:59:06.010185 master-0 kubenswrapper[28149]: I0313 12:59:06.009315 28149 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zqsmk\" (UniqueName: \"kubernetes.io/projected/4e6b4a1c-92b3-4b72-9f4b-1fd7814125b5-kube-api-access-zqsmk\") on node \"master-0\" DevicePath \"\""
Mar 13 12:59:06.116648 master-0 kubenswrapper[28149]: I0313 12:59:06.113568 28149 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-675489948b-wtbzr_3f8a5f1b-3890-40cb-9c51-72d9b40142de/console/0.log"
Mar 13 12:59:06.116648 master-0 kubenswrapper[28149]: I0313 12:59:06.113667 28149 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-675489948b-wtbzr"
Mar 13 12:59:06.198236 master-0 kubenswrapper[28149]: I0313 12:59:06.198192 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"51320e4b-751c-4bd4-8a4d-765f43db1d4e","Type":"ContainerStarted","Data":"50bd644119c99f0626e86bebacd8239290237423a0cc5d1caf39cd3190c6c158"}
Mar 13 12:59:06.198236 master-0 kubenswrapper[28149]: I0313 12:59:06.198238 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"51320e4b-751c-4bd4-8a4d-765f43db1d4e","Type":"ContainerStarted","Data":"c408cca47bf2252db12719bbca27f1249b9ba519473bd6bc7f3fd46721721efc"}
Mar 13 12:59:06.207681 master-0 kubenswrapper[28149]: I0313 12:59:06.207621 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"4e6b4a1c-92b3-4b72-9f4b-1fd7814125b5","Type":"ContainerDied","Data":"25e9965d9cd9530fd6c5da4faa88c4c90076637d46295cad4ad34912005995a9"}
Mar 13 12:59:06.207681 master-0 kubenswrapper[28149]: I0313 12:59:06.207668 28149 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/prometheus-k8s-0"
Mar 13 12:59:06.207837 master-0 kubenswrapper[28149]: I0313 12:59:06.207712 28149 scope.go:117] "RemoveContainer" containerID="11577e2fa27649ff57c812aa1e5f415f0df82cf38672121d4fd2104f2ddb2c74"
Mar 13 12:59:06.209872 master-0 kubenswrapper[28149]: I0313 12:59:06.209832 28149 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-675489948b-wtbzr_3f8a5f1b-3890-40cb-9c51-72d9b40142de/console/0.log"
Mar 13 12:59:06.210216 master-0 kubenswrapper[28149]: I0313 12:59:06.210174 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-675489948b-wtbzr" event={"ID":"3f8a5f1b-3890-40cb-9c51-72d9b40142de","Type":"ContainerDied","Data":"b29acac6f7c6baac69f815e3eb7b78ced1d1992717ad6bf43b68f0f6df3ee3f8"}
Mar 13 12:59:06.210273 master-0 kubenswrapper[28149]: I0313 12:59:06.210258 28149 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-675489948b-wtbzr"
Mar 13 12:59:06.211503 master-0 kubenswrapper[28149]: I0313 12:59:06.211453 28149 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/3f8a5f1b-3890-40cb-9c51-72d9b40142de-console-config\") pod \"3f8a5f1b-3890-40cb-9c51-72d9b40142de\" (UID: \"3f8a5f1b-3890-40cb-9c51-72d9b40142de\") "
Mar 13 12:59:06.211570 master-0 kubenswrapper[28149]: I0313 12:59:06.211548 28149 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/3f8a5f1b-3890-40cb-9c51-72d9b40142de-oauth-serving-cert\") pod \"3f8a5f1b-3890-40cb-9c51-72d9b40142de\" (UID: \"3f8a5f1b-3890-40cb-9c51-72d9b40142de\") "
Mar 13 12:59:06.211682 master-0 kubenswrapper[28149]: I0313 12:59:06.211638 28149 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/3f8a5f1b-3890-40cb-9c51-72d9b40142de-service-ca\") pod \"3f8a5f1b-3890-40cb-9c51-72d9b40142de\" (UID: \"3f8a5f1b-3890-40cb-9c51-72d9b40142de\") "
Mar 13 12:59:06.211745 master-0 kubenswrapper[28149]: I0313 12:59:06.211718 28149 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/3f8a5f1b-3890-40cb-9c51-72d9b40142de-console-serving-cert\") pod \"3f8a5f1b-3890-40cb-9c51-72d9b40142de\" (UID: \"3f8a5f1b-3890-40cb-9c51-72d9b40142de\") "
Mar 13 12:59:06.211801 master-0 kubenswrapper[28149]: I0313 12:59:06.211780 28149 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/3f8a5f1b-3890-40cb-9c51-72d9b40142de-console-oauth-config\") pod \"3f8a5f1b-3890-40cb-9c51-72d9b40142de\" (UID: \"3f8a5f1b-3890-40cb-9c51-72d9b40142de\") "
Mar 13 12:59:06.211834 master-0 kubenswrapper[28149]: I0313 12:59:06.211819 28149 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3f8a5f1b-3890-40cb-9c51-72d9b40142de-trusted-ca-bundle\") pod \"3f8a5f1b-3890-40cb-9c51-72d9b40142de\" (UID: \"3f8a5f1b-3890-40cb-9c51-72d9b40142de\") "
Mar 13 12:59:06.212505 master-0 kubenswrapper[28149]: I0313 12:59:06.211897 28149 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2h4jl\" (UniqueName: \"kubernetes.io/projected/3f8a5f1b-3890-40cb-9c51-72d9b40142de-kube-api-access-2h4jl\") pod \"3f8a5f1b-3890-40cb-9c51-72d9b40142de\" (UID: \"3f8a5f1b-3890-40cb-9c51-72d9b40142de\") "
Mar 13 12:59:06.212731 master-0 kubenswrapper[28149]: I0313 12:59:06.212457 28149 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3f8a5f1b-3890-40cb-9c51-72d9b40142de-service-ca" (OuterVolumeSpecName: "service-ca") pod "3f8a5f1b-3890-40cb-9c51-72d9b40142de" (UID: "3f8a5f1b-3890-40cb-9c51-72d9b40142de"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 13 12:59:06.213089 master-0 kubenswrapper[28149]: I0313 12:59:06.213049 28149 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3f8a5f1b-3890-40cb-9c51-72d9b40142de-console-config" (OuterVolumeSpecName: "console-config") pod "3f8a5f1b-3890-40cb-9c51-72d9b40142de" (UID: "3f8a5f1b-3890-40cb-9c51-72d9b40142de"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 13 12:59:06.213556 master-0 kubenswrapper[28149]: I0313 12:59:06.213521 28149 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3f8a5f1b-3890-40cb-9c51-72d9b40142de-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "3f8a5f1b-3890-40cb-9c51-72d9b40142de" (UID: "3f8a5f1b-3890-40cb-9c51-72d9b40142de"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 13 12:59:06.218050 master-0 kubenswrapper[28149]: I0313 12:59:06.215383 28149 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/3f8a5f1b-3890-40cb-9c51-72d9b40142de-console-config\") on node \"master-0\" DevicePath \"\""
Mar 13 12:59:06.218050 master-0 kubenswrapper[28149]: I0313 12:59:06.215409 28149 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/3f8a5f1b-3890-40cb-9c51-72d9b40142de-oauth-serving-cert\") on node \"master-0\" DevicePath \"\""
Mar 13 12:59:06.218050 master-0 kubenswrapper[28149]: I0313 12:59:06.215420 28149 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/3f8a5f1b-3890-40cb-9c51-72d9b40142de-service-ca\") on node \"master-0\" DevicePath \"\""
Mar 13 12:59:06.218050 master-0 kubenswrapper[28149]: I0313 12:59:06.215937 28149 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3f8a5f1b-3890-40cb-9c51-72d9b40142de-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "3f8a5f1b-3890-40cb-9c51-72d9b40142de" (UID: "3f8a5f1b-3890-40cb-9c51-72d9b40142de"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 13 12:59:06.220899 master-0 kubenswrapper[28149]: I0313 12:59:06.220371 28149 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3f8a5f1b-3890-40cb-9c51-72d9b40142de-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "3f8a5f1b-3890-40cb-9c51-72d9b40142de" (UID: "3f8a5f1b-3890-40cb-9c51-72d9b40142de"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 13 12:59:06.220899 master-0 kubenswrapper[28149]: I0313 12:59:06.220575 28149 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3f8a5f1b-3890-40cb-9c51-72d9b40142de-kube-api-access-2h4jl" (OuterVolumeSpecName: "kube-api-access-2h4jl") pod "3f8a5f1b-3890-40cb-9c51-72d9b40142de" (UID: "3f8a5f1b-3890-40cb-9c51-72d9b40142de"). InnerVolumeSpecName "kube-api-access-2h4jl". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 13 12:59:06.229386 master-0 kubenswrapper[28149]: I0313 12:59:06.229320 28149 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3f8a5f1b-3890-40cb-9c51-72d9b40142de-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "3f8a5f1b-3890-40cb-9c51-72d9b40142de" (UID: "3f8a5f1b-3890-40cb-9c51-72d9b40142de"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 13 12:59:06.244928 master-0 kubenswrapper[28149]: I0313 12:59:06.244882 28149 scope.go:117] "RemoveContainer" containerID="f4d159b63ea08ed149a2147a2b0c14196fd73509b5490955c8649e50998058a8"
Mar 13 12:59:06.287781 master-0 kubenswrapper[28149]: I0313 12:59:06.287568 28149 scope.go:117] "RemoveContainer" containerID="8c257806ee764144b97286097976faa619d91e8aaf5fd5e5f387ea61f8a61b3d"
Mar 13 12:59:06.300785 master-0 kubenswrapper[28149]: I0313 12:59:06.300731 28149 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-monitoring/prometheus-k8s-0"]
Mar 13 12:59:06.314051 master-0 kubenswrapper[28149]: I0313 12:59:06.313904 28149 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-monitoring/prometheus-k8s-0"]
Mar 13 12:59:06.317769 master-0 kubenswrapper[28149]: I0313 12:59:06.317714 28149 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/3f8a5f1b-3890-40cb-9c51-72d9b40142de-console-serving-cert\") on node \"master-0\" DevicePath \"\""
Mar 13 12:59:06.317769 master-0 kubenswrapper[28149]: I0313 12:59:06.317768 28149 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/3f8a5f1b-3890-40cb-9c51-72d9b40142de-console-oauth-config\") on node \"master-0\" DevicePath \"\""
Mar 13 12:59:06.317971 master-0 kubenswrapper[28149]: I0313 12:59:06.317781 28149 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3f8a5f1b-3890-40cb-9c51-72d9b40142de-trusted-ca-bundle\") on node \"master-0\" DevicePath \"\""
Mar 13 12:59:06.317971 master-0 kubenswrapper[28149]: I0313 12:59:06.317794 28149 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2h4jl\" (UniqueName: \"kubernetes.io/projected/3f8a5f1b-3890-40cb-9c51-72d9b40142de-kube-api-access-2h4jl\") on node \"master-0\" DevicePath \"\""
Mar 13 12:59:06.337023 master-0 kubenswrapper[28149]: I0313 12:59:06.336991 28149 scope.go:117] "RemoveContainer" containerID="45c005ee02d97cdd3fe7139faccbd052b06c1eec52072769631e675eb071ba1c"
Mar 13 12:59:06.340516 master-0 kubenswrapper[28149]: I0313 12:59:06.340461 28149 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/prometheus-k8s-0"]
Mar 13 12:59:06.340793 master-0 kubenswrapper[28149]: E0313 12:59:06.340760 28149 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4e6b4a1c-92b3-4b72-9f4b-1fd7814125b5" containerName="kube-rbac-proxy-thanos"
Mar 13 12:59:06.340793 master-0 kubenswrapper[28149]: I0313 12:59:06.340779 28149 state_mem.go:107] "Deleted CPUSet assignment" podUID="4e6b4a1c-92b3-4b72-9f4b-1fd7814125b5" containerName="kube-rbac-proxy-thanos"
Mar 13 12:59:06.340931 master-0 kubenswrapper[28149]: E0313 12:59:06.340801 28149 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4e6b4a1c-92b3-4b72-9f4b-1fd7814125b5" containerName="thanos-sidecar"
Mar 13 12:59:06.340931 master-0 kubenswrapper[28149]: I0313 12:59:06.340807 28149 state_mem.go:107] "Deleted CPUSet assignment" podUID="4e6b4a1c-92b3-4b72-9f4b-1fd7814125b5" containerName="thanos-sidecar"
Mar 13 12:59:06.340931 master-0 kubenswrapper[28149]: E0313 12:59:06.340825 28149 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4e6b4a1c-92b3-4b72-9f4b-1fd7814125b5" containerName="init-config-reloader"
Mar 13 12:59:06.340931 master-0 kubenswrapper[28149]: I0313 12:59:06.340831 28149 state_mem.go:107] "Deleted CPUSet assignment" podUID="4e6b4a1c-92b3-4b72-9f4b-1fd7814125b5" containerName="init-config-reloader"
Mar 13 12:59:06.340931 master-0 kubenswrapper[28149]: E0313 12:59:06.340847 28149 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4e6b4a1c-92b3-4b72-9f4b-1fd7814125b5" containerName="config-reloader"
Mar 13 12:59:06.340931 master-0 kubenswrapper[28149]: I0313 12:59:06.340853 28149 state_mem.go:107] "Deleted CPUSet assignment" podUID="4e6b4a1c-92b3-4b72-9f4b-1fd7814125b5" containerName="config-reloader"
Mar 13 12:59:06.340931 master-0 kubenswrapper[28149]: E0313 12:59:06.340862 28149 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4e6b4a1c-92b3-4b72-9f4b-1fd7814125b5" containerName="kube-rbac-proxy"
Mar 13 12:59:06.340931 master-0 kubenswrapper[28149]: I0313 12:59:06.340868 28149 state_mem.go:107] "Deleted CPUSet assignment" podUID="4e6b4a1c-92b3-4b72-9f4b-1fd7814125b5" containerName="kube-rbac-proxy"
Mar 13 12:59:06.340931 master-0 kubenswrapper[28149]: E0313 12:59:06.340878 28149 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4e6b4a1c-92b3-4b72-9f4b-1fd7814125b5" containerName="kube-rbac-proxy-web"
Mar 13 12:59:06.340931 master-0 kubenswrapper[28149]: I0313 12:59:06.340884 28149 state_mem.go:107] "Deleted CPUSet assignment" podUID="4e6b4a1c-92b3-4b72-9f4b-1fd7814125b5" containerName="kube-rbac-proxy-web"
Mar 13 12:59:06.340931 master-0 kubenswrapper[28149]: E0313 12:59:06.340892 28149 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3f8a5f1b-3890-40cb-9c51-72d9b40142de" containerName="console"
Mar 13 12:59:06.340931 master-0 kubenswrapper[28149]: I0313 12:59:06.340897 28149 state_mem.go:107] "Deleted CPUSet assignment" podUID="3f8a5f1b-3890-40cb-9c51-72d9b40142de" containerName="console"
Mar 13 12:59:06.340931 master-0 kubenswrapper[28149]: E0313 12:59:06.340908 28149 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4e6b4a1c-92b3-4b72-9f4b-1fd7814125b5" containerName="prometheus"
Mar 13 12:59:06.340931 master-0 kubenswrapper[28149]: I0313 12:59:06.340914 28149 state_mem.go:107] "Deleted CPUSet assignment" podUID="4e6b4a1c-92b3-4b72-9f4b-1fd7814125b5" containerName="prometheus"
Mar 13 12:59:06.341488 master-0 kubenswrapper[28149]: I0313 12:59:06.341030 28149 memory_manager.go:354] "RemoveStaleState removing state" podUID="4e6b4a1c-92b3-4b72-9f4b-1fd7814125b5" containerName="prometheus"
Mar 13 12:59:06.341488 master-0 kubenswrapper[28149]: I0313 12:59:06.341038 28149 memory_manager.go:354] "RemoveStaleState removing state" podUID="4e6b4a1c-92b3-4b72-9f4b-1fd7814125b5" containerName="thanos-sidecar"
Mar 13 12:59:06.341488 master-0 kubenswrapper[28149]: I0313 12:59:06.341053 28149 memory_manager.go:354] "RemoveStaleState removing state" podUID="4e6b4a1c-92b3-4b72-9f4b-1fd7814125b5" containerName="config-reloader"
Mar 13 12:59:06.341488 master-0 kubenswrapper[28149]: I0313 12:59:06.341061 28149 memory_manager.go:354] "RemoveStaleState removing state" podUID="3f8a5f1b-3890-40cb-9c51-72d9b40142de" containerName="console"
Mar 13 12:59:06.341488 master-0 kubenswrapper[28149]: I0313 12:59:06.341080 28149 memory_manager.go:354] "RemoveStaleState removing state" podUID="4e6b4a1c-92b3-4b72-9f4b-1fd7814125b5" containerName="kube-rbac-proxy-web"
Mar 13 12:59:06.341488 master-0 kubenswrapper[28149]: I0313 12:59:06.341090 28149 memory_manager.go:354] "RemoveStaleState removing state" podUID="4e6b4a1c-92b3-4b72-9f4b-1fd7814125b5" containerName="kube-rbac-proxy-thanos"
Mar 13 12:59:06.341488 master-0 kubenswrapper[28149]: I0313 12:59:06.341100 28149 memory_manager.go:354] "RemoveStaleState removing state" podUID="4e6b4a1c-92b3-4b72-9f4b-1fd7814125b5" containerName="kube-rbac-proxy"
Mar 13 12:59:06.343532 master-0 kubenswrapper[28149]: I0313 12:59:06.343498 28149 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/prometheus-k8s-0"
Mar 13 12:59:06.349041 master-0 kubenswrapper[28149]: I0313 12:59:06.348981 28149 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-kube-rbac-proxy-web"
Mar 13 12:59:06.349239 master-0 kubenswrapper[28149]: I0313 12:59:06.349170 28149 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"kube-rbac-proxy"
Mar 13 12:59:06.349553 master-0 kubenswrapper[28149]: I0313 12:59:06.349516 28149 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-thanos-prometheus-http-client-file"
Mar 13 12:59:06.350267 master-0 kubenswrapper[28149]: I0313 12:59:06.350128 28149 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"serving-certs-ca-bundle"
Mar 13 12:59:06.350540 master-0 kubenswrapper[28149]: I0313 12:59:06.350510 28149 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-web-config"
Mar 13 12:59:06.351829 master-0 kubenswrapper[28149]: I0313 12:59:06.351799 28149 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-tls-assets-0"
Mar 13 12:59:06.353085 master-0 kubenswrapper[28149]: I0313 12:59:06.352185 28149 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-grpc-tls-6lsjijes9o2i6"
Mar 13 12:59:06.356128 master-0 kubenswrapper[28149]: I0313 12:59:06.356085 28149 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-dockercfg-htccs"
Mar 13 12:59:06.356269 master-0 kubenswrapper[28149]: I0313 12:59:06.356177 28149 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s"
Mar 13 12:59:06.356337 master-0 kubenswrapper[28149]: I0313 12:59:06.356279 28149 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-monitoring"/"prometheus-k8s-tls" Mar 13 12:59:06.356337 master-0 kubenswrapper[28149]: I0313 12:59:06.356108 28149 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-thanos-sidecar-tls" Mar 13 12:59:06.359459 master-0 kubenswrapper[28149]: I0313 12:59:06.359414 28149 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"prometheus-trusted-ca-bundle" Mar 13 12:59:06.360501 master-0 kubenswrapper[28149]: I0313 12:59:06.360452 28149 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"prometheus-k8s-rulefiles-0" Mar 13 12:59:06.374976 master-0 kubenswrapper[28149]: I0313 12:59:06.374722 28149 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/prometheus-k8s-0"] Mar 13 12:59:06.384255 master-0 kubenswrapper[28149]: I0313 12:59:06.384046 28149 scope.go:117] "RemoveContainer" containerID="4a5a6846548f318a60129c0ddbfaffa9fe75d590cd409fdca3fd562ed975c4a4" Mar 13 12:59:06.398905 master-0 kubenswrapper[28149]: I0313 12:59:06.398875 28149 scope.go:117] "RemoveContainer" containerID="979968ca203da3fdb2f2540df77bd6489021a783195cb39241718daa02ab6141" Mar 13 12:59:06.419192 master-0 kubenswrapper[28149]: I0313 12:59:06.419118 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/90f7e4ab-cd47-4a66-95f1-f6fa11ea282e-config-out\") pod \"prometheus-k8s-0\" (UID: \"90f7e4ab-cd47-4a66-95f1-f6fa11ea282e\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 13 12:59:06.419384 master-0 kubenswrapper[28149]: I0313 12:59:06.419199 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"configmap-metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/90f7e4ab-cd47-4a66-95f1-f6fa11ea282e-configmap-metrics-client-ca\") pod \"prometheus-k8s-0\" (UID: 
\"90f7e4ab-cd47-4a66-95f1-f6fa11ea282e\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 13 12:59:06.419384 master-0 kubenswrapper[28149]: I0313 12:59:06.419238 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/90f7e4ab-cd47-4a66-95f1-f6fa11ea282e-thanos-prometheus-http-client-file\") pod \"prometheus-k8s-0\" (UID: \"90f7e4ab-cd47-4a66-95f1-f6fa11ea282e\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 13 12:59:06.419384 master-0 kubenswrapper[28149]: I0313 12:59:06.419268 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-prometheus-k8s-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/90f7e4ab-cd47-4a66-95f1-f6fa11ea282e-secret-prometheus-k8s-kube-rbac-proxy-web\") pod \"prometheus-k8s-0\" (UID: \"90f7e4ab-cd47-4a66-95f1-f6fa11ea282e\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 13 12:59:06.419384 master-0 kubenswrapper[28149]: I0313 12:59:06.419312 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-prometheus-k8s-thanos-sidecar-tls\" (UniqueName: \"kubernetes.io/secret/90f7e4ab-cd47-4a66-95f1-f6fa11ea282e-secret-prometheus-k8s-thanos-sidecar-tls\") pod \"prometheus-k8s-0\" (UID: \"90f7e4ab-cd47-4a66-95f1-f6fa11ea282e\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 13 12:59:06.419384 master-0 kubenswrapper[28149]: I0313 12:59:06.419340 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7gns8\" (UniqueName: \"kubernetes.io/projected/90f7e4ab-cd47-4a66-95f1-f6fa11ea282e-kube-api-access-7gns8\") pod \"prometheus-k8s-0\" (UID: \"90f7e4ab-cd47-4a66-95f1-f6fa11ea282e\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 13 12:59:06.419384 master-0 kubenswrapper[28149]: I0313 12:59:06.419372 28149 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/90f7e4ab-cd47-4a66-95f1-f6fa11ea282e-prometheus-trusted-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"90f7e4ab-cd47-4a66-95f1-f6fa11ea282e\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 13 12:59:06.419750 master-0 kubenswrapper[28149]: I0313 12:59:06.419413 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/90f7e4ab-cd47-4a66-95f1-f6fa11ea282e-tls-assets\") pod \"prometheus-k8s-0\" (UID: \"90f7e4ab-cd47-4a66-95f1-f6fa11ea282e\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 13 12:59:06.419750 master-0 kubenswrapper[28149]: I0313 12:59:06.419443 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/90f7e4ab-cd47-4a66-95f1-f6fa11ea282e-web-config\") pod \"prometheus-k8s-0\" (UID: \"90f7e4ab-cd47-4a66-95f1-f6fa11ea282e\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 13 12:59:06.419750 master-0 kubenswrapper[28149]: I0313 12:59:06.419484 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"configmap-serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/90f7e4ab-cd47-4a66-95f1-f6fa11ea282e-configmap-serving-certs-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"90f7e4ab-cd47-4a66-95f1-f6fa11ea282e\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 13 12:59:06.419750 master-0 kubenswrapper[28149]: I0313 12:59:06.419512 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-k8s-db\" (UniqueName: \"kubernetes.io/empty-dir/90f7e4ab-cd47-4a66-95f1-f6fa11ea282e-prometheus-k8s-db\") pod \"prometheus-k8s-0\" (UID: \"90f7e4ab-cd47-4a66-95f1-f6fa11ea282e\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 13 
12:59:06.419750 master-0 kubenswrapper[28149]: I0313 12:59:06.419541 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-prometheus-k8s-tls\" (UniqueName: \"kubernetes.io/secret/90f7e4ab-cd47-4a66-95f1-f6fa11ea282e-secret-prometheus-k8s-tls\") pod \"prometheus-k8s-0\" (UID: \"90f7e4ab-cd47-4a66-95f1-f6fa11ea282e\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 13 12:59:06.419750 master-0 kubenswrapper[28149]: I0313 12:59:06.419564 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/90f7e4ab-cd47-4a66-95f1-f6fa11ea282e-secret-grpc-tls\") pod \"prometheus-k8s-0\" (UID: \"90f7e4ab-cd47-4a66-95f1-f6fa11ea282e\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 13 12:59:06.419750 master-0 kubenswrapper[28149]: I0313 12:59:06.419602 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/90f7e4ab-cd47-4a66-95f1-f6fa11ea282e-secret-metrics-client-certs\") pod \"prometheus-k8s-0\" (UID: \"90f7e4ab-cd47-4a66-95f1-f6fa11ea282e\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 13 12:59:06.419750 master-0 kubenswrapper[28149]: I0313 12:59:06.419637 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/90f7e4ab-cd47-4a66-95f1-f6fa11ea282e-secret-kube-rbac-proxy\") pod \"prometheus-k8s-0\" (UID: \"90f7e4ab-cd47-4a66-95f1-f6fa11ea282e\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 13 12:59:06.419750 master-0 kubenswrapper[28149]: I0313 12:59:06.419670 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/90f7e4ab-cd47-4a66-95f1-f6fa11ea282e-configmap-kubelet-serving-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"90f7e4ab-cd47-4a66-95f1-f6fa11ea282e\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 13 12:59:06.419750 master-0 kubenswrapper[28149]: I0313 12:59:06.419703 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/90f7e4ab-cd47-4a66-95f1-f6fa11ea282e-config\") pod \"prometheus-k8s-0\" (UID: \"90f7e4ab-cd47-4a66-95f1-f6fa11ea282e\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 13 12:59:06.419750 master-0 kubenswrapper[28149]: I0313 12:59:06.419740 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-k8s-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/90f7e4ab-cd47-4a66-95f1-f6fa11ea282e-prometheus-k8s-rulefiles-0\") pod \"prometheus-k8s-0\" (UID: \"90f7e4ab-cd47-4a66-95f1-f6fa11ea282e\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 13 12:59:06.422712 master-0 kubenswrapper[28149]: I0313 12:59:06.422682 28149 scope.go:117] "RemoveContainer" containerID="00f686895227a890c5dc3f1b90164c1cae4d30c4ee1ff237e7887f18ad31bb8d" Mar 13 12:59:06.443505 master-0 kubenswrapper[28149]: I0313 12:59:06.443334 28149 scope.go:117] "RemoveContainer" containerID="64b6f3a87fabc7ac034f8baffba41d793ff43b949fb9a245536d3be2fe4fe012" Mar 13 12:59:06.521630 master-0 kubenswrapper[28149]: I0313 12:59:06.521377 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/90f7e4ab-cd47-4a66-95f1-f6fa11ea282e-secret-metrics-client-certs\") pod \"prometheus-k8s-0\" (UID: \"90f7e4ab-cd47-4a66-95f1-f6fa11ea282e\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 13 12:59:06.521630 master-0 kubenswrapper[28149]: I0313 12:59:06.521448 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"secret-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/90f7e4ab-cd47-4a66-95f1-f6fa11ea282e-secret-kube-rbac-proxy\") pod \"prometheus-k8s-0\" (UID: \"90f7e4ab-cd47-4a66-95f1-f6fa11ea282e\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 13 12:59:06.521630 master-0 kubenswrapper[28149]: I0313 12:59:06.521483 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/90f7e4ab-cd47-4a66-95f1-f6fa11ea282e-configmap-kubelet-serving-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"90f7e4ab-cd47-4a66-95f1-f6fa11ea282e\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 13 12:59:06.521630 master-0 kubenswrapper[28149]: I0313 12:59:06.521511 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/90f7e4ab-cd47-4a66-95f1-f6fa11ea282e-config\") pod \"prometheus-k8s-0\" (UID: \"90f7e4ab-cd47-4a66-95f1-f6fa11ea282e\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 13 12:59:06.521630 master-0 kubenswrapper[28149]: I0313 12:59:06.521542 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-k8s-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/90f7e4ab-cd47-4a66-95f1-f6fa11ea282e-prometheus-k8s-rulefiles-0\") pod \"prometheus-k8s-0\" (UID: \"90f7e4ab-cd47-4a66-95f1-f6fa11ea282e\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 13 12:59:06.521630 master-0 kubenswrapper[28149]: I0313 12:59:06.521570 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/90f7e4ab-cd47-4a66-95f1-f6fa11ea282e-config-out\") pod \"prometheus-k8s-0\" (UID: \"90f7e4ab-cd47-4a66-95f1-f6fa11ea282e\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 13 12:59:06.522386 master-0 kubenswrapper[28149]: I0313 12:59:06.522341 28149 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"configmap-metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/90f7e4ab-cd47-4a66-95f1-f6fa11ea282e-configmap-metrics-client-ca\") pod \"prometheus-k8s-0\" (UID: \"90f7e4ab-cd47-4a66-95f1-f6fa11ea282e\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 13 12:59:06.522459 master-0 kubenswrapper[28149]: I0313 12:59:06.522425 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/90f7e4ab-cd47-4a66-95f1-f6fa11ea282e-thanos-prometheus-http-client-file\") pod \"prometheus-k8s-0\" (UID: \"90f7e4ab-cd47-4a66-95f1-f6fa11ea282e\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 13 12:59:06.522496 master-0 kubenswrapper[28149]: I0313 12:59:06.522465 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-prometheus-k8s-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/90f7e4ab-cd47-4a66-95f1-f6fa11ea282e-secret-prometheus-k8s-kube-rbac-proxy-web\") pod \"prometheus-k8s-0\" (UID: \"90f7e4ab-cd47-4a66-95f1-f6fa11ea282e\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 13 12:59:06.522564 master-0 kubenswrapper[28149]: I0313 12:59:06.522541 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-prometheus-k8s-thanos-sidecar-tls\" (UniqueName: \"kubernetes.io/secret/90f7e4ab-cd47-4a66-95f1-f6fa11ea282e-secret-prometheus-k8s-thanos-sidecar-tls\") pod \"prometheus-k8s-0\" (UID: \"90f7e4ab-cd47-4a66-95f1-f6fa11ea282e\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 13 12:59:06.522596 master-0 kubenswrapper[28149]: I0313 12:59:06.522586 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7gns8\" (UniqueName: \"kubernetes.io/projected/90f7e4ab-cd47-4a66-95f1-f6fa11ea282e-kube-api-access-7gns8\") pod \"prometheus-k8s-0\" (UID: \"90f7e4ab-cd47-4a66-95f1-f6fa11ea282e\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 13 
12:59:06.522680 master-0 kubenswrapper[28149]: I0313 12:59:06.522660 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/90f7e4ab-cd47-4a66-95f1-f6fa11ea282e-prometheus-trusted-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"90f7e4ab-cd47-4a66-95f1-f6fa11ea282e\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 13 12:59:06.522782 master-0 kubenswrapper[28149]: I0313 12:59:06.522760 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/90f7e4ab-cd47-4a66-95f1-f6fa11ea282e-tls-assets\") pod \"prometheus-k8s-0\" (UID: \"90f7e4ab-cd47-4a66-95f1-f6fa11ea282e\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 13 12:59:06.522825 master-0 kubenswrapper[28149]: I0313 12:59:06.522810 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/90f7e4ab-cd47-4a66-95f1-f6fa11ea282e-web-config\") pod \"prometheus-k8s-0\" (UID: \"90f7e4ab-cd47-4a66-95f1-f6fa11ea282e\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 13 12:59:06.522917 master-0 kubenswrapper[28149]: I0313 12:59:06.522894 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"configmap-serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/90f7e4ab-cd47-4a66-95f1-f6fa11ea282e-configmap-serving-certs-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"90f7e4ab-cd47-4a66-95f1-f6fa11ea282e\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 13 12:59:06.522957 master-0 kubenswrapper[28149]: I0313 12:59:06.522943 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-k8s-db\" (UniqueName: \"kubernetes.io/empty-dir/90f7e4ab-cd47-4a66-95f1-f6fa11ea282e-prometheus-k8s-db\") pod \"prometheus-k8s-0\" (UID: \"90f7e4ab-cd47-4a66-95f1-f6fa11ea282e\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 
13 12:59:06.522991 master-0 kubenswrapper[28149]: I0313 12:59:06.522981 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-prometheus-k8s-tls\" (UniqueName: \"kubernetes.io/secret/90f7e4ab-cd47-4a66-95f1-f6fa11ea282e-secret-prometheus-k8s-tls\") pod \"prometheus-k8s-0\" (UID: \"90f7e4ab-cd47-4a66-95f1-f6fa11ea282e\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 13 12:59:06.523035 master-0 kubenswrapper[28149]: I0313 12:59:06.523016 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/90f7e4ab-cd47-4a66-95f1-f6fa11ea282e-secret-grpc-tls\") pod \"prometheus-k8s-0\" (UID: \"90f7e4ab-cd47-4a66-95f1-f6fa11ea282e\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 13 12:59:06.523277 master-0 kubenswrapper[28149]: I0313 12:59:06.523253 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/90f7e4ab-cd47-4a66-95f1-f6fa11ea282e-configmap-kubelet-serving-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"90f7e4ab-cd47-4a66-95f1-f6fa11ea282e\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 13 12:59:06.523739 master-0 kubenswrapper[28149]: I0313 12:59:06.523717 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"configmap-metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/90f7e4ab-cd47-4a66-95f1-f6fa11ea282e-configmap-metrics-client-ca\") pod \"prometheus-k8s-0\" (UID: \"90f7e4ab-cd47-4a66-95f1-f6fa11ea282e\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 13 12:59:06.524238 master-0 kubenswrapper[28149]: I0313 12:59:06.524199 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"configmap-serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/90f7e4ab-cd47-4a66-95f1-f6fa11ea282e-configmap-serving-certs-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"90f7e4ab-cd47-4a66-95f1-f6fa11ea282e\") 
" pod="openshift-monitoring/prometheus-k8s-0" Mar 13 12:59:06.525019 master-0 kubenswrapper[28149]: I0313 12:59:06.524989 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/90f7e4ab-cd47-4a66-95f1-f6fa11ea282e-secret-metrics-client-certs\") pod \"prometheus-k8s-0\" (UID: \"90f7e4ab-cd47-4a66-95f1-f6fa11ea282e\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 13 12:59:06.525591 master-0 kubenswrapper[28149]: I0313 12:59:06.525512 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/90f7e4ab-cd47-4a66-95f1-f6fa11ea282e-config-out\") pod \"prometheus-k8s-0\" (UID: \"90f7e4ab-cd47-4a66-95f1-f6fa11ea282e\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 13 12:59:06.525859 master-0 kubenswrapper[28149]: I0313 12:59:06.525799 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-k8s-db\" (UniqueName: \"kubernetes.io/empty-dir/90f7e4ab-cd47-4a66-95f1-f6fa11ea282e-prometheus-k8s-db\") pod \"prometheus-k8s-0\" (UID: \"90f7e4ab-cd47-4a66-95f1-f6fa11ea282e\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 13 12:59:06.525954 master-0 kubenswrapper[28149]: I0313 12:59:06.525905 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/90f7e4ab-cd47-4a66-95f1-f6fa11ea282e-thanos-prometheus-http-client-file\") pod \"prometheus-k8s-0\" (UID: \"90f7e4ab-cd47-4a66-95f1-f6fa11ea282e\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 13 12:59:06.527414 master-0 kubenswrapper[28149]: I0313 12:59:06.527357 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/90f7e4ab-cd47-4a66-95f1-f6fa11ea282e-prometheus-trusted-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"90f7e4ab-cd47-4a66-95f1-f6fa11ea282e\") " 
pod="openshift-monitoring/prometheus-k8s-0" Mar 13 12:59:06.528399 master-0 kubenswrapper[28149]: I0313 12:59:06.528341 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/90f7e4ab-cd47-4a66-95f1-f6fa11ea282e-secret-kube-rbac-proxy\") pod \"prometheus-k8s-0\" (UID: \"90f7e4ab-cd47-4a66-95f1-f6fa11ea282e\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 13 12:59:06.528911 master-0 kubenswrapper[28149]: I0313 12:59:06.528884 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-prometheus-k8s-tls\" (UniqueName: \"kubernetes.io/secret/90f7e4ab-cd47-4a66-95f1-f6fa11ea282e-secret-prometheus-k8s-tls\") pod \"prometheus-k8s-0\" (UID: \"90f7e4ab-cd47-4a66-95f1-f6fa11ea282e\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 13 12:59:06.530508 master-0 kubenswrapper[28149]: I0313 12:59:06.529870 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-prometheus-k8s-thanos-sidecar-tls\" (UniqueName: \"kubernetes.io/secret/90f7e4ab-cd47-4a66-95f1-f6fa11ea282e-secret-prometheus-k8s-thanos-sidecar-tls\") pod \"prometheus-k8s-0\" (UID: \"90f7e4ab-cd47-4a66-95f1-f6fa11ea282e\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 13 12:59:06.530594 master-0 kubenswrapper[28149]: I0313 12:59:06.530503 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-prometheus-k8s-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/90f7e4ab-cd47-4a66-95f1-f6fa11ea282e-secret-prometheus-k8s-kube-rbac-proxy-web\") pod \"prometheus-k8s-0\" (UID: \"90f7e4ab-cd47-4a66-95f1-f6fa11ea282e\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 13 12:59:06.530768 master-0 kubenswrapper[28149]: I0313 12:59:06.530728 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/90f7e4ab-cd47-4a66-95f1-f6fa11ea282e-config\") pod \"prometheus-k8s-0\" (UID: 
\"90f7e4ab-cd47-4a66-95f1-f6fa11ea282e\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 13 12:59:06.531215 master-0 kubenswrapper[28149]: I0313 12:59:06.531189 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/90f7e4ab-cd47-4a66-95f1-f6fa11ea282e-web-config\") pod \"prometheus-k8s-0\" (UID: \"90f7e4ab-cd47-4a66-95f1-f6fa11ea282e\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 13 12:59:06.531871 master-0 kubenswrapper[28149]: I0313 12:59:06.531648 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/90f7e4ab-cd47-4a66-95f1-f6fa11ea282e-secret-grpc-tls\") pod \"prometheus-k8s-0\" (UID: \"90f7e4ab-cd47-4a66-95f1-f6fa11ea282e\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 13 12:59:06.536407 master-0 kubenswrapper[28149]: I0313 12:59:06.536374 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-k8s-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/90f7e4ab-cd47-4a66-95f1-f6fa11ea282e-prometheus-k8s-rulefiles-0\") pod \"prometheus-k8s-0\" (UID: \"90f7e4ab-cd47-4a66-95f1-f6fa11ea282e\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 13 12:59:06.537392 master-0 kubenswrapper[28149]: I0313 12:59:06.537306 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/90f7e4ab-cd47-4a66-95f1-f6fa11ea282e-tls-assets\") pod \"prometheus-k8s-0\" (UID: \"90f7e4ab-cd47-4a66-95f1-f6fa11ea282e\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 13 12:59:06.545797 master-0 kubenswrapper[28149]: I0313 12:59:06.545753 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7gns8\" (UniqueName: \"kubernetes.io/projected/90f7e4ab-cd47-4a66-95f1-f6fa11ea282e-kube-api-access-7gns8\") pod \"prometheus-k8s-0\" (UID: \"90f7e4ab-cd47-4a66-95f1-f6fa11ea282e\") " 
pod="openshift-monitoring/prometheus-k8s-0" Mar 13 12:59:06.570385 master-0 kubenswrapper[28149]: I0313 12:59:06.570306 28149 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-675489948b-wtbzr"] Mar 13 12:59:06.577774 master-0 kubenswrapper[28149]: I0313 12:59:06.577715 28149 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-675489948b-wtbzr"] Mar 13 12:59:06.675575 master-0 kubenswrapper[28149]: I0313 12:59:06.675523 28149 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/prometheus-k8s-0" Mar 13 12:59:06.706923 master-0 kubenswrapper[28149]: I0313 12:59:06.706862 28149 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3f8a5f1b-3890-40cb-9c51-72d9b40142de" path="/var/lib/kubelet/pods/3f8a5f1b-3890-40cb-9c51-72d9b40142de/volumes" Mar 13 12:59:06.707545 master-0 kubenswrapper[28149]: I0313 12:59:06.707512 28149 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4e6b4a1c-92b3-4b72-9f4b-1fd7814125b5" path="/var/lib/kubelet/pods/4e6b4a1c-92b3-4b72-9f4b-1fd7814125b5/volumes" Mar 13 12:59:07.107466 master-0 kubenswrapper[28149]: I0313 12:59:07.107338 28149 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/prometheus-k8s-0"] Mar 13 12:59:07.230459 master-0 kubenswrapper[28149]: I0313 12:59:07.230397 28149 generic.go:334] "Generic (PLEG): container finished" podID="51320e4b-751c-4bd4-8a4d-765f43db1d4e" containerID="50bd644119c99f0626e86bebacd8239290237423a0cc5d1caf39cd3190c6c158" exitCode=0 Mar 13 12:59:07.231083 master-0 kubenswrapper[28149]: I0313 12:59:07.230465 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"51320e4b-751c-4bd4-8a4d-765f43db1d4e","Type":"ContainerDied","Data":"50bd644119c99f0626e86bebacd8239290237423a0cc5d1caf39cd3190c6c158"} Mar 13 12:59:07.232972 master-0 kubenswrapper[28149]: I0313 12:59:07.232911 28149 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"90f7e4ab-cd47-4a66-95f1-f6fa11ea282e","Type":"ContainerStarted","Data":"ab1c70faf861345364760da280ddc71da337583a684179d300adce41c4478924"} Mar 13 12:59:08.247643 master-0 kubenswrapper[28149]: I0313 12:59:08.247591 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"51320e4b-751c-4bd4-8a4d-765f43db1d4e","Type":"ContainerStarted","Data":"ce641dcacccd6b999df644f06e2e9b2ab4d036805e182f0d81ec61fe5be2fc51"} Mar 13 12:59:08.247643 master-0 kubenswrapper[28149]: I0313 12:59:08.247637 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"51320e4b-751c-4bd4-8a4d-765f43db1d4e","Type":"ContainerStarted","Data":"7811abb699de3fe21eb48b1991324061383abf145e34de73d34d61d373542dd9"} Mar 13 12:59:08.247643 master-0 kubenswrapper[28149]: I0313 12:59:08.247648 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"51320e4b-751c-4bd4-8a4d-765f43db1d4e","Type":"ContainerStarted","Data":"2e21e451670d1ef8133530accd79d159568777ab9f090bc5e6e83ac5eea0e623"} Mar 13 12:59:08.247643 master-0 kubenswrapper[28149]: I0313 12:59:08.247656 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"51320e4b-751c-4bd4-8a4d-765f43db1d4e","Type":"ContainerStarted","Data":"9add5c71ac97acfd21ca2901f39e10447f9e2b4b71acaa3da3d64f15cb5a5fe7"} Mar 13 12:59:08.249770 master-0 kubenswrapper[28149]: I0313 12:59:08.249720 28149 generic.go:334] "Generic (PLEG): container finished" podID="90f7e4ab-cd47-4a66-95f1-f6fa11ea282e" containerID="5e61a2b872a48d9b45fe4e9bf2b9fa2370e70c08d3b0d142a7209970293c08bb" exitCode=0 Mar 13 12:59:08.249770 master-0 kubenswrapper[28149]: I0313 12:59:08.249774 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"90f7e4ab-cd47-4a66-95f1-f6fa11ea282e","Type":"ContainerDied","Data":"5e61a2b872a48d9b45fe4e9bf2b9fa2370e70c08d3b0d142a7209970293c08bb"} Mar 13 12:59:09.260504 master-0 kubenswrapper[28149]: I0313 12:59:09.260387 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"51320e4b-751c-4bd4-8a4d-765f43db1d4e","Type":"ContainerStarted","Data":"603f6e8786116aea7c5bee8e8d37009fba3eed1137728d48bc8319b5b3bd5146"} Mar 13 12:59:09.262275 master-0 kubenswrapper[28149]: I0313 12:59:09.262218 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"90f7e4ab-cd47-4a66-95f1-f6fa11ea282e","Type":"ContainerStarted","Data":"791afc42a9c5f4bb04eca3dc0bfb62b4521377320de0ae0d77f97a433909283e"} Mar 13 12:59:10.390086 master-0 kubenswrapper[28149]: I0313 12:59:10.389669 28149 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5d954c56fb-44hqj"] Mar 13 12:59:10.403181 master-0 kubenswrapper[28149]: I0313 12:59:10.402314 28149 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5d954c56fb-44hqj" Mar 13 12:59:10.411181 master-0 kubenswrapper[28149]: I0313 12:59:10.411110 28149 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Mar 13 12:59:10.411660 master-0 kubenswrapper[28149]: I0313 12:59:10.411635 28149 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Mar 13 12:59:10.411952 master-0 kubenswrapper[28149]: I0313 12:59:10.411910 28149 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Mar 13 12:59:10.411994 master-0 kubenswrapper[28149]: I0313 12:59:10.411943 28149 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Mar 13 12:59:10.411994 master-0 kubenswrapper[28149]: I0313 12:59:10.411963 28149 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Mar 13 12:59:10.412228 master-0 kubenswrapper[28149]: I0313 12:59:10.412205 28149 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-sk2p7" Mar 13 12:59:10.500317 master-0 kubenswrapper[28149]: I0313 12:59:10.500245 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/03065d4c-e103-4f7a-832f-2dc8d6be901d-serving-cert\") pod \"route-controller-manager-5d954c56fb-44hqj\" (UID: \"03065d4c-e103-4f7a-832f-2dc8d6be901d\") " pod="openshift-route-controller-manager/route-controller-manager-5d954c56fb-44hqj" Mar 13 12:59:10.500525 master-0 kubenswrapper[28149]: I0313 12:59:10.500347 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" 
(UniqueName: \"kubernetes.io/configmap/03065d4c-e103-4f7a-832f-2dc8d6be901d-client-ca\") pod \"route-controller-manager-5d954c56fb-44hqj\" (UID: \"03065d4c-e103-4f7a-832f-2dc8d6be901d\") " pod="openshift-route-controller-manager/route-controller-manager-5d954c56fb-44hqj" Mar 13 12:59:10.500525 master-0 kubenswrapper[28149]: I0313 12:59:10.500407 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/03065d4c-e103-4f7a-832f-2dc8d6be901d-config\") pod \"route-controller-manager-5d954c56fb-44hqj\" (UID: \"03065d4c-e103-4f7a-832f-2dc8d6be901d\") " pod="openshift-route-controller-manager/route-controller-manager-5d954c56fb-44hqj" Mar 13 12:59:10.500525 master-0 kubenswrapper[28149]: I0313 12:59:10.500461 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5xf88\" (UniqueName: \"kubernetes.io/projected/03065d4c-e103-4f7a-832f-2dc8d6be901d-kube-api-access-5xf88\") pod \"route-controller-manager-5d954c56fb-44hqj\" (UID: \"03065d4c-e103-4f7a-832f-2dc8d6be901d\") " pod="openshift-route-controller-manager/route-controller-manager-5d954c56fb-44hqj" Mar 13 12:59:10.601983 master-0 kubenswrapper[28149]: I0313 12:59:10.601864 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/03065d4c-e103-4f7a-832f-2dc8d6be901d-serving-cert\") pod \"route-controller-manager-5d954c56fb-44hqj\" (UID: \"03065d4c-e103-4f7a-832f-2dc8d6be901d\") " pod="openshift-route-controller-manager/route-controller-manager-5d954c56fb-44hqj" Mar 13 12:59:10.601983 master-0 kubenswrapper[28149]: I0313 12:59:10.601936 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/03065d4c-e103-4f7a-832f-2dc8d6be901d-client-ca\") pod \"route-controller-manager-5d954c56fb-44hqj\" (UID: 
\"03065d4c-e103-4f7a-832f-2dc8d6be901d\") " pod="openshift-route-controller-manager/route-controller-manager-5d954c56fb-44hqj" Mar 13 12:59:10.602265 master-0 kubenswrapper[28149]: I0313 12:59:10.602095 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/03065d4c-e103-4f7a-832f-2dc8d6be901d-config\") pod \"route-controller-manager-5d954c56fb-44hqj\" (UID: \"03065d4c-e103-4f7a-832f-2dc8d6be901d\") " pod="openshift-route-controller-manager/route-controller-manager-5d954c56fb-44hqj" Mar 13 12:59:10.602384 master-0 kubenswrapper[28149]: I0313 12:59:10.602330 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5xf88\" (UniqueName: \"kubernetes.io/projected/03065d4c-e103-4f7a-832f-2dc8d6be901d-kube-api-access-5xf88\") pod \"route-controller-manager-5d954c56fb-44hqj\" (UID: \"03065d4c-e103-4f7a-832f-2dc8d6be901d\") " pod="openshift-route-controller-manager/route-controller-manager-5d954c56fb-44hqj" Mar 13 12:59:10.603223 master-0 kubenswrapper[28149]: I0313 12:59:10.603186 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/03065d4c-e103-4f7a-832f-2dc8d6be901d-client-ca\") pod \"route-controller-manager-5d954c56fb-44hqj\" (UID: \"03065d4c-e103-4f7a-832f-2dc8d6be901d\") " pod="openshift-route-controller-manager/route-controller-manager-5d954c56fb-44hqj" Mar 13 12:59:10.604003 master-0 kubenswrapper[28149]: I0313 12:59:10.603426 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/03065d4c-e103-4f7a-832f-2dc8d6be901d-config\") pod \"route-controller-manager-5d954c56fb-44hqj\" (UID: \"03065d4c-e103-4f7a-832f-2dc8d6be901d\") " pod="openshift-route-controller-manager/route-controller-manager-5d954c56fb-44hqj" Mar 13 12:59:10.605708 master-0 kubenswrapper[28149]: I0313 12:59:10.605687 28149 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/03065d4c-e103-4f7a-832f-2dc8d6be901d-serving-cert\") pod \"route-controller-manager-5d954c56fb-44hqj\" (UID: \"03065d4c-e103-4f7a-832f-2dc8d6be901d\") " pod="openshift-route-controller-manager/route-controller-manager-5d954c56fb-44hqj" Mar 13 12:59:11.075916 master-0 kubenswrapper[28149]: I0313 12:59:11.075836 28149 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5d954c56fb-44hqj"] Mar 13 12:59:11.280569 master-0 kubenswrapper[28149]: I0313 12:59:11.280360 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"90f7e4ab-cd47-4a66-95f1-f6fa11ea282e","Type":"ContainerStarted","Data":"3e82c8f0ce412098975bee50ab6d1b4a9ab1e918255a3fcfef693772dcfed118"} Mar 13 12:59:11.280569 master-0 kubenswrapper[28149]: I0313 12:59:11.280428 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"90f7e4ab-cd47-4a66-95f1-f6fa11ea282e","Type":"ContainerStarted","Data":"a1ca4197616ac366097743f64a379af01c1e29d4db6588a8d6f629bdf0b3418a"} Mar 13 12:59:11.285299 master-0 kubenswrapper[28149]: I0313 12:59:11.285264 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"51320e4b-751c-4bd4-8a4d-765f43db1d4e","Type":"ContainerStarted","Data":"a3189571c455fd8767727e981f10c0ef8f34217ce3c7dea6a1562a854ef424ec"} Mar 13 12:59:11.424394 master-0 kubenswrapper[28149]: I0313 12:59:11.424301 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5xf88\" (UniqueName: \"kubernetes.io/projected/03065d4c-e103-4f7a-832f-2dc8d6be901d-kube-api-access-5xf88\") pod \"route-controller-manager-5d954c56fb-44hqj\" (UID: \"03065d4c-e103-4f7a-832f-2dc8d6be901d\") " 
pod="openshift-route-controller-manager/route-controller-manager-5d954c56fb-44hqj" Mar 13 12:59:11.633682 master-0 kubenswrapper[28149]: I0313 12:59:11.633637 28149 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5d954c56fb-44hqj" Mar 13 12:59:11.845445 master-0 kubenswrapper[28149]: E0313 12:59:11.845392 28149 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfc192c03_5aec_4507_a702_56bf98c96e9c.slice/crio-6446a8dda38eb9740b431e3cbbce0e66637311ae9d8e6bde203aefb67d8183fd.scope\": RecentStats: unable to find data in memory cache]" Mar 13 12:59:12.310117 master-0 kubenswrapper[28149]: I0313 12:59:12.309984 28149 generic.go:334] "Generic (PLEG): container finished" podID="fc192c03-5aec-4507-a702-56bf98c96e9c" containerID="6446a8dda38eb9740b431e3cbbce0e66637311ae9d8e6bde203aefb67d8183fd" exitCode=0 Mar 13 12:59:12.310117 master-0 kubenswrapper[28149]: I0313 12:59:12.310052 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/metrics-server-567b9cf7f-cxnj2" event={"ID":"fc192c03-5aec-4507-a702-56bf98c96e9c","Type":"ContainerDied","Data":"6446a8dda38eb9740b431e3cbbce0e66637311ae9d8e6bde203aefb67d8183fd"} Mar 13 12:59:12.314660 master-0 kubenswrapper[28149]: I0313 12:59:12.314605 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"90f7e4ab-cd47-4a66-95f1-f6fa11ea282e","Type":"ContainerStarted","Data":"49375a9af5f6efc2aff78b7dc2bfc5a81d6c58919e990fb1fe8fb855435d85a5"} Mar 13 12:59:13.010935 master-0 kubenswrapper[28149]: I0313 12:59:13.010882 28149 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/metrics-server-567b9cf7f-cxnj2" Mar 13 12:59:13.065070 master-0 kubenswrapper[28149]: I0313 12:59:13.064930 28149 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c69h2\" (UniqueName: \"kubernetes.io/projected/fc192c03-5aec-4507-a702-56bf98c96e9c-kube-api-access-c69h2\") pod \"fc192c03-5aec-4507-a702-56bf98c96e9c\" (UID: \"fc192c03-5aec-4507-a702-56bf98c96e9c\") " Mar 13 12:59:13.065440 master-0 kubenswrapper[28149]: I0313 12:59:13.065358 28149 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-metrics-server-tls\" (UniqueName: \"kubernetes.io/secret/fc192c03-5aec-4507-a702-56bf98c96e9c-secret-metrics-server-tls\") pod \"fc192c03-5aec-4507-a702-56bf98c96e9c\" (UID: \"fc192c03-5aec-4507-a702-56bf98c96e9c\") " Mar 13 12:59:13.065750 master-0 kubenswrapper[28149]: I0313 12:59:13.065653 28149 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/fc192c03-5aec-4507-a702-56bf98c96e9c-secret-metrics-client-certs\") pod \"fc192c03-5aec-4507-a702-56bf98c96e9c\" (UID: \"fc192c03-5aec-4507-a702-56bf98c96e9c\") " Mar 13 12:59:13.065750 master-0 kubenswrapper[28149]: I0313 12:59:13.065712 28149 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-log\" (UniqueName: \"kubernetes.io/empty-dir/fc192c03-5aec-4507-a702-56bf98c96e9c-audit-log\") pod \"fc192c03-5aec-4507-a702-56bf98c96e9c\" (UID: \"fc192c03-5aec-4507-a702-56bf98c96e9c\") " Mar 13 12:59:13.066299 master-0 kubenswrapper[28149]: I0313 12:59:13.066073 28149 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fc192c03-5aec-4507-a702-56bf98c96e9c-configmap-kubelet-serving-ca-bundle\") pod \"fc192c03-5aec-4507-a702-56bf98c96e9c\" (UID: 
\"fc192c03-5aec-4507-a702-56bf98c96e9c\") " Mar 13 12:59:13.066748 master-0 kubenswrapper[28149]: I0313 12:59:13.066290 28149 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fc192c03-5aec-4507-a702-56bf98c96e9c-client-ca-bundle\") pod \"fc192c03-5aec-4507-a702-56bf98c96e9c\" (UID: \"fc192c03-5aec-4507-a702-56bf98c96e9c\") " Mar 13 12:59:13.066748 master-0 kubenswrapper[28149]: I0313 12:59:13.066621 28149 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-server-audit-profiles\" (UniqueName: \"kubernetes.io/configmap/fc192c03-5aec-4507-a702-56bf98c96e9c-metrics-server-audit-profiles\") pod \"fc192c03-5aec-4507-a702-56bf98c96e9c\" (UID: \"fc192c03-5aec-4507-a702-56bf98c96e9c\") " Mar 13 12:59:13.066990 master-0 kubenswrapper[28149]: I0313 12:59:13.066897 28149 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fc192c03-5aec-4507-a702-56bf98c96e9c-configmap-kubelet-serving-ca-bundle" (OuterVolumeSpecName: "configmap-kubelet-serving-ca-bundle") pod "fc192c03-5aec-4507-a702-56bf98c96e9c" (UID: "fc192c03-5aec-4507-a702-56bf98c96e9c"). InnerVolumeSpecName "configmap-kubelet-serving-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 13 12:59:13.067336 master-0 kubenswrapper[28149]: I0313 12:59:13.067154 28149 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fc192c03-5aec-4507-a702-56bf98c96e9c-metrics-server-audit-profiles" (OuterVolumeSpecName: "metrics-server-audit-profiles") pod "fc192c03-5aec-4507-a702-56bf98c96e9c" (UID: "fc192c03-5aec-4507-a702-56bf98c96e9c"). InnerVolumeSpecName "metrics-server-audit-profiles". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 13 12:59:13.067645 master-0 kubenswrapper[28149]: I0313 12:59:13.067617 28149 reconciler_common.go:293] "Volume detached for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fc192c03-5aec-4507-a702-56bf98c96e9c-configmap-kubelet-serving-ca-bundle\") on node \"master-0\" DevicePath \"\"" Mar 13 12:59:13.067721 master-0 kubenswrapper[28149]: I0313 12:59:13.067648 28149 reconciler_common.go:293] "Volume detached for volume \"metrics-server-audit-profiles\" (UniqueName: \"kubernetes.io/configmap/fc192c03-5aec-4507-a702-56bf98c96e9c-metrics-server-audit-profiles\") on node \"master-0\" DevicePath \"\"" Mar 13 12:59:13.068059 master-0 kubenswrapper[28149]: I0313 12:59:13.068016 28149 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fc192c03-5aec-4507-a702-56bf98c96e9c-audit-log" (OuterVolumeSpecName: "audit-log") pod "fc192c03-5aec-4507-a702-56bf98c96e9c" (UID: "fc192c03-5aec-4507-a702-56bf98c96e9c"). InnerVolumeSpecName "audit-log". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 13 12:59:13.068376 master-0 kubenswrapper[28149]: I0313 12:59:13.068339 28149 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fc192c03-5aec-4507-a702-56bf98c96e9c-kube-api-access-c69h2" (OuterVolumeSpecName: "kube-api-access-c69h2") pod "fc192c03-5aec-4507-a702-56bf98c96e9c" (UID: "fc192c03-5aec-4507-a702-56bf98c96e9c"). InnerVolumeSpecName "kube-api-access-c69h2". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 12:59:13.068971 master-0 kubenswrapper[28149]: I0313 12:59:13.068940 28149 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fc192c03-5aec-4507-a702-56bf98c96e9c-secret-metrics-client-certs" (OuterVolumeSpecName: "secret-metrics-client-certs") pod "fc192c03-5aec-4507-a702-56bf98c96e9c" (UID: "fc192c03-5aec-4507-a702-56bf98c96e9c"). InnerVolumeSpecName "secret-metrics-client-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 12:59:13.069086 master-0 kubenswrapper[28149]: I0313 12:59:13.069052 28149 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fc192c03-5aec-4507-a702-56bf98c96e9c-secret-metrics-server-tls" (OuterVolumeSpecName: "secret-metrics-server-tls") pod "fc192c03-5aec-4507-a702-56bf98c96e9c" (UID: "fc192c03-5aec-4507-a702-56bf98c96e9c"). InnerVolumeSpecName "secret-metrics-server-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 12:59:13.070614 master-0 kubenswrapper[28149]: I0313 12:59:13.070561 28149 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fc192c03-5aec-4507-a702-56bf98c96e9c-client-ca-bundle" (OuterVolumeSpecName: "client-ca-bundle") pod "fc192c03-5aec-4507-a702-56bf98c96e9c" (UID: "fc192c03-5aec-4507-a702-56bf98c96e9c"). InnerVolumeSpecName "client-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 12:59:13.169243 master-0 kubenswrapper[28149]: I0313 12:59:13.169079 28149 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c69h2\" (UniqueName: \"kubernetes.io/projected/fc192c03-5aec-4507-a702-56bf98c96e9c-kube-api-access-c69h2\") on node \"master-0\" DevicePath \"\"" Mar 13 12:59:13.169243 master-0 kubenswrapper[28149]: I0313 12:59:13.169126 28149 reconciler_common.go:293] "Volume detached for volume \"secret-metrics-server-tls\" (UniqueName: \"kubernetes.io/secret/fc192c03-5aec-4507-a702-56bf98c96e9c-secret-metrics-server-tls\") on node \"master-0\" DevicePath \"\"" Mar 13 12:59:13.169243 master-0 kubenswrapper[28149]: I0313 12:59:13.169158 28149 reconciler_common.go:293] "Volume detached for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/fc192c03-5aec-4507-a702-56bf98c96e9c-secret-metrics-client-certs\") on node \"master-0\" DevicePath \"\"" Mar 13 12:59:13.169243 master-0 kubenswrapper[28149]: I0313 12:59:13.169174 28149 reconciler_common.go:293] "Volume detached for volume \"audit-log\" (UniqueName: \"kubernetes.io/empty-dir/fc192c03-5aec-4507-a702-56bf98c96e9c-audit-log\") on node \"master-0\" DevicePath \"\"" Mar 13 12:59:13.169243 master-0 kubenswrapper[28149]: I0313 12:59:13.169188 28149 reconciler_common.go:293] "Volume detached for volume \"client-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fc192c03-5aec-4507-a702-56bf98c96e9c-client-ca-bundle\") on node \"master-0\" DevicePath \"\"" Mar 13 12:59:13.348932 master-0 kubenswrapper[28149]: I0313 12:59:13.348873 28149 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/metrics-server-567b9cf7f-cxnj2" Mar 13 12:59:13.349230 master-0 kubenswrapper[28149]: I0313 12:59:13.348854 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/metrics-server-567b9cf7f-cxnj2" event={"ID":"fc192c03-5aec-4507-a702-56bf98c96e9c","Type":"ContainerDied","Data":"ca4392c691682c0095dfe8e779e3de1082f741c49a5ae52776e0a4782a168b3b"} Mar 13 12:59:13.349230 master-0 kubenswrapper[28149]: I0313 12:59:13.349034 28149 scope.go:117] "RemoveContainer" containerID="6446a8dda38eb9740b431e3cbbce0e66637311ae9d8e6bde203aefb67d8183fd" Mar 13 12:59:13.356472 master-0 kubenswrapper[28149]: I0313 12:59:13.356419 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"90f7e4ab-cd47-4a66-95f1-f6fa11ea282e","Type":"ContainerStarted","Data":"51655730def0ad8a4f4e24f994b795e4745138121a9d0bdc083f2bd670d283ab"} Mar 13 12:59:13.356558 master-0 kubenswrapper[28149]: I0313 12:59:13.356479 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"90f7e4ab-cd47-4a66-95f1-f6fa11ea282e","Type":"ContainerStarted","Data":"8da6747c45bc48bbbe921b8555e6e5b21be4d039e755a509c9bdf80f32cbd256"} Mar 13 12:59:13.416696 master-0 kubenswrapper[28149]: I0313 12:59:13.416586 28149 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/alertmanager-main-0" podStartSLOduration=11.416562373 podStartE2EDuration="11.416562373s" podCreationTimestamp="2026-03-13 12:59:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 12:59:13.18284783 +0000 UTC m=+326.836313029" watchObservedRunningTime="2026-03-13 12:59:13.416562373 +0000 UTC m=+327.070027532" Mar 13 12:59:13.925334 master-0 kubenswrapper[28149]: I0313 12:59:13.901296 28149 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-route-controller-manager/route-controller-manager-5d954c56fb-44hqj"] Mar 13 12:59:14.365290 master-0 kubenswrapper[28149]: I0313 12:59:14.365233 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5d954c56fb-44hqj" event={"ID":"03065d4c-e103-4f7a-832f-2dc8d6be901d","Type":"ContainerStarted","Data":"3c5e0701208a66bf182bab956c406e3b4129088fbb6f874a6c21a52e9ac3b469"} Mar 13 12:59:14.365756 master-0 kubenswrapper[28149]: I0313 12:59:14.365586 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5d954c56fb-44hqj" event={"ID":"03065d4c-e103-4f7a-832f-2dc8d6be901d","Type":"ContainerStarted","Data":"1e38fb6732774bf85a1480d57001b62620598f672df5fd036dcc3baaf0215221"} Mar 13 12:59:15.139740 master-0 kubenswrapper[28149]: I0313 12:59:15.139684 28149 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-575b4c697b-kjnzx" Mar 13 12:59:15.145229 master-0 kubenswrapper[28149]: I0313 12:59:15.145192 28149 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-575b4c697b-kjnzx" Mar 13 12:59:15.373153 master-0 kubenswrapper[28149]: I0313 12:59:15.373068 28149 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-5d954c56fb-44hqj" Mar 13 12:59:16.373153 master-0 kubenswrapper[28149]: I0313 12:59:16.373087 28149 patch_prober.go:28] interesting pod/route-controller-manager-5d954c56fb-44hqj container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.128.0.107:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 13 12:59:16.373386 master-0 kubenswrapper[28149]: I0313 12:59:16.373196 28149 prober.go:107] "Probe 
failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-5d954c56fb-44hqj" podUID="03065d4c-e103-4f7a-832f-2dc8d6be901d" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.128.0.107:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 13 12:59:16.402774 master-0 kubenswrapper[28149]: I0313 12:59:16.402716 28149 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-monitoring/metrics-server-567b9cf7f-cxnj2"] Mar 13 12:59:16.420006 master-0 kubenswrapper[28149]: I0313 12:59:16.419954 28149 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-5d954c56fb-44hqj" Mar 13 12:59:16.420790 master-0 kubenswrapper[28149]: I0313 12:59:16.420730 28149 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-monitoring/metrics-server-567b9cf7f-cxnj2"] Mar 13 12:59:16.440429 master-0 kubenswrapper[28149]: I0313 12:59:16.440349 28149 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-5d954c56fb-44hqj" podStartSLOduration=51.440328865 podStartE2EDuration="51.440328865s" podCreationTimestamp="2026-03-13 12:58:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 12:59:16.437062618 +0000 UTC m=+330.090527777" watchObservedRunningTime="2026-03-13 12:59:16.440328865 +0000 UTC m=+330.093794024" Mar 13 12:59:16.557654 master-0 kubenswrapper[28149]: I0313 12:59:16.557594 28149 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/prometheus-k8s-0" podStartSLOduration=10.557578117 podStartE2EDuration="10.557578117s" podCreationTimestamp="2026-03-13 12:59:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 12:59:16.551076023 +0000 UTC m=+330.204541182" watchObservedRunningTime="2026-03-13 12:59:16.557578117 +0000 UTC m=+330.211043276" Mar 13 12:59:16.591570 master-0 kubenswrapper[28149]: I0313 12:59:16.591492 28149 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f74cdccbf-t88kk"] Mar 13 12:59:16.676185 master-0 kubenswrapper[28149]: I0313 12:59:16.676038 28149 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-monitoring/prometheus-k8s-0" Mar 13 12:59:16.694150 master-0 kubenswrapper[28149]: I0313 12:59:16.694092 28149 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fc192c03-5aec-4507-a702-56bf98c96e9c" path="/var/lib/kubelet/pods/fc192c03-5aec-4507-a702-56bf98c96e9c/volumes" Mar 13 12:59:18.584813 master-0 kubenswrapper[28149]: I0313 12:59:18.584747 28149 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-bc56f484d-2sbtm" podUID="14305d97-7b24-4321-bf3a-3ec79e52f6ea" containerName="console" containerID="cri-o://c15743de492ca8ddc6fa3ae187515d8740ee624870648540fd9ae0e9eebfea32" gracePeriod=15 Mar 13 12:59:19.132320 master-0 kubenswrapper[28149]: I0313 12:59:19.132274 28149 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-bc56f484d-2sbtm_14305d97-7b24-4321-bf3a-3ec79e52f6ea/console/0.log" Mar 13 12:59:19.132526 master-0 kubenswrapper[28149]: I0313 12:59:19.132373 28149 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-bc56f484d-2sbtm" Mar 13 12:59:19.200455 master-0 kubenswrapper[28149]: I0313 12:59:19.198933 28149 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/14305d97-7b24-4321-bf3a-3ec79e52f6ea-console-config\") pod \"14305d97-7b24-4321-bf3a-3ec79e52f6ea\" (UID: \"14305d97-7b24-4321-bf3a-3ec79e52f6ea\") " Mar 13 12:59:19.200455 master-0 kubenswrapper[28149]: I0313 12:59:19.199026 28149 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/14305d97-7b24-4321-bf3a-3ec79e52f6ea-trusted-ca-bundle\") pod \"14305d97-7b24-4321-bf3a-3ec79e52f6ea\" (UID: \"14305d97-7b24-4321-bf3a-3ec79e52f6ea\") " Mar 13 12:59:19.200455 master-0 kubenswrapper[28149]: I0313 12:59:19.199103 28149 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8sscj\" (UniqueName: \"kubernetes.io/projected/14305d97-7b24-4321-bf3a-3ec79e52f6ea-kube-api-access-8sscj\") pod \"14305d97-7b24-4321-bf3a-3ec79e52f6ea\" (UID: \"14305d97-7b24-4321-bf3a-3ec79e52f6ea\") " Mar 13 12:59:19.200455 master-0 kubenswrapper[28149]: I0313 12:59:19.199173 28149 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/14305d97-7b24-4321-bf3a-3ec79e52f6ea-console-oauth-config\") pod \"14305d97-7b24-4321-bf3a-3ec79e52f6ea\" (UID: \"14305d97-7b24-4321-bf3a-3ec79e52f6ea\") " Mar 13 12:59:19.200455 master-0 kubenswrapper[28149]: I0313 12:59:19.199236 28149 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/14305d97-7b24-4321-bf3a-3ec79e52f6ea-console-serving-cert\") pod \"14305d97-7b24-4321-bf3a-3ec79e52f6ea\" (UID: \"14305d97-7b24-4321-bf3a-3ec79e52f6ea\") " Mar 13 12:59:19.200455 master-0 
kubenswrapper[28149]: I0313 12:59:19.199296 28149 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/14305d97-7b24-4321-bf3a-3ec79e52f6ea-oauth-serving-cert\") pod \"14305d97-7b24-4321-bf3a-3ec79e52f6ea\" (UID: \"14305d97-7b24-4321-bf3a-3ec79e52f6ea\") " Mar 13 12:59:19.200455 master-0 kubenswrapper[28149]: I0313 12:59:19.199386 28149 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/14305d97-7b24-4321-bf3a-3ec79e52f6ea-service-ca\") pod \"14305d97-7b24-4321-bf3a-3ec79e52f6ea\" (UID: \"14305d97-7b24-4321-bf3a-3ec79e52f6ea\") " Mar 13 12:59:19.200455 master-0 kubenswrapper[28149]: I0313 12:59:19.200126 28149 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/14305d97-7b24-4321-bf3a-3ec79e52f6ea-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "14305d97-7b24-4321-bf3a-3ec79e52f6ea" (UID: "14305d97-7b24-4321-bf3a-3ec79e52f6ea"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 13 12:59:19.200455 master-0 kubenswrapper[28149]: I0313 12:59:19.200225 28149 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/14305d97-7b24-4321-bf3a-3ec79e52f6ea-service-ca" (OuterVolumeSpecName: "service-ca") pod "14305d97-7b24-4321-bf3a-3ec79e52f6ea" (UID: "14305d97-7b24-4321-bf3a-3ec79e52f6ea"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 13 12:59:19.201704 master-0 kubenswrapper[28149]: I0313 12:59:19.200705 28149 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/14305d97-7b24-4321-bf3a-3ec79e52f6ea-console-config" (OuterVolumeSpecName: "console-config") pod "14305d97-7b24-4321-bf3a-3ec79e52f6ea" (UID: "14305d97-7b24-4321-bf3a-3ec79e52f6ea"). 
InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 13 12:59:19.201704 master-0 kubenswrapper[28149]: I0313 12:59:19.201108 28149 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/14305d97-7b24-4321-bf3a-3ec79e52f6ea-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "14305d97-7b24-4321-bf3a-3ec79e52f6ea" (UID: "14305d97-7b24-4321-bf3a-3ec79e52f6ea"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 13 12:59:19.201975 master-0 kubenswrapper[28149]: I0313 12:59:19.201932 28149 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/14305d97-7b24-4321-bf3a-3ec79e52f6ea-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "14305d97-7b24-4321-bf3a-3ec79e52f6ea" (UID: "14305d97-7b24-4321-bf3a-3ec79e52f6ea"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 13 12:59:19.203297 master-0 kubenswrapper[28149]: I0313 12:59:19.203212 28149 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/14305d97-7b24-4321-bf3a-3ec79e52f6ea-kube-api-access-8sscj" (OuterVolumeSpecName: "kube-api-access-8sscj") pod "14305d97-7b24-4321-bf3a-3ec79e52f6ea" (UID: "14305d97-7b24-4321-bf3a-3ec79e52f6ea"). InnerVolumeSpecName "kube-api-access-8sscj". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 13 12:59:19.204830 master-0 kubenswrapper[28149]: I0313 12:59:19.204777 28149 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/14305d97-7b24-4321-bf3a-3ec79e52f6ea-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "14305d97-7b24-4321-bf3a-3ec79e52f6ea" (UID: "14305d97-7b24-4321-bf3a-3ec79e52f6ea"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 13 12:59:19.300832 master-0 kubenswrapper[28149]: I0313 12:59:19.300782 28149 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/14305d97-7b24-4321-bf3a-3ec79e52f6ea-oauth-serving-cert\") on node \"master-0\" DevicePath \"\""
Mar 13 12:59:19.300832 master-0 kubenswrapper[28149]: I0313 12:59:19.300824 28149 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/14305d97-7b24-4321-bf3a-3ec79e52f6ea-service-ca\") on node \"master-0\" DevicePath \"\""
Mar 13 12:59:19.300832 master-0 kubenswrapper[28149]: I0313 12:59:19.300837 28149 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/14305d97-7b24-4321-bf3a-3ec79e52f6ea-console-config\") on node \"master-0\" DevicePath \"\""
Mar 13 12:59:19.300832 master-0 kubenswrapper[28149]: I0313 12:59:19.300845 28149 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/14305d97-7b24-4321-bf3a-3ec79e52f6ea-trusted-ca-bundle\") on node \"master-0\" DevicePath \"\""
Mar 13 12:59:19.300832 master-0 kubenswrapper[28149]: I0313 12:59:19.300860 28149 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8sscj\" (UniqueName: \"kubernetes.io/projected/14305d97-7b24-4321-bf3a-3ec79e52f6ea-kube-api-access-8sscj\") on node \"master-0\" DevicePath \"\""
Mar 13 12:59:19.301342 master-0 kubenswrapper[28149]: I0313 12:59:19.300874 28149 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/14305d97-7b24-4321-bf3a-3ec79e52f6ea-console-oauth-config\") on node \"master-0\" DevicePath \"\""
Mar 13 12:59:19.301342 master-0 kubenswrapper[28149]: I0313 12:59:19.300882 28149 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/14305d97-7b24-4321-bf3a-3ec79e52f6ea-console-serving-cert\") on node \"master-0\" DevicePath \"\""
Mar 13 12:59:19.401716 master-0 kubenswrapper[28149]: I0313 12:59:19.401559 28149 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-bc56f484d-2sbtm_14305d97-7b24-4321-bf3a-3ec79e52f6ea/console/0.log"
Mar 13 12:59:19.401716 master-0 kubenswrapper[28149]: I0313 12:59:19.401620 28149 generic.go:334] "Generic (PLEG): container finished" podID="14305d97-7b24-4321-bf3a-3ec79e52f6ea" containerID="c15743de492ca8ddc6fa3ae187515d8740ee624870648540fd9ae0e9eebfea32" exitCode=2
Mar 13 12:59:19.401716 master-0 kubenswrapper[28149]: I0313 12:59:19.401656 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-bc56f484d-2sbtm" event={"ID":"14305d97-7b24-4321-bf3a-3ec79e52f6ea","Type":"ContainerDied","Data":"c15743de492ca8ddc6fa3ae187515d8740ee624870648540fd9ae0e9eebfea32"}
Mar 13 12:59:19.401716 master-0 kubenswrapper[28149]: I0313 12:59:19.401691 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-bc56f484d-2sbtm" event={"ID":"14305d97-7b24-4321-bf3a-3ec79e52f6ea","Type":"ContainerDied","Data":"d8b671961d0f43b9ff143b56b51b8894a873e7f964dcf6259604ae93bbc70a93"}
Mar 13 12:59:19.402233 master-0 kubenswrapper[28149]: I0313 12:59:19.401728 28149 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-bc56f484d-2sbtm"
Mar 13 12:59:19.402233 master-0 kubenswrapper[28149]: I0313 12:59:19.401771 28149 scope.go:117] "RemoveContainer" containerID="c15743de492ca8ddc6fa3ae187515d8740ee624870648540fd9ae0e9eebfea32"
Mar 13 12:59:19.429568 master-0 kubenswrapper[28149]: I0313 12:59:19.429530 28149 scope.go:117] "RemoveContainer" containerID="c15743de492ca8ddc6fa3ae187515d8740ee624870648540fd9ae0e9eebfea32"
Mar 13 12:59:19.430435 master-0 kubenswrapper[28149]: E0313 12:59:19.430404 28149 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c15743de492ca8ddc6fa3ae187515d8740ee624870648540fd9ae0e9eebfea32\": container with ID starting with c15743de492ca8ddc6fa3ae187515d8740ee624870648540fd9ae0e9eebfea32 not found: ID does not exist" containerID="c15743de492ca8ddc6fa3ae187515d8740ee624870648540fd9ae0e9eebfea32"
Mar 13 12:59:19.430534 master-0 kubenswrapper[28149]: I0313 12:59:19.430446 28149 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c15743de492ca8ddc6fa3ae187515d8740ee624870648540fd9ae0e9eebfea32"} err="failed to get container status \"c15743de492ca8ddc6fa3ae187515d8740ee624870648540fd9ae0e9eebfea32\": rpc error: code = NotFound desc = could not find container \"c15743de492ca8ddc6fa3ae187515d8740ee624870648540fd9ae0e9eebfea32\": container with ID starting with c15743de492ca8ddc6fa3ae187515d8740ee624870648540fd9ae0e9eebfea32 not found: ID does not exist"
Mar 13 12:59:19.456831 master-0 kubenswrapper[28149]: I0313 12:59:19.456760 28149 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-bc56f484d-2sbtm"]
Mar 13 12:59:19.462041 master-0 kubenswrapper[28149]: I0313 12:59:19.461991 28149 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-bc56f484d-2sbtm"]
Mar 13 12:59:20.695616 master-0 kubenswrapper[28149]: I0313 12:59:20.695554 28149 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="14305d97-7b24-4321-bf3a-3ec79e52f6ea" path="/var/lib/kubelet/pods/14305d97-7b24-4321-bf3a-3ec79e52f6ea/volumes"
Mar 13 12:59:38.100185 master-0 kubenswrapper[28149]: I0313 12:59:38.100084 28149 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/installer-4-master-0"]
Mar 13 12:59:38.100997 master-0 kubenswrapper[28149]: E0313 12:59:38.100473 28149 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="14305d97-7b24-4321-bf3a-3ec79e52f6ea" containerName="console"
Mar 13 12:59:38.100997 master-0 kubenswrapper[28149]: I0313 12:59:38.100499 28149 state_mem.go:107] "Deleted CPUSet assignment" podUID="14305d97-7b24-4321-bf3a-3ec79e52f6ea" containerName="console"
Mar 13 12:59:38.100997 master-0 kubenswrapper[28149]: E0313 12:59:38.100546 28149 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fc192c03-5aec-4507-a702-56bf98c96e9c" containerName="metrics-server"
Mar 13 12:59:38.100997 master-0 kubenswrapper[28149]: I0313 12:59:38.100555 28149 state_mem.go:107] "Deleted CPUSet assignment" podUID="fc192c03-5aec-4507-a702-56bf98c96e9c" containerName="metrics-server"
Mar 13 12:59:38.100997 master-0 kubenswrapper[28149]: I0313 12:59:38.100755 28149 memory_manager.go:354] "RemoveStaleState removing state" podUID="fc192c03-5aec-4507-a702-56bf98c96e9c" containerName="metrics-server"
Mar 13 12:59:38.100997 master-0 kubenswrapper[28149]: I0313 12:59:38.100792 28149 memory_manager.go:354] "RemoveStaleState removing state" podUID="14305d97-7b24-4321-bf3a-3ec79e52f6ea" containerName="console"
Mar 13 12:59:38.106252 master-0 kubenswrapper[28149]: I0313 12:59:38.106192 28149 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/installer-4-master-0"
Mar 13 12:59:38.111393 master-0 kubenswrapper[28149]: I0313 12:59:38.111343 28149 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager"/"installer-sa-dockercfg-7mx4m"
Mar 13 12:59:38.112536 master-0 kubenswrapper[28149]: I0313 12:59:38.112501 28149 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager"/"kube-root-ca.crt"
Mar 13 12:59:38.112883 master-0 kubenswrapper[28149]: I0313 12:59:38.112829 28149 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/installer-4-master-0"]
Mar 13 12:59:38.234188 master-0 kubenswrapper[28149]: I0313 12:59:38.234101 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/22335d40-8638-4d1a-81eb-821362a6ae89-var-lock\") pod \"installer-4-master-0\" (UID: \"22335d40-8638-4d1a-81eb-821362a6ae89\") " pod="openshift-kube-controller-manager/installer-4-master-0"
Mar 13 12:59:38.234569 master-0 kubenswrapper[28149]: I0313 12:59:38.234528 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/22335d40-8638-4d1a-81eb-821362a6ae89-kube-api-access\") pod \"installer-4-master-0\" (UID: \"22335d40-8638-4d1a-81eb-821362a6ae89\") " pod="openshift-kube-controller-manager/installer-4-master-0"
Mar 13 12:59:38.234631 master-0 kubenswrapper[28149]: I0313 12:59:38.234608 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/22335d40-8638-4d1a-81eb-821362a6ae89-kubelet-dir\") pod \"installer-4-master-0\" (UID: \"22335d40-8638-4d1a-81eb-821362a6ae89\") " pod="openshift-kube-controller-manager/installer-4-master-0"
Mar 13 12:59:38.335529 master-0 kubenswrapper[28149]: I0313 12:59:38.335453 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/22335d40-8638-4d1a-81eb-821362a6ae89-kube-api-access\") pod \"installer-4-master-0\" (UID: \"22335d40-8638-4d1a-81eb-821362a6ae89\") " pod="openshift-kube-controller-manager/installer-4-master-0"
Mar 13 12:59:38.335529 master-0 kubenswrapper[28149]: I0313 12:59:38.335543 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/22335d40-8638-4d1a-81eb-821362a6ae89-kubelet-dir\") pod \"installer-4-master-0\" (UID: \"22335d40-8638-4d1a-81eb-821362a6ae89\") " pod="openshift-kube-controller-manager/installer-4-master-0"
Mar 13 12:59:38.335842 master-0 kubenswrapper[28149]: I0313 12:59:38.335609 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/22335d40-8638-4d1a-81eb-821362a6ae89-var-lock\") pod \"installer-4-master-0\" (UID: \"22335d40-8638-4d1a-81eb-821362a6ae89\") " pod="openshift-kube-controller-manager/installer-4-master-0"
Mar 13 12:59:38.335842 master-0 kubenswrapper[28149]: I0313 12:59:38.335713 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/22335d40-8638-4d1a-81eb-821362a6ae89-var-lock\") pod \"installer-4-master-0\" (UID: \"22335d40-8638-4d1a-81eb-821362a6ae89\") " pod="openshift-kube-controller-manager/installer-4-master-0"
Mar 13 12:59:38.336348 master-0 kubenswrapper[28149]: I0313 12:59:38.336298 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/22335d40-8638-4d1a-81eb-821362a6ae89-kubelet-dir\") pod \"installer-4-master-0\" (UID: \"22335d40-8638-4d1a-81eb-821362a6ae89\") " pod="openshift-kube-controller-manager/installer-4-master-0"
Mar 13 12:59:38.350706 master-0 kubenswrapper[28149]: I0313 12:59:38.350588 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/22335d40-8638-4d1a-81eb-821362a6ae89-kube-api-access\") pod \"installer-4-master-0\" (UID: \"22335d40-8638-4d1a-81eb-821362a6ae89\") " pod="openshift-kube-controller-manager/installer-4-master-0"
Mar 13 12:59:38.434592 master-0 kubenswrapper[28149]: I0313 12:59:38.434525 28149 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/installer-4-master-0"
Mar 13 12:59:38.908384 master-0 kubenswrapper[28149]: I0313 12:59:38.908331 28149 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/installer-4-master-0"]
Mar 13 12:59:39.611261 master-0 kubenswrapper[28149]: I0313 12:59:39.610328 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-4-master-0" event={"ID":"22335d40-8638-4d1a-81eb-821362a6ae89","Type":"ContainerStarted","Data":"affc812f4ac3dc1580628d526dec1ae0779e12f50055f9303ec9547d783c16d3"}
Mar 13 12:59:39.611261 master-0 kubenswrapper[28149]: I0313 12:59:39.610393 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-4-master-0" event={"ID":"22335d40-8638-4d1a-81eb-821362a6ae89","Type":"ContainerStarted","Data":"d6c478d9992e0f8e96fa658933b033fc6821b33b7997a5ce7f69500ad9635a5d"}
Mar 13 12:59:39.631826 master-0 kubenswrapper[28149]: I0313 12:59:39.631767 28149 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/installer-4-master-0" podStartSLOduration=1.63174053 podStartE2EDuration="1.63174053s" podCreationTimestamp="2026-03-13 12:59:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 12:59:39.628165694 +0000 UTC m=+353.281630863" watchObservedRunningTime="2026-03-13 12:59:39.63174053 +0000 UTC m=+353.285205689"
Mar 13 12:59:41.638597 master-0 kubenswrapper[28149]: I0313 12:59:41.638484 28149 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-f74cdccbf-t88kk" podUID="17c9d2eb-bc27-40f5-85b1-171256776322" containerName="console" containerID="cri-o://a365c2570eec0d1efd2f95df921ea91a62e8a7bec82ee722c2c420e4e4f9a961" gracePeriod=15
Mar 13 12:59:42.633968 master-0 kubenswrapper[28149]: I0313 12:59:42.633896 28149 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f74cdccbf-t88kk_17c9d2eb-bc27-40f5-85b1-171256776322/console/0.log"
Mar 13 12:59:42.634237 master-0 kubenswrapper[28149]: I0313 12:59:42.633997 28149 generic.go:334] "Generic (PLEG): container finished" podID="17c9d2eb-bc27-40f5-85b1-171256776322" containerID="a365c2570eec0d1efd2f95df921ea91a62e8a7bec82ee722c2c420e4e4f9a961" exitCode=2
Mar 13 12:59:42.634237 master-0 kubenswrapper[28149]: I0313 12:59:42.634034 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f74cdccbf-t88kk" event={"ID":"17c9d2eb-bc27-40f5-85b1-171256776322","Type":"ContainerDied","Data":"a365c2570eec0d1efd2f95df921ea91a62e8a7bec82ee722c2c420e4e4f9a961"}
Mar 13 12:59:42.696085 master-0 kubenswrapper[28149]: I0313 12:59:42.696038 28149 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f74cdccbf-t88kk_17c9d2eb-bc27-40f5-85b1-171256776322/console/0.log"
Mar 13 12:59:42.698298 master-0 kubenswrapper[28149]: I0313 12:59:42.696129 28149 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f74cdccbf-t88kk"
Mar 13 12:59:42.815380 master-0 kubenswrapper[28149]: I0313 12:59:42.815297 28149 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/17c9d2eb-bc27-40f5-85b1-171256776322-service-ca\") pod \"17c9d2eb-bc27-40f5-85b1-171256776322\" (UID: \"17c9d2eb-bc27-40f5-85b1-171256776322\") "
Mar 13 12:59:42.815622 master-0 kubenswrapper[28149]: I0313 12:59:42.815403 28149 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/17c9d2eb-bc27-40f5-85b1-171256776322-console-config\") pod \"17c9d2eb-bc27-40f5-85b1-171256776322\" (UID: \"17c9d2eb-bc27-40f5-85b1-171256776322\") "
Mar 13 12:59:42.815622 master-0 kubenswrapper[28149]: I0313 12:59:42.815444 28149 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/17c9d2eb-bc27-40f5-85b1-171256776322-trusted-ca-bundle\") pod \"17c9d2eb-bc27-40f5-85b1-171256776322\" (UID: \"17c9d2eb-bc27-40f5-85b1-171256776322\") "
Mar 13 12:59:42.815622 master-0 kubenswrapper[28149]: I0313 12:59:42.815481 28149 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/17c9d2eb-bc27-40f5-85b1-171256776322-console-oauth-config\") pod \"17c9d2eb-bc27-40f5-85b1-171256776322\" (UID: \"17c9d2eb-bc27-40f5-85b1-171256776322\") "
Mar 13 12:59:42.815622 master-0 kubenswrapper[28149]: I0313 12:59:42.815545 28149 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5wdl6\" (UniqueName: \"kubernetes.io/projected/17c9d2eb-bc27-40f5-85b1-171256776322-kube-api-access-5wdl6\") pod \"17c9d2eb-bc27-40f5-85b1-171256776322\" (UID: \"17c9d2eb-bc27-40f5-85b1-171256776322\") "
Mar 13 12:59:42.815622 master-0 kubenswrapper[28149]: I0313 12:59:42.815571 28149 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/17c9d2eb-bc27-40f5-85b1-171256776322-oauth-serving-cert\") pod \"17c9d2eb-bc27-40f5-85b1-171256776322\" (UID: \"17c9d2eb-bc27-40f5-85b1-171256776322\") "
Mar 13 12:59:42.815622 master-0 kubenswrapper[28149]: I0313 12:59:42.815586 28149 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/17c9d2eb-bc27-40f5-85b1-171256776322-console-serving-cert\") pod \"17c9d2eb-bc27-40f5-85b1-171256776322\" (UID: \"17c9d2eb-bc27-40f5-85b1-171256776322\") "
Mar 13 12:59:42.816426 master-0 kubenswrapper[28149]: I0313 12:59:42.816363 28149 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/17c9d2eb-bc27-40f5-85b1-171256776322-console-config" (OuterVolumeSpecName: "console-config") pod "17c9d2eb-bc27-40f5-85b1-171256776322" (UID: "17c9d2eb-bc27-40f5-85b1-171256776322"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 13 12:59:42.816507 master-0 kubenswrapper[28149]: I0313 12:59:42.816453 28149 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/17c9d2eb-bc27-40f5-85b1-171256776322-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "17c9d2eb-bc27-40f5-85b1-171256776322" (UID: "17c9d2eb-bc27-40f5-85b1-171256776322"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 13 12:59:42.816507 master-0 kubenswrapper[28149]: I0313 12:59:42.816476 28149 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/17c9d2eb-bc27-40f5-85b1-171256776322-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "17c9d2eb-bc27-40f5-85b1-171256776322" (UID: "17c9d2eb-bc27-40f5-85b1-171256776322"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 13 12:59:42.817193 master-0 kubenswrapper[28149]: I0313 12:59:42.817152 28149 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/17c9d2eb-bc27-40f5-85b1-171256776322-service-ca" (OuterVolumeSpecName: "service-ca") pod "17c9d2eb-bc27-40f5-85b1-171256776322" (UID: "17c9d2eb-bc27-40f5-85b1-171256776322"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 13 12:59:42.818781 master-0 kubenswrapper[28149]: I0313 12:59:42.818736 28149 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/17c9d2eb-bc27-40f5-85b1-171256776322-kube-api-access-5wdl6" (OuterVolumeSpecName: "kube-api-access-5wdl6") pod "17c9d2eb-bc27-40f5-85b1-171256776322" (UID: "17c9d2eb-bc27-40f5-85b1-171256776322"). InnerVolumeSpecName "kube-api-access-5wdl6". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 13 12:59:42.819299 master-0 kubenswrapper[28149]: I0313 12:59:42.819176 28149 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/17c9d2eb-bc27-40f5-85b1-171256776322-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "17c9d2eb-bc27-40f5-85b1-171256776322" (UID: "17c9d2eb-bc27-40f5-85b1-171256776322"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 13 12:59:42.833543 master-0 kubenswrapper[28149]: I0313 12:59:42.833493 28149 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/17c9d2eb-bc27-40f5-85b1-171256776322-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "17c9d2eb-bc27-40f5-85b1-171256776322" (UID: "17c9d2eb-bc27-40f5-85b1-171256776322"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 13 12:59:42.917799 master-0 kubenswrapper[28149]: I0313 12:59:42.917686 28149 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/17c9d2eb-bc27-40f5-85b1-171256776322-service-ca\") on node \"master-0\" DevicePath \"\""
Mar 13 12:59:42.917799 master-0 kubenswrapper[28149]: I0313 12:59:42.917727 28149 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/17c9d2eb-bc27-40f5-85b1-171256776322-console-config\") on node \"master-0\" DevicePath \"\""
Mar 13 12:59:42.917799 master-0 kubenswrapper[28149]: I0313 12:59:42.917738 28149 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/17c9d2eb-bc27-40f5-85b1-171256776322-trusted-ca-bundle\") on node \"master-0\" DevicePath \"\""
Mar 13 12:59:42.917799 master-0 kubenswrapper[28149]: I0313 12:59:42.917749 28149 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/17c9d2eb-bc27-40f5-85b1-171256776322-console-oauth-config\") on node \"master-0\" DevicePath \"\""
Mar 13 12:59:42.917799 master-0 kubenswrapper[28149]: I0313 12:59:42.917758 28149 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5wdl6\" (UniqueName: \"kubernetes.io/projected/17c9d2eb-bc27-40f5-85b1-171256776322-kube-api-access-5wdl6\") on node \"master-0\" DevicePath \"\""
Mar 13 12:59:42.917799 master-0 kubenswrapper[28149]: I0313 12:59:42.917766 28149 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/17c9d2eb-bc27-40f5-85b1-171256776322-oauth-serving-cert\") on node \"master-0\" DevicePath \"\""
Mar 13 12:59:42.917799 master-0 kubenswrapper[28149]: I0313 12:59:42.917775 28149 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/17c9d2eb-bc27-40f5-85b1-171256776322-console-serving-cert\") on node \"master-0\" DevicePath \"\""
Mar 13 12:59:43.621965 master-0 kubenswrapper[28149]: I0313 12:59:43.621864 28149 patch_prober.go:28] interesting pod/console-f74cdccbf-t88kk container/console namespace/openshift-console: Readiness probe status=failure output="Get \"https://10.128.0.97:8443/health\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Mar 13 12:59:43.622349 master-0 kubenswrapper[28149]: I0313 12:59:43.622018 28149 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/console-f74cdccbf-t88kk" podUID="17c9d2eb-bc27-40f5-85b1-171256776322" containerName="console" probeResult="failure" output="Get \"https://10.128.0.97:8443/health\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Mar 13 12:59:43.646594 master-0 kubenswrapper[28149]: I0313 12:59:43.646544 28149 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f74cdccbf-t88kk_17c9d2eb-bc27-40f5-85b1-171256776322/console/0.log"
Mar 13 12:59:43.646829 master-0 kubenswrapper[28149]: I0313 12:59:43.646640 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f74cdccbf-t88kk" event={"ID":"17c9d2eb-bc27-40f5-85b1-171256776322","Type":"ContainerDied","Data":"d8fb8b02eb6d9db0a21e4330a1e01bee3ea983e054f7f8f14f4cd2246785f26a"}
Mar 13 12:59:43.646829 master-0 kubenswrapper[28149]: I0313 12:59:43.646754 28149 scope.go:117] "RemoveContainer" containerID="a365c2570eec0d1efd2f95df921ea91a62e8a7bec82ee722c2c420e4e4f9a961"
Mar 13 12:59:43.646829 master-0 kubenswrapper[28149]: I0313 12:59:43.646773 28149 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f74cdccbf-t88kk"
Mar 13 12:59:43.698401 master-0 kubenswrapper[28149]: I0313 12:59:43.698305 28149 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f74cdccbf-t88kk"]
Mar 13 12:59:43.707629 master-0 kubenswrapper[28149]: I0313 12:59:43.707553 28149 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-f74cdccbf-t88kk"]
Mar 13 12:59:44.695506 master-0 kubenswrapper[28149]: I0313 12:59:44.695453 28149 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="17c9d2eb-bc27-40f5-85b1-171256776322" path="/var/lib/kubelet/pods/17c9d2eb-bc27-40f5-85b1-171256776322/volumes"
Mar 13 13:00:06.676004 master-0 kubenswrapper[28149]: I0313 13:00:06.675852 28149 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-monitoring/prometheus-k8s-0"
Mar 13 13:00:06.717659 master-0 kubenswrapper[28149]: I0313 13:00:06.717584 28149 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-monitoring/prometheus-k8s-0"
Mar 13 13:00:06.892493 master-0 kubenswrapper[28149]: I0313 13:00:06.892432 28149 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-monitoring/prometheus-k8s-0"
Mar 13 13:00:12.188653 master-0 kubenswrapper[28149]: I0313 13:00:12.188598 28149 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-controller-manager/kube-controller-manager-master-0"]
Mar 13 13:00:12.190013 master-0 kubenswrapper[28149]: I0313 13:00:12.189976 28149 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="9b24fda1c2e55a08607764d7b9b24355" containerName="cluster-policy-controller" containerID="cri-o://ce8fbd2d677d2b615a5a88b0a30db1875f87de60b024e842112e21ebdf54651c" gracePeriod=30
Mar 13 13:00:12.190220 master-0 kubenswrapper[28149]: I0313 13:00:12.190113 28149 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="9b24fda1c2e55a08607764d7b9b24355" containerName="kube-controller-manager-recovery-controller" containerID="cri-o://552eeff527d5a35e104a39621a65fa2da7a1df380403a303c48d0f6f3bca4451" gracePeriod=30
Mar 13 13:00:12.190413 master-0 kubenswrapper[28149]: I0313 13:00:12.190241 28149 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="9b24fda1c2e55a08607764d7b9b24355" containerName="kube-controller-manager-cert-syncer" containerID="cri-o://5225f7faf919f4bc9952279b8c17b48fc7fc5f38f60abb397f40ed2bc6a9712f" gracePeriod=30
Mar 13 13:00:12.190413 master-0 kubenswrapper[28149]: I0313 13:00:12.190128 28149 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="9b24fda1c2e55a08607764d7b9b24355" containerName="kube-controller-manager" containerID="cri-o://cfdc65ea2c7ff083c70b5cbafce015ebaef2bec4eb6f5e6ef09dd3d02a87e5b4" gracePeriod=30
Mar 13 13:00:12.192250 master-0 kubenswrapper[28149]: I0313 13:00:12.192227 28149 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-controller-manager/kube-controller-manager-master-0"]
Mar 13 13:00:12.192708 master-0 kubenswrapper[28149]: E0313 13:00:12.192688 28149 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9b24fda1c2e55a08607764d7b9b24355" containerName="kube-controller-manager"
Mar 13 13:00:12.192898 master-0 kubenswrapper[28149]: I0313 13:00:12.192882 28149 state_mem.go:107] "Deleted CPUSet assignment" podUID="9b24fda1c2e55a08607764d7b9b24355" containerName="kube-controller-manager"
Mar 13 13:00:12.192993 master-0 kubenswrapper[28149]: E0313 13:00:12.192979 28149 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="17c9d2eb-bc27-40f5-85b1-171256776322" containerName="console"
Mar 13 13:00:12.193111 master-0 kubenswrapper[28149]: I0313 13:00:12.193098 28149 state_mem.go:107] "Deleted CPUSet assignment" podUID="17c9d2eb-bc27-40f5-85b1-171256776322" containerName="console"
Mar 13 13:00:12.193262 master-0 kubenswrapper[28149]: E0313 13:00:12.193247 28149 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9b24fda1c2e55a08607764d7b9b24355" containerName="kube-controller-manager-cert-syncer"
Mar 13 13:00:12.193345 master-0 kubenswrapper[28149]: I0313 13:00:12.193333 28149 state_mem.go:107] "Deleted CPUSet assignment" podUID="9b24fda1c2e55a08607764d7b9b24355" containerName="kube-controller-manager-cert-syncer"
Mar 13 13:00:12.193448 master-0 kubenswrapper[28149]: E0313 13:00:12.193435 28149 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9b24fda1c2e55a08607764d7b9b24355" containerName="cluster-policy-controller"
Mar 13 13:00:12.193529 master-0 kubenswrapper[28149]: I0313 13:00:12.193514 28149 state_mem.go:107] "Deleted CPUSet assignment" podUID="9b24fda1c2e55a08607764d7b9b24355" containerName="cluster-policy-controller"
Mar 13 13:00:12.193610 master-0 kubenswrapper[28149]: E0313 13:00:12.193597 28149 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9b24fda1c2e55a08607764d7b9b24355" containerName="kube-controller-manager-recovery-controller"
Mar 13 13:00:12.193704 master-0 kubenswrapper[28149]: I0313 13:00:12.193691 28149 state_mem.go:107] "Deleted CPUSet assignment" podUID="9b24fda1c2e55a08607764d7b9b24355" containerName="kube-controller-manager-recovery-controller"
Mar 13 13:00:12.193969 master-0 kubenswrapper[28149]: I0313 13:00:12.193951 28149 memory_manager.go:354] "RemoveStaleState removing state" podUID="9b24fda1c2e55a08607764d7b9b24355" containerName="kube-controller-manager"
Mar 13 13:00:12.194125 master-0 kubenswrapper[28149]: I0313 13:00:12.194058 28149 memory_manager.go:354] "RemoveStaleState removing state" podUID="9b24fda1c2e55a08607764d7b9b24355" containerName="kube-controller-manager-recovery-controller"
Mar 13 13:00:12.194235 master-0 kubenswrapper[28149]: I0313 13:00:12.194221 28149 memory_manager.go:354] "RemoveStaleState removing state" podUID="9b24fda1c2e55a08607764d7b9b24355" containerName="cluster-policy-controller"
Mar 13 13:00:12.194327 master-0 kubenswrapper[28149]: I0313 13:00:12.194315 28149 memory_manager.go:354] "RemoveStaleState removing state" podUID="17c9d2eb-bc27-40f5-85b1-171256776322" containerName="console"
Mar 13 13:00:12.194419 master-0 kubenswrapper[28149]: I0313 13:00:12.194406 28149 memory_manager.go:354] "RemoveStaleState removing state" podUID="9b24fda1c2e55a08607764d7b9b24355" containerName="kube-controller-manager-cert-syncer"
Mar 13 13:00:12.194775 master-0 kubenswrapper[28149]: E0313 13:00:12.194758 28149 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9b24fda1c2e55a08607764d7b9b24355" containerName="kube-controller-manager"
Mar 13 13:00:12.194864 master-0 kubenswrapper[28149]: I0313 13:00:12.194851 28149 state_mem.go:107] "Deleted CPUSet assignment" podUID="9b24fda1c2e55a08607764d7b9b24355" containerName="kube-controller-manager"
Mar 13 13:00:12.195104 master-0 kubenswrapper[28149]: I0313 13:00:12.195088 28149 memory_manager.go:354] "RemoveStaleState removing state" podUID="9b24fda1c2e55a08607764d7b9b24355" containerName="kube-controller-manager"
Mar 13 13:00:12.338191 master-0 kubenswrapper[28149]: I0313 13:00:12.338110 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f5bb76e91dda25eb09e825c314a1cb06-cert-dir\") pod \"kube-controller-manager-master-0\" (UID: \"f5bb76e91dda25eb09e825c314a1cb06\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Mar 13 13:00:12.338301 master-0 kubenswrapper[28149]: I0313 13:00:12.338232 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f5bb76e91dda25eb09e825c314a1cb06-resource-dir\") pod \"kube-controller-manager-master-0\" (UID: \"f5bb76e91dda25eb09e825c314a1cb06\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Mar 13 13:00:12.440292 master-0 kubenswrapper[28149]: I0313 13:00:12.440158 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f5bb76e91dda25eb09e825c314a1cb06-resource-dir\") pod \"kube-controller-manager-master-0\" (UID: \"f5bb76e91dda25eb09e825c314a1cb06\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Mar 13 13:00:12.440482 master-0 kubenswrapper[28149]: I0313 13:00:12.440433 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f5bb76e91dda25eb09e825c314a1cb06-cert-dir\") pod \"kube-controller-manager-master-0\" (UID: \"f5bb76e91dda25eb09e825c314a1cb06\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Mar 13 13:00:12.440594 master-0 kubenswrapper[28149]: I0313 13:00:12.440536 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f5bb76e91dda25eb09e825c314a1cb06-cert-dir\") pod \"kube-controller-manager-master-0\" (UID: \"f5bb76e91dda25eb09e825c314a1cb06\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Mar 13 13:00:12.440665 master-0 kubenswrapper[28149]: I0313 13:00:12.440423 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f5bb76e91dda25eb09e825c314a1cb06-resource-dir\") pod \"kube-controller-manager-master-0\" (UID: \"f5bb76e91dda25eb09e825c314a1cb06\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Mar 13 13:00:12.481299 master-0 kubenswrapper[28149]: I0313 13:00:12.481237 28149 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_9b24fda1c2e55a08607764d7b9b24355/kube-controller-manager-cert-syncer/0.log"
Mar 13 13:00:12.482433 master-0 kubenswrapper[28149]: I0313 13:00:12.482401 28149 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_9b24fda1c2e55a08607764d7b9b24355/kube-controller-manager/0.log"
Mar 13 13:00:12.482532 master-0 kubenswrapper[28149]: I0313 13:00:12.482481 28149 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Mar 13 13:00:12.485784 master-0 kubenswrapper[28149]: I0313 13:00:12.485716 28149 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" oldPodUID="9b24fda1c2e55a08607764d7b9b24355" podUID="f5bb76e91dda25eb09e825c314a1cb06"
Mar 13 13:00:12.541775 master-0 kubenswrapper[28149]: I0313 13:00:12.541706 28149 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/9b24fda1c2e55a08607764d7b9b24355-resource-dir\") pod \"9b24fda1c2e55a08607764d7b9b24355\" (UID: \"9b24fda1c2e55a08607764d7b9b24355\") "
Mar 13 13:00:12.542013 master-0 kubenswrapper[28149]: I0313 13:00:12.541807 28149 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/9b24fda1c2e55a08607764d7b9b24355-cert-dir\") pod \"9b24fda1c2e55a08607764d7b9b24355\" (UID: \"9b24fda1c2e55a08607764d7b9b24355\") "
Mar 13 13:00:12.542013 master-0 kubenswrapper[28149]: I0313 13:00:12.541863 28149 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9b24fda1c2e55a08607764d7b9b24355-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "9b24fda1c2e55a08607764d7b9b24355" (UID: "9b24fda1c2e55a08607764d7b9b24355"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 13 13:00:12.542013 master-0 kubenswrapper[28149]: I0313 13:00:12.541923 28149 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9b24fda1c2e55a08607764d7b9b24355-cert-dir" (OuterVolumeSpecName: "cert-dir") pod "9b24fda1c2e55a08607764d7b9b24355" (UID: "9b24fda1c2e55a08607764d7b9b24355"). InnerVolumeSpecName "cert-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 13 13:00:12.542355 master-0 kubenswrapper[28149]: I0313 13:00:12.542324 28149 reconciler_common.go:293] "Volume detached for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/9b24fda1c2e55a08607764d7b9b24355-cert-dir\") on node \"master-0\" DevicePath \"\""
Mar 13 13:00:12.542355 master-0 kubenswrapper[28149]: I0313 13:00:12.542348 28149 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/9b24fda1c2e55a08607764d7b9b24355-resource-dir\") on node \"master-0\" DevicePath \"\""
Mar 13 13:00:12.698123 master-0 kubenswrapper[28149]: I0313 13:00:12.698024 28149 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9b24fda1c2e55a08607764d7b9b24355" path="/var/lib/kubelet/pods/9b24fda1c2e55a08607764d7b9b24355/volumes"
Mar 13 13:00:12.905819 master-0 kubenswrapper[28149]: I0313 13:00:12.905786 28149 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_9b24fda1c2e55a08607764d7b9b24355/kube-controller-manager-cert-syncer/0.log"
Mar 13 13:00:12.907056 master-0 kubenswrapper[28149]: I0313 13:00:12.907038 28149 log.go:25] "Finished parsing log file"
path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_9b24fda1c2e55a08607764d7b9b24355/kube-controller-manager/0.log" Mar 13 13:00:12.907235 master-0 kubenswrapper[28149]: I0313 13:00:12.907209 28149 generic.go:334] "Generic (PLEG): container finished" podID="9b24fda1c2e55a08607764d7b9b24355" containerID="cfdc65ea2c7ff083c70b5cbafce015ebaef2bec4eb6f5e6ef09dd3d02a87e5b4" exitCode=0 Mar 13 13:00:12.907336 master-0 kubenswrapper[28149]: I0313 13:00:12.907323 28149 generic.go:334] "Generic (PLEG): container finished" podID="9b24fda1c2e55a08607764d7b9b24355" containerID="552eeff527d5a35e104a39621a65fa2da7a1df380403a303c48d0f6f3bca4451" exitCode=0 Mar 13 13:00:12.907410 master-0 kubenswrapper[28149]: I0313 13:00:12.907399 28149 generic.go:334] "Generic (PLEG): container finished" podID="9b24fda1c2e55a08607764d7b9b24355" containerID="5225f7faf919f4bc9952279b8c17b48fc7fc5f38f60abb397f40ed2bc6a9712f" exitCode=2 Mar 13 13:00:12.907494 master-0 kubenswrapper[28149]: I0313 13:00:12.907480 28149 generic.go:334] "Generic (PLEG): container finished" podID="9b24fda1c2e55a08607764d7b9b24355" containerID="ce8fbd2d677d2b615a5a88b0a30db1875f87de60b024e842112e21ebdf54651c" exitCode=0 Mar 13 13:00:12.907609 master-0 kubenswrapper[28149]: I0313 13:00:12.907597 28149 scope.go:117] "RemoveContainer" containerID="cfdc65ea2c7ff083c70b5cbafce015ebaef2bec4eb6f5e6ef09dd3d02a87e5b4" Mar 13 13:00:12.907790 master-0 kubenswrapper[28149]: I0313 13:00:12.907777 28149 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 13 13:00:12.912042 master-0 kubenswrapper[28149]: I0313 13:00:12.911192 28149 generic.go:334] "Generic (PLEG): container finished" podID="22335d40-8638-4d1a-81eb-821362a6ae89" containerID="affc812f4ac3dc1580628d526dec1ae0779e12f50055f9303ec9547d783c16d3" exitCode=0 Mar 13 13:00:12.912042 master-0 kubenswrapper[28149]: I0313 13:00:12.911238 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-4-master-0" event={"ID":"22335d40-8638-4d1a-81eb-821362a6ae89","Type":"ContainerDied","Data":"affc812f4ac3dc1580628d526dec1ae0779e12f50055f9303ec9547d783c16d3"} Mar 13 13:00:12.912042 master-0 kubenswrapper[28149]: I0313 13:00:12.911939 28149 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" oldPodUID="9b24fda1c2e55a08607764d7b9b24355" podUID="f5bb76e91dda25eb09e825c314a1cb06" Mar 13 13:00:12.938581 master-0 kubenswrapper[28149]: I0313 13:00:12.935523 28149 scope.go:117] "RemoveContainer" containerID="552eeff527d5a35e104a39621a65fa2da7a1df380403a303c48d0f6f3bca4451" Mar 13 13:00:12.938581 master-0 kubenswrapper[28149]: I0313 13:00:12.937016 28149 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" oldPodUID="9b24fda1c2e55a08607764d7b9b24355" podUID="f5bb76e91dda25eb09e825c314a1cb06" Mar 13 13:00:12.956917 master-0 kubenswrapper[28149]: I0313 13:00:12.956885 28149 scope.go:117] "RemoveContainer" containerID="5225f7faf919f4bc9952279b8c17b48fc7fc5f38f60abb397f40ed2bc6a9712f" Mar 13 13:00:12.971473 master-0 kubenswrapper[28149]: I0313 13:00:12.971453 28149 scope.go:117] "RemoveContainer" containerID="ce8fbd2d677d2b615a5a88b0a30db1875f87de60b024e842112e21ebdf54651c" Mar 13 13:00:12.987129 master-0 kubenswrapper[28149]: 
I0313 13:00:12.987085 28149 scope.go:117] "RemoveContainer" containerID="9d8c499b649c8b47f8ee879f85d758879da02816a8ef90cde6964dab92a4ae11" Mar 13 13:00:13.005815 master-0 kubenswrapper[28149]: I0313 13:00:13.005773 28149 scope.go:117] "RemoveContainer" containerID="cfdc65ea2c7ff083c70b5cbafce015ebaef2bec4eb6f5e6ef09dd3d02a87e5b4" Mar 13 13:00:13.006321 master-0 kubenswrapper[28149]: E0313 13:00:13.006291 28149 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cfdc65ea2c7ff083c70b5cbafce015ebaef2bec4eb6f5e6ef09dd3d02a87e5b4\": container with ID starting with cfdc65ea2c7ff083c70b5cbafce015ebaef2bec4eb6f5e6ef09dd3d02a87e5b4 not found: ID does not exist" containerID="cfdc65ea2c7ff083c70b5cbafce015ebaef2bec4eb6f5e6ef09dd3d02a87e5b4" Mar 13 13:00:13.006470 master-0 kubenswrapper[28149]: I0313 13:00:13.006432 28149 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cfdc65ea2c7ff083c70b5cbafce015ebaef2bec4eb6f5e6ef09dd3d02a87e5b4"} err="failed to get container status \"cfdc65ea2c7ff083c70b5cbafce015ebaef2bec4eb6f5e6ef09dd3d02a87e5b4\": rpc error: code = NotFound desc = could not find container \"cfdc65ea2c7ff083c70b5cbafce015ebaef2bec4eb6f5e6ef09dd3d02a87e5b4\": container with ID starting with cfdc65ea2c7ff083c70b5cbafce015ebaef2bec4eb6f5e6ef09dd3d02a87e5b4 not found: ID does not exist" Mar 13 13:00:13.006549 master-0 kubenswrapper[28149]: I0313 13:00:13.006538 28149 scope.go:117] "RemoveContainer" containerID="552eeff527d5a35e104a39621a65fa2da7a1df380403a303c48d0f6f3bca4451" Mar 13 13:00:13.007231 master-0 kubenswrapper[28149]: E0313 13:00:13.007207 28149 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"552eeff527d5a35e104a39621a65fa2da7a1df380403a303c48d0f6f3bca4451\": container with ID starting with 552eeff527d5a35e104a39621a65fa2da7a1df380403a303c48d0f6f3bca4451 not found: ID does 
not exist" containerID="552eeff527d5a35e104a39621a65fa2da7a1df380403a303c48d0f6f3bca4451" Mar 13 13:00:13.007300 master-0 kubenswrapper[28149]: I0313 13:00:13.007237 28149 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"552eeff527d5a35e104a39621a65fa2da7a1df380403a303c48d0f6f3bca4451"} err="failed to get container status \"552eeff527d5a35e104a39621a65fa2da7a1df380403a303c48d0f6f3bca4451\": rpc error: code = NotFound desc = could not find container \"552eeff527d5a35e104a39621a65fa2da7a1df380403a303c48d0f6f3bca4451\": container with ID starting with 552eeff527d5a35e104a39621a65fa2da7a1df380403a303c48d0f6f3bca4451 not found: ID does not exist" Mar 13 13:00:13.007300 master-0 kubenswrapper[28149]: I0313 13:00:13.007263 28149 scope.go:117] "RemoveContainer" containerID="5225f7faf919f4bc9952279b8c17b48fc7fc5f38f60abb397f40ed2bc6a9712f" Mar 13 13:00:13.007517 master-0 kubenswrapper[28149]: E0313 13:00:13.007500 28149 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5225f7faf919f4bc9952279b8c17b48fc7fc5f38f60abb397f40ed2bc6a9712f\": container with ID starting with 5225f7faf919f4bc9952279b8c17b48fc7fc5f38f60abb397f40ed2bc6a9712f not found: ID does not exist" containerID="5225f7faf919f4bc9952279b8c17b48fc7fc5f38f60abb397f40ed2bc6a9712f" Mar 13 13:00:13.007624 master-0 kubenswrapper[28149]: I0313 13:00:13.007606 28149 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5225f7faf919f4bc9952279b8c17b48fc7fc5f38f60abb397f40ed2bc6a9712f"} err="failed to get container status \"5225f7faf919f4bc9952279b8c17b48fc7fc5f38f60abb397f40ed2bc6a9712f\": rpc error: code = NotFound desc = could not find container \"5225f7faf919f4bc9952279b8c17b48fc7fc5f38f60abb397f40ed2bc6a9712f\": container with ID starting with 5225f7faf919f4bc9952279b8c17b48fc7fc5f38f60abb397f40ed2bc6a9712f not found: ID does not exist" Mar 13 13:00:13.007706 
master-0 kubenswrapper[28149]: I0313 13:00:13.007694 28149 scope.go:117] "RemoveContainer" containerID="ce8fbd2d677d2b615a5a88b0a30db1875f87de60b024e842112e21ebdf54651c" Mar 13 13:00:13.008072 master-0 kubenswrapper[28149]: E0313 13:00:13.008050 28149 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ce8fbd2d677d2b615a5a88b0a30db1875f87de60b024e842112e21ebdf54651c\": container with ID starting with ce8fbd2d677d2b615a5a88b0a30db1875f87de60b024e842112e21ebdf54651c not found: ID does not exist" containerID="ce8fbd2d677d2b615a5a88b0a30db1875f87de60b024e842112e21ebdf54651c" Mar 13 13:00:13.008180 master-0 kubenswrapper[28149]: I0313 13:00:13.008162 28149 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ce8fbd2d677d2b615a5a88b0a30db1875f87de60b024e842112e21ebdf54651c"} err="failed to get container status \"ce8fbd2d677d2b615a5a88b0a30db1875f87de60b024e842112e21ebdf54651c\": rpc error: code = NotFound desc = could not find container \"ce8fbd2d677d2b615a5a88b0a30db1875f87de60b024e842112e21ebdf54651c\": container with ID starting with ce8fbd2d677d2b615a5a88b0a30db1875f87de60b024e842112e21ebdf54651c not found: ID does not exist" Mar 13 13:00:13.008255 master-0 kubenswrapper[28149]: I0313 13:00:13.008241 28149 scope.go:117] "RemoveContainer" containerID="9d8c499b649c8b47f8ee879f85d758879da02816a8ef90cde6964dab92a4ae11" Mar 13 13:00:13.008517 master-0 kubenswrapper[28149]: E0313 13:00:13.008502 28149 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9d8c499b649c8b47f8ee879f85d758879da02816a8ef90cde6964dab92a4ae11\": container with ID starting with 9d8c499b649c8b47f8ee879f85d758879da02816a8ef90cde6964dab92a4ae11 not found: ID does not exist" containerID="9d8c499b649c8b47f8ee879f85d758879da02816a8ef90cde6964dab92a4ae11" Mar 13 13:00:13.008610 master-0 kubenswrapper[28149]: I0313 
13:00:13.008593 28149 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9d8c499b649c8b47f8ee879f85d758879da02816a8ef90cde6964dab92a4ae11"} err="failed to get container status \"9d8c499b649c8b47f8ee879f85d758879da02816a8ef90cde6964dab92a4ae11\": rpc error: code = NotFound desc = could not find container \"9d8c499b649c8b47f8ee879f85d758879da02816a8ef90cde6964dab92a4ae11\": container with ID starting with 9d8c499b649c8b47f8ee879f85d758879da02816a8ef90cde6964dab92a4ae11 not found: ID does not exist" Mar 13 13:00:13.008684 master-0 kubenswrapper[28149]: I0313 13:00:13.008668 28149 scope.go:117] "RemoveContainer" containerID="cfdc65ea2c7ff083c70b5cbafce015ebaef2bec4eb6f5e6ef09dd3d02a87e5b4" Mar 13 13:00:13.008996 master-0 kubenswrapper[28149]: I0313 13:00:13.008978 28149 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cfdc65ea2c7ff083c70b5cbafce015ebaef2bec4eb6f5e6ef09dd3d02a87e5b4"} err="failed to get container status \"cfdc65ea2c7ff083c70b5cbafce015ebaef2bec4eb6f5e6ef09dd3d02a87e5b4\": rpc error: code = NotFound desc = could not find container \"cfdc65ea2c7ff083c70b5cbafce015ebaef2bec4eb6f5e6ef09dd3d02a87e5b4\": container with ID starting with cfdc65ea2c7ff083c70b5cbafce015ebaef2bec4eb6f5e6ef09dd3d02a87e5b4 not found: ID does not exist" Mar 13 13:00:13.009109 master-0 kubenswrapper[28149]: I0313 13:00:13.009093 28149 scope.go:117] "RemoveContainer" containerID="552eeff527d5a35e104a39621a65fa2da7a1df380403a303c48d0f6f3bca4451" Mar 13 13:00:13.009480 master-0 kubenswrapper[28149]: I0313 13:00:13.009396 28149 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"552eeff527d5a35e104a39621a65fa2da7a1df380403a303c48d0f6f3bca4451"} err="failed to get container status \"552eeff527d5a35e104a39621a65fa2da7a1df380403a303c48d0f6f3bca4451\": rpc error: code = NotFound desc = could not find container 
\"552eeff527d5a35e104a39621a65fa2da7a1df380403a303c48d0f6f3bca4451\": container with ID starting with 552eeff527d5a35e104a39621a65fa2da7a1df380403a303c48d0f6f3bca4451 not found: ID does not exist" Mar 13 13:00:13.009561 master-0 kubenswrapper[28149]: I0313 13:00:13.009549 28149 scope.go:117] "RemoveContainer" containerID="5225f7faf919f4bc9952279b8c17b48fc7fc5f38f60abb397f40ed2bc6a9712f" Mar 13 13:00:13.009821 master-0 kubenswrapper[28149]: I0313 13:00:13.009803 28149 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5225f7faf919f4bc9952279b8c17b48fc7fc5f38f60abb397f40ed2bc6a9712f"} err="failed to get container status \"5225f7faf919f4bc9952279b8c17b48fc7fc5f38f60abb397f40ed2bc6a9712f\": rpc error: code = NotFound desc = could not find container \"5225f7faf919f4bc9952279b8c17b48fc7fc5f38f60abb397f40ed2bc6a9712f\": container with ID starting with 5225f7faf919f4bc9952279b8c17b48fc7fc5f38f60abb397f40ed2bc6a9712f not found: ID does not exist" Mar 13 13:00:13.009900 master-0 kubenswrapper[28149]: I0313 13:00:13.009889 28149 scope.go:117] "RemoveContainer" containerID="ce8fbd2d677d2b615a5a88b0a30db1875f87de60b024e842112e21ebdf54651c" Mar 13 13:00:13.010329 master-0 kubenswrapper[28149]: I0313 13:00:13.010270 28149 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ce8fbd2d677d2b615a5a88b0a30db1875f87de60b024e842112e21ebdf54651c"} err="failed to get container status \"ce8fbd2d677d2b615a5a88b0a30db1875f87de60b024e842112e21ebdf54651c\": rpc error: code = NotFound desc = could not find container \"ce8fbd2d677d2b615a5a88b0a30db1875f87de60b024e842112e21ebdf54651c\": container with ID starting with ce8fbd2d677d2b615a5a88b0a30db1875f87de60b024e842112e21ebdf54651c not found: ID does not exist" Mar 13 13:00:13.010394 master-0 kubenswrapper[28149]: I0313 13:00:13.010334 28149 scope.go:117] "RemoveContainer" containerID="9d8c499b649c8b47f8ee879f85d758879da02816a8ef90cde6964dab92a4ae11" Mar 13 
13:00:13.010643 master-0 kubenswrapper[28149]: I0313 13:00:13.010621 28149 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9d8c499b649c8b47f8ee879f85d758879da02816a8ef90cde6964dab92a4ae11"} err="failed to get container status \"9d8c499b649c8b47f8ee879f85d758879da02816a8ef90cde6964dab92a4ae11\": rpc error: code = NotFound desc = could not find container \"9d8c499b649c8b47f8ee879f85d758879da02816a8ef90cde6964dab92a4ae11\": container with ID starting with 9d8c499b649c8b47f8ee879f85d758879da02816a8ef90cde6964dab92a4ae11 not found: ID does not exist" Mar 13 13:00:13.010643 master-0 kubenswrapper[28149]: I0313 13:00:13.010642 28149 scope.go:117] "RemoveContainer" containerID="cfdc65ea2c7ff083c70b5cbafce015ebaef2bec4eb6f5e6ef09dd3d02a87e5b4" Mar 13 13:00:13.010901 master-0 kubenswrapper[28149]: I0313 13:00:13.010878 28149 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cfdc65ea2c7ff083c70b5cbafce015ebaef2bec4eb6f5e6ef09dd3d02a87e5b4"} err="failed to get container status \"cfdc65ea2c7ff083c70b5cbafce015ebaef2bec4eb6f5e6ef09dd3d02a87e5b4\": rpc error: code = NotFound desc = could not find container \"cfdc65ea2c7ff083c70b5cbafce015ebaef2bec4eb6f5e6ef09dd3d02a87e5b4\": container with ID starting with cfdc65ea2c7ff083c70b5cbafce015ebaef2bec4eb6f5e6ef09dd3d02a87e5b4 not found: ID does not exist" Mar 13 13:00:13.010954 master-0 kubenswrapper[28149]: I0313 13:00:13.010900 28149 scope.go:117] "RemoveContainer" containerID="552eeff527d5a35e104a39621a65fa2da7a1df380403a303c48d0f6f3bca4451" Mar 13 13:00:13.011289 master-0 kubenswrapper[28149]: I0313 13:00:13.011267 28149 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"552eeff527d5a35e104a39621a65fa2da7a1df380403a303c48d0f6f3bca4451"} err="failed to get container status \"552eeff527d5a35e104a39621a65fa2da7a1df380403a303c48d0f6f3bca4451\": rpc error: code = NotFound desc = could not find 
container \"552eeff527d5a35e104a39621a65fa2da7a1df380403a303c48d0f6f3bca4451\": container with ID starting with 552eeff527d5a35e104a39621a65fa2da7a1df380403a303c48d0f6f3bca4451 not found: ID does not exist" Mar 13 13:00:13.011390 master-0 kubenswrapper[28149]: I0313 13:00:13.011378 28149 scope.go:117] "RemoveContainer" containerID="5225f7faf919f4bc9952279b8c17b48fc7fc5f38f60abb397f40ed2bc6a9712f" Mar 13 13:00:13.011757 master-0 kubenswrapper[28149]: I0313 13:00:13.011717 28149 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5225f7faf919f4bc9952279b8c17b48fc7fc5f38f60abb397f40ed2bc6a9712f"} err="failed to get container status \"5225f7faf919f4bc9952279b8c17b48fc7fc5f38f60abb397f40ed2bc6a9712f\": rpc error: code = NotFound desc = could not find container \"5225f7faf919f4bc9952279b8c17b48fc7fc5f38f60abb397f40ed2bc6a9712f\": container with ID starting with 5225f7faf919f4bc9952279b8c17b48fc7fc5f38f60abb397f40ed2bc6a9712f not found: ID does not exist" Mar 13 13:00:13.011757 master-0 kubenswrapper[28149]: I0313 13:00:13.011755 28149 scope.go:117] "RemoveContainer" containerID="ce8fbd2d677d2b615a5a88b0a30db1875f87de60b024e842112e21ebdf54651c" Mar 13 13:00:13.012006 master-0 kubenswrapper[28149]: I0313 13:00:13.011986 28149 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ce8fbd2d677d2b615a5a88b0a30db1875f87de60b024e842112e21ebdf54651c"} err="failed to get container status \"ce8fbd2d677d2b615a5a88b0a30db1875f87de60b024e842112e21ebdf54651c\": rpc error: code = NotFound desc = could not find container \"ce8fbd2d677d2b615a5a88b0a30db1875f87de60b024e842112e21ebdf54651c\": container with ID starting with ce8fbd2d677d2b615a5a88b0a30db1875f87de60b024e842112e21ebdf54651c not found: ID does not exist" Mar 13 13:00:13.012006 master-0 kubenswrapper[28149]: I0313 13:00:13.012005 28149 scope.go:117] "RemoveContainer" containerID="9d8c499b649c8b47f8ee879f85d758879da02816a8ef90cde6964dab92a4ae11" 
Mar 13 13:00:13.012261 master-0 kubenswrapper[28149]: I0313 13:00:13.012240 28149 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9d8c499b649c8b47f8ee879f85d758879da02816a8ef90cde6964dab92a4ae11"} err="failed to get container status \"9d8c499b649c8b47f8ee879f85d758879da02816a8ef90cde6964dab92a4ae11\": rpc error: code = NotFound desc = could not find container \"9d8c499b649c8b47f8ee879f85d758879da02816a8ef90cde6964dab92a4ae11\": container with ID starting with 9d8c499b649c8b47f8ee879f85d758879da02816a8ef90cde6964dab92a4ae11 not found: ID does not exist" Mar 13 13:00:13.012311 master-0 kubenswrapper[28149]: I0313 13:00:13.012289 28149 scope.go:117] "RemoveContainer" containerID="cfdc65ea2c7ff083c70b5cbafce015ebaef2bec4eb6f5e6ef09dd3d02a87e5b4" Mar 13 13:00:13.012554 master-0 kubenswrapper[28149]: I0313 13:00:13.012533 28149 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cfdc65ea2c7ff083c70b5cbafce015ebaef2bec4eb6f5e6ef09dd3d02a87e5b4"} err="failed to get container status \"cfdc65ea2c7ff083c70b5cbafce015ebaef2bec4eb6f5e6ef09dd3d02a87e5b4\": rpc error: code = NotFound desc = could not find container \"cfdc65ea2c7ff083c70b5cbafce015ebaef2bec4eb6f5e6ef09dd3d02a87e5b4\": container with ID starting with cfdc65ea2c7ff083c70b5cbafce015ebaef2bec4eb6f5e6ef09dd3d02a87e5b4 not found: ID does not exist" Mar 13 13:00:13.012554 master-0 kubenswrapper[28149]: I0313 13:00:13.012552 28149 scope.go:117] "RemoveContainer" containerID="552eeff527d5a35e104a39621a65fa2da7a1df380403a303c48d0f6f3bca4451" Mar 13 13:00:13.012813 master-0 kubenswrapper[28149]: I0313 13:00:13.012793 28149 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"552eeff527d5a35e104a39621a65fa2da7a1df380403a303c48d0f6f3bca4451"} err="failed to get container status \"552eeff527d5a35e104a39621a65fa2da7a1df380403a303c48d0f6f3bca4451\": rpc error: code = NotFound desc = could not find 
container \"552eeff527d5a35e104a39621a65fa2da7a1df380403a303c48d0f6f3bca4451\": container with ID starting with 552eeff527d5a35e104a39621a65fa2da7a1df380403a303c48d0f6f3bca4451 not found: ID does not exist" Mar 13 13:00:13.012813 master-0 kubenswrapper[28149]: I0313 13:00:13.012810 28149 scope.go:117] "RemoveContainer" containerID="5225f7faf919f4bc9952279b8c17b48fc7fc5f38f60abb397f40ed2bc6a9712f" Mar 13 13:00:13.013246 master-0 kubenswrapper[28149]: I0313 13:00:13.013223 28149 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5225f7faf919f4bc9952279b8c17b48fc7fc5f38f60abb397f40ed2bc6a9712f"} err="failed to get container status \"5225f7faf919f4bc9952279b8c17b48fc7fc5f38f60abb397f40ed2bc6a9712f\": rpc error: code = NotFound desc = could not find container \"5225f7faf919f4bc9952279b8c17b48fc7fc5f38f60abb397f40ed2bc6a9712f\": container with ID starting with 5225f7faf919f4bc9952279b8c17b48fc7fc5f38f60abb397f40ed2bc6a9712f not found: ID does not exist" Mar 13 13:00:13.013321 master-0 kubenswrapper[28149]: I0313 13:00:13.013246 28149 scope.go:117] "RemoveContainer" containerID="ce8fbd2d677d2b615a5a88b0a30db1875f87de60b024e842112e21ebdf54651c" Mar 13 13:00:13.013554 master-0 kubenswrapper[28149]: I0313 13:00:13.013534 28149 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ce8fbd2d677d2b615a5a88b0a30db1875f87de60b024e842112e21ebdf54651c"} err="failed to get container status \"ce8fbd2d677d2b615a5a88b0a30db1875f87de60b024e842112e21ebdf54651c\": rpc error: code = NotFound desc = could not find container \"ce8fbd2d677d2b615a5a88b0a30db1875f87de60b024e842112e21ebdf54651c\": container with ID starting with ce8fbd2d677d2b615a5a88b0a30db1875f87de60b024e842112e21ebdf54651c not found: ID does not exist" Mar 13 13:00:13.013629 master-0 kubenswrapper[28149]: I0313 13:00:13.013617 28149 scope.go:117] "RemoveContainer" containerID="9d8c499b649c8b47f8ee879f85d758879da02816a8ef90cde6964dab92a4ae11" 
Mar 13 13:00:13.013959 master-0 kubenswrapper[28149]: I0313 13:00:13.013918 28149 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9d8c499b649c8b47f8ee879f85d758879da02816a8ef90cde6964dab92a4ae11"} err="failed to get container status \"9d8c499b649c8b47f8ee879f85d758879da02816a8ef90cde6964dab92a4ae11\": rpc error: code = NotFound desc = could not find container \"9d8c499b649c8b47f8ee879f85d758879da02816a8ef90cde6964dab92a4ae11\": container with ID starting with 9d8c499b649c8b47f8ee879f85d758879da02816a8ef90cde6964dab92a4ae11 not found: ID does not exist" Mar 13 13:00:14.263073 master-0 kubenswrapper[28149]: I0313 13:00:14.263005 28149 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/installer-4-master-0" Mar 13 13:00:14.369195 master-0 kubenswrapper[28149]: I0313 13:00:14.369125 28149 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/22335d40-8638-4d1a-81eb-821362a6ae89-var-lock\") pod \"22335d40-8638-4d1a-81eb-821362a6ae89\" (UID: \"22335d40-8638-4d1a-81eb-821362a6ae89\") " Mar 13 13:00:14.369524 master-0 kubenswrapper[28149]: I0313 13:00:14.369237 28149 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/22335d40-8638-4d1a-81eb-821362a6ae89-var-lock" (OuterVolumeSpecName: "var-lock") pod "22335d40-8638-4d1a-81eb-821362a6ae89" (UID: "22335d40-8638-4d1a-81eb-821362a6ae89"). InnerVolumeSpecName "var-lock". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 13 13:00:14.369574 master-0 kubenswrapper[28149]: I0313 13:00:14.369502 28149 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/22335d40-8638-4d1a-81eb-821362a6ae89-kube-api-access\") pod \"22335d40-8638-4d1a-81eb-821362a6ae89\" (UID: \"22335d40-8638-4d1a-81eb-821362a6ae89\") " Mar 13 13:00:14.369605 master-0 kubenswrapper[28149]: I0313 13:00:14.369580 28149 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/22335d40-8638-4d1a-81eb-821362a6ae89-kubelet-dir\") pod \"22335d40-8638-4d1a-81eb-821362a6ae89\" (UID: \"22335d40-8638-4d1a-81eb-821362a6ae89\") " Mar 13 13:00:14.369729 master-0 kubenswrapper[28149]: I0313 13:00:14.369701 28149 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/22335d40-8638-4d1a-81eb-821362a6ae89-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "22335d40-8638-4d1a-81eb-821362a6ae89" (UID: "22335d40-8638-4d1a-81eb-821362a6ae89"). InnerVolumeSpecName "kubelet-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 13 13:00:14.370241 master-0 kubenswrapper[28149]: I0313 13:00:14.370209 28149 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/22335d40-8638-4d1a-81eb-821362a6ae89-var-lock\") on node \"master-0\" DevicePath \"\"" Mar 13 13:00:14.370241 master-0 kubenswrapper[28149]: I0313 13:00:14.370236 28149 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/22335d40-8638-4d1a-81eb-821362a6ae89-kubelet-dir\") on node \"master-0\" DevicePath \"\"" Mar 13 13:00:14.372429 master-0 kubenswrapper[28149]: I0313 13:00:14.372397 28149 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/22335d40-8638-4d1a-81eb-821362a6ae89-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "22335d40-8638-4d1a-81eb-821362a6ae89" (UID: "22335d40-8638-4d1a-81eb-821362a6ae89"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 13:00:14.471467 master-0 kubenswrapper[28149]: I0313 13:00:14.471307 28149 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/22335d40-8638-4d1a-81eb-821362a6ae89-kube-api-access\") on node \"master-0\" DevicePath \"\"" Mar 13 13:00:14.930728 master-0 kubenswrapper[28149]: I0313 13:00:14.930360 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-4-master-0" event={"ID":"22335d40-8638-4d1a-81eb-821362a6ae89","Type":"ContainerDied","Data":"d6c478d9992e0f8e96fa658933b033fc6821b33b7997a5ce7f69500ad9635a5d"} Mar 13 13:00:14.930728 master-0 kubenswrapper[28149]: I0313 13:00:14.930412 28149 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d6c478d9992e0f8e96fa658933b033fc6821b33b7997a5ce7f69500ad9635a5d" Mar 13 13:00:14.930728 master-0 kubenswrapper[28149]: I0313 13:00:14.930425 28149 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/installer-4-master-0" Mar 13 13:00:25.686746 master-0 kubenswrapper[28149]: I0313 13:00:25.686630 28149 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 13 13:00:25.704592 master-0 kubenswrapper[28149]: I0313 13:00:25.704530 28149 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="52b1714d-2a8d-407a-994a-8da3fdf3f4f4" Mar 13 13:00:25.704750 master-0 kubenswrapper[28149]: I0313 13:00:25.704607 28149 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="52b1714d-2a8d-407a-994a-8da3fdf3f4f4" Mar 13 13:00:25.717468 master-0 kubenswrapper[28149]: I0313 13:00:25.717412 28149 kubelet.go:1914] "Deleted mirror pod because it is outdated" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 13 13:00:25.726611 master-0 kubenswrapper[28149]: I0313 13:00:25.726547 28149 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-controller-manager/kube-controller-manager-master-0"] Mar 13 13:00:25.733626 master-0 kubenswrapper[28149]: I0313 13:00:25.733569 28149 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-controller-manager/kube-controller-manager-master-0"] Mar 13 13:00:25.736462 master-0 kubenswrapper[28149]: I0313 13:00:25.736419 28149 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 13 13:00:25.741886 master-0 kubenswrapper[28149]: I0313 13:00:25.741836 28149 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/kube-controller-manager-master-0"] Mar 13 13:00:26.126631 master-0 kubenswrapper[28149]: I0313 13:00:26.126594 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"f5bb76e91dda25eb09e825c314a1cb06","Type":"ContainerStarted","Data":"c6adc1c7ef2611ef686efcfa2d71ce5217bfeb9e2c4817bb3b0e4f2f1aa20e73"} Mar 13 13:00:26.126840 master-0 kubenswrapper[28149]: I0313 13:00:26.126824 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"f5bb76e91dda25eb09e825c314a1cb06","Type":"ContainerStarted","Data":"46fb6d1fa0bf23283f8eaf01ae75be6a25d865731ca4aa0bed3211ea51339e76"} Mar 13 13:00:27.134759 master-0 kubenswrapper[28149]: I0313 13:00:27.134687 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"f5bb76e91dda25eb09e825c314a1cb06","Type":"ContainerStarted","Data":"8396dd67a4866bf682f6b16123d9e6972ec9d4e96fb5499818a4a719076bbe2b"} Mar 13 13:00:27.134759 master-0 kubenswrapper[28149]: I0313 13:00:27.134758 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"f5bb76e91dda25eb09e825c314a1cb06","Type":"ContainerStarted","Data":"d1b25b5b8b0a35e04816caaff6762175b261d9f55b7521f52ad155edb9c1a34b"} Mar 13 13:00:27.135407 master-0 kubenswrapper[28149]: I0313 13:00:27.134780 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" 
event={"ID":"f5bb76e91dda25eb09e825c314a1cb06","Type":"ContainerStarted","Data":"c28bf136234216a5de7656cde0d72640a9a180ecb91edcd94e76e91266cc6858"} Mar 13 13:00:35.736812 master-0 kubenswrapper[28149]: I0313 13:00:35.736737 28149 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 13 13:00:35.736812 master-0 kubenswrapper[28149]: I0313 13:00:35.736808 28149 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 13 13:00:35.736812 master-0 kubenswrapper[28149]: I0313 13:00:35.736823 28149 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 13 13:00:35.738604 master-0 kubenswrapper[28149]: I0313 13:00:35.737165 28149 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 13 13:00:35.738604 master-0 kubenswrapper[28149]: I0313 13:00:35.737173 28149 patch_prober.go:28] interesting pod/kube-controller-manager-master-0 container/kube-controller-manager namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.32.10:10257/healthz\": dial tcp 192.168.32.10:10257: connect: connection refused" start-of-body= Mar 13 13:00:35.738604 master-0 kubenswrapper[28149]: I0313 13:00:35.737247 28149 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="f5bb76e91dda25eb09e825c314a1cb06" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.32.10:10257/healthz\": dial tcp 192.168.32.10:10257: connect: connection refused" Mar 13 13:00:35.742176 master-0 kubenswrapper[28149]: I0313 13:00:35.742098 28149 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" 
pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 13 13:00:35.785417 master-0 kubenswrapper[28149]: I0313 13:00:35.785344 28149 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podStartSLOduration=10.7853275 podStartE2EDuration="10.7853275s" podCreationTimestamp="2026-03-13 13:00:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 13:00:27.165426702 +0000 UTC m=+400.818891861" watchObservedRunningTime="2026-03-13 13:00:35.7853275 +0000 UTC m=+409.438792659" Mar 13 13:00:36.245174 master-0 kubenswrapper[28149]: I0313 13:00:36.245089 28149 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 13 13:00:45.737492 master-0 kubenswrapper[28149]: I0313 13:00:45.737435 28149 patch_prober.go:28] interesting pod/kube-controller-manager-master-0 container/kube-controller-manager namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.32.10:10257/healthz\": dial tcp 192.168.32.10:10257: connect: connection refused" start-of-body= Mar 13 13:00:45.738458 master-0 kubenswrapper[28149]: I0313 13:00:45.737528 28149 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="f5bb76e91dda25eb09e825c314a1cb06" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.32.10:10257/healthz\": dial tcp 192.168.32.10:10257: connect: connection refused" Mar 13 13:00:55.736910 master-0 kubenswrapper[28149]: I0313 13:00:55.736839 28149 patch_prober.go:28] interesting pod/kube-controller-manager-master-0 container/kube-controller-manager namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get 
\"https://192.168.32.10:10257/healthz\": dial tcp 192.168.32.10:10257: connect: connection refused" start-of-body= Mar 13 13:00:55.737663 master-0 kubenswrapper[28149]: I0313 13:00:55.736916 28149 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="f5bb76e91dda25eb09e825c314a1cb06" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.32.10:10257/healthz\": dial tcp 192.168.32.10:10257: connect: connection refused" Mar 13 13:00:55.737663 master-0 kubenswrapper[28149]: I0313 13:00:55.736978 28149 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 13 13:00:55.737848 master-0 kubenswrapper[28149]: I0313 13:00:55.737802 28149 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="kube-controller-manager" containerStatusID={"Type":"cri-o","ID":"c6adc1c7ef2611ef686efcfa2d71ce5217bfeb9e2c4817bb3b0e4f2f1aa20e73"} pod="openshift-kube-controller-manager/kube-controller-manager-master-0" containerMessage="Container kube-controller-manager failed startup probe, will be restarted" Mar 13 13:00:55.737984 master-0 kubenswrapper[28149]: I0313 13:00:55.737952 28149 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="f5bb76e91dda25eb09e825c314a1cb06" containerName="kube-controller-manager" containerID="cri-o://c6adc1c7ef2611ef686efcfa2d71ce5217bfeb9e2c4817bb3b0e4f2f1aa20e73" gracePeriod=30 Mar 13 13:01:26.684004 master-0 kubenswrapper[28149]: I0313 13:01:26.683941 28149 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_f5bb76e91dda25eb09e825c314a1cb06/kube-controller-manager/0.log" Mar 13 13:01:26.684732 master-0 kubenswrapper[28149]: I0313 13:01:26.684025 28149 generic.go:334] 
"Generic (PLEG): container finished" podID="f5bb76e91dda25eb09e825c314a1cb06" containerID="c6adc1c7ef2611ef686efcfa2d71ce5217bfeb9e2c4817bb3b0e4f2f1aa20e73" exitCode=137 Mar 13 13:01:26.684732 master-0 kubenswrapper[28149]: I0313 13:01:26.684081 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"f5bb76e91dda25eb09e825c314a1cb06","Type":"ContainerDied","Data":"c6adc1c7ef2611ef686efcfa2d71ce5217bfeb9e2c4817bb3b0e4f2f1aa20e73"} Mar 13 13:01:26.684732 master-0 kubenswrapper[28149]: I0313 13:01:26.684112 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"f5bb76e91dda25eb09e825c314a1cb06","Type":"ContainerStarted","Data":"a7f47b4629873083a76eb0d636d087e5ed997e84884d956845d5a5cc0b3204eb"} Mar 13 13:01:35.737115 master-0 kubenswrapper[28149]: I0313 13:01:35.736935 28149 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 13 13:01:35.738029 master-0 kubenswrapper[28149]: I0313 13:01:35.737653 28149 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 13 13:01:35.745693 master-0 kubenswrapper[28149]: I0313 13:01:35.745641 28149 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 13 13:01:36.773707 master-0 kubenswrapper[28149]: I0313 13:01:36.773649 28149 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 13 13:01:48.266666 master-0 kubenswrapper[28149]: I0313 13:01:48.266586 28149 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["sushy-emulator/sushy-emulator-59477995f9-8sk9j"] Mar 13 13:01:48.267494 master-0 
kubenswrapper[28149]: E0313 13:01:48.267048 28149 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="22335d40-8638-4d1a-81eb-821362a6ae89" containerName="installer" Mar 13 13:01:48.267494 master-0 kubenswrapper[28149]: I0313 13:01:48.267077 28149 state_mem.go:107] "Deleted CPUSet assignment" podUID="22335d40-8638-4d1a-81eb-821362a6ae89" containerName="installer" Mar 13 13:01:48.267494 master-0 kubenswrapper[28149]: I0313 13:01:48.267347 28149 memory_manager.go:354] "RemoveStaleState removing state" podUID="22335d40-8638-4d1a-81eb-821362a6ae89" containerName="installer" Mar 13 13:01:48.268190 master-0 kubenswrapper[28149]: I0313 13:01:48.268152 28149 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="sushy-emulator/sushy-emulator-59477995f9-8sk9j" Mar 13 13:01:48.270423 master-0 kubenswrapper[28149]: I0313 13:01:48.270378 28149 reflector.go:368] Caches populated for *v1.ConfigMap from object-"sushy-emulator"/"sushy-emulator-config" Mar 13 13:01:48.270678 master-0 kubenswrapper[28149]: I0313 13:01:48.270651 28149 reflector.go:368] Caches populated for *v1.ConfigMap from object-"sushy-emulator"/"kube-root-ca.crt" Mar 13 13:01:48.270883 master-0 kubenswrapper[28149]: I0313 13:01:48.270647 28149 reflector.go:368] Caches populated for *v1.ConfigMap from object-"sushy-emulator"/"openshift-service-ca.crt" Mar 13 13:01:48.273647 master-0 kubenswrapper[28149]: I0313 13:01:48.273611 28149 reflector.go:368] Caches populated for *v1.Secret from object-"sushy-emulator"/"os-client-config" Mar 13 13:01:48.280178 master-0 kubenswrapper[28149]: I0313 13:01:48.280101 28149 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-7c4766b9db-6tc2q"] Mar 13 13:01:48.281559 master-0 kubenswrapper[28149]: I0313 13:01:48.281528 28149 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-7c4766b9db-6tc2q" Mar 13 13:01:48.291236 master-0 kubenswrapper[28149]: I0313 13:01:48.291111 28149 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["sushy-emulator/sushy-emulator-59477995f9-8sk9j"] Mar 13 13:01:48.297103 master-0 kubenswrapper[28149]: I0313 13:01:48.297045 28149 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-7c4766b9db-6tc2q"] Mar 13 13:01:48.390091 master-0 kubenswrapper[28149]: I0313 13:01:48.390019 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/27c9b1b2-ce2d-4837-8b91-3ca46ff394a7-console-oauth-config\") pod \"console-7c4766b9db-6tc2q\" (UID: \"27c9b1b2-ce2d-4837-8b91-3ca46ff394a7\") " pod="openshift-console/console-7c4766b9db-6tc2q" Mar 13 13:01:48.390362 master-0 kubenswrapper[28149]: I0313 13:01:48.390126 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/27c9b1b2-ce2d-4837-8b91-3ca46ff394a7-console-config\") pod \"console-7c4766b9db-6tc2q\" (UID: \"27c9b1b2-ce2d-4837-8b91-3ca46ff394a7\") " pod="openshift-console/console-7c4766b9db-6tc2q" Mar 13 13:01:48.390362 master-0 kubenswrapper[28149]: I0313 13:01:48.390215 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/27c9b1b2-ce2d-4837-8b91-3ca46ff394a7-service-ca\") pod \"console-7c4766b9db-6tc2q\" (UID: \"27c9b1b2-ce2d-4837-8b91-3ca46ff394a7\") " pod="openshift-console/console-7c4766b9db-6tc2q" Mar 13 13:01:48.390362 master-0 kubenswrapper[28149]: I0313 13:01:48.390245 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sushy-emulator-config\" (UniqueName: 
\"kubernetes.io/configmap/0728440d-287f-4cc8-bbc0-a00845e4ca8a-sushy-emulator-config\") pod \"sushy-emulator-59477995f9-8sk9j\" (UID: \"0728440d-287f-4cc8-bbc0-a00845e4ca8a\") " pod="sushy-emulator/sushy-emulator-59477995f9-8sk9j" Mar 13 13:01:48.393367 master-0 kubenswrapper[28149]: I0313 13:01:48.391406 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qf7dd\" (UniqueName: \"kubernetes.io/projected/27c9b1b2-ce2d-4837-8b91-3ca46ff394a7-kube-api-access-qf7dd\") pod \"console-7c4766b9db-6tc2q\" (UID: \"27c9b1b2-ce2d-4837-8b91-3ca46ff394a7\") " pod="openshift-console/console-7c4766b9db-6tc2q" Mar 13 13:01:48.393367 master-0 kubenswrapper[28149]: I0313 13:01:48.391466 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-client-config\" (UniqueName: \"kubernetes.io/secret/0728440d-287f-4cc8-bbc0-a00845e4ca8a-os-client-config\") pod \"sushy-emulator-59477995f9-8sk9j\" (UID: \"0728440d-287f-4cc8-bbc0-a00845e4ca8a\") " pod="sushy-emulator/sushy-emulator-59477995f9-8sk9j" Mar 13 13:01:48.393367 master-0 kubenswrapper[28149]: I0313 13:01:48.391497 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/27c9b1b2-ce2d-4837-8b91-3ca46ff394a7-oauth-serving-cert\") pod \"console-7c4766b9db-6tc2q\" (UID: \"27c9b1b2-ce2d-4837-8b91-3ca46ff394a7\") " pod="openshift-console/console-7c4766b9db-6tc2q" Mar 13 13:01:48.393367 master-0 kubenswrapper[28149]: I0313 13:01:48.391522 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/27c9b1b2-ce2d-4837-8b91-3ca46ff394a7-trusted-ca-bundle\") pod \"console-7c4766b9db-6tc2q\" (UID: \"27c9b1b2-ce2d-4837-8b91-3ca46ff394a7\") " pod="openshift-console/console-7c4766b9db-6tc2q" Mar 13 13:01:48.393367 master-0 
kubenswrapper[28149]: I0313 13:01:48.391576 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4wvsg\" (UniqueName: \"kubernetes.io/projected/0728440d-287f-4cc8-bbc0-a00845e4ca8a-kube-api-access-4wvsg\") pod \"sushy-emulator-59477995f9-8sk9j\" (UID: \"0728440d-287f-4cc8-bbc0-a00845e4ca8a\") " pod="sushy-emulator/sushy-emulator-59477995f9-8sk9j" Mar 13 13:01:48.393367 master-0 kubenswrapper[28149]: I0313 13:01:48.391619 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/27c9b1b2-ce2d-4837-8b91-3ca46ff394a7-console-serving-cert\") pod \"console-7c4766b9db-6tc2q\" (UID: \"27c9b1b2-ce2d-4837-8b91-3ca46ff394a7\") " pod="openshift-console/console-7c4766b9db-6tc2q" Mar 13 13:01:48.493055 master-0 kubenswrapper[28149]: I0313 13:01:48.492983 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/27c9b1b2-ce2d-4837-8b91-3ca46ff394a7-service-ca\") pod \"console-7c4766b9db-6tc2q\" (UID: \"27c9b1b2-ce2d-4837-8b91-3ca46ff394a7\") " pod="openshift-console/console-7c4766b9db-6tc2q" Mar 13 13:01:48.493340 master-0 kubenswrapper[28149]: I0313 13:01:48.493065 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sushy-emulator-config\" (UniqueName: \"kubernetes.io/configmap/0728440d-287f-4cc8-bbc0-a00845e4ca8a-sushy-emulator-config\") pod \"sushy-emulator-59477995f9-8sk9j\" (UID: \"0728440d-287f-4cc8-bbc0-a00845e4ca8a\") " pod="sushy-emulator/sushy-emulator-59477995f9-8sk9j" Mar 13 13:01:48.493340 master-0 kubenswrapper[28149]: I0313 13:01:48.493175 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qf7dd\" (UniqueName: \"kubernetes.io/projected/27c9b1b2-ce2d-4837-8b91-3ca46ff394a7-kube-api-access-qf7dd\") pod \"console-7c4766b9db-6tc2q\" (UID: 
\"27c9b1b2-ce2d-4837-8b91-3ca46ff394a7\") " pod="openshift-console/console-7c4766b9db-6tc2q" Mar 13 13:01:48.493436 master-0 kubenswrapper[28149]: I0313 13:01:48.493340 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-client-config\" (UniqueName: \"kubernetes.io/secret/0728440d-287f-4cc8-bbc0-a00845e4ca8a-os-client-config\") pod \"sushy-emulator-59477995f9-8sk9j\" (UID: \"0728440d-287f-4cc8-bbc0-a00845e4ca8a\") " pod="sushy-emulator/sushy-emulator-59477995f9-8sk9j" Mar 13 13:01:48.493436 master-0 kubenswrapper[28149]: I0313 13:01:48.493390 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/27c9b1b2-ce2d-4837-8b91-3ca46ff394a7-oauth-serving-cert\") pod \"console-7c4766b9db-6tc2q\" (UID: \"27c9b1b2-ce2d-4837-8b91-3ca46ff394a7\") " pod="openshift-console/console-7c4766b9db-6tc2q" Mar 13 13:01:48.493436 master-0 kubenswrapper[28149]: I0313 13:01:48.493408 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/27c9b1b2-ce2d-4837-8b91-3ca46ff394a7-trusted-ca-bundle\") pod \"console-7c4766b9db-6tc2q\" (UID: \"27c9b1b2-ce2d-4837-8b91-3ca46ff394a7\") " pod="openshift-console/console-7c4766b9db-6tc2q" Mar 13 13:01:48.493618 master-0 kubenswrapper[28149]: I0313 13:01:48.493452 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4wvsg\" (UniqueName: \"kubernetes.io/projected/0728440d-287f-4cc8-bbc0-a00845e4ca8a-kube-api-access-4wvsg\") pod \"sushy-emulator-59477995f9-8sk9j\" (UID: \"0728440d-287f-4cc8-bbc0-a00845e4ca8a\") " pod="sushy-emulator/sushy-emulator-59477995f9-8sk9j" Mar 13 13:01:48.493618 master-0 kubenswrapper[28149]: I0313 13:01:48.493488 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: 
\"kubernetes.io/secret/27c9b1b2-ce2d-4837-8b91-3ca46ff394a7-console-serving-cert\") pod \"console-7c4766b9db-6tc2q\" (UID: \"27c9b1b2-ce2d-4837-8b91-3ca46ff394a7\") " pod="openshift-console/console-7c4766b9db-6tc2q" Mar 13 13:01:48.493618 master-0 kubenswrapper[28149]: I0313 13:01:48.493505 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/27c9b1b2-ce2d-4837-8b91-3ca46ff394a7-console-oauth-config\") pod \"console-7c4766b9db-6tc2q\" (UID: \"27c9b1b2-ce2d-4837-8b91-3ca46ff394a7\") " pod="openshift-console/console-7c4766b9db-6tc2q" Mar 13 13:01:48.493618 master-0 kubenswrapper[28149]: I0313 13:01:48.493550 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/27c9b1b2-ce2d-4837-8b91-3ca46ff394a7-console-config\") pod \"console-7c4766b9db-6tc2q\" (UID: \"27c9b1b2-ce2d-4837-8b91-3ca46ff394a7\") " pod="openshift-console/console-7c4766b9db-6tc2q" Mar 13 13:01:48.494750 master-0 kubenswrapper[28149]: I0313 13:01:48.494197 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/27c9b1b2-ce2d-4837-8b91-3ca46ff394a7-service-ca\") pod \"console-7c4766b9db-6tc2q\" (UID: \"27c9b1b2-ce2d-4837-8b91-3ca46ff394a7\") " pod="openshift-console/console-7c4766b9db-6tc2q" Mar 13 13:01:48.495029 master-0 kubenswrapper[28149]: I0313 13:01:48.494950 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/27c9b1b2-ce2d-4837-8b91-3ca46ff394a7-console-config\") pod \"console-7c4766b9db-6tc2q\" (UID: \"27c9b1b2-ce2d-4837-8b91-3ca46ff394a7\") " pod="openshift-console/console-7c4766b9db-6tc2q" Mar 13 13:01:48.495464 master-0 kubenswrapper[28149]: I0313 13:01:48.495427 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sushy-emulator-config\" 
(UniqueName: \"kubernetes.io/configmap/0728440d-287f-4cc8-bbc0-a00845e4ca8a-sushy-emulator-config\") pod \"sushy-emulator-59477995f9-8sk9j\" (UID: \"0728440d-287f-4cc8-bbc0-a00845e4ca8a\") " pod="sushy-emulator/sushy-emulator-59477995f9-8sk9j" Mar 13 13:01:48.495781 master-0 kubenswrapper[28149]: I0313 13:01:48.495737 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/27c9b1b2-ce2d-4837-8b91-3ca46ff394a7-trusted-ca-bundle\") pod \"console-7c4766b9db-6tc2q\" (UID: \"27c9b1b2-ce2d-4837-8b91-3ca46ff394a7\") " pod="openshift-console/console-7c4766b9db-6tc2q" Mar 13 13:01:48.497400 master-0 kubenswrapper[28149]: I0313 13:01:48.496889 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/27c9b1b2-ce2d-4837-8b91-3ca46ff394a7-oauth-serving-cert\") pod \"console-7c4766b9db-6tc2q\" (UID: \"27c9b1b2-ce2d-4837-8b91-3ca46ff394a7\") " pod="openshift-console/console-7c4766b9db-6tc2q" Mar 13 13:01:48.501165 master-0 kubenswrapper[28149]: I0313 13:01:48.498572 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-client-config\" (UniqueName: \"kubernetes.io/secret/0728440d-287f-4cc8-bbc0-a00845e4ca8a-os-client-config\") pod \"sushy-emulator-59477995f9-8sk9j\" (UID: \"0728440d-287f-4cc8-bbc0-a00845e4ca8a\") " pod="sushy-emulator/sushy-emulator-59477995f9-8sk9j" Mar 13 13:01:48.501165 master-0 kubenswrapper[28149]: I0313 13:01:48.500868 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/27c9b1b2-ce2d-4837-8b91-3ca46ff394a7-console-serving-cert\") pod \"console-7c4766b9db-6tc2q\" (UID: \"27c9b1b2-ce2d-4837-8b91-3ca46ff394a7\") " pod="openshift-console/console-7c4766b9db-6tc2q" Mar 13 13:01:48.512151 master-0 kubenswrapper[28149]: I0313 13:01:48.501466 28149 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/27c9b1b2-ce2d-4837-8b91-3ca46ff394a7-console-oauth-config\") pod \"console-7c4766b9db-6tc2q\" (UID: \"27c9b1b2-ce2d-4837-8b91-3ca46ff394a7\") " pod="openshift-console/console-7c4766b9db-6tc2q" Mar 13 13:01:48.526286 master-0 kubenswrapper[28149]: I0313 13:01:48.526178 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qf7dd\" (UniqueName: \"kubernetes.io/projected/27c9b1b2-ce2d-4837-8b91-3ca46ff394a7-kube-api-access-qf7dd\") pod \"console-7c4766b9db-6tc2q\" (UID: \"27c9b1b2-ce2d-4837-8b91-3ca46ff394a7\") " pod="openshift-console/console-7c4766b9db-6tc2q" Mar 13 13:01:48.536130 master-0 kubenswrapper[28149]: I0313 13:01:48.536090 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4wvsg\" (UniqueName: \"kubernetes.io/projected/0728440d-287f-4cc8-bbc0-a00845e4ca8a-kube-api-access-4wvsg\") pod \"sushy-emulator-59477995f9-8sk9j\" (UID: \"0728440d-287f-4cc8-bbc0-a00845e4ca8a\") " pod="sushy-emulator/sushy-emulator-59477995f9-8sk9j" Mar 13 13:01:48.597061 master-0 kubenswrapper[28149]: I0313 13:01:48.596975 28149 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="sushy-emulator/sushy-emulator-59477995f9-8sk9j" Mar 13 13:01:48.609798 master-0 kubenswrapper[28149]: I0313 13:01:48.609719 28149 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-7c4766b9db-6tc2q" Mar 13 13:01:49.071338 master-0 kubenswrapper[28149]: I0313 13:01:49.071269 28149 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-7c4766b9db-6tc2q"] Mar 13 13:01:49.072673 master-0 kubenswrapper[28149]: W0313 13:01:49.072619 28149 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod27c9b1b2_ce2d_4837_8b91_3ca46ff394a7.slice/crio-ebb20900b7d5bedb388ae88ce024c06b2b57064b9fe3b33af9866c50c199dd21 WatchSource:0}: Error finding container ebb20900b7d5bedb388ae88ce024c06b2b57064b9fe3b33af9866c50c199dd21: Status 404 returned error can't find the container with id ebb20900b7d5bedb388ae88ce024c06b2b57064b9fe3b33af9866c50c199dd21 Mar 13 13:01:49.134864 master-0 kubenswrapper[28149]: I0313 13:01:49.134809 28149 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["sushy-emulator/sushy-emulator-59477995f9-8sk9j"] Mar 13 13:01:49.136252 master-0 kubenswrapper[28149]: W0313 13:01:49.136217 28149 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0728440d_287f_4cc8_bbc0_a00845e4ca8a.slice/crio-152c076fbc0d0a50c127a74722344fca80a7bc0130661c241c88aff10a6b77ae WatchSource:0}: Error finding container 152c076fbc0d0a50c127a74722344fca80a7bc0130661c241c88aff10a6b77ae: Status 404 returned error can't find the container with id 152c076fbc0d0a50c127a74722344fca80a7bc0130661c241c88aff10a6b77ae Mar 13 13:01:49.140614 master-0 kubenswrapper[28149]: I0313 13:01:49.138999 28149 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Mar 13 13:01:49.915790 master-0 kubenswrapper[28149]: I0313 13:01:49.915730 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-7c4766b9db-6tc2q" 
event={"ID":"27c9b1b2-ce2d-4837-8b91-3ca46ff394a7","Type":"ContainerStarted","Data":"41610067df452029f9607bdf787b606790c6069641790cfd739f45f3a915e0fb"} Mar 13 13:01:49.915790 master-0 kubenswrapper[28149]: I0313 13:01:49.915781 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-7c4766b9db-6tc2q" event={"ID":"27c9b1b2-ce2d-4837-8b91-3ca46ff394a7","Type":"ContainerStarted","Data":"ebb20900b7d5bedb388ae88ce024c06b2b57064b9fe3b33af9866c50c199dd21"} Mar 13 13:01:49.918194 master-0 kubenswrapper[28149]: I0313 13:01:49.918118 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="sushy-emulator/sushy-emulator-59477995f9-8sk9j" event={"ID":"0728440d-287f-4cc8-bbc0-a00845e4ca8a","Type":"ContainerStarted","Data":"152c076fbc0d0a50c127a74722344fca80a7bc0130661c241c88aff10a6b77ae"} Mar 13 13:01:49.939994 master-0 kubenswrapper[28149]: I0313 13:01:49.939103 28149 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-7c4766b9db-6tc2q" podStartSLOduration=1.939078825 podStartE2EDuration="1.939078825s" podCreationTimestamp="2026-03-13 13:01:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 13:01:49.935294493 +0000 UTC m=+483.588759682" watchObservedRunningTime="2026-03-13 13:01:49.939078825 +0000 UTC m=+483.592543974" Mar 13 13:01:56.971115 master-0 kubenswrapper[28149]: I0313 13:01:56.971052 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="sushy-emulator/sushy-emulator-59477995f9-8sk9j" event={"ID":"0728440d-287f-4cc8-bbc0-a00845e4ca8a","Type":"ContainerStarted","Data":"4a07511656e57b28671840a7b61ceaf462ff0345356d1547bd1f4f899e61d31b"} Mar 13 13:01:56.998562 master-0 kubenswrapper[28149]: I0313 13:01:56.998481 28149 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="sushy-emulator/sushy-emulator-59477995f9-8sk9j" podStartSLOduration=2.301568827 
podStartE2EDuration="8.998458376s" podCreationTimestamp="2026-03-13 13:01:48 +0000 UTC" firstStartedPulling="2026-03-13 13:01:49.138903021 +0000 UTC m=+482.792368180" lastFinishedPulling="2026-03-13 13:01:55.83579255 +0000 UTC m=+489.489257729" observedRunningTime="2026-03-13 13:01:56.99037861 +0000 UTC m=+490.643843769" watchObservedRunningTime="2026-03-13 13:01:56.998458376 +0000 UTC m=+490.651923545" Mar 13 13:01:58.597466 master-0 kubenswrapper[28149]: I0313 13:01:58.597385 28149 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="sushy-emulator/sushy-emulator-59477995f9-8sk9j" Mar 13 13:01:58.597466 master-0 kubenswrapper[28149]: I0313 13:01:58.597455 28149 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="sushy-emulator/sushy-emulator-59477995f9-8sk9j" Mar 13 13:01:58.606227 master-0 kubenswrapper[28149]: I0313 13:01:58.606160 28149 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="sushy-emulator/sushy-emulator-59477995f9-8sk9j" Mar 13 13:01:58.610176 master-0 kubenswrapper[28149]: I0313 13:01:58.610134 28149 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-7c4766b9db-6tc2q" Mar 13 13:01:58.610257 master-0 kubenswrapper[28149]: I0313 13:01:58.610187 28149 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-7c4766b9db-6tc2q" Mar 13 13:01:58.615198 master-0 kubenswrapper[28149]: I0313 13:01:58.615153 28149 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-7c4766b9db-6tc2q" Mar 13 13:01:58.986244 master-0 kubenswrapper[28149]: I0313 13:01:58.986082 28149 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="sushy-emulator/sushy-emulator-59477995f9-8sk9j" Mar 13 13:01:58.986810 master-0 kubenswrapper[28149]: I0313 13:01:58.986772 28149 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-console/console-7c4766b9db-6tc2q"
Mar 13 13:01:59.131086 master-0 kubenswrapper[28149]: I0313 13:01:59.131020 28149 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-575b4c697b-kjnzx"]
Mar 13 13:02:18.359094 master-0 kubenswrapper[28149]: I0313 13:02:18.359033 28149 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["sushy-emulator/nova-console-poller-5669f94467-hcvt5"]
Mar 13 13:02:18.360685 master-0 kubenswrapper[28149]: I0313 13:02:18.360648 28149 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="sushy-emulator/nova-console-poller-5669f94467-hcvt5"
Mar 13 13:02:18.363543 master-0 kubenswrapper[28149]: I0313 13:02:18.363492 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-client-config\" (UniqueName: \"kubernetes.io/secret/0a8314e8-583b-469f-a3c8-0d78eef2f58c-os-client-config\") pod \"nova-console-poller-5669f94467-hcvt5\" (UID: \"0a8314e8-583b-469f-a3c8-0d78eef2f58c\") " pod="sushy-emulator/nova-console-poller-5669f94467-hcvt5"
Mar 13 13:02:18.363767 master-0 kubenswrapper[28149]: I0313 13:02:18.363546 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rfcnq\" (UniqueName: \"kubernetes.io/projected/0a8314e8-583b-469f-a3c8-0d78eef2f58c-kube-api-access-rfcnq\") pod \"nova-console-poller-5669f94467-hcvt5\" (UID: \"0a8314e8-583b-469f-a3c8-0d78eef2f58c\") " pod="sushy-emulator/nova-console-poller-5669f94467-hcvt5"
Mar 13 13:02:18.370249 master-0 kubenswrapper[28149]: I0313 13:02:18.370189 28149 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["sushy-emulator/nova-console-poller-5669f94467-hcvt5"]
Mar 13 13:02:18.464835 master-0 kubenswrapper[28149]: I0313 13:02:18.464783 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-client-config\" (UniqueName: \"kubernetes.io/secret/0a8314e8-583b-469f-a3c8-0d78eef2f58c-os-client-config\") pod \"nova-console-poller-5669f94467-hcvt5\" (UID: \"0a8314e8-583b-469f-a3c8-0d78eef2f58c\") " pod="sushy-emulator/nova-console-poller-5669f94467-hcvt5"
Mar 13 13:02:18.464835 master-0 kubenswrapper[28149]: I0313 13:02:18.464833 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rfcnq\" (UniqueName: \"kubernetes.io/projected/0a8314e8-583b-469f-a3c8-0d78eef2f58c-kube-api-access-rfcnq\") pod \"nova-console-poller-5669f94467-hcvt5\" (UID: \"0a8314e8-583b-469f-a3c8-0d78eef2f58c\") " pod="sushy-emulator/nova-console-poller-5669f94467-hcvt5"
Mar 13 13:02:18.468470 master-0 kubenswrapper[28149]: I0313 13:02:18.468440 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-client-config\" (UniqueName: \"kubernetes.io/secret/0a8314e8-583b-469f-a3c8-0d78eef2f58c-os-client-config\") pod \"nova-console-poller-5669f94467-hcvt5\" (UID: \"0a8314e8-583b-469f-a3c8-0d78eef2f58c\") " pod="sushy-emulator/nova-console-poller-5669f94467-hcvt5"
Mar 13 13:02:18.481258 master-0 kubenswrapper[28149]: I0313 13:02:18.481218 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rfcnq\" (UniqueName: \"kubernetes.io/projected/0a8314e8-583b-469f-a3c8-0d78eef2f58c-kube-api-access-rfcnq\") pod \"nova-console-poller-5669f94467-hcvt5\" (UID: \"0a8314e8-583b-469f-a3c8-0d78eef2f58c\") " pod="sushy-emulator/nova-console-poller-5669f94467-hcvt5"
Mar 13 13:02:18.678044 master-0 kubenswrapper[28149]: I0313 13:02:18.677896 28149 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="sushy-emulator/nova-console-poller-5669f94467-hcvt5"
Mar 13 13:02:19.135296 master-0 kubenswrapper[28149]: I0313 13:02:19.133834 28149 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["sushy-emulator/nova-console-poller-5669f94467-hcvt5"]
Mar 13 13:02:19.137327 master-0 kubenswrapper[28149]: W0313 13:02:19.137284 28149 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0a8314e8_583b_469f_a3c8_0d78eef2f58c.slice/crio-f51354aea0f9684b0f1782eac18e315b7cc3a5397f3f09edd1459785fcd37c86 WatchSource:0}: Error finding container f51354aea0f9684b0f1782eac18e315b7cc3a5397f3f09edd1459785fcd37c86: Status 404 returned error can't find the container with id f51354aea0f9684b0f1782eac18e315b7cc3a5397f3f09edd1459785fcd37c86
Mar 13 13:02:19.338791 master-0 kubenswrapper[28149]: I0313 13:02:19.338734 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="sushy-emulator/nova-console-poller-5669f94467-hcvt5" event={"ID":"0a8314e8-583b-469f-a3c8-0d78eef2f58c","Type":"ContainerStarted","Data":"f51354aea0f9684b0f1782eac18e315b7cc3a5397f3f09edd1459785fcd37c86"}
Mar 13 13:02:24.410540 master-0 kubenswrapper[28149]: I0313 13:02:24.404262 28149 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-575b4c697b-kjnzx" podUID="3c426507-418f-4258-bef6-4206640beb3d" containerName="console" containerID="cri-o://62a39b62dd321a9a78aa93cc0dbace3d5275bb08e7d86c7913fc8df6b17cff3f" gracePeriod=15
Mar 13 13:02:25.135057 master-0 kubenswrapper[28149]: I0313 13:02:25.134929 28149 patch_prober.go:28] interesting pod/console-575b4c697b-kjnzx container/console namespace/openshift-console: Readiness probe status=failure output="Get \"https://10.128.0.102:8443/health\": dial tcp 10.128.0.102:8443: connect: connection refused" start-of-body=
Mar 13 13:02:25.135057 master-0 kubenswrapper[28149]: I0313 13:02:25.135026 28149 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/console-575b4c697b-kjnzx" podUID="3c426507-418f-4258-bef6-4206640beb3d" containerName="console" probeResult="failure" output="Get \"https://10.128.0.102:8443/health\": dial tcp 10.128.0.102:8443: connect: connection refused"
Mar 13 13:02:25.457451 master-0 kubenswrapper[28149]: I0313 13:02:25.457387 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="sushy-emulator/nova-console-poller-5669f94467-hcvt5" event={"ID":"0a8314e8-583b-469f-a3c8-0d78eef2f58c","Type":"ContainerStarted","Data":"7844e1c416914b22659ec56d2f2c103cd125a0357a659597ea3aaba54bb846f5"}
Mar 13 13:02:25.459473 master-0 kubenswrapper[28149]: I0313 13:02:25.459444 28149 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-575b4c697b-kjnzx_3c426507-418f-4258-bef6-4206640beb3d/console/0.log"
Mar 13 13:02:25.459473 master-0 kubenswrapper[28149]: I0313 13:02:25.459451 28149 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-575b4c697b-kjnzx_3c426507-418f-4258-bef6-4206640beb3d/console/0.log"
Mar 13 13:02:25.459599 master-0 kubenswrapper[28149]: I0313 13:02:25.459481 28149 generic.go:334] "Generic (PLEG): container finished" podID="3c426507-418f-4258-bef6-4206640beb3d" containerID="62a39b62dd321a9a78aa93cc0dbace3d5275bb08e7d86c7913fc8df6b17cff3f" exitCode=2
Mar 13 13:02:25.459599 master-0 kubenswrapper[28149]: I0313 13:02:25.459501 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-575b4c697b-kjnzx" event={"ID":"3c426507-418f-4258-bef6-4206640beb3d","Type":"ContainerDied","Data":"62a39b62dd321a9a78aa93cc0dbace3d5275bb08e7d86c7913fc8df6b17cff3f"}
Mar 13 13:02:25.459599 master-0 kubenswrapper[28149]: I0313 13:02:25.459516 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-575b4c697b-kjnzx" event={"ID":"3c426507-418f-4258-bef6-4206640beb3d","Type":"ContainerDied","Data":"39d9010d8cd763ece37db8db6ba3978604453d407277d0be25bc7d9eee9120d5"}
Mar 13 13:02:25.459599 master-0 kubenswrapper[28149]: I0313 13:02:25.459524 28149 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-575b4c697b-kjnzx"
Mar 13 13:02:25.459599 master-0 kubenswrapper[28149]: I0313 13:02:25.459533 28149 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="39d9010d8cd763ece37db8db6ba3978604453d407277d0be25bc7d9eee9120d5"
Mar 13 13:02:25.664033 master-0 kubenswrapper[28149]: I0313 13:02:25.662002 28149 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/3c426507-418f-4258-bef6-4206640beb3d-oauth-serving-cert\") pod \"3c426507-418f-4258-bef6-4206640beb3d\" (UID: \"3c426507-418f-4258-bef6-4206640beb3d\") "
Mar 13 13:02:25.664033 master-0 kubenswrapper[28149]: I0313 13:02:25.662177 28149 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sxc85\" (UniqueName: \"kubernetes.io/projected/3c426507-418f-4258-bef6-4206640beb3d-kube-api-access-sxc85\") pod \"3c426507-418f-4258-bef6-4206640beb3d\" (UID: \"3c426507-418f-4258-bef6-4206640beb3d\") "
Mar 13 13:02:25.664033 master-0 kubenswrapper[28149]: I0313 13:02:25.662294 28149 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/3c426507-418f-4258-bef6-4206640beb3d-console-serving-cert\") pod \"3c426507-418f-4258-bef6-4206640beb3d\" (UID: \"3c426507-418f-4258-bef6-4206640beb3d\") "
Mar 13 13:02:25.664033 master-0 kubenswrapper[28149]: I0313 13:02:25.662359 28149 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3c426507-418f-4258-bef6-4206640beb3d-trusted-ca-bundle\") pod \"3c426507-418f-4258-bef6-4206640beb3d\" (UID: \"3c426507-418f-4258-bef6-4206640beb3d\") "
Mar 13 13:02:25.664033 master-0 kubenswrapper[28149]: I0313 13:02:25.663172 28149 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3c426507-418f-4258-bef6-4206640beb3d-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "3c426507-418f-4258-bef6-4206640beb3d" (UID: "3c426507-418f-4258-bef6-4206640beb3d"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 13 13:02:25.664033 master-0 kubenswrapper[28149]: I0313 13:02:25.663238 28149 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3c426507-418f-4258-bef6-4206640beb3d-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "3c426507-418f-4258-bef6-4206640beb3d" (UID: "3c426507-418f-4258-bef6-4206640beb3d"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 13 13:02:25.664033 master-0 kubenswrapper[28149]: I0313 13:02:25.663276 28149 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/3c426507-418f-4258-bef6-4206640beb3d-console-oauth-config\") pod \"3c426507-418f-4258-bef6-4206640beb3d\" (UID: \"3c426507-418f-4258-bef6-4206640beb3d\") "
Mar 13 13:02:25.664033 master-0 kubenswrapper[28149]: I0313 13:02:25.663377 28149 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/3c426507-418f-4258-bef6-4206640beb3d-service-ca\") pod \"3c426507-418f-4258-bef6-4206640beb3d\" (UID: \"3c426507-418f-4258-bef6-4206640beb3d\") "
Mar 13 13:02:25.664033 master-0 kubenswrapper[28149]: I0313 13:02:25.663411 28149 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/3c426507-418f-4258-bef6-4206640beb3d-console-config\") pod \"3c426507-418f-4258-bef6-4206640beb3d\" (UID: \"3c426507-418f-4258-bef6-4206640beb3d\") "
Mar 13 13:02:25.664033 master-0 kubenswrapper[28149]: I0313 13:02:25.663836 28149 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/3c426507-418f-4258-bef6-4206640beb3d-oauth-serving-cert\") on node \"master-0\" DevicePath \"\""
Mar 13 13:02:25.664033 master-0 kubenswrapper[28149]: I0313 13:02:25.663864 28149 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3c426507-418f-4258-bef6-4206640beb3d-trusted-ca-bundle\") on node \"master-0\" DevicePath \"\""
Mar 13 13:02:25.664955 master-0 kubenswrapper[28149]: I0313 13:02:25.664724 28149 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3c426507-418f-4258-bef6-4206640beb3d-console-config" (OuterVolumeSpecName: "console-config") pod "3c426507-418f-4258-bef6-4206640beb3d" (UID: "3c426507-418f-4258-bef6-4206640beb3d"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 13 13:02:25.665346 master-0 kubenswrapper[28149]: I0313 13:02:25.665041 28149 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3c426507-418f-4258-bef6-4206640beb3d-service-ca" (OuterVolumeSpecName: "service-ca") pod "3c426507-418f-4258-bef6-4206640beb3d" (UID: "3c426507-418f-4258-bef6-4206640beb3d"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 13 13:02:25.669398 master-0 kubenswrapper[28149]: I0313 13:02:25.669340 28149 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3c426507-418f-4258-bef6-4206640beb3d-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "3c426507-418f-4258-bef6-4206640beb3d" (UID: "3c426507-418f-4258-bef6-4206640beb3d"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 13 13:02:25.669524 master-0 kubenswrapper[28149]: I0313 13:02:25.669397 28149 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3c426507-418f-4258-bef6-4206640beb3d-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "3c426507-418f-4258-bef6-4206640beb3d" (UID: "3c426507-418f-4258-bef6-4206640beb3d"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 13 13:02:25.669722 master-0 kubenswrapper[28149]: I0313 13:02:25.669629 28149 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3c426507-418f-4258-bef6-4206640beb3d-kube-api-access-sxc85" (OuterVolumeSpecName: "kube-api-access-sxc85") pod "3c426507-418f-4258-bef6-4206640beb3d" (UID: "3c426507-418f-4258-bef6-4206640beb3d"). InnerVolumeSpecName "kube-api-access-sxc85". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 13 13:02:25.765127 master-0 kubenswrapper[28149]: I0313 13:02:25.765076 28149 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/3c426507-418f-4258-bef6-4206640beb3d-console-serving-cert\") on node \"master-0\" DevicePath \"\""
Mar 13 13:02:25.765127 master-0 kubenswrapper[28149]: I0313 13:02:25.765118 28149 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/3c426507-418f-4258-bef6-4206640beb3d-console-oauth-config\") on node \"master-0\" DevicePath \"\""
Mar 13 13:02:25.765127 master-0 kubenswrapper[28149]: I0313 13:02:25.765131 28149 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/3c426507-418f-4258-bef6-4206640beb3d-service-ca\") on node \"master-0\" DevicePath \"\""
Mar 13 13:02:25.765127 master-0 kubenswrapper[28149]: I0313 13:02:25.765163 28149 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/3c426507-418f-4258-bef6-4206640beb3d-console-config\") on node \"master-0\" DevicePath \"\""
Mar 13 13:02:25.765712 master-0 kubenswrapper[28149]: I0313 13:02:25.765178 28149 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sxc85\" (UniqueName: \"kubernetes.io/projected/3c426507-418f-4258-bef6-4206640beb3d-kube-api-access-sxc85\") on node \"master-0\" DevicePath \"\""
Mar 13 13:02:26.468618 master-0 kubenswrapper[28149]: I0313 13:02:26.468514 28149 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-575b4c697b-kjnzx"
Mar 13 13:02:26.484193 master-0 kubenswrapper[28149]: I0313 13:02:26.480288 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="sushy-emulator/nova-console-poller-5669f94467-hcvt5" event={"ID":"0a8314e8-583b-469f-a3c8-0d78eef2f58c","Type":"ContainerStarted","Data":"61b988c079fb7d4704923169329bc29f18a4ddfd56a86ce6d2f42f63834e6bf1"}
Mar 13 13:02:26.537839 master-0 kubenswrapper[28149]: I0313 13:02:26.537729 28149 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="sushy-emulator/nova-console-poller-5669f94467-hcvt5" podStartSLOduration=1.555877381 podStartE2EDuration="8.537703828s" podCreationTimestamp="2026-03-13 13:02:18 +0000 UTC" firstStartedPulling="2026-03-13 13:02:19.139656601 +0000 UTC m=+512.793121750" lastFinishedPulling="2026-03-13 13:02:26.121483038 +0000 UTC m=+519.774948197" observedRunningTime="2026-03-13 13:02:26.52722315 +0000 UTC m=+520.180688329" watchObservedRunningTime="2026-03-13 13:02:26.537703828 +0000 UTC m=+520.191168987"
Mar 13 13:02:26.548441 master-0 kubenswrapper[28149]: I0313 13:02:26.548123 28149 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-575b4c697b-kjnzx"]
Mar 13 13:02:26.559174 master-0 kubenswrapper[28149]: I0313 13:02:26.556183 28149 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-575b4c697b-kjnzx"]
Mar 13 13:02:26.704009 master-0 kubenswrapper[28149]: I0313 13:02:26.700222 28149 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3c426507-418f-4258-bef6-4206640beb3d" path="/var/lib/kubelet/pods/3c426507-418f-4258-bef6-4206640beb3d/volumes"
Mar 13 13:02:51.759644 master-0 kubenswrapper[28149]: I0313 13:02:51.759471 28149 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["sushy-emulator/nova-console-recorder-cbd4f787-dj5nr"]
Mar 13 13:02:51.760443 master-0 kubenswrapper[28149]: E0313 13:02:51.760098 28149 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3c426507-418f-4258-bef6-4206640beb3d" containerName="console"
Mar 13 13:02:51.760443 master-0 kubenswrapper[28149]: I0313 13:02:51.760113 28149 state_mem.go:107] "Deleted CPUSet assignment" podUID="3c426507-418f-4258-bef6-4206640beb3d" containerName="console"
Mar 13 13:02:51.760443 master-0 kubenswrapper[28149]: I0313 13:02:51.760326 28149 memory_manager.go:354] "RemoveStaleState removing state" podUID="3c426507-418f-4258-bef6-4206640beb3d" containerName="console"
Mar 13 13:02:51.761055 master-0 kubenswrapper[28149]: I0313 13:02:51.761015 28149 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="sushy-emulator/nova-console-recorder-cbd4f787-dj5nr"
Mar 13 13:02:51.780909 master-0 kubenswrapper[28149]: I0313 13:02:51.780868 28149 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["sushy-emulator/nova-console-recorder-cbd4f787-dj5nr"]
Mar 13 13:02:51.798212 master-0 kubenswrapper[28149]: I0313 13:02:51.797326 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-client-config\" (UniqueName: \"kubernetes.io/secret/2af7384b-c49b-40ed-a3b7-3d87895a71f4-os-client-config\") pod \"nova-console-recorder-cbd4f787-dj5nr\" (UID: \"2af7384b-c49b-40ed-a3b7-3d87895a71f4\") " pod="sushy-emulator/nova-console-recorder-cbd4f787-dj5nr"
Mar 13 13:02:51.899503 master-0 kubenswrapper[28149]: I0313 13:02:51.899403 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-client-config\" (UniqueName: \"kubernetes.io/secret/2af7384b-c49b-40ed-a3b7-3d87895a71f4-os-client-config\") pod \"nova-console-recorder-cbd4f787-dj5nr\" (UID: \"2af7384b-c49b-40ed-a3b7-3d87895a71f4\") " pod="sushy-emulator/nova-console-recorder-cbd4f787-dj5nr"
Mar 13 13:02:51.899730 master-0 kubenswrapper[28149]: I0313 13:02:51.899527 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tfmgh\" (UniqueName: \"kubernetes.io/projected/2af7384b-c49b-40ed-a3b7-3d87895a71f4-kube-api-access-tfmgh\") pod \"nova-console-recorder-cbd4f787-dj5nr\" (UID: \"2af7384b-c49b-40ed-a3b7-3d87895a71f4\") " pod="sushy-emulator/nova-console-recorder-cbd4f787-dj5nr"
Mar 13 13:02:51.899730 master-0 kubenswrapper[28149]: I0313 13:02:51.899582 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-console-recordings-pv\" (UniqueName: \"kubernetes.io/nfs/2af7384b-c49b-40ed-a3b7-3d87895a71f4-nova-console-recordings-pv\") pod \"nova-console-recorder-cbd4f787-dj5nr\" (UID: \"2af7384b-c49b-40ed-a3b7-3d87895a71f4\") " pod="sushy-emulator/nova-console-recorder-cbd4f787-dj5nr"
Mar 13 13:02:51.904175 master-0 kubenswrapper[28149]: I0313 13:02:51.904098 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-client-config\" (UniqueName: \"kubernetes.io/secret/2af7384b-c49b-40ed-a3b7-3d87895a71f4-os-client-config\") pod \"nova-console-recorder-cbd4f787-dj5nr\" (UID: \"2af7384b-c49b-40ed-a3b7-3d87895a71f4\") " pod="sushy-emulator/nova-console-recorder-cbd4f787-dj5nr"
Mar 13 13:02:52.000868 master-0 kubenswrapper[28149]: I0313 13:02:52.000800 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tfmgh\" (UniqueName: \"kubernetes.io/projected/2af7384b-c49b-40ed-a3b7-3d87895a71f4-kube-api-access-tfmgh\") pod \"nova-console-recorder-cbd4f787-dj5nr\" (UID: \"2af7384b-c49b-40ed-a3b7-3d87895a71f4\") " pod="sushy-emulator/nova-console-recorder-cbd4f787-dj5nr"
Mar 13 13:02:52.001634 master-0 kubenswrapper[28149]: I0313 13:02:52.001261 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-console-recordings-pv\" (UniqueName: \"kubernetes.io/nfs/2af7384b-c49b-40ed-a3b7-3d87895a71f4-nova-console-recordings-pv\") pod \"nova-console-recorder-cbd4f787-dj5nr\" (UID: \"2af7384b-c49b-40ed-a3b7-3d87895a71f4\") " pod="sushy-emulator/nova-console-recorder-cbd4f787-dj5nr"
Mar 13 13:02:52.021490 master-0 kubenswrapper[28149]: I0313 13:02:52.021430 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tfmgh\" (UniqueName: \"kubernetes.io/projected/2af7384b-c49b-40ed-a3b7-3d87895a71f4-kube-api-access-tfmgh\") pod \"nova-console-recorder-cbd4f787-dj5nr\" (UID: \"2af7384b-c49b-40ed-a3b7-3d87895a71f4\") " pod="sushy-emulator/nova-console-recorder-cbd4f787-dj5nr"
Mar 13 13:02:52.615567 master-0 kubenswrapper[28149]: I0313 13:02:52.615510 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-console-recordings-pv\" (UniqueName: \"kubernetes.io/nfs/2af7384b-c49b-40ed-a3b7-3d87895a71f4-nova-console-recordings-pv\") pod \"nova-console-recorder-cbd4f787-dj5nr\" (UID: \"2af7384b-c49b-40ed-a3b7-3d87895a71f4\") " pod="sushy-emulator/nova-console-recorder-cbd4f787-dj5nr"
Mar 13 13:02:52.678302 master-0 kubenswrapper[28149]: I0313 13:02:52.678220 28149 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="sushy-emulator/nova-console-recorder-cbd4f787-dj5nr"
Mar 13 13:02:53.110749 master-0 kubenswrapper[28149]: I0313 13:02:53.110675 28149 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["sushy-emulator/nova-console-recorder-cbd4f787-dj5nr"]
Mar 13 13:02:53.711577 master-0 kubenswrapper[28149]: I0313 13:02:53.711496 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="sushy-emulator/nova-console-recorder-cbd4f787-dj5nr" event={"ID":"2af7384b-c49b-40ed-a3b7-3d87895a71f4","Type":"ContainerStarted","Data":"1976708f7765dea7b5e449c22b4a3de61307017a4bce5804bc6ee7483ae511ef"}
Mar 13 13:03:03.001167 master-0 kubenswrapper[28149]: I0313 13:03:03.000985 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="sushy-emulator/nova-console-recorder-cbd4f787-dj5nr" event={"ID":"2af7384b-c49b-40ed-a3b7-3d87895a71f4","Type":"ContainerStarted","Data":"e252995c2967ca18e0838e1cd57ff3a6e5b7ee03ac5edef6c9e8f4873c8e5033"}
Mar 13 13:03:04.037390 master-0 kubenswrapper[28149]: I0313 13:03:04.037309 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="sushy-emulator/nova-console-recorder-cbd4f787-dj5nr" event={"ID":"2af7384b-c49b-40ed-a3b7-3d87895a71f4","Type":"ContainerStarted","Data":"b88d522245470192a1cb148f3f38e11bc7232efba817ac1d5fa8a42cd6729c9a"}
Mar 13 13:03:04.105255 master-0 kubenswrapper[28149]: I0313 13:03:04.105135 28149 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="sushy-emulator/nova-console-recorder-cbd4f787-dj5nr" podStartSLOduration=3.008689565 podStartE2EDuration="13.105102065s" podCreationTimestamp="2026-03-13 13:02:51 +0000 UTC" firstStartedPulling="2026-03-13 13:02:53.128822555 +0000 UTC m=+546.782287724" lastFinishedPulling="2026-03-13 13:03:03.225235065 +0000 UTC m=+556.878700224" observedRunningTime="2026-03-13 13:03:04.095997524 +0000 UTC m=+557.749462693" watchObservedRunningTime="2026-03-13 13:03:04.105102065 +0000 UTC m=+557.758567224"
Mar 13 13:03:32.760421 master-0 kubenswrapper[28149]: I0313 13:03:32.757690 28149 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d44mrdw"]
Mar 13 13:03:32.760421 master-0 kubenswrapper[28149]: I0313 13:03:32.759849 28149 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d44mrdw"
Mar 13 13:03:32.762050 master-0 kubenswrapper[28149]: I0313 13:03:32.761996 28149 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-kbf84"
Mar 13 13:03:32.774460 master-0 kubenswrapper[28149]: I0313 13:03:32.774209 28149 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d44mrdw"]
Mar 13 13:03:32.809786 master-0 kubenswrapper[28149]: I0313 13:03:32.809721 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/8bfee105-de14-48bf-a066-dbf73d190e5a-bundle\") pod \"7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d44mrdw\" (UID: \"8bfee105-de14-48bf-a066-dbf73d190e5a\") " pod="openshift-marketplace/7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d44mrdw"
Mar 13 13:03:32.810053 master-0 kubenswrapper[28149]: I0313 13:03:32.809834 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zc529\" (UniqueName: \"kubernetes.io/projected/8bfee105-de14-48bf-a066-dbf73d190e5a-kube-api-access-zc529\") pod \"7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d44mrdw\" (UID: \"8bfee105-de14-48bf-a066-dbf73d190e5a\") " pod="openshift-marketplace/7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d44mrdw"
Mar 13 13:03:32.810053 master-0 kubenswrapper[28149]: I0313 13:03:32.809899 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/8bfee105-de14-48bf-a066-dbf73d190e5a-util\") pod \"7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d44mrdw\" (UID: \"8bfee105-de14-48bf-a066-dbf73d190e5a\") " pod="openshift-marketplace/7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d44mrdw"
Mar 13 13:03:32.911076 master-0 kubenswrapper[28149]: I0313 13:03:32.910991 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/8bfee105-de14-48bf-a066-dbf73d190e5a-bundle\") pod \"7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d44mrdw\" (UID: \"8bfee105-de14-48bf-a066-dbf73d190e5a\") " pod="openshift-marketplace/7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d44mrdw"
Mar 13 13:03:32.911339 master-0 kubenswrapper[28149]: I0313 13:03:32.911173 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zc529\" (UniqueName: \"kubernetes.io/projected/8bfee105-de14-48bf-a066-dbf73d190e5a-kube-api-access-zc529\") pod \"7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d44mrdw\" (UID: \"8bfee105-de14-48bf-a066-dbf73d190e5a\") " pod="openshift-marketplace/7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d44mrdw"
Mar 13 13:03:32.911339 master-0 kubenswrapper[28149]: I0313 13:03:32.911236 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/8bfee105-de14-48bf-a066-dbf73d190e5a-util\") pod \"7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d44mrdw\" (UID: \"8bfee105-de14-48bf-a066-dbf73d190e5a\") " pod="openshift-marketplace/7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d44mrdw"
Mar 13 13:03:32.911844 master-0 kubenswrapper[28149]: I0313 13:03:32.911816 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/8bfee105-de14-48bf-a066-dbf73d190e5a-util\") pod \"7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d44mrdw\" (UID: \"8bfee105-de14-48bf-a066-dbf73d190e5a\") " pod="openshift-marketplace/7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d44mrdw"
Mar 13 13:03:32.911964 master-0 kubenswrapper[28149]: I0313 13:03:32.911918 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/8bfee105-de14-48bf-a066-dbf73d190e5a-bundle\") pod \"7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d44mrdw\" (UID: \"8bfee105-de14-48bf-a066-dbf73d190e5a\") " pod="openshift-marketplace/7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d44mrdw"
Mar 13 13:03:32.930760 master-0 kubenswrapper[28149]: I0313 13:03:32.930705 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zc529\" (UniqueName: \"kubernetes.io/projected/8bfee105-de14-48bf-a066-dbf73d190e5a-kube-api-access-zc529\") pod \"7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d44mrdw\" (UID: \"8bfee105-de14-48bf-a066-dbf73d190e5a\") " pod="openshift-marketplace/7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d44mrdw"
Mar 13 13:03:33.130300 master-0 kubenswrapper[28149]: I0313 13:03:33.130211 28149 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d44mrdw"
Mar 13 13:03:33.596590 master-0 kubenswrapper[28149]: I0313 13:03:33.596547 28149 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d44mrdw"]
Mar 13 13:03:33.603449 master-0 kubenswrapper[28149]: W0313 13:03:33.603381 28149 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8bfee105_de14_48bf_a066_dbf73d190e5a.slice/crio-f2ae2ef306602271800d1c8fb4c469d891dd3f2de7c433351b3c62d88481c99d WatchSource:0}: Error finding container f2ae2ef306602271800d1c8fb4c469d891dd3f2de7c433351b3c62d88481c99d: Status 404 returned error can't find the container with id f2ae2ef306602271800d1c8fb4c469d891dd3f2de7c433351b3c62d88481c99d
Mar 13 13:03:34.265919 master-0 kubenswrapper[28149]: I0313 13:03:34.265776 28149 generic.go:334] "Generic (PLEG): container finished" podID="8bfee105-de14-48bf-a066-dbf73d190e5a" containerID="2c322861691900403b9a76aa9d153d9e5ff87f3e4d0cb506081bb9d7cb155a6e" exitCode=0
Mar 13 13:03:34.265919 master-0 kubenswrapper[28149]: I0313 13:03:34.265832 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d44mrdw" event={"ID":"8bfee105-de14-48bf-a066-dbf73d190e5a","Type":"ContainerDied","Data":"2c322861691900403b9a76aa9d153d9e5ff87f3e4d0cb506081bb9d7cb155a6e"}
Mar 13 13:03:34.265919 master-0 kubenswrapper[28149]: I0313 13:03:34.265864 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d44mrdw" event={"ID":"8bfee105-de14-48bf-a066-dbf73d190e5a","Type":"ContainerStarted","Data":"f2ae2ef306602271800d1c8fb4c469d891dd3f2de7c433351b3c62d88481c99d"}
Mar 13 13:03:36.318398 master-0 kubenswrapper[28149]: I0313 13:03:36.318317 28149 generic.go:334] "Generic (PLEG): container finished" podID="8bfee105-de14-48bf-a066-dbf73d190e5a" containerID="20fe87737a9a314211fa891c7781a3079cdcf4d52713b3f0a2d51bdb7c67ff43" exitCode=0
Mar 13 13:03:36.318398 master-0 kubenswrapper[28149]: I0313 13:03:36.318377 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d44mrdw" event={"ID":"8bfee105-de14-48bf-a066-dbf73d190e5a","Type":"ContainerDied","Data":"20fe87737a9a314211fa891c7781a3079cdcf4d52713b3f0a2d51bdb7c67ff43"}
Mar 13 13:03:37.328777 master-0 kubenswrapper[28149]: I0313 13:03:37.328717 28149 generic.go:334] "Generic (PLEG): container finished" podID="8bfee105-de14-48bf-a066-dbf73d190e5a" containerID="5209687f9d09e47123556e3dbf4beba11021e8771a095e1cd71940168cda7b98" exitCode=0
Mar 13 13:03:37.328777 master-0 kubenswrapper[28149]: I0313 13:03:37.328759 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d44mrdw" event={"ID":"8bfee105-de14-48bf-a066-dbf73d190e5a","Type":"ContainerDied","Data":"5209687f9d09e47123556e3dbf4beba11021e8771a095e1cd71940168cda7b98"}
Mar 13 13:03:38.620056 master-0 kubenswrapper[28149]: I0313 13:03:38.619934 28149 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d44mrdw"
Mar 13 13:03:38.818988 master-0 kubenswrapper[28149]: I0313 13:03:38.818904 28149 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zc529\" (UniqueName: \"kubernetes.io/projected/8bfee105-de14-48bf-a066-dbf73d190e5a-kube-api-access-zc529\") pod \"8bfee105-de14-48bf-a066-dbf73d190e5a\" (UID: \"8bfee105-de14-48bf-a066-dbf73d190e5a\") "
Mar 13 13:03:38.819281 master-0 kubenswrapper[28149]: I0313 13:03:38.819100 28149 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/8bfee105-de14-48bf-a066-dbf73d190e5a-bundle\") pod \"8bfee105-de14-48bf-a066-dbf73d190e5a\" (UID: \"8bfee105-de14-48bf-a066-dbf73d190e5a\") "
Mar 13 13:03:38.819281 master-0 kubenswrapper[28149]: I0313 13:03:38.819134 28149 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/8bfee105-de14-48bf-a066-dbf73d190e5a-util\") pod \"8bfee105-de14-48bf-a066-dbf73d190e5a\" (UID: \"8bfee105-de14-48bf-a066-dbf73d190e5a\") "
Mar 13 13:03:38.820599 master-0 kubenswrapper[28149]: I0313 13:03:38.820524 28149 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8bfee105-de14-48bf-a066-dbf73d190e5a-bundle" (OuterVolumeSpecName: "bundle") pod "8bfee105-de14-48bf-a066-dbf73d190e5a" (UID: "8bfee105-de14-48bf-a066-dbf73d190e5a"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 13 13:03:38.822178 master-0 kubenswrapper[28149]: I0313 13:03:38.822103 28149 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8bfee105-de14-48bf-a066-dbf73d190e5a-kube-api-access-zc529" (OuterVolumeSpecName: "kube-api-access-zc529") pod "8bfee105-de14-48bf-a066-dbf73d190e5a" (UID: "8bfee105-de14-48bf-a066-dbf73d190e5a"). InnerVolumeSpecName "kube-api-access-zc529". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 13 13:03:38.833543 master-0 kubenswrapper[28149]: I0313 13:03:38.833431 28149 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8bfee105-de14-48bf-a066-dbf73d190e5a-util" (OuterVolumeSpecName: "util") pod "8bfee105-de14-48bf-a066-dbf73d190e5a" (UID: "8bfee105-de14-48bf-a066-dbf73d190e5a"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 13 13:03:38.921129 master-0 kubenswrapper[28149]: I0313 13:03:38.920986 28149 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zc529\" (UniqueName: \"kubernetes.io/projected/8bfee105-de14-48bf-a066-dbf73d190e5a-kube-api-access-zc529\") on node \"master-0\" DevicePath \"\""
Mar 13 13:03:38.921129 master-0 kubenswrapper[28149]: I0313 13:03:38.921049 28149 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/8bfee105-de14-48bf-a066-dbf73d190e5a-bundle\") on node \"master-0\" DevicePath \"\""
Mar 13 13:03:38.921129 master-0 kubenswrapper[28149]: I0313 13:03:38.921062 28149 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/8bfee105-de14-48bf-a066-dbf73d190e5a-util\") on node \"master-0\" DevicePath \"\""
Mar 13 13:03:39.351583 master-0 kubenswrapper[28149]: I0313 13:03:39.351509 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d44mrdw" event={"ID":"8bfee105-de14-48bf-a066-dbf73d190e5a","Type":"ContainerDied","Data":"f2ae2ef306602271800d1c8fb4c469d891dd3f2de7c433351b3c62d88481c99d"}
Mar 13 13:03:39.351583 master-0 kubenswrapper[28149]: I0313 13:03:39.351580 28149 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f2ae2ef306602271800d1c8fb4c469d891dd3f2de7c433351b3c62d88481c99d"
Mar 13 13:03:39.351864 master-0 kubenswrapper[28149]: I0313 13:03:39.351661 28149 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d44mrdw"
Mar 13 13:03:47.425241 master-0 kubenswrapper[28149]: I0313 13:03:47.425165 28149 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-storage/lvms-operator-75849445b5-dr589"]
Mar 13 13:03:47.426074 master-0 kubenswrapper[28149]: E0313 13:03:47.425443 28149 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8bfee105-de14-48bf-a066-dbf73d190e5a" containerName="pull"
Mar 13 13:03:47.426074 master-0 kubenswrapper[28149]: I0313 13:03:47.425464 28149 state_mem.go:107] "Deleted CPUSet assignment" podUID="8bfee105-de14-48bf-a066-dbf73d190e5a" containerName="pull"
Mar 13 13:03:47.426074 master-0 kubenswrapper[28149]: E0313 13:03:47.425505 28149 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8bfee105-de14-48bf-a066-dbf73d190e5a" containerName="util"
Mar 13 13:03:47.426074 master-0 kubenswrapper[28149]: I0313 13:03:47.425512 28149 state_mem.go:107] "Deleted CPUSet assignment" podUID="8bfee105-de14-48bf-a066-dbf73d190e5a" containerName="util"
Mar 13 13:03:47.426074 master-0 kubenswrapper[28149]: E0313 13:03:47.425522 28149 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8bfee105-de14-48bf-a066-dbf73d190e5a" containerName="extract"
Mar 13 13:03:47.426074 master-0 kubenswrapper[28149]: I0313 13:03:47.425529 28149 state_mem.go:107] "Deleted CPUSet assignment" podUID="8bfee105-de14-48bf-a066-dbf73d190e5a" containerName="extract"
Mar 13 13:03:47.426074 master-0 kubenswrapper[28149]: I0313 13:03:47.425669 28149 memory_manager.go:354] "RemoveStaleState removing state" podUID="8bfee105-de14-48bf-a066-dbf73d190e5a" containerName="extract"
Mar 13 13:03:47.426395 master-0 kubenswrapper[28149]: I0313 13:03:47.426280 28149 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-storage/lvms-operator-75849445b5-dr589"
Mar 13 13:03:47.429045 master-0 kubenswrapper[28149]: I0313 13:03:47.429009 28149 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-storage"/"lvms-operator-service-cert"
Mar 13 13:03:47.431206 master-0 kubenswrapper[28149]: I0313 13:03:47.431167 28149 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-storage"/"openshift-service-ca.crt"
Mar 13 13:03:47.431401 master-0 kubenswrapper[28149]: I0313 13:03:47.431369 28149 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-storage"/"kube-root-ca.crt"
Mar 13 13:03:47.431616 master-0 kubenswrapper[28149]: I0313 13:03:47.431585 28149 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-storage"/"lvms-operator-webhook-server-cert"
Mar 13 13:03:47.432296 master-0 kubenswrapper[28149]: I0313 13:03:47.432268 28149 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-storage"/"lvms-operator-metrics-cert"
Mar 13 13:03:47.454786 master-0 kubenswrapper[28149]: I0313 13:03:47.454715 28149 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-storage/lvms-operator-75849445b5-dr589"]
Mar 13 13:03:47.624459 master-0 kubenswrapper[28149]: I0313 13:03:47.624418 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xjsw9\" (UniqueName: 
\"kubernetes.io/projected/752362ab-8299-4370-aefa-c8b6d76e310e-kube-api-access-xjsw9\") pod \"lvms-operator-75849445b5-dr589\" (UID: \"752362ab-8299-4370-aefa-c8b6d76e310e\") " pod="openshift-storage/lvms-operator-75849445b5-dr589" Mar 13 13:03:47.624769 master-0 kubenswrapper[28149]: I0313 13:03:47.624493 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/752362ab-8299-4370-aefa-c8b6d76e310e-apiservice-cert\") pod \"lvms-operator-75849445b5-dr589\" (UID: \"752362ab-8299-4370-aefa-c8b6d76e310e\") " pod="openshift-storage/lvms-operator-75849445b5-dr589" Mar 13 13:03:47.624769 master-0 kubenswrapper[28149]: I0313 13:03:47.624529 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-cert\" (UniqueName: \"kubernetes.io/secret/752362ab-8299-4370-aefa-c8b6d76e310e-metrics-cert\") pod \"lvms-operator-75849445b5-dr589\" (UID: \"752362ab-8299-4370-aefa-c8b6d76e310e\") " pod="openshift-storage/lvms-operator-75849445b5-dr589" Mar 13 13:03:47.624769 master-0 kubenswrapper[28149]: I0313 13:03:47.624548 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/752362ab-8299-4370-aefa-c8b6d76e310e-socket-dir\") pod \"lvms-operator-75849445b5-dr589\" (UID: \"752362ab-8299-4370-aefa-c8b6d76e310e\") " pod="openshift-storage/lvms-operator-75849445b5-dr589" Mar 13 13:03:47.624769 master-0 kubenswrapper[28149]: I0313 13:03:47.624634 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/752362ab-8299-4370-aefa-c8b6d76e310e-webhook-cert\") pod \"lvms-operator-75849445b5-dr589\" (UID: \"752362ab-8299-4370-aefa-c8b6d76e310e\") " pod="openshift-storage/lvms-operator-75849445b5-dr589" Mar 13 13:03:47.726890 master-0 kubenswrapper[28149]: 
I0313 13:03:47.726737 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/752362ab-8299-4370-aefa-c8b6d76e310e-webhook-cert\") pod \"lvms-operator-75849445b5-dr589\" (UID: \"752362ab-8299-4370-aefa-c8b6d76e310e\") " pod="openshift-storage/lvms-operator-75849445b5-dr589" Mar 13 13:03:47.727099 master-0 kubenswrapper[28149]: I0313 13:03:47.726892 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xjsw9\" (UniqueName: \"kubernetes.io/projected/752362ab-8299-4370-aefa-c8b6d76e310e-kube-api-access-xjsw9\") pod \"lvms-operator-75849445b5-dr589\" (UID: \"752362ab-8299-4370-aefa-c8b6d76e310e\") " pod="openshift-storage/lvms-operator-75849445b5-dr589" Mar 13 13:03:47.727821 master-0 kubenswrapper[28149]: I0313 13:03:47.727785 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/752362ab-8299-4370-aefa-c8b6d76e310e-apiservice-cert\") pod \"lvms-operator-75849445b5-dr589\" (UID: \"752362ab-8299-4370-aefa-c8b6d76e310e\") " pod="openshift-storage/lvms-operator-75849445b5-dr589" Mar 13 13:03:47.727917 master-0 kubenswrapper[28149]: I0313 13:03:47.727847 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-cert\" (UniqueName: \"kubernetes.io/secret/752362ab-8299-4370-aefa-c8b6d76e310e-metrics-cert\") pod \"lvms-operator-75849445b5-dr589\" (UID: \"752362ab-8299-4370-aefa-c8b6d76e310e\") " pod="openshift-storage/lvms-operator-75849445b5-dr589" Mar 13 13:03:47.728451 master-0 kubenswrapper[28149]: I0313 13:03:47.728392 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/752362ab-8299-4370-aefa-c8b6d76e310e-socket-dir\") pod \"lvms-operator-75849445b5-dr589\" (UID: \"752362ab-8299-4370-aefa-c8b6d76e310e\") " 
pod="openshift-storage/lvms-operator-75849445b5-dr589" Mar 13 13:03:47.728940 master-0 kubenswrapper[28149]: I0313 13:03:47.728900 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/752362ab-8299-4370-aefa-c8b6d76e310e-socket-dir\") pod \"lvms-operator-75849445b5-dr589\" (UID: \"752362ab-8299-4370-aefa-c8b6d76e310e\") " pod="openshift-storage/lvms-operator-75849445b5-dr589" Mar 13 13:03:47.730967 master-0 kubenswrapper[28149]: I0313 13:03:47.730913 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/752362ab-8299-4370-aefa-c8b6d76e310e-webhook-cert\") pod \"lvms-operator-75849445b5-dr589\" (UID: \"752362ab-8299-4370-aefa-c8b6d76e310e\") " pod="openshift-storage/lvms-operator-75849445b5-dr589" Mar 13 13:03:47.731737 master-0 kubenswrapper[28149]: I0313 13:03:47.731686 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-cert\" (UniqueName: \"kubernetes.io/secret/752362ab-8299-4370-aefa-c8b6d76e310e-metrics-cert\") pod \"lvms-operator-75849445b5-dr589\" (UID: \"752362ab-8299-4370-aefa-c8b6d76e310e\") " pod="openshift-storage/lvms-operator-75849445b5-dr589" Mar 13 13:03:47.732900 master-0 kubenswrapper[28149]: I0313 13:03:47.732702 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/752362ab-8299-4370-aefa-c8b6d76e310e-apiservice-cert\") pod \"lvms-operator-75849445b5-dr589\" (UID: \"752362ab-8299-4370-aefa-c8b6d76e310e\") " pod="openshift-storage/lvms-operator-75849445b5-dr589" Mar 13 13:03:47.749156 master-0 kubenswrapper[28149]: I0313 13:03:47.749088 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xjsw9\" (UniqueName: \"kubernetes.io/projected/752362ab-8299-4370-aefa-c8b6d76e310e-kube-api-access-xjsw9\") pod \"lvms-operator-75849445b5-dr589\" (UID: 
\"752362ab-8299-4370-aefa-c8b6d76e310e\") " pod="openshift-storage/lvms-operator-75849445b5-dr589" Mar 13 13:03:48.044467 master-0 kubenswrapper[28149]: I0313 13:03:48.044412 28149 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-storage/lvms-operator-75849445b5-dr589" Mar 13 13:03:48.696419 master-0 kubenswrapper[28149]: I0313 13:03:48.696386 28149 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-storage/lvms-operator-75849445b5-dr589"] Mar 13 13:03:49.428198 master-0 kubenswrapper[28149]: I0313 13:03:49.428107 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-storage/lvms-operator-75849445b5-dr589" event={"ID":"752362ab-8299-4370-aefa-c8b6d76e310e","Type":"ContainerStarted","Data":"00b1f995bcd9dec7b740a7f3601e628a2c1e4a3e3af8686962e3252d9d134150"} Mar 13 13:03:55.500352 master-0 kubenswrapper[28149]: I0313 13:03:55.500277 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-storage/lvms-operator-75849445b5-dr589" event={"ID":"752362ab-8299-4370-aefa-c8b6d76e310e","Type":"ContainerStarted","Data":"5805fbb525103e2f33597b86c122fcaa128a664c3925e331eda03887ce7d6d6c"} Mar 13 13:03:55.501269 master-0 kubenswrapper[28149]: I0313 13:03:55.500850 28149 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-storage/lvms-operator-75849445b5-dr589" Mar 13 13:03:55.507783 master-0 kubenswrapper[28149]: I0313 13:03:55.507748 28149 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-storage/lvms-operator-75849445b5-dr589" Mar 13 13:03:55.603978 master-0 kubenswrapper[28149]: I0313 13:03:55.603887 28149 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-storage/lvms-operator-75849445b5-dr589" podStartSLOduration=2.931540678 podStartE2EDuration="8.603867832s" podCreationTimestamp="2026-03-13 13:03:47 +0000 UTC" firstStartedPulling="2026-03-13 13:03:48.700393 +0000 UTC m=+602.353858159" 
lastFinishedPulling="2026-03-13 13:03:54.372720154 +0000 UTC m=+608.026185313" observedRunningTime="2026-03-13 13:03:55.557362918 +0000 UTC m=+609.210828067" watchObservedRunningTime="2026-03-13 13:03:55.603867832 +0000 UTC m=+609.257332991" Mar 13 13:03:59.171674 master-0 kubenswrapper[28149]: I0313 13:03:59.171614 28149 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1jftkg"] Mar 13 13:03:59.173234 master-0 kubenswrapper[28149]: I0313 13:03:59.173197 28149 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1jftkg" Mar 13 13:03:59.175417 master-0 kubenswrapper[28149]: I0313 13:03:59.175378 28149 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-kbf84" Mar 13 13:03:59.190271 master-0 kubenswrapper[28149]: I0313 13:03:59.189466 28149 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1jftkg"] Mar 13 13:03:59.226583 master-0 kubenswrapper[28149]: I0313 13:03:59.226526 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/12a5c9b1-e543-474d-bb59-6e7d08ee878f-bundle\") pod \"2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1jftkg\" (UID: \"12a5c9b1-e543-474d-bb59-6e7d08ee878f\") " pod="openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1jftkg" Mar 13 13:03:59.226829 master-0 kubenswrapper[28149]: I0313 13:03:59.226767 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/12a5c9b1-e543-474d-bb59-6e7d08ee878f-util\") pod \"2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1jftkg\" (UID: 
\"12a5c9b1-e543-474d-bb59-6e7d08ee878f\") " pod="openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1jftkg" Mar 13 13:03:59.227117 master-0 kubenswrapper[28149]: I0313 13:03:59.227076 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9dvq4\" (UniqueName: \"kubernetes.io/projected/12a5c9b1-e543-474d-bb59-6e7d08ee878f-kube-api-access-9dvq4\") pod \"2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1jftkg\" (UID: \"12a5c9b1-e543-474d-bb59-6e7d08ee878f\") " pod="openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1jftkg" Mar 13 13:03:59.328631 master-0 kubenswrapper[28149]: I0313 13:03:59.328467 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9dvq4\" (UniqueName: \"kubernetes.io/projected/12a5c9b1-e543-474d-bb59-6e7d08ee878f-kube-api-access-9dvq4\") pod \"2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1jftkg\" (UID: \"12a5c9b1-e543-474d-bb59-6e7d08ee878f\") " pod="openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1jftkg" Mar 13 13:03:59.328998 master-0 kubenswrapper[28149]: I0313 13:03:59.328751 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/12a5c9b1-e543-474d-bb59-6e7d08ee878f-bundle\") pod \"2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1jftkg\" (UID: \"12a5c9b1-e543-474d-bb59-6e7d08ee878f\") " pod="openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1jftkg" Mar 13 13:03:59.328998 master-0 kubenswrapper[28149]: I0313 13:03:59.328823 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/12a5c9b1-e543-474d-bb59-6e7d08ee878f-util\") pod \"2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1jftkg\" (UID: \"12a5c9b1-e543-474d-bb59-6e7d08ee878f\") " 
pod="openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1jftkg" Mar 13 13:03:59.329531 master-0 kubenswrapper[28149]: I0313 13:03:59.329458 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/12a5c9b1-e543-474d-bb59-6e7d08ee878f-util\") pod \"2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1jftkg\" (UID: \"12a5c9b1-e543-474d-bb59-6e7d08ee878f\") " pod="openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1jftkg" Mar 13 13:03:59.329700 master-0 kubenswrapper[28149]: I0313 13:03:59.329610 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/12a5c9b1-e543-474d-bb59-6e7d08ee878f-bundle\") pod \"2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1jftkg\" (UID: \"12a5c9b1-e543-474d-bb59-6e7d08ee878f\") " pod="openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1jftkg" Mar 13 13:03:59.362314 master-0 kubenswrapper[28149]: I0313 13:03:59.362220 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9dvq4\" (UniqueName: \"kubernetes.io/projected/12a5c9b1-e543-474d-bb59-6e7d08ee878f-kube-api-access-9dvq4\") pod \"2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1jftkg\" (UID: \"12a5c9b1-e543-474d-bb59-6e7d08ee878f\") " pod="openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1jftkg" Mar 13 13:03:59.492997 master-0 kubenswrapper[28149]: I0313 13:03:59.492848 28149 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1jftkg" Mar 13 13:03:59.782972 master-0 kubenswrapper[28149]: I0313 13:03:59.781921 28149 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874xbgw7"] Mar 13 13:03:59.784599 master-0 kubenswrapper[28149]: I0313 13:03:59.784074 28149 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874xbgw7" Mar 13 13:04:00.007172 master-0 kubenswrapper[28149]: I0313 13:04:00.005205 28149 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874xbgw7"] Mar 13 13:04:00.049238 master-0 kubenswrapper[28149]: I0313 13:04:00.048974 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/26e358a3-26d0-403c-baba-35680a60e33d-bundle\") pod \"1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874xbgw7\" (UID: \"26e358a3-26d0-403c-baba-35680a60e33d\") " pod="openshift-marketplace/1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874xbgw7" Mar 13 13:04:00.049238 master-0 kubenswrapper[28149]: I0313 13:04:00.049029 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ccc7k\" (UniqueName: \"kubernetes.io/projected/26e358a3-26d0-403c-baba-35680a60e33d-kube-api-access-ccc7k\") pod \"1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874xbgw7\" (UID: \"26e358a3-26d0-403c-baba-35680a60e33d\") " pod="openshift-marketplace/1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874xbgw7" Mar 13 13:04:00.049238 master-0 kubenswrapper[28149]: I0313 13:04:00.049114 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"util\" (UniqueName: \"kubernetes.io/empty-dir/26e358a3-26d0-403c-baba-35680a60e33d-util\") pod \"1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874xbgw7\" (UID: \"26e358a3-26d0-403c-baba-35680a60e33d\") " pod="openshift-marketplace/1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874xbgw7" Mar 13 13:04:00.158314 master-0 kubenswrapper[28149]: I0313 13:04:00.158256 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/26e358a3-26d0-403c-baba-35680a60e33d-bundle\") pod \"1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874xbgw7\" (UID: \"26e358a3-26d0-403c-baba-35680a60e33d\") " pod="openshift-marketplace/1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874xbgw7" Mar 13 13:04:00.158410 master-0 kubenswrapper[28149]: I0313 13:04:00.158321 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ccc7k\" (UniqueName: \"kubernetes.io/projected/26e358a3-26d0-403c-baba-35680a60e33d-kube-api-access-ccc7k\") pod \"1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874xbgw7\" (UID: \"26e358a3-26d0-403c-baba-35680a60e33d\") " pod="openshift-marketplace/1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874xbgw7" Mar 13 13:04:00.158410 master-0 kubenswrapper[28149]: I0313 13:04:00.158382 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/26e358a3-26d0-403c-baba-35680a60e33d-util\") pod \"1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874xbgw7\" (UID: \"26e358a3-26d0-403c-baba-35680a60e33d\") " pod="openshift-marketplace/1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874xbgw7" Mar 13 13:04:00.159121 master-0 kubenswrapper[28149]: I0313 13:04:00.159099 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/26e358a3-26d0-403c-baba-35680a60e33d-util\") pod 
\"1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874xbgw7\" (UID: \"26e358a3-26d0-403c-baba-35680a60e33d\") " pod="openshift-marketplace/1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874xbgw7" Mar 13 13:04:00.159373 master-0 kubenswrapper[28149]: I0313 13:04:00.159349 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/26e358a3-26d0-403c-baba-35680a60e33d-bundle\") pod \"1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874xbgw7\" (UID: \"26e358a3-26d0-403c-baba-35680a60e33d\") " pod="openshift-marketplace/1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874xbgw7" Mar 13 13:04:00.162877 master-0 kubenswrapper[28149]: I0313 13:04:00.162842 28149 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1jftkg"] Mar 13 13:04:00.179081 master-0 kubenswrapper[28149]: I0313 13:04:00.179022 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ccc7k\" (UniqueName: \"kubernetes.io/projected/26e358a3-26d0-403c-baba-35680a60e33d-kube-api-access-ccc7k\") pod \"1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874xbgw7\" (UID: \"26e358a3-26d0-403c-baba-35680a60e33d\") " pod="openshift-marketplace/1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874xbgw7" Mar 13 13:04:00.357828 master-0 kubenswrapper[28149]: I0313 13:04:00.357708 28149 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874xbgw7" Mar 13 13:04:00.734907 master-0 kubenswrapper[28149]: I0313 13:04:00.734549 28149 generic.go:334] "Generic (PLEG): container finished" podID="12a5c9b1-e543-474d-bb59-6e7d08ee878f" containerID="b4ffc69d2d551a6774bcbd9b4fe6277c18cda5b8b9189e80800404e16d0224a7" exitCode=0 Mar 13 13:04:00.738812 master-0 kubenswrapper[28149]: I0313 13:04:00.738751 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1jftkg" event={"ID":"12a5c9b1-e543-474d-bb59-6e7d08ee878f","Type":"ContainerDied","Data":"b4ffc69d2d551a6774bcbd9b4fe6277c18cda5b8b9189e80800404e16d0224a7"} Mar 13 13:04:00.738914 master-0 kubenswrapper[28149]: I0313 13:04:00.738811 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1jftkg" event={"ID":"12a5c9b1-e543-474d-bb59-6e7d08ee878f","Type":"ContainerStarted","Data":"2a572dc06700fd68f9680a3481f3a49448089c4dbbb8147350f317dd178524a9"} Mar 13 13:04:01.234717 master-0 kubenswrapper[28149]: W0313 13:04:01.234655 28149 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod26e358a3_26d0_403c_baba_35680a60e33d.slice/crio-4799ab6dc403447d9282d767bbf7affc9f5da4e259caf9e5c4682a3530bc9914 WatchSource:0}: Error finding container 4799ab6dc403447d9282d767bbf7affc9f5da4e259caf9e5c4682a3530bc9914: Status 404 returned error can't find the container with id 4799ab6dc403447d9282d767bbf7affc9f5da4e259caf9e5c4682a3530bc9914 Mar 13 13:04:01.249575 master-0 kubenswrapper[28149]: I0313 13:04:01.247938 28149 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874xbgw7"] Mar 13 13:04:01.290313 master-0 kubenswrapper[28149]: I0313 13:04:01.290275 28149 
kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5tb265"] Mar 13 13:04:01.292380 master-0 kubenswrapper[28149]: I0313 13:04:01.292357 28149 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5tb265" Mar 13 13:04:01.319298 master-0 kubenswrapper[28149]: I0313 13:04:01.319254 28149 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5tb265"] Mar 13 13:04:01.445183 master-0 kubenswrapper[28149]: I0313 13:04:01.445124 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-plxvs\" (UniqueName: \"kubernetes.io/projected/88cec94a-0b71-40c1-8c8f-e28b9b0c7880-kube-api-access-plxvs\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5tb265\" (UID: \"88cec94a-0b71-40c1-8c8f-e28b9b0c7880\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5tb265" Mar 13 13:04:01.445378 master-0 kubenswrapper[28149]: I0313 13:04:01.445195 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/88cec94a-0b71-40c1-8c8f-e28b9b0c7880-util\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5tb265\" (UID: \"88cec94a-0b71-40c1-8c8f-e28b9b0c7880\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5tb265" Mar 13 13:04:01.445378 master-0 kubenswrapper[28149]: I0313 13:04:01.445269 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/88cec94a-0b71-40c1-8c8f-e28b9b0c7880-bundle\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5tb265\" (UID: 
\"88cec94a-0b71-40c1-8c8f-e28b9b0c7880\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5tb265" Mar 13 13:04:01.546420 master-0 kubenswrapper[28149]: I0313 13:04:01.546354 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-plxvs\" (UniqueName: \"kubernetes.io/projected/88cec94a-0b71-40c1-8c8f-e28b9b0c7880-kube-api-access-plxvs\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5tb265\" (UID: \"88cec94a-0b71-40c1-8c8f-e28b9b0c7880\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5tb265" Mar 13 13:04:01.546420 master-0 kubenswrapper[28149]: I0313 13:04:01.546424 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/88cec94a-0b71-40c1-8c8f-e28b9b0c7880-util\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5tb265\" (UID: \"88cec94a-0b71-40c1-8c8f-e28b9b0c7880\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5tb265" Mar 13 13:04:01.546751 master-0 kubenswrapper[28149]: I0313 13:04:01.546527 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/88cec94a-0b71-40c1-8c8f-e28b9b0c7880-bundle\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5tb265\" (UID: \"88cec94a-0b71-40c1-8c8f-e28b9b0c7880\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5tb265" Mar 13 13:04:01.547162 master-0 kubenswrapper[28149]: I0313 13:04:01.547118 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/88cec94a-0b71-40c1-8c8f-e28b9b0c7880-bundle\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5tb265\" (UID: \"88cec94a-0b71-40c1-8c8f-e28b9b0c7880\") " 
pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5tb265" Mar 13 13:04:01.547329 master-0 kubenswrapper[28149]: I0313 13:04:01.547279 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/88cec94a-0b71-40c1-8c8f-e28b9b0c7880-util\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5tb265\" (UID: \"88cec94a-0b71-40c1-8c8f-e28b9b0c7880\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5tb265" Mar 13 13:04:01.560828 master-0 kubenswrapper[28149]: I0313 13:04:01.560785 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-plxvs\" (UniqueName: \"kubernetes.io/projected/88cec94a-0b71-40c1-8c8f-e28b9b0c7880-kube-api-access-plxvs\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5tb265\" (UID: \"88cec94a-0b71-40c1-8c8f-e28b9b0c7880\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5tb265" Mar 13 13:04:01.699162 master-0 kubenswrapper[28149]: I0313 13:04:01.699087 28149 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5tb265" Mar 13 13:04:01.745266 master-0 kubenswrapper[28149]: I0313 13:04:01.745205 28149 generic.go:334] "Generic (PLEG): container finished" podID="26e358a3-26d0-403c-baba-35680a60e33d" containerID="1f6253f1fac547528c1b80c5fd66c4e1539d5bffc866ac233f63ad9b25da8770" exitCode=0 Mar 13 13:04:01.745266 master-0 kubenswrapper[28149]: I0313 13:04:01.745267 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874xbgw7" event={"ID":"26e358a3-26d0-403c-baba-35680a60e33d","Type":"ContainerDied","Data":"1f6253f1fac547528c1b80c5fd66c4e1539d5bffc866ac233f63ad9b25da8770"} Mar 13 13:04:01.745560 master-0 kubenswrapper[28149]: I0313 13:04:01.745297 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874xbgw7" event={"ID":"26e358a3-26d0-403c-baba-35680a60e33d","Type":"ContainerStarted","Data":"4799ab6dc403447d9282d767bbf7affc9f5da4e259caf9e5c4682a3530bc9914"} Mar 13 13:04:02.221164 master-0 kubenswrapper[28149]: W0313 13:04:02.221078 28149 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod88cec94a_0b71_40c1_8c8f_e28b9b0c7880.slice/crio-b46e95046551a91c33557d7cdc7e08ef5d98aca374875e426a215cc2a0aedd4e WatchSource:0}: Error finding container b46e95046551a91c33557d7cdc7e08ef5d98aca374875e426a215cc2a0aedd4e: Status 404 returned error can't find the container with id b46e95046551a91c33557d7cdc7e08ef5d98aca374875e426a215cc2a0aedd4e Mar 13 13:04:02.224422 master-0 kubenswrapper[28149]: I0313 13:04:02.224381 28149 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5tb265"] Mar 13 13:04:02.756318 master-0 kubenswrapper[28149]: I0313 13:04:02.756111 28149 
generic.go:334] "Generic (PLEG): container finished" podID="12a5c9b1-e543-474d-bb59-6e7d08ee878f" containerID="f826c817acc200c92d0fa244348c67146689abb21775e45fbca4399095ac0656" exitCode=0 Mar 13 13:04:02.756318 master-0 kubenswrapper[28149]: I0313 13:04:02.756171 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1jftkg" event={"ID":"12a5c9b1-e543-474d-bb59-6e7d08ee878f","Type":"ContainerDied","Data":"f826c817acc200c92d0fa244348c67146689abb21775e45fbca4399095ac0656"} Mar 13 13:04:02.758891 master-0 kubenswrapper[28149]: I0313 13:04:02.758815 28149 generic.go:334] "Generic (PLEG): container finished" podID="88cec94a-0b71-40c1-8c8f-e28b9b0c7880" containerID="d600cc2ab4c65baac8b9d7d8638ab35d2f460f27e73a0ae25423e6b92bbd8495" exitCode=0 Mar 13 13:04:02.758891 master-0 kubenswrapper[28149]: I0313 13:04:02.758847 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5tb265" event={"ID":"88cec94a-0b71-40c1-8c8f-e28b9b0c7880","Type":"ContainerDied","Data":"d600cc2ab4c65baac8b9d7d8638ab35d2f460f27e73a0ae25423e6b92bbd8495"} Mar 13 13:04:02.758891 master-0 kubenswrapper[28149]: I0313 13:04:02.758866 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5tb265" event={"ID":"88cec94a-0b71-40c1-8c8f-e28b9b0c7880","Type":"ContainerStarted","Data":"b46e95046551a91c33557d7cdc7e08ef5d98aca374875e426a215cc2a0aedd4e"} Mar 13 13:04:03.770448 master-0 kubenswrapper[28149]: I0313 13:04:03.770407 28149 generic.go:334] "Generic (PLEG): container finished" podID="12a5c9b1-e543-474d-bb59-6e7d08ee878f" containerID="05c745ad98cc171989dd7e2ee6106b8e1432af9e05d167fd5824c80871308ef7" exitCode=0 Mar 13 13:04:03.771181 master-0 kubenswrapper[28149]: I0313 13:04:03.770462 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1jftkg" event={"ID":"12a5c9b1-e543-474d-bb59-6e7d08ee878f","Type":"ContainerDied","Data":"05c745ad98cc171989dd7e2ee6106b8e1432af9e05d167fd5824c80871308ef7"} Mar 13 13:04:04.779368 master-0 kubenswrapper[28149]: I0313 13:04:04.779229 28149 generic.go:334] "Generic (PLEG): container finished" podID="26e358a3-26d0-403c-baba-35680a60e33d" containerID="904d8038cfe570dc86d48607d3176e95605d685674f22f60d1931ef24409a69b" exitCode=0 Mar 13 13:04:04.779968 master-0 kubenswrapper[28149]: I0313 13:04:04.779392 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874xbgw7" event={"ID":"26e358a3-26d0-403c-baba-35680a60e33d","Type":"ContainerDied","Data":"904d8038cfe570dc86d48607d3176e95605d685674f22f60d1931ef24409a69b"} Mar 13 13:04:06.000785 master-0 kubenswrapper[28149]: I0313 13:04:06.000733 28149 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1jftkg" Mar 13 13:04:06.182648 master-0 kubenswrapper[28149]: I0313 13:04:06.182588 28149 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/12a5c9b1-e543-474d-bb59-6e7d08ee878f-bundle\") pod \"12a5c9b1-e543-474d-bb59-6e7d08ee878f\" (UID: \"12a5c9b1-e543-474d-bb59-6e7d08ee878f\") " Mar 13 13:04:06.182958 master-0 kubenswrapper[28149]: I0313 13:04:06.182685 28149 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/12a5c9b1-e543-474d-bb59-6e7d08ee878f-util\") pod \"12a5c9b1-e543-474d-bb59-6e7d08ee878f\" (UID: \"12a5c9b1-e543-474d-bb59-6e7d08ee878f\") " Mar 13 13:04:06.182958 master-0 kubenswrapper[28149]: I0313 13:04:06.182719 28149 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9dvq4\" (UniqueName: \"kubernetes.io/projected/12a5c9b1-e543-474d-bb59-6e7d08ee878f-kube-api-access-9dvq4\") pod \"12a5c9b1-e543-474d-bb59-6e7d08ee878f\" (UID: \"12a5c9b1-e543-474d-bb59-6e7d08ee878f\") " Mar 13 13:04:06.184429 master-0 kubenswrapper[28149]: I0313 13:04:06.184361 28149 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/12a5c9b1-e543-474d-bb59-6e7d08ee878f-bundle" (OuterVolumeSpecName: "bundle") pod "12a5c9b1-e543-474d-bb59-6e7d08ee878f" (UID: "12a5c9b1-e543-474d-bb59-6e7d08ee878f"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 13 13:04:06.188560 master-0 kubenswrapper[28149]: I0313 13:04:06.188507 28149 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/12a5c9b1-e543-474d-bb59-6e7d08ee878f-util" (OuterVolumeSpecName: "util") pod "12a5c9b1-e543-474d-bb59-6e7d08ee878f" (UID: "12a5c9b1-e543-474d-bb59-6e7d08ee878f"). 
InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 13 13:04:06.204311 master-0 kubenswrapper[28149]: I0313 13:04:06.203206 28149 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/12a5c9b1-e543-474d-bb59-6e7d08ee878f-kube-api-access-9dvq4" (OuterVolumeSpecName: "kube-api-access-9dvq4") pod "12a5c9b1-e543-474d-bb59-6e7d08ee878f" (UID: "12a5c9b1-e543-474d-bb59-6e7d08ee878f"). InnerVolumeSpecName "kube-api-access-9dvq4". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 13:04:06.285286 master-0 kubenswrapper[28149]: I0313 13:04:06.285214 28149 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/12a5c9b1-e543-474d-bb59-6e7d08ee878f-bundle\") on node \"master-0\" DevicePath \"\"" Mar 13 13:04:06.285286 master-0 kubenswrapper[28149]: I0313 13:04:06.285256 28149 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/12a5c9b1-e543-474d-bb59-6e7d08ee878f-util\") on node \"master-0\" DevicePath \"\"" Mar 13 13:04:06.285286 master-0 kubenswrapper[28149]: I0313 13:04:06.285267 28149 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9dvq4\" (UniqueName: \"kubernetes.io/projected/12a5c9b1-e543-474d-bb59-6e7d08ee878f-kube-api-access-9dvq4\") on node \"master-0\" DevicePath \"\"" Mar 13 13:04:06.788866 master-0 kubenswrapper[28149]: I0313 13:04:06.788799 28149 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08tqftn"] Mar 13 13:04:06.789194 master-0 kubenswrapper[28149]: E0313 13:04:06.789169 28149 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="12a5c9b1-e543-474d-bb59-6e7d08ee878f" containerName="pull" Mar 13 13:04:06.789194 master-0 kubenswrapper[28149]: I0313 13:04:06.789187 28149 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="12a5c9b1-e543-474d-bb59-6e7d08ee878f" containerName="pull" Mar 13 13:04:06.789273 master-0 kubenswrapper[28149]: E0313 13:04:06.789199 28149 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="12a5c9b1-e543-474d-bb59-6e7d08ee878f" containerName="util" Mar 13 13:04:06.789273 master-0 kubenswrapper[28149]: I0313 13:04:06.789206 28149 state_mem.go:107] "Deleted CPUSet assignment" podUID="12a5c9b1-e543-474d-bb59-6e7d08ee878f" containerName="util" Mar 13 13:04:06.789273 master-0 kubenswrapper[28149]: E0313 13:04:06.789243 28149 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="12a5c9b1-e543-474d-bb59-6e7d08ee878f" containerName="extract" Mar 13 13:04:06.789273 master-0 kubenswrapper[28149]: I0313 13:04:06.789249 28149 state_mem.go:107] "Deleted CPUSet assignment" podUID="12a5c9b1-e543-474d-bb59-6e7d08ee878f" containerName="extract" Mar 13 13:04:06.789401 master-0 kubenswrapper[28149]: I0313 13:04:06.789378 28149 memory_manager.go:354] "RemoveStaleState removing state" podUID="12a5c9b1-e543-474d-bb59-6e7d08ee878f" containerName="extract" Mar 13 13:04:06.790559 master-0 kubenswrapper[28149]: I0313 13:04:06.790528 28149 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08tqftn" Mar 13 13:04:06.793197 master-0 kubenswrapper[28149]: I0313 13:04:06.792442 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/a6d0a6fa-974f-4e67-af72-9813fafc0d8e-util\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08tqftn\" (UID: \"a6d0a6fa-974f-4e67-af72-9813fafc0d8e\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08tqftn" Mar 13 13:04:06.793197 master-0 kubenswrapper[28149]: I0313 13:04:06.792591 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8lms7\" (UniqueName: \"kubernetes.io/projected/a6d0a6fa-974f-4e67-af72-9813fafc0d8e-kube-api-access-8lms7\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08tqftn\" (UID: \"a6d0a6fa-974f-4e67-af72-9813fafc0d8e\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08tqftn" Mar 13 13:04:06.793197 master-0 kubenswrapper[28149]: I0313 13:04:06.792635 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/a6d0a6fa-974f-4e67-af72-9813fafc0d8e-bundle\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08tqftn\" (UID: \"a6d0a6fa-974f-4e67-af72-9813fafc0d8e\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08tqftn" Mar 13 13:04:06.794392 master-0 kubenswrapper[28149]: I0313 13:04:06.794339 28149 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08tqftn"] Mar 13 13:04:06.810243 master-0 kubenswrapper[28149]: I0313 13:04:06.810183 28149 generic.go:334] "Generic (PLEG): container finished" 
podID="26e358a3-26d0-403c-baba-35680a60e33d" containerID="779c6c1c2b444b6869e1fab7cb49c11ab464b5437b3bdb495d2ed6e37600d42b" exitCode=0 Mar 13 13:04:06.810391 master-0 kubenswrapper[28149]: I0313 13:04:06.810275 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874xbgw7" event={"ID":"26e358a3-26d0-403c-baba-35680a60e33d","Type":"ContainerDied","Data":"779c6c1c2b444b6869e1fab7cb49c11ab464b5437b3bdb495d2ed6e37600d42b"} Mar 13 13:04:06.812983 master-0 kubenswrapper[28149]: I0313 13:04:06.812946 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1jftkg" event={"ID":"12a5c9b1-e543-474d-bb59-6e7d08ee878f","Type":"ContainerDied","Data":"2a572dc06700fd68f9680a3481f3a49448089c4dbbb8147350f317dd178524a9"} Mar 13 13:04:06.813075 master-0 kubenswrapper[28149]: I0313 13:04:06.812982 28149 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2a572dc06700fd68f9680a3481f3a49448089c4dbbb8147350f317dd178524a9" Mar 13 13:04:06.813345 master-0 kubenswrapper[28149]: I0313 13:04:06.813300 28149 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1jftkg" Mar 13 13:04:06.818404 master-0 kubenswrapper[28149]: I0313 13:04:06.817334 28149 generic.go:334] "Generic (PLEG): container finished" podID="88cec94a-0b71-40c1-8c8f-e28b9b0c7880" containerID="9e8f4da259e01b7c9c55596ff8ab408a145616b4e08bcf9dc7f3d2cb7278a471" exitCode=0 Mar 13 13:04:06.818404 master-0 kubenswrapper[28149]: I0313 13:04:06.817423 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5tb265" event={"ID":"88cec94a-0b71-40c1-8c8f-e28b9b0c7880","Type":"ContainerDied","Data":"9e8f4da259e01b7c9c55596ff8ab408a145616b4e08bcf9dc7f3d2cb7278a471"} Mar 13 13:04:06.894202 master-0 kubenswrapper[28149]: I0313 13:04:06.894161 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/a6d0a6fa-974f-4e67-af72-9813fafc0d8e-util\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08tqftn\" (UID: \"a6d0a6fa-974f-4e67-af72-9813fafc0d8e\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08tqftn" Mar 13 13:04:06.894610 master-0 kubenswrapper[28149]: I0313 13:04:06.894575 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/a6d0a6fa-974f-4e67-af72-9813fafc0d8e-util\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08tqftn\" (UID: \"a6d0a6fa-974f-4e67-af72-9813fafc0d8e\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08tqftn" Mar 13 13:04:06.894697 master-0 kubenswrapper[28149]: I0313 13:04:06.894680 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8lms7\" (UniqueName: \"kubernetes.io/projected/a6d0a6fa-974f-4e67-af72-9813fafc0d8e-kube-api-access-8lms7\") pod 
\"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08tqftn\" (UID: \"a6d0a6fa-974f-4e67-af72-9813fafc0d8e\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08tqftn" Mar 13 13:04:06.894803 master-0 kubenswrapper[28149]: I0313 13:04:06.894785 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/a6d0a6fa-974f-4e67-af72-9813fafc0d8e-bundle\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08tqftn\" (UID: \"a6d0a6fa-974f-4e67-af72-9813fafc0d8e\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08tqftn" Mar 13 13:04:06.895095 master-0 kubenswrapper[28149]: I0313 13:04:06.895073 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/a6d0a6fa-974f-4e67-af72-9813fafc0d8e-bundle\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08tqftn\" (UID: \"a6d0a6fa-974f-4e67-af72-9813fafc0d8e\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08tqftn" Mar 13 13:04:06.912829 master-0 kubenswrapper[28149]: I0313 13:04:06.912783 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8lms7\" (UniqueName: \"kubernetes.io/projected/a6d0a6fa-974f-4e67-af72-9813fafc0d8e-kube-api-access-8lms7\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08tqftn\" (UID: \"a6d0a6fa-974f-4e67-af72-9813fafc0d8e\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08tqftn" Mar 13 13:04:07.111999 master-0 kubenswrapper[28149]: I0313 13:04:07.111946 28149 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08tqftn" Mar 13 13:04:07.530343 master-0 kubenswrapper[28149]: I0313 13:04:07.530229 28149 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08tqftn"] Mar 13 13:04:07.537812 master-0 kubenswrapper[28149]: W0313 13:04:07.537753 28149 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda6d0a6fa_974f_4e67_af72_9813fafc0d8e.slice/crio-852c53d53afa93be885e01208968e0a3d6aa500103b78aa0afb98968e1ec97af WatchSource:0}: Error finding container 852c53d53afa93be885e01208968e0a3d6aa500103b78aa0afb98968e1ec97af: Status 404 returned error can't find the container with id 852c53d53afa93be885e01208968e0a3d6aa500103b78aa0afb98968e1ec97af Mar 13 13:04:07.826071 master-0 kubenswrapper[28149]: I0313 13:04:07.825996 28149 generic.go:334] "Generic (PLEG): container finished" podID="a6d0a6fa-974f-4e67-af72-9813fafc0d8e" containerID="d5f30530e935c02065ccb92e746e7f363bd9dc4feb4b32bf14b8b197634df5ed" exitCode=0 Mar 13 13:04:07.826071 master-0 kubenswrapper[28149]: I0313 13:04:07.826079 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08tqftn" event={"ID":"a6d0a6fa-974f-4e67-af72-9813fafc0d8e","Type":"ContainerDied","Data":"d5f30530e935c02065ccb92e746e7f363bd9dc4feb4b32bf14b8b197634df5ed"} Mar 13 13:04:07.826383 master-0 kubenswrapper[28149]: I0313 13:04:07.826112 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08tqftn" event={"ID":"a6d0a6fa-974f-4e67-af72-9813fafc0d8e","Type":"ContainerStarted","Data":"852c53d53afa93be885e01208968e0a3d6aa500103b78aa0afb98968e1ec97af"} Mar 13 13:04:07.830946 master-0 kubenswrapper[28149]: I0313 13:04:07.830512 28149 
generic.go:334] "Generic (PLEG): container finished" podID="88cec94a-0b71-40c1-8c8f-e28b9b0c7880" containerID="ea296ed248384dd2876ddcdef79a7471ef43e2b0f93e18306f83793233c822e1" exitCode=0 Mar 13 13:04:07.830946 master-0 kubenswrapper[28149]: I0313 13:04:07.830598 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5tb265" event={"ID":"88cec94a-0b71-40c1-8c8f-e28b9b0c7880","Type":"ContainerDied","Data":"ea296ed248384dd2876ddcdef79a7471ef43e2b0f93e18306f83793233c822e1"} Mar 13 13:04:08.153295 master-0 kubenswrapper[28149]: I0313 13:04:08.153258 28149 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874xbgw7" Mar 13 13:04:08.322586 master-0 kubenswrapper[28149]: I0313 13:04:08.322517 28149 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/26e358a3-26d0-403c-baba-35680a60e33d-util\") pod \"26e358a3-26d0-403c-baba-35680a60e33d\" (UID: \"26e358a3-26d0-403c-baba-35680a60e33d\") " Mar 13 13:04:08.322910 master-0 kubenswrapper[28149]: I0313 13:04:08.322667 28149 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/26e358a3-26d0-403c-baba-35680a60e33d-bundle\") pod \"26e358a3-26d0-403c-baba-35680a60e33d\" (UID: \"26e358a3-26d0-403c-baba-35680a60e33d\") " Mar 13 13:04:08.322910 master-0 kubenswrapper[28149]: I0313 13:04:08.322828 28149 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ccc7k\" (UniqueName: \"kubernetes.io/projected/26e358a3-26d0-403c-baba-35680a60e33d-kube-api-access-ccc7k\") pod \"26e358a3-26d0-403c-baba-35680a60e33d\" (UID: \"26e358a3-26d0-403c-baba-35680a60e33d\") " Mar 13 13:04:08.323597 master-0 kubenswrapper[28149]: I0313 13:04:08.323556 28149 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/26e358a3-26d0-403c-baba-35680a60e33d-bundle" (OuterVolumeSpecName: "bundle") pod "26e358a3-26d0-403c-baba-35680a60e33d" (UID: "26e358a3-26d0-403c-baba-35680a60e33d"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 13 13:04:08.325760 master-0 kubenswrapper[28149]: I0313 13:04:08.325711 28149 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/26e358a3-26d0-403c-baba-35680a60e33d-kube-api-access-ccc7k" (OuterVolumeSpecName: "kube-api-access-ccc7k") pod "26e358a3-26d0-403c-baba-35680a60e33d" (UID: "26e358a3-26d0-403c-baba-35680a60e33d"). InnerVolumeSpecName "kube-api-access-ccc7k". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 13:04:08.334278 master-0 kubenswrapper[28149]: I0313 13:04:08.334192 28149 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/26e358a3-26d0-403c-baba-35680a60e33d-util" (OuterVolumeSpecName: "util") pod "26e358a3-26d0-403c-baba-35680a60e33d" (UID: "26e358a3-26d0-403c-baba-35680a60e33d"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 13 13:04:08.426246 master-0 kubenswrapper[28149]: I0313 13:04:08.425679 28149 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/26e358a3-26d0-403c-baba-35680a60e33d-bundle\") on node \"master-0\" DevicePath \"\"" Mar 13 13:04:08.426246 master-0 kubenswrapper[28149]: I0313 13:04:08.426021 28149 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ccc7k\" (UniqueName: \"kubernetes.io/projected/26e358a3-26d0-403c-baba-35680a60e33d-kube-api-access-ccc7k\") on node \"master-0\" DevicePath \"\"" Mar 13 13:04:08.426246 master-0 kubenswrapper[28149]: I0313 13:04:08.426044 28149 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/26e358a3-26d0-403c-baba-35680a60e33d-util\") on node \"master-0\" DevicePath \"\"" Mar 13 13:04:08.841826 master-0 kubenswrapper[28149]: I0313 13:04:08.841750 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874xbgw7" event={"ID":"26e358a3-26d0-403c-baba-35680a60e33d","Type":"ContainerDied","Data":"4799ab6dc403447d9282d767bbf7affc9f5da4e259caf9e5c4682a3530bc9914"} Mar 13 13:04:08.841826 master-0 kubenswrapper[28149]: I0313 13:04:08.841800 28149 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874xbgw7" Mar 13 13:04:08.842085 master-0 kubenswrapper[28149]: I0313 13:04:08.841854 28149 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4799ab6dc403447d9282d767bbf7affc9f5da4e259caf9e5c4682a3530bc9914" Mar 13 13:04:09.156912 master-0 kubenswrapper[28149]: I0313 13:04:09.156854 28149 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5tb265" Mar 13 13:04:09.340969 master-0 kubenswrapper[28149]: I0313 13:04:09.340880 28149 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/88cec94a-0b71-40c1-8c8f-e28b9b0c7880-util\") pod \"88cec94a-0b71-40c1-8c8f-e28b9b0c7880\" (UID: \"88cec94a-0b71-40c1-8c8f-e28b9b0c7880\") " Mar 13 13:04:09.341237 master-0 kubenswrapper[28149]: I0313 13:04:09.341212 28149 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/88cec94a-0b71-40c1-8c8f-e28b9b0c7880-bundle\") pod \"88cec94a-0b71-40c1-8c8f-e28b9b0c7880\" (UID: \"88cec94a-0b71-40c1-8c8f-e28b9b0c7880\") " Mar 13 13:04:09.341372 master-0 kubenswrapper[28149]: I0313 13:04:09.341335 28149 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-plxvs\" (UniqueName: \"kubernetes.io/projected/88cec94a-0b71-40c1-8c8f-e28b9b0c7880-kube-api-access-plxvs\") pod \"88cec94a-0b71-40c1-8c8f-e28b9b0c7880\" (UID: \"88cec94a-0b71-40c1-8c8f-e28b9b0c7880\") " Mar 13 13:04:09.342265 master-0 kubenswrapper[28149]: I0313 13:04:09.342231 28149 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/88cec94a-0b71-40c1-8c8f-e28b9b0c7880-bundle" (OuterVolumeSpecName: "bundle") pod "88cec94a-0b71-40c1-8c8f-e28b9b0c7880" (UID: "88cec94a-0b71-40c1-8c8f-e28b9b0c7880"). InnerVolumeSpecName "bundle". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 13 13:04:09.344402 master-0 kubenswrapper[28149]: I0313 13:04:09.344358 28149 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/88cec94a-0b71-40c1-8c8f-e28b9b0c7880-kube-api-access-plxvs" (OuterVolumeSpecName: "kube-api-access-plxvs") pod "88cec94a-0b71-40c1-8c8f-e28b9b0c7880" (UID: "88cec94a-0b71-40c1-8c8f-e28b9b0c7880"). InnerVolumeSpecName "kube-api-access-plxvs". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 13:04:09.351587 master-0 kubenswrapper[28149]: I0313 13:04:09.351512 28149 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/88cec94a-0b71-40c1-8c8f-e28b9b0c7880-util" (OuterVolumeSpecName: "util") pod "88cec94a-0b71-40c1-8c8f-e28b9b0c7880" (UID: "88cec94a-0b71-40c1-8c8f-e28b9b0c7880"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 13 13:04:09.442986 master-0 kubenswrapper[28149]: I0313 13:04:09.442912 28149 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/88cec94a-0b71-40c1-8c8f-e28b9b0c7880-bundle\") on node \"master-0\" DevicePath \"\"" Mar 13 13:04:09.442986 master-0 kubenswrapper[28149]: I0313 13:04:09.442960 28149 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-plxvs\" (UniqueName: \"kubernetes.io/projected/88cec94a-0b71-40c1-8c8f-e28b9b0c7880-kube-api-access-plxvs\") on node \"master-0\" DevicePath \"\"" Mar 13 13:04:09.442986 master-0 kubenswrapper[28149]: I0313 13:04:09.442974 28149 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/88cec94a-0b71-40c1-8c8f-e28b9b0c7880-util\") on node \"master-0\" DevicePath \"\"" Mar 13 13:04:09.851405 master-0 kubenswrapper[28149]: I0313 13:04:09.851319 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5tb265" event={"ID":"88cec94a-0b71-40c1-8c8f-e28b9b0c7880","Type":"ContainerDied","Data":"b46e95046551a91c33557d7cdc7e08ef5d98aca374875e426a215cc2a0aedd4e"} Mar 13 13:04:09.851405 master-0 kubenswrapper[28149]: I0313 13:04:09.851353 28149 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5tb265" Mar 13 13:04:09.851405 master-0 kubenswrapper[28149]: I0313 13:04:09.851372 28149 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b46e95046551a91c33557d7cdc7e08ef5d98aca374875e426a215cc2a0aedd4e" Mar 13 13:04:09.853582 master-0 kubenswrapper[28149]: I0313 13:04:09.853558 28149 generic.go:334] "Generic (PLEG): container finished" podID="a6d0a6fa-974f-4e67-af72-9813fafc0d8e" containerID="306b2da98cdb1464ebeaf0a210770f518d19a3d78a95891ea33a312d61371725" exitCode=0 Mar 13 13:04:09.853683 master-0 kubenswrapper[28149]: I0313 13:04:09.853655 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08tqftn" event={"ID":"a6d0a6fa-974f-4e67-af72-9813fafc0d8e","Type":"ContainerDied","Data":"306b2da98cdb1464ebeaf0a210770f518d19a3d78a95891ea33a312d61371725"} Mar 13 13:04:10.863839 master-0 kubenswrapper[28149]: I0313 13:04:10.863797 28149 generic.go:334] "Generic (PLEG): container finished" podID="a6d0a6fa-974f-4e67-af72-9813fafc0d8e" containerID="f73939f7af860ba74937edd62eb4a2b195448b57aa58b21b557025cf54f692d7" exitCode=0 Mar 13 13:04:10.863839 master-0 kubenswrapper[28149]: I0313 13:04:10.863844 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08tqftn" 
event={"ID":"a6d0a6fa-974f-4e67-af72-9813fafc0d8e","Type":"ContainerDied","Data":"f73939f7af860ba74937edd62eb4a2b195448b57aa58b21b557025cf54f692d7"} Mar 13 13:04:12.202780 master-0 kubenswrapper[28149]: I0313 13:04:12.202707 28149 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08tqftn" Mar 13 13:04:12.297428 master-0 kubenswrapper[28149]: I0313 13:04:12.297305 28149 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/a6d0a6fa-974f-4e67-af72-9813fafc0d8e-util\") pod \"a6d0a6fa-974f-4e67-af72-9813fafc0d8e\" (UID: \"a6d0a6fa-974f-4e67-af72-9813fafc0d8e\") " Mar 13 13:04:12.297428 master-0 kubenswrapper[28149]: I0313 13:04:12.297443 28149 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8lms7\" (UniqueName: \"kubernetes.io/projected/a6d0a6fa-974f-4e67-af72-9813fafc0d8e-kube-api-access-8lms7\") pod \"a6d0a6fa-974f-4e67-af72-9813fafc0d8e\" (UID: \"a6d0a6fa-974f-4e67-af72-9813fafc0d8e\") " Mar 13 13:04:12.297960 master-0 kubenswrapper[28149]: I0313 13:04:12.297927 28149 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/a6d0a6fa-974f-4e67-af72-9813fafc0d8e-bundle\") pod \"a6d0a6fa-974f-4e67-af72-9813fafc0d8e\" (UID: \"a6d0a6fa-974f-4e67-af72-9813fafc0d8e\") " Mar 13 13:04:12.301872 master-0 kubenswrapper[28149]: I0313 13:04:12.301820 28149 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a6d0a6fa-974f-4e67-af72-9813fafc0d8e-kube-api-access-8lms7" (OuterVolumeSpecName: "kube-api-access-8lms7") pod "a6d0a6fa-974f-4e67-af72-9813fafc0d8e" (UID: "a6d0a6fa-974f-4e67-af72-9813fafc0d8e"). InnerVolumeSpecName "kube-api-access-8lms7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 13:04:12.399905 master-0 kubenswrapper[28149]: I0313 13:04:12.399805 28149 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8lms7\" (UniqueName: \"kubernetes.io/projected/a6d0a6fa-974f-4e67-af72-9813fafc0d8e-kube-api-access-8lms7\") on node \"master-0\" DevicePath \"\"" Mar 13 13:04:12.866387 master-0 kubenswrapper[28149]: I0313 13:04:12.866312 28149 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a6d0a6fa-974f-4e67-af72-9813fafc0d8e-bundle" (OuterVolumeSpecName: "bundle") pod "a6d0a6fa-974f-4e67-af72-9813fafc0d8e" (UID: "a6d0a6fa-974f-4e67-af72-9813fafc0d8e"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 13 13:04:12.883578 master-0 kubenswrapper[28149]: I0313 13:04:12.883507 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08tqftn" event={"ID":"a6d0a6fa-974f-4e67-af72-9813fafc0d8e","Type":"ContainerDied","Data":"852c53d53afa93be885e01208968e0a3d6aa500103b78aa0afb98968e1ec97af"} Mar 13 13:04:12.883578 master-0 kubenswrapper[28149]: I0313 13:04:12.883555 28149 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="852c53d53afa93be885e01208968e0a3d6aa500103b78aa0afb98968e1ec97af" Mar 13 13:04:12.883910 master-0 kubenswrapper[28149]: I0313 13:04:12.883616 28149 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08tqftn" Mar 13 13:04:12.909389 master-0 kubenswrapper[28149]: I0313 13:04:12.909335 28149 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/a6d0a6fa-974f-4e67-af72-9813fafc0d8e-bundle\") on node \"master-0\" DevicePath \"\"" Mar 13 13:04:13.410384 master-0 kubenswrapper[28149]: I0313 13:04:13.410295 28149 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a6d0a6fa-974f-4e67-af72-9813fafc0d8e-util" (OuterVolumeSpecName: "util") pod "a6d0a6fa-974f-4e67-af72-9813fafc0d8e" (UID: "a6d0a6fa-974f-4e67-af72-9813fafc0d8e"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 13 13:04:13.418362 master-0 kubenswrapper[28149]: I0313 13:04:13.418279 28149 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/a6d0a6fa-974f-4e67-af72-9813fafc0d8e-util\") on node \"master-0\" DevicePath \"\"" Mar 13 13:04:18.676571 master-0 kubenswrapper[28149]: I0313 13:04:18.676501 28149 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-operator-796d4cfff4-qbrf5"] Mar 13 13:04:18.677252 master-0 kubenswrapper[28149]: E0313 13:04:18.676814 28149 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="88cec94a-0b71-40c1-8c8f-e28b9b0c7880" containerName="extract" Mar 13 13:04:18.677252 master-0 kubenswrapper[28149]: I0313 13:04:18.676827 28149 state_mem.go:107] "Deleted CPUSet assignment" podUID="88cec94a-0b71-40c1-8c8f-e28b9b0c7880" containerName="extract" Mar 13 13:04:18.677252 master-0 kubenswrapper[28149]: E0313 13:04:18.676844 28149 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="26e358a3-26d0-403c-baba-35680a60e33d" containerName="pull" Mar 13 13:04:18.677252 master-0 kubenswrapper[28149]: I0313 13:04:18.676852 28149 state_mem.go:107] "Deleted 
CPUSet assignment" podUID="26e358a3-26d0-403c-baba-35680a60e33d" containerName="pull" Mar 13 13:04:18.677252 master-0 kubenswrapper[28149]: E0313 13:04:18.676874 28149 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="88cec94a-0b71-40c1-8c8f-e28b9b0c7880" containerName="pull" Mar 13 13:04:18.677252 master-0 kubenswrapper[28149]: I0313 13:04:18.676881 28149 state_mem.go:107] "Deleted CPUSet assignment" podUID="88cec94a-0b71-40c1-8c8f-e28b9b0c7880" containerName="pull" Mar 13 13:04:18.677252 master-0 kubenswrapper[28149]: E0313 13:04:18.676890 28149 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="26e358a3-26d0-403c-baba-35680a60e33d" containerName="util" Mar 13 13:04:18.677252 master-0 kubenswrapper[28149]: I0313 13:04:18.676895 28149 state_mem.go:107] "Deleted CPUSet assignment" podUID="26e358a3-26d0-403c-baba-35680a60e33d" containerName="util" Mar 13 13:04:18.677252 master-0 kubenswrapper[28149]: E0313 13:04:18.676906 28149 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="26e358a3-26d0-403c-baba-35680a60e33d" containerName="extract" Mar 13 13:04:18.677252 master-0 kubenswrapper[28149]: I0313 13:04:18.676911 28149 state_mem.go:107] "Deleted CPUSet assignment" podUID="26e358a3-26d0-403c-baba-35680a60e33d" containerName="extract" Mar 13 13:04:18.677252 master-0 kubenswrapper[28149]: E0313 13:04:18.676924 28149 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="88cec94a-0b71-40c1-8c8f-e28b9b0c7880" containerName="util" Mar 13 13:04:18.677252 master-0 kubenswrapper[28149]: I0313 13:04:18.676930 28149 state_mem.go:107] "Deleted CPUSet assignment" podUID="88cec94a-0b71-40c1-8c8f-e28b9b0c7880" containerName="util" Mar 13 13:04:18.677252 master-0 kubenswrapper[28149]: E0313 13:04:18.676935 28149 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a6d0a6fa-974f-4e67-af72-9813fafc0d8e" containerName="pull" Mar 13 13:04:18.677252 master-0 kubenswrapper[28149]: I0313 13:04:18.676942 28149 
state_mem.go:107] "Deleted CPUSet assignment" podUID="a6d0a6fa-974f-4e67-af72-9813fafc0d8e" containerName="pull" Mar 13 13:04:18.677252 master-0 kubenswrapper[28149]: E0313 13:04:18.676950 28149 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a6d0a6fa-974f-4e67-af72-9813fafc0d8e" containerName="extract" Mar 13 13:04:18.677252 master-0 kubenswrapper[28149]: I0313 13:04:18.676956 28149 state_mem.go:107] "Deleted CPUSet assignment" podUID="a6d0a6fa-974f-4e67-af72-9813fafc0d8e" containerName="extract" Mar 13 13:04:18.677252 master-0 kubenswrapper[28149]: E0313 13:04:18.676967 28149 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a6d0a6fa-974f-4e67-af72-9813fafc0d8e" containerName="util" Mar 13 13:04:18.677252 master-0 kubenswrapper[28149]: I0313 13:04:18.676973 28149 state_mem.go:107] "Deleted CPUSet assignment" podUID="a6d0a6fa-974f-4e67-af72-9813fafc0d8e" containerName="util" Mar 13 13:04:18.677252 master-0 kubenswrapper[28149]: I0313 13:04:18.677115 28149 memory_manager.go:354] "RemoveStaleState removing state" podUID="88cec94a-0b71-40c1-8c8f-e28b9b0c7880" containerName="extract" Mar 13 13:04:18.677252 master-0 kubenswrapper[28149]: I0313 13:04:18.677125 28149 memory_manager.go:354] "RemoveStaleState removing state" podUID="a6d0a6fa-974f-4e67-af72-9813fafc0d8e" containerName="extract" Mar 13 13:04:18.677252 master-0 kubenswrapper[28149]: I0313 13:04:18.677165 28149 memory_manager.go:354] "RemoveStaleState removing state" podUID="26e358a3-26d0-403c-baba-35680a60e33d" containerName="extract" Mar 13 13:04:18.678190 master-0 kubenswrapper[28149]: I0313 13:04:18.677654 28149 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-operator-796d4cfff4-qbrf5" Mar 13 13:04:18.679715 master-0 kubenswrapper[28149]: I0313 13:04:18.679673 28149 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"kube-root-ca.crt" Mar 13 13:04:18.681352 master-0 kubenswrapper[28149]: I0313 13:04:18.681324 28149 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"openshift-service-ca.crt" Mar 13 13:04:18.721864 master-0 kubenswrapper[28149]: I0313 13:04:18.721792 28149 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-796d4cfff4-qbrf5"] Mar 13 13:04:18.803053 master-0 kubenswrapper[28149]: I0313 13:04:18.802957 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m5cpv\" (UniqueName: \"kubernetes.io/projected/b2db97c2-d0cc-460b-8b51-e13cb9593c68-kube-api-access-m5cpv\") pod \"nmstate-operator-796d4cfff4-qbrf5\" (UID: \"b2db97c2-d0cc-460b-8b51-e13cb9593c68\") " pod="openshift-nmstate/nmstate-operator-796d4cfff4-qbrf5" Mar 13 13:04:18.904712 master-0 kubenswrapper[28149]: I0313 13:04:18.904644 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m5cpv\" (UniqueName: \"kubernetes.io/projected/b2db97c2-d0cc-460b-8b51-e13cb9593c68-kube-api-access-m5cpv\") pod \"nmstate-operator-796d4cfff4-qbrf5\" (UID: \"b2db97c2-d0cc-460b-8b51-e13cb9593c68\") " pod="openshift-nmstate/nmstate-operator-796d4cfff4-qbrf5" Mar 13 13:04:18.928776 master-0 kubenswrapper[28149]: I0313 13:04:18.928675 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m5cpv\" (UniqueName: \"kubernetes.io/projected/b2db97c2-d0cc-460b-8b51-e13cb9593c68-kube-api-access-m5cpv\") pod \"nmstate-operator-796d4cfff4-qbrf5\" (UID: \"b2db97c2-d0cc-460b-8b51-e13cb9593c68\") " pod="openshift-nmstate/nmstate-operator-796d4cfff4-qbrf5" Mar 13 13:04:18.993400 
master-0 kubenswrapper[28149]: I0313 13:04:18.993325 28149 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-operator-796d4cfff4-qbrf5" Mar 13 13:04:19.498847 master-0 kubenswrapper[28149]: W0313 13:04:19.498791 28149 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb2db97c2_d0cc_460b_8b51_e13cb9593c68.slice/crio-d6f2a9fef9985cc402adc8d1734165bb0a93a39327036d676827b86ae269a7ab WatchSource:0}: Error finding container d6f2a9fef9985cc402adc8d1734165bb0a93a39327036d676827b86ae269a7ab: Status 404 returned error can't find the container with id d6f2a9fef9985cc402adc8d1734165bb0a93a39327036d676827b86ae269a7ab Mar 13 13:04:19.499964 master-0 kubenswrapper[28149]: I0313 13:04:19.499922 28149 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-796d4cfff4-qbrf5"] Mar 13 13:04:19.935878 master-0 kubenswrapper[28149]: I0313 13:04:19.935807 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-796d4cfff4-qbrf5" event={"ID":"b2db97c2-d0cc-460b-8b51-e13cb9593c68","Type":"ContainerStarted","Data":"d6f2a9fef9985cc402adc8d1734165bb0a93a39327036d676827b86ae269a7ab"} Mar 13 13:04:22.973190 master-0 kubenswrapper[28149]: I0313 13:04:22.972986 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-796d4cfff4-qbrf5" event={"ID":"b2db97c2-d0cc-460b-8b51-e13cb9593c68","Type":"ContainerStarted","Data":"47c8346fffd637379ad0ab53f8362d549344b72d9a4ab9217606e37395d37a36"} Mar 13 13:04:23.027166 master-0 kubenswrapper[28149]: I0313 13:04:23.021166 28149 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-operator-796d4cfff4-qbrf5" podStartSLOduration=2.230948598 podStartE2EDuration="5.021113426s" podCreationTimestamp="2026-03-13 13:04:18 +0000 UTC" firstStartedPulling="2026-03-13 13:04:19.50110901 +0000 
UTC m=+633.154574169" lastFinishedPulling="2026-03-13 13:04:22.291273838 +0000 UTC m=+635.944738997" observedRunningTime="2026-03-13 13:04:23.003297164 +0000 UTC m=+636.656762323" watchObservedRunningTime="2026-03-13 13:04:23.021113426 +0000 UTC m=+636.674578595" Mar 13 13:04:24.407386 master-0 kubenswrapper[28149]: I0313 13:04:24.407303 28149 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-controller-manager-67c6bd779f-5djh4"] Mar 13 13:04:24.412166 master-0 kubenswrapper[28149]: I0313 13:04:24.409731 28149 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-67c6bd779f-5djh4" Mar 13 13:04:24.413341 master-0 kubenswrapper[28149]: I0313 13:04:24.412572 28149 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-controller-manager-service-cert" Mar 13 13:04:24.413341 master-0 kubenswrapper[28149]: I0313 13:04:24.412840 28149 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-cert" Mar 13 13:04:24.414685 master-0 kubenswrapper[28149]: I0313 13:04:24.414665 28149 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"openshift-service-ca.crt" Mar 13 13:04:24.419050 master-0 kubenswrapper[28149]: I0313 13:04:24.419014 28149 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"kube-root-ca.crt" Mar 13 13:04:24.435998 master-0 kubenswrapper[28149]: I0313 13:04:24.434779 28149 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-67c6bd779f-5djh4"] Mar 13 13:04:24.479607 master-0 kubenswrapper[28149]: I0313 13:04:24.479510 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wflmg\" (UniqueName: \"kubernetes.io/projected/d2de1d43-9ee6-4d7e-a371-6f9a7de0047c-kube-api-access-wflmg\") pod 
\"metallb-operator-controller-manager-67c6bd779f-5djh4\" (UID: \"d2de1d43-9ee6-4d7e-a371-6f9a7de0047c\") " pod="metallb-system/metallb-operator-controller-manager-67c6bd779f-5djh4" Mar 13 13:04:24.479607 master-0 kubenswrapper[28149]: I0313 13:04:24.479588 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/d2de1d43-9ee6-4d7e-a371-6f9a7de0047c-apiservice-cert\") pod \"metallb-operator-controller-manager-67c6bd779f-5djh4\" (UID: \"d2de1d43-9ee6-4d7e-a371-6f9a7de0047c\") " pod="metallb-system/metallb-operator-controller-manager-67c6bd779f-5djh4" Mar 13 13:04:24.479919 master-0 kubenswrapper[28149]: I0313 13:04:24.479649 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/d2de1d43-9ee6-4d7e-a371-6f9a7de0047c-webhook-cert\") pod \"metallb-operator-controller-manager-67c6bd779f-5djh4\" (UID: \"d2de1d43-9ee6-4d7e-a371-6f9a7de0047c\") " pod="metallb-system/metallb-operator-controller-manager-67c6bd779f-5djh4" Mar 13 13:04:24.735091 master-0 kubenswrapper[28149]: I0313 13:04:24.734968 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wflmg\" (UniqueName: \"kubernetes.io/projected/d2de1d43-9ee6-4d7e-a371-6f9a7de0047c-kube-api-access-wflmg\") pod \"metallb-operator-controller-manager-67c6bd779f-5djh4\" (UID: \"d2de1d43-9ee6-4d7e-a371-6f9a7de0047c\") " pod="metallb-system/metallb-operator-controller-manager-67c6bd779f-5djh4" Mar 13 13:04:24.735091 master-0 kubenswrapper[28149]: I0313 13:04:24.735013 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/d2de1d43-9ee6-4d7e-a371-6f9a7de0047c-apiservice-cert\") pod \"metallb-operator-controller-manager-67c6bd779f-5djh4\" (UID: \"d2de1d43-9ee6-4d7e-a371-6f9a7de0047c\") " 
pod="metallb-system/metallb-operator-controller-manager-67c6bd779f-5djh4" Mar 13 13:04:24.735091 master-0 kubenswrapper[28149]: I0313 13:04:24.735061 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/d2de1d43-9ee6-4d7e-a371-6f9a7de0047c-webhook-cert\") pod \"metallb-operator-controller-manager-67c6bd779f-5djh4\" (UID: \"d2de1d43-9ee6-4d7e-a371-6f9a7de0047c\") " pod="metallb-system/metallb-operator-controller-manager-67c6bd779f-5djh4" Mar 13 13:04:24.744215 master-0 kubenswrapper[28149]: I0313 13:04:24.742042 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/d2de1d43-9ee6-4d7e-a371-6f9a7de0047c-webhook-cert\") pod \"metallb-operator-controller-manager-67c6bd779f-5djh4\" (UID: \"d2de1d43-9ee6-4d7e-a371-6f9a7de0047c\") " pod="metallb-system/metallb-operator-controller-manager-67c6bd779f-5djh4" Mar 13 13:04:24.748218 master-0 kubenswrapper[28149]: I0313 13:04:24.746218 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/d2de1d43-9ee6-4d7e-a371-6f9a7de0047c-apiservice-cert\") pod \"metallb-operator-controller-manager-67c6bd779f-5djh4\" (UID: \"d2de1d43-9ee6-4d7e-a371-6f9a7de0047c\") " pod="metallb-system/metallb-operator-controller-manager-67c6bd779f-5djh4" Mar 13 13:04:24.820021 master-0 kubenswrapper[28149]: I0313 13:04:24.819969 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wflmg\" (UniqueName: \"kubernetes.io/projected/d2de1d43-9ee6-4d7e-a371-6f9a7de0047c-kube-api-access-wflmg\") pod \"metallb-operator-controller-manager-67c6bd779f-5djh4\" (UID: \"d2de1d43-9ee6-4d7e-a371-6f9a7de0047c\") " pod="metallb-system/metallb-operator-controller-manager-67c6bd779f-5djh4" Mar 13 13:04:25.171479 master-0 kubenswrapper[28149]: I0313 13:04:25.171452 28149 util.go:30] "No sandbox for pod can be 
found. Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-67c6bd779f-5djh4" Mar 13 13:04:25.475367 master-0 kubenswrapper[28149]: I0313 13:04:25.475234 28149 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-webhook-server-b9b5ddc8d-wj5zb"] Mar 13 13:04:25.476460 master-0 kubenswrapper[28149]: I0313 13:04:25.476424 28149 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-b9b5ddc8d-wj5zb" Mar 13 13:04:25.507216 master-0 kubenswrapper[28149]: I0313 13:04:25.485110 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/eee25312-b8a7-43f4-9ec9-96c1fadd4960-webhook-cert\") pod \"metallb-operator-webhook-server-b9b5ddc8d-wj5zb\" (UID: \"eee25312-b8a7-43f4-9ec9-96c1fadd4960\") " pod="metallb-system/metallb-operator-webhook-server-b9b5ddc8d-wj5zb" Mar 13 13:04:25.507216 master-0 kubenswrapper[28149]: I0313 13:04:25.485276 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/eee25312-b8a7-43f4-9ec9-96c1fadd4960-apiservice-cert\") pod \"metallb-operator-webhook-server-b9b5ddc8d-wj5zb\" (UID: \"eee25312-b8a7-43f4-9ec9-96c1fadd4960\") " pod="metallb-system/metallb-operator-webhook-server-b9b5ddc8d-wj5zb" Mar 13 13:04:25.507216 master-0 kubenswrapper[28149]: I0313 13:04:25.485308 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b55p5\" (UniqueName: \"kubernetes.io/projected/eee25312-b8a7-43f4-9ec9-96c1fadd4960-kube-api-access-b55p5\") pod \"metallb-operator-webhook-server-b9b5ddc8d-wj5zb\" (UID: \"eee25312-b8a7-43f4-9ec9-96c1fadd4960\") " pod="metallb-system/metallb-operator-webhook-server-b9b5ddc8d-wj5zb" Mar 13 13:04:25.507216 master-0 kubenswrapper[28149]: I0313 
13:04:25.485548 28149 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert" Mar 13 13:04:25.507216 master-0 kubenswrapper[28149]: I0313 13:04:25.485746 28149 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-service-cert" Mar 13 13:04:25.546208 master-0 kubenswrapper[28149]: I0313 13:04:25.536373 28149 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-b9b5ddc8d-wj5zb"] Mar 13 13:04:25.588509 master-0 kubenswrapper[28149]: I0313 13:04:25.587383 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/eee25312-b8a7-43f4-9ec9-96c1fadd4960-apiservice-cert\") pod \"metallb-operator-webhook-server-b9b5ddc8d-wj5zb\" (UID: \"eee25312-b8a7-43f4-9ec9-96c1fadd4960\") " pod="metallb-system/metallb-operator-webhook-server-b9b5ddc8d-wj5zb" Mar 13 13:04:25.588509 master-0 kubenswrapper[28149]: I0313 13:04:25.587453 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b55p5\" (UniqueName: \"kubernetes.io/projected/eee25312-b8a7-43f4-9ec9-96c1fadd4960-kube-api-access-b55p5\") pod \"metallb-operator-webhook-server-b9b5ddc8d-wj5zb\" (UID: \"eee25312-b8a7-43f4-9ec9-96c1fadd4960\") " pod="metallb-system/metallb-operator-webhook-server-b9b5ddc8d-wj5zb" Mar 13 13:04:25.588509 master-0 kubenswrapper[28149]: I0313 13:04:25.587515 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/eee25312-b8a7-43f4-9ec9-96c1fadd4960-webhook-cert\") pod \"metallb-operator-webhook-server-b9b5ddc8d-wj5zb\" (UID: \"eee25312-b8a7-43f4-9ec9-96c1fadd4960\") " pod="metallb-system/metallb-operator-webhook-server-b9b5ddc8d-wj5zb" Mar 13 13:04:25.592771 master-0 kubenswrapper[28149]: I0313 13:04:25.592731 28149 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/eee25312-b8a7-43f4-9ec9-96c1fadd4960-webhook-cert\") pod \"metallb-operator-webhook-server-b9b5ddc8d-wj5zb\" (UID: \"eee25312-b8a7-43f4-9ec9-96c1fadd4960\") " pod="metallb-system/metallb-operator-webhook-server-b9b5ddc8d-wj5zb" Mar 13 13:04:25.593067 master-0 kubenswrapper[28149]: I0313 13:04:25.593028 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/eee25312-b8a7-43f4-9ec9-96c1fadd4960-apiservice-cert\") pod \"metallb-operator-webhook-server-b9b5ddc8d-wj5zb\" (UID: \"eee25312-b8a7-43f4-9ec9-96c1fadd4960\") " pod="metallb-system/metallb-operator-webhook-server-b9b5ddc8d-wj5zb" Mar 13 13:04:25.612554 master-0 kubenswrapper[28149]: I0313 13:04:25.612497 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b55p5\" (UniqueName: \"kubernetes.io/projected/eee25312-b8a7-43f4-9ec9-96c1fadd4960-kube-api-access-b55p5\") pod \"metallb-operator-webhook-server-b9b5ddc8d-wj5zb\" (UID: \"eee25312-b8a7-43f4-9ec9-96c1fadd4960\") " pod="metallb-system/metallb-operator-webhook-server-b9b5ddc8d-wj5zb" Mar 13 13:04:25.709130 master-0 kubenswrapper[28149]: I0313 13:04:25.708087 28149 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-67c6bd779f-5djh4"] Mar 13 13:04:25.710303 master-0 kubenswrapper[28149]: W0313 13:04:25.710271 28149 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd2de1d43_9ee6_4d7e_a371_6f9a7de0047c.slice/crio-0d1302ad7bb3d03b6678d50c1d1f067fdfffc255d311a7d4c43d1e02ac06a3c0 WatchSource:0}: Error finding container 0d1302ad7bb3d03b6678d50c1d1f067fdfffc255d311a7d4c43d1e02ac06a3c0: Status 404 returned error can't find the container with id 0d1302ad7bb3d03b6678d50c1d1f067fdfffc255d311a7d4c43d1e02ac06a3c0 Mar 13 13:04:25.774261 
master-0 kubenswrapper[28149]: I0313 13:04:25.771487 28149 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-kn4tm"] Mar 13 13:04:25.774261 master-0 kubenswrapper[28149]: I0313 13:04:25.772779 28149 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-kn4tm" Mar 13 13:04:25.777534 master-0 kubenswrapper[28149]: I0313 13:04:25.777495 28149 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager-operator"/"kube-root-ca.crt" Mar 13 13:04:25.778043 master-0 kubenswrapper[28149]: I0313 13:04:25.777782 28149 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager-operator"/"openshift-service-ca.crt" Mar 13 13:04:25.794207 master-0 kubenswrapper[28149]: I0313 13:04:25.787968 28149 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-kn4tm"] Mar 13 13:04:25.794650 master-0 kubenswrapper[28149]: I0313 13:04:25.788990 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/8d2eaa07-eebe-4b25-b2cf-1ac4e102f121-tmp\") pod \"cert-manager-operator-controller-manager-66c8bdd694-kn4tm\" (UID: \"8d2eaa07-eebe-4b25-b2cf-1ac4e102f121\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-kn4tm" Mar 13 13:04:25.794859 master-0 kubenswrapper[28149]: I0313 13:04:25.794841 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ql9pf\" (UniqueName: \"kubernetes.io/projected/8d2eaa07-eebe-4b25-b2cf-1ac4e102f121-kube-api-access-ql9pf\") pod \"cert-manager-operator-controller-manager-66c8bdd694-kn4tm\" (UID: \"8d2eaa07-eebe-4b25-b2cf-1ac4e102f121\") " 
pod="cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-kn4tm" Mar 13 13:04:25.850458 master-0 kubenswrapper[28149]: I0313 13:04:25.850394 28149 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-b9b5ddc8d-wj5zb" Mar 13 13:04:25.900137 master-0 kubenswrapper[28149]: I0313 13:04:25.900085 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/8d2eaa07-eebe-4b25-b2cf-1ac4e102f121-tmp\") pod \"cert-manager-operator-controller-manager-66c8bdd694-kn4tm\" (UID: \"8d2eaa07-eebe-4b25-b2cf-1ac4e102f121\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-kn4tm" Mar 13 13:04:25.900380 master-0 kubenswrapper[28149]: I0313 13:04:25.900201 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ql9pf\" (UniqueName: \"kubernetes.io/projected/8d2eaa07-eebe-4b25-b2cf-1ac4e102f121-kube-api-access-ql9pf\") pod \"cert-manager-operator-controller-manager-66c8bdd694-kn4tm\" (UID: \"8d2eaa07-eebe-4b25-b2cf-1ac4e102f121\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-kn4tm" Mar 13 13:04:25.901342 master-0 kubenswrapper[28149]: I0313 13:04:25.901315 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/8d2eaa07-eebe-4b25-b2cf-1ac4e102f121-tmp\") pod \"cert-manager-operator-controller-manager-66c8bdd694-kn4tm\" (UID: \"8d2eaa07-eebe-4b25-b2cf-1ac4e102f121\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-kn4tm" Mar 13 13:04:25.940957 master-0 kubenswrapper[28149]: I0313 13:04:25.940740 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ql9pf\" (UniqueName: \"kubernetes.io/projected/8d2eaa07-eebe-4b25-b2cf-1ac4e102f121-kube-api-access-ql9pf\") pod 
\"cert-manager-operator-controller-manager-66c8bdd694-kn4tm\" (UID: \"8d2eaa07-eebe-4b25-b2cf-1ac4e102f121\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-kn4tm" Mar 13 13:04:26.109174 master-0 kubenswrapper[28149]: I0313 13:04:26.109022 28149 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-kn4tm" Mar 13 13:04:26.242317 master-0 kubenswrapper[28149]: I0313 13:04:26.235615 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-67c6bd779f-5djh4" event={"ID":"d2de1d43-9ee6-4d7e-a371-6f9a7de0047c","Type":"ContainerStarted","Data":"0d1302ad7bb3d03b6678d50c1d1f067fdfffc255d311a7d4c43d1e02ac06a3c0"} Mar 13 13:04:26.519491 master-0 kubenswrapper[28149]: I0313 13:04:26.519453 28149 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-b9b5ddc8d-wj5zb"] Mar 13 13:04:26.620941 master-0 kubenswrapper[28149]: I0313 13:04:26.620839 28149 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-kn4tm"] Mar 13 13:04:26.628306 master-0 kubenswrapper[28149]: W0313 13:04:26.628247 28149 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8d2eaa07_eebe_4b25_b2cf_1ac4e102f121.slice/crio-0a3c9fbf23207ceee7edfab62bc9cf30ba58872adf16430bd989a35817c89be2 WatchSource:0}: Error finding container 0a3c9fbf23207ceee7edfab62bc9cf30ba58872adf16430bd989a35817c89be2: Status 404 returned error can't find the container with id 0a3c9fbf23207ceee7edfab62bc9cf30ba58872adf16430bd989a35817c89be2 Mar 13 13:04:27.246080 master-0 kubenswrapper[28149]: I0313 13:04:27.246030 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-kn4tm" 
event={"ID":"8d2eaa07-eebe-4b25-b2cf-1ac4e102f121","Type":"ContainerStarted","Data":"0a3c9fbf23207ceee7edfab62bc9cf30ba58872adf16430bd989a35817c89be2"} Mar 13 13:04:27.247685 master-0 kubenswrapper[28149]: I0313 13:04:27.247657 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-b9b5ddc8d-wj5zb" event={"ID":"eee25312-b8a7-43f4-9ec9-96c1fadd4960","Type":"ContainerStarted","Data":"efa8a300191b05b8fe0dc6c9a0ce57a2554b1e1b0ec817faf97c19754e4a64a3"} Mar 13 13:04:37.454266 master-0 kubenswrapper[28149]: I0313 13:04:37.451437 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-67c6bd779f-5djh4" event={"ID":"d2de1d43-9ee6-4d7e-a371-6f9a7de0047c","Type":"ContainerStarted","Data":"8da3ce70b887ef44bfd4883b4faf425137ae52673af3f578588225d14c2f3d28"} Mar 13 13:04:37.454266 master-0 kubenswrapper[28149]: I0313 13:04:37.452662 28149 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-controller-manager-67c6bd779f-5djh4" Mar 13 13:04:38.461583 master-0 kubenswrapper[28149]: I0313 13:04:38.461526 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-kn4tm" event={"ID":"8d2eaa07-eebe-4b25-b2cf-1ac4e102f121","Type":"ContainerStarted","Data":"f3522fad07abf6c3bac0423119e75030013c06f854c30402895aaee459f2f2ff"} Mar 13 13:04:38.463069 master-0 kubenswrapper[28149]: I0313 13:04:38.463037 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-b9b5ddc8d-wj5zb" event={"ID":"eee25312-b8a7-43f4-9ec9-96c1fadd4960","Type":"ContainerStarted","Data":"83ee0c2e82fb49c6647e55ddf2780c7d41e58c53603e2352a865aced7830a255"} Mar 13 13:04:38.463277 master-0 kubenswrapper[28149]: I0313 13:04:38.463233 28149 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="metallb-system/metallb-operator-webhook-server-b9b5ddc8d-wj5zb" Mar 13 13:04:39.426578 master-0 kubenswrapper[28149]: I0313 13:04:39.426477 28149 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-kn4tm" podStartSLOduration=4.127389381 podStartE2EDuration="14.426449386s" podCreationTimestamp="2026-03-13 13:04:25 +0000 UTC" firstStartedPulling="2026-03-13 13:04:26.635470717 +0000 UTC m=+640.288935876" lastFinishedPulling="2026-03-13 13:04:36.934530722 +0000 UTC m=+650.587995881" observedRunningTime="2026-03-13 13:04:39.421329205 +0000 UTC m=+653.074794374" watchObservedRunningTime="2026-03-13 13:04:39.426449386 +0000 UTC m=+653.079914555" Mar 13 13:04:39.431351 master-0 kubenswrapper[28149]: I0313 13:04:39.431258 28149 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-controller-manager-67c6bd779f-5djh4" podStartSLOduration=4.229120593 podStartE2EDuration="15.431232058s" podCreationTimestamp="2026-03-13 13:04:24 +0000 UTC" firstStartedPulling="2026-03-13 13:04:25.714647406 +0000 UTC m=+639.368112565" lastFinishedPulling="2026-03-13 13:04:36.916758871 +0000 UTC m=+650.570224030" observedRunningTime="2026-03-13 13:04:37.7096169 +0000 UTC m=+651.363082069" watchObservedRunningTime="2026-03-13 13:04:39.431232058 +0000 UTC m=+653.084697217" Mar 13 13:04:39.488401 master-0 kubenswrapper[28149]: I0313 13:04:39.488314 28149 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-webhook-server-b9b5ddc8d-wj5zb" podStartSLOduration=4.054451618 podStartE2EDuration="14.488288984s" podCreationTimestamp="2026-03-13 13:04:25 +0000 UTC" firstStartedPulling="2026-03-13 13:04:26.532366221 +0000 UTC m=+640.185831380" lastFinishedPulling="2026-03-13 13:04:36.966203587 +0000 UTC m=+650.619668746" observedRunningTime="2026-03-13 13:04:39.482224876 +0000 UTC m=+653.135690045" 
watchObservedRunningTime="2026-03-13 13:04:39.488288984 +0000 UTC m=+653.141754143" Mar 13 13:04:40.544956 master-0 kubenswrapper[28149]: I0313 13:04:40.544881 28149 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-webhook-6888856db4-95wcl"] Mar 13 13:04:40.545908 master-0 kubenswrapper[28149]: I0313 13:04:40.545874 28149 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-webhook-6888856db4-95wcl" Mar 13 13:04:40.547556 master-0 kubenswrapper[28149]: I0313 13:04:40.547521 28149 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"openshift-service-ca.crt" Mar 13 13:04:40.547709 master-0 kubenswrapper[28149]: I0313 13:04:40.547666 28149 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"kube-root-ca.crt" Mar 13 13:04:40.572722 master-0 kubenswrapper[28149]: I0313 13:04:40.572657 28149 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-6888856db4-95wcl"] Mar 13 13:04:40.643399 master-0 kubenswrapper[28149]: I0313 13:04:40.643323 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/281c5c01-2404-44c1-a270-e9b124dc5425-bound-sa-token\") pod \"cert-manager-webhook-6888856db4-95wcl\" (UID: \"281c5c01-2404-44c1-a270-e9b124dc5425\") " pod="cert-manager/cert-manager-webhook-6888856db4-95wcl" Mar 13 13:04:40.643399 master-0 kubenswrapper[28149]: I0313 13:04:40.643409 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-knct4\" (UniqueName: \"kubernetes.io/projected/281c5c01-2404-44c1-a270-e9b124dc5425-kube-api-access-knct4\") pod \"cert-manager-webhook-6888856db4-95wcl\" (UID: \"281c5c01-2404-44c1-a270-e9b124dc5425\") " pod="cert-manager/cert-manager-webhook-6888856db4-95wcl" Mar 13 13:04:40.744846 master-0 kubenswrapper[28149]: 
I0313 13:04:40.744771 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/281c5c01-2404-44c1-a270-e9b124dc5425-bound-sa-token\") pod \"cert-manager-webhook-6888856db4-95wcl\" (UID: \"281c5c01-2404-44c1-a270-e9b124dc5425\") " pod="cert-manager/cert-manager-webhook-6888856db4-95wcl"
Mar 13 13:04:40.744846 master-0 kubenswrapper[28149]: I0313 13:04:40.744825 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-knct4\" (UniqueName: \"kubernetes.io/projected/281c5c01-2404-44c1-a270-e9b124dc5425-kube-api-access-knct4\") pod \"cert-manager-webhook-6888856db4-95wcl\" (UID: \"281c5c01-2404-44c1-a270-e9b124dc5425\") " pod="cert-manager/cert-manager-webhook-6888856db4-95wcl"
Mar 13 13:04:40.791172 master-0 kubenswrapper[28149]: I0313 13:04:40.788932 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/281c5c01-2404-44c1-a270-e9b124dc5425-bound-sa-token\") pod \"cert-manager-webhook-6888856db4-95wcl\" (UID: \"281c5c01-2404-44c1-a270-e9b124dc5425\") " pod="cert-manager/cert-manager-webhook-6888856db4-95wcl"
Mar 13 13:04:40.796245 master-0 kubenswrapper[28149]: I0313 13:04:40.794127 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-knct4\" (UniqueName: \"kubernetes.io/projected/281c5c01-2404-44c1-a270-e9b124dc5425-kube-api-access-knct4\") pod \"cert-manager-webhook-6888856db4-95wcl\" (UID: \"281c5c01-2404-44c1-a270-e9b124dc5425\") " pod="cert-manager/cert-manager-webhook-6888856db4-95wcl"
Mar 13 13:04:40.862794 master-0 kubenswrapper[28149]: I0313 13:04:40.862743 28149 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-webhook-6888856db4-95wcl"
Mar 13 13:04:41.546942 master-0 kubenswrapper[28149]: I0313 13:04:41.546893 28149 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-6888856db4-95wcl"]
Mar 13 13:04:41.564394 master-0 kubenswrapper[28149]: W0313 13:04:41.564274 28149 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod281c5c01_2404_44c1_a270_e9b124dc5425.slice/crio-0255581654b1974a92b6783719e587b8310c9f89f09cefb3f05924a8e38e6d15 WatchSource:0}: Error finding container 0255581654b1974a92b6783719e587b8310c9f89f09cefb3f05924a8e38e6d15: Status 404 returned error can't find the container with id 0255581654b1974a92b6783719e587b8310c9f89f09cefb3f05924a8e38e6d15
Mar 13 13:04:42.505168 master-0 kubenswrapper[28149]: I0313 13:04:42.505095 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-6888856db4-95wcl" event={"ID":"281c5c01-2404-44c1-a270-e9b124dc5425","Type":"ContainerStarted","Data":"0255581654b1974a92b6783719e587b8310c9f89f09cefb3f05924a8e38e6d15"}
Mar 13 13:04:44.479841 master-0 kubenswrapper[28149]: I0313 13:04:44.479772 28149 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-68bc856cb9-qphfr"]
Mar 13 13:04:44.481237 master-0 kubenswrapper[28149]: I0313 13:04:44.481133 28149 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-qphfr"
Mar 13 13:04:44.484873 master-0 kubenswrapper[28149]: I0313 13:04:44.484797 28149 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators"/"kube-root-ca.crt"
Mar 13 13:04:44.485206 master-0 kubenswrapper[28149]: I0313 13:04:44.484803 28149 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators"/"openshift-service-ca.crt"
Mar 13 13:04:44.515269 master-0 kubenswrapper[28149]: I0313 13:04:44.515155 28149 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-68bc856cb9-qphfr"]
Mar 13 13:04:44.540225 master-0 kubenswrapper[28149]: I0313 13:04:44.540171 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4s5mn\" (UniqueName: \"kubernetes.io/projected/995e4423-8e69-4431-b853-8cdd43d3ecdf-kube-api-access-4s5mn\") pod \"obo-prometheus-operator-68bc856cb9-qphfr\" (UID: \"995e4423-8e69-4431-b853-8cdd43d3ecdf\") " pod="openshift-operators/obo-prometheus-operator-68bc856cb9-qphfr"
Mar 13 13:04:44.625197 master-0 kubenswrapper[28149]: I0313 13:04:44.624820 28149 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-79885b7cf8-z9vcl"]
Mar 13 13:04:44.635073 master-0 kubenswrapper[28149]: I0313 13:04:44.626325 28149 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-79885b7cf8-z9vcl"
Mar 13 13:04:44.635073 master-0 kubenswrapper[28149]: I0313 13:04:44.629163 28149 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-admission-webhook-service-cert"
Mar 13 13:04:44.646180 master-0 kubenswrapper[28149]: I0313 13:04:44.641490 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/31862875-1bab-4461-92b9-238e305747f3-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-79885b7cf8-z9vcl\" (UID: \"31862875-1bab-4461-92b9-238e305747f3\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-79885b7cf8-z9vcl"
Mar 13 13:04:44.646180 master-0 kubenswrapper[28149]: I0313 13:04:44.641591 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4s5mn\" (UniqueName: \"kubernetes.io/projected/995e4423-8e69-4431-b853-8cdd43d3ecdf-kube-api-access-4s5mn\") pod \"obo-prometheus-operator-68bc856cb9-qphfr\" (UID: \"995e4423-8e69-4431-b853-8cdd43d3ecdf\") " pod="openshift-operators/obo-prometheus-operator-68bc856cb9-qphfr"
Mar 13 13:04:44.646180 master-0 kubenswrapper[28149]: I0313 13:04:44.641639 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/31862875-1bab-4461-92b9-238e305747f3-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-79885b7cf8-z9vcl\" (UID: \"31862875-1bab-4461-92b9-238e305747f3\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-79885b7cf8-z9vcl"
Mar 13 13:04:44.646180 master-0 kubenswrapper[28149]: I0313 13:04:44.642029 28149 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-79885b7cf8-z9vcl"]
Mar 13 13:04:44.681285 master-0 kubenswrapper[28149]: I0313 13:04:44.677452 28149 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-79885b7cf8-ndsjv"]
Mar 13 13:04:44.681285 master-0 kubenswrapper[28149]: I0313 13:04:44.678779 28149 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-79885b7cf8-ndsjv"
Mar 13 13:04:44.737230 master-0 kubenswrapper[28149]: I0313 13:04:44.737020 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4s5mn\" (UniqueName: \"kubernetes.io/projected/995e4423-8e69-4431-b853-8cdd43d3ecdf-kube-api-access-4s5mn\") pod \"obo-prometheus-operator-68bc856cb9-qphfr\" (UID: \"995e4423-8e69-4431-b853-8cdd43d3ecdf\") " pod="openshift-operators/obo-prometheus-operator-68bc856cb9-qphfr"
Mar 13 13:04:44.748006 master-0 kubenswrapper[28149]: I0313 13:04:44.744360 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/d103cebd-f32b-4dba-bbc8-3a889b55ab01-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-79885b7cf8-ndsjv\" (UID: \"d103cebd-f32b-4dba-bbc8-3a889b55ab01\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-79885b7cf8-ndsjv"
Mar 13 13:04:44.748006 master-0 kubenswrapper[28149]: I0313 13:04:44.744492 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/d103cebd-f32b-4dba-bbc8-3a889b55ab01-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-79885b7cf8-ndsjv\" (UID: \"d103cebd-f32b-4dba-bbc8-3a889b55ab01\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-79885b7cf8-ndsjv"
Mar 13 13:04:44.748006 master-0 kubenswrapper[28149]: I0313 13:04:44.744526 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/31862875-1bab-4461-92b9-238e305747f3-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-79885b7cf8-z9vcl\" (UID: \"31862875-1bab-4461-92b9-238e305747f3\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-79885b7cf8-z9vcl"
Mar 13 13:04:44.748006 master-0 kubenswrapper[28149]: I0313 13:04:44.744635 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/31862875-1bab-4461-92b9-238e305747f3-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-79885b7cf8-z9vcl\" (UID: \"31862875-1bab-4461-92b9-238e305747f3\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-79885b7cf8-z9vcl"
Mar 13 13:04:44.750295 master-0 kubenswrapper[28149]: I0313 13:04:44.750093 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/31862875-1bab-4461-92b9-238e305747f3-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-79885b7cf8-z9vcl\" (UID: \"31862875-1bab-4461-92b9-238e305747f3\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-79885b7cf8-z9vcl"
Mar 13 13:04:44.756834 master-0 kubenswrapper[28149]: I0313 13:04:44.753621 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/31862875-1bab-4461-92b9-238e305747f3-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-79885b7cf8-z9vcl\" (UID: \"31862875-1bab-4461-92b9-238e305747f3\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-79885b7cf8-z9vcl"
Mar 13 13:04:44.756834 master-0 kubenswrapper[28149]: I0313 13:04:44.754055 28149 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-79885b7cf8-ndsjv"]
Mar 13 13:04:44.793164 master-0 kubenswrapper[28149]: I0313 13:04:44.792669 28149 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-79885b7cf8-z9vcl"
Mar 13 13:04:44.846941 master-0 kubenswrapper[28149]: I0313 13:04:44.846819 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/d103cebd-f32b-4dba-bbc8-3a889b55ab01-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-79885b7cf8-ndsjv\" (UID: \"d103cebd-f32b-4dba-bbc8-3a889b55ab01\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-79885b7cf8-ndsjv"
Mar 13 13:04:44.847189 master-0 kubenswrapper[28149]: I0313 13:04:44.846963 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/d103cebd-f32b-4dba-bbc8-3a889b55ab01-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-79885b7cf8-ndsjv\" (UID: \"d103cebd-f32b-4dba-bbc8-3a889b55ab01\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-79885b7cf8-ndsjv"
Mar 13 13:04:44.852832 master-0 kubenswrapper[28149]: I0313 13:04:44.850927 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/d103cebd-f32b-4dba-bbc8-3a889b55ab01-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-79885b7cf8-ndsjv\" (UID: \"d103cebd-f32b-4dba-bbc8-3a889b55ab01\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-79885b7cf8-ndsjv"
Mar 13 13:04:44.862830 master-0 kubenswrapper[28149]: I0313 13:04:44.855775 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/d103cebd-f32b-4dba-bbc8-3a889b55ab01-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-79885b7cf8-ndsjv\" (UID: \"d103cebd-f32b-4dba-bbc8-3a889b55ab01\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-79885b7cf8-ndsjv"
Mar 13 13:04:44.875666 master-0 kubenswrapper[28149]: I0313 13:04:44.865058 28149 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/observability-operator-59bdc8b94-5rqrr"]
Mar 13 13:04:44.875666 master-0 kubenswrapper[28149]: I0313 13:04:44.866882 28149 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-59bdc8b94-5rqrr"
Mar 13 13:04:44.875666 master-0 kubenswrapper[28149]: I0313 13:04:44.874864 28149 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"observability-operator-tls"
Mar 13 13:04:44.883015 master-0 kubenswrapper[28149]: I0313 13:04:44.877634 28149 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-operator-59bdc8b94-5rqrr"]
Mar 13 13:04:44.883015 master-0 kubenswrapper[28149]: I0313 13:04:44.881403 28149 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-qphfr"
Mar 13 13:04:44.957336 master-0 kubenswrapper[28149]: I0313 13:04:44.954098 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qcsrg\" (UniqueName: \"kubernetes.io/projected/a33a5a2c-fbbc-4cb9-af30-f2c60aca75f5-kube-api-access-qcsrg\") pod \"observability-operator-59bdc8b94-5rqrr\" (UID: \"a33a5a2c-fbbc-4cb9-af30-f2c60aca75f5\") " pod="openshift-operators/observability-operator-59bdc8b94-5rqrr"
Mar 13 13:04:44.957336 master-0 kubenswrapper[28149]: I0313 13:04:44.954183 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/a33a5a2c-fbbc-4cb9-af30-f2c60aca75f5-observability-operator-tls\") pod \"observability-operator-59bdc8b94-5rqrr\" (UID: \"a33a5a2c-fbbc-4cb9-af30-f2c60aca75f5\") " pod="openshift-operators/observability-operator-59bdc8b94-5rqrr"
Mar 13 13:04:45.077081 master-0 kubenswrapper[28149]: I0313 13:04:45.076997 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qcsrg\" (UniqueName: \"kubernetes.io/projected/a33a5a2c-fbbc-4cb9-af30-f2c60aca75f5-kube-api-access-qcsrg\") pod \"observability-operator-59bdc8b94-5rqrr\" (UID: \"a33a5a2c-fbbc-4cb9-af30-f2c60aca75f5\") " pod="openshift-operators/observability-operator-59bdc8b94-5rqrr"
Mar 13 13:04:45.077346 master-0 kubenswrapper[28149]: I0313 13:04:45.077116 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/a33a5a2c-fbbc-4cb9-af30-f2c60aca75f5-observability-operator-tls\") pod \"observability-operator-59bdc8b94-5rqrr\" (UID: \"a33a5a2c-fbbc-4cb9-af30-f2c60aca75f5\") " pod="openshift-operators/observability-operator-59bdc8b94-5rqrr"
Mar 13 13:04:45.095954 master-0 kubenswrapper[28149]: I0313 13:04:45.095911 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/a33a5a2c-fbbc-4cb9-af30-f2c60aca75f5-observability-operator-tls\") pod \"observability-operator-59bdc8b94-5rqrr\" (UID: \"a33a5a2c-fbbc-4cb9-af30-f2c60aca75f5\") " pod="openshift-operators/observability-operator-59bdc8b94-5rqrr"
Mar 13 13:04:45.108112 master-0 kubenswrapper[28149]: I0313 13:04:45.108070 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qcsrg\" (UniqueName: \"kubernetes.io/projected/a33a5a2c-fbbc-4cb9-af30-f2c60aca75f5-kube-api-access-qcsrg\") pod \"observability-operator-59bdc8b94-5rqrr\" (UID: \"a33a5a2c-fbbc-4cb9-af30-f2c60aca75f5\") " pod="openshift-operators/observability-operator-59bdc8b94-5rqrr"
Mar 13 13:04:45.142363 master-0 kubenswrapper[28149]: I0313 13:04:45.142316 28149 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-79885b7cf8-ndsjv"
Mar 13 13:04:45.257107 master-0 kubenswrapper[28149]: I0313 13:04:45.255289 28149 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/perses-operator-5bf474d74f-49bct"]
Mar 13 13:04:45.257107 master-0 kubenswrapper[28149]: I0313 13:04:45.256473 28149 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-5bf474d74f-49bct"
Mar 13 13:04:45.282340 master-0 kubenswrapper[28149]: I0313 13:04:45.282286 28149 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/perses-operator-5bf474d74f-49bct"]
Mar 13 13:04:45.399004 master-0 kubenswrapper[28149]: I0313 13:04:45.395768 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sqn54\" (UniqueName: \"kubernetes.io/projected/46e88d6d-6585-43dd-8fc3-2165ad505385-kube-api-access-sqn54\") pod \"perses-operator-5bf474d74f-49bct\" (UID: \"46e88d6d-6585-43dd-8fc3-2165ad505385\") " pod="openshift-operators/perses-operator-5bf474d74f-49bct"
Mar 13 13:04:45.399004 master-0 kubenswrapper[28149]: I0313 13:04:45.398912 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/46e88d6d-6585-43dd-8fc3-2165ad505385-openshift-service-ca\") pod \"perses-operator-5bf474d74f-49bct\" (UID: \"46e88d6d-6585-43dd-8fc3-2165ad505385\") " pod="openshift-operators/perses-operator-5bf474d74f-49bct"
Mar 13 13:04:45.399627 master-0 kubenswrapper[28149]: I0313 13:04:45.399601 28149 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-59bdc8b94-5rqrr"
Mar 13 13:04:45.455125 master-0 kubenswrapper[28149]: I0313 13:04:45.455076 28149 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-79885b7cf8-z9vcl"]
Mar 13 13:04:45.491061 master-0 kubenswrapper[28149]: I0313 13:04:45.488696 28149 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-cainjector-5545bd876-7sxkw"]
Mar 13 13:04:45.523173 master-0 kubenswrapper[28149]: I0313 13:04:45.515571 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/46e88d6d-6585-43dd-8fc3-2165ad505385-openshift-service-ca\") pod \"perses-operator-5bf474d74f-49bct\" (UID: \"46e88d6d-6585-43dd-8fc3-2165ad505385\") " pod="openshift-operators/perses-operator-5bf474d74f-49bct"
Mar 13 13:04:45.523173 master-0 kubenswrapper[28149]: I0313 13:04:45.516450 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sqn54\" (UniqueName: \"kubernetes.io/projected/46e88d6d-6585-43dd-8fc3-2165ad505385-kube-api-access-sqn54\") pod \"perses-operator-5bf474d74f-49bct\" (UID: \"46e88d6d-6585-43dd-8fc3-2165ad505385\") " pod="openshift-operators/perses-operator-5bf474d74f-49bct"
Mar 13 13:04:45.523173 master-0 kubenswrapper[28149]: I0313 13:04:45.517834 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/46e88d6d-6585-43dd-8fc3-2165ad505385-openshift-service-ca\") pod \"perses-operator-5bf474d74f-49bct\" (UID: \"46e88d6d-6585-43dd-8fc3-2165ad505385\") " pod="openshift-operators/perses-operator-5bf474d74f-49bct"
Mar 13 13:04:45.540532 master-0 kubenswrapper[28149]: I0313 13:04:45.536004 28149 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-5545bd876-7sxkw"
Mar 13 13:04:45.559162 master-0 kubenswrapper[28149]: I0313 13:04:45.557507 28149 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-5545bd876-7sxkw"]
Mar 13 13:04:45.572279 master-0 kubenswrapper[28149]: I0313 13:04:45.571028 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sqn54\" (UniqueName: \"kubernetes.io/projected/46e88d6d-6585-43dd-8fc3-2165ad505385-kube-api-access-sqn54\") pod \"perses-operator-5bf474d74f-49bct\" (UID: \"46e88d6d-6585-43dd-8fc3-2165ad505385\") " pod="openshift-operators/perses-operator-5bf474d74f-49bct"
Mar 13 13:04:45.627902 master-0 kubenswrapper[28149]: I0313 13:04:45.625504 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/752613e2-fb8d-4f08-bea1-d73fc62d472c-bound-sa-token\") pod \"cert-manager-cainjector-5545bd876-7sxkw\" (UID: \"752613e2-fb8d-4f08-bea1-d73fc62d472c\") " pod="cert-manager/cert-manager-cainjector-5545bd876-7sxkw"
Mar 13 13:04:45.627902 master-0 kubenswrapper[28149]: I0313 13:04:45.625607 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xzp86\" (UniqueName: \"kubernetes.io/projected/752613e2-fb8d-4f08-bea1-d73fc62d472c-kube-api-access-xzp86\") pod \"cert-manager-cainjector-5545bd876-7sxkw\" (UID: \"752613e2-fb8d-4f08-bea1-d73fc62d472c\") " pod="cert-manager/cert-manager-cainjector-5545bd876-7sxkw"
Mar 13 13:04:45.751421 master-0 kubenswrapper[28149]: I0313 13:04:45.733186 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xzp86\" (UniqueName: \"kubernetes.io/projected/752613e2-fb8d-4f08-bea1-d73fc62d472c-kube-api-access-xzp86\") pod \"cert-manager-cainjector-5545bd876-7sxkw\" (UID: \"752613e2-fb8d-4f08-bea1-d73fc62d472c\") " pod="cert-manager/cert-manager-cainjector-5545bd876-7sxkw"
Mar 13 13:04:45.751421 master-0 kubenswrapper[28149]: I0313 13:04:45.733970 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/752613e2-fb8d-4f08-bea1-d73fc62d472c-bound-sa-token\") pod \"cert-manager-cainjector-5545bd876-7sxkw\" (UID: \"752613e2-fb8d-4f08-bea1-d73fc62d472c\") " pod="cert-manager/cert-manager-cainjector-5545bd876-7sxkw"
Mar 13 13:04:45.751421 master-0 kubenswrapper[28149]: I0313 13:04:45.738579 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-79885b7cf8-z9vcl" event={"ID":"31862875-1bab-4461-92b9-238e305747f3","Type":"ContainerStarted","Data":"dc5e0d958fd2c06ddff087e12cc97056da41d526f2f88a4969eb46a43204019a"}
Mar 13 13:04:45.751421 master-0 kubenswrapper[28149]: I0313 13:04:45.744074 28149 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-68bc856cb9-qphfr"]
Mar 13 13:04:45.763450 master-0 kubenswrapper[28149]: I0313 13:04:45.755739 28149 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-5bf474d74f-49bct"
Mar 13 13:04:45.763450 master-0 kubenswrapper[28149]: I0313 13:04:45.762678 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xzp86\" (UniqueName: \"kubernetes.io/projected/752613e2-fb8d-4f08-bea1-d73fc62d472c-kube-api-access-xzp86\") pod \"cert-manager-cainjector-5545bd876-7sxkw\" (UID: \"752613e2-fb8d-4f08-bea1-d73fc62d472c\") " pod="cert-manager/cert-manager-cainjector-5545bd876-7sxkw"
Mar 13 13:04:45.765160 master-0 kubenswrapper[28149]: I0313 13:04:45.765087 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/752613e2-fb8d-4f08-bea1-d73fc62d472c-bound-sa-token\") pod \"cert-manager-cainjector-5545bd876-7sxkw\" (UID: \"752613e2-fb8d-4f08-bea1-d73fc62d472c\") " pod="cert-manager/cert-manager-cainjector-5545bd876-7sxkw"
Mar 13 13:04:45.928301 master-0 kubenswrapper[28149]: I0313 13:04:45.928180 28149 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-5545bd876-7sxkw"
Mar 13 13:04:45.962213 master-0 kubenswrapper[28149]: I0313 13:04:45.962165 28149 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-79885b7cf8-ndsjv"]
Mar 13 13:04:46.157311 master-0 kubenswrapper[28149]: I0313 13:04:46.157094 28149 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-operator-59bdc8b94-5rqrr"]
Mar 13 13:04:46.171422 master-0 kubenswrapper[28149]: W0313 13:04:46.171350 28149 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda33a5a2c_fbbc_4cb9_af30_f2c60aca75f5.slice/crio-2cbfa85414ef36bdfd65b5063965fc6de0328a31172dd3098b9ecf10de102069 WatchSource:0}: Error finding container 2cbfa85414ef36bdfd65b5063965fc6de0328a31172dd3098b9ecf10de102069: Status 404 returned error can't find the container with id 2cbfa85414ef36bdfd65b5063965fc6de0328a31172dd3098b9ecf10de102069
Mar 13 13:04:46.338428 master-0 kubenswrapper[28149]: I0313 13:04:46.338346 28149 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/perses-operator-5bf474d74f-49bct"]
Mar 13 13:04:46.535171 master-0 kubenswrapper[28149]: I0313 13:04:46.527864 28149 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-5545bd876-7sxkw"]
Mar 13 13:04:46.796481 master-0 kubenswrapper[28149]: I0313 13:04:46.796370 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-operator-59bdc8b94-5rqrr" event={"ID":"a33a5a2c-fbbc-4cb9-af30-f2c60aca75f5","Type":"ContainerStarted","Data":"2cbfa85414ef36bdfd65b5063965fc6de0328a31172dd3098b9ecf10de102069"}
Mar 13 13:04:46.826598 master-0 kubenswrapper[28149]: I0313 13:04:46.826514 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/perses-operator-5bf474d74f-49bct" event={"ID":"46e88d6d-6585-43dd-8fc3-2165ad505385","Type":"ContainerStarted","Data":"2a3239ed4f183e5f8ce6f8b160e1bb2257c3b31e8d261172ba96d65bd37e01e4"}
Mar 13 13:04:46.975167 master-0 kubenswrapper[28149]: I0313 13:04:46.973637 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-qphfr" event={"ID":"995e4423-8e69-4431-b853-8cdd43d3ecdf","Type":"ContainerStarted","Data":"dfcb3863e5c62a203d74bce694f6f2716a3d6c7963d8ad094dd1a127331d4661"}
Mar 13 13:04:46.989161 master-0 kubenswrapper[28149]: I0313 13:04:46.980944 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-79885b7cf8-ndsjv" event={"ID":"d103cebd-f32b-4dba-bbc8-3a889b55ab01","Type":"ContainerStarted","Data":"8cd5182f56a474b0d8c330fb8441410c63a7d36ffbc369b23f9ab679cf48c175"}
Mar 13 13:04:46.989161 master-0 kubenswrapper[28149]: I0313 13:04:46.982647 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-5545bd876-7sxkw" event={"ID":"752613e2-fb8d-4f08-bea1-d73fc62d472c","Type":"ContainerStarted","Data":"2f24be22308f7616d6c0205fc3f05979b7993fe6f6f11a686d15081aedf7ce72"}
Mar 13 13:04:55.856922 master-0 kubenswrapper[28149]: I0313 13:04:55.856838 28149 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-webhook-server-b9b5ddc8d-wj5zb"
Mar 13 13:04:59.201062 master-0 kubenswrapper[28149]: I0313 13:04:59.195381 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-6888856db4-95wcl" event={"ID":"281c5c01-2404-44c1-a270-e9b124dc5425","Type":"ContainerStarted","Data":"653a0da31a0c42695de738c01710d0e24012b5c2d705959c147d91d3b1bce792"}
Mar 13 13:04:59.201062 master-0 kubenswrapper[28149]: I0313 13:04:59.196667 28149 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="cert-manager/cert-manager-webhook-6888856db4-95wcl"
Mar 13 13:04:59.207583 master-0 kubenswrapper[28149]: I0313 13:04:59.207527 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-79885b7cf8-z9vcl" event={"ID":"31862875-1bab-4461-92b9-238e305747f3","Type":"ContainerStarted","Data":"c5f39af479ef3849dcc2b01a9b7dc02bd020bbe0e2255a0fc70c04ab2026d828"}
Mar 13 13:04:59.214228 master-0 kubenswrapper[28149]: I0313 13:04:59.213068 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/perses-operator-5bf474d74f-49bct" event={"ID":"46e88d6d-6585-43dd-8fc3-2165ad505385","Type":"ContainerStarted","Data":"9614a7a336740bb6c71e5e853d664b3dbbfdcd1b6a404d5b812b1ed149ecff3e"}
Mar 13 13:04:59.214228 master-0 kubenswrapper[28149]: I0313 13:04:59.213269 28149 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operators/perses-operator-5bf474d74f-49bct"
Mar 13 13:04:59.217662 master-0 kubenswrapper[28149]: I0313 13:04:59.216708 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-5545bd876-7sxkw" event={"ID":"752613e2-fb8d-4f08-bea1-d73fc62d472c","Type":"ContainerStarted","Data":"637a0b4715eeab2cbd4cd58cb12130db49f8c7219f0471721ecd585faafc1bc5"}
Mar 13 13:04:59.220876 master-0 kubenswrapper[28149]: I0313 13:04:59.220818 28149 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-webhook-6888856db4-95wcl" podStartSLOduration=2.222454344 podStartE2EDuration="19.220805916s" podCreationTimestamp="2026-03-13 13:04:40 +0000 UTC" firstStartedPulling="2026-03-13 13:04:41.567660588 +0000 UTC m=+655.221125747" lastFinishedPulling="2026-03-13 13:04:58.56601216 +0000 UTC m=+672.219477319" observedRunningTime="2026-03-13 13:04:59.220513858 +0000 UTC m=+672.873979017" watchObservedRunningTime="2026-03-13 13:04:59.220805916 +0000 UTC m=+672.874271075"
Mar 13 13:04:59.421231 master-0 kubenswrapper[28149]: I0313 13:04:59.418131 28149 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/perses-operator-5bf474d74f-49bct" podStartSLOduration=2.10473429 podStartE2EDuration="14.418111253s" podCreationTimestamp="2026-03-13 13:04:45 +0000 UTC" firstStartedPulling="2026-03-13 13:04:46.352642217 +0000 UTC m=+660.006107376" lastFinishedPulling="2026-03-13 13:04:58.66601918 +0000 UTC m=+672.319484339" observedRunningTime="2026-03-13 13:04:59.259426082 +0000 UTC m=+672.912891251" watchObservedRunningTime="2026-03-13 13:04:59.418111253 +0000 UTC m=+673.071576412"
Mar 13 13:04:59.421231 master-0 kubenswrapper[28149]: I0313 13:04:59.420497 28149 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-cainjector-5545bd876-7sxkw" podStartSLOduration=2.293578163 podStartE2EDuration="14.420488899s" podCreationTimestamp="2026-03-13 13:04:45 +0000 UTC" firstStartedPulling="2026-03-13 13:04:46.534418365 +0000 UTC m=+660.187883524" lastFinishedPulling="2026-03-13 13:04:58.661329101 +0000 UTC m=+672.314794260" observedRunningTime="2026-03-13 13:04:59.412816806 +0000 UTC m=+673.066281965" watchObservedRunningTime="2026-03-13 13:04:59.420488899 +0000 UTC m=+673.073954058"
Mar 13 13:04:59.470074 master-0 kubenswrapper[28149]: I0313 13:04:59.464791 28149 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-admission-webhook-79885b7cf8-z9vcl" podStartSLOduration=2.277896443 podStartE2EDuration="15.46476195s" podCreationTimestamp="2026-03-13 13:04:44 +0000 UTC" firstStartedPulling="2026-03-13 13:04:45.466662468 +0000 UTC m=+659.120127627" lastFinishedPulling="2026-03-13 13:04:58.653527975 +0000 UTC m=+672.306993134" observedRunningTime="2026-03-13 13:04:59.455104785 +0000 UTC m=+673.108569954" watchObservedRunningTime="2026-03-13 13:04:59.46476195 +0000 UTC m=+673.118227109"
Mar 13 13:05:00.227476 master-0 kubenswrapper[28149]: I0313 13:05:00.227393 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-operator-59bdc8b94-5rqrr" event={"ID":"a33a5a2c-fbbc-4cb9-af30-f2c60aca75f5","Type":"ContainerStarted","Data":"ec11005f996b2c0ec9ec9dfd124aa3c4a0c9c9a35d9de1ce777616fad19780eb"}
Mar 13 13:05:00.228078 master-0 kubenswrapper[28149]: I0313 13:05:00.227587 28149 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operators/observability-operator-59bdc8b94-5rqrr"
Mar 13 13:05:00.229277 master-0 kubenswrapper[28149]: I0313 13:05:00.229237 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-qphfr" event={"ID":"995e4423-8e69-4431-b853-8cdd43d3ecdf","Type":"ContainerStarted","Data":"e3a93647301894711a9078f0b03f5115af30e3d0edf9e6652c7348ebd4b3875e"}
Mar 13 13:05:00.230858 master-0 kubenswrapper[28149]: I0313 13:05:00.230800 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-79885b7cf8-ndsjv" event={"ID":"d103cebd-f32b-4dba-bbc8-3a889b55ab01","Type":"ContainerStarted","Data":"7cb7ad826882fbe2116da42cf80015589af478a8960320e5192ce3dd49e1800d"}
Mar 13 13:05:00.517701 master-0 kubenswrapper[28149]: I0313 13:05:00.515322 28149 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/observability-operator-59bdc8b94-5rqrr" podStartSLOduration=3.983659635 podStartE2EDuration="16.515291723s" podCreationTimestamp="2026-03-13 13:04:44 +0000 UTC" firstStartedPulling="2026-03-13 13:04:46.173922063 +0000 UTC m=+659.827387222" lastFinishedPulling="2026-03-13 13:04:58.705554151 +0000 UTC m=+672.359019310" observedRunningTime="2026-03-13 13:05:00.26092367 +0000 UTC m=+673.914388879" watchObservedRunningTime="2026-03-13 13:05:00.515291723 +0000 UTC m=+674.168756892"
Mar 13 13:05:00.517701 master-0 kubenswrapper[28149]: I0313 13:05:00.516644 28149 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators/observability-operator-59bdc8b94-5rqrr"
Mar 13 13:05:00.543428 master-0 kubenswrapper[28149]: I0313 13:05:00.543323 28149 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-admission-webhook-79885b7cf8-ndsjv" podStartSLOduration=3.8422346210000002 podStartE2EDuration="16.543294746s" podCreationTimestamp="2026-03-13 13:04:44 +0000 UTC" firstStartedPulling="2026-03-13 13:04:45.992931997 +0000 UTC m=+659.646397156" lastFinishedPulling="2026-03-13 13:04:58.693992122 +0000 UTC m=+672.347457281" observedRunningTime="2026-03-13 13:05:00.532553149 +0000 UTC m=+674.186018308" watchObservedRunningTime="2026-03-13 13:05:00.543294746 +0000 UTC m=+674.196759915"
Mar 13 13:05:00.625809 master-0 kubenswrapper[28149]: I0313 13:05:00.620335 28149 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-qphfr" podStartSLOduration=3.7983195370000002 podStartE2EDuration="16.620312942s" podCreationTimestamp="2026-03-13 13:04:44 +0000 UTC" firstStartedPulling="2026-03-13 13:04:45.832410935 +0000 UTC m=+659.485876094" lastFinishedPulling="2026-03-13 13:04:58.65440434 +0000 UTC m=+672.307869499" observedRunningTime="2026-03-13 13:05:00.613615307 +0000 UTC m=+674.267080486" watchObservedRunningTime="2026-03-13 13:05:00.620312942 +0000 UTC m=+674.273778101"
Mar 13 13:05:01.206166 master-0 kubenswrapper[28149]: I0313 13:05:01.204097 28149 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-545d4d4674-44lrm"]
Mar 13 13:05:01.206166 master-0 kubenswrapper[28149]: I0313 13:05:01.205301 28149 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-545d4d4674-44lrm"
Mar 13 13:05:01.245428 master-0 kubenswrapper[28149]: I0313 13:05:01.231192 28149 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-545d4d4674-44lrm"]
Mar 13 13:05:01.278168 master-0 kubenswrapper[28149]: I0313 13:05:01.276671 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9hf6x\" (UniqueName: \"kubernetes.io/projected/ac8344e0-b8b2-47a8-a0b4-cf31dc05056e-kube-api-access-9hf6x\") pod \"cert-manager-545d4d4674-44lrm\" (UID: \"ac8344e0-b8b2-47a8-a0b4-cf31dc05056e\") " pod="cert-manager/cert-manager-545d4d4674-44lrm"
Mar 13 13:05:01.278168 master-0 kubenswrapper[28149]: I0313 13:05:01.276848 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/ac8344e0-b8b2-47a8-a0b4-cf31dc05056e-bound-sa-token\") pod \"cert-manager-545d4d4674-44lrm\" (UID: \"ac8344e0-b8b2-47a8-a0b4-cf31dc05056e\") " pod="cert-manager/cert-manager-545d4d4674-44lrm"
Mar 13 13:05:01.380363 master-0 kubenswrapper[28149]: I0313 13:05:01.380123 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9hf6x\" (UniqueName: \"kubernetes.io/projected/ac8344e0-b8b2-47a8-a0b4-cf31dc05056e-kube-api-access-9hf6x\") pod \"cert-manager-545d4d4674-44lrm\" (UID: \"ac8344e0-b8b2-47a8-a0b4-cf31dc05056e\") " pod="cert-manager/cert-manager-545d4d4674-44lrm"
Mar 13 13:05:01.380363 master-0 kubenswrapper[28149]: I0313 13:05:01.380309 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/ac8344e0-b8b2-47a8-a0b4-cf31dc05056e-bound-sa-token\") pod \"cert-manager-545d4d4674-44lrm\" (UID: \"ac8344e0-b8b2-47a8-a0b4-cf31dc05056e\") " pod="cert-manager/cert-manager-545d4d4674-44lrm"
Mar 13 13:05:01.402948 master-0 kubenswrapper[28149]: I0313 13:05:01.402239 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/ac8344e0-b8b2-47a8-a0b4-cf31dc05056e-bound-sa-token\") pod \"cert-manager-545d4d4674-44lrm\" (UID: \"ac8344e0-b8b2-47a8-a0b4-cf31dc05056e\") " pod="cert-manager/cert-manager-545d4d4674-44lrm"
Mar 13 13:05:01.415254 master-0 kubenswrapper[28149]: I0313 13:05:01.405614 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9hf6x\" (UniqueName: \"kubernetes.io/projected/ac8344e0-b8b2-47a8-a0b4-cf31dc05056e-kube-api-access-9hf6x\") pod \"cert-manager-545d4d4674-44lrm\" (UID: \"ac8344e0-b8b2-47a8-a0b4-cf31dc05056e\") " pod="cert-manager/cert-manager-545d4d4674-44lrm"
Mar 13 13:05:01.662231 master-0 kubenswrapper[28149]: I0313 13:05:01.662171 28149 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-545d4d4674-44lrm"
Mar 13 13:05:02.186382 master-0 kubenswrapper[28149]: I0313 13:05:02.184972 28149 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-545d4d4674-44lrm"]
Mar 13 13:05:02.266809 master-0 kubenswrapper[28149]: I0313 13:05:02.266693 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-545d4d4674-44lrm" event={"ID":"ac8344e0-b8b2-47a8-a0b4-cf31dc05056e","Type":"ContainerStarted","Data":"780f0d426c5c4f31790c592643a6135953df8bb7bc2184cf8d2f27fc68c85917"}
Mar 13 13:05:03.279020 master-0 kubenswrapper[28149]: I0313 13:05:03.278978 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-545d4d4674-44lrm" event={"ID":"ac8344e0-b8b2-47a8-a0b4-cf31dc05056e","Type":"ContainerStarted","Data":"18874c0c60362645f756beced17b1f8af8dd9f9d4be43dfc6a4feb9004f91c52"}
Mar 13 13:05:03.312960 master-0 kubenswrapper[28149]: I0313 13:05:03.312830 28149 pod_startup_latency_tracker.go:104] "Observed pod startup duration"
pod="cert-manager/cert-manager-545d4d4674-44lrm" podStartSLOduration=2.312742902 podStartE2EDuration="2.312742902s" podCreationTimestamp="2026-03-13 13:05:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 13:05:03.303828445 +0000 UTC m=+676.957293614" watchObservedRunningTime="2026-03-13 13:05:03.312742902 +0000 UTC m=+676.966208061" Mar 13 13:05:05.759958 master-0 kubenswrapper[28149]: I0313 13:05:05.759896 28149 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators/perses-operator-5bf474d74f-49bct" Mar 13 13:05:05.865319 master-0 kubenswrapper[28149]: I0313 13:05:05.865266 28149 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="cert-manager/cert-manager-webhook-6888856db4-95wcl" Mar 13 13:05:15.176442 master-0 kubenswrapper[28149]: I0313 13:05:15.176366 28149 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-controller-manager-67c6bd779f-5djh4" Mar 13 13:05:24.816518 master-0 kubenswrapper[28149]: I0313 13:05:24.816439 28149 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-webhook-server-bcc4b6f68-v5st7"] Mar 13 13:05:24.821184 master-0 kubenswrapper[28149]: I0313 13:05:24.817902 28149 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-bcc4b6f68-v5st7" Mar 13 13:05:24.821792 master-0 kubenswrapper[28149]: I0313 13:05:24.821753 28149 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-webhook-server-cert" Mar 13 13:05:24.828981 master-0 kubenswrapper[28149]: I0313 13:05:24.828931 28149 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-qb9kf"] Mar 13 13:05:24.833032 master-0 kubenswrapper[28149]: I0313 13:05:24.832994 28149 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/frr-k8s-qb9kf" Mar 13 13:05:24.839109 master-0 kubenswrapper[28149]: I0313 13:05:24.838964 28149 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-certs-secret" Mar 13 13:05:24.839109 master-0 kubenswrapper[28149]: I0313 13:05:24.839005 28149 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"frr-startup" Mar 13 13:05:24.865922 master-0 kubenswrapper[28149]: I0313 13:05:24.865876 28149 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-bcc4b6f68-v5st7"] Mar 13 13:05:24.899925 master-0 kubenswrapper[28149]: I0313 13:05:24.888545 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tzr24\" (UniqueName: \"kubernetes.io/projected/98a87277-3308-4819-8cad-1f0c2d5d97e1-kube-api-access-tzr24\") pod \"frr-k8s-qb9kf\" (UID: \"98a87277-3308-4819-8cad-1f0c2d5d97e1\") " pod="metallb-system/frr-k8s-qb9kf" Mar 13 13:05:24.899925 master-0 kubenswrapper[28149]: I0313 13:05:24.888625 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/98a87277-3308-4819-8cad-1f0c2d5d97e1-metrics-certs\") pod \"frr-k8s-qb9kf\" (UID: \"98a87277-3308-4819-8cad-1f0c2d5d97e1\") " pod="metallb-system/frr-k8s-qb9kf" Mar 13 13:05:24.899925 master-0 kubenswrapper[28149]: I0313 13:05:24.888649 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/6ff627b0-a4b7-4741-9934-afc226d587b8-cert\") pod \"frr-k8s-webhook-server-bcc4b6f68-v5st7\" (UID: \"6ff627b0-a4b7-4741-9934-afc226d587b8\") " pod="metallb-system/frr-k8s-webhook-server-bcc4b6f68-v5st7" Mar 13 13:05:24.899925 master-0 kubenswrapper[28149]: I0313 13:05:24.888667 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/98a87277-3308-4819-8cad-1f0c2d5d97e1-metrics\") pod \"frr-k8s-qb9kf\" (UID: \"98a87277-3308-4819-8cad-1f0c2d5d97e1\") " pod="metallb-system/frr-k8s-qb9kf" Mar 13 13:05:24.899925 master-0 kubenswrapper[28149]: I0313 13:05:24.888747 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/98a87277-3308-4819-8cad-1f0c2d5d97e1-frr-startup\") pod \"frr-k8s-qb9kf\" (UID: \"98a87277-3308-4819-8cad-1f0c2d5d97e1\") " pod="metallb-system/frr-k8s-qb9kf" Mar 13 13:05:24.899925 master-0 kubenswrapper[28149]: I0313 13:05:24.888789 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/98a87277-3308-4819-8cad-1f0c2d5d97e1-reloader\") pod \"frr-k8s-qb9kf\" (UID: \"98a87277-3308-4819-8cad-1f0c2d5d97e1\") " pod="metallb-system/frr-k8s-qb9kf" Mar 13 13:05:24.899925 master-0 kubenswrapper[28149]: I0313 13:05:24.888823 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/98a87277-3308-4819-8cad-1f0c2d5d97e1-frr-sockets\") pod \"frr-k8s-qb9kf\" (UID: \"98a87277-3308-4819-8cad-1f0c2d5d97e1\") " pod="metallb-system/frr-k8s-qb9kf" Mar 13 13:05:24.899925 master-0 kubenswrapper[28149]: I0313 13:05:24.888858 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m5xml\" (UniqueName: \"kubernetes.io/projected/6ff627b0-a4b7-4741-9934-afc226d587b8-kube-api-access-m5xml\") pod \"frr-k8s-webhook-server-bcc4b6f68-v5st7\" (UID: \"6ff627b0-a4b7-4741-9934-afc226d587b8\") " pod="metallb-system/frr-k8s-webhook-server-bcc4b6f68-v5st7" Mar 13 13:05:24.899925 master-0 kubenswrapper[28149]: I0313 13:05:24.888874 28149 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/98a87277-3308-4819-8cad-1f0c2d5d97e1-frr-conf\") pod \"frr-k8s-qb9kf\" (UID: \"98a87277-3308-4819-8cad-1f0c2d5d97e1\") " pod="metallb-system/frr-k8s-qb9kf" Mar 13 13:05:24.996160 master-0 kubenswrapper[28149]: I0313 13:05:24.993819 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tzr24\" (UniqueName: \"kubernetes.io/projected/98a87277-3308-4819-8cad-1f0c2d5d97e1-kube-api-access-tzr24\") pod \"frr-k8s-qb9kf\" (UID: \"98a87277-3308-4819-8cad-1f0c2d5d97e1\") " pod="metallb-system/frr-k8s-qb9kf" Mar 13 13:05:24.996160 master-0 kubenswrapper[28149]: I0313 13:05:24.993942 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/98a87277-3308-4819-8cad-1f0c2d5d97e1-metrics-certs\") pod \"frr-k8s-qb9kf\" (UID: \"98a87277-3308-4819-8cad-1f0c2d5d97e1\") " pod="metallb-system/frr-k8s-qb9kf" Mar 13 13:05:24.996160 master-0 kubenswrapper[28149]: I0313 13:05:24.993983 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/6ff627b0-a4b7-4741-9934-afc226d587b8-cert\") pod \"frr-k8s-webhook-server-bcc4b6f68-v5st7\" (UID: \"6ff627b0-a4b7-4741-9934-afc226d587b8\") " pod="metallb-system/frr-k8s-webhook-server-bcc4b6f68-v5st7" Mar 13 13:05:24.996160 master-0 kubenswrapper[28149]: I0313 13:05:24.994011 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/98a87277-3308-4819-8cad-1f0c2d5d97e1-metrics\") pod \"frr-k8s-qb9kf\" (UID: \"98a87277-3308-4819-8cad-1f0c2d5d97e1\") " pod="metallb-system/frr-k8s-qb9kf" Mar 13 13:05:24.996160 master-0 kubenswrapper[28149]: I0313 13:05:24.994049 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-startup\" (UniqueName: 
\"kubernetes.io/configmap/98a87277-3308-4819-8cad-1f0c2d5d97e1-frr-startup\") pod \"frr-k8s-qb9kf\" (UID: \"98a87277-3308-4819-8cad-1f0c2d5d97e1\") " pod="metallb-system/frr-k8s-qb9kf" Mar 13 13:05:24.996160 master-0 kubenswrapper[28149]: I0313 13:05:24.994099 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/98a87277-3308-4819-8cad-1f0c2d5d97e1-reloader\") pod \"frr-k8s-qb9kf\" (UID: \"98a87277-3308-4819-8cad-1f0c2d5d97e1\") " pod="metallb-system/frr-k8s-qb9kf" Mar 13 13:05:24.996160 master-0 kubenswrapper[28149]: I0313 13:05:24.994162 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/98a87277-3308-4819-8cad-1f0c2d5d97e1-frr-sockets\") pod \"frr-k8s-qb9kf\" (UID: \"98a87277-3308-4819-8cad-1f0c2d5d97e1\") " pod="metallb-system/frr-k8s-qb9kf" Mar 13 13:05:24.996160 master-0 kubenswrapper[28149]: I0313 13:05:24.994208 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m5xml\" (UniqueName: \"kubernetes.io/projected/6ff627b0-a4b7-4741-9934-afc226d587b8-kube-api-access-m5xml\") pod \"frr-k8s-webhook-server-bcc4b6f68-v5st7\" (UID: \"6ff627b0-a4b7-4741-9934-afc226d587b8\") " pod="metallb-system/frr-k8s-webhook-server-bcc4b6f68-v5st7" Mar 13 13:05:24.996160 master-0 kubenswrapper[28149]: I0313 13:05:24.994233 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/98a87277-3308-4819-8cad-1f0c2d5d97e1-frr-conf\") pod \"frr-k8s-qb9kf\" (UID: \"98a87277-3308-4819-8cad-1f0c2d5d97e1\") " pod="metallb-system/frr-k8s-qb9kf" Mar 13 13:05:24.996160 master-0 kubenswrapper[28149]: I0313 13:05:24.994704 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/98a87277-3308-4819-8cad-1f0c2d5d97e1-frr-conf\") pod 
\"frr-k8s-qb9kf\" (UID: \"98a87277-3308-4819-8cad-1f0c2d5d97e1\") " pod="metallb-system/frr-k8s-qb9kf" Mar 13 13:05:24.996160 master-0 kubenswrapper[28149]: I0313 13:05:24.995975 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/98a87277-3308-4819-8cad-1f0c2d5d97e1-frr-sockets\") pod \"frr-k8s-qb9kf\" (UID: \"98a87277-3308-4819-8cad-1f0c2d5d97e1\") " pod="metallb-system/frr-k8s-qb9kf" Mar 13 13:05:24.996936 master-0 kubenswrapper[28149]: I0313 13:05:24.996239 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/98a87277-3308-4819-8cad-1f0c2d5d97e1-frr-startup\") pod \"frr-k8s-qb9kf\" (UID: \"98a87277-3308-4819-8cad-1f0c2d5d97e1\") " pod="metallb-system/frr-k8s-qb9kf" Mar 13 13:05:24.996936 master-0 kubenswrapper[28149]: I0313 13:05:24.996298 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/98a87277-3308-4819-8cad-1f0c2d5d97e1-reloader\") pod \"frr-k8s-qb9kf\" (UID: \"98a87277-3308-4819-8cad-1f0c2d5d97e1\") " pod="metallb-system/frr-k8s-qb9kf" Mar 13 13:05:24.996936 master-0 kubenswrapper[28149]: I0313 13:05:24.996731 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/98a87277-3308-4819-8cad-1f0c2d5d97e1-metrics\") pod \"frr-k8s-qb9kf\" (UID: \"98a87277-3308-4819-8cad-1f0c2d5d97e1\") " pod="metallb-system/frr-k8s-qb9kf" Mar 13 13:05:25.003234 master-0 kubenswrapper[28149]: I0313 13:05:24.999608 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/98a87277-3308-4819-8cad-1f0c2d5d97e1-metrics-certs\") pod \"frr-k8s-qb9kf\" (UID: \"98a87277-3308-4819-8cad-1f0c2d5d97e1\") " pod="metallb-system/frr-k8s-qb9kf" Mar 13 13:05:25.003234 master-0 kubenswrapper[28149]: I0313 13:05:25.002845 28149 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/6ff627b0-a4b7-4741-9934-afc226d587b8-cert\") pod \"frr-k8s-webhook-server-bcc4b6f68-v5st7\" (UID: \"6ff627b0-a4b7-4741-9934-afc226d587b8\") " pod="metallb-system/frr-k8s-webhook-server-bcc4b6f68-v5st7" Mar 13 13:05:25.016117 master-0 kubenswrapper[28149]: I0313 13:05:25.016045 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tzr24\" (UniqueName: \"kubernetes.io/projected/98a87277-3308-4819-8cad-1f0c2d5d97e1-kube-api-access-tzr24\") pod \"frr-k8s-qb9kf\" (UID: \"98a87277-3308-4819-8cad-1f0c2d5d97e1\") " pod="metallb-system/frr-k8s-qb9kf" Mar 13 13:05:25.025161 master-0 kubenswrapper[28149]: I0313 13:05:25.017612 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m5xml\" (UniqueName: \"kubernetes.io/projected/6ff627b0-a4b7-4741-9934-afc226d587b8-kube-api-access-m5xml\") pod \"frr-k8s-webhook-server-bcc4b6f68-v5st7\" (UID: \"6ff627b0-a4b7-4741-9934-afc226d587b8\") " pod="metallb-system/frr-k8s-webhook-server-bcc4b6f68-v5st7" Mar 13 13:05:25.088226 master-0 kubenswrapper[28149]: I0313 13:05:25.085800 28149 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/speaker-psd47"] Mar 13 13:05:25.088226 master-0 kubenswrapper[28149]: I0313 13:05:25.087557 28149 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/speaker-psd47" Mar 13 13:05:25.090883 master-0 kubenswrapper[28149]: I0313 13:05:25.090613 28149 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-memberlist" Mar 13 13:05:25.102080 master-0 kubenswrapper[28149]: I0313 13:05:25.100558 28149 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-certs-secret" Mar 13 13:05:25.102080 master-0 kubenswrapper[28149]: I0313 13:05:25.100844 28149 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"metallb-excludel2" Mar 13 13:05:25.121818 master-0 kubenswrapper[28149]: I0313 13:05:25.121738 28149 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/controller-7bb4cc7c98-qlkcr"] Mar 13 13:05:25.124824 master-0 kubenswrapper[28149]: I0313 13:05:25.124757 28149 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/controller-7bb4cc7c98-qlkcr" Mar 13 13:05:25.132398 master-0 kubenswrapper[28149]: I0313 13:05:25.132338 28149 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-certs-secret" Mar 13 13:05:25.157314 master-0 kubenswrapper[28149]: I0313 13:05:25.157253 28149 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-7bb4cc7c98-qlkcr"] Mar 13 13:05:25.181029 master-0 kubenswrapper[28149]: I0313 13:05:25.180966 28149 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-bcc4b6f68-v5st7" Mar 13 13:05:25.185741 master-0 kubenswrapper[28149]: I0313 13:05:25.185693 28149 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/frr-k8s-qb9kf" Mar 13 13:05:25.203699 master-0 kubenswrapper[28149]: I0313 13:05:25.203626 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-krv2s\" (UniqueName: \"kubernetes.io/projected/b6261d07-1e69-4080-a219-f07f8d607f07-kube-api-access-krv2s\") pod \"controller-7bb4cc7c98-qlkcr\" (UID: \"b6261d07-1e69-4080-a219-f07f8d607f07\") " pod="metallb-system/controller-7bb4cc7c98-qlkcr" Mar 13 13:05:25.203964 master-0 kubenswrapper[28149]: I0313 13:05:25.203708 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/b6261d07-1e69-4080-a219-f07f8d607f07-metrics-certs\") pod \"controller-7bb4cc7c98-qlkcr\" (UID: \"b6261d07-1e69-4080-a219-f07f8d607f07\") " pod="metallb-system/controller-7bb4cc7c98-qlkcr" Mar 13 13:05:25.203964 master-0 kubenswrapper[28149]: I0313 13:05:25.203761 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/b6261d07-1e69-4080-a219-f07f8d607f07-cert\") pod \"controller-7bb4cc7c98-qlkcr\" (UID: \"b6261d07-1e69-4080-a219-f07f8d607f07\") " pod="metallb-system/controller-7bb4cc7c98-qlkcr" Mar 13 13:05:25.203964 master-0 kubenswrapper[28149]: I0313 13:05:25.203806 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/13c11fb6-9816-4c99-9acb-0cf5d8249219-memberlist\") pod \"speaker-psd47\" (UID: \"13c11fb6-9816-4c99-9acb-0cf5d8249219\") " pod="metallb-system/speaker-psd47" Mar 13 13:05:25.203964 master-0 kubenswrapper[28149]: I0313 13:05:25.203830 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f64v9\" (UniqueName: 
\"kubernetes.io/projected/13c11fb6-9816-4c99-9acb-0cf5d8249219-kube-api-access-f64v9\") pod \"speaker-psd47\" (UID: \"13c11fb6-9816-4c99-9acb-0cf5d8249219\") " pod="metallb-system/speaker-psd47" Mar 13 13:05:25.203964 master-0 kubenswrapper[28149]: I0313 13:05:25.203884 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/13c11fb6-9816-4c99-9acb-0cf5d8249219-metrics-certs\") pod \"speaker-psd47\" (UID: \"13c11fb6-9816-4c99-9acb-0cf5d8249219\") " pod="metallb-system/speaker-psd47" Mar 13 13:05:25.203964 master-0 kubenswrapper[28149]: I0313 13:05:25.203941 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/13c11fb6-9816-4c99-9acb-0cf5d8249219-metallb-excludel2\") pod \"speaker-psd47\" (UID: \"13c11fb6-9816-4c99-9acb-0cf5d8249219\") " pod="metallb-system/speaker-psd47" Mar 13 13:05:25.314316 master-0 kubenswrapper[28149]: I0313 13:05:25.312931 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/13c11fb6-9816-4c99-9acb-0cf5d8249219-metallb-excludel2\") pod \"speaker-psd47\" (UID: \"13c11fb6-9816-4c99-9acb-0cf5d8249219\") " pod="metallb-system/speaker-psd47" Mar 13 13:05:25.314316 master-0 kubenswrapper[28149]: I0313 13:05:25.313131 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-krv2s\" (UniqueName: \"kubernetes.io/projected/b6261d07-1e69-4080-a219-f07f8d607f07-kube-api-access-krv2s\") pod \"controller-7bb4cc7c98-qlkcr\" (UID: \"b6261d07-1e69-4080-a219-f07f8d607f07\") " pod="metallb-system/controller-7bb4cc7c98-qlkcr" Mar 13 13:05:25.314316 master-0 kubenswrapper[28149]: I0313 13:05:25.313185 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: 
\"kubernetes.io/secret/b6261d07-1e69-4080-a219-f07f8d607f07-metrics-certs\") pod \"controller-7bb4cc7c98-qlkcr\" (UID: \"b6261d07-1e69-4080-a219-f07f8d607f07\") " pod="metallb-system/controller-7bb4cc7c98-qlkcr" Mar 13 13:05:25.314316 master-0 kubenswrapper[28149]: I0313 13:05:25.313424 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/b6261d07-1e69-4080-a219-f07f8d607f07-cert\") pod \"controller-7bb4cc7c98-qlkcr\" (UID: \"b6261d07-1e69-4080-a219-f07f8d607f07\") " pod="metallb-system/controller-7bb4cc7c98-qlkcr" Mar 13 13:05:25.314316 master-0 kubenswrapper[28149]: I0313 13:05:25.313545 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/13c11fb6-9816-4c99-9acb-0cf5d8249219-memberlist\") pod \"speaker-psd47\" (UID: \"13c11fb6-9816-4c99-9acb-0cf5d8249219\") " pod="metallb-system/speaker-psd47" Mar 13 13:05:25.314316 master-0 kubenswrapper[28149]: I0313 13:05:25.313658 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f64v9\" (UniqueName: \"kubernetes.io/projected/13c11fb6-9816-4c99-9acb-0cf5d8249219-kube-api-access-f64v9\") pod \"speaker-psd47\" (UID: \"13c11fb6-9816-4c99-9acb-0cf5d8249219\") " pod="metallb-system/speaker-psd47" Mar 13 13:05:25.314316 master-0 kubenswrapper[28149]: I0313 13:05:25.313788 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/13c11fb6-9816-4c99-9acb-0cf5d8249219-metrics-certs\") pod \"speaker-psd47\" (UID: \"13c11fb6-9816-4c99-9acb-0cf5d8249219\") " pod="metallb-system/speaker-psd47" Mar 13 13:05:25.314316 master-0 kubenswrapper[28149]: I0313 13:05:25.314042 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/13c11fb6-9816-4c99-9acb-0cf5d8249219-metallb-excludel2\") pod 
\"speaker-psd47\" (UID: \"13c11fb6-9816-4c99-9acb-0cf5d8249219\") " pod="metallb-system/speaker-psd47" Mar 13 13:05:25.314316 master-0 kubenswrapper[28149]: E0313 13:05:25.314253 28149 secret.go:189] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Mar 13 13:05:25.314316 master-0 kubenswrapper[28149]: E0313 13:05:25.314325 28149 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/13c11fb6-9816-4c99-9acb-0cf5d8249219-memberlist podName:13c11fb6-9816-4c99-9acb-0cf5d8249219 nodeName:}" failed. No retries permitted until 2026-03-13 13:05:25.814298856 +0000 UTC m=+699.467764005 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/13c11fb6-9816-4c99-9acb-0cf5d8249219-memberlist") pod "speaker-psd47" (UID: "13c11fb6-9816-4c99-9acb-0cf5d8249219") : secret "metallb-memberlist" not found Mar 13 13:05:25.314923 master-0 kubenswrapper[28149]: E0313 13:05:25.314338 28149 secret.go:189] Couldn't get secret metallb-system/speaker-certs-secret: secret "speaker-certs-secret" not found Mar 13 13:05:25.314923 master-0 kubenswrapper[28149]: E0313 13:05:25.314409 28149 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/13c11fb6-9816-4c99-9acb-0cf5d8249219-metrics-certs podName:13c11fb6-9816-4c99-9acb-0cf5d8249219 nodeName:}" failed. No retries permitted until 2026-03-13 13:05:25.814385338 +0000 UTC m=+699.467850557 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/13c11fb6-9816-4c99-9acb-0cf5d8249219-metrics-certs") pod "speaker-psd47" (UID: "13c11fb6-9816-4c99-9acb-0cf5d8249219") : secret "speaker-certs-secret" not found Mar 13 13:05:25.317070 master-0 kubenswrapper[28149]: I0313 13:05:25.317038 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/b6261d07-1e69-4080-a219-f07f8d607f07-metrics-certs\") pod \"controller-7bb4cc7c98-qlkcr\" (UID: \"b6261d07-1e69-4080-a219-f07f8d607f07\") " pod="metallb-system/controller-7bb4cc7c98-qlkcr" Mar 13 13:05:25.318099 master-0 kubenswrapper[28149]: I0313 13:05:25.318077 28149 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert" Mar 13 13:05:25.337111 master-0 kubenswrapper[28149]: I0313 13:05:25.335970 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/b6261d07-1e69-4080-a219-f07f8d607f07-cert\") pod \"controller-7bb4cc7c98-qlkcr\" (UID: \"b6261d07-1e69-4080-a219-f07f8d607f07\") " pod="metallb-system/controller-7bb4cc7c98-qlkcr" Mar 13 13:05:25.347491 master-0 kubenswrapper[28149]: I0313 13:05:25.347356 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-krv2s\" (UniqueName: \"kubernetes.io/projected/b6261d07-1e69-4080-a219-f07f8d607f07-kube-api-access-krv2s\") pod \"controller-7bb4cc7c98-qlkcr\" (UID: \"b6261d07-1e69-4080-a219-f07f8d607f07\") " pod="metallb-system/controller-7bb4cc7c98-qlkcr" Mar 13 13:05:25.347907 master-0 kubenswrapper[28149]: I0313 13:05:25.347879 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f64v9\" (UniqueName: \"kubernetes.io/projected/13c11fb6-9816-4c99-9acb-0cf5d8249219-kube-api-access-f64v9\") pod \"speaker-psd47\" (UID: \"13c11fb6-9816-4c99-9acb-0cf5d8249219\") " 
pod="metallb-system/speaker-psd47" Mar 13 13:05:25.451672 master-0 kubenswrapper[28149]: I0313 13:05:25.451613 28149 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/controller-7bb4cc7c98-qlkcr" Mar 13 13:05:25.565701 master-0 kubenswrapper[28149]: I0313 13:05:25.565645 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-qb9kf" event={"ID":"98a87277-3308-4819-8cad-1f0c2d5d97e1","Type":"ContainerStarted","Data":"25fe59d181101f0c013c4ec446ec413b3e7faaf99f9c6cea9a0681dcc97360b4"} Mar 13 13:05:25.648397 master-0 kubenswrapper[28149]: W0313 13:05:25.648343 28149 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6ff627b0_a4b7_4741_9934_afc226d587b8.slice/crio-1befab54958c59709db61c82bbd742d466d6ec2405fd140239b03aa7c11f392e WatchSource:0}: Error finding container 1befab54958c59709db61c82bbd742d466d6ec2405fd140239b03aa7c11f392e: Status 404 returned error can't find the container with id 1befab54958c59709db61c82bbd742d466d6ec2405fd140239b03aa7c11f392e Mar 13 13:05:25.648547 master-0 kubenswrapper[28149]: I0313 13:05:25.648406 28149 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-bcc4b6f68-v5st7"] Mar 13 13:05:25.824821 master-0 kubenswrapper[28149]: I0313 13:05:25.824776 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/13c11fb6-9816-4c99-9acb-0cf5d8249219-memberlist\") pod \"speaker-psd47\" (UID: \"13c11fb6-9816-4c99-9acb-0cf5d8249219\") " pod="metallb-system/speaker-psd47" Mar 13 13:05:25.825449 master-0 kubenswrapper[28149]: I0313 13:05:25.825426 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/13c11fb6-9816-4c99-9acb-0cf5d8249219-metrics-certs\") pod \"speaker-psd47\" (UID: 
\"13c11fb6-9816-4c99-9acb-0cf5d8249219\") " pod="metallb-system/speaker-psd47" Mar 13 13:05:25.825605 master-0 kubenswrapper[28149]: E0313 13:05:25.824987 28149 secret.go:189] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Mar 13 13:05:25.825753 master-0 kubenswrapper[28149]: E0313 13:05:25.825738 28149 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/13c11fb6-9816-4c99-9acb-0cf5d8249219-memberlist podName:13c11fb6-9816-4c99-9acb-0cf5d8249219 nodeName:}" failed. No retries permitted until 2026-03-13 13:05:26.825714834 +0000 UTC m=+700.479179993 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/13c11fb6-9816-4c99-9acb-0cf5d8249219-memberlist") pod "speaker-psd47" (UID: "13c11fb6-9816-4c99-9acb-0cf5d8249219") : secret "metallb-memberlist" not found Mar 13 13:05:25.828426 master-0 kubenswrapper[28149]: I0313 13:05:25.828401 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/13c11fb6-9816-4c99-9acb-0cf5d8249219-metrics-certs\") pod \"speaker-psd47\" (UID: \"13c11fb6-9816-4c99-9acb-0cf5d8249219\") " pod="metallb-system/speaker-psd47" Mar 13 13:05:25.899163 master-0 kubenswrapper[28149]: I0313 13:05:25.899077 28149 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-7bb4cc7c98-qlkcr"] Mar 13 13:05:26.576288 master-0 kubenswrapper[28149]: I0313 13:05:26.576227 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-7bb4cc7c98-qlkcr" event={"ID":"b6261d07-1e69-4080-a219-f07f8d607f07","Type":"ContainerStarted","Data":"2b2e5567a3d05c83d99904d8fa71ef5159945662fd0a51bdbb0f24b8acc3285a"} Mar 13 13:05:26.576288 master-0 kubenswrapper[28149]: I0313 13:05:26.576276 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-7bb4cc7c98-qlkcr" 
event={"ID":"b6261d07-1e69-4080-a219-f07f8d607f07","Type":"ContainerStarted","Data":"4f10c777a80b709c69560b61950fe242b4fbb4e4561c4cdbaf457fc30f457288"} Mar 13 13:05:26.578516 master-0 kubenswrapper[28149]: I0313 13:05:26.578066 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-bcc4b6f68-v5st7" event={"ID":"6ff627b0-a4b7-4741-9934-afc226d587b8","Type":"ContainerStarted","Data":"1befab54958c59709db61c82bbd742d466d6ec2405fd140239b03aa7c11f392e"} Mar 13 13:05:26.845756 master-0 kubenswrapper[28149]: I0313 13:05:26.845651 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/13c11fb6-9816-4c99-9acb-0cf5d8249219-memberlist\") pod \"speaker-psd47\" (UID: \"13c11fb6-9816-4c99-9acb-0cf5d8249219\") " pod="metallb-system/speaker-psd47" Mar 13 13:05:26.848790 master-0 kubenswrapper[28149]: I0313 13:05:26.848744 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/13c11fb6-9816-4c99-9acb-0cf5d8249219-memberlist\") pod \"speaker-psd47\" (UID: \"13c11fb6-9816-4c99-9acb-0cf5d8249219\") " pod="metallb-system/speaker-psd47" Mar 13 13:05:26.933748 master-0 kubenswrapper[28149]: I0313 13:05:26.933696 28149 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/speaker-psd47" Mar 13 13:05:26.986973 master-0 kubenswrapper[28149]: I0313 13:05:26.986521 28149 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-webhook-5f558f5558-td5jm"] Mar 13 13:05:26.991208 master-0 kubenswrapper[28149]: I0313 13:05:26.988222 28149 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-webhook-5f558f5558-td5jm" Mar 13 13:05:26.991806 master-0 kubenswrapper[28149]: I0313 13:05:26.991777 28149 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"openshift-nmstate-webhook" Mar 13 13:05:27.046096 master-0 kubenswrapper[28149]: W0313 13:05:27.045829 28149 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod13c11fb6_9816_4c99_9acb_0cf5d8249219.slice/crio-2edbb646a7aba1bcd6e3d6955e999e67ae7da26b9e97646133985a92f26e6717 WatchSource:0}: Error finding container 2edbb646a7aba1bcd6e3d6955e999e67ae7da26b9e97646133985a92f26e6717: Status 404 returned error can't find the container with id 2edbb646a7aba1bcd6e3d6955e999e67ae7da26b9e97646133985a92f26e6717 Mar 13 13:05:27.049231 master-0 kubenswrapper[28149]: I0313 13:05:27.049127 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/777e9dfe-3c9f-42e0-a52f-46921c2b2035-tls-key-pair\") pod \"nmstate-webhook-5f558f5558-td5jm\" (UID: \"777e9dfe-3c9f-42e0-a52f-46921c2b2035\") " pod="openshift-nmstate/nmstate-webhook-5f558f5558-td5jm" Mar 13 13:05:27.049512 master-0 kubenswrapper[28149]: I0313 13:05:27.049476 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r72n4\" (UniqueName: \"kubernetes.io/projected/777e9dfe-3c9f-42e0-a52f-46921c2b2035-kube-api-access-r72n4\") pod \"nmstate-webhook-5f558f5558-td5jm\" (UID: \"777e9dfe-3c9f-42e0-a52f-46921c2b2035\") " pod="openshift-nmstate/nmstate-webhook-5f558f5558-td5jm" Mar 13 13:05:27.050765 master-0 kubenswrapper[28149]: I0313 13:05:27.050699 28149 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-metrics-9b8c8685d-phpcs"] Mar 13 13:05:27.052918 master-0 kubenswrapper[28149]: I0313 13:05:27.052887 28149 util.go:30] "No 
sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-9b8c8685d-phpcs" Mar 13 13:05:27.083615 master-0 kubenswrapper[28149]: I0313 13:05:27.083557 28149 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-handler-mgslk"] Mar 13 13:05:27.085021 master-0 kubenswrapper[28149]: I0313 13:05:27.084995 28149 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-handler-mgslk" Mar 13 13:05:27.097494 master-0 kubenswrapper[28149]: I0313 13:05:27.097168 28149 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-5f558f5558-td5jm"] Mar 13 13:05:27.130310 master-0 kubenswrapper[28149]: I0313 13:05:27.130261 28149 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-9b8c8685d-phpcs"] Mar 13 13:05:27.153043 master-0 kubenswrapper[28149]: I0313 13:05:27.151653 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/fbe355d7-e23b-4062-b0c1-eee76f83db50-ovs-socket\") pod \"nmstate-handler-mgslk\" (UID: \"fbe355d7-e23b-4062-b0c1-eee76f83db50\") " pod="openshift-nmstate/nmstate-handler-mgslk" Mar 13 13:05:27.153043 master-0 kubenswrapper[28149]: I0313 13:05:27.151706 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r72n4\" (UniqueName: \"kubernetes.io/projected/777e9dfe-3c9f-42e0-a52f-46921c2b2035-kube-api-access-r72n4\") pod \"nmstate-webhook-5f558f5558-td5jm\" (UID: \"777e9dfe-3c9f-42e0-a52f-46921c2b2035\") " pod="openshift-nmstate/nmstate-webhook-5f558f5558-td5jm" Mar 13 13:05:27.153043 master-0 kubenswrapper[28149]: I0313 13:05:27.151776 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zc67t\" (UniqueName: 
\"kubernetes.io/projected/fbe355d7-e23b-4062-b0c1-eee76f83db50-kube-api-access-zc67t\") pod \"nmstate-handler-mgslk\" (UID: \"fbe355d7-e23b-4062-b0c1-eee76f83db50\") " pod="openshift-nmstate/nmstate-handler-mgslk" Mar 13 13:05:27.153043 master-0 kubenswrapper[28149]: I0313 13:05:27.151829 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/777e9dfe-3c9f-42e0-a52f-46921c2b2035-tls-key-pair\") pod \"nmstate-webhook-5f558f5558-td5jm\" (UID: \"777e9dfe-3c9f-42e0-a52f-46921c2b2035\") " pod="openshift-nmstate/nmstate-webhook-5f558f5558-td5jm" Mar 13 13:05:27.153043 master-0 kubenswrapper[28149]: I0313 13:05:27.151874 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/fbe355d7-e23b-4062-b0c1-eee76f83db50-nmstate-lock\") pod \"nmstate-handler-mgslk\" (UID: \"fbe355d7-e23b-4062-b0c1-eee76f83db50\") " pod="openshift-nmstate/nmstate-handler-mgslk" Mar 13 13:05:27.153043 master-0 kubenswrapper[28149]: I0313 13:05:27.151903 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g5wcq\" (UniqueName: \"kubernetes.io/projected/923ffd1d-4494-4d25-bc3c-fdfa84a296e5-kube-api-access-g5wcq\") pod \"nmstate-metrics-9b8c8685d-phpcs\" (UID: \"923ffd1d-4494-4d25-bc3c-fdfa84a296e5\") " pod="openshift-nmstate/nmstate-metrics-9b8c8685d-phpcs" Mar 13 13:05:27.153043 master-0 kubenswrapper[28149]: I0313 13:05:27.151950 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/fbe355d7-e23b-4062-b0c1-eee76f83db50-dbus-socket\") pod \"nmstate-handler-mgslk\" (UID: \"fbe355d7-e23b-4062-b0c1-eee76f83db50\") " pod="openshift-nmstate/nmstate-handler-mgslk" Mar 13 13:05:27.163389 master-0 kubenswrapper[28149]: I0313 13:05:27.163217 28149 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/777e9dfe-3c9f-42e0-a52f-46921c2b2035-tls-key-pair\") pod \"nmstate-webhook-5f558f5558-td5jm\" (UID: \"777e9dfe-3c9f-42e0-a52f-46921c2b2035\") " pod="openshift-nmstate/nmstate-webhook-5f558f5558-td5jm" Mar 13 13:05:27.180007 master-0 kubenswrapper[28149]: I0313 13:05:27.179947 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r72n4\" (UniqueName: \"kubernetes.io/projected/777e9dfe-3c9f-42e0-a52f-46921c2b2035-kube-api-access-r72n4\") pod \"nmstate-webhook-5f558f5558-td5jm\" (UID: \"777e9dfe-3c9f-42e0-a52f-46921c2b2035\") " pod="openshift-nmstate/nmstate-webhook-5f558f5558-td5jm" Mar 13 13:05:27.212474 master-0 kubenswrapper[28149]: I0313 13:05:27.210204 28149 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-console-plugin-86f58fcf4-cxjnh"] Mar 13 13:05:27.218209 master-0 kubenswrapper[28149]: I0313 13:05:27.215205 28149 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-86f58fcf4-cxjnh" Mar 13 13:05:27.218209 master-0 kubenswrapper[28149]: I0313 13:05:27.218015 28149 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"plugin-serving-cert" Mar 13 13:05:27.219318 master-0 kubenswrapper[28149]: I0313 13:05:27.218226 28149 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"nginx-conf" Mar 13 13:05:27.219318 master-0 kubenswrapper[28149]: I0313 13:05:27.218932 28149 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-86f58fcf4-cxjnh"] Mar 13 13:05:27.264334 master-0 kubenswrapper[28149]: I0313 13:05:27.253461 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4wn7x\" (UniqueName: \"kubernetes.io/projected/1af7d029-c58d-4b3e-99e4-841ac6e7ca3b-kube-api-access-4wn7x\") pod \"nmstate-console-plugin-86f58fcf4-cxjnh\" (UID: \"1af7d029-c58d-4b3e-99e4-841ac6e7ca3b\") " pod="openshift-nmstate/nmstate-console-plugin-86f58fcf4-cxjnh" Mar 13 13:05:27.264334 master-0 kubenswrapper[28149]: I0313 13:05:27.253593 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/fbe355d7-e23b-4062-b0c1-eee76f83db50-ovs-socket\") pod \"nmstate-handler-mgslk\" (UID: \"fbe355d7-e23b-4062-b0c1-eee76f83db50\") " pod="openshift-nmstate/nmstate-handler-mgslk" Mar 13 13:05:27.264334 master-0 kubenswrapper[28149]: I0313 13:05:27.253702 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/fbe355d7-e23b-4062-b0c1-eee76f83db50-ovs-socket\") pod \"nmstate-handler-mgslk\" (UID: \"fbe355d7-e23b-4062-b0c1-eee76f83db50\") " pod="openshift-nmstate/nmstate-handler-mgslk" Mar 13 13:05:27.264334 master-0 kubenswrapper[28149]: I0313 13:05:27.253764 28149 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-zc67t\" (UniqueName: \"kubernetes.io/projected/fbe355d7-e23b-4062-b0c1-eee76f83db50-kube-api-access-zc67t\") pod \"nmstate-handler-mgslk\" (UID: \"fbe355d7-e23b-4062-b0c1-eee76f83db50\") " pod="openshift-nmstate/nmstate-handler-mgslk" Mar 13 13:05:27.264334 master-0 kubenswrapper[28149]: I0313 13:05:27.253825 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/fbe355d7-e23b-4062-b0c1-eee76f83db50-nmstate-lock\") pod \"nmstate-handler-mgslk\" (UID: \"fbe355d7-e23b-4062-b0c1-eee76f83db50\") " pod="openshift-nmstate/nmstate-handler-mgslk" Mar 13 13:05:27.264334 master-0 kubenswrapper[28149]: I0313 13:05:27.253867 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g5wcq\" (UniqueName: \"kubernetes.io/projected/923ffd1d-4494-4d25-bc3c-fdfa84a296e5-kube-api-access-g5wcq\") pod \"nmstate-metrics-9b8c8685d-phpcs\" (UID: \"923ffd1d-4494-4d25-bc3c-fdfa84a296e5\") " pod="openshift-nmstate/nmstate-metrics-9b8c8685d-phpcs" Mar 13 13:05:27.264334 master-0 kubenswrapper[28149]: I0313 13:05:27.253903 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/1af7d029-c58d-4b3e-99e4-841ac6e7ca3b-plugin-serving-cert\") pod \"nmstate-console-plugin-86f58fcf4-cxjnh\" (UID: \"1af7d029-c58d-4b3e-99e4-841ac6e7ca3b\") " pod="openshift-nmstate/nmstate-console-plugin-86f58fcf4-cxjnh" Mar 13 13:05:27.264334 master-0 kubenswrapper[28149]: I0313 13:05:27.253952 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/fbe355d7-e23b-4062-b0c1-eee76f83db50-dbus-socket\") pod \"nmstate-handler-mgslk\" (UID: \"fbe355d7-e23b-4062-b0c1-eee76f83db50\") " pod="openshift-nmstate/nmstate-handler-mgslk" Mar 13 13:05:27.264334 
master-0 kubenswrapper[28149]: I0313 13:05:27.254009 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/1af7d029-c58d-4b3e-99e4-841ac6e7ca3b-nginx-conf\") pod \"nmstate-console-plugin-86f58fcf4-cxjnh\" (UID: \"1af7d029-c58d-4b3e-99e4-841ac6e7ca3b\") " pod="openshift-nmstate/nmstate-console-plugin-86f58fcf4-cxjnh" Mar 13 13:05:27.264334 master-0 kubenswrapper[28149]: I0313 13:05:27.254302 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/fbe355d7-e23b-4062-b0c1-eee76f83db50-nmstate-lock\") pod \"nmstate-handler-mgslk\" (UID: \"fbe355d7-e23b-4062-b0c1-eee76f83db50\") " pod="openshift-nmstate/nmstate-handler-mgslk" Mar 13 13:05:27.264334 master-0 kubenswrapper[28149]: I0313 13:05:27.254450 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/fbe355d7-e23b-4062-b0c1-eee76f83db50-dbus-socket\") pod \"nmstate-handler-mgslk\" (UID: \"fbe355d7-e23b-4062-b0c1-eee76f83db50\") " pod="openshift-nmstate/nmstate-handler-mgslk" Mar 13 13:05:27.280197 master-0 kubenswrapper[28149]: I0313 13:05:27.272603 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zc67t\" (UniqueName: \"kubernetes.io/projected/fbe355d7-e23b-4062-b0c1-eee76f83db50-kube-api-access-zc67t\") pod \"nmstate-handler-mgslk\" (UID: \"fbe355d7-e23b-4062-b0c1-eee76f83db50\") " pod="openshift-nmstate/nmstate-handler-mgslk" Mar 13 13:05:27.280197 master-0 kubenswrapper[28149]: I0313 13:05:27.273947 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g5wcq\" (UniqueName: \"kubernetes.io/projected/923ffd1d-4494-4d25-bc3c-fdfa84a296e5-kube-api-access-g5wcq\") pod \"nmstate-metrics-9b8c8685d-phpcs\" (UID: \"923ffd1d-4494-4d25-bc3c-fdfa84a296e5\") " 
pod="openshift-nmstate/nmstate-metrics-9b8c8685d-phpcs" Mar 13 13:05:27.368239 master-0 kubenswrapper[28149]: I0313 13:05:27.355687 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/1af7d029-c58d-4b3e-99e4-841ac6e7ca3b-plugin-serving-cert\") pod \"nmstate-console-plugin-86f58fcf4-cxjnh\" (UID: \"1af7d029-c58d-4b3e-99e4-841ac6e7ca3b\") " pod="openshift-nmstate/nmstate-console-plugin-86f58fcf4-cxjnh" Mar 13 13:05:27.368239 master-0 kubenswrapper[28149]: I0313 13:05:27.360090 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/1af7d029-c58d-4b3e-99e4-841ac6e7ca3b-nginx-conf\") pod \"nmstate-console-plugin-86f58fcf4-cxjnh\" (UID: \"1af7d029-c58d-4b3e-99e4-841ac6e7ca3b\") " pod="openshift-nmstate/nmstate-console-plugin-86f58fcf4-cxjnh" Mar 13 13:05:27.368239 master-0 kubenswrapper[28149]: I0313 13:05:27.360238 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4wn7x\" (UniqueName: \"kubernetes.io/projected/1af7d029-c58d-4b3e-99e4-841ac6e7ca3b-kube-api-access-4wn7x\") pod \"nmstate-console-plugin-86f58fcf4-cxjnh\" (UID: \"1af7d029-c58d-4b3e-99e4-841ac6e7ca3b\") " pod="openshift-nmstate/nmstate-console-plugin-86f58fcf4-cxjnh" Mar 13 13:05:27.368239 master-0 kubenswrapper[28149]: I0313 13:05:27.361937 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/1af7d029-c58d-4b3e-99e4-841ac6e7ca3b-plugin-serving-cert\") pod \"nmstate-console-plugin-86f58fcf4-cxjnh\" (UID: \"1af7d029-c58d-4b3e-99e4-841ac6e7ca3b\") " pod="openshift-nmstate/nmstate-console-plugin-86f58fcf4-cxjnh" Mar 13 13:05:27.368239 master-0 kubenswrapper[28149]: I0313 13:05:27.363982 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: 
\"kubernetes.io/configmap/1af7d029-c58d-4b3e-99e4-841ac6e7ca3b-nginx-conf\") pod \"nmstate-console-plugin-86f58fcf4-cxjnh\" (UID: \"1af7d029-c58d-4b3e-99e4-841ac6e7ca3b\") " pod="openshift-nmstate/nmstate-console-plugin-86f58fcf4-cxjnh" Mar 13 13:05:27.400170 master-0 kubenswrapper[28149]: I0313 13:05:27.393687 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4wn7x\" (UniqueName: \"kubernetes.io/projected/1af7d029-c58d-4b3e-99e4-841ac6e7ca3b-kube-api-access-4wn7x\") pod \"nmstate-console-plugin-86f58fcf4-cxjnh\" (UID: \"1af7d029-c58d-4b3e-99e4-841ac6e7ca3b\") " pod="openshift-nmstate/nmstate-console-plugin-86f58fcf4-cxjnh" Mar 13 13:05:27.410898 master-0 kubenswrapper[28149]: I0313 13:05:27.410819 28149 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-95f5bc659-s4pq8"] Mar 13 13:05:27.412422 master-0 kubenswrapper[28149]: I0313 13:05:27.412369 28149 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-95f5bc659-s4pq8" Mar 13 13:05:27.437909 master-0 kubenswrapper[28149]: I0313 13:05:27.431466 28149 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-95f5bc659-s4pq8"] Mar 13 13:05:27.437909 master-0 kubenswrapper[28149]: I0313 13:05:27.433693 28149 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-webhook-5f558f5558-td5jm" Mar 13 13:05:27.464321 master-0 kubenswrapper[28149]: I0313 13:05:27.463521 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/80c35670-5d39-4184-8592-b784d21d10a3-console-serving-cert\") pod \"console-95f5bc659-s4pq8\" (UID: \"80c35670-5d39-4184-8592-b784d21d10a3\") " pod="openshift-console/console-95f5bc659-s4pq8" Mar 13 13:05:27.464321 master-0 kubenswrapper[28149]: I0313 13:05:27.463581 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/80c35670-5d39-4184-8592-b784d21d10a3-console-oauth-config\") pod \"console-95f5bc659-s4pq8\" (UID: \"80c35670-5d39-4184-8592-b784d21d10a3\") " pod="openshift-console/console-95f5bc659-s4pq8" Mar 13 13:05:27.464321 master-0 kubenswrapper[28149]: I0313 13:05:27.463672 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-77hz2\" (UniqueName: \"kubernetes.io/projected/80c35670-5d39-4184-8592-b784d21d10a3-kube-api-access-77hz2\") pod \"console-95f5bc659-s4pq8\" (UID: \"80c35670-5d39-4184-8592-b784d21d10a3\") " pod="openshift-console/console-95f5bc659-s4pq8" Mar 13 13:05:27.464321 master-0 kubenswrapper[28149]: I0313 13:05:27.463709 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/80c35670-5d39-4184-8592-b784d21d10a3-trusted-ca-bundle\") pod \"console-95f5bc659-s4pq8\" (UID: \"80c35670-5d39-4184-8592-b784d21d10a3\") " pod="openshift-console/console-95f5bc659-s4pq8" Mar 13 13:05:27.464321 master-0 kubenswrapper[28149]: I0313 13:05:27.463743 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" 
(UniqueName: \"kubernetes.io/configmap/80c35670-5d39-4184-8592-b784d21d10a3-service-ca\") pod \"console-95f5bc659-s4pq8\" (UID: \"80c35670-5d39-4184-8592-b784d21d10a3\") " pod="openshift-console/console-95f5bc659-s4pq8" Mar 13 13:05:27.464321 master-0 kubenswrapper[28149]: I0313 13:05:27.463998 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/80c35670-5d39-4184-8592-b784d21d10a3-console-config\") pod \"console-95f5bc659-s4pq8\" (UID: \"80c35670-5d39-4184-8592-b784d21d10a3\") " pod="openshift-console/console-95f5bc659-s4pq8" Mar 13 13:05:27.464321 master-0 kubenswrapper[28149]: I0313 13:05:27.464168 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/80c35670-5d39-4184-8592-b784d21d10a3-oauth-serving-cert\") pod \"console-95f5bc659-s4pq8\" (UID: \"80c35670-5d39-4184-8592-b784d21d10a3\") " pod="openshift-console/console-95f5bc659-s4pq8" Mar 13 13:05:27.505121 master-0 kubenswrapper[28149]: I0313 13:05:27.503575 28149 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-9b8c8685d-phpcs" Mar 13 13:05:27.519185 master-0 kubenswrapper[28149]: I0313 13:05:27.518070 28149 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-handler-mgslk" Mar 13 13:05:27.553122 master-0 kubenswrapper[28149]: I0313 13:05:27.553051 28149 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-86f58fcf4-cxjnh" Mar 13 13:05:27.575650 master-0 kubenswrapper[28149]: I0313 13:05:27.575103 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-77hz2\" (UniqueName: \"kubernetes.io/projected/80c35670-5d39-4184-8592-b784d21d10a3-kube-api-access-77hz2\") pod \"console-95f5bc659-s4pq8\" (UID: \"80c35670-5d39-4184-8592-b784d21d10a3\") " pod="openshift-console/console-95f5bc659-s4pq8" Mar 13 13:05:27.575650 master-0 kubenswrapper[28149]: I0313 13:05:27.575214 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/80c35670-5d39-4184-8592-b784d21d10a3-trusted-ca-bundle\") pod \"console-95f5bc659-s4pq8\" (UID: \"80c35670-5d39-4184-8592-b784d21d10a3\") " pod="openshift-console/console-95f5bc659-s4pq8" Mar 13 13:05:27.575650 master-0 kubenswrapper[28149]: I0313 13:05:27.575258 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/80c35670-5d39-4184-8592-b784d21d10a3-service-ca\") pod \"console-95f5bc659-s4pq8\" (UID: \"80c35670-5d39-4184-8592-b784d21d10a3\") " pod="openshift-console/console-95f5bc659-s4pq8" Mar 13 13:05:27.575650 master-0 kubenswrapper[28149]: I0313 13:05:27.575344 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/80c35670-5d39-4184-8592-b784d21d10a3-console-config\") pod \"console-95f5bc659-s4pq8\" (UID: \"80c35670-5d39-4184-8592-b784d21d10a3\") " pod="openshift-console/console-95f5bc659-s4pq8" Mar 13 13:05:27.575650 master-0 kubenswrapper[28149]: I0313 13:05:27.575414 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/80c35670-5d39-4184-8592-b784d21d10a3-oauth-serving-cert\") pod 
\"console-95f5bc659-s4pq8\" (UID: \"80c35670-5d39-4184-8592-b784d21d10a3\") " pod="openshift-console/console-95f5bc659-s4pq8" Mar 13 13:05:27.575650 master-0 kubenswrapper[28149]: I0313 13:05:27.575469 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/80c35670-5d39-4184-8592-b784d21d10a3-console-serving-cert\") pod \"console-95f5bc659-s4pq8\" (UID: \"80c35670-5d39-4184-8592-b784d21d10a3\") " pod="openshift-console/console-95f5bc659-s4pq8" Mar 13 13:05:27.575650 master-0 kubenswrapper[28149]: I0313 13:05:27.575504 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/80c35670-5d39-4184-8592-b784d21d10a3-console-oauth-config\") pod \"console-95f5bc659-s4pq8\" (UID: \"80c35670-5d39-4184-8592-b784d21d10a3\") " pod="openshift-console/console-95f5bc659-s4pq8" Mar 13 13:05:27.580078 master-0 kubenswrapper[28149]: I0313 13:05:27.578346 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/80c35670-5d39-4184-8592-b784d21d10a3-oauth-serving-cert\") pod \"console-95f5bc659-s4pq8\" (UID: \"80c35670-5d39-4184-8592-b784d21d10a3\") " pod="openshift-console/console-95f5bc659-s4pq8" Mar 13 13:05:27.580078 master-0 kubenswrapper[28149]: I0313 13:05:27.579285 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/80c35670-5d39-4184-8592-b784d21d10a3-trusted-ca-bundle\") pod \"console-95f5bc659-s4pq8\" (UID: \"80c35670-5d39-4184-8592-b784d21d10a3\") " pod="openshift-console/console-95f5bc659-s4pq8" Mar 13 13:05:27.580078 master-0 kubenswrapper[28149]: I0313 13:05:27.579321 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: 
\"kubernetes.io/configmap/80c35670-5d39-4184-8592-b784d21d10a3-console-config\") pod \"console-95f5bc659-s4pq8\" (UID: \"80c35670-5d39-4184-8592-b784d21d10a3\") " pod="openshift-console/console-95f5bc659-s4pq8" Mar 13 13:05:27.582353 master-0 kubenswrapper[28149]: I0313 13:05:27.582244 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/80c35670-5d39-4184-8592-b784d21d10a3-console-serving-cert\") pod \"console-95f5bc659-s4pq8\" (UID: \"80c35670-5d39-4184-8592-b784d21d10a3\") " pod="openshift-console/console-95f5bc659-s4pq8" Mar 13 13:05:27.588706 master-0 kubenswrapper[28149]: I0313 13:05:27.588662 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/80c35670-5d39-4184-8592-b784d21d10a3-service-ca\") pod \"console-95f5bc659-s4pq8\" (UID: \"80c35670-5d39-4184-8592-b784d21d10a3\") " pod="openshift-console/console-95f5bc659-s4pq8" Mar 13 13:05:27.614434 master-0 kubenswrapper[28149]: I0313 13:05:27.611708 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/80c35670-5d39-4184-8592-b784d21d10a3-console-oauth-config\") pod \"console-95f5bc659-s4pq8\" (UID: \"80c35670-5d39-4184-8592-b784d21d10a3\") " pod="openshift-console/console-95f5bc659-s4pq8" Mar 13 13:05:27.624450 master-0 kubenswrapper[28149]: I0313 13:05:27.622361 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-psd47" event={"ID":"13c11fb6-9816-4c99-9acb-0cf5d8249219","Type":"ContainerStarted","Data":"c8424833487ec7f3ab12a1c50da49c1a55cd731765a81d25b76dbbd6e7c62b76"} Mar 13 13:05:27.624450 master-0 kubenswrapper[28149]: I0313 13:05:27.622408 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-psd47" 
event={"ID":"13c11fb6-9816-4c99-9acb-0cf5d8249219","Type":"ContainerStarted","Data":"2edbb646a7aba1bcd6e3d6955e999e67ae7da26b9e97646133985a92f26e6717"} Mar 13 13:05:27.646709 master-0 kubenswrapper[28149]: I0313 13:05:27.646388 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-77hz2\" (UniqueName: \"kubernetes.io/projected/80c35670-5d39-4184-8592-b784d21d10a3-kube-api-access-77hz2\") pod \"console-95f5bc659-s4pq8\" (UID: \"80c35670-5d39-4184-8592-b784d21d10a3\") " pod="openshift-console/console-95f5bc659-s4pq8" Mar 13 13:05:27.659858 master-0 kubenswrapper[28149]: W0313 13:05:27.659794 28149 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfbe355d7_e23b_4062_b0c1_eee76f83db50.slice/crio-a08fd8a83414d2a8cc823b48c6e38ae7970e1f1d88dbb02086419012c3ba082f WatchSource:0}: Error finding container a08fd8a83414d2a8cc823b48c6e38ae7970e1f1d88dbb02086419012c3ba082f: Status 404 returned error can't find the container with id a08fd8a83414d2a8cc823b48c6e38ae7970e1f1d88dbb02086419012c3ba082f Mar 13 13:05:27.746107 master-0 kubenswrapper[28149]: I0313 13:05:27.744921 28149 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-95f5bc659-s4pq8" Mar 13 13:05:28.022620 master-0 kubenswrapper[28149]: I0313 13:05:28.022536 28149 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-5f558f5558-td5jm"] Mar 13 13:05:28.030025 master-0 kubenswrapper[28149]: W0313 13:05:28.029980 28149 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod777e9dfe_3c9f_42e0_a52f_46921c2b2035.slice/crio-0e7d9017d5bd0ae62c54175c81c31e6146f2c64e42991651bcbfcadd5e994693 WatchSource:0}: Error finding container 0e7d9017d5bd0ae62c54175c81c31e6146f2c64e42991651bcbfcadd5e994693: Status 404 returned error can't find the container with id 0e7d9017d5bd0ae62c54175c81c31e6146f2c64e42991651bcbfcadd5e994693 Mar 13 13:05:28.347736 master-0 kubenswrapper[28149]: I0313 13:05:28.347697 28149 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-9b8c8685d-phpcs"] Mar 13 13:05:28.352223 master-0 kubenswrapper[28149]: W0313 13:05:28.352171 28149 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod923ffd1d_4494_4d25_bc3c_fdfa84a296e5.slice/crio-fb85c2ec67d3b7fd0fb0f754b556d78c635f4c134bf5e7f6d55b326a4f18cedd WatchSource:0}: Error finding container fb85c2ec67d3b7fd0fb0f754b556d78c635f4c134bf5e7f6d55b326a4f18cedd: Status 404 returned error can't find the container with id fb85c2ec67d3b7fd0fb0f754b556d78c635f4c134bf5e7f6d55b326a4f18cedd Mar 13 13:05:28.364307 master-0 kubenswrapper[28149]: I0313 13:05:28.363232 28149 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-86f58fcf4-cxjnh"] Mar 13 13:05:28.369499 master-0 kubenswrapper[28149]: I0313 13:05:28.369453 28149 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-95f5bc659-s4pq8"] Mar 13 13:05:28.642187 master-0 kubenswrapper[28149]: I0313 
13:05:28.641675 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-5f558f5558-td5jm" event={"ID":"777e9dfe-3c9f-42e0-a52f-46921c2b2035","Type":"ContainerStarted","Data":"0e7d9017d5bd0ae62c54175c81c31e6146f2c64e42991651bcbfcadd5e994693"} Mar 13 13:05:28.644480 master-0 kubenswrapper[28149]: I0313 13:05:28.644438 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-86f58fcf4-cxjnh" event={"ID":"1af7d029-c58d-4b3e-99e4-841ac6e7ca3b","Type":"ContainerStarted","Data":"7b73625f16cfd482a207b93644c222caa7e9df04f685a89dcf5f092a3c3d60ed"} Mar 13 13:05:28.645915 master-0 kubenswrapper[28149]: I0313 13:05:28.645835 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-9b8c8685d-phpcs" event={"ID":"923ffd1d-4494-4d25-bc3c-fdfa84a296e5","Type":"ContainerStarted","Data":"fb85c2ec67d3b7fd0fb0f754b556d78c635f4c134bf5e7f6d55b326a4f18cedd"} Mar 13 13:05:28.651574 master-0 kubenswrapper[28149]: I0313 13:05:28.651530 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-95f5bc659-s4pq8" event={"ID":"80c35670-5d39-4184-8592-b784d21d10a3","Type":"ContainerStarted","Data":"cd9994ea808684378111dbbd3f2c014115962cde377f39e94241cb87acf72270"} Mar 13 13:05:28.651574 master-0 kubenswrapper[28149]: I0313 13:05:28.651571 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-95f5bc659-s4pq8" event={"ID":"80c35670-5d39-4184-8592-b784d21d10a3","Type":"ContainerStarted","Data":"62ceb8b4ddbd58549020126aec9d29151262f2e4f03c67525512070b998dd61f"} Mar 13 13:05:28.654672 master-0 kubenswrapper[28149]: I0313 13:05:28.654634 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-mgslk" event={"ID":"fbe355d7-e23b-4062-b0c1-eee76f83db50","Type":"ContainerStarted","Data":"a08fd8a83414d2a8cc823b48c6e38ae7970e1f1d88dbb02086419012c3ba082f"} Mar 13 13:05:28.673674 master-0 
kubenswrapper[28149]: I0313 13:05:28.673549 28149 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-95f5bc659-s4pq8" podStartSLOduration=1.673527274 podStartE2EDuration="1.673527274s" podCreationTimestamp="2026-03-13 13:05:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 13:05:28.672510485 +0000 UTC m=+702.325975644" watchObservedRunningTime="2026-03-13 13:05:28.673527274 +0000 UTC m=+702.326992433" Mar 13 13:05:35.781844 master-0 kubenswrapper[28149]: I0313 13:05:35.781705 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-mgslk" event={"ID":"fbe355d7-e23b-4062-b0c1-eee76f83db50","Type":"ContainerStarted","Data":"1e5178327a1b2df7339c9fe9be8cdf9330a2d6cd26de83c16d1913b236dae8bb"} Mar 13 13:05:35.781844 master-0 kubenswrapper[28149]: I0313 13:05:35.781829 28149 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-handler-mgslk" Mar 13 13:05:35.783811 master-0 kubenswrapper[28149]: I0313 13:05:35.783725 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-bcc4b6f68-v5st7" event={"ID":"6ff627b0-a4b7-4741-9934-afc226d587b8","Type":"ContainerStarted","Data":"d0b0384d71fd35110c0df473faa3c325e57338f3c44042b761678ec6e934cfa2"} Mar 13 13:05:35.783982 master-0 kubenswrapper[28149]: I0313 13:05:35.783820 28149 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-webhook-server-bcc4b6f68-v5st7" Mar 13 13:05:35.787344 master-0 kubenswrapper[28149]: I0313 13:05:35.786808 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-5f558f5558-td5jm" event={"ID":"777e9dfe-3c9f-42e0-a52f-46921c2b2035","Type":"ContainerStarted","Data":"cb39832ac1865ca71d668d2b4c0f163e111faf7a486708e7b5bb5a590bf7bc88"} Mar 13 13:05:35.787344 master-0 
kubenswrapper[28149]: I0313 13:05:35.786917 28149 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-webhook-5f558f5558-td5jm" Mar 13 13:05:35.789295 master-0 kubenswrapper[28149]: I0313 13:05:35.788862 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-86f58fcf4-cxjnh" event={"ID":"1af7d029-c58d-4b3e-99e4-841ac6e7ca3b","Type":"ContainerStarted","Data":"b2b084b963bce8f3d89b483553e77a57501239c3e8467145ed842c641851003a"} Mar 13 13:05:35.792318 master-0 kubenswrapper[28149]: I0313 13:05:35.792259 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-psd47" event={"ID":"13c11fb6-9816-4c99-9acb-0cf5d8249219","Type":"ContainerStarted","Data":"e10533ecae5cae59cb552553b4a32638095f86999b271a397c6e07c53dd8b628"} Mar 13 13:05:35.794351 master-0 kubenswrapper[28149]: I0313 13:05:35.794052 28149 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/speaker-psd47" Mar 13 13:05:35.797127 master-0 kubenswrapper[28149]: I0313 13:05:35.797078 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-9b8c8685d-phpcs" event={"ID":"923ffd1d-4494-4d25-bc3c-fdfa84a296e5","Type":"ContainerStarted","Data":"a90bc8ee694a5c506f8e021bd0eaaacbbfc095e7ba8ddf65b5594d64b119d1c6"} Mar 13 13:05:35.797257 master-0 kubenswrapper[28149]: I0313 13:05:35.797161 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-9b8c8685d-phpcs" event={"ID":"923ffd1d-4494-4d25-bc3c-fdfa84a296e5","Type":"ContainerStarted","Data":"d7d13b6422da2cb7d647510e6e59eed32292ad62050bfcce6badc9ebadc89d9a"} Mar 13 13:05:35.806555 master-0 kubenswrapper[28149]: I0313 13:05:35.803556 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-7bb4cc7c98-qlkcr" 
event={"ID":"b6261d07-1e69-4080-a219-f07f8d607f07","Type":"ContainerStarted","Data":"74f1a5b1fa2abed65d4760997ccb18217c61cb1f4f1b3917a50262bbd3775c8c"} Mar 13 13:05:35.807572 master-0 kubenswrapper[28149]: I0313 13:05:35.807517 28149 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/controller-7bb4cc7c98-qlkcr" Mar 13 13:05:36.090239 master-0 kubenswrapper[28149]: I0313 13:05:36.090124 28149 generic.go:334] "Generic (PLEG): container finished" podID="98a87277-3308-4819-8cad-1f0c2d5d97e1" containerID="da79f88cfc2fe3fbb73e2fcd1cb4a23ee337aab6d1c81d31d29edaf7ba363fcb" exitCode=0 Mar 13 13:05:36.090239 master-0 kubenswrapper[28149]: I0313 13:05:36.090197 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-qb9kf" event={"ID":"98a87277-3308-4819-8cad-1f0c2d5d97e1","Type":"ContainerDied","Data":"da79f88cfc2fe3fbb73e2fcd1cb4a23ee337aab6d1c81d31d29edaf7ba363fcb"} Mar 13 13:05:36.115565 master-0 kubenswrapper[28149]: I0313 13:05:36.115499 28149 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-handler-mgslk" podStartSLOduration=3.41052187 podStartE2EDuration="10.115463001s" podCreationTimestamp="2026-03-13 13:05:26 +0000 UTC" firstStartedPulling="2026-03-13 13:05:27.681055605 +0000 UTC m=+701.334520764" lastFinishedPulling="2026-03-13 13:05:34.385996736 +0000 UTC m=+708.039461895" observedRunningTime="2026-03-13 13:05:35.80281965 +0000 UTC m=+709.456284809" watchObservedRunningTime="2026-03-13 13:05:36.115463001 +0000 UTC m=+709.768928230" Mar 13 13:05:36.139211 master-0 kubenswrapper[28149]: I0313 13:05:36.137393 28149 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-console-plugin-86f58fcf4-cxjnh" podStartSLOduration=2.976905275 podStartE2EDuration="9.137365145s" podCreationTimestamp="2026-03-13 13:05:27 +0000 UTC" firstStartedPulling="2026-03-13 13:05:28.350448674 +0000 UTC m=+702.003913833" 
lastFinishedPulling="2026-03-13 13:05:34.510908544 +0000 UTC m=+708.164373703" observedRunningTime="2026-03-13 13:05:36.134059694 +0000 UTC m=+709.787524853" watchObservedRunningTime="2026-03-13 13:05:36.137365145 +0000 UTC m=+709.790830304" Mar 13 13:05:36.437198 master-0 kubenswrapper[28149]: I0313 13:05:36.436636 28149 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-webhook-5f558f5558-td5jm" podStartSLOduration=3.963047663 podStartE2EDuration="10.436619866s" podCreationTimestamp="2026-03-13 13:05:26 +0000 UTC" firstStartedPulling="2026-03-13 13:05:28.033764392 +0000 UTC m=+701.687229551" lastFinishedPulling="2026-03-13 13:05:34.507336595 +0000 UTC m=+708.160801754" observedRunningTime="2026-03-13 13:05:36.43565715 +0000 UTC m=+710.089122309" watchObservedRunningTime="2026-03-13 13:05:36.436619866 +0000 UTC m=+710.090085025" Mar 13 13:05:36.469238 master-0 kubenswrapper[28149]: I0313 13:05:36.469103 28149 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/speaker-psd47" podStartSLOduration=4.526222882 podStartE2EDuration="11.469085722s" podCreationTimestamp="2026-03-13 13:05:25 +0000 UTC" firstStartedPulling="2026-03-13 13:05:27.516652275 +0000 UTC m=+701.170117434" lastFinishedPulling="2026-03-13 13:05:34.459515115 +0000 UTC m=+708.112980274" observedRunningTime="2026-03-13 13:05:36.458880671 +0000 UTC m=+710.112345830" watchObservedRunningTime="2026-03-13 13:05:36.469085722 +0000 UTC m=+710.122550881" Mar 13 13:05:36.492506 master-0 kubenswrapper[28149]: I0313 13:05:36.492443 28149 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-webhook-server-bcc4b6f68-v5st7" podStartSLOduration=3.747242211 podStartE2EDuration="12.492421097s" podCreationTimestamp="2026-03-13 13:05:24 +0000 UTC" firstStartedPulling="2026-03-13 13:05:25.650912879 +0000 UTC m=+699.304378038" lastFinishedPulling="2026-03-13 13:05:34.396091765 +0000 UTC 
m=+708.049556924" observedRunningTime="2026-03-13 13:05:36.482088252 +0000 UTC m=+710.135553411" watchObservedRunningTime="2026-03-13 13:05:36.492421097 +0000 UTC m=+710.145886256" Mar 13 13:05:36.540194 master-0 kubenswrapper[28149]: I0313 13:05:36.536921 28149 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-metrics-9b8c8685d-phpcs" podStartSLOduration=4.388170539 podStartE2EDuration="10.536899145s" podCreationTimestamp="2026-03-13 13:05:26 +0000 UTC" firstStartedPulling="2026-03-13 13:05:28.359267378 +0000 UTC m=+702.012732537" lastFinishedPulling="2026-03-13 13:05:34.507995984 +0000 UTC m=+708.161461143" observedRunningTime="2026-03-13 13:05:36.500653794 +0000 UTC m=+710.154118953" watchObservedRunningTime="2026-03-13 13:05:36.536899145 +0000 UTC m=+710.190364294" Mar 13 13:05:36.575163 master-0 kubenswrapper[28149]: I0313 13:05:36.568018 28149 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/controller-7bb4cc7c98-qlkcr" podStartSLOduration=3.086431445 podStartE2EDuration="11.567983433s" podCreationTimestamp="2026-03-13 13:05:25 +0000 UTC" firstStartedPulling="2026-03-13 13:05:26.029005337 +0000 UTC m=+699.682470496" lastFinishedPulling="2026-03-13 13:05:34.510557325 +0000 UTC m=+708.164022484" observedRunningTime="2026-03-13 13:05:36.548565697 +0000 UTC m=+710.202030876" watchObservedRunningTime="2026-03-13 13:05:36.567983433 +0000 UTC m=+710.221448592" Mar 13 13:05:37.344176 master-0 kubenswrapper[28149]: I0313 13:05:37.342661 28149 generic.go:334] "Generic (PLEG): container finished" podID="98a87277-3308-4819-8cad-1f0c2d5d97e1" containerID="36aa474453f802fdb9285910cf757eb24846305feb2f936923cee00772e3fb05" exitCode=0 Mar 13 13:05:37.344176 master-0 kubenswrapper[28149]: I0313 13:05:37.342820 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-qb9kf" 
event={"ID":"98a87277-3308-4819-8cad-1f0c2d5d97e1","Type":"ContainerDied","Data":"36aa474453f802fdb9285910cf757eb24846305feb2f936923cee00772e3fb05"} Mar 13 13:05:37.350891 master-0 kubenswrapper[28149]: I0313 13:05:37.348480 28149 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/speaker-psd47" Mar 13 13:05:37.350891 master-0 kubenswrapper[28149]: I0313 13:05:37.350673 28149 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/controller-7bb4cc7c98-qlkcr" Mar 13 13:05:37.836277 master-0 kubenswrapper[28149]: I0313 13:05:37.836231 28149 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-95f5bc659-s4pq8" Mar 13 13:05:37.836277 master-0 kubenswrapper[28149]: I0313 13:05:37.836285 28149 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-95f5bc659-s4pq8" Mar 13 13:05:37.861800 master-0 kubenswrapper[28149]: I0313 13:05:37.858838 28149 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-95f5bc659-s4pq8" Mar 13 13:05:38.228308 master-0 kubenswrapper[28149]: E0313 13:05:38.227973 28149 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod98a87277_3308_4819_8cad_1f0c2d5d97e1.slice/crio-e60eeb5b759daa8bb2ddb156c2e214e44e8a41528d8a83b86874c6c7942671d9.scope\": RecentStats: unable to find data in memory cache]" Mar 13 13:05:38.258520 master-0 kubenswrapper[28149]: E0313 13:05:38.256579 28149 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod98a87277_3308_4819_8cad_1f0c2d5d97e1.slice/crio-e60eeb5b759daa8bb2ddb156c2e214e44e8a41528d8a83b86874c6c7942671d9.scope\": RecentStats: unable to find data in memory cache]" Mar 13 
13:05:38.353154 master-0 kubenswrapper[28149]: I0313 13:05:38.353101 28149 generic.go:334] "Generic (PLEG): container finished" podID="98a87277-3308-4819-8cad-1f0c2d5d97e1" containerID="e60eeb5b759daa8bb2ddb156c2e214e44e8a41528d8a83b86874c6c7942671d9" exitCode=0 Mar 13 13:05:38.353652 master-0 kubenswrapper[28149]: I0313 13:05:38.353180 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-qb9kf" event={"ID":"98a87277-3308-4819-8cad-1f0c2d5d97e1","Type":"ContainerDied","Data":"e60eeb5b759daa8bb2ddb156c2e214e44e8a41528d8a83b86874c6c7942671d9"} Mar 13 13:05:38.358746 master-0 kubenswrapper[28149]: I0313 13:05:38.358704 28149 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-95f5bc659-s4pq8" Mar 13 13:05:38.464113 master-0 kubenswrapper[28149]: I0313 13:05:38.463581 28149 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-7c4766b9db-6tc2q"] Mar 13 13:05:39.371826 master-0 kubenswrapper[28149]: I0313 13:05:39.371736 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-qb9kf" event={"ID":"98a87277-3308-4819-8cad-1f0c2d5d97e1","Type":"ContainerStarted","Data":"6e2532949fb17fd2d8063871735491878fa6bdae2d990af0a4a8c703ee9f89af"} Mar 13 13:05:39.371826 master-0 kubenswrapper[28149]: I0313 13:05:39.371799 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-qb9kf" event={"ID":"98a87277-3308-4819-8cad-1f0c2d5d97e1","Type":"ContainerStarted","Data":"7740b55475a9509792693ff8b7f1fc26a0acbdddd878cbba7c9dd4d551bc122b"} Mar 13 13:05:40.385933 master-0 kubenswrapper[28149]: I0313 13:05:40.385814 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-qb9kf" event={"ID":"98a87277-3308-4819-8cad-1f0c2d5d97e1","Type":"ContainerStarted","Data":"343dea91cd0a4267dbbacc56bfae012b8db6e5679ea8971ee94b1757c38fafff"} Mar 13 13:05:40.385933 master-0 kubenswrapper[28149]: I0313 13:05:40.385872 28149 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-qb9kf" event={"ID":"98a87277-3308-4819-8cad-1f0c2d5d97e1","Type":"ContainerStarted","Data":"3ee0a9977da7c8f1fb2ebd1119d37bcf00c5fe0d14bc09b10c01667ee1f66d70"} Mar 13 13:05:40.385933 master-0 kubenswrapper[28149]: I0313 13:05:40.385885 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-qb9kf" event={"ID":"98a87277-3308-4819-8cad-1f0c2d5d97e1","Type":"ContainerStarted","Data":"68d238e1079ab4418447924d505ac435ae687192e55513549bfb2ee9225ca08f"} Mar 13 13:05:40.385933 master-0 kubenswrapper[28149]: I0313 13:05:40.385895 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-qb9kf" event={"ID":"98a87277-3308-4819-8cad-1f0c2d5d97e1","Type":"ContainerStarted","Data":"7c471dbdc35c2b64f0580bcbf511e2376825ddd4352c09ac2c6c6df082d0d7ce"} Mar 13 13:05:40.386808 master-0 kubenswrapper[28149]: I0313 13:05:40.386779 28149 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-qb9kf" Mar 13 13:05:40.476135 master-0 kubenswrapper[28149]: I0313 13:05:40.476045 28149 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-qb9kf" podStartSLOduration=7.348045809 podStartE2EDuration="16.476023082s" podCreationTimestamp="2026-03-13 13:05:24 +0000 UTC" firstStartedPulling="2026-03-13 13:05:25.365453338 +0000 UTC m=+699.018918497" lastFinishedPulling="2026-03-13 13:05:34.493430611 +0000 UTC m=+708.146895770" observedRunningTime="2026-03-13 13:05:40.472416003 +0000 UTC m=+714.125881162" watchObservedRunningTime="2026-03-13 13:05:40.476023082 +0000 UTC m=+714.129488251" Mar 13 13:05:42.541661 master-0 kubenswrapper[28149]: I0313 13:05:42.541620 28149 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-handler-mgslk" Mar 13 13:05:45.186287 master-0 kubenswrapper[28149]: I0313 13:05:45.186231 28149 kubelet.go:2542] "SyncLoop (probe)" 
probe="startup" status="unhealthy" pod="metallb-system/frr-k8s-qb9kf" Mar 13 13:05:45.188085 master-0 kubenswrapper[28149]: I0313 13:05:45.188049 28149 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-webhook-server-bcc4b6f68-v5st7" Mar 13 13:05:45.251613 master-0 kubenswrapper[28149]: I0313 13:05:45.250992 28149 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="metallb-system/frr-k8s-qb9kf" Mar 13 13:05:47.454436 master-0 kubenswrapper[28149]: I0313 13:05:47.454374 28149 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-webhook-5f558f5558-td5jm" Mar 13 13:05:51.112790 master-0 kubenswrapper[28149]: I0313 13:05:51.112678 28149 scope.go:117] "RemoveContainer" containerID="62a39b62dd321a9a78aa93cc0dbace3d5275bb08e7d86c7913fc8df6b17cff3f" Mar 13 13:05:53.432175 master-0 kubenswrapper[28149]: I0313 13:05:53.431104 28149 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-storage/vg-manager-j7btw"] Mar 13 13:05:53.432783 master-0 kubenswrapper[28149]: I0313 13:05:53.432443 28149 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-storage/vg-manager-j7btw" Mar 13 13:05:53.434800 master-0 kubenswrapper[28149]: I0313 13:05:53.434745 28149 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-storage"/"vg-manager-metrics-cert" Mar 13 13:05:53.472180 master-0 kubenswrapper[28149]: I0313 13:05:53.471657 28149 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-storage/vg-manager-j7btw"] Mar 13 13:05:53.634630 master-0 kubenswrapper[28149]: I0313 13:05:53.634567 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-volumes-dir\" (UniqueName: \"kubernetes.io/host-path/34dd0618-e469-4484-969b-358915b354a1-pod-volumes-dir\") pod \"vg-manager-j7btw\" (UID: \"34dd0618-e469-4484-969b-358915b354a1\") " pod="openshift-storage/vg-manager-j7btw" Mar 13 13:05:53.634630 master-0 kubenswrapper[28149]: I0313 13:05:53.634640 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"device-dir\" (UniqueName: \"kubernetes.io/host-path/34dd0618-e469-4484-969b-358915b354a1-device-dir\") pod \"vg-manager-j7btw\" (UID: \"34dd0618-e469-4484-969b-358915b354a1\") " pod="openshift-storage/vg-manager-j7btw" Mar 13 13:05:53.634934 master-0 kubenswrapper[28149]: I0313 13:05:53.634675 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"file-lock-dir\" (UniqueName: \"kubernetes.io/host-path/34dd0618-e469-4484-969b-358915b354a1-file-lock-dir\") pod \"vg-manager-j7btw\" (UID: \"34dd0618-e469-4484-969b-358915b354a1\") " pod="openshift-storage/vg-manager-j7btw" Mar 13 13:05:53.634934 master-0 kubenswrapper[28149]: I0313 13:05:53.634712 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rqblz\" (UniqueName: \"kubernetes.io/projected/34dd0618-e469-4484-969b-358915b354a1-kube-api-access-rqblz\") pod \"vg-manager-j7btw\" (UID: 
\"34dd0618-e469-4484-969b-358915b354a1\") " pod="openshift-storage/vg-manager-j7btw" Mar 13 13:05:53.634934 master-0 kubenswrapper[28149]: I0313 13:05:53.634735 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-cert\" (UniqueName: \"kubernetes.io/secret/34dd0618-e469-4484-969b-358915b354a1-metrics-cert\") pod \"vg-manager-j7btw\" (UID: \"34dd0618-e469-4484-969b-358915b354a1\") " pod="openshift-storage/vg-manager-j7btw" Mar 13 13:05:53.634934 master-0 kubenswrapper[28149]: I0313 13:05:53.634773 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-plugin-dir\" (UniqueName: \"kubernetes.io/host-path/34dd0618-e469-4484-969b-358915b354a1-node-plugin-dir\") pod \"vg-manager-j7btw\" (UID: \"34dd0618-e469-4484-969b-358915b354a1\") " pod="openshift-storage/vg-manager-j7btw" Mar 13 13:05:53.634934 master-0 kubenswrapper[28149]: I0313 13:05:53.634839 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lvmd-config\" (UniqueName: \"kubernetes.io/host-path/34dd0618-e469-4484-969b-358915b354a1-lvmd-config\") pod \"vg-manager-j7btw\" (UID: \"34dd0618-e469-4484-969b-358915b354a1\") " pod="openshift-storage/vg-manager-j7btw" Mar 13 13:05:53.634934 master-0 kubenswrapper[28149]: I0313 13:05:53.634884 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"csi-plugin-dir\" (UniqueName: \"kubernetes.io/host-path/34dd0618-e469-4484-969b-358915b354a1-csi-plugin-dir\") pod \"vg-manager-j7btw\" (UID: \"34dd0618-e469-4484-969b-358915b354a1\") " pod="openshift-storage/vg-manager-j7btw" Mar 13 13:05:53.634934 master-0 kubenswrapper[28149]: I0313 13:05:53.634912 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/34dd0618-e469-4484-969b-358915b354a1-sys\") pod \"vg-manager-j7btw\" (UID: 
\"34dd0618-e469-4484-969b-358915b354a1\") " pod="openshift-storage/vg-manager-j7btw" Mar 13 13:05:53.635278 master-0 kubenswrapper[28149]: I0313 13:05:53.634953 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/34dd0618-e469-4484-969b-358915b354a1-registration-dir\") pod \"vg-manager-j7btw\" (UID: \"34dd0618-e469-4484-969b-358915b354a1\") " pod="openshift-storage/vg-manager-j7btw" Mar 13 13:05:53.635278 master-0 kubenswrapper[28149]: I0313 13:05:53.635072 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-udev\" (UniqueName: \"kubernetes.io/host-path/34dd0618-e469-4484-969b-358915b354a1-run-udev\") pod \"vg-manager-j7btw\" (UID: \"34dd0618-e469-4484-969b-358915b354a1\") " pod="openshift-storage/vg-manager-j7btw" Mar 13 13:05:53.736794 master-0 kubenswrapper[28149]: I0313 13:05:53.736653 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-plugin-dir\" (UniqueName: \"kubernetes.io/host-path/34dd0618-e469-4484-969b-358915b354a1-node-plugin-dir\") pod \"vg-manager-j7btw\" (UID: \"34dd0618-e469-4484-969b-358915b354a1\") " pod="openshift-storage/vg-manager-j7btw" Mar 13 13:05:53.736794 master-0 kubenswrapper[28149]: I0313 13:05:53.736734 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lvmd-config\" (UniqueName: \"kubernetes.io/host-path/34dd0618-e469-4484-969b-358915b354a1-lvmd-config\") pod \"vg-manager-j7btw\" (UID: \"34dd0618-e469-4484-969b-358915b354a1\") " pod="openshift-storage/vg-manager-j7btw" Mar 13 13:05:53.736794 master-0 kubenswrapper[28149]: I0313 13:05:53.736771 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"csi-plugin-dir\" (UniqueName: \"kubernetes.io/host-path/34dd0618-e469-4484-969b-358915b354a1-csi-plugin-dir\") pod \"vg-manager-j7btw\" (UID: 
\"34dd0618-e469-4484-969b-358915b354a1\") " pod="openshift-storage/vg-manager-j7btw" Mar 13 13:05:53.736794 master-0 kubenswrapper[28149]: I0313 13:05:53.736796 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/34dd0618-e469-4484-969b-358915b354a1-sys\") pod \"vg-manager-j7btw\" (UID: \"34dd0618-e469-4484-969b-358915b354a1\") " pod="openshift-storage/vg-manager-j7btw" Mar 13 13:05:53.737189 master-0 kubenswrapper[28149]: I0313 13:05:53.736834 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/34dd0618-e469-4484-969b-358915b354a1-registration-dir\") pod \"vg-manager-j7btw\" (UID: \"34dd0618-e469-4484-969b-358915b354a1\") " pod="openshift-storage/vg-manager-j7btw" Mar 13 13:05:53.737189 master-0 kubenswrapper[28149]: I0313 13:05:53.736918 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-udev\" (UniqueName: \"kubernetes.io/host-path/34dd0618-e469-4484-969b-358915b354a1-run-udev\") pod \"vg-manager-j7btw\" (UID: \"34dd0618-e469-4484-969b-358915b354a1\") " pod="openshift-storage/vg-manager-j7btw" Mar 13 13:05:53.737189 master-0 kubenswrapper[28149]: I0313 13:05:53.736994 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-plugin-dir\" (UniqueName: \"kubernetes.io/host-path/34dd0618-e469-4484-969b-358915b354a1-node-plugin-dir\") pod \"vg-manager-j7btw\" (UID: \"34dd0618-e469-4484-969b-358915b354a1\") " pod="openshift-storage/vg-manager-j7btw" Mar 13 13:05:53.737189 master-0 kubenswrapper[28149]: I0313 13:05:53.737067 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/34dd0618-e469-4484-969b-358915b354a1-sys\") pod \"vg-manager-j7btw\" (UID: \"34dd0618-e469-4484-969b-358915b354a1\") " pod="openshift-storage/vg-manager-j7btw" Mar 13 13:05:53.737189 master-0 
kubenswrapper[28149]: I0313 13:05:53.737128 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"csi-plugin-dir\" (UniqueName: \"kubernetes.io/host-path/34dd0618-e469-4484-969b-358915b354a1-csi-plugin-dir\") pod \"vg-manager-j7btw\" (UID: \"34dd0618-e469-4484-969b-358915b354a1\") " pod="openshift-storage/vg-manager-j7btw" Mar 13 13:05:53.737189 master-0 kubenswrapper[28149]: I0313 13:05:53.737173 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/34dd0618-e469-4484-969b-358915b354a1-registration-dir\") pod \"vg-manager-j7btw\" (UID: \"34dd0618-e469-4484-969b-358915b354a1\") " pod="openshift-storage/vg-manager-j7btw" Mar 13 13:05:53.737525 master-0 kubenswrapper[28149]: I0313 13:05:53.737229 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-volumes-dir\" (UniqueName: \"kubernetes.io/host-path/34dd0618-e469-4484-969b-358915b354a1-pod-volumes-dir\") pod \"vg-manager-j7btw\" (UID: \"34dd0618-e469-4484-969b-358915b354a1\") " pod="openshift-storage/vg-manager-j7btw" Mar 13 13:05:53.737525 master-0 kubenswrapper[28149]: I0313 13:05:53.737250 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-udev\" (UniqueName: \"kubernetes.io/host-path/34dd0618-e469-4484-969b-358915b354a1-run-udev\") pod \"vg-manager-j7btw\" (UID: \"34dd0618-e469-4484-969b-358915b354a1\") " pod="openshift-storage/vg-manager-j7btw" Mar 13 13:05:53.737525 master-0 kubenswrapper[28149]: I0313 13:05:53.737294 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lvmd-config\" (UniqueName: \"kubernetes.io/host-path/34dd0618-e469-4484-969b-358915b354a1-lvmd-config\") pod \"vg-manager-j7btw\" (UID: \"34dd0618-e469-4484-969b-358915b354a1\") " pod="openshift-storage/vg-manager-j7btw" Mar 13 13:05:53.737525 master-0 kubenswrapper[28149]: I0313 13:05:53.737325 28149 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"pod-volumes-dir\" (UniqueName: \"kubernetes.io/host-path/34dd0618-e469-4484-969b-358915b354a1-pod-volumes-dir\") pod \"vg-manager-j7btw\" (UID: \"34dd0618-e469-4484-969b-358915b354a1\") " pod="openshift-storage/vg-manager-j7btw" Mar 13 13:05:53.737525 master-0 kubenswrapper[28149]: I0313 13:05:53.737432 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"device-dir\" (UniqueName: \"kubernetes.io/host-path/34dd0618-e469-4484-969b-358915b354a1-device-dir\") pod \"vg-manager-j7btw\" (UID: \"34dd0618-e469-4484-969b-358915b354a1\") " pod="openshift-storage/vg-manager-j7btw" Mar 13 13:05:53.737525 master-0 kubenswrapper[28149]: I0313 13:05:53.737472 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"file-lock-dir\" (UniqueName: \"kubernetes.io/host-path/34dd0618-e469-4484-969b-358915b354a1-file-lock-dir\") pod \"vg-manager-j7btw\" (UID: \"34dd0618-e469-4484-969b-358915b354a1\") " pod="openshift-storage/vg-manager-j7btw" Mar 13 13:05:53.738588 master-0 kubenswrapper[28149]: I0313 13:05:53.737517 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rqblz\" (UniqueName: \"kubernetes.io/projected/34dd0618-e469-4484-969b-358915b354a1-kube-api-access-rqblz\") pod \"vg-manager-j7btw\" (UID: \"34dd0618-e469-4484-969b-358915b354a1\") " pod="openshift-storage/vg-manager-j7btw" Mar 13 13:05:53.738588 master-0 kubenswrapper[28149]: I0313 13:05:53.737969 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-cert\" (UniqueName: \"kubernetes.io/secret/34dd0618-e469-4484-969b-358915b354a1-metrics-cert\") pod \"vg-manager-j7btw\" (UID: \"34dd0618-e469-4484-969b-358915b354a1\") " pod="openshift-storage/vg-manager-j7btw" Mar 13 13:05:53.738588 master-0 kubenswrapper[28149]: I0313 13:05:53.737554 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"device-dir\" (UniqueName: 
\"kubernetes.io/host-path/34dd0618-e469-4484-969b-358915b354a1-device-dir\") pod \"vg-manager-j7btw\" (UID: \"34dd0618-e469-4484-969b-358915b354a1\") " pod="openshift-storage/vg-manager-j7btw"
Mar 13 13:05:53.738588 master-0 kubenswrapper[28149]: I0313 13:05:53.737772 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"file-lock-dir\" (UniqueName: \"kubernetes.io/host-path/34dd0618-e469-4484-969b-358915b354a1-file-lock-dir\") pod \"vg-manager-j7btw\" (UID: \"34dd0618-e469-4484-969b-358915b354a1\") " pod="openshift-storage/vg-manager-j7btw"
Mar 13 13:05:53.741834 master-0 kubenswrapper[28149]: I0313 13:05:53.741762 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-cert\" (UniqueName: \"kubernetes.io/secret/34dd0618-e469-4484-969b-358915b354a1-metrics-cert\") pod \"vg-manager-j7btw\" (UID: \"34dd0618-e469-4484-969b-358915b354a1\") " pod="openshift-storage/vg-manager-j7btw"
Mar 13 13:05:53.759855 master-0 kubenswrapper[28149]: I0313 13:05:53.759781 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rqblz\" (UniqueName: \"kubernetes.io/projected/34dd0618-e469-4484-969b-358915b354a1-kube-api-access-rqblz\") pod \"vg-manager-j7btw\" (UID: \"34dd0618-e469-4484-969b-358915b354a1\") " pod="openshift-storage/vg-manager-j7btw"
Mar 13 13:05:53.789544 master-0 kubenswrapper[28149]: I0313 13:05:53.789466 28149 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-storage/vg-manager-j7btw"
Mar 13 13:05:54.338504 master-0 kubenswrapper[28149]: W0313 13:05:54.338452 28149 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod34dd0618_e469_4484_969b_358915b354a1.slice/crio-881f3c1b1a1cc4f58b6fd8db11b0a3a8ba360e6a76dd98bbd0ab23ba809d786c WatchSource:0}: Error finding container 881f3c1b1a1cc4f58b6fd8db11b0a3a8ba360e6a76dd98bbd0ab23ba809d786c: Status 404 returned error can't find the container with id 881f3c1b1a1cc4f58b6fd8db11b0a3a8ba360e6a76dd98bbd0ab23ba809d786c
Mar 13 13:05:54.343092 master-0 kubenswrapper[28149]: I0313 13:05:54.343045 28149 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-storage/vg-manager-j7btw"]
Mar 13 13:05:54.547413 master-0 kubenswrapper[28149]: I0313 13:05:54.547338 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-storage/vg-manager-j7btw" event={"ID":"34dd0618-e469-4484-969b-358915b354a1","Type":"ContainerStarted","Data":"6c16fb14cc637320638cd2f23fdbfe4ad50ef46ec7368295c7a109139c27daa8"}
Mar 13 13:05:54.547413 master-0 kubenswrapper[28149]: I0313 13:05:54.547409 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-storage/vg-manager-j7btw" event={"ID":"34dd0618-e469-4484-969b-358915b354a1","Type":"ContainerStarted","Data":"881f3c1b1a1cc4f58b6fd8db11b0a3a8ba360e6a76dd98bbd0ab23ba809d786c"}
Mar 13 13:05:54.590890 master-0 kubenswrapper[28149]: I0313 13:05:54.590765 28149 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-storage/vg-manager-j7btw" podStartSLOduration=1.5907401650000002 podStartE2EDuration="1.590740165s" podCreationTimestamp="2026-03-13 13:05:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 13:05:54.5865782 +0000 UTC m=+728.240043369" watchObservedRunningTime="2026-03-13 13:05:54.590740165 +0000 UTC m=+728.244205324"
Mar 13 13:05:55.200565 master-0 kubenswrapper[28149]: I0313 13:05:55.200487 28149 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-qb9kf"
Mar 13 13:05:56.571152 master-0 kubenswrapper[28149]: I0313 13:05:56.571092 28149 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-storage_vg-manager-j7btw_34dd0618-e469-4484-969b-358915b354a1/vg-manager/0.log"
Mar 13 13:05:56.571903 master-0 kubenswrapper[28149]: I0313 13:05:56.571871 28149 generic.go:334] "Generic (PLEG): container finished" podID="34dd0618-e469-4484-969b-358915b354a1" containerID="6c16fb14cc637320638cd2f23fdbfe4ad50ef46ec7368295c7a109139c27daa8" exitCode=1
Mar 13 13:05:56.572031 master-0 kubenswrapper[28149]: I0313 13:05:56.572010 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-storage/vg-manager-j7btw" event={"ID":"34dd0618-e469-4484-969b-358915b354a1","Type":"ContainerDied","Data":"6c16fb14cc637320638cd2f23fdbfe4ad50ef46ec7368295c7a109139c27daa8"}
Mar 13 13:05:56.572792 master-0 kubenswrapper[28149]: I0313 13:05:56.572770 28149 scope.go:117] "RemoveContainer" containerID="6c16fb14cc637320638cd2f23fdbfe4ad50ef46ec7368295c7a109139c27daa8"
Mar 13 13:05:56.916607 master-0 kubenswrapper[28149]: I0313 13:05:56.916563 28149 plugin_watcher.go:194] "Adding socket path or updating timestamp to desired state cache" path="/var/lib/kubelet/plugins_registry/topolvm.io-reg.sock"
Mar 13 13:05:57.334222 master-0 kubenswrapper[28149]: I0313 13:05:57.334050 28149 reconciler.go:161] "OperationExecutor.RegisterPlugin started" plugin={"SocketPath":"/var/lib/kubelet/plugins_registry/topolvm.io-reg.sock","Timestamp":"2026-03-13T13:05:56.916846541Z","Handler":null,"Name":""}
Mar 13 13:05:57.336833 master-0 kubenswrapper[28149]: I0313 13:05:57.336807 28149 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: topolvm.io endpoint: /var/lib/kubelet/plugins/topolvm.io/node/csi-topolvm.sock versions: 1.0.0
Mar 13 13:05:57.336973 master-0 kubenswrapper[28149]: I0313 13:05:57.336858 28149 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: topolvm.io at endpoint: /var/lib/kubelet/plugins/topolvm.io/node/csi-topolvm.sock
Mar 13 13:05:57.588131 master-0 kubenswrapper[28149]: I0313 13:05:57.587508 28149 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-storage_vg-manager-j7btw_34dd0618-e469-4484-969b-358915b354a1/vg-manager/0.log"
Mar 13 13:05:57.588131 master-0 kubenswrapper[28149]: I0313 13:05:57.587587 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-storage/vg-manager-j7btw" event={"ID":"34dd0618-e469-4484-969b-358915b354a1","Type":"ContainerStarted","Data":"9084d08c83717e352fb62754dcbaec660728f8a9499aae0bf8a5d5ca20ba6c21"}
Mar 13 13:05:59.723501 master-0 kubenswrapper[28149]: I0313 13:05:59.723409 28149 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-bbk8f"]
Mar 13 13:05:59.724547 master-0 kubenswrapper[28149]: I0313 13:05:59.724513 28149 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-bbk8f"
Mar 13 13:05:59.727273 master-0 kubenswrapper[28149]: I0313 13:05:59.727226 28149 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"kube-root-ca.crt"
Mar 13 13:05:59.727633 master-0 kubenswrapper[28149]: I0313 13:05:59.727593 28149 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"openshift-service-ca.crt"
Mar 13 13:05:59.758166 master-0 kubenswrapper[28149]: I0313 13:05:59.755603 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-852kd\" (UniqueName: \"kubernetes.io/projected/24d5d93d-500e-4fa4-b540-d4e290c2952e-kube-api-access-852kd\") pod \"openstack-operator-index-bbk8f\" (UID: \"24d5d93d-500e-4fa4-b540-d4e290c2952e\") " pod="openstack-operators/openstack-operator-index-bbk8f"
Mar 13 13:05:59.781907 master-0 kubenswrapper[28149]: I0313 13:05:59.780457 28149 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-bbk8f"]
Mar 13 13:05:59.860485 master-0 kubenswrapper[28149]: I0313 13:05:59.860382 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-852kd\" (UniqueName: \"kubernetes.io/projected/24d5d93d-500e-4fa4-b540-d4e290c2952e-kube-api-access-852kd\") pod \"openstack-operator-index-bbk8f\" (UID: \"24d5d93d-500e-4fa4-b540-d4e290c2952e\") " pod="openstack-operators/openstack-operator-index-bbk8f"
Mar 13 13:05:59.879634 master-0 kubenswrapper[28149]: I0313 13:05:59.879595 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-852kd\" (UniqueName: \"kubernetes.io/projected/24d5d93d-500e-4fa4-b540-d4e290c2952e-kube-api-access-852kd\") pod \"openstack-operator-index-bbk8f\" (UID: \"24d5d93d-500e-4fa4-b540-d4e290c2952e\") " pod="openstack-operators/openstack-operator-index-bbk8f"
Mar 13 13:06:00.042701 master-0 kubenswrapper[28149]: I0313 13:06:00.042214 28149 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-bbk8f"
Mar 13 13:06:00.506771 master-0 kubenswrapper[28149]: I0313 13:06:00.506701 28149 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-bbk8f"]
Mar 13 13:06:00.622837 master-0 kubenswrapper[28149]: I0313 13:06:00.622767 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-bbk8f" event={"ID":"24d5d93d-500e-4fa4-b540-d4e290c2952e","Type":"ContainerStarted","Data":"7eef8da0c696f2485aef53ccfd5007e5a37f37c858f340dd161548a7966e95e1"}
Mar 13 13:06:01.638738 master-0 kubenswrapper[28149]: I0313 13:06:01.638683 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-bbk8f" event={"ID":"24d5d93d-500e-4fa4-b540-d4e290c2952e","Type":"ContainerStarted","Data":"25ce31e4fa68ad4e6ecb5a1fbc8e7912e93b32833eba2c243b75e76ff1acecdf"}
Mar 13 13:06:01.699309 master-0 kubenswrapper[28149]: I0313 13:06:01.699155 28149 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-bbk8f" podStartSLOduration=1.8105479249999998 podStartE2EDuration="2.699125786s" podCreationTimestamp="2026-03-13 13:05:59 +0000 UTC" firstStartedPulling="2026-03-13 13:06:00.530293988 +0000 UTC m=+734.183759147" lastFinishedPulling="2026-03-13 13:06:01.418871849 +0000 UTC m=+735.072337008" observedRunningTime="2026-03-13 13:06:01.696563575 +0000 UTC m=+735.350028734" watchObservedRunningTime="2026-03-13 13:06:01.699125786 +0000 UTC m=+735.352590945"
Mar 13 13:06:03.771452 master-0 kubenswrapper[28149]: I0313 13:06:03.771398 28149 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-7c4766b9db-6tc2q" podUID="27c9b1b2-ce2d-4837-8b91-3ca46ff394a7" containerName="console" containerID="cri-o://41610067df452029f9607bdf787b606790c6069641790cfd739f45f3a915e0fb" gracePeriod=15
Mar 13 13:06:03.790400 master-0 kubenswrapper[28149]: I0313 13:06:03.790323 28149 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-storage/vg-manager-j7btw"
Mar 13 13:06:03.794276 master-0 kubenswrapper[28149]: I0313 13:06:03.793180 28149 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-storage/vg-manager-j7btw"
Mar 13 13:06:03.865811 master-0 kubenswrapper[28149]: I0313 13:06:03.865745 28149 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-index-bbk8f"]
Mar 13 13:06:03.866066 master-0 kubenswrapper[28149]: I0313 13:06:03.865956 28149 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack-operators/openstack-operator-index-bbk8f" podUID="24d5d93d-500e-4fa4-b540-d4e290c2952e" containerName="registry-server" containerID="cri-o://25ce31e4fa68ad4e6ecb5a1fbc8e7912e93b32833eba2c243b75e76ff1acecdf" gracePeriod=2
Mar 13 13:06:04.265663 master-0 kubenswrapper[28149]: I0313 13:06:04.265621 28149 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-7c4766b9db-6tc2q_27c9b1b2-ce2d-4837-8b91-3ca46ff394a7/console/0.log"
Mar 13 13:06:04.265915 master-0 kubenswrapper[28149]: I0313 13:06:04.265693 28149 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-7c4766b9db-6tc2q"
Mar 13 13:06:04.356673 master-0 kubenswrapper[28149]: I0313 13:06:04.356623 28149 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/27c9b1b2-ce2d-4837-8b91-3ca46ff394a7-console-config\") pod \"27c9b1b2-ce2d-4837-8b91-3ca46ff394a7\" (UID: \"27c9b1b2-ce2d-4837-8b91-3ca46ff394a7\") "
Mar 13 13:06:04.356917 master-0 kubenswrapper[28149]: I0313 13:06:04.356734 28149 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qf7dd\" (UniqueName: \"kubernetes.io/projected/27c9b1b2-ce2d-4837-8b91-3ca46ff394a7-kube-api-access-qf7dd\") pod \"27c9b1b2-ce2d-4837-8b91-3ca46ff394a7\" (UID: \"27c9b1b2-ce2d-4837-8b91-3ca46ff394a7\") "
Mar 13 13:06:04.356917 master-0 kubenswrapper[28149]: I0313 13:06:04.356868 28149 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/27c9b1b2-ce2d-4837-8b91-3ca46ff394a7-console-serving-cert\") pod \"27c9b1b2-ce2d-4837-8b91-3ca46ff394a7\" (UID: \"27c9b1b2-ce2d-4837-8b91-3ca46ff394a7\") "
Mar 13 13:06:04.356917 master-0 kubenswrapper[28149]: I0313 13:06:04.356910 28149 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/27c9b1b2-ce2d-4837-8b91-3ca46ff394a7-oauth-serving-cert\") pod \"27c9b1b2-ce2d-4837-8b91-3ca46ff394a7\" (UID: \"27c9b1b2-ce2d-4837-8b91-3ca46ff394a7\") "
Mar 13 13:06:04.357791 master-0 kubenswrapper[28149]: I0313 13:06:04.357413 28149 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/27c9b1b2-ce2d-4837-8b91-3ca46ff394a7-console-config" (OuterVolumeSpecName: "console-config") pod "27c9b1b2-ce2d-4837-8b91-3ca46ff394a7" (UID: "27c9b1b2-ce2d-4837-8b91-3ca46ff394a7"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 13 13:06:04.357876 master-0 kubenswrapper[28149]: I0313 13:06:04.357736 28149 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/27c9b1b2-ce2d-4837-8b91-3ca46ff394a7-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "27c9b1b2-ce2d-4837-8b91-3ca46ff394a7" (UID: "27c9b1b2-ce2d-4837-8b91-3ca46ff394a7"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 13 13:06:04.357876 master-0 kubenswrapper[28149]: I0313 13:06:04.357805 28149 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/27c9b1b2-ce2d-4837-8b91-3ca46ff394a7-service-ca\") pod \"27c9b1b2-ce2d-4837-8b91-3ca46ff394a7\" (UID: \"27c9b1b2-ce2d-4837-8b91-3ca46ff394a7\") "
Mar 13 13:06:04.357976 master-0 kubenswrapper[28149]: I0313 13:06:04.357906 28149 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/27c9b1b2-ce2d-4837-8b91-3ca46ff394a7-trusted-ca-bundle\") pod \"27c9b1b2-ce2d-4837-8b91-3ca46ff394a7\" (UID: \"27c9b1b2-ce2d-4837-8b91-3ca46ff394a7\") "
Mar 13 13:06:04.358223 master-0 kubenswrapper[28149]: I0313 13:06:04.358007 28149 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/27c9b1b2-ce2d-4837-8b91-3ca46ff394a7-console-oauth-config\") pod \"27c9b1b2-ce2d-4837-8b91-3ca46ff394a7\" (UID: \"27c9b1b2-ce2d-4837-8b91-3ca46ff394a7\") "
Mar 13 13:06:04.358369 master-0 kubenswrapper[28149]: I0313 13:06:04.358297 28149 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/27c9b1b2-ce2d-4837-8b91-3ca46ff394a7-service-ca" (OuterVolumeSpecName: "service-ca") pod "27c9b1b2-ce2d-4837-8b91-3ca46ff394a7" (UID: "27c9b1b2-ce2d-4837-8b91-3ca46ff394a7"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 13 13:06:04.358533 master-0 kubenswrapper[28149]: I0313 13:06:04.358487 28149 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/27c9b1b2-ce2d-4837-8b91-3ca46ff394a7-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "27c9b1b2-ce2d-4837-8b91-3ca46ff394a7" (UID: "27c9b1b2-ce2d-4837-8b91-3ca46ff394a7"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 13 13:06:04.358901 master-0 kubenswrapper[28149]: I0313 13:06:04.358761 28149 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/27c9b1b2-ce2d-4837-8b91-3ca46ff394a7-console-config\") on node \"master-0\" DevicePath \"\""
Mar 13 13:06:04.358901 master-0 kubenswrapper[28149]: I0313 13:06:04.358799 28149 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/27c9b1b2-ce2d-4837-8b91-3ca46ff394a7-oauth-serving-cert\") on node \"master-0\" DevicePath \"\""
Mar 13 13:06:04.358901 master-0 kubenswrapper[28149]: I0313 13:06:04.358815 28149 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/27c9b1b2-ce2d-4837-8b91-3ca46ff394a7-service-ca\") on node \"master-0\" DevicePath \"\""
Mar 13 13:06:04.358901 master-0 kubenswrapper[28149]: I0313 13:06:04.358829 28149 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/27c9b1b2-ce2d-4837-8b91-3ca46ff394a7-trusted-ca-bundle\") on node \"master-0\" DevicePath \"\""
Mar 13 13:06:04.360569 master-0 kubenswrapper[28149]: I0313 13:06:04.360523 28149 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/27c9b1b2-ce2d-4837-8b91-3ca46ff394a7-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "27c9b1b2-ce2d-4837-8b91-3ca46ff394a7" (UID: "27c9b1b2-ce2d-4837-8b91-3ca46ff394a7"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 13 13:06:04.361000 master-0 kubenswrapper[28149]: I0313 13:06:04.360968 28149 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-bbk8f"
Mar 13 13:06:04.361391 master-0 kubenswrapper[28149]: I0313 13:06:04.361345 28149 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/27c9b1b2-ce2d-4837-8b91-3ca46ff394a7-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "27c9b1b2-ce2d-4837-8b91-3ca46ff394a7" (UID: "27c9b1b2-ce2d-4837-8b91-3ca46ff394a7"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 13 13:06:04.361582 master-0 kubenswrapper[28149]: I0313 13:06:04.361554 28149 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/27c9b1b2-ce2d-4837-8b91-3ca46ff394a7-kube-api-access-qf7dd" (OuterVolumeSpecName: "kube-api-access-qf7dd") pod "27c9b1b2-ce2d-4837-8b91-3ca46ff394a7" (UID: "27c9b1b2-ce2d-4837-8b91-3ca46ff394a7"). InnerVolumeSpecName "kube-api-access-qf7dd". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 13 13:06:04.462009 master-0 kubenswrapper[28149]: I0313 13:06:04.461842 28149 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-852kd\" (UniqueName: \"kubernetes.io/projected/24d5d93d-500e-4fa4-b540-d4e290c2952e-kube-api-access-852kd\") pod \"24d5d93d-500e-4fa4-b540-d4e290c2952e\" (UID: \"24d5d93d-500e-4fa4-b540-d4e290c2952e\") "
Mar 13 13:06:04.462619 master-0 kubenswrapper[28149]: I0313 13:06:04.462599 28149 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/27c9b1b2-ce2d-4837-8b91-3ca46ff394a7-console-oauth-config\") on node \"master-0\" DevicePath \"\""
Mar 13 13:06:04.462680 master-0 kubenswrapper[28149]: I0313 13:06:04.462624 28149 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qf7dd\" (UniqueName: \"kubernetes.io/projected/27c9b1b2-ce2d-4837-8b91-3ca46ff394a7-kube-api-access-qf7dd\") on node \"master-0\" DevicePath \"\""
Mar 13 13:06:04.462680 master-0 kubenswrapper[28149]: I0313 13:06:04.462665 28149 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/27c9b1b2-ce2d-4837-8b91-3ca46ff394a7-console-serving-cert\") on node \"master-0\" DevicePath \"\""
Mar 13 13:06:04.470280 master-0 kubenswrapper[28149]: I0313 13:06:04.466491 28149 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/24d5d93d-500e-4fa4-b540-d4e290c2952e-kube-api-access-852kd" (OuterVolumeSpecName: "kube-api-access-852kd") pod "24d5d93d-500e-4fa4-b540-d4e290c2952e" (UID: "24d5d93d-500e-4fa4-b540-d4e290c2952e"). InnerVolumeSpecName "kube-api-access-852kd". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 13 13:06:04.476976 master-0 kubenswrapper[28149]: I0313 13:06:04.476895 28149 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-2gc9c"]
Mar 13 13:06:04.477985 master-0 kubenswrapper[28149]: E0313 13:06:04.477941 28149 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="27c9b1b2-ce2d-4837-8b91-3ca46ff394a7" containerName="console"
Mar 13 13:06:04.477985 master-0 kubenswrapper[28149]: I0313 13:06:04.477986 28149 state_mem.go:107] "Deleted CPUSet assignment" podUID="27c9b1b2-ce2d-4837-8b91-3ca46ff394a7" containerName="console"
Mar 13 13:06:04.478100 master-0 kubenswrapper[28149]: E0313 13:06:04.478042 28149 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="24d5d93d-500e-4fa4-b540-d4e290c2952e" containerName="registry-server"
Mar 13 13:06:04.478100 master-0 kubenswrapper[28149]: I0313 13:06:04.478053 28149 state_mem.go:107] "Deleted CPUSet assignment" podUID="24d5d93d-500e-4fa4-b540-d4e290c2952e" containerName="registry-server"
Mar 13 13:06:04.478330 master-0 kubenswrapper[28149]: I0313 13:06:04.478294 28149 memory_manager.go:354] "RemoveStaleState removing state" podUID="24d5d93d-500e-4fa4-b540-d4e290c2952e" containerName="registry-server"
Mar 13 13:06:04.478384 master-0 kubenswrapper[28149]: I0313 13:06:04.478357 28149 memory_manager.go:354] "RemoveStaleState removing state" podUID="27c9b1b2-ce2d-4837-8b91-3ca46ff394a7" containerName="console"
Mar 13 13:06:04.479670 master-0 kubenswrapper[28149]: I0313 13:06:04.479633 28149 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-2gc9c"
Mar 13 13:06:04.492434 master-0 kubenswrapper[28149]: I0313 13:06:04.492383 28149 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-2gc9c"]
Mar 13 13:06:04.564382 master-0 kubenswrapper[28149]: I0313 13:06:04.564309 28149 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-852kd\" (UniqueName: \"kubernetes.io/projected/24d5d93d-500e-4fa4-b540-d4e290c2952e-kube-api-access-852kd\") on node \"master-0\" DevicePath \"\""
Mar 13 13:06:04.665501 master-0 kubenswrapper[28149]: I0313 13:06:04.665430 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9lm4f\" (UniqueName: \"kubernetes.io/projected/55eaac0e-c2fb-4900-8e7c-c1d245c200cf-kube-api-access-9lm4f\") pod \"openstack-operator-index-2gc9c\" (UID: \"55eaac0e-c2fb-4900-8e7c-c1d245c200cf\") " pod="openstack-operators/openstack-operator-index-2gc9c"
Mar 13 13:06:04.668709 master-0 kubenswrapper[28149]: I0313 13:06:04.668665 28149 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-7c4766b9db-6tc2q_27c9b1b2-ce2d-4837-8b91-3ca46ff394a7/console/0.log"
Mar 13 13:06:04.668844 master-0 kubenswrapper[28149]: I0313 13:06:04.668709 28149 generic.go:334] "Generic (PLEG): container finished" podID="27c9b1b2-ce2d-4837-8b91-3ca46ff394a7" containerID="41610067df452029f9607bdf787b606790c6069641790cfd739f45f3a915e0fb" exitCode=2
Mar 13 13:06:04.668844 master-0 kubenswrapper[28149]: I0313 13:06:04.668766 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-7c4766b9db-6tc2q" event={"ID":"27c9b1b2-ce2d-4837-8b91-3ca46ff394a7","Type":"ContainerDied","Data":"41610067df452029f9607bdf787b606790c6069641790cfd739f45f3a915e0fb"}
Mar 13 13:06:04.668844 master-0 kubenswrapper[28149]: I0313 13:06:04.668794 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-7c4766b9db-6tc2q" event={"ID":"27c9b1b2-ce2d-4837-8b91-3ca46ff394a7","Type":"ContainerDied","Data":"ebb20900b7d5bedb388ae88ce024c06b2b57064b9fe3b33af9866c50c199dd21"}
Mar 13 13:06:04.668844 master-0 kubenswrapper[28149]: I0313 13:06:04.668811 28149 scope.go:117] "RemoveContainer" containerID="41610067df452029f9607bdf787b606790c6069641790cfd739f45f3a915e0fb"
Mar 13 13:06:04.669120 master-0 kubenswrapper[28149]: I0313 13:06:04.668907 28149 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-7c4766b9db-6tc2q"
Mar 13 13:06:04.686191 master-0 kubenswrapper[28149]: I0313 13:06:04.678340 28149 generic.go:334] "Generic (PLEG): container finished" podID="24d5d93d-500e-4fa4-b540-d4e290c2952e" containerID="25ce31e4fa68ad4e6ecb5a1fbc8e7912e93b32833eba2c243b75e76ff1acecdf" exitCode=0
Mar 13 13:06:04.686191 master-0 kubenswrapper[28149]: I0313 13:06:04.679804 28149 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-bbk8f"
Mar 13 13:06:04.686191 master-0 kubenswrapper[28149]: I0313 13:06:04.680121 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-bbk8f" event={"ID":"24d5d93d-500e-4fa4-b540-d4e290c2952e","Type":"ContainerDied","Data":"25ce31e4fa68ad4e6ecb5a1fbc8e7912e93b32833eba2c243b75e76ff1acecdf"}
Mar 13 13:06:04.686191 master-0 kubenswrapper[28149]: I0313 13:06:04.680256 28149 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-storage/vg-manager-j7btw"
Mar 13 13:06:04.686191 master-0 kubenswrapper[28149]: I0313 13:06:04.680613 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-bbk8f" event={"ID":"24d5d93d-500e-4fa4-b540-d4e290c2952e","Type":"ContainerDied","Data":"7eef8da0c696f2485aef53ccfd5007e5a37f37c858f340dd161548a7966e95e1"}
Mar 13 13:06:04.686191 master-0 kubenswrapper[28149]: I0313 13:06:04.681232 28149 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-storage/vg-manager-j7btw"
Mar 13 13:06:04.706290 master-0 kubenswrapper[28149]: I0313 13:06:04.705658 28149 scope.go:117] "RemoveContainer" containerID="41610067df452029f9607bdf787b606790c6069641790cfd739f45f3a915e0fb"
Mar 13 13:06:04.708947 master-0 kubenswrapper[28149]: E0313 13:06:04.708861 28149 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"41610067df452029f9607bdf787b606790c6069641790cfd739f45f3a915e0fb\": container with ID starting with 41610067df452029f9607bdf787b606790c6069641790cfd739f45f3a915e0fb not found: ID does not exist" containerID="41610067df452029f9607bdf787b606790c6069641790cfd739f45f3a915e0fb"
Mar 13 13:06:04.708947 master-0 kubenswrapper[28149]: I0313 13:06:04.708925 28149 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"41610067df452029f9607bdf787b606790c6069641790cfd739f45f3a915e0fb"} err="failed to get container status \"41610067df452029f9607bdf787b606790c6069641790cfd739f45f3a915e0fb\": rpc error: code = NotFound desc = could not find container \"41610067df452029f9607bdf787b606790c6069641790cfd739f45f3a915e0fb\": container with ID starting with 41610067df452029f9607bdf787b606790c6069641790cfd739f45f3a915e0fb not found: ID does not exist"
Mar 13 13:06:04.709186 master-0 kubenswrapper[28149]: I0313 13:06:04.708964 28149 scope.go:117] "RemoveContainer" containerID="25ce31e4fa68ad4e6ecb5a1fbc8e7912e93b32833eba2c243b75e76ff1acecdf"
Mar 13 13:06:04.743639 master-0 kubenswrapper[28149]: I0313 13:06:04.743500 28149 scope.go:117] "RemoveContainer" containerID="25ce31e4fa68ad4e6ecb5a1fbc8e7912e93b32833eba2c243b75e76ff1acecdf"
Mar 13 13:06:04.755014 master-0 kubenswrapper[28149]: E0313 13:06:04.744003 28149 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"25ce31e4fa68ad4e6ecb5a1fbc8e7912e93b32833eba2c243b75e76ff1acecdf\": container with ID starting with 25ce31e4fa68ad4e6ecb5a1fbc8e7912e93b32833eba2c243b75e76ff1acecdf not found: ID does not exist" containerID="25ce31e4fa68ad4e6ecb5a1fbc8e7912e93b32833eba2c243b75e76ff1acecdf"
Mar 13 13:06:04.755014 master-0 kubenswrapper[28149]: I0313 13:06:04.746615 28149 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"25ce31e4fa68ad4e6ecb5a1fbc8e7912e93b32833eba2c243b75e76ff1acecdf"} err="failed to get container status \"25ce31e4fa68ad4e6ecb5a1fbc8e7912e93b32833eba2c243b75e76ff1acecdf\": rpc error: code = NotFound desc = could not find container \"25ce31e4fa68ad4e6ecb5a1fbc8e7912e93b32833eba2c243b75e76ff1acecdf\": container with ID starting with 25ce31e4fa68ad4e6ecb5a1fbc8e7912e93b32833eba2c243b75e76ff1acecdf not found: ID does not exist"
Mar 13 13:06:04.761243 master-0 kubenswrapper[28149]: I0313 13:06:04.761187 28149 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-index-bbk8f"]
Mar 13 13:06:04.774169 master-0 kubenswrapper[28149]: I0313 13:06:04.770911 28149 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack-operators/openstack-operator-index-bbk8f"]
Mar 13 13:06:04.776166 master-0 kubenswrapper[28149]: I0313 13:06:04.775798 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9lm4f\" (UniqueName: \"kubernetes.io/projected/55eaac0e-c2fb-4900-8e7c-c1d245c200cf-kube-api-access-9lm4f\") pod \"openstack-operator-index-2gc9c\" (UID: \"55eaac0e-c2fb-4900-8e7c-c1d245c200cf\") " pod="openstack-operators/openstack-operator-index-2gc9c"
Mar 13 13:06:04.803176 master-0 kubenswrapper[28149]: I0313 13:06:04.801764 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9lm4f\" (UniqueName: \"kubernetes.io/projected/55eaac0e-c2fb-4900-8e7c-c1d245c200cf-kube-api-access-9lm4f\") pod \"openstack-operator-index-2gc9c\" (UID: \"55eaac0e-c2fb-4900-8e7c-c1d245c200cf\") " pod="openstack-operators/openstack-operator-index-2gc9c"
Mar 13 13:06:04.803176 master-0 kubenswrapper[28149]: I0313 13:06:04.802253 28149 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-2gc9c"
Mar 13 13:06:04.825278 master-0 kubenswrapper[28149]: I0313 13:06:04.817252 28149 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-7c4766b9db-6tc2q"]
Mar 13 13:06:04.861748 master-0 kubenswrapper[28149]: I0313 13:06:04.861645 28149 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-7c4766b9db-6tc2q"]
Mar 13 13:06:05.566806 master-0 kubenswrapper[28149]: I0313 13:06:05.507767 28149 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-2gc9c"]
Mar 13 13:06:05.570807 master-0 kubenswrapper[28149]: W0313 13:06:05.570747 28149 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod55eaac0e_c2fb_4900_8e7c_c1d245c200cf.slice/crio-1298a439a80abc8259902c6270ef9cab2868633d34e943848f1f9db0a2498baa WatchSource:0}: Error finding container 1298a439a80abc8259902c6270ef9cab2868633d34e943848f1f9db0a2498baa: Status 404 returned error can't find the container with id 1298a439a80abc8259902c6270ef9cab2868633d34e943848f1f9db0a2498baa
Mar 13 13:06:05.691686 master-0 kubenswrapper[28149]: I0313 13:06:05.691626 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-2gc9c" event={"ID":"55eaac0e-c2fb-4900-8e7c-c1d245c200cf","Type":"ContainerStarted","Data":"1298a439a80abc8259902c6270ef9cab2868633d34e943848f1f9db0a2498baa"}
Mar 13 13:06:06.710923 master-0 kubenswrapper[28149]: I0313 13:06:06.710830 28149 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="24d5d93d-500e-4fa4-b540-d4e290c2952e" path="/var/lib/kubelet/pods/24d5d93d-500e-4fa4-b540-d4e290c2952e/volumes"
Mar 13 13:06:06.712295 master-0 kubenswrapper[28149]: I0313 13:06:06.712220 28149 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="27c9b1b2-ce2d-4837-8b91-3ca46ff394a7" path="/var/lib/kubelet/pods/27c9b1b2-ce2d-4837-8b91-3ca46ff394a7/volumes"
Mar 13 13:06:06.715170 master-0 kubenswrapper[28149]: I0313 13:06:06.714435 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-2gc9c" event={"ID":"55eaac0e-c2fb-4900-8e7c-c1d245c200cf","Type":"ContainerStarted","Data":"ca31b6e6babbd0a3921fa4d2788e04280499b561c0d0dd45f7041e9eae16d6f9"}
Mar 13 13:06:06.747214 master-0 kubenswrapper[28149]: I0313 13:06:06.747105 28149 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-2gc9c" podStartSLOduration=2.266164907 podStartE2EDuration="2.747081004s" podCreationTimestamp="2026-03-13 13:06:04 +0000 UTC" firstStartedPulling="2026-03-13 13:06:05.576738734 +0000 UTC m=+739.230203893" lastFinishedPulling="2026-03-13 13:06:06.057654831 +0000 UTC m=+739.711119990" observedRunningTime="2026-03-13 13:06:06.736045859 +0000 UTC m=+740.389511018" watchObservedRunningTime="2026-03-13 13:06:06.747081004 +0000 UTC m=+740.400546163"
Mar 13 13:06:14.803029 master-0 kubenswrapper[28149]: I0313 13:06:14.802968 28149 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-index-2gc9c"
Mar 13 13:06:14.803988 master-0 kubenswrapper[28149]: I0313 13:06:14.803967 28149 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack-operators/openstack-operator-index-2gc9c"
Mar 13 13:06:14.843530 master-0 kubenswrapper[28149]: I0313 13:06:14.843483 28149 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack-operators/openstack-operator-index-2gc9c"
Mar 13 13:06:15.875070 master-0 kubenswrapper[28149]: I0313 13:06:15.875012 28149 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-index-2gc9c"
Mar 13 13:06:16.523794 master-0 kubenswrapper[28149]: I0313 13:06:16.523728 28149 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/f9f18d30af743f52483ac2b056c423e2f043de5970b22bfcfee7015477qwzcb"]
Mar 13 13:06:16.525653 master-0 kubenswrapper[28149]: I0313 13:06:16.525622 28149 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/f9f18d30af743f52483ac2b056c423e2f043de5970b22bfcfee7015477qwzcb"
Mar 13 13:06:16.542787 master-0 kubenswrapper[28149]: I0313 13:06:16.541336 28149 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/f9f18d30af743f52483ac2b056c423e2f043de5970b22bfcfee7015477qwzcb"]
Mar 13 13:06:16.666066 master-0 kubenswrapper[28149]: I0313 13:06:16.665952 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/1b474bd7-8360-4da9-a641-f8f936701512-util\") pod \"f9f18d30af743f52483ac2b056c423e2f043de5970b22bfcfee7015477qwzcb\" (UID: \"1b474bd7-8360-4da9-a641-f8f936701512\") " pod="openstack-operators/f9f18d30af743f52483ac2b056c423e2f043de5970b22bfcfee7015477qwzcb"
Mar 13 13:06:16.666388 master-0 kubenswrapper[28149]: I0313 13:06:16.666333 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/1b474bd7-8360-4da9-a641-f8f936701512-bundle\") pod \"f9f18d30af743f52483ac2b056c423e2f043de5970b22bfcfee7015477qwzcb\" (UID: \"1b474bd7-8360-4da9-a641-f8f936701512\") " pod="openstack-operators/f9f18d30af743f52483ac2b056c423e2f043de5970b22bfcfee7015477qwzcb"
Mar 13 13:06:16.666513 master-0 kubenswrapper[28149]: I0313 13:06:16.666483 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gwkxr\" (UniqueName: \"kubernetes.io/projected/1b474bd7-8360-4da9-a641-f8f936701512-kube-api-access-gwkxr\") pod \"f9f18d30af743f52483ac2b056c423e2f043de5970b22bfcfee7015477qwzcb\" (UID: \"1b474bd7-8360-4da9-a641-f8f936701512\") " pod="openstack-operators/f9f18d30af743f52483ac2b056c423e2f043de5970b22bfcfee7015477qwzcb"
Mar 13 13:06:16.767768 master-0 kubenswrapper[28149]: I0313 13:06:16.767707 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/1b474bd7-8360-4da9-a641-f8f936701512-bundle\") pod \"f9f18d30af743f52483ac2b056c423e2f043de5970b22bfcfee7015477qwzcb\" (UID: \"1b474bd7-8360-4da9-a641-f8f936701512\") " pod="openstack-operators/f9f18d30af743f52483ac2b056c423e2f043de5970b22bfcfee7015477qwzcb"
Mar 13 13:06:16.768002 master-0 kubenswrapper[28149]: I0313 13:06:16.767785 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gwkxr\" (UniqueName: \"kubernetes.io/projected/1b474bd7-8360-4da9-a641-f8f936701512-kube-api-access-gwkxr\") pod \"f9f18d30af743f52483ac2b056c423e2f043de5970b22bfcfee7015477qwzcb\" (UID: \"1b474bd7-8360-4da9-a641-f8f936701512\") " pod="openstack-operators/f9f18d30af743f52483ac2b056c423e2f043de5970b22bfcfee7015477qwzcb"
Mar 13 13:06:16.768002 master-0 kubenswrapper[28149]: I0313 13:06:16.767901 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/1b474bd7-8360-4da9-a641-f8f936701512-util\") pod \"f9f18d30af743f52483ac2b056c423e2f043de5970b22bfcfee7015477qwzcb\" (UID: \"1b474bd7-8360-4da9-a641-f8f936701512\") " pod="openstack-operators/f9f18d30af743f52483ac2b056c423e2f043de5970b22bfcfee7015477qwzcb"
Mar 13 13:06:16.768290 master-0 kubenswrapper[28149]: I0313 13:06:16.768245 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/1b474bd7-8360-4da9-a641-f8f936701512-bundle\") pod \"f9f18d30af743f52483ac2b056c423e2f043de5970b22bfcfee7015477qwzcb\" (UID: \"1b474bd7-8360-4da9-a641-f8f936701512\") " pod="openstack-operators/f9f18d30af743f52483ac2b056c423e2f043de5970b22bfcfee7015477qwzcb"
Mar 13 13:06:16.768455
master-0 kubenswrapper[28149]: I0313 13:06:16.768407 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/1b474bd7-8360-4da9-a641-f8f936701512-util\") pod \"f9f18d30af743f52483ac2b056c423e2f043de5970b22bfcfee7015477qwzcb\" (UID: \"1b474bd7-8360-4da9-a641-f8f936701512\") " pod="openstack-operators/f9f18d30af743f52483ac2b056c423e2f043de5970b22bfcfee7015477qwzcb" Mar 13 13:06:16.783719 master-0 kubenswrapper[28149]: I0313 13:06:16.783624 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gwkxr\" (UniqueName: \"kubernetes.io/projected/1b474bd7-8360-4da9-a641-f8f936701512-kube-api-access-gwkxr\") pod \"f9f18d30af743f52483ac2b056c423e2f043de5970b22bfcfee7015477qwzcb\" (UID: \"1b474bd7-8360-4da9-a641-f8f936701512\") " pod="openstack-operators/f9f18d30af743f52483ac2b056c423e2f043de5970b22bfcfee7015477qwzcb" Mar 13 13:06:16.864433 master-0 kubenswrapper[28149]: I0313 13:06:16.864224 28149 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/f9f18d30af743f52483ac2b056c423e2f043de5970b22bfcfee7015477qwzcb" Mar 13 13:06:17.301853 master-0 kubenswrapper[28149]: I0313 13:06:17.301750 28149 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/f9f18d30af743f52483ac2b056c423e2f043de5970b22bfcfee7015477qwzcb"] Mar 13 13:06:17.897393 master-0 kubenswrapper[28149]: I0313 13:06:17.894688 28149 generic.go:334] "Generic (PLEG): container finished" podID="1b474bd7-8360-4da9-a641-f8f936701512" containerID="d6a12e130bfe530f65cc81c3a4c1e42b38cce100ec013572edcdd20625cf6ed3" exitCode=0 Mar 13 13:06:17.897393 master-0 kubenswrapper[28149]: I0313 13:06:17.894784 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/f9f18d30af743f52483ac2b056c423e2f043de5970b22bfcfee7015477qwzcb" event={"ID":"1b474bd7-8360-4da9-a641-f8f936701512","Type":"ContainerDied","Data":"d6a12e130bfe530f65cc81c3a4c1e42b38cce100ec013572edcdd20625cf6ed3"} Mar 13 13:06:17.897393 master-0 kubenswrapper[28149]: I0313 13:06:17.894822 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/f9f18d30af743f52483ac2b056c423e2f043de5970b22bfcfee7015477qwzcb" event={"ID":"1b474bd7-8360-4da9-a641-f8f936701512","Type":"ContainerStarted","Data":"e85fed72f107ef750465be5b65370c1821f54cfee120a63a151ffc495b362a4a"} Mar 13 13:06:19.914077 master-0 kubenswrapper[28149]: I0313 13:06:19.913999 28149 generic.go:334] "Generic (PLEG): container finished" podID="1b474bd7-8360-4da9-a641-f8f936701512" containerID="f1b30308ef10b2703871c31ff12a91faa6a7d5930e82914c0208fa8b731f7c4f" exitCode=0 Mar 13 13:06:19.914077 master-0 kubenswrapper[28149]: I0313 13:06:19.914071 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/f9f18d30af743f52483ac2b056c423e2f043de5970b22bfcfee7015477qwzcb" event={"ID":"1b474bd7-8360-4da9-a641-f8f936701512","Type":"ContainerDied","Data":"f1b30308ef10b2703871c31ff12a91faa6a7d5930e82914c0208fa8b731f7c4f"} Mar 
13 13:06:20.927680 master-0 kubenswrapper[28149]: I0313 13:06:20.927599 28149 generic.go:334] "Generic (PLEG): container finished" podID="1b474bd7-8360-4da9-a641-f8f936701512" containerID="70c1067bef478cd0a59532c7ef42db4d6873bc2aa14120683f1e8b5757ddcf47" exitCode=0 Mar 13 13:06:20.927680 master-0 kubenswrapper[28149]: I0313 13:06:20.927657 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/f9f18d30af743f52483ac2b056c423e2f043de5970b22bfcfee7015477qwzcb" event={"ID":"1b474bd7-8360-4da9-a641-f8f936701512","Type":"ContainerDied","Data":"70c1067bef478cd0a59532c7ef42db4d6873bc2aa14120683f1e8b5757ddcf47"} Mar 13 13:06:22.297286 master-0 kubenswrapper[28149]: I0313 13:06:22.297232 28149 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/f9f18d30af743f52483ac2b056c423e2f043de5970b22bfcfee7015477qwzcb" Mar 13 13:06:22.474981 master-0 kubenswrapper[28149]: I0313 13:06:22.474910 28149 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/1b474bd7-8360-4da9-a641-f8f936701512-bundle\") pod \"1b474bd7-8360-4da9-a641-f8f936701512\" (UID: \"1b474bd7-8360-4da9-a641-f8f936701512\") " Mar 13 13:06:22.475313 master-0 kubenswrapper[28149]: I0313 13:06:22.475051 28149 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/1b474bd7-8360-4da9-a641-f8f936701512-util\") pod \"1b474bd7-8360-4da9-a641-f8f936701512\" (UID: \"1b474bd7-8360-4da9-a641-f8f936701512\") " Mar 13 13:06:22.475313 master-0 kubenswrapper[28149]: I0313 13:06:22.475082 28149 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gwkxr\" (UniqueName: \"kubernetes.io/projected/1b474bd7-8360-4da9-a641-f8f936701512-kube-api-access-gwkxr\") pod \"1b474bd7-8360-4da9-a641-f8f936701512\" (UID: \"1b474bd7-8360-4da9-a641-f8f936701512\") " Mar 13 13:06:22.476562 master-0 
kubenswrapper[28149]: I0313 13:06:22.476453 28149 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1b474bd7-8360-4da9-a641-f8f936701512-bundle" (OuterVolumeSpecName: "bundle") pod "1b474bd7-8360-4da9-a641-f8f936701512" (UID: "1b474bd7-8360-4da9-a641-f8f936701512"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 13 13:06:22.479116 master-0 kubenswrapper[28149]: I0313 13:06:22.479023 28149 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1b474bd7-8360-4da9-a641-f8f936701512-kube-api-access-gwkxr" (OuterVolumeSpecName: "kube-api-access-gwkxr") pod "1b474bd7-8360-4da9-a641-f8f936701512" (UID: "1b474bd7-8360-4da9-a641-f8f936701512"). InnerVolumeSpecName "kube-api-access-gwkxr". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 13:06:22.486840 master-0 kubenswrapper[28149]: I0313 13:06:22.486682 28149 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1b474bd7-8360-4da9-a641-f8f936701512-util" (OuterVolumeSpecName: "util") pod "1b474bd7-8360-4da9-a641-f8f936701512" (UID: "1b474bd7-8360-4da9-a641-f8f936701512"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 13 13:06:22.577445 master-0 kubenswrapper[28149]: I0313 13:06:22.577376 28149 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/1b474bd7-8360-4da9-a641-f8f936701512-bundle\") on node \"master-0\" DevicePath \"\"" Mar 13 13:06:22.577445 master-0 kubenswrapper[28149]: I0313 13:06:22.577428 28149 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/1b474bd7-8360-4da9-a641-f8f936701512-util\") on node \"master-0\" DevicePath \"\"" Mar 13 13:06:22.577445 master-0 kubenswrapper[28149]: I0313 13:06:22.577444 28149 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gwkxr\" (UniqueName: \"kubernetes.io/projected/1b474bd7-8360-4da9-a641-f8f936701512-kube-api-access-gwkxr\") on node \"master-0\" DevicePath \"\"" Mar 13 13:06:22.950587 master-0 kubenswrapper[28149]: I0313 13:06:22.950523 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/f9f18d30af743f52483ac2b056c423e2f043de5970b22bfcfee7015477qwzcb" event={"ID":"1b474bd7-8360-4da9-a641-f8f936701512","Type":"ContainerDied","Data":"e85fed72f107ef750465be5b65370c1821f54cfee120a63a151ffc495b362a4a"} Mar 13 13:06:22.950587 master-0 kubenswrapper[28149]: I0313 13:06:22.950575 28149 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e85fed72f107ef750465be5b65370c1821f54cfee120a63a151ffc495b362a4a" Mar 13 13:06:22.950974 master-0 kubenswrapper[28149]: I0313 13:06:22.950635 28149 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/f9f18d30af743f52483ac2b056c423e2f043de5970b22bfcfee7015477qwzcb" Mar 13 13:06:30.357160 master-0 kubenswrapper[28149]: I0313 13:06:30.354798 28149 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-init-65b9994cf8-s5vmc"] Mar 13 13:06:30.357160 master-0 kubenswrapper[28149]: E0313 13:06:30.355202 28149 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1b474bd7-8360-4da9-a641-f8f936701512" containerName="pull" Mar 13 13:06:30.357160 master-0 kubenswrapper[28149]: I0313 13:06:30.355215 28149 state_mem.go:107] "Deleted CPUSet assignment" podUID="1b474bd7-8360-4da9-a641-f8f936701512" containerName="pull" Mar 13 13:06:30.357160 master-0 kubenswrapper[28149]: E0313 13:06:30.355248 28149 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1b474bd7-8360-4da9-a641-f8f936701512" containerName="extract" Mar 13 13:06:30.357160 master-0 kubenswrapper[28149]: I0313 13:06:30.355255 28149 state_mem.go:107] "Deleted CPUSet assignment" podUID="1b474bd7-8360-4da9-a641-f8f936701512" containerName="extract" Mar 13 13:06:30.357160 master-0 kubenswrapper[28149]: E0313 13:06:30.355274 28149 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1b474bd7-8360-4da9-a641-f8f936701512" containerName="util" Mar 13 13:06:30.357160 master-0 kubenswrapper[28149]: I0313 13:06:30.355281 28149 state_mem.go:107] "Deleted CPUSet assignment" podUID="1b474bd7-8360-4da9-a641-f8f936701512" containerName="util" Mar 13 13:06:30.357160 master-0 kubenswrapper[28149]: I0313 13:06:30.355485 28149 memory_manager.go:354] "RemoveStaleState removing state" podUID="1b474bd7-8360-4da9-a641-f8f936701512" containerName="extract" Mar 13 13:06:30.357160 master-0 kubenswrapper[28149]: I0313 13:06:30.356068 28149 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-init-65b9994cf8-s5vmc" Mar 13 13:06:30.391129 master-0 kubenswrapper[28149]: I0313 13:06:30.391076 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ctgb9\" (UniqueName: \"kubernetes.io/projected/5600c014-c198-4a66-aad1-1bc9cfa71bc0-kube-api-access-ctgb9\") pod \"openstack-operator-controller-init-65b9994cf8-s5vmc\" (UID: \"5600c014-c198-4a66-aad1-1bc9cfa71bc0\") " pod="openstack-operators/openstack-operator-controller-init-65b9994cf8-s5vmc" Mar 13 13:06:30.396419 master-0 kubenswrapper[28149]: I0313 13:06:30.396364 28149 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-init-65b9994cf8-s5vmc"] Mar 13 13:06:30.493021 master-0 kubenswrapper[28149]: I0313 13:06:30.492974 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ctgb9\" (UniqueName: \"kubernetes.io/projected/5600c014-c198-4a66-aad1-1bc9cfa71bc0-kube-api-access-ctgb9\") pod \"openstack-operator-controller-init-65b9994cf8-s5vmc\" (UID: \"5600c014-c198-4a66-aad1-1bc9cfa71bc0\") " pod="openstack-operators/openstack-operator-controller-init-65b9994cf8-s5vmc" Mar 13 13:06:30.536878 master-0 kubenswrapper[28149]: I0313 13:06:30.536311 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ctgb9\" (UniqueName: \"kubernetes.io/projected/5600c014-c198-4a66-aad1-1bc9cfa71bc0-kube-api-access-ctgb9\") pod \"openstack-operator-controller-init-65b9994cf8-s5vmc\" (UID: \"5600c014-c198-4a66-aad1-1bc9cfa71bc0\") " pod="openstack-operators/openstack-operator-controller-init-65b9994cf8-s5vmc" Mar 13 13:06:30.764846 master-0 kubenswrapper[28149]: I0313 13:06:30.764718 28149 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-init-65b9994cf8-s5vmc" Mar 13 13:06:31.339370 master-0 kubenswrapper[28149]: I0313 13:06:31.337001 28149 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-init-65b9994cf8-s5vmc"] Mar 13 13:06:31.351156 master-0 kubenswrapper[28149]: W0313 13:06:31.348266 28149 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5600c014_c198_4a66_aad1_1bc9cfa71bc0.slice/crio-b933114ddfe304b7b8e9a9b70ecac410a51c4fc73fdf9354404e97b2ba343df8 WatchSource:0}: Error finding container b933114ddfe304b7b8e9a9b70ecac410a51c4fc73fdf9354404e97b2ba343df8: Status 404 returned error can't find the container with id b933114ddfe304b7b8e9a9b70ecac410a51c4fc73fdf9354404e97b2ba343df8 Mar 13 13:06:32.305738 master-0 kubenswrapper[28149]: I0313 13:06:32.305679 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-init-65b9994cf8-s5vmc" event={"ID":"5600c014-c198-4a66-aad1-1bc9cfa71bc0","Type":"ContainerStarted","Data":"b933114ddfe304b7b8e9a9b70ecac410a51c4fc73fdf9354404e97b2ba343df8"} Mar 13 13:06:37.366511 master-0 kubenswrapper[28149]: I0313 13:06:37.366457 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-init-65b9994cf8-s5vmc" event={"ID":"5600c014-c198-4a66-aad1-1bc9cfa71bc0","Type":"ContainerStarted","Data":"bd65549f47378ce9bef94f5205504b3650ff65f9d28b7863c023a39b883d13f1"} Mar 13 13:06:37.367159 master-0 kubenswrapper[28149]: I0313 13:06:37.366642 28149 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-init-65b9994cf8-s5vmc" Mar 13 13:06:50.766928 master-0 kubenswrapper[28149]: I0313 13:06:50.766870 28149 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openstack-operators/openstack-operator-controller-init-65b9994cf8-s5vmc" Mar 13 13:06:50.814630 master-0 kubenswrapper[28149]: I0313 13:06:50.814538 28149 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-init-65b9994cf8-s5vmc" podStartSLOduration=15.798880921 podStartE2EDuration="20.814509956s" podCreationTimestamp="2026-03-13 13:06:30 +0000 UTC" firstStartedPulling="2026-03-13 13:06:31.34987028 +0000 UTC m=+765.003335439" lastFinishedPulling="2026-03-13 13:06:36.365499315 +0000 UTC m=+770.018964474" observedRunningTime="2026-03-13 13:06:37.415467002 +0000 UTC m=+771.068932171" watchObservedRunningTime="2026-03-13 13:06:50.814509956 +0000 UTC m=+784.467975115" Mar 13 13:07:11.662684 master-0 kubenswrapper[28149]: I0313 13:07:11.659802 28149 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/barbican-operator-controller-manager-677bd678f7-bql4k"] Mar 13 13:07:11.662684 master-0 kubenswrapper[28149]: I0313 13:07:11.661060 28149 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-677bd678f7-bql4k" Mar 13 13:07:11.703580 master-0 kubenswrapper[28149]: I0313 13:07:11.703522 28149 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-677bd678f7-bql4k"] Mar 13 13:07:11.725997 master-0 kubenswrapper[28149]: I0313 13:07:11.725938 28149 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/cinder-operator-controller-manager-984cd4dcf-b6gkp"] Mar 13 13:07:11.727298 master-0 kubenswrapper[28149]: I0313 13:07:11.727275 28149 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-984cd4dcf-b6gkp" Mar 13 13:07:11.741966 master-0 kubenswrapper[28149]: I0313 13:07:11.741924 28149 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/designate-operator-controller-manager-66d56f6ff4-49xpx"] Mar 13 13:07:11.743212 master-0 kubenswrapper[28149]: I0313 13:07:11.743183 28149 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-66d56f6ff4-49xpx" Mar 13 13:07:11.795973 master-0 kubenswrapper[28149]: I0313 13:07:11.792342 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bfcgp\" (UniqueName: \"kubernetes.io/projected/7cb35574-9b29-4dd8-8ec5-a37816092d10-kube-api-access-bfcgp\") pod \"barbican-operator-controller-manager-677bd678f7-bql4k\" (UID: \"7cb35574-9b29-4dd8-8ec5-a37816092d10\") " pod="openstack-operators/barbican-operator-controller-manager-677bd678f7-bql4k" Mar 13 13:07:11.798044 master-0 kubenswrapper[28149]: I0313 13:07:11.796254 28149 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-984cd4dcf-b6gkp"] Mar 13 13:07:11.836016 master-0 kubenswrapper[28149]: I0313 13:07:11.835956 28149 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-66d56f6ff4-49xpx"] Mar 13 13:07:11.847176 master-0 kubenswrapper[28149]: I0313 13:07:11.843455 28149 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/glance-operator-controller-manager-5964f64c48-pdfdc"] Mar 13 13:07:11.847176 master-0 kubenswrapper[28149]: I0313 13:07:11.845052 28149 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-5964f64c48-pdfdc" Mar 13 13:07:11.896247 master-0 kubenswrapper[28149]: I0313 13:07:11.893987 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vspb6\" (UniqueName: \"kubernetes.io/projected/24d18dc3-f6a0-4b38-b027-1f92534d6201-kube-api-access-vspb6\") pod \"cinder-operator-controller-manager-984cd4dcf-b6gkp\" (UID: \"24d18dc3-f6a0-4b38-b027-1f92534d6201\") " pod="openstack-operators/cinder-operator-controller-manager-984cd4dcf-b6gkp" Mar 13 13:07:11.896247 master-0 kubenswrapper[28149]: I0313 13:07:11.894048 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q647k\" (UniqueName: \"kubernetes.io/projected/94a2c09f-a0f4-4ab0-8bae-116dc938de70-kube-api-access-q647k\") pod \"designate-operator-controller-manager-66d56f6ff4-49xpx\" (UID: \"94a2c09f-a0f4-4ab0-8bae-116dc938de70\") " pod="openstack-operators/designate-operator-controller-manager-66d56f6ff4-49xpx" Mar 13 13:07:11.896247 master-0 kubenswrapper[28149]: I0313 13:07:11.894076 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bfcgp\" (UniqueName: \"kubernetes.io/projected/7cb35574-9b29-4dd8-8ec5-a37816092d10-kube-api-access-bfcgp\") pod \"barbican-operator-controller-manager-677bd678f7-bql4k\" (UID: \"7cb35574-9b29-4dd8-8ec5-a37816092d10\") " pod="openstack-operators/barbican-operator-controller-manager-677bd678f7-bql4k" Mar 13 13:07:11.896247 master-0 kubenswrapper[28149]: I0313 13:07:11.864716 28149 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-5964f64c48-pdfdc"] Mar 13 13:07:11.896247 master-0 kubenswrapper[28149]: I0313 13:07:11.894997 28149 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/heat-operator-controller-manager-77b6666d85-pwqm5"] Mar 13 
13:07:11.916812 master-0 kubenswrapper[28149]: I0313 13:07:11.916266 28149 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-77b6666d85-pwqm5" Mar 13 13:07:11.978278 master-0 kubenswrapper[28149]: I0313 13:07:11.972309 28149 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-77b6666d85-pwqm5"] Mar 13 13:07:11.978278 master-0 kubenswrapper[28149]: I0313 13:07:11.974867 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bfcgp\" (UniqueName: \"kubernetes.io/projected/7cb35574-9b29-4dd8-8ec5-a37816092d10-kube-api-access-bfcgp\") pod \"barbican-operator-controller-manager-677bd678f7-bql4k\" (UID: \"7cb35574-9b29-4dd8-8ec5-a37816092d10\") " pod="openstack-operators/barbican-operator-controller-manager-677bd678f7-bql4k" Mar 13 13:07:12.013920 master-0 kubenswrapper[28149]: I0313 13:07:11.997516 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vspb6\" (UniqueName: \"kubernetes.io/projected/24d18dc3-f6a0-4b38-b027-1f92534d6201-kube-api-access-vspb6\") pod \"cinder-operator-controller-manager-984cd4dcf-b6gkp\" (UID: \"24d18dc3-f6a0-4b38-b027-1f92534d6201\") " pod="openstack-operators/cinder-operator-controller-manager-984cd4dcf-b6gkp" Mar 13 13:07:12.013920 master-0 kubenswrapper[28149]: I0313 13:07:11.997618 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q647k\" (UniqueName: \"kubernetes.io/projected/94a2c09f-a0f4-4ab0-8bae-116dc938de70-kube-api-access-q647k\") pod \"designate-operator-controller-manager-66d56f6ff4-49xpx\" (UID: \"94a2c09f-a0f4-4ab0-8bae-116dc938de70\") " pod="openstack-operators/designate-operator-controller-manager-66d56f6ff4-49xpx" Mar 13 13:07:12.013920 master-0 kubenswrapper[28149]: I0313 13:07:11.997979 28149 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gffls\" (UniqueName: \"kubernetes.io/projected/e853709f-9f1c-4e4b-b43d-3d6f8685b563-kube-api-access-gffls\") pod \"glance-operator-controller-manager-5964f64c48-pdfdc\" (UID: \"e853709f-9f1c-4e4b-b43d-3d6f8685b563\") " pod="openstack-operators/glance-operator-controller-manager-5964f64c48-pdfdc" Mar 13 13:07:12.013920 master-0 kubenswrapper[28149]: I0313 13:07:12.012888 28149 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-677bd678f7-bql4k" Mar 13 13:07:12.028181 master-0 kubenswrapper[28149]: I0313 13:07:12.021137 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vspb6\" (UniqueName: \"kubernetes.io/projected/24d18dc3-f6a0-4b38-b027-1f92534d6201-kube-api-access-vspb6\") pod \"cinder-operator-controller-manager-984cd4dcf-b6gkp\" (UID: \"24d18dc3-f6a0-4b38-b027-1f92534d6201\") " pod="openstack-operators/cinder-operator-controller-manager-984cd4dcf-b6gkp" Mar 13 13:07:12.072076 master-0 kubenswrapper[28149]: I0313 13:07:12.068573 28149 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/horizon-operator-controller-manager-6d9d6b584d-rm4bg"] Mar 13 13:07:12.072076 master-0 kubenswrapper[28149]: I0313 13:07:12.069861 28149 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-6d9d6b584d-rm4bg" Mar 13 13:07:12.072076 master-0 kubenswrapper[28149]: I0313 13:07:12.071699 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q647k\" (UniqueName: \"kubernetes.io/projected/94a2c09f-a0f4-4ab0-8bae-116dc938de70-kube-api-access-q647k\") pod \"designate-operator-controller-manager-66d56f6ff4-49xpx\" (UID: \"94a2c09f-a0f4-4ab0-8bae-116dc938de70\") " pod="openstack-operators/designate-operator-controller-manager-66d56f6ff4-49xpx" Mar 13 13:07:12.091937 master-0 kubenswrapper[28149]: I0313 13:07:12.088326 28149 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-6d9d6b584d-rm4bg"] Mar 13 13:07:12.091937 master-0 kubenswrapper[28149]: I0313 13:07:12.091250 28149 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-984cd4dcf-b6gkp" Mar 13 13:07:12.101770 master-0 kubenswrapper[28149]: I0313 13:07:12.101702 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gffls\" (UniqueName: \"kubernetes.io/projected/e853709f-9f1c-4e4b-b43d-3d6f8685b563-kube-api-access-gffls\") pod \"glance-operator-controller-manager-5964f64c48-pdfdc\" (UID: \"e853709f-9f1c-4e4b-b43d-3d6f8685b563\") " pod="openstack-operators/glance-operator-controller-manager-5964f64c48-pdfdc" Mar 13 13:07:12.102304 master-0 kubenswrapper[28149]: I0313 13:07:12.102061 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cr6h2\" (UniqueName: \"kubernetes.io/projected/a2e8c53d-29c4-4a63-b701-8103253c197a-kube-api-access-cr6h2\") pod \"heat-operator-controller-manager-77b6666d85-pwqm5\" (UID: \"a2e8c53d-29c4-4a63-b701-8103253c197a\") " pod="openstack-operators/heat-operator-controller-manager-77b6666d85-pwqm5" Mar 13 
13:07:12.113549 master-0 kubenswrapper[28149]: I0313 13:07:12.113077 28149 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-66d56f6ff4-49xpx" Mar 13 13:07:12.171257 master-0 kubenswrapper[28149]: I0313 13:07:12.164787 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gffls\" (UniqueName: \"kubernetes.io/projected/e853709f-9f1c-4e4b-b43d-3d6f8685b563-kube-api-access-gffls\") pod \"glance-operator-controller-manager-5964f64c48-pdfdc\" (UID: \"e853709f-9f1c-4e4b-b43d-3d6f8685b563\") " pod="openstack-operators/glance-operator-controller-manager-5964f64c48-pdfdc" Mar 13 13:07:12.209980 master-0 kubenswrapper[28149]: I0313 13:07:12.209901 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cr6h2\" (UniqueName: \"kubernetes.io/projected/a2e8c53d-29c4-4a63-b701-8103253c197a-kube-api-access-cr6h2\") pod \"heat-operator-controller-manager-77b6666d85-pwqm5\" (UID: \"a2e8c53d-29c4-4a63-b701-8103253c197a\") " pod="openstack-operators/heat-operator-controller-manager-77b6666d85-pwqm5" Mar 13 13:07:12.210322 master-0 kubenswrapper[28149]: I0313 13:07:12.210171 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9wckh\" (UniqueName: \"kubernetes.io/projected/70ec85c2-6c2a-44c1-b172-c851765912b5-kube-api-access-9wckh\") pod \"horizon-operator-controller-manager-6d9d6b584d-rm4bg\" (UID: \"70ec85c2-6c2a-44c1-b172-c851765912b5\") " pod="openstack-operators/horizon-operator-controller-manager-6d9d6b584d-rm4bg" Mar 13 13:07:12.215434 master-0 kubenswrapper[28149]: I0313 13:07:12.215383 28149 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/infra-operator-controller-manager-b8c8d7cc8-q7d6n"] Mar 13 13:07:12.217056 master-0 kubenswrapper[28149]: I0313 13:07:12.217014 28149 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-b8c8d7cc8-q7d6n" Mar 13 13:07:12.241012 master-0 kubenswrapper[28149]: I0313 13:07:12.240964 28149 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-webhook-server-cert" Mar 13 13:07:12.241271 master-0 kubenswrapper[28149]: I0313 13:07:12.241182 28149 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-5964f64c48-pdfdc" Mar 13 13:07:12.246297 master-0 kubenswrapper[28149]: I0313 13:07:12.246230 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cr6h2\" (UniqueName: \"kubernetes.io/projected/a2e8c53d-29c4-4a63-b701-8103253c197a-kube-api-access-cr6h2\") pod \"heat-operator-controller-manager-77b6666d85-pwqm5\" (UID: \"a2e8c53d-29c4-4a63-b701-8103253c197a\") " pod="openstack-operators/heat-operator-controller-manager-77b6666d85-pwqm5" Mar 13 13:07:12.248185 master-0 kubenswrapper[28149]: I0313 13:07:12.248149 28149 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ironic-operator-controller-manager-6bbb499bbc-4qdjn"] Mar 13 13:07:12.250608 master-0 kubenswrapper[28149]: I0313 13:07:12.249401 28149 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-6bbb499bbc-4qdjn"
Mar 13 13:07:12.311420 master-0 kubenswrapper[28149]: I0313 13:07:12.311363 28149 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-b8c8d7cc8-q7d6n"]
Mar 13 13:07:12.315643 master-0 kubenswrapper[28149]: I0313 13:07:12.314145 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9wckh\" (UniqueName: \"kubernetes.io/projected/70ec85c2-6c2a-44c1-b172-c851765912b5-kube-api-access-9wckh\") pod \"horizon-operator-controller-manager-6d9d6b584d-rm4bg\" (UID: \"70ec85c2-6c2a-44c1-b172-c851765912b5\") " pod="openstack-operators/horizon-operator-controller-manager-6d9d6b584d-rm4bg"
Mar 13 13:07:12.316696 master-0 kubenswrapper[28149]: I0313 13:07:12.316676 28149 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-77b6666d85-pwqm5"
Mar 13 13:07:12.360506 master-0 kubenswrapper[28149]: I0313 13:07:12.357635 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9wckh\" (UniqueName: \"kubernetes.io/projected/70ec85c2-6c2a-44c1-b172-c851765912b5-kube-api-access-9wckh\") pod \"horizon-operator-controller-manager-6d9d6b584d-rm4bg\" (UID: \"70ec85c2-6c2a-44c1-b172-c851765912b5\") " pod="openstack-operators/horizon-operator-controller-manager-6d9d6b584d-rm4bg"
Mar 13 13:07:12.388403 master-0 kubenswrapper[28149]: I0313 13:07:12.365840 28149 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-6bbb499bbc-4qdjn"]
Mar 13 13:07:12.402955 master-0 kubenswrapper[28149]: I0313 13:07:12.400613 28149 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/keystone-operator-controller-manager-684f77d66d-thm6q"]
Mar 13 13:07:12.402955 master-0 kubenswrapper[28149]: I0313 13:07:12.402009 28149 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-684f77d66d-thm6q"
Mar 13 13:07:12.433711 master-0 kubenswrapper[28149]: I0313 13:07:12.419997 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z2vsw\" (UniqueName: \"kubernetes.io/projected/57e83807-c598-4f45-b92a-e017a07b6997-kube-api-access-z2vsw\") pod \"infra-operator-controller-manager-b8c8d7cc8-q7d6n\" (UID: \"57e83807-c598-4f45-b92a-e017a07b6997\") " pod="openstack-operators/infra-operator-controller-manager-b8c8d7cc8-q7d6n"
Mar 13 13:07:12.433711 master-0 kubenswrapper[28149]: I0313 13:07:12.420042 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/57e83807-c598-4f45-b92a-e017a07b6997-cert\") pod \"infra-operator-controller-manager-b8c8d7cc8-q7d6n\" (UID: \"57e83807-c598-4f45-b92a-e017a07b6997\") " pod="openstack-operators/infra-operator-controller-manager-b8c8d7cc8-q7d6n"
Mar 13 13:07:12.433711 master-0 kubenswrapper[28149]: I0313 13:07:12.420108 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zd6kp\" (UniqueName: \"kubernetes.io/projected/bee4fa71-7893-41d2-8512-5d26c6da9913-kube-api-access-zd6kp\") pod \"keystone-operator-controller-manager-684f77d66d-thm6q\" (UID: \"bee4fa71-7893-41d2-8512-5d26c6da9913\") " pod="openstack-operators/keystone-operator-controller-manager-684f77d66d-thm6q"
Mar 13 13:07:12.433711 master-0 kubenswrapper[28149]: I0313 13:07:12.420145 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7wbhb\" (UniqueName: \"kubernetes.io/projected/5a9ed8da-031e-4009-b7aa-c1dd970911c6-kube-api-access-7wbhb\") pod \"ironic-operator-controller-manager-6bbb499bbc-4qdjn\" (UID: \"5a9ed8da-031e-4009-b7aa-c1dd970911c6\") " pod="openstack-operators/ironic-operator-controller-manager-6bbb499bbc-4qdjn"
Mar 13 13:07:12.433711 master-0 kubenswrapper[28149]: I0313 13:07:12.420347 28149 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-6d9d6b584d-rm4bg"
Mar 13 13:07:12.448038 master-0 kubenswrapper[28149]: I0313 13:07:12.445952 28149 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-684f77d66d-thm6q"]
Mar 13 13:07:12.462311 master-0 kubenswrapper[28149]: I0313 13:07:12.462256 28149 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/manila-operator-controller-manager-68f45f9d9f-cjfg7"]
Mar 13 13:07:12.465338 master-0 kubenswrapper[28149]: I0313 13:07:12.463530 28149 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-68f45f9d9f-cjfg7"
Mar 13 13:07:12.484413 master-0 kubenswrapper[28149]: I0313 13:07:12.477472 28149 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-658d4cdd5-g962k"]
Mar 13 13:07:12.484413 master-0 kubenswrapper[28149]: I0313 13:07:12.478763 28149 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-658d4cdd5-g962k"
Mar 13 13:07:12.484413 master-0 kubenswrapper[28149]: I0313 13:07:12.482591 28149 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-68f45f9d9f-cjfg7"]
Mar 13 13:07:12.503658 master-0 kubenswrapper[28149]: I0313 13:07:12.498655 28149 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-658d4cdd5-g962k"]
Mar 13 13:07:12.509183 master-0 kubenswrapper[28149]: I0313 13:07:12.509028 28149 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/neutron-operator-controller-manager-776c5696bf-5z8g2"]
Mar 13 13:07:12.545300 master-0 kubenswrapper[28149]: I0313 13:07:12.540236 28149 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-776c5696bf-5z8g2"]
Mar 13 13:07:12.545300 master-0 kubenswrapper[28149]: I0313 13:07:12.540269 28149 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/nova-operator-controller-manager-569cc54c5-4ns7k"]
Mar 13 13:07:12.545300 master-0 kubenswrapper[28149]: I0313 13:07:12.542883 28149 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-776c5696bf-5z8g2"
Mar 13 13:07:12.545300 master-0 kubenswrapper[28149]: I0313 13:07:12.543414 28149 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-569cc54c5-4ns7k"
Mar 13 13:07:12.561999 master-0 kubenswrapper[28149]: I0313 13:07:12.561381 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z2vsw\" (UniqueName: \"kubernetes.io/projected/57e83807-c598-4f45-b92a-e017a07b6997-kube-api-access-z2vsw\") pod \"infra-operator-controller-manager-b8c8d7cc8-q7d6n\" (UID: \"57e83807-c598-4f45-b92a-e017a07b6997\") " pod="openstack-operators/infra-operator-controller-manager-b8c8d7cc8-q7d6n"
Mar 13 13:07:12.561999 master-0 kubenswrapper[28149]: I0313 13:07:12.561482 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/57e83807-c598-4f45-b92a-e017a07b6997-cert\") pod \"infra-operator-controller-manager-b8c8d7cc8-q7d6n\" (UID: \"57e83807-c598-4f45-b92a-e017a07b6997\") " pod="openstack-operators/infra-operator-controller-manager-b8c8d7cc8-q7d6n"
Mar 13 13:07:12.561999 master-0 kubenswrapper[28149]: E0313 13:07:12.561775 28149 secret.go:189] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found
Mar 13 13:07:12.561999 master-0 kubenswrapper[28149]: E0313 13:07:12.561883 28149 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/57e83807-c598-4f45-b92a-e017a07b6997-cert podName:57e83807-c598-4f45-b92a-e017a07b6997 nodeName:}" failed. No retries permitted until 2026-03-13 13:07:13.061847812 +0000 UTC m=+806.715312971 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/57e83807-c598-4f45-b92a-e017a07b6997-cert") pod "infra-operator-controller-manager-b8c8d7cc8-q7d6n" (UID: "57e83807-c598-4f45-b92a-e017a07b6997") : secret "infra-operator-webhook-server-cert" not found
Mar 13 13:07:12.567834 master-0 kubenswrapper[28149]: I0313 13:07:12.562472 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jl99b\" (UniqueName: \"kubernetes.io/projected/033c7536-1e30-42bc-b7be-c5755276a8aa-kube-api-access-jl99b\") pod \"mariadb-operator-controller-manager-658d4cdd5-g962k\" (UID: \"033c7536-1e30-42bc-b7be-c5755276a8aa\") " pod="openstack-operators/mariadb-operator-controller-manager-658d4cdd5-g962k"
Mar 13 13:07:12.567834 master-0 kubenswrapper[28149]: I0313 13:07:12.562821 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dfpgl\" (UniqueName: \"kubernetes.io/projected/3f835be3-b114-4593-af89-119b729df40a-kube-api-access-dfpgl\") pod \"manila-operator-controller-manager-68f45f9d9f-cjfg7\" (UID: \"3f835be3-b114-4593-af89-119b729df40a\") " pod="openstack-operators/manila-operator-controller-manager-68f45f9d9f-cjfg7"
Mar 13 13:07:12.567834 master-0 kubenswrapper[28149]: I0313 13:07:12.562966 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zd6kp\" (UniqueName: \"kubernetes.io/projected/bee4fa71-7893-41d2-8512-5d26c6da9913-kube-api-access-zd6kp\") pod \"keystone-operator-controller-manager-684f77d66d-thm6q\" (UID: \"bee4fa71-7893-41d2-8512-5d26c6da9913\") " pod="openstack-operators/keystone-operator-controller-manager-684f77d66d-thm6q"
Mar 13 13:07:12.570550 master-0 kubenswrapper[28149]: I0313 13:07:12.568890 28149 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/octavia-operator-controller-manager-5f4f55cb5c-qw5dr"]
Mar 13 13:07:12.570827 master-0 kubenswrapper[28149]: I0313 13:07:12.570731 28149 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-5f4f55cb5c-qw5dr"
Mar 13 13:07:12.575123 master-0 kubenswrapper[28149]: I0313 13:07:12.572313 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7wbhb\" (UniqueName: \"kubernetes.io/projected/5a9ed8da-031e-4009-b7aa-c1dd970911c6-kube-api-access-7wbhb\") pod \"ironic-operator-controller-manager-6bbb499bbc-4qdjn\" (UID: \"5a9ed8da-031e-4009-b7aa-c1dd970911c6\") " pod="openstack-operators/ironic-operator-controller-manager-6bbb499bbc-4qdjn"
Mar 13 13:07:12.614097 master-0 kubenswrapper[28149]: I0313 13:07:12.591779 28149 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-569cc54c5-4ns7k"]
Mar 13 13:07:12.614097 master-0 kubenswrapper[28149]: I0313 13:07:12.605758 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zd6kp\" (UniqueName: \"kubernetes.io/projected/bee4fa71-7893-41d2-8512-5d26c6da9913-kube-api-access-zd6kp\") pod \"keystone-operator-controller-manager-684f77d66d-thm6q\" (UID: \"bee4fa71-7893-41d2-8512-5d26c6da9913\") " pod="openstack-operators/keystone-operator-controller-manager-684f77d66d-thm6q"
Mar 13 13:07:12.614097 master-0 kubenswrapper[28149]: I0313 13:07:12.606582 28149 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-5f4f55cb5c-qw5dr"]
Mar 13 13:07:12.614097 master-0 kubenswrapper[28149]: I0313 13:07:12.612799 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7wbhb\" (UniqueName: \"kubernetes.io/projected/5a9ed8da-031e-4009-b7aa-c1dd970911c6-kube-api-access-7wbhb\") pod \"ironic-operator-controller-manager-6bbb499bbc-4qdjn\" (UID: \"5a9ed8da-031e-4009-b7aa-c1dd970911c6\") " pod="openstack-operators/ironic-operator-controller-manager-6bbb499bbc-4qdjn"
Mar 13 13:07:12.619266 master-0 kubenswrapper[28149]: I0313 13:07:12.619214 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z2vsw\" (UniqueName: \"kubernetes.io/projected/57e83807-c598-4f45-b92a-e017a07b6997-kube-api-access-z2vsw\") pod \"infra-operator-controller-manager-b8c8d7cc8-q7d6n\" (UID: \"57e83807-c598-4f45-b92a-e017a07b6997\") " pod="openstack-operators/infra-operator-controller-manager-b8c8d7cc8-q7d6n"
Mar 13 13:07:12.678103 master-0 kubenswrapper[28149]: I0313 13:07:12.676607 28149 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-6bbb499bbc-4qdjn"
Mar 13 13:07:12.678103 master-0 kubenswrapper[28149]: I0313 13:07:12.678025 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2vw2h\" (UniqueName: \"kubernetes.io/projected/8418da33-bbf7-4930-8e12-07bc1172da01-kube-api-access-2vw2h\") pod \"octavia-operator-controller-manager-5f4f55cb5c-qw5dr\" (UID: \"8418da33-bbf7-4930-8e12-07bc1172da01\") " pod="openstack-operators/octavia-operator-controller-manager-5f4f55cb5c-qw5dr"
Mar 13 13:07:12.678623 master-0 kubenswrapper[28149]: I0313 13:07:12.678125 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jl99b\" (UniqueName: \"kubernetes.io/projected/033c7536-1e30-42bc-b7be-c5755276a8aa-kube-api-access-jl99b\") pod \"mariadb-operator-controller-manager-658d4cdd5-g962k\" (UID: \"033c7536-1e30-42bc-b7be-c5755276a8aa\") " pod="openstack-operators/mariadb-operator-controller-manager-658d4cdd5-g962k"
Mar 13 13:07:12.678623 master-0 kubenswrapper[28149]: I0313 13:07:12.678212 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s5htl\" (UniqueName: \"kubernetes.io/projected/b0caec54-e9db-4ace-8b0d-aebafbb6608b-kube-api-access-s5htl\") pod \"neutron-operator-controller-manager-776c5696bf-5z8g2\" (UID: \"b0caec54-e9db-4ace-8b0d-aebafbb6608b\") " pod="openstack-operators/neutron-operator-controller-manager-776c5696bf-5z8g2"
Mar 13 13:07:12.678623 master-0 kubenswrapper[28149]: I0313 13:07:12.678236 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dfpgl\" (UniqueName: \"kubernetes.io/projected/3f835be3-b114-4593-af89-119b729df40a-kube-api-access-dfpgl\") pod \"manila-operator-controller-manager-68f45f9d9f-cjfg7\" (UID: \"3f835be3-b114-4593-af89-119b729df40a\") " pod="openstack-operators/manila-operator-controller-manager-68f45f9d9f-cjfg7"
Mar 13 13:07:12.678623 master-0 kubenswrapper[28149]: I0313 13:07:12.678261 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2rl2j\" (UniqueName: \"kubernetes.io/projected/c0d9cf57-a057-4dd4-9d4c-d292fbcdc501-kube-api-access-2rl2j\") pod \"nova-operator-controller-manager-569cc54c5-4ns7k\" (UID: \"c0d9cf57-a057-4dd4-9d4c-d292fbcdc501\") " pod="openstack-operators/nova-operator-controller-manager-569cc54c5-4ns7k"
Mar 13 13:07:12.678964 master-0 kubenswrapper[28149]: I0313 13:07:12.678811 28149 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-c969dbbcd-6gkdt"]
Mar 13 13:07:12.680281 master-0 kubenswrapper[28149]: I0313 13:07:12.680074 28149 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-c969dbbcd-6gkdt"
Mar 13 13:07:12.688392 master-0 kubenswrapper[28149]: I0313 13:07:12.687782 28149 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-webhook-server-cert"
Mar 13 13:07:12.734095 master-0 kubenswrapper[28149]: I0313 13:07:12.733049 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dfpgl\" (UniqueName: \"kubernetes.io/projected/3f835be3-b114-4593-af89-119b729df40a-kube-api-access-dfpgl\") pod \"manila-operator-controller-manager-68f45f9d9f-cjfg7\" (UID: \"3f835be3-b114-4593-af89-119b729df40a\") " pod="openstack-operators/manila-operator-controller-manager-68f45f9d9f-cjfg7"
Mar 13 13:07:12.740042 master-0 kubenswrapper[28149]: I0313 13:07:12.736522 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jl99b\" (UniqueName: \"kubernetes.io/projected/033c7536-1e30-42bc-b7be-c5755276a8aa-kube-api-access-jl99b\") pod \"mariadb-operator-controller-manager-658d4cdd5-g962k\" (UID: \"033c7536-1e30-42bc-b7be-c5755276a8aa\") " pod="openstack-operators/mariadb-operator-controller-manager-658d4cdd5-g962k"
Mar 13 13:07:12.764942 master-0 kubenswrapper[28149]: I0313 13:07:12.763865 28149 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ovn-operator-controller-manager-bbc5b68f9-dnttw"]
Mar 13 13:07:12.765242 master-0 kubenswrapper[28149]: I0313 13:07:12.765188 28149 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-bbc5b68f9-dnttw"]
Mar 13 13:07:12.765315 master-0 kubenswrapper[28149]: I0313 13:07:12.765294 28149 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-bbc5b68f9-dnttw"
Mar 13 13:07:12.786535 master-0 kubenswrapper[28149]: I0313 13:07:12.782346 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s5htl\" (UniqueName: \"kubernetes.io/projected/b0caec54-e9db-4ace-8b0d-aebafbb6608b-kube-api-access-s5htl\") pod \"neutron-operator-controller-manager-776c5696bf-5z8g2\" (UID: \"b0caec54-e9db-4ace-8b0d-aebafbb6608b\") " pod="openstack-operators/neutron-operator-controller-manager-776c5696bf-5z8g2"
Mar 13 13:07:12.786535 master-0 kubenswrapper[28149]: I0313 13:07:12.782448 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2rl2j\" (UniqueName: \"kubernetes.io/projected/c0d9cf57-a057-4dd4-9d4c-d292fbcdc501-kube-api-access-2rl2j\") pod \"nova-operator-controller-manager-569cc54c5-4ns7k\" (UID: \"c0d9cf57-a057-4dd4-9d4c-d292fbcdc501\") " pod="openstack-operators/nova-operator-controller-manager-569cc54c5-4ns7k"
Mar 13 13:07:12.786535 master-0 kubenswrapper[28149]: I0313 13:07:12.782572 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/162f25e3-ac79-4df1-8615-579dcf5c111e-cert\") pod \"openstack-baremetal-operator-controller-manager-c969dbbcd-6gkdt\" (UID: \"162f25e3-ac79-4df1-8615-579dcf5c111e\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-c969dbbcd-6gkdt"
Mar 13 13:07:12.786535 master-0 kubenswrapper[28149]: I0313 13:07:12.782643 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2d8t9\" (UniqueName: \"kubernetes.io/projected/162f25e3-ac79-4df1-8615-579dcf5c111e-kube-api-access-2d8t9\") pod \"openstack-baremetal-operator-controller-manager-c969dbbcd-6gkdt\" (UID: \"162f25e3-ac79-4df1-8615-579dcf5c111e\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-c969dbbcd-6gkdt"
Mar 13 13:07:12.786535 master-0 kubenswrapper[28149]: I0313 13:07:12.782686 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2vw2h\" (UniqueName: \"kubernetes.io/projected/8418da33-bbf7-4930-8e12-07bc1172da01-kube-api-access-2vw2h\") pod \"octavia-operator-controller-manager-5f4f55cb5c-qw5dr\" (UID: \"8418da33-bbf7-4930-8e12-07bc1172da01\") " pod="openstack-operators/octavia-operator-controller-manager-5f4f55cb5c-qw5dr"
Mar 13 13:07:12.786535 master-0 kubenswrapper[28149]: I0313 13:07:12.782758 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8g5l8\" (UniqueName: \"kubernetes.io/projected/abc2aa99-ac15-433b-b478-711da24b8dbf-kube-api-access-8g5l8\") pod \"ovn-operator-controller-manager-bbc5b68f9-dnttw\" (UID: \"abc2aa99-ac15-433b-b478-711da24b8dbf\") " pod="openstack-operators/ovn-operator-controller-manager-bbc5b68f9-dnttw"
Mar 13 13:07:12.800294 master-0 kubenswrapper[28149]: I0313 13:07:12.798310 28149 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-684f77d66d-thm6q"
Mar 13 13:07:12.831850 master-0 kubenswrapper[28149]: I0313 13:07:12.826979 28149 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/placement-operator-controller-manager-574d45c66c-7nr7s"]
Mar 13 13:07:12.831850 master-0 kubenswrapper[28149]: I0313 13:07:12.828369 28149 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-574d45c66c-7nr7s"
Mar 13 13:07:12.840003 master-0 kubenswrapper[28149]: I0313 13:07:12.837099 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2rl2j\" (UniqueName: \"kubernetes.io/projected/c0d9cf57-a057-4dd4-9d4c-d292fbcdc501-kube-api-access-2rl2j\") pod \"nova-operator-controller-manager-569cc54c5-4ns7k\" (UID: \"c0d9cf57-a057-4dd4-9d4c-d292fbcdc501\") " pod="openstack-operators/nova-operator-controller-manager-569cc54c5-4ns7k"
Mar 13 13:07:12.851996 master-0 kubenswrapper[28149]: I0313 13:07:12.850456 28149 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-c969dbbcd-6gkdt"]
Mar 13 13:07:12.870342 master-0 kubenswrapper[28149]: I0313 13:07:12.870278 28149 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-68f45f9d9f-cjfg7"
Mar 13 13:07:12.872984 master-0 kubenswrapper[28149]: I0313 13:07:12.872938 28149 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/swift-operator-controller-manager-677c674df7-zj78q"]
Mar 13 13:07:12.874071 master-0 kubenswrapper[28149]: I0313 13:07:12.874027 28149 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-677c674df7-zj78q"
Mar 13 13:07:12.884916 master-0 kubenswrapper[28149]: I0313 13:07:12.884561 28149 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-658d4cdd5-g962k"
Mar 13 13:07:12.889573 master-0 kubenswrapper[28149]: I0313 13:07:12.885881 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/162f25e3-ac79-4df1-8615-579dcf5c111e-cert\") pod \"openstack-baremetal-operator-controller-manager-c969dbbcd-6gkdt\" (UID: \"162f25e3-ac79-4df1-8615-579dcf5c111e\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-c969dbbcd-6gkdt"
Mar 13 13:07:12.889573 master-0 kubenswrapper[28149]: I0313 13:07:12.885929 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2dhbt\" (UniqueName: \"kubernetes.io/projected/cb86bcb9-ed8a-4046-99ed-8c9963f4af4d-kube-api-access-2dhbt\") pod \"placement-operator-controller-manager-574d45c66c-7nr7s\" (UID: \"cb86bcb9-ed8a-4046-99ed-8c9963f4af4d\") " pod="openstack-operators/placement-operator-controller-manager-574d45c66c-7nr7s"
Mar 13 13:07:12.889573 master-0 kubenswrapper[28149]: I0313 13:07:12.885992 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2d8t9\" (UniqueName: \"kubernetes.io/projected/162f25e3-ac79-4df1-8615-579dcf5c111e-kube-api-access-2d8t9\") pod \"openstack-baremetal-operator-controller-manager-c969dbbcd-6gkdt\" (UID: \"162f25e3-ac79-4df1-8615-579dcf5c111e\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-c969dbbcd-6gkdt"
Mar 13 13:07:12.889573 master-0 kubenswrapper[28149]: I0313 13:07:12.886144 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8g5l8\" (UniqueName: \"kubernetes.io/projected/abc2aa99-ac15-433b-b478-711da24b8dbf-kube-api-access-8g5l8\") pod \"ovn-operator-controller-manager-bbc5b68f9-dnttw\" (UID: \"abc2aa99-ac15-433b-b478-711da24b8dbf\") " pod="openstack-operators/ovn-operator-controller-manager-bbc5b68f9-dnttw"
Mar 13 13:07:12.889573 master-0 kubenswrapper[28149]: E0313 13:07:12.886724 28149 secret.go:189] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found
Mar 13 13:07:12.889573 master-0 kubenswrapper[28149]: E0313 13:07:12.886832 28149 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/162f25e3-ac79-4df1-8615-579dcf5c111e-cert podName:162f25e3-ac79-4df1-8615-579dcf5c111e nodeName:}" failed. No retries permitted until 2026-03-13 13:07:13.386788622 +0000 UTC m=+807.040253821 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/162f25e3-ac79-4df1-8615-579dcf5c111e-cert") pod "openstack-baremetal-operator-controller-manager-c969dbbcd-6gkdt" (UID: "162f25e3-ac79-4df1-8615-579dcf5c111e") : secret "openstack-baremetal-operator-webhook-server-cert" not found
Mar 13 13:07:12.898336 master-0 kubenswrapper[28149]: I0313 13:07:12.897626 28149 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-574d45c66c-7nr7s"]
Mar 13 13:07:12.906738 master-0 kubenswrapper[28149]: W0313 13:07:12.906637 28149 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7cb35574_9b29_4dd8_8ec5_a37816092d10.slice/crio-70700a201b31070fde978ee422b3662a5ca714327e029115eff8f4755c060738 WatchSource:0}: Error finding container 70700a201b31070fde978ee422b3662a5ca714327e029115eff8f4755c060738: Status 404 returned error can't find the container with id 70700a201b31070fde978ee422b3662a5ca714327e029115eff8f4755c060738
Mar 13 13:07:12.908529 master-0 kubenswrapper[28149]: I0313 13:07:12.907746 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8g5l8\" (UniqueName: \"kubernetes.io/projected/abc2aa99-ac15-433b-b478-711da24b8dbf-kube-api-access-8g5l8\") pod \"ovn-operator-controller-manager-bbc5b68f9-dnttw\" (UID: \"abc2aa99-ac15-433b-b478-711da24b8dbf\") " pod="openstack-operators/ovn-operator-controller-manager-bbc5b68f9-dnttw"
Mar 13 13:07:12.909735 master-0 kubenswrapper[28149]: I0313 13:07:12.909641 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2d8t9\" (UniqueName: \"kubernetes.io/projected/162f25e3-ac79-4df1-8615-579dcf5c111e-kube-api-access-2d8t9\") pod \"openstack-baremetal-operator-controller-manager-c969dbbcd-6gkdt\" (UID: \"162f25e3-ac79-4df1-8615-579dcf5c111e\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-c969dbbcd-6gkdt"
Mar 13 13:07:12.920561 master-0 kubenswrapper[28149]: I0313 13:07:12.920521 28149 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-677c674df7-zj78q"]
Mar 13 13:07:12.922250 master-0 kubenswrapper[28149]: I0313 13:07:12.922218 28149 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Mar 13 13:07:12.932330 master-0 kubenswrapper[28149]: I0313 13:07:12.932236 28149 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-6cd66dbd4b-df58d"]
Mar 13 13:07:12.935023 master-0 kubenswrapper[28149]: I0313 13:07:12.934981 28149 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-6cd66dbd4b-df58d"
Mar 13 13:07:12.954453 master-0 kubenswrapper[28149]: I0313 13:07:12.954398 28149 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-6cd66dbd4b-df58d"]
Mar 13 13:07:12.956119 master-0 kubenswrapper[28149]: I0313 13:07:12.956079 28149 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-569cc54c5-4ns7k"
Mar 13 13:07:12.961127 master-0 kubenswrapper[28149]: I0313 13:07:12.960854 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s5htl\" (UniqueName: \"kubernetes.io/projected/b0caec54-e9db-4ace-8b0d-aebafbb6608b-kube-api-access-s5htl\") pod \"neutron-operator-controller-manager-776c5696bf-5z8g2\" (UID: \"b0caec54-e9db-4ace-8b0d-aebafbb6608b\") " pod="openstack-operators/neutron-operator-controller-manager-776c5696bf-5z8g2"
Mar 13 13:07:12.977441 master-0 kubenswrapper[28149]: I0313 13:07:12.977359 28149 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/test-operator-controller-manager-5c5cb9c4d7-7djhg"]
Mar 13 13:07:12.978865 master-0 kubenswrapper[28149]: I0313 13:07:12.978840 28149 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/test-operator-controller-manager-5c5cb9c4d7-7djhg"
Mar 13 13:07:12.982584 master-0 kubenswrapper[28149]: I0313 13:07:12.982532 28149 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-5c5cb9c4d7-7djhg"]
Mar 13 13:07:12.988180 master-0 kubenswrapper[28149]: I0313 13:07:12.986616 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2vw2h\" (UniqueName: \"kubernetes.io/projected/8418da33-bbf7-4930-8e12-07bc1172da01-kube-api-access-2vw2h\") pod \"octavia-operator-controller-manager-5f4f55cb5c-qw5dr\" (UID: \"8418da33-bbf7-4930-8e12-07bc1172da01\") " pod="openstack-operators/octavia-operator-controller-manager-5f4f55cb5c-qw5dr"
Mar 13 13:07:12.990263 master-0 kubenswrapper[28149]: I0313 13:07:12.988553 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7qw7z\" (UniqueName: \"kubernetes.io/projected/cc4c1517-f5c9-4e2e-9659-e1ad6ce7f4de-kube-api-access-7qw7z\") pod \"telemetry-operator-controller-manager-6cd66dbd4b-df58d\" (UID: \"cc4c1517-f5c9-4e2e-9659-e1ad6ce7f4de\") " pod="openstack-operators/telemetry-operator-controller-manager-6cd66dbd4b-df58d"
Mar 13 13:07:12.990263 master-0 kubenswrapper[28149]: I0313 13:07:12.988652 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2dhbt\" (UniqueName: \"kubernetes.io/projected/cb86bcb9-ed8a-4046-99ed-8c9963f4af4d-kube-api-access-2dhbt\") pod \"placement-operator-controller-manager-574d45c66c-7nr7s\" (UID: \"cb86bcb9-ed8a-4046-99ed-8c9963f4af4d\") " pod="openstack-operators/placement-operator-controller-manager-574d45c66c-7nr7s"
Mar 13 13:07:12.990263 master-0 kubenswrapper[28149]: I0313 13:07:12.988788 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l54d8\" (UniqueName: \"kubernetes.io/projected/9a01f1d0-3f33-41a0-be76-39ce52e88fab-kube-api-access-l54d8\") pod \"swift-operator-controller-manager-677c674df7-zj78q\" (UID: \"9a01f1d0-3f33-41a0-be76-39ce52e88fab\") " pod="openstack-operators/swift-operator-controller-manager-677c674df7-zj78q"
Mar 13 13:07:13.016948 master-0 kubenswrapper[28149]: I0313 13:07:13.016894 28149 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/watcher-operator-controller-manager-6dd88c6f67-c55zw"]
Mar 13 13:07:13.028187 master-0 kubenswrapper[28149]: I0313 13:07:13.024304 28149 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-6dd88c6f67-c55zw"
Mar 13 13:07:13.035069 master-0 kubenswrapper[28149]: I0313 13:07:13.034554 28149 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-5f4f55cb5c-qw5dr"
Mar 13 13:07:13.035685 master-0 kubenswrapper[28149]: I0313 13:07:13.035610 28149 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-6dd88c6f67-c55zw"]
Mar 13 13:07:13.081049 master-0 kubenswrapper[28149]: I0313 13:07:13.080993 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2dhbt\" (UniqueName: \"kubernetes.io/projected/cb86bcb9-ed8a-4046-99ed-8c9963f4af4d-kube-api-access-2dhbt\") pod \"placement-operator-controller-manager-574d45c66c-7nr7s\" (UID: \"cb86bcb9-ed8a-4046-99ed-8c9963f4af4d\") " pod="openstack-operators/placement-operator-controller-manager-574d45c66c-7nr7s"
Mar 13 13:07:13.098359 master-0 kubenswrapper[28149]: I0313 13:07:13.098292 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wckqc\" (UniqueName: \"kubernetes.io/projected/cb979094-d28c-477a-a8c8-91d4b8eb946c-kube-api-access-wckqc\") pod \"test-operator-controller-manager-5c5cb9c4d7-7djhg\" (UID: \"cb979094-d28c-477a-a8c8-91d4b8eb946c\") " pod="openstack-operators/test-operator-controller-manager-5c5cb9c4d7-7djhg"
Mar 13 13:07:13.098817 master-0 kubenswrapper[28149]: I0313 13:07:13.098557 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/57e83807-c598-4f45-b92a-e017a07b6997-cert\") pod \"infra-operator-controller-manager-b8c8d7cc8-q7d6n\" (UID: \"57e83807-c598-4f45-b92a-e017a07b6997\") " pod="openstack-operators/infra-operator-controller-manager-b8c8d7cc8-q7d6n"
Mar 13 13:07:13.098906 master-0 kubenswrapper[28149]: I0313 13:07:13.098876 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cpgmx\" (UniqueName: \"kubernetes.io/projected/844a5475-8fda-433c-b083-26608607b8bb-kube-api-access-cpgmx\") pod \"watcher-operator-controller-manager-6dd88c6f67-c55zw\" (UID: \"844a5475-8fda-433c-b083-26608607b8bb\") " pod="openstack-operators/watcher-operator-controller-manager-6dd88c6f67-c55zw"
Mar 13 13:07:13.098984 master-0 kubenswrapper[28149]: E0313 13:07:13.098968 28149 secret.go:189] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found
Mar 13 13:07:13.099088 master-0 kubenswrapper[28149]: E0313 13:07:13.099057 28149 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/57e83807-c598-4f45-b92a-e017a07b6997-cert podName:57e83807-c598-4f45-b92a-e017a07b6997 nodeName:}" failed. No retries permitted until 2026-03-13 13:07:14.099023012 +0000 UTC m=+807.752488171 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/57e83807-c598-4f45-b92a-e017a07b6997-cert") pod "infra-operator-controller-manager-b8c8d7cc8-q7d6n" (UID: "57e83807-c598-4f45-b92a-e017a07b6997") : secret "infra-operator-webhook-server-cert" not found
Mar 13 13:07:13.099271 master-0 kubenswrapper[28149]: I0313 13:07:13.099226 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l54d8\" (UniqueName: \"kubernetes.io/projected/9a01f1d0-3f33-41a0-be76-39ce52e88fab-kube-api-access-l54d8\") pod \"swift-operator-controller-manager-677c674df7-zj78q\" (UID: \"9a01f1d0-3f33-41a0-be76-39ce52e88fab\") " pod="openstack-operators/swift-operator-controller-manager-677c674df7-zj78q"
Mar 13 13:07:13.101732 master-0 kubenswrapper[28149]: I0313 13:07:13.099875 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7qw7z\" (UniqueName: \"kubernetes.io/projected/cc4c1517-f5c9-4e2e-9659-e1ad6ce7f4de-kube-api-access-7qw7z\") pod \"telemetry-operator-controller-manager-6cd66dbd4b-df58d\" (UID: \"cc4c1517-f5c9-4e2e-9659-e1ad6ce7f4de\") " pod="openstack-operators/telemetry-operator-controller-manager-6cd66dbd4b-df58d"
Mar 13 13:07:13.133976 master-0 kubenswrapper[28149]: I0313 13:07:13.133926 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l54d8\" (UniqueName: \"kubernetes.io/projected/9a01f1d0-3f33-41a0-be76-39ce52e88fab-kube-api-access-l54d8\") pod \"swift-operator-controller-manager-677c674df7-zj78q\" (UID: \"9a01f1d0-3f33-41a0-be76-39ce52e88fab\") " pod="openstack-operators/swift-operator-controller-manager-677c674df7-zj78q"
Mar 13 13:07:13.150877 master-0 kubenswrapper[28149]: I0313 13:07:13.150801 28149 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-manager-7795b46f77-gtljk"]
Mar 13 13:07:13.153038 master-0 kubenswrapper[28149]: I0313 13:07:13.153001 28149 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-7795b46f77-gtljk"
Mar 13 13:07:13.155076 master-0 kubenswrapper[28149]: I0313 13:07:13.154707 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7qw7z\" (UniqueName: \"kubernetes.io/projected/cc4c1517-f5c9-4e2e-9659-e1ad6ce7f4de-kube-api-access-7qw7z\") pod \"telemetry-operator-controller-manager-6cd66dbd4b-df58d\" (UID: \"cc4c1517-f5c9-4e2e-9659-e1ad6ce7f4de\") " pod="openstack-operators/telemetry-operator-controller-manager-6cd66dbd4b-df58d"
Mar 13 13:07:13.156008 master-0 kubenswrapper[28149]: I0313 13:07:13.155976 28149 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"metrics-server-cert"
Mar 13 13:07:13.158701 master-0 kubenswrapper[28149]: I0313 13:07:13.158339 28149 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"webhook-server-cert"
Mar 13 13:07:13.167604 master-0 kubenswrapper[28149]: I0313 13:07:13.167269 28149 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-7795b46f77-gtljk"]
Mar 13 13:07:13.177110 master-0 kubenswrapper[28149]: I0313 13:07:13.177051 28149 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-bbc5b68f9-dnttw"
Mar 13 13:07:13.207733 master-0 kubenswrapper[28149]: I0313 13:07:13.206443 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/c3b5a392-dca1-4bb6-b234-30d8eb87ae21-webhook-certs\") pod \"openstack-operator-controller-manager-7795b46f77-gtljk\" (UID: \"c3b5a392-dca1-4bb6-b234-30d8eb87ae21\") " pod="openstack-operators/openstack-operator-controller-manager-7795b46f77-gtljk"
Mar 13 13:07:13.207733 master-0 kubenswrapper[28149]: I0313 13:07:13.206521 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mfxns\" (UniqueName: \"kubernetes.io/projected/c3b5a392-dca1-4bb6-b234-30d8eb87ae21-kube-api-access-mfxns\") pod \"openstack-operator-controller-manager-7795b46f77-gtljk\" (UID: \"c3b5a392-dca1-4bb6-b234-30d8eb87ae21\") " pod="openstack-operators/openstack-operator-controller-manager-7795b46f77-gtljk"
Mar 13 13:07:13.207733 master-0 kubenswrapper[28149]: I0313 13:07:13.206657 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wckqc\" (UniqueName: \"kubernetes.io/projected/cb979094-d28c-477a-a8c8-91d4b8eb946c-kube-api-access-wckqc\") pod \"test-operator-controller-manager-5c5cb9c4d7-7djhg\" (UID: \"cb979094-d28c-477a-a8c8-91d4b8eb946c\") " pod="openstack-operators/test-operator-controller-manager-5c5cb9c4d7-7djhg"
Mar 13 13:07:13.207733 master-0 kubenswrapper[28149]: I0313 13:07:13.206721 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cpgmx\" (UniqueName:
\"kubernetes.io/projected/844a5475-8fda-433c-b083-26608607b8bb-kube-api-access-cpgmx\") pod \"watcher-operator-controller-manager-6dd88c6f67-c55zw\" (UID: \"844a5475-8fda-433c-b083-26608607b8bb\") " pod="openstack-operators/watcher-operator-controller-manager-6dd88c6f67-c55zw" Mar 13 13:07:13.207733 master-0 kubenswrapper[28149]: I0313 13:07:13.206748 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c3b5a392-dca1-4bb6-b234-30d8eb87ae21-metrics-certs\") pod \"openstack-operator-controller-manager-7795b46f77-gtljk\" (UID: \"c3b5a392-dca1-4bb6-b234-30d8eb87ae21\") " pod="openstack-operators/openstack-operator-controller-manager-7795b46f77-gtljk" Mar 13 13:07:13.211950 master-0 kubenswrapper[28149]: I0313 13:07:13.210590 28149 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-776c5696bf-5z8g2" Mar 13 13:07:13.230001 master-0 kubenswrapper[28149]: I0313 13:07:13.224319 28149 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-574d45c66c-7nr7s" Mar 13 13:07:13.236298 master-0 kubenswrapper[28149]: I0313 13:07:13.236254 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cpgmx\" (UniqueName: \"kubernetes.io/projected/844a5475-8fda-433c-b083-26608607b8bb-kube-api-access-cpgmx\") pod \"watcher-operator-controller-manager-6dd88c6f67-c55zw\" (UID: \"844a5475-8fda-433c-b083-26608607b8bb\") " pod="openstack-operators/watcher-operator-controller-manager-6dd88c6f67-c55zw" Mar 13 13:07:13.236406 master-0 kubenswrapper[28149]: I0313 13:07:13.236325 28149 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-2wcnq"] Mar 13 13:07:13.241111 master-0 kubenswrapper[28149]: I0313 13:07:13.240319 28149 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-2wcnq" Mar 13 13:07:13.269895 master-0 kubenswrapper[28149]: I0313 13:07:13.269015 28149 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-677c674df7-zj78q" Mar 13 13:07:13.305296 master-0 kubenswrapper[28149]: I0313 13:07:13.304999 28149 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-2wcnq"] Mar 13 13:07:13.309562 master-0 kubenswrapper[28149]: I0313 13:07:13.308377 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wckqc\" (UniqueName: \"kubernetes.io/projected/cb979094-d28c-477a-a8c8-91d4b8eb946c-kube-api-access-wckqc\") pod \"test-operator-controller-manager-5c5cb9c4d7-7djhg\" (UID: \"cb979094-d28c-477a-a8c8-91d4b8eb946c\") " pod="openstack-operators/test-operator-controller-manager-5c5cb9c4d7-7djhg" Mar 13 13:07:13.309562 master-0 kubenswrapper[28149]: I0313 13:07:13.308651 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c3b5a392-dca1-4bb6-b234-30d8eb87ae21-metrics-certs\") pod \"openstack-operator-controller-manager-7795b46f77-gtljk\" (UID: \"c3b5a392-dca1-4bb6-b234-30d8eb87ae21\") " pod="openstack-operators/openstack-operator-controller-manager-7795b46f77-gtljk" Mar 13 13:07:13.309562 master-0 kubenswrapper[28149]: I0313 13:07:13.308779 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s49mh\" (UniqueName: \"kubernetes.io/projected/ce06c419-6c1e-4a1d-b8bc-e8f96a9195f6-kube-api-access-s49mh\") pod \"rabbitmq-cluster-operator-manager-668c99d594-2wcnq\" (UID: \"ce06c419-6c1e-4a1d-b8bc-e8f96a9195f6\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-2wcnq" Mar 13 13:07:13.309562 master-0 kubenswrapper[28149]: I0313 13:07:13.308823 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/c3b5a392-dca1-4bb6-b234-30d8eb87ae21-webhook-certs\") pod 
\"openstack-operator-controller-manager-7795b46f77-gtljk\" (UID: \"c3b5a392-dca1-4bb6-b234-30d8eb87ae21\") " pod="openstack-operators/openstack-operator-controller-manager-7795b46f77-gtljk" Mar 13 13:07:13.309562 master-0 kubenswrapper[28149]: I0313 13:07:13.308858 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mfxns\" (UniqueName: \"kubernetes.io/projected/c3b5a392-dca1-4bb6-b234-30d8eb87ae21-kube-api-access-mfxns\") pod \"openstack-operator-controller-manager-7795b46f77-gtljk\" (UID: \"c3b5a392-dca1-4bb6-b234-30d8eb87ae21\") " pod="openstack-operators/openstack-operator-controller-manager-7795b46f77-gtljk" Mar 13 13:07:13.309562 master-0 kubenswrapper[28149]: E0313 13:07:13.308913 28149 secret.go:189] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Mar 13 13:07:13.309562 master-0 kubenswrapper[28149]: E0313 13:07:13.309028 28149 secret.go:189] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Mar 13 13:07:13.309562 master-0 kubenswrapper[28149]: E0313 13:07:13.309081 28149 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c3b5a392-dca1-4bb6-b234-30d8eb87ae21-metrics-certs podName:c3b5a392-dca1-4bb6-b234-30d8eb87ae21 nodeName:}" failed. No retries permitted until 2026-03-13 13:07:13.809001679 +0000 UTC m=+807.462466858 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/c3b5a392-dca1-4bb6-b234-30d8eb87ae21-metrics-certs") pod "openstack-operator-controller-manager-7795b46f77-gtljk" (UID: "c3b5a392-dca1-4bb6-b234-30d8eb87ae21") : secret "metrics-server-cert" not found Mar 13 13:07:13.309562 master-0 kubenswrapper[28149]: E0313 13:07:13.309188 28149 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c3b5a392-dca1-4bb6-b234-30d8eb87ae21-webhook-certs podName:c3b5a392-dca1-4bb6-b234-30d8eb87ae21 nodeName:}" failed. No retries permitted until 2026-03-13 13:07:13.809098651 +0000 UTC m=+807.462563890 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/c3b5a392-dca1-4bb6-b234-30d8eb87ae21-webhook-certs") pod "openstack-operator-controller-manager-7795b46f77-gtljk" (UID: "c3b5a392-dca1-4bb6-b234-30d8eb87ae21") : secret "webhook-server-cert" not found Mar 13 13:07:13.384177 master-0 kubenswrapper[28149]: I0313 13:07:13.383755 28149 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-6cd66dbd4b-df58d" Mar 13 13:07:13.391298 master-0 kubenswrapper[28149]: I0313 13:07:13.388823 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-677bd678f7-bql4k" event={"ID":"7cb35574-9b29-4dd8-8ec5-a37816092d10","Type":"ContainerStarted","Data":"70700a201b31070fde978ee422b3662a5ca714327e029115eff8f4755c060738"} Mar 13 13:07:13.396853 master-0 kubenswrapper[28149]: I0313 13:07:13.396824 28149 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/test-operator-controller-manager-5c5cb9c4d7-7djhg" Mar 13 13:07:13.406995 master-0 kubenswrapper[28149]: I0313 13:07:13.406935 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mfxns\" (UniqueName: \"kubernetes.io/projected/c3b5a392-dca1-4bb6-b234-30d8eb87ae21-kube-api-access-mfxns\") pod \"openstack-operator-controller-manager-7795b46f77-gtljk\" (UID: \"c3b5a392-dca1-4bb6-b234-30d8eb87ae21\") " pod="openstack-operators/openstack-operator-controller-manager-7795b46f77-gtljk" Mar 13 13:07:13.418351 master-0 kubenswrapper[28149]: I0313 13:07:13.418302 28149 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-677bd678f7-bql4k"] Mar 13 13:07:13.421012 master-0 kubenswrapper[28149]: I0313 13:07:13.420787 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/162f25e3-ac79-4df1-8615-579dcf5c111e-cert\") pod \"openstack-baremetal-operator-controller-manager-c969dbbcd-6gkdt\" (UID: \"162f25e3-ac79-4df1-8615-579dcf5c111e\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-c969dbbcd-6gkdt" Mar 13 13:07:13.422696 master-0 kubenswrapper[28149]: I0313 13:07:13.422596 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s49mh\" (UniqueName: \"kubernetes.io/projected/ce06c419-6c1e-4a1d-b8bc-e8f96a9195f6-kube-api-access-s49mh\") pod \"rabbitmq-cluster-operator-manager-668c99d594-2wcnq\" (UID: \"ce06c419-6c1e-4a1d-b8bc-e8f96a9195f6\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-2wcnq" Mar 13 13:07:13.426248 master-0 kubenswrapper[28149]: E0313 13:07:13.423742 28149 secret.go:189] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Mar 13 13:07:13.426248 
master-0 kubenswrapper[28149]: E0313 13:07:13.423815 28149 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/162f25e3-ac79-4df1-8615-579dcf5c111e-cert podName:162f25e3-ac79-4df1-8615-579dcf5c111e nodeName:}" failed. No retries permitted until 2026-03-13 13:07:14.423792298 +0000 UTC m=+808.077257527 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/162f25e3-ac79-4df1-8615-579dcf5c111e-cert") pod "openstack-baremetal-operator-controller-manager-c969dbbcd-6gkdt" (UID: "162f25e3-ac79-4df1-8615-579dcf5c111e") : secret "openstack-baremetal-operator-webhook-server-cert" not found Mar 13 13:07:13.450187 master-0 kubenswrapper[28149]: I0313 13:07:13.449533 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s49mh\" (UniqueName: \"kubernetes.io/projected/ce06c419-6c1e-4a1d-b8bc-e8f96a9195f6-kube-api-access-s49mh\") pod \"rabbitmq-cluster-operator-manager-668c99d594-2wcnq\" (UID: \"ce06c419-6c1e-4a1d-b8bc-e8f96a9195f6\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-2wcnq" Mar 13 13:07:13.510005 master-0 kubenswrapper[28149]: I0313 13:07:13.509952 28149 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-6dd88c6f67-c55zw" Mar 13 13:07:13.569526 master-0 kubenswrapper[28149]: I0313 13:07:13.569475 28149 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-6d9d6b584d-rm4bg"] Mar 13 13:07:13.632884 master-0 kubenswrapper[28149]: I0313 13:07:13.632755 28149 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-2wcnq" Mar 13 13:07:13.639517 master-0 kubenswrapper[28149]: I0313 13:07:13.638069 28149 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-5964f64c48-pdfdc"] Mar 13 13:07:13.725296 master-0 kubenswrapper[28149]: I0313 13:07:13.721870 28149 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-984cd4dcf-b6gkp"] Mar 13 13:07:13.871546 master-0 kubenswrapper[28149]: I0313 13:07:13.871452 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c3b5a392-dca1-4bb6-b234-30d8eb87ae21-metrics-certs\") pod \"openstack-operator-controller-manager-7795b46f77-gtljk\" (UID: \"c3b5a392-dca1-4bb6-b234-30d8eb87ae21\") " pod="openstack-operators/openstack-operator-controller-manager-7795b46f77-gtljk" Mar 13 13:07:13.871654 master-0 kubenswrapper[28149]: I0313 13:07:13.871586 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/c3b5a392-dca1-4bb6-b234-30d8eb87ae21-webhook-certs\") pod \"openstack-operator-controller-manager-7795b46f77-gtljk\" (UID: \"c3b5a392-dca1-4bb6-b234-30d8eb87ae21\") " pod="openstack-operators/openstack-operator-controller-manager-7795b46f77-gtljk" Mar 13 13:07:13.871893 master-0 kubenswrapper[28149]: E0313 13:07:13.871808 28149 secret.go:189] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Mar 13 13:07:13.871893 master-0 kubenswrapper[28149]: E0313 13:07:13.871886 28149 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c3b5a392-dca1-4bb6-b234-30d8eb87ae21-webhook-certs podName:c3b5a392-dca1-4bb6-b234-30d8eb87ae21 nodeName:}" failed. 
No retries permitted until 2026-03-13 13:07:14.871864147 +0000 UTC m=+808.525329316 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/c3b5a392-dca1-4bb6-b234-30d8eb87ae21-webhook-certs") pod "openstack-operator-controller-manager-7795b46f77-gtljk" (UID: "c3b5a392-dca1-4bb6-b234-30d8eb87ae21") : secret "webhook-server-cert" not found Mar 13 13:07:13.872462 master-0 kubenswrapper[28149]: E0313 13:07:13.872376 28149 secret.go:189] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Mar 13 13:07:13.872462 master-0 kubenswrapper[28149]: E0313 13:07:13.872415 28149 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c3b5a392-dca1-4bb6-b234-30d8eb87ae21-metrics-certs podName:c3b5a392-dca1-4bb6-b234-30d8eb87ae21 nodeName:}" failed. No retries permitted until 2026-03-13 13:07:14.872404752 +0000 UTC m=+808.525869911 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/c3b5a392-dca1-4bb6-b234-30d8eb87ae21-metrics-certs") pod "openstack-operator-controller-manager-7795b46f77-gtljk" (UID: "c3b5a392-dca1-4bb6-b234-30d8eb87ae21") : secret "metrics-server-cert" not found Mar 13 13:07:14.056218 master-0 kubenswrapper[28149]: I0313 13:07:14.055061 28149 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-77b6666d85-pwqm5"] Mar 13 13:07:14.062526 master-0 kubenswrapper[28149]: W0313 13:07:14.062413 28149 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda2e8c53d_29c4_4a63_b701_8103253c197a.slice/crio-9d0ff3c67814de769bcd88341aa3831c2c51b83fa8f5d64a1a42c1e712039774 WatchSource:0}: Error finding container 9d0ff3c67814de769bcd88341aa3831c2c51b83fa8f5d64a1a42c1e712039774: Status 404 returned error can't find the container with id 
9d0ff3c67814de769bcd88341aa3831c2c51b83fa8f5d64a1a42c1e712039774 Mar 13 13:07:14.070601 master-0 kubenswrapper[28149]: I0313 13:07:14.070566 28149 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-6bbb499bbc-4qdjn"] Mar 13 13:07:14.130837 master-0 kubenswrapper[28149]: I0313 13:07:14.130188 28149 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-684f77d66d-thm6q"] Mar 13 13:07:14.183332 master-0 kubenswrapper[28149]: I0313 13:07:14.173674 28149 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-66d56f6ff4-49xpx"] Mar 13 13:07:14.193700 master-0 kubenswrapper[28149]: I0313 13:07:14.193660 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/57e83807-c598-4f45-b92a-e017a07b6997-cert\") pod \"infra-operator-controller-manager-b8c8d7cc8-q7d6n\" (UID: \"57e83807-c598-4f45-b92a-e017a07b6997\") " pod="openstack-operators/infra-operator-controller-manager-b8c8d7cc8-q7d6n" Mar 13 13:07:14.194397 master-0 kubenswrapper[28149]: E0313 13:07:14.193904 28149 secret.go:189] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Mar 13 13:07:14.210360 master-0 kubenswrapper[28149]: E0313 13:07:14.208509 28149 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/57e83807-c598-4f45-b92a-e017a07b6997-cert podName:57e83807-c598-4f45-b92a-e017a07b6997 nodeName:}" failed. No retries permitted until 2026-03-13 13:07:16.208465559 +0000 UTC m=+809.861930718 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/57e83807-c598-4f45-b92a-e017a07b6997-cert") pod "infra-operator-controller-manager-b8c8d7cc8-q7d6n" (UID: "57e83807-c598-4f45-b92a-e017a07b6997") : secret "infra-operator-webhook-server-cert" not found Mar 13 13:07:14.219849 master-0 kubenswrapper[28149]: I0313 13:07:14.219668 28149 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-68f45f9d9f-cjfg7"] Mar 13 13:07:14.225547 master-0 kubenswrapper[28149]: W0313 13:07:14.225480 28149 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3f835be3_b114_4593_af89_119b729df40a.slice/crio-f2084f3f734e0d0767e8ef3fcb0e0a894cd53f6bf34a45ae7fdc9a6356e858dd WatchSource:0}: Error finding container f2084f3f734e0d0767e8ef3fcb0e0a894cd53f6bf34a45ae7fdc9a6356e858dd: Status 404 returned error can't find the container with id f2084f3f734e0d0767e8ef3fcb0e0a894cd53f6bf34a45ae7fdc9a6356e858dd Mar 13 13:07:14.408443 master-0 kubenswrapper[28149]: I0313 13:07:14.408385 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-77b6666d85-pwqm5" event={"ID":"a2e8c53d-29c4-4a63-b701-8103253c197a","Type":"ContainerStarted","Data":"9d0ff3c67814de769bcd88341aa3831c2c51b83fa8f5d64a1a42c1e712039774"} Mar 13 13:07:14.410422 master-0 kubenswrapper[28149]: I0313 13:07:14.410341 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-684f77d66d-thm6q" event={"ID":"bee4fa71-7893-41d2-8512-5d26c6da9913","Type":"ContainerStarted","Data":"1e7ccf5c53001bccd41c9f27a6bb5757e91497e511abcdffaa1c1769e212091f"} Mar 13 13:07:14.413324 master-0 kubenswrapper[28149]: I0313 13:07:14.413266 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-6d9d6b584d-rm4bg" 
event={"ID":"70ec85c2-6c2a-44c1-b172-c851765912b5","Type":"ContainerStarted","Data":"aa199e9644d1212a6e5edfb2c33e65b25d3d6b7eacf8ed265a99994008cb23a1"} Mar 13 13:07:14.414752 master-0 kubenswrapper[28149]: I0313 13:07:14.414710 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-66d56f6ff4-49xpx" event={"ID":"94a2c09f-a0f4-4ab0-8bae-116dc938de70","Type":"ContainerStarted","Data":"df839afb7ef8f050cb4f2409424a6f4ef7f7c610a9d78f361efcaaa7598171a7"} Mar 13 13:07:14.415831 master-0 kubenswrapper[28149]: I0313 13:07:14.415799 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-6bbb499bbc-4qdjn" event={"ID":"5a9ed8da-031e-4009-b7aa-c1dd970911c6","Type":"ContainerStarted","Data":"2770ecf8dacddd5f08d03dc476b893a4bf404841f68f9b64dd3ae27671cd53ce"} Mar 13 13:07:14.416989 master-0 kubenswrapper[28149]: I0313 13:07:14.416887 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-984cd4dcf-b6gkp" event={"ID":"24d18dc3-f6a0-4b38-b027-1f92534d6201","Type":"ContainerStarted","Data":"34263eb3a08fb8c04bbe3958364174aec145afab6ae29f73878d6d609d058b17"} Mar 13 13:07:14.418202 master-0 kubenswrapper[28149]: I0313 13:07:14.418164 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-68f45f9d9f-cjfg7" event={"ID":"3f835be3-b114-4593-af89-119b729df40a","Type":"ContainerStarted","Data":"f2084f3f734e0d0767e8ef3fcb0e0a894cd53f6bf34a45ae7fdc9a6356e858dd"} Mar 13 13:07:14.420708 master-0 kubenswrapper[28149]: I0313 13:07:14.420680 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-5964f64c48-pdfdc" event={"ID":"e853709f-9f1c-4e4b-b43d-3d6f8685b563","Type":"ContainerStarted","Data":"947f1c2c39d4c422fef248af8b5712e21412d96e715efd6e41743a85828bf850"} Mar 13 13:07:14.500486 master-0 
kubenswrapper[28149]: I0313 13:07:14.500337 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/162f25e3-ac79-4df1-8615-579dcf5c111e-cert\") pod \"openstack-baremetal-operator-controller-manager-c969dbbcd-6gkdt\" (UID: \"162f25e3-ac79-4df1-8615-579dcf5c111e\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-c969dbbcd-6gkdt" Mar 13 13:07:14.500720 master-0 kubenswrapper[28149]: E0313 13:07:14.500588 28149 secret.go:189] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Mar 13 13:07:14.500720 master-0 kubenswrapper[28149]: E0313 13:07:14.500697 28149 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/162f25e3-ac79-4df1-8615-579dcf5c111e-cert podName:162f25e3-ac79-4df1-8615-579dcf5c111e nodeName:}" failed. No retries permitted until 2026-03-13 13:07:16.500674257 +0000 UTC m=+810.154139416 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/162f25e3-ac79-4df1-8615-579dcf5c111e-cert") pod "openstack-baremetal-operator-controller-manager-c969dbbcd-6gkdt" (UID: "162f25e3-ac79-4df1-8615-579dcf5c111e") : secret "openstack-baremetal-operator-webhook-server-cert" not found Mar 13 13:07:14.904089 master-0 kubenswrapper[28149]: I0313 13:07:14.904035 28149 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-658d4cdd5-g962k"] Mar 13 13:07:14.918474 master-0 kubenswrapper[28149]: I0313 13:07:14.918426 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/c3b5a392-dca1-4bb6-b234-30d8eb87ae21-webhook-certs\") pod \"openstack-operator-controller-manager-7795b46f77-gtljk\" (UID: \"c3b5a392-dca1-4bb6-b234-30d8eb87ae21\") " pod="openstack-operators/openstack-operator-controller-manager-7795b46f77-gtljk" Mar 13 13:07:14.918757 master-0 kubenswrapper[28149]: I0313 13:07:14.918732 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c3b5a392-dca1-4bb6-b234-30d8eb87ae21-metrics-certs\") pod \"openstack-operator-controller-manager-7795b46f77-gtljk\" (UID: \"c3b5a392-dca1-4bb6-b234-30d8eb87ae21\") " pod="openstack-operators/openstack-operator-controller-manager-7795b46f77-gtljk" Mar 13 13:07:14.918965 master-0 kubenswrapper[28149]: E0313 13:07:14.918922 28149 secret.go:189] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Mar 13 13:07:14.919072 master-0 kubenswrapper[28149]: E0313 13:07:14.918980 28149 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c3b5a392-dca1-4bb6-b234-30d8eb87ae21-metrics-certs podName:c3b5a392-dca1-4bb6-b234-30d8eb87ae21 nodeName:}" failed. 
No retries permitted until 2026-03-13 13:07:16.918964334 +0000 UTC m=+810.572429493 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/c3b5a392-dca1-4bb6-b234-30d8eb87ae21-metrics-certs") pod "openstack-operator-controller-manager-7795b46f77-gtljk" (UID: "c3b5a392-dca1-4bb6-b234-30d8eb87ae21") : secret "metrics-server-cert" not found Mar 13 13:07:14.919199 master-0 kubenswrapper[28149]: E0313 13:07:14.919126 28149 secret.go:189] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Mar 13 13:07:14.919270 master-0 kubenswrapper[28149]: E0313 13:07:14.919226 28149 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c3b5a392-dca1-4bb6-b234-30d8eb87ae21-webhook-certs podName:c3b5a392-dca1-4bb6-b234-30d8eb87ae21 nodeName:}" failed. No retries permitted until 2026-03-13 13:07:16.919205211 +0000 UTC m=+810.572670370 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/c3b5a392-dca1-4bb6-b234-30d8eb87ae21-webhook-certs") pod "openstack-operator-controller-manager-7795b46f77-gtljk" (UID: "c3b5a392-dca1-4bb6-b234-30d8eb87ae21") : secret "webhook-server-cert" not found Mar 13 13:07:14.950816 master-0 kubenswrapper[28149]: I0313 13:07:14.950744 28149 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-574d45c66c-7nr7s"] Mar 13 13:07:15.000794 master-0 kubenswrapper[28149]: I0313 13:07:15.000375 28149 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-5f4f55cb5c-qw5dr"] Mar 13 13:07:15.014673 master-0 kubenswrapper[28149]: I0313 13:07:15.014604 28149 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-776c5696bf-5z8g2"] Mar 13 13:07:15.092968 master-0 kubenswrapper[28149]: I0313 13:07:15.088439 
28149 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-569cc54c5-4ns7k"]
Mar 13 13:07:15.113844 master-0 kubenswrapper[28149]: I0313 13:07:15.113708 28149 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-bbc5b68f9-dnttw"]
Mar 13 13:07:15.145274 master-0 kubenswrapper[28149]: I0313 13:07:15.145226 28149 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-677c674df7-zj78q"]
Mar 13 13:07:15.434002 master-0 kubenswrapper[28149]: I0313 13:07:15.433511 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-677c674df7-zj78q" event={"ID":"9a01f1d0-3f33-41a0-be76-39ce52e88fab","Type":"ContainerStarted","Data":"f219c84532525dd6efdca81432e309768f1e0f9481c95750cf88f10634f57335"}
Mar 13 13:07:15.439648 master-0 kubenswrapper[28149]: I0313 13:07:15.439384 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-574d45c66c-7nr7s" event={"ID":"cb86bcb9-ed8a-4046-99ed-8c9963f4af4d","Type":"ContainerStarted","Data":"8a774892506ba7b241e2a1cbe613e563567c49776b3bba05987de02bd394b6d9"}
Mar 13 13:07:15.442567 master-0 kubenswrapper[28149]: I0313 13:07:15.442521 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-bbc5b68f9-dnttw" event={"ID":"abc2aa99-ac15-433b-b478-711da24b8dbf","Type":"ContainerStarted","Data":"8e6d3f1794bc38c592cb3bdc6193b33c2690f0d62c532cb765f9527e490afdd2"}
Mar 13 13:07:15.445780 master-0 kubenswrapper[28149]: I0313 13:07:15.445343 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-776c5696bf-5z8g2" event={"ID":"b0caec54-e9db-4ace-8b0d-aebafbb6608b","Type":"ContainerStarted","Data":"5f808f7a69c18ca9548c9020400b1e0d8bcd2997d0f47c6976e75a446113ab74"}
Mar 13 13:07:15.447070 master-0 kubenswrapper[28149]: I0313 13:07:15.447018 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-5f4f55cb5c-qw5dr" event={"ID":"8418da33-bbf7-4930-8e12-07bc1172da01","Type":"ContainerStarted","Data":"cbdf9caccd5ea9a971abac32f52a243d32318fe1a00ed7695f02caadf6bdf6ca"}
Mar 13 13:07:15.450704 master-0 kubenswrapper[28149]: I0313 13:07:15.450650 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-569cc54c5-4ns7k" event={"ID":"c0d9cf57-a057-4dd4-9d4c-d292fbcdc501","Type":"ContainerStarted","Data":"36ec8fbf48e4eca394f28e5cd2e1c2b291f6ca74c42e6bb14140dffa649f186f"}
Mar 13 13:07:15.452376 master-0 kubenswrapper[28149]: I0313 13:07:15.452290 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-658d4cdd5-g962k" event={"ID":"033c7536-1e30-42bc-b7be-c5755276a8aa","Type":"ContainerStarted","Data":"4af3886dd7ea05fa984b3bd219eb18efaaf2f5e8e7a542f39d5c854e2a094589"}
Mar 13 13:07:15.526657 master-0 kubenswrapper[28149]: I0313 13:07:15.525876 28149 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-5c5cb9c4d7-7djhg"]
Mar 13 13:07:15.548556 master-0 kubenswrapper[28149]: I0313 13:07:15.547509 28149 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-6cd66dbd4b-df58d"]
Mar 13 13:07:15.567014 master-0 kubenswrapper[28149]: I0313 13:07:15.562512 28149 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-6dd88c6f67-c55zw"]
Mar 13 13:07:15.582368 master-0 kubenswrapper[28149]: I0313 13:07:15.582323 28149 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-2wcnq"]
Mar 13 13:07:16.304663 master-0 kubenswrapper[28149]: I0313 13:07:16.304503 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/57e83807-c598-4f45-b92a-e017a07b6997-cert\") pod \"infra-operator-controller-manager-b8c8d7cc8-q7d6n\" (UID: \"57e83807-c598-4f45-b92a-e017a07b6997\") " pod="openstack-operators/infra-operator-controller-manager-b8c8d7cc8-q7d6n"
Mar 13 13:07:16.305284 master-0 kubenswrapper[28149]: E0313 13:07:16.304797 28149 secret.go:189] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found
Mar 13 13:07:16.305284 master-0 kubenswrapper[28149]: E0313 13:07:16.304873 28149 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/57e83807-c598-4f45-b92a-e017a07b6997-cert podName:57e83807-c598-4f45-b92a-e017a07b6997 nodeName:}" failed. No retries permitted until 2026-03-13 13:07:20.304845784 +0000 UTC m=+813.958310943 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/57e83807-c598-4f45-b92a-e017a07b6997-cert") pod "infra-operator-controller-manager-b8c8d7cc8-q7d6n" (UID: "57e83807-c598-4f45-b92a-e017a07b6997") : secret "infra-operator-webhook-server-cert" not found
Mar 13 13:07:16.490699 master-0 kubenswrapper[28149]: I0313 13:07:16.490636 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-6dd88c6f67-c55zw" event={"ID":"844a5475-8fda-433c-b083-26608607b8bb","Type":"ContainerStarted","Data":"28c856b811868f80509487297ace71073e838fdb24f41706d408d07593f7ae2c"}
Mar 13 13:07:16.497502 master-0 kubenswrapper[28149]: I0313 13:07:16.495599 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-2wcnq" event={"ID":"ce06c419-6c1e-4a1d-b8bc-e8f96a9195f6","Type":"ContainerStarted","Data":"797097af4056a6e75b18ae127cf4f75e2b82a834683b1c5dc0694c3543411c99"}
Mar 13 13:07:16.498527 master-0 kubenswrapper[28149]: I0313 13:07:16.498488 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-5c5cb9c4d7-7djhg" event={"ID":"cb979094-d28c-477a-a8c8-91d4b8eb946c","Type":"ContainerStarted","Data":"a4f56528e4765e89d044b501e473f5f26ba8fa44af6e7e8a80d585e36ff3c3ff"}
Mar 13 13:07:16.501523 master-0 kubenswrapper[28149]: I0313 13:07:16.500153 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-6cd66dbd4b-df58d" event={"ID":"cc4c1517-f5c9-4e2e-9659-e1ad6ce7f4de","Type":"ContainerStarted","Data":"23e76d03d3eb38eaf62b7db771ad09d562334ff169a1954fa73f8ad6668794ba"}
Mar 13 13:07:16.508992 master-0 kubenswrapper[28149]: I0313 13:07:16.508955 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/162f25e3-ac79-4df1-8615-579dcf5c111e-cert\") pod \"openstack-baremetal-operator-controller-manager-c969dbbcd-6gkdt\" (UID: \"162f25e3-ac79-4df1-8615-579dcf5c111e\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-c969dbbcd-6gkdt"
Mar 13 13:07:16.509333 master-0 kubenswrapper[28149]: E0313 13:07:16.509275 28149 secret.go:189] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found
Mar 13 13:07:16.509425 master-0 kubenswrapper[28149]: E0313 13:07:16.509408 28149 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/162f25e3-ac79-4df1-8615-579dcf5c111e-cert podName:162f25e3-ac79-4df1-8615-579dcf5c111e nodeName:}" failed. No retries permitted until 2026-03-13 13:07:20.50936783 +0000 UTC m=+814.162832989 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/162f25e3-ac79-4df1-8615-579dcf5c111e-cert") pod "openstack-baremetal-operator-controller-manager-c969dbbcd-6gkdt" (UID: "162f25e3-ac79-4df1-8615-579dcf5c111e") : secret "openstack-baremetal-operator-webhook-server-cert" not found
Mar 13 13:07:17.035480 master-0 kubenswrapper[28149]: I0313 13:07:17.035405 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/c3b5a392-dca1-4bb6-b234-30d8eb87ae21-webhook-certs\") pod \"openstack-operator-controller-manager-7795b46f77-gtljk\" (UID: \"c3b5a392-dca1-4bb6-b234-30d8eb87ae21\") " pod="openstack-operators/openstack-operator-controller-manager-7795b46f77-gtljk"
Mar 13 13:07:17.036261 master-0 kubenswrapper[28149]: E0313 13:07:17.036208 28149 secret.go:189] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found
Mar 13 13:07:17.036335 master-0 kubenswrapper[28149]: E0313 13:07:17.036324 28149 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c3b5a392-dca1-4bb6-b234-30d8eb87ae21-webhook-certs podName:c3b5a392-dca1-4bb6-b234-30d8eb87ae21 nodeName:}" failed. No retries permitted until 2026-03-13 13:07:21.036299238 +0000 UTC m=+814.689764397 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/c3b5a392-dca1-4bb6-b234-30d8eb87ae21-webhook-certs") pod "openstack-operator-controller-manager-7795b46f77-gtljk" (UID: "c3b5a392-dca1-4bb6-b234-30d8eb87ae21") : secret "webhook-server-cert" not found
Mar 13 13:07:17.071695 master-0 kubenswrapper[28149]: I0313 13:07:17.071563 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c3b5a392-dca1-4bb6-b234-30d8eb87ae21-metrics-certs\") pod \"openstack-operator-controller-manager-7795b46f77-gtljk\" (UID: \"c3b5a392-dca1-4bb6-b234-30d8eb87ae21\") " pod="openstack-operators/openstack-operator-controller-manager-7795b46f77-gtljk"
Mar 13 13:07:17.071977 master-0 kubenswrapper[28149]: E0313 13:07:17.071918 28149 secret.go:189] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found
Mar 13 13:07:17.072049 master-0 kubenswrapper[28149]: E0313 13:07:17.071989 28149 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c3b5a392-dca1-4bb6-b234-30d8eb87ae21-metrics-certs podName:c3b5a392-dca1-4bb6-b234-30d8eb87ae21 nodeName:}" failed. No retries permitted until 2026-03-13 13:07:21.071966342 +0000 UTC m=+814.725431501 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/c3b5a392-dca1-4bb6-b234-30d8eb87ae21-metrics-certs") pod "openstack-operator-controller-manager-7795b46f77-gtljk" (UID: "c3b5a392-dca1-4bb6-b234-30d8eb87ae21") : secret "metrics-server-cert" not found
Mar 13 13:07:20.346594 master-0 kubenswrapper[28149]: I0313 13:07:20.346529 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/57e83807-c598-4f45-b92a-e017a07b6997-cert\") pod \"infra-operator-controller-manager-b8c8d7cc8-q7d6n\" (UID: \"57e83807-c598-4f45-b92a-e017a07b6997\") " pod="openstack-operators/infra-operator-controller-manager-b8c8d7cc8-q7d6n"
Mar 13 13:07:20.347435 master-0 kubenswrapper[28149]: E0313 13:07:20.346799 28149 secret.go:189] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found
Mar 13 13:07:20.347435 master-0 kubenswrapper[28149]: E0313 13:07:20.346881 28149 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/57e83807-c598-4f45-b92a-e017a07b6997-cert podName:57e83807-c598-4f45-b92a-e017a07b6997 nodeName:}" failed. No retries permitted until 2026-03-13 13:07:28.346860262 +0000 UTC m=+822.000325411 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/57e83807-c598-4f45-b92a-e017a07b6997-cert") pod "infra-operator-controller-manager-b8c8d7cc8-q7d6n" (UID: "57e83807-c598-4f45-b92a-e017a07b6997") : secret "infra-operator-webhook-server-cert" not found
Mar 13 13:07:20.572301 master-0 kubenswrapper[28149]: I0313 13:07:20.572242 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/162f25e3-ac79-4df1-8615-579dcf5c111e-cert\") pod \"openstack-baremetal-operator-controller-manager-c969dbbcd-6gkdt\" (UID: \"162f25e3-ac79-4df1-8615-579dcf5c111e\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-c969dbbcd-6gkdt"
Mar 13 13:07:20.574037 master-0 kubenswrapper[28149]: E0313 13:07:20.572655 28149 secret.go:189] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found
Mar 13 13:07:20.574037 master-0 kubenswrapper[28149]: E0313 13:07:20.572752 28149 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/162f25e3-ac79-4df1-8615-579dcf5c111e-cert podName:162f25e3-ac79-4df1-8615-579dcf5c111e nodeName:}" failed. No retries permitted until 2026-03-13 13:07:28.572727367 +0000 UTC m=+822.226192526 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/162f25e3-ac79-4df1-8615-579dcf5c111e-cert") pod "openstack-baremetal-operator-controller-manager-c969dbbcd-6gkdt" (UID: "162f25e3-ac79-4df1-8615-579dcf5c111e") : secret "openstack-baremetal-operator-webhook-server-cert" not found
Mar 13 13:07:21.046154 master-0 kubenswrapper[28149]: I0313 13:07:21.046080 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/c3b5a392-dca1-4bb6-b234-30d8eb87ae21-webhook-certs\") pod \"openstack-operator-controller-manager-7795b46f77-gtljk\" (UID: \"c3b5a392-dca1-4bb6-b234-30d8eb87ae21\") " pod="openstack-operators/openstack-operator-controller-manager-7795b46f77-gtljk"
Mar 13 13:07:21.046459 master-0 kubenswrapper[28149]: E0313 13:07:21.046272 28149 secret.go:189] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found
Mar 13 13:07:21.046459 master-0 kubenswrapper[28149]: E0313 13:07:21.046358 28149 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c3b5a392-dca1-4bb6-b234-30d8eb87ae21-webhook-certs podName:c3b5a392-dca1-4bb6-b234-30d8eb87ae21 nodeName:}" failed. No retries permitted until 2026-03-13 13:07:29.046336572 +0000 UTC m=+822.699801731 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/c3b5a392-dca1-4bb6-b234-30d8eb87ae21-webhook-certs") pod "openstack-operator-controller-manager-7795b46f77-gtljk" (UID: "c3b5a392-dca1-4bb6-b234-30d8eb87ae21") : secret "webhook-server-cert" not found
Mar 13 13:07:21.148341 master-0 kubenswrapper[28149]: I0313 13:07:21.148066 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c3b5a392-dca1-4bb6-b234-30d8eb87ae21-metrics-certs\") pod \"openstack-operator-controller-manager-7795b46f77-gtljk\" (UID: \"c3b5a392-dca1-4bb6-b234-30d8eb87ae21\") " pod="openstack-operators/openstack-operator-controller-manager-7795b46f77-gtljk"
Mar 13 13:07:21.148806 master-0 kubenswrapper[28149]: E0313 13:07:21.148778 28149 secret.go:189] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found
Mar 13 13:07:21.148959 master-0 kubenswrapper[28149]: E0313 13:07:21.148948 28149 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c3b5a392-dca1-4bb6-b234-30d8eb87ae21-metrics-certs podName:c3b5a392-dca1-4bb6-b234-30d8eb87ae21 nodeName:}" failed. No retries permitted until 2026-03-13 13:07:29.148928654 +0000 UTC m=+822.802393813 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/c3b5a392-dca1-4bb6-b234-30d8eb87ae21-metrics-certs") pod "openstack-operator-controller-manager-7795b46f77-gtljk" (UID: "c3b5a392-dca1-4bb6-b234-30d8eb87ae21") : secret "metrics-server-cert" not found
Mar 13 13:07:28.397546 master-0 kubenswrapper[28149]: I0313 13:07:28.396683 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/57e83807-c598-4f45-b92a-e017a07b6997-cert\") pod \"infra-operator-controller-manager-b8c8d7cc8-q7d6n\" (UID: \"57e83807-c598-4f45-b92a-e017a07b6997\") " pod="openstack-operators/infra-operator-controller-manager-b8c8d7cc8-q7d6n"
Mar 13 13:07:28.397546 master-0 kubenswrapper[28149]: E0313 13:07:28.397015 28149 secret.go:189] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found
Mar 13 13:07:28.397546 master-0 kubenswrapper[28149]: E0313 13:07:28.397104 28149 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/57e83807-c598-4f45-b92a-e017a07b6997-cert podName:57e83807-c598-4f45-b92a-e017a07b6997 nodeName:}" failed. No retries permitted until 2026-03-13 13:07:44.397077783 +0000 UTC m=+838.050542942 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/57e83807-c598-4f45-b92a-e017a07b6997-cert") pod "infra-operator-controller-manager-b8c8d7cc8-q7d6n" (UID: "57e83807-c598-4f45-b92a-e017a07b6997") : secret "infra-operator-webhook-server-cert" not found
Mar 13 13:07:28.601522 master-0 kubenswrapper[28149]: I0313 13:07:28.601460 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/162f25e3-ac79-4df1-8615-579dcf5c111e-cert\") pod \"openstack-baremetal-operator-controller-manager-c969dbbcd-6gkdt\" (UID: \"162f25e3-ac79-4df1-8615-579dcf5c111e\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-c969dbbcd-6gkdt"
Mar 13 13:07:28.601772 master-0 kubenswrapper[28149]: E0313 13:07:28.601647 28149 secret.go:189] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found
Mar 13 13:07:28.601772 master-0 kubenswrapper[28149]: E0313 13:07:28.601706 28149 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/162f25e3-ac79-4df1-8615-579dcf5c111e-cert podName:162f25e3-ac79-4df1-8615-579dcf5c111e nodeName:}" failed. No retries permitted until 2026-03-13 13:07:44.601688781 +0000 UTC m=+838.255153940 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/162f25e3-ac79-4df1-8615-579dcf5c111e-cert") pod "openstack-baremetal-operator-controller-manager-c969dbbcd-6gkdt" (UID: "162f25e3-ac79-4df1-8615-579dcf5c111e") : secret "openstack-baremetal-operator-webhook-server-cert" not found
Mar 13 13:07:29.114353 master-0 kubenswrapper[28149]: I0313 13:07:29.114301 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/c3b5a392-dca1-4bb6-b234-30d8eb87ae21-webhook-certs\") pod \"openstack-operator-controller-manager-7795b46f77-gtljk\" (UID: \"c3b5a392-dca1-4bb6-b234-30d8eb87ae21\") " pod="openstack-operators/openstack-operator-controller-manager-7795b46f77-gtljk"
Mar 13 13:07:29.118596 master-0 kubenswrapper[28149]: I0313 13:07:29.118540 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/c3b5a392-dca1-4bb6-b234-30d8eb87ae21-webhook-certs\") pod \"openstack-operator-controller-manager-7795b46f77-gtljk\" (UID: \"c3b5a392-dca1-4bb6-b234-30d8eb87ae21\") " pod="openstack-operators/openstack-operator-controller-manager-7795b46f77-gtljk"
Mar 13 13:07:29.217714 master-0 kubenswrapper[28149]: I0313 13:07:29.217242 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c3b5a392-dca1-4bb6-b234-30d8eb87ae21-metrics-certs\") pod \"openstack-operator-controller-manager-7795b46f77-gtljk\" (UID: \"c3b5a392-dca1-4bb6-b234-30d8eb87ae21\") " pod="openstack-operators/openstack-operator-controller-manager-7795b46f77-gtljk"
Mar 13 13:07:29.217714 master-0 kubenswrapper[28149]: E0313 13:07:29.217452 28149 secret.go:189] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found
Mar 13 13:07:29.217714 master-0 kubenswrapper[28149]: E0313 13:07:29.217548 28149 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c3b5a392-dca1-4bb6-b234-30d8eb87ae21-metrics-certs podName:c3b5a392-dca1-4bb6-b234-30d8eb87ae21 nodeName:}" failed. No retries permitted until 2026-03-13 13:07:45.217526933 +0000 UTC m=+838.870992092 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/c3b5a392-dca1-4bb6-b234-30d8eb87ae21-metrics-certs") pod "openstack-operator-controller-manager-7795b46f77-gtljk" (UID: "c3b5a392-dca1-4bb6-b234-30d8eb87ae21") : secret "metrics-server-cert" not found
Mar 13 13:07:44.483316 master-0 kubenswrapper[28149]: I0313 13:07:44.483238 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/57e83807-c598-4f45-b92a-e017a07b6997-cert\") pod \"infra-operator-controller-manager-b8c8d7cc8-q7d6n\" (UID: \"57e83807-c598-4f45-b92a-e017a07b6997\") " pod="openstack-operators/infra-operator-controller-manager-b8c8d7cc8-q7d6n"
Mar 13 13:07:44.487032 master-0 kubenswrapper[28149]: I0313 13:07:44.486990 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/57e83807-c598-4f45-b92a-e017a07b6997-cert\") pod \"infra-operator-controller-manager-b8c8d7cc8-q7d6n\" (UID: \"57e83807-c598-4f45-b92a-e017a07b6997\") " pod="openstack-operators/infra-operator-controller-manager-b8c8d7cc8-q7d6n"
Mar 13 13:07:44.688387 master-0 kubenswrapper[28149]: I0313 13:07:44.687967 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/162f25e3-ac79-4df1-8615-579dcf5c111e-cert\") pod \"openstack-baremetal-operator-controller-manager-c969dbbcd-6gkdt\" (UID: \"162f25e3-ac79-4df1-8615-579dcf5c111e\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-c969dbbcd-6gkdt"
Mar 13 13:07:44.691926 master-0 kubenswrapper[28149]: I0313 13:07:44.691859 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/162f25e3-ac79-4df1-8615-579dcf5c111e-cert\") pod \"openstack-baremetal-operator-controller-manager-c969dbbcd-6gkdt\" (UID: \"162f25e3-ac79-4df1-8615-579dcf5c111e\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-c969dbbcd-6gkdt"
Mar 13 13:07:44.734723 master-0 kubenswrapper[28149]: I0313 13:07:44.734228 28149 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-b8c8d7cc8-q7d6n"
Mar 13 13:07:44.919941 master-0 kubenswrapper[28149]: I0313 13:07:44.919888 28149 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-c969dbbcd-6gkdt"
Mar 13 13:07:45.304883 master-0 kubenswrapper[28149]: I0313 13:07:45.304748 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c3b5a392-dca1-4bb6-b234-30d8eb87ae21-metrics-certs\") pod \"openstack-operator-controller-manager-7795b46f77-gtljk\" (UID: \"c3b5a392-dca1-4bb6-b234-30d8eb87ae21\") " pod="openstack-operators/openstack-operator-controller-manager-7795b46f77-gtljk"
Mar 13 13:07:45.315036 master-0 kubenswrapper[28149]: I0313 13:07:45.314928 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c3b5a392-dca1-4bb6-b234-30d8eb87ae21-metrics-certs\") pod \"openstack-operator-controller-manager-7795b46f77-gtljk\" (UID: \"c3b5a392-dca1-4bb6-b234-30d8eb87ae21\") " pod="openstack-operators/openstack-operator-controller-manager-7795b46f77-gtljk"
Mar 13 13:07:45.583383 master-0 kubenswrapper[28149]: I0313 13:07:45.583295 28149 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-7795b46f77-gtljk"
Mar 13 13:07:46.028748 master-0 kubenswrapper[28149]: I0313 13:07:46.028678 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-658d4cdd5-g962k" event={"ID":"033c7536-1e30-42bc-b7be-c5755276a8aa","Type":"ContainerStarted","Data":"a752dac31aa7b3fb218d13770e4fa52e0f4224afe3e32b069a243984460383f3"}
Mar 13 13:07:46.032299 master-0 kubenswrapper[28149]: I0313 13:07:46.032061 28149 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/mariadb-operator-controller-manager-658d4cdd5-g962k"
Mar 13 13:07:46.038422 master-0 kubenswrapper[28149]: I0313 13:07:46.038367 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-5964f64c48-pdfdc" event={"ID":"e853709f-9f1c-4e4b-b43d-3d6f8685b563","Type":"ContainerStarted","Data":"d6064906aaaa83a37694529f13b409be1cbbee06c6c266c714c18477093b0038"}
Mar 13 13:07:46.039493 master-0 kubenswrapper[28149]: I0313 13:07:46.039470 28149 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/glance-operator-controller-manager-5964f64c48-pdfdc"
Mar 13 13:07:46.043733 master-0 kubenswrapper[28149]: I0313 13:07:46.043671 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-66d56f6ff4-49xpx" event={"ID":"94a2c09f-a0f4-4ab0-8bae-116dc938de70","Type":"ContainerStarted","Data":"057ac1392205528223715b7b6379346eb7ba2693b056398a063c209a28d06eea"}
Mar 13 13:07:46.044777 master-0 kubenswrapper[28149]: I0313 13:07:46.044748 28149 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/designate-operator-controller-manager-66d56f6ff4-49xpx"
Mar 13 13:07:46.046439 master-0 kubenswrapper[28149]: I0313 13:07:46.046405 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-6bbb499bbc-4qdjn" event={"ID":"5a9ed8da-031e-4009-b7aa-c1dd970911c6","Type":"ContainerStarted","Data":"426ca545bbd5468e40fbd3778fb5343a97cfd56394f06d9b797d496a970b635f"}
Mar 13 13:07:46.047020 master-0 kubenswrapper[28149]: I0313 13:07:46.046992 28149 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ironic-operator-controller-manager-6bbb499bbc-4qdjn"
Mar 13 13:07:46.049246 master-0 kubenswrapper[28149]: I0313 13:07:46.048301 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-677bd678f7-bql4k" event={"ID":"7cb35574-9b29-4dd8-8ec5-a37816092d10","Type":"ContainerStarted","Data":"23c6f791f065074086ba6e65ee7c2723e8e5dc7bb907b0da4992fa7c2f523e7a"}
Mar 13 13:07:46.049246 master-0 kubenswrapper[28149]: I0313 13:07:46.049014 28149 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/barbican-operator-controller-manager-677bd678f7-bql4k"
Mar 13 13:07:46.050290 master-0 kubenswrapper[28149]: I0313 13:07:46.050248 28149 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-c969dbbcd-6gkdt"]
Mar 13 13:07:46.082594 master-0 kubenswrapper[28149]: I0313 13:07:46.081626 28149 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/mariadb-operator-controller-manager-658d4cdd5-g962k" podStartSLOduration=16.423372571 podStartE2EDuration="35.081598597s" podCreationTimestamp="2026-03-13 13:07:11 +0000 UTC" firstStartedPulling="2026-03-13 13:07:14.918233154 +0000 UTC m=+808.571698313" lastFinishedPulling="2026-03-13 13:07:33.57645918 +0000 UTC m=+827.229924339" observedRunningTime="2026-03-13 13:07:46.055611771 +0000 UTC m=+839.709076940" watchObservedRunningTime="2026-03-13 13:07:46.081598597 +0000 UTC m=+839.735063756"
Mar 13 13:07:46.112196 master-0 kubenswrapper[28149]: I0313 13:07:46.105822 28149 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ironic-operator-controller-manager-6bbb499bbc-4qdjn" podStartSLOduration=15.586683833 podStartE2EDuration="35.105788445s" podCreationTimestamp="2026-03-13 13:07:11 +0000 UTC" firstStartedPulling="2026-03-13 13:07:14.057357038 +0000 UTC m=+807.710822197" lastFinishedPulling="2026-03-13 13:07:33.57646165 +0000 UTC m=+827.229926809" observedRunningTime="2026-03-13 13:07:46.084732784 +0000 UTC m=+839.738197943" watchObservedRunningTime="2026-03-13 13:07:46.105788445 +0000 UTC m=+839.759253604"
Mar 13 13:07:46.137021 master-0 kubenswrapper[28149]: I0313 13:07:46.136758 28149 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-b8c8d7cc8-q7d6n"]
Mar 13 13:07:46.154704 master-0 kubenswrapper[28149]: I0313 13:07:46.152120 28149 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/barbican-operator-controller-manager-677bd678f7-bql4k" podStartSLOduration=2.695980351 podStartE2EDuration="35.152090754s" podCreationTimestamp="2026-03-13 13:07:11 +0000 UTC" firstStartedPulling="2026-03-13 13:07:12.922110937 +0000 UTC m=+806.575576096" lastFinishedPulling="2026-03-13 13:07:45.37822134 +0000 UTC m=+839.031686499" observedRunningTime="2026-03-13 13:07:46.130860177 +0000 UTC m=+839.784325346" watchObservedRunningTime="2026-03-13 13:07:46.152090754 +0000 UTC m=+839.805555913"
Mar 13 13:07:46.237211 master-0 kubenswrapper[28149]: I0313 13:07:46.237110 28149 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/glance-operator-controller-manager-5964f64c48-pdfdc" podStartSLOduration=17.200300621 podStartE2EDuration="35.23708652s" podCreationTimestamp="2026-03-13 13:07:11 +0000 UTC" firstStartedPulling="2026-03-13 13:07:13.631697307 +0000 UTC m=+807.285162466" lastFinishedPulling="2026-03-13 13:07:31.668483206 +0000 UTC m=+825.321948365" observedRunningTime="2026-03-13 13:07:46.176995721 +0000 UTC m=+839.830460890" watchObservedRunningTime="2026-03-13 13:07:46.23708652 +0000 UTC m=+839.890551679"
Mar 13 13:07:46.284480 master-0 kubenswrapper[28149]: I0313 13:07:46.284398 28149 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/designate-operator-controller-manager-66d56f6ff4-49xpx" podStartSLOduration=5.116211347 podStartE2EDuration="35.284374566s" podCreationTimestamp="2026-03-13 13:07:11 +0000 UTC" firstStartedPulling="2026-03-13 13:07:14.215828053 +0000 UTC m=+807.869293212" lastFinishedPulling="2026-03-13 13:07:44.383991272 +0000 UTC m=+838.037456431" observedRunningTime="2026-03-13 13:07:46.2015853 +0000 UTC m=+839.855050469" watchObservedRunningTime="2026-03-13 13:07:46.284374566 +0000 UTC m=+839.937839725"
Mar 13 13:07:47.155094 master-0 kubenswrapper[28149]: I0313 13:07:47.155037 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-b8c8d7cc8-q7d6n" event={"ID":"57e83807-c598-4f45-b92a-e017a07b6997","Type":"ContainerStarted","Data":"7fcf03887f756cf271f0fa413cbf2156b37f7e7bfa916930cb4ff1a7f504f4fb"}
Mar 13 13:07:47.210406 master-0 kubenswrapper[28149]: I0313 13:07:47.210318 28149 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-7795b46f77-gtljk"]
Mar 13 13:07:47.218566 master-0 kubenswrapper[28149]: I0313 13:07:47.213261 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-5c5cb9c4d7-7djhg" event={"ID":"cb979094-d28c-477a-a8c8-91d4b8eb946c","Type":"ContainerStarted","Data":"09e72bcab05907988eb904ac58f72cee57e4835c5c7015e866379448f5d11010"}
Mar 13 13:07:47.218566 master-0 kubenswrapper[28149]: I0313 13:07:47.213794 28149 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/test-operator-controller-manager-5c5cb9c4d7-7djhg"
Mar 13 13:07:47.270547 master-0 kubenswrapper[28149]: I0313 13:07:47.270488 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-77b6666d85-pwqm5" event={"ID":"a2e8c53d-29c4-4a63-b701-8103253c197a","Type":"ContainerStarted","Data":"cf2e91d8343303f8d7aeeb89b0625dc06975eb04beacb4a997e066fca54143f9"}
Mar 13 13:07:47.271574 master-0 kubenswrapper[28149]: I0313 13:07:47.271541 28149 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/heat-operator-controller-manager-77b6666d85-pwqm5"
Mar 13 13:07:47.287012 master-0 kubenswrapper[28149]: I0313 13:07:47.286965 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-c969dbbcd-6gkdt" event={"ID":"162f25e3-ac79-4df1-8615-579dcf5c111e","Type":"ContainerStarted","Data":"5ba514bf6d5240b305baae840f73af2a7911e711d12c5ce74c0828ce10efd333"}
Mar 13 13:07:47.331943 master-0 kubenswrapper[28149]: I0313 13:07:47.328619 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-6d9d6b584d-rm4bg" event={"ID":"70ec85c2-6c2a-44c1-b172-c851765912b5","Type":"ContainerStarted","Data":"ae100c0b59a792c2db2730252641ead3874136f7d973fd43b7ba336a999059fa"}
Mar 13 13:07:47.331943 master-0 kubenswrapper[28149]: I0313 13:07:47.329595 28149 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/horizon-operator-controller-manager-6d9d6b584d-rm4bg"
Mar 13 13:07:47.359159 master-0 kubenswrapper[28149]: I0313 13:07:47.348572 28149 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/test-operator-controller-manager-5c5cb9c4d7-7djhg" podStartSLOduration=5.49064573 podStartE2EDuration="35.348547994s" podCreationTimestamp="2026-03-13 13:07:12 +0000 UTC" firstStartedPulling="2026-03-13 13:07:15.573415512 +0000 UTC m=+809.226880661" lastFinishedPulling="2026-03-13 13:07:45.431317766 +0000 UTC m=+839.084782925" observedRunningTime="2026-03-13 13:07:47.330639659 +0000 UTC m=+840.984104828" watchObservedRunningTime="2026-03-13 13:07:47.348547994 +0000 UTC m=+841.002013153"
Mar 13 13:07:47.371174 master-0 kubenswrapper[28149]: I0313 13:07:47.365689 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-776c5696bf-5z8g2" event={"ID":"b0caec54-e9db-4ace-8b0d-aebafbb6608b","Type":"ContainerStarted","Data":"b7c3ce66a41002ecec74e31adf91ae36b71dc0ebdb435e0759a939ea68d84e88"}
Mar 13 13:07:47.371174 master-0 kubenswrapper[28149]: I0313 13:07:47.366241 28149 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/neutron-operator-controller-manager-776c5696bf-5z8g2"
Mar 13 13:07:47.373119 master-0 kubenswrapper[28149]: I0313 13:07:47.371775 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-569cc54c5-4ns7k" event={"ID":"c0d9cf57-a057-4dd4-9d4c-d292fbcdc501","Type":"ContainerStarted","Data":"b9420db744a5dbb86cecdbc5e40b3b7e80c0f8940504a8ae7a767ea4bfd860e6"}
Mar 13 13:07:47.373119 master-0 kubenswrapper[28149]: I0313 13:07:47.372977 28149 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/nova-operator-controller-manager-569cc54c5-4ns7k"
Mar 13 13:07:47.391070 master-0 kubenswrapper[28149]: I0313 13:07:47.391015 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-984cd4dcf-b6gkp" event={"ID":"24d18dc3-f6a0-4b38-b027-1f92534d6201","Type":"ContainerStarted","Data":"2b23d9ba14ff902d728de36e7406793871a4af0724a37a10a64ea029ce5c9c3b"}
Mar 13 13:07:47.391070 master-0 kubenswrapper[28149]: I0313 13:07:47.391077 28149 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/cinder-operator-controller-manager-984cd4dcf-b6gkp"
Mar 13 13:07:47.400726 master-0 kubenswrapper[28149]: I0313 13:07:47.400640 28149 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/horizon-operator-controller-manager-6d9d6b584d-rm4bg" podStartSLOduration=4.360040551 podStartE2EDuration="36.400613461s" podCreationTimestamp="2026-03-13 13:07:11 +0000 UTC" firstStartedPulling="2026-03-13 13:07:13.390166579 +0000 UTC m=+807.043631738" lastFinishedPulling="2026-03-13 13:07:45.430739489 +0000 UTC m=+839.084204648" observedRunningTime="2026-03-13 13:07:47.38537565 +0000 UTC m=+841.038840819" watchObservedRunningTime="2026-03-13 13:07:47.400613461 +0000 UTC m=+841.054078620"
Mar 13 13:07:47.473195 master-0 kubenswrapper[28149]: I0313 13:07:47.467316 28149 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/heat-operator-controller-manager-77b6666d85-pwqm5" podStartSLOduration=5.103608539 podStartE2EDuration="36.467290022s" podCreationTimestamp="2026-03-13 13:07:11 +0000 UTC" firstStartedPulling="2026-03-13 13:07:14.068042993 +0000 UTC m=+807.721508152" lastFinishedPulling="2026-03-13 13:07:45.431724476 +0000 UTC m=+839.085189635" observedRunningTime="2026-03-13 13:07:47.45635845 +0000 UTC m=+841.109823609" watchObservedRunningTime="2026-03-13 13:07:47.467290022 +0000 UTC m=+841.120755181"
Mar 13 13:07:47.505560 master-0 kubenswrapper[28149]: I0313 13:07:47.505473 28149 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/neutron-operator-controller-manager-776c5696bf-5z8g2" podStartSLOduration=6.010031053 podStartE2EDuration="36.505454976s" podCreationTimestamp="2026-03-13 13:07:11 +0000 UTC" firstStartedPulling="2026-03-13 13:07:14.935967324 +0000 UTC m=+808.589432483" lastFinishedPulling="2026-03-13 13:07:45.431391247 +0000 UTC m=+839.084856406" observedRunningTime="2026-03-13 13:07:47.497118775 +0000 UTC m=+841.150583944" watchObservedRunningTime="2026-03-13 13:07:47.505454976 +0000 UTC m=+841.158920135"
Mar 13 13:07:47.577488 master-0 kubenswrapper[28149]: I0313 13:07:47.574799 28149 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/cinder-operator-controller-manager-984cd4dcf-b6gkp" podStartSLOduration=4.67982671 podStartE2EDuration="36.57477779s" podCreationTimestamp="2026-03-13 13:07:11 +0000 UTC" firstStartedPulling="2026-03-13 13:07:13.537339152 +0000 UTC m=+807.190804311" lastFinishedPulling="2026-03-13 13:07:45.432290232 +0000 UTC m=+839.085755391" observedRunningTime="2026-03-13 13:07:47.544485603 +0000 UTC m=+841.197950762" watchObservedRunningTime="2026-03-13 13:07:47.57477779 +0000 UTC m=+841.228242959"
Mar 13 13:07:47.606617 master-0 kubenswrapper[28149]: I0313 13:07:47.606526 28149 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/nova-operator-controller-manager-569cc54c5-4ns7k" podStartSLOduration=14.413299126 podStartE2EDuration="35.606499065s" podCreationTimestamp="2026-03-13 13:07:12 +0000 UTC" firstStartedPulling="2026-03-13 13:07:15.130439112 +0000 UTC m=+808.783904271" lastFinishedPulling="2026-03-13 13:07:36.323639051 +0000 UTC m=+829.977104210" observedRunningTime="2026-03-13 13:07:47.593680411 +0000 UTC m=+841.247145570" watchObservedRunningTime="2026-03-13 13:07:47.606499065 +0000 UTC m=+841.259964224"
Mar 13 13:07:48.430170 master-0 kubenswrapper[28149]: I0313 13:07:48.430091 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-7795b46f77-gtljk" event={"ID":"c3b5a392-dca1-4bb6-b234-30d8eb87ae21","Type":"ContainerStarted","Data":"353fd85361c4d88278bb07f746a8214007bce318b19c25f98bc41afda155f463"}
Mar 13 13:07:48.431043 master-0 kubenswrapper[28149]: I0313 13:07:48.431016 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod"
pod="openstack-operators/openstack-operator-controller-manager-7795b46f77-gtljk" event={"ID":"c3b5a392-dca1-4bb6-b234-30d8eb87ae21","Type":"ContainerStarted","Data":"4ed097855252b3506d19e6d89c1ce0ffd44dc42ee839cab420e1745f66e28d33"} Mar 13 13:07:48.431227 master-0 kubenswrapper[28149]: I0313 13:07:48.431209 28149 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-manager-7795b46f77-gtljk" Mar 13 13:07:48.458232 master-0 kubenswrapper[28149]: I0313 13:07:48.458170 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-68f45f9d9f-cjfg7" event={"ID":"3f835be3-b114-4593-af89-119b729df40a","Type":"ContainerStarted","Data":"e35e93bbbf1d0f55c9b69dc6c83109fc06a7f85cfa2fecdb5f7161fd3571714e"} Mar 13 13:07:48.459467 master-0 kubenswrapper[28149]: I0313 13:07:48.459439 28149 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/manila-operator-controller-manager-68f45f9d9f-cjfg7" Mar 13 13:07:48.478035 master-0 kubenswrapper[28149]: I0313 13:07:48.477988 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-5f4f55cb5c-qw5dr" event={"ID":"8418da33-bbf7-4930-8e12-07bc1172da01","Type":"ContainerStarted","Data":"740b29222aeca6668d6308a34cc82e502eb4a78250f027548fad843fb2afd85e"} Mar 13 13:07:48.478979 master-0 kubenswrapper[28149]: I0313 13:07:48.478962 28149 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/octavia-operator-controller-manager-5f4f55cb5c-qw5dr" Mar 13 13:07:48.483796 master-0 kubenswrapper[28149]: I0313 13:07:48.483750 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-6cd66dbd4b-df58d" event={"ID":"cc4c1517-f5c9-4e2e-9659-e1ad6ce7f4de","Type":"ContainerStarted","Data":"8ef0387331d6580782870da0e5f421fc68167d790b2898e067907b0dc1ace78b"} 
Mar 13 13:07:48.484944 master-0 kubenswrapper[28149]: I0313 13:07:48.484926 28149 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/telemetry-operator-controller-manager-6cd66dbd4b-df58d" Mar 13 13:07:48.486543 master-0 kubenswrapper[28149]: I0313 13:07:48.486523 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-6dd88c6f67-c55zw" event={"ID":"844a5475-8fda-433c-b083-26608607b8bb","Type":"ContainerStarted","Data":"17af160a868c6c80e5136ca7e00bebbade842c0a7481bf57aff0181846ea7de3"} Mar 13 13:07:48.487095 master-0 kubenswrapper[28149]: I0313 13:07:48.487077 28149 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/watcher-operator-controller-manager-6dd88c6f67-c55zw" Mar 13 13:07:48.513517 master-0 kubenswrapper[28149]: I0313 13:07:48.513463 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-684f77d66d-thm6q" event={"ID":"bee4fa71-7893-41d2-8512-5d26c6da9913","Type":"ContainerStarted","Data":"3cc2fbc92b81f69debc1d3d511e1b47946e916134cc1332054ddeadb48920282"} Mar 13 13:07:48.513859 master-0 kubenswrapper[28149]: I0313 13:07:48.513843 28149 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/keystone-operator-controller-manager-684f77d66d-thm6q" Mar 13 13:07:48.524166 master-0 kubenswrapper[28149]: I0313 13:07:48.523171 28149 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-manager-7795b46f77-gtljk" podStartSLOduration=36.523130471 podStartE2EDuration="36.523130471s" podCreationTimestamp="2026-03-13 13:07:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 13:07:48.507776367 +0000 UTC m=+842.161241526" watchObservedRunningTime="2026-03-13 13:07:48.523130471 
+0000 UTC m=+842.176595630" Mar 13 13:07:48.532170 master-0 kubenswrapper[28149]: I0313 13:07:48.527937 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-574d45c66c-7nr7s" event={"ID":"cb86bcb9-ed8a-4046-99ed-8c9963f4af4d","Type":"ContainerStarted","Data":"dea210790bbec317feafd5e3a883458d516376bac3ae53715bf0325234ecb64d"} Mar 13 13:07:48.532170 master-0 kubenswrapper[28149]: I0313 13:07:48.528972 28149 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/placement-operator-controller-manager-574d45c66c-7nr7s" Mar 13 13:07:48.565175 master-0 kubenswrapper[28149]: I0313 13:07:48.561499 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-bbc5b68f9-dnttw" event={"ID":"abc2aa99-ac15-433b-b478-711da24b8dbf","Type":"ContainerStarted","Data":"8376bdf39dc8b645dc4fc250dd4fbaababd97ab46d7eeadb80f800cde251d2c7"} Mar 13 13:07:48.565175 master-0 kubenswrapper[28149]: I0313 13:07:48.562562 28149 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ovn-operator-controller-manager-bbc5b68f9-dnttw" Mar 13 13:07:48.602169 master-0 kubenswrapper[28149]: I0313 13:07:48.594340 28149 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/octavia-operator-controller-manager-5f4f55cb5c-qw5dr" podStartSLOduration=6.009243946 podStartE2EDuration="36.594322236s" podCreationTimestamp="2026-03-13 13:07:12 +0000 UTC" firstStartedPulling="2026-03-13 13:07:14.905006758 +0000 UTC m=+808.558471917" lastFinishedPulling="2026-03-13 13:07:45.490085048 +0000 UTC m=+839.143550207" observedRunningTime="2026-03-13 13:07:48.590518651 +0000 UTC m=+842.243983810" watchObservedRunningTime="2026-03-13 13:07:48.594322236 +0000 UTC m=+842.247787395" Mar 13 13:07:48.602506 master-0 kubenswrapper[28149]: I0313 13:07:48.602433 28149 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openstack-operators/swift-operator-controller-manager-677c674df7-zj78q" event={"ID":"9a01f1d0-3f33-41a0-be76-39ce52e88fab","Type":"ContainerStarted","Data":"7c0add37eb47cd9851fe0892ed4ea8fd9273ba85ff77419097587f9485557183"} Mar 13 13:07:48.606158 master-0 kubenswrapper[28149]: I0313 13:07:48.603257 28149 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/swift-operator-controller-manager-677c674df7-zj78q" Mar 13 13:07:48.606378 master-0 kubenswrapper[28149]: I0313 13:07:48.606339 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-2wcnq" event={"ID":"ce06c419-6c1e-4a1d-b8bc-e8f96a9195f6","Type":"ContainerStarted","Data":"88812ee0dffac3804efb67037fda939fe96420fbdbdf0263f2b8caf8ec5780f3"} Mar 13 13:07:48.677162 master-0 kubenswrapper[28149]: I0313 13:07:48.672846 28149 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/telemetry-operator-controller-manager-6cd66dbd4b-df58d" podStartSLOduration=6.756364883 podStartE2EDuration="36.672823713s" podCreationTimestamp="2026-03-13 13:07:12 +0000 UTC" firstStartedPulling="2026-03-13 13:07:15.573632598 +0000 UTC m=+809.227097757" lastFinishedPulling="2026-03-13 13:07:45.490091428 +0000 UTC m=+839.143556587" observedRunningTime="2026-03-13 13:07:48.647663289 +0000 UTC m=+842.301128458" watchObservedRunningTime="2026-03-13 13:07:48.672823713 +0000 UTC m=+842.326288862" Mar 13 13:07:48.703931 master-0 kubenswrapper[28149]: I0313 13:07:48.703783 28149 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/manila-operator-controller-manager-68f45f9d9f-cjfg7" podStartSLOduration=6.502164058 podStartE2EDuration="37.703756357s" podCreationTimestamp="2026-03-13 13:07:11 +0000 UTC" firstStartedPulling="2026-03-13 13:07:14.229600213 +0000 UTC m=+807.883065372" lastFinishedPulling="2026-03-13 13:07:45.431192512 +0000 UTC m=+839.084657671" 
observedRunningTime="2026-03-13 13:07:48.683734745 +0000 UTC m=+842.337199914" watchObservedRunningTime="2026-03-13 13:07:48.703756357 +0000 UTC m=+842.357221516" Mar 13 13:07:48.768167 master-0 kubenswrapper[28149]: I0313 13:07:48.758072 28149 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/watcher-operator-controller-manager-6dd88c6f67-c55zw" podStartSLOduration=6.900945554 podStartE2EDuration="36.758044996s" podCreationTimestamp="2026-03-13 13:07:12 +0000 UTC" firstStartedPulling="2026-03-13 13:07:15.573978257 +0000 UTC m=+809.227443416" lastFinishedPulling="2026-03-13 13:07:45.431077699 +0000 UTC m=+839.084542858" observedRunningTime="2026-03-13 13:07:48.731421801 +0000 UTC m=+842.384886960" watchObservedRunningTime="2026-03-13 13:07:48.758044996 +0000 UTC m=+842.411510155" Mar 13 13:07:48.817171 master-0 kubenswrapper[28149]: I0313 13:07:48.816450 28149 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/keystone-operator-controller-manager-684f77d66d-thm6q" podStartSLOduration=6.466422562 podStartE2EDuration="37.816427507s" podCreationTimestamp="2026-03-13 13:07:11 +0000 UTC" firstStartedPulling="2026-03-13 13:07:14.14109644 +0000 UTC m=+807.794561599" lastFinishedPulling="2026-03-13 13:07:45.491101385 +0000 UTC m=+839.144566544" observedRunningTime="2026-03-13 13:07:48.806532594 +0000 UTC m=+842.459997743" watchObservedRunningTime="2026-03-13 13:07:48.816427507 +0000 UTC m=+842.469892666" Mar 13 13:07:48.889160 master-0 kubenswrapper[28149]: I0313 13:07:48.887317 28149 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ovn-operator-controller-manager-bbc5b68f9-dnttw" podStartSLOduration=6.52553527 podStartE2EDuration="36.887291784s" podCreationTimestamp="2026-03-13 13:07:12 +0000 UTC" firstStartedPulling="2026-03-13 13:07:15.150936048 +0000 UTC m=+808.804401207" lastFinishedPulling="2026-03-13 13:07:45.512692562 +0000 UTC 
m=+839.166157721" observedRunningTime="2026-03-13 13:07:48.8835471 +0000 UTC m=+842.537012259" watchObservedRunningTime="2026-03-13 13:07:48.887291784 +0000 UTC m=+842.540756943" Mar 13 13:07:48.924975 master-0 kubenswrapper[28149]: I0313 13:07:48.924888 28149 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-2wcnq" podStartSLOduration=6.941258036 podStartE2EDuration="36.924868941s" podCreationTimestamp="2026-03-13 13:07:12 +0000 UTC" firstStartedPulling="2026-03-13 13:07:15.573541055 +0000 UTC m=+809.227006214" lastFinishedPulling="2026-03-13 13:07:45.55715196 +0000 UTC m=+839.210617119" observedRunningTime="2026-03-13 13:07:48.920163941 +0000 UTC m=+842.573629100" watchObservedRunningTime="2026-03-13 13:07:48.924868941 +0000 UTC m=+842.578334100" Mar 13 13:07:48.968079 master-0 kubenswrapper[28149]: I0313 13:07:48.966342 28149 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/placement-operator-controller-manager-574d45c66c-7nr7s" podStartSLOduration=6.469637637 podStartE2EDuration="36.966314985s" podCreationTimestamp="2026-03-13 13:07:12 +0000 UTC" firstStartedPulling="2026-03-13 13:07:14.93474234 +0000 UTC m=+808.588207499" lastFinishedPulling="2026-03-13 13:07:45.431419688 +0000 UTC m=+839.084884847" observedRunningTime="2026-03-13 13:07:48.949546202 +0000 UTC m=+842.603011381" watchObservedRunningTime="2026-03-13 13:07:48.966314985 +0000 UTC m=+842.619780144" Mar 13 13:07:49.002320 master-0 kubenswrapper[28149]: I0313 13:07:49.002240 28149 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/swift-operator-controller-manager-677c674df7-zj78q" podStartSLOduration=6.659501158 podStartE2EDuration="37.002220916s" podCreationTimestamp="2026-03-13 13:07:12 +0000 UTC" firstStartedPulling="2026-03-13 13:07:15.148791909 +0000 UTC m=+808.802257078" lastFinishedPulling="2026-03-13 13:07:45.491511687 
+0000 UTC m=+839.144976836" observedRunningTime="2026-03-13 13:07:48.993918487 +0000 UTC m=+842.647383646" watchObservedRunningTime="2026-03-13 13:07:49.002220916 +0000 UTC m=+842.655686075" Mar 13 13:07:52.016325 master-0 kubenswrapper[28149]: I0313 13:07:52.016259 28149 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/barbican-operator-controller-manager-677bd678f7-bql4k" Mar 13 13:07:52.097682 master-0 kubenswrapper[28149]: I0313 13:07:52.097644 28149 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/cinder-operator-controller-manager-984cd4dcf-b6gkp" Mar 13 13:07:52.117305 master-0 kubenswrapper[28149]: I0313 13:07:52.117250 28149 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/designate-operator-controller-manager-66d56f6ff4-49xpx" Mar 13 13:07:52.248213 master-0 kubenswrapper[28149]: I0313 13:07:52.247519 28149 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/glance-operator-controller-manager-5964f64c48-pdfdc" Mar 13 13:07:52.323021 master-0 kubenswrapper[28149]: I0313 13:07:52.322904 28149 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/heat-operator-controller-manager-77b6666d85-pwqm5" Mar 13 13:07:52.430463 master-0 kubenswrapper[28149]: I0313 13:07:52.430412 28149 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/horizon-operator-controller-manager-6d9d6b584d-rm4bg" Mar 13 13:07:52.683437 master-0 kubenswrapper[28149]: I0313 13:07:52.683046 28149 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ironic-operator-controller-manager-6bbb499bbc-4qdjn" Mar 13 13:07:52.805259 master-0 kubenswrapper[28149]: I0313 13:07:52.805215 28149 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openstack-operators/keystone-operator-controller-manager-684f77d66d-thm6q" Mar 13 13:07:52.919167 master-0 kubenswrapper[28149]: I0313 13:07:52.918455 28149 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/mariadb-operator-controller-manager-658d4cdd5-g962k" Mar 13 13:07:52.947995 master-0 kubenswrapper[28149]: I0313 13:07:52.947878 28149 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/manila-operator-controller-manager-68f45f9d9f-cjfg7" Mar 13 13:07:52.961073 master-0 kubenswrapper[28149]: I0313 13:07:52.961025 28149 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/nova-operator-controller-manager-569cc54c5-4ns7k" Mar 13 13:07:53.102681 master-0 kubenswrapper[28149]: I0313 13:07:53.101296 28149 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/octavia-operator-controller-manager-5f4f55cb5c-qw5dr" Mar 13 13:07:53.185774 master-0 kubenswrapper[28149]: I0313 13:07:53.185730 28149 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ovn-operator-controller-manager-bbc5b68f9-dnttw" Mar 13 13:07:53.223425 master-0 kubenswrapper[28149]: I0313 13:07:53.223313 28149 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/neutron-operator-controller-manager-776c5696bf-5z8g2" Mar 13 13:07:53.229349 master-0 kubenswrapper[28149]: I0313 13:07:53.229259 28149 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/placement-operator-controller-manager-574d45c66c-7nr7s" Mar 13 13:07:53.299130 master-0 kubenswrapper[28149]: I0313 13:07:53.297307 28149 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/swift-operator-controller-manager-677c674df7-zj78q" Mar 13 13:07:53.401865 master-0 kubenswrapper[28149]: I0313 13:07:53.401720 28149 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/test-operator-controller-manager-5c5cb9c4d7-7djhg" Mar 13 13:07:53.403377 master-0 kubenswrapper[28149]: I0313 13:07:53.403341 28149 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/telemetry-operator-controller-manager-6cd66dbd4b-df58d" Mar 13 13:07:53.515904 master-0 kubenswrapper[28149]: I0313 13:07:53.515752 28149 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/watcher-operator-controller-manager-6dd88c6f67-c55zw" Mar 13 13:07:55.592207 master-0 kubenswrapper[28149]: I0313 13:07:55.591594 28149 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-manager-7795b46f77-gtljk" Mar 13 13:07:56.021537 master-0 kubenswrapper[28149]: I0313 13:07:56.021496 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-b8c8d7cc8-q7d6n" event={"ID":"57e83807-c598-4f45-b92a-e017a07b6997","Type":"ContainerStarted","Data":"c1323df4ae3e6308ccbab9c009b09b9ac545ae5cbd49a485a25640845bf36447"} Mar 13 13:07:56.022798 master-0 kubenswrapper[28149]: I0313 13:07:56.022779 28149 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/infra-operator-controller-manager-b8c8d7cc8-q7d6n" Mar 13 13:07:56.024555 master-0 kubenswrapper[28149]: I0313 13:07:56.024532 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-c969dbbcd-6gkdt" event={"ID":"162f25e3-ac79-4df1-8615-579dcf5c111e","Type":"ContainerStarted","Data":"fa4581ef2f353e5c2c02387017764f8bb4dfb46b624170d70426edf73f0fcefe"} Mar 13 13:07:56.024735 master-0 kubenswrapper[28149]: I0313 13:07:56.024717 28149 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack-operators/openstack-baremetal-operator-controller-manager-c969dbbcd-6gkdt" Mar 13 13:07:56.052263 master-0 kubenswrapper[28149]: I0313 13:07:56.052177 28149 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/infra-operator-controller-manager-b8c8d7cc8-q7d6n" podStartSLOduration=35.86120375 podStartE2EDuration="45.052152443s" podCreationTimestamp="2026-03-13 13:07:11 +0000 UTC" firstStartedPulling="2026-03-13 13:07:46.303612807 +0000 UTC m=+839.957077966" lastFinishedPulling="2026-03-13 13:07:55.4945615 +0000 UTC m=+849.148026659" observedRunningTime="2026-03-13 13:07:56.05059241 +0000 UTC m=+849.704057589" watchObservedRunningTime="2026-03-13 13:07:56.052152443 +0000 UTC m=+849.705617612" Mar 13 13:07:56.095033 master-0 kubenswrapper[28149]: I0313 13:07:56.094930 28149 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-baremetal-operator-controller-manager-c969dbbcd-6gkdt" podStartSLOduration=34.817104142 podStartE2EDuration="44.094902004s" podCreationTimestamp="2026-03-13 13:07:12 +0000 UTC" firstStartedPulling="2026-03-13 13:07:46.217699334 +0000 UTC m=+839.871164493" lastFinishedPulling="2026-03-13 13:07:55.495497206 +0000 UTC m=+849.148962355" observedRunningTime="2026-03-13 13:07:56.09007345 +0000 UTC m=+849.743538619" watchObservedRunningTime="2026-03-13 13:07:56.094902004 +0000 UTC m=+849.748367163" Mar 13 13:08:04.742864 master-0 kubenswrapper[28149]: I0313 13:08:04.742810 28149 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/infra-operator-controller-manager-b8c8d7cc8-q7d6n" Mar 13 13:08:04.927859 master-0 kubenswrapper[28149]: I0313 13:08:04.926735 28149 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-baremetal-operator-controller-manager-c969dbbcd-6gkdt" Mar 13 13:08:48.827178 master-0 kubenswrapper[28149]: I0313 13:08:48.820322 28149 
kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-685c76cf85-q6j75"] Mar 13 13:08:48.827178 master-0 kubenswrapper[28149]: I0313 13:08:48.822753 28149 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-685c76cf85-q6j75" Mar 13 13:08:48.827178 master-0 kubenswrapper[28149]: I0313 13:08:48.826112 28149 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns" Mar 13 13:08:48.828110 master-0 kubenswrapper[28149]: I0313 13:08:48.827928 28149 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"kube-root-ca.crt" Mar 13 13:08:48.828110 master-0 kubenswrapper[28149]: I0313 13:08:48.828083 28149 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openshift-service-ca.crt" Mar 13 13:08:48.847165 master-0 kubenswrapper[28149]: I0313 13:08:48.845095 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f65c5\" (UniqueName: \"kubernetes.io/projected/2c02a249-c9cd-4660-b69b-cf03f864c992-kube-api-access-f65c5\") pod \"dnsmasq-dns-685c76cf85-q6j75\" (UID: \"2c02a249-c9cd-4660-b69b-cf03f864c992\") " pod="openstack/dnsmasq-dns-685c76cf85-q6j75" Mar 13 13:08:48.847165 master-0 kubenswrapper[28149]: I0313 13:08:48.845236 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2c02a249-c9cd-4660-b69b-cf03f864c992-config\") pod \"dnsmasq-dns-685c76cf85-q6j75\" (UID: \"2c02a249-c9cd-4660-b69b-cf03f864c992\") " pod="openstack/dnsmasq-dns-685c76cf85-q6j75" Mar 13 13:08:48.870168 master-0 kubenswrapper[28149]: I0313 13:08:48.864177 28149 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-685c76cf85-q6j75"] Mar 13 13:08:48.930980 master-0 kubenswrapper[28149]: I0313 13:08:48.930920 28149 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openstack/dnsmasq-dns-8476fd89bc-ljn2q"] Mar 13 13:08:48.933128 master-0 kubenswrapper[28149]: I0313 13:08:48.933109 28149 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-8476fd89bc-ljn2q" Mar 13 13:08:48.936288 master-0 kubenswrapper[28149]: I0313 13:08:48.935639 28149 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-svc" Mar 13 13:08:48.941111 master-0 kubenswrapper[28149]: I0313 13:08:48.941040 28149 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-8476fd89bc-ljn2q"] Mar 13 13:08:48.951523 master-0 kubenswrapper[28149]: I0313 13:08:48.950174 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c04eff7d-dbbf-4174-8d0e-71046963aca5-dns-svc\") pod \"dnsmasq-dns-8476fd89bc-ljn2q\" (UID: \"c04eff7d-dbbf-4174-8d0e-71046963aca5\") " pod="openstack/dnsmasq-dns-8476fd89bc-ljn2q" Mar 13 13:08:48.951523 master-0 kubenswrapper[28149]: I0313 13:08:48.950326 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c04eff7d-dbbf-4174-8d0e-71046963aca5-config\") pod \"dnsmasq-dns-8476fd89bc-ljn2q\" (UID: \"c04eff7d-dbbf-4174-8d0e-71046963aca5\") " pod="openstack/dnsmasq-dns-8476fd89bc-ljn2q" Mar 13 13:08:48.951523 master-0 kubenswrapper[28149]: I0313 13:08:48.950416 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f65c5\" (UniqueName: \"kubernetes.io/projected/2c02a249-c9cd-4660-b69b-cf03f864c992-kube-api-access-f65c5\") pod \"dnsmasq-dns-685c76cf85-q6j75\" (UID: \"2c02a249-c9cd-4660-b69b-cf03f864c992\") " pod="openstack/dnsmasq-dns-685c76cf85-q6j75" Mar 13 13:08:48.951523 master-0 kubenswrapper[28149]: I0313 13:08:48.950450 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"kube-api-access-mwpbm\" (UniqueName: \"kubernetes.io/projected/c04eff7d-dbbf-4174-8d0e-71046963aca5-kube-api-access-mwpbm\") pod \"dnsmasq-dns-8476fd89bc-ljn2q\" (UID: \"c04eff7d-dbbf-4174-8d0e-71046963aca5\") " pod="openstack/dnsmasq-dns-8476fd89bc-ljn2q" Mar 13 13:08:48.951523 master-0 kubenswrapper[28149]: I0313 13:08:48.950476 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2c02a249-c9cd-4660-b69b-cf03f864c992-config\") pod \"dnsmasq-dns-685c76cf85-q6j75\" (UID: \"2c02a249-c9cd-4660-b69b-cf03f864c992\") " pod="openstack/dnsmasq-dns-685c76cf85-q6j75" Mar 13 13:08:48.951883 master-0 kubenswrapper[28149]: I0313 13:08:48.951712 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2c02a249-c9cd-4660-b69b-cf03f864c992-config\") pod \"dnsmasq-dns-685c76cf85-q6j75\" (UID: \"2c02a249-c9cd-4660-b69b-cf03f864c992\") " pod="openstack/dnsmasq-dns-685c76cf85-q6j75" Mar 13 13:08:48.982477 master-0 kubenswrapper[28149]: I0313 13:08:48.981076 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f65c5\" (UniqueName: \"kubernetes.io/projected/2c02a249-c9cd-4660-b69b-cf03f864c992-kube-api-access-f65c5\") pod \"dnsmasq-dns-685c76cf85-q6j75\" (UID: \"2c02a249-c9cd-4660-b69b-cf03f864c992\") " pod="openstack/dnsmasq-dns-685c76cf85-q6j75" Mar 13 13:08:49.052446 master-0 kubenswrapper[28149]: I0313 13:08:49.052361 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c04eff7d-dbbf-4174-8d0e-71046963aca5-config\") pod \"dnsmasq-dns-8476fd89bc-ljn2q\" (UID: \"c04eff7d-dbbf-4174-8d0e-71046963aca5\") " pod="openstack/dnsmasq-dns-8476fd89bc-ljn2q" Mar 13 13:08:49.052700 master-0 kubenswrapper[28149]: I0313 13:08:49.052489 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-mwpbm\" (UniqueName: \"kubernetes.io/projected/c04eff7d-dbbf-4174-8d0e-71046963aca5-kube-api-access-mwpbm\") pod \"dnsmasq-dns-8476fd89bc-ljn2q\" (UID: \"c04eff7d-dbbf-4174-8d0e-71046963aca5\") " pod="openstack/dnsmasq-dns-8476fd89bc-ljn2q" Mar 13 13:08:49.052700 master-0 kubenswrapper[28149]: I0313 13:08:49.052555 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c04eff7d-dbbf-4174-8d0e-71046963aca5-dns-svc\") pod \"dnsmasq-dns-8476fd89bc-ljn2q\" (UID: \"c04eff7d-dbbf-4174-8d0e-71046963aca5\") " pod="openstack/dnsmasq-dns-8476fd89bc-ljn2q" Mar 13 13:08:49.053700 master-0 kubenswrapper[28149]: I0313 13:08:49.053669 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c04eff7d-dbbf-4174-8d0e-71046963aca5-dns-svc\") pod \"dnsmasq-dns-8476fd89bc-ljn2q\" (UID: \"c04eff7d-dbbf-4174-8d0e-71046963aca5\") " pod="openstack/dnsmasq-dns-8476fd89bc-ljn2q" Mar 13 13:08:49.054176 master-0 kubenswrapper[28149]: I0313 13:08:49.054118 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c04eff7d-dbbf-4174-8d0e-71046963aca5-config\") pod \"dnsmasq-dns-8476fd89bc-ljn2q\" (UID: \"c04eff7d-dbbf-4174-8d0e-71046963aca5\") " pod="openstack/dnsmasq-dns-8476fd89bc-ljn2q" Mar 13 13:08:49.071318 master-0 kubenswrapper[28149]: I0313 13:08:49.071245 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mwpbm\" (UniqueName: \"kubernetes.io/projected/c04eff7d-dbbf-4174-8d0e-71046963aca5-kube-api-access-mwpbm\") pod \"dnsmasq-dns-8476fd89bc-ljn2q\" (UID: \"c04eff7d-dbbf-4174-8d0e-71046963aca5\") " pod="openstack/dnsmasq-dns-8476fd89bc-ljn2q" Mar 13 13:08:49.179100 master-0 kubenswrapper[28149]: I0313 13:08:49.178933 28149 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-685c76cf85-q6j75" Mar 13 13:08:49.254261 master-0 kubenswrapper[28149]: I0313 13:08:49.253884 28149 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-8476fd89bc-ljn2q" Mar 13 13:08:49.777192 master-0 kubenswrapper[28149]: I0313 13:08:49.771248 28149 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-685c76cf85-q6j75"] Mar 13 13:08:49.794247 master-0 kubenswrapper[28149]: I0313 13:08:49.790813 28149 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-586dbdbb8c-7z4lg"] Mar 13 13:08:49.803167 master-0 kubenswrapper[28149]: I0313 13:08:49.801915 28149 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-586dbdbb8c-7z4lg" Mar 13 13:08:49.971679 master-0 kubenswrapper[28149]: I0313 13:08:49.971615 28149 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-586dbdbb8c-7z4lg"] Mar 13 13:08:50.078876 master-0 kubenswrapper[28149]: I0313 13:08:50.069674 28149 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-685c76cf85-q6j75"] Mar 13 13:08:50.078876 master-0 kubenswrapper[28149]: I0313 13:08:50.070671 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g6cwt\" (UniqueName: \"kubernetes.io/projected/52e53cd9-c831-4aa8-ae1c-5912efb14c13-kube-api-access-g6cwt\") pod \"dnsmasq-dns-586dbdbb8c-7z4lg\" (UID: \"52e53cd9-c831-4aa8-ae1c-5912efb14c13\") " pod="openstack/dnsmasq-dns-586dbdbb8c-7z4lg" Mar 13 13:08:50.078876 master-0 kubenswrapper[28149]: I0313 13:08:50.070788 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/52e53cd9-c831-4aa8-ae1c-5912efb14c13-config\") pod \"dnsmasq-dns-586dbdbb8c-7z4lg\" (UID: \"52e53cd9-c831-4aa8-ae1c-5912efb14c13\") " 
pod="openstack/dnsmasq-dns-586dbdbb8c-7z4lg" Mar 13 13:08:50.078876 master-0 kubenswrapper[28149]: I0313 13:08:50.070819 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/52e53cd9-c831-4aa8-ae1c-5912efb14c13-dns-svc\") pod \"dnsmasq-dns-586dbdbb8c-7z4lg\" (UID: \"52e53cd9-c831-4aa8-ae1c-5912efb14c13\") " pod="openstack/dnsmasq-dns-586dbdbb8c-7z4lg" Mar 13 13:08:50.084395 master-0 kubenswrapper[28149]: W0313 13:08:50.081932 28149 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2c02a249_c9cd_4660_b69b_cf03f864c992.slice/crio-004a3f69de1691c71f4bf8470fbe3704b02e780ab5fd9e3278638adfc59b19a7 WatchSource:0}: Error finding container 004a3f69de1691c71f4bf8470fbe3704b02e780ab5fd9e3278638adfc59b19a7: Status 404 returned error can't find the container with id 004a3f69de1691c71f4bf8470fbe3704b02e780ab5fd9e3278638adfc59b19a7 Mar 13 13:08:50.175183 master-0 kubenswrapper[28149]: I0313 13:08:50.174610 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g6cwt\" (UniqueName: \"kubernetes.io/projected/52e53cd9-c831-4aa8-ae1c-5912efb14c13-kube-api-access-g6cwt\") pod \"dnsmasq-dns-586dbdbb8c-7z4lg\" (UID: \"52e53cd9-c831-4aa8-ae1c-5912efb14c13\") " pod="openstack/dnsmasq-dns-586dbdbb8c-7z4lg" Mar 13 13:08:50.175183 master-0 kubenswrapper[28149]: I0313 13:08:50.174745 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/52e53cd9-c831-4aa8-ae1c-5912efb14c13-config\") pod \"dnsmasq-dns-586dbdbb8c-7z4lg\" (UID: \"52e53cd9-c831-4aa8-ae1c-5912efb14c13\") " pod="openstack/dnsmasq-dns-586dbdbb8c-7z4lg" Mar 13 13:08:50.175183 master-0 kubenswrapper[28149]: I0313 13:08:50.174787 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: 
\"kubernetes.io/configmap/52e53cd9-c831-4aa8-ae1c-5912efb14c13-dns-svc\") pod \"dnsmasq-dns-586dbdbb8c-7z4lg\" (UID: \"52e53cd9-c831-4aa8-ae1c-5912efb14c13\") " pod="openstack/dnsmasq-dns-586dbdbb8c-7z4lg" Mar 13 13:08:50.180161 master-0 kubenswrapper[28149]: I0313 13:08:50.175652 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/52e53cd9-c831-4aa8-ae1c-5912efb14c13-dns-svc\") pod \"dnsmasq-dns-586dbdbb8c-7z4lg\" (UID: \"52e53cd9-c831-4aa8-ae1c-5912efb14c13\") " pod="openstack/dnsmasq-dns-586dbdbb8c-7z4lg" Mar 13 13:08:50.180161 master-0 kubenswrapper[28149]: I0313 13:08:50.177339 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/52e53cd9-c831-4aa8-ae1c-5912efb14c13-config\") pod \"dnsmasq-dns-586dbdbb8c-7z4lg\" (UID: \"52e53cd9-c831-4aa8-ae1c-5912efb14c13\") " pod="openstack/dnsmasq-dns-586dbdbb8c-7z4lg" Mar 13 13:08:50.251393 master-0 kubenswrapper[28149]: I0313 13:08:50.250747 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g6cwt\" (UniqueName: \"kubernetes.io/projected/52e53cd9-c831-4aa8-ae1c-5912efb14c13-kube-api-access-g6cwt\") pod \"dnsmasq-dns-586dbdbb8c-7z4lg\" (UID: \"52e53cd9-c831-4aa8-ae1c-5912efb14c13\") " pod="openstack/dnsmasq-dns-586dbdbb8c-7z4lg" Mar 13 13:08:50.473023 master-0 kubenswrapper[28149]: I0313 13:08:50.472246 28149 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-586dbdbb8c-7z4lg" Mar 13 13:08:50.532232 master-0 kubenswrapper[28149]: I0313 13:08:50.532124 28149 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-8476fd89bc-ljn2q"] Mar 13 13:08:50.619268 master-0 kubenswrapper[28149]: I0313 13:08:50.619197 28149 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-8476fd89bc-ljn2q"] Mar 13 13:08:50.636631 master-0 kubenswrapper[28149]: I0313 13:08:50.636453 28149 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6ff8fd9d5c-gztb5"] Mar 13 13:08:50.640176 master-0 kubenswrapper[28149]: I0313 13:08:50.639801 28149 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6ff8fd9d5c-gztb5" Mar 13 13:08:50.655169 master-0 kubenswrapper[28149]: I0313 13:08:50.654366 28149 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6ff8fd9d5c-gztb5"] Mar 13 13:08:50.865871 master-0 kubenswrapper[28149]: I0313 13:08:50.854189 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sbdgf\" (UniqueName: \"kubernetes.io/projected/afde1d0c-cef9-4fb3-94d0-f88cab0b4e01-kube-api-access-sbdgf\") pod \"dnsmasq-dns-6ff8fd9d5c-gztb5\" (UID: \"afde1d0c-cef9-4fb3-94d0-f88cab0b4e01\") " pod="openstack/dnsmasq-dns-6ff8fd9d5c-gztb5" Mar 13 13:08:50.865871 master-0 kubenswrapper[28149]: I0313 13:08:50.854277 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/afde1d0c-cef9-4fb3-94d0-f88cab0b4e01-config\") pod \"dnsmasq-dns-6ff8fd9d5c-gztb5\" (UID: \"afde1d0c-cef9-4fb3-94d0-f88cab0b4e01\") " pod="openstack/dnsmasq-dns-6ff8fd9d5c-gztb5" Mar 13 13:08:50.865871 master-0 kubenswrapper[28149]: I0313 13:08:50.854338 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"dns-svc\" (UniqueName: \"kubernetes.io/configmap/afde1d0c-cef9-4fb3-94d0-f88cab0b4e01-dns-svc\") pod \"dnsmasq-dns-6ff8fd9d5c-gztb5\" (UID: \"afde1d0c-cef9-4fb3-94d0-f88cab0b4e01\") " pod="openstack/dnsmasq-dns-6ff8fd9d5c-gztb5" Mar 13 13:08:50.956406 master-0 kubenswrapper[28149]: I0313 13:08:50.956341 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sbdgf\" (UniqueName: \"kubernetes.io/projected/afde1d0c-cef9-4fb3-94d0-f88cab0b4e01-kube-api-access-sbdgf\") pod \"dnsmasq-dns-6ff8fd9d5c-gztb5\" (UID: \"afde1d0c-cef9-4fb3-94d0-f88cab0b4e01\") " pod="openstack/dnsmasq-dns-6ff8fd9d5c-gztb5" Mar 13 13:08:50.956406 master-0 kubenswrapper[28149]: I0313 13:08:50.956389 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/afde1d0c-cef9-4fb3-94d0-f88cab0b4e01-config\") pod \"dnsmasq-dns-6ff8fd9d5c-gztb5\" (UID: \"afde1d0c-cef9-4fb3-94d0-f88cab0b4e01\") " pod="openstack/dnsmasq-dns-6ff8fd9d5c-gztb5" Mar 13 13:08:50.956809 master-0 kubenswrapper[28149]: I0313 13:08:50.956435 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/afde1d0c-cef9-4fb3-94d0-f88cab0b4e01-dns-svc\") pod \"dnsmasq-dns-6ff8fd9d5c-gztb5\" (UID: \"afde1d0c-cef9-4fb3-94d0-f88cab0b4e01\") " pod="openstack/dnsmasq-dns-6ff8fd9d5c-gztb5" Mar 13 13:08:50.960207 master-0 kubenswrapper[28149]: I0313 13:08:50.958285 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/afde1d0c-cef9-4fb3-94d0-f88cab0b4e01-config\") pod \"dnsmasq-dns-6ff8fd9d5c-gztb5\" (UID: \"afde1d0c-cef9-4fb3-94d0-f88cab0b4e01\") " pod="openstack/dnsmasq-dns-6ff8fd9d5c-gztb5" Mar 13 13:08:50.960207 master-0 kubenswrapper[28149]: I0313 13:08:50.958602 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: 
\"kubernetes.io/configmap/afde1d0c-cef9-4fb3-94d0-f88cab0b4e01-dns-svc\") pod \"dnsmasq-dns-6ff8fd9d5c-gztb5\" (UID: \"afde1d0c-cef9-4fb3-94d0-f88cab0b4e01\") " pod="openstack/dnsmasq-dns-6ff8fd9d5c-gztb5" Mar 13 13:08:50.978808 master-0 kubenswrapper[28149]: I0313 13:08:50.978291 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sbdgf\" (UniqueName: \"kubernetes.io/projected/afde1d0c-cef9-4fb3-94d0-f88cab0b4e01-kube-api-access-sbdgf\") pod \"dnsmasq-dns-6ff8fd9d5c-gztb5\" (UID: \"afde1d0c-cef9-4fb3-94d0-f88cab0b4e01\") " pod="openstack/dnsmasq-dns-6ff8fd9d5c-gztb5" Mar 13 13:08:51.057329 master-0 kubenswrapper[28149]: I0313 13:08:51.056091 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-685c76cf85-q6j75" event={"ID":"2c02a249-c9cd-4660-b69b-cf03f864c992","Type":"ContainerStarted","Data":"004a3f69de1691c71f4bf8470fbe3704b02e780ab5fd9e3278638adfc59b19a7"} Mar 13 13:08:51.060469 master-0 kubenswrapper[28149]: I0313 13:08:51.059232 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8476fd89bc-ljn2q" event={"ID":"c04eff7d-dbbf-4174-8d0e-71046963aca5","Type":"ContainerStarted","Data":"3eebabf895fd77da6136e3c4bf0f40ccd20db0ebce526227ae68b0885046e203"} Mar 13 13:08:51.242212 master-0 kubenswrapper[28149]: W0313 13:08:51.241372 28149 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod52e53cd9_c831_4aa8_ae1c_5912efb14c13.slice/crio-13be907302fd34469bb21369545575fcbc5cddc79bb44a9e67486c6042872824 WatchSource:0}: Error finding container 13be907302fd34469bb21369545575fcbc5cddc79bb44a9e67486c6042872824: Status 404 returned error can't find the container with id 13be907302fd34469bb21369545575fcbc5cddc79bb44a9e67486c6042872824 Mar 13 13:08:51.244769 master-0 kubenswrapper[28149]: I0313 13:08:51.242770 28149 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack/dnsmasq-dns-586dbdbb8c-7z4lg"] Mar 13 13:08:51.277057 master-0 kubenswrapper[28149]: I0313 13:08:51.275399 28149 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6ff8fd9d5c-gztb5" Mar 13 13:08:52.133165 master-0 kubenswrapper[28149]: I0313 13:08:52.126622 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-586dbdbb8c-7z4lg" event={"ID":"52e53cd9-c831-4aa8-ae1c-5912efb14c13","Type":"ContainerStarted","Data":"13be907302fd34469bb21369545575fcbc5cddc79bb44a9e67486c6042872824"} Mar 13 13:08:52.152780 master-0 kubenswrapper[28149]: I0313 13:08:52.152732 28149 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6ff8fd9d5c-gztb5"] Mar 13 13:08:53.149516 master-0 kubenswrapper[28149]: I0313 13:08:53.146522 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6ff8fd9d5c-gztb5" event={"ID":"afde1d0c-cef9-4fb3-94d0-f88cab0b4e01","Type":"ContainerStarted","Data":"997cea4faaba4e11c64cbc647f79005f14481b206f8a7f202867c6bf5a6d3d8d"} Mar 13 13:08:53.972918 master-0 kubenswrapper[28149]: I0313 13:08:53.972760 28149 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/memcached-0"] Mar 13 13:08:53.977095 master-0 kubenswrapper[28149]: I0313 13:08:53.977059 28149 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/memcached-0" Mar 13 13:08:54.005704 master-0 kubenswrapper[28149]: I0313 13:08:54.005337 28149 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"memcached-config-data" Mar 13 13:08:54.005704 master-0 kubenswrapper[28149]: I0313 13:08:54.005378 28149 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-memcached-svc" Mar 13 13:08:54.011145 master-0 kubenswrapper[28149]: I0313 13:08:54.011085 28149 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"combined-ca-bundle" Mar 13 13:08:54.032103 master-0 kubenswrapper[28149]: I0313 13:08:54.032034 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fxfn8\" (UniqueName: \"kubernetes.io/projected/309b8d37-1bcd-4f23-946e-d3eb23e5d072-kube-api-access-fxfn8\") pod \"memcached-0\" (UID: \"309b8d37-1bcd-4f23-946e-d3eb23e5d072\") " pod="openstack/memcached-0" Mar 13 13:08:54.032103 master-0 kubenswrapper[28149]: I0313 13:08:54.032105 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/309b8d37-1bcd-4f23-946e-d3eb23e5d072-memcached-tls-certs\") pod \"memcached-0\" (UID: \"309b8d37-1bcd-4f23-946e-d3eb23e5d072\") " pod="openstack/memcached-0" Mar 13 13:08:54.032342 master-0 kubenswrapper[28149]: I0313 13:08:54.032188 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/309b8d37-1bcd-4f23-946e-d3eb23e5d072-config-data\") pod \"memcached-0\" (UID: \"309b8d37-1bcd-4f23-946e-d3eb23e5d072\") " pod="openstack/memcached-0" Mar 13 13:08:54.032566 master-0 kubenswrapper[28149]: I0313 13:08:54.032517 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/309b8d37-1bcd-4f23-946e-d3eb23e5d072-combined-ca-bundle\") pod \"memcached-0\" (UID: \"309b8d37-1bcd-4f23-946e-d3eb23e5d072\") " pod="openstack/memcached-0" Mar 13 13:08:54.032629 master-0 kubenswrapper[28149]: I0313 13:08:54.032573 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/309b8d37-1bcd-4f23-946e-d3eb23e5d072-kolla-config\") pod \"memcached-0\" (UID: \"309b8d37-1bcd-4f23-946e-d3eb23e5d072\") " pod="openstack/memcached-0" Mar 13 13:08:54.150832 master-0 kubenswrapper[28149]: I0313 13:08:54.149510 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/309b8d37-1bcd-4f23-946e-d3eb23e5d072-combined-ca-bundle\") pod \"memcached-0\" (UID: \"309b8d37-1bcd-4f23-946e-d3eb23e5d072\") " pod="openstack/memcached-0" Mar 13 13:08:54.150832 master-0 kubenswrapper[28149]: I0313 13:08:54.149564 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/309b8d37-1bcd-4f23-946e-d3eb23e5d072-kolla-config\") pod \"memcached-0\" (UID: \"309b8d37-1bcd-4f23-946e-d3eb23e5d072\") " pod="openstack/memcached-0" Mar 13 13:08:54.150832 master-0 kubenswrapper[28149]: I0313 13:08:54.149645 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fxfn8\" (UniqueName: \"kubernetes.io/projected/309b8d37-1bcd-4f23-946e-d3eb23e5d072-kube-api-access-fxfn8\") pod \"memcached-0\" (UID: \"309b8d37-1bcd-4f23-946e-d3eb23e5d072\") " pod="openstack/memcached-0" Mar 13 13:08:54.150832 master-0 kubenswrapper[28149]: I0313 13:08:54.149667 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/309b8d37-1bcd-4f23-946e-d3eb23e5d072-memcached-tls-certs\") pod \"memcached-0\" (UID: 
\"309b8d37-1bcd-4f23-946e-d3eb23e5d072\") " pod="openstack/memcached-0" Mar 13 13:08:54.150832 master-0 kubenswrapper[28149]: I0313 13:08:54.149702 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/309b8d37-1bcd-4f23-946e-d3eb23e5d072-config-data\") pod \"memcached-0\" (UID: \"309b8d37-1bcd-4f23-946e-d3eb23e5d072\") " pod="openstack/memcached-0" Mar 13 13:08:54.150832 master-0 kubenswrapper[28149]: I0313 13:08:54.150455 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/309b8d37-1bcd-4f23-946e-d3eb23e5d072-config-data\") pod \"memcached-0\" (UID: \"309b8d37-1bcd-4f23-946e-d3eb23e5d072\") " pod="openstack/memcached-0" Mar 13 13:08:54.160257 master-0 kubenswrapper[28149]: I0313 13:08:54.159105 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/309b8d37-1bcd-4f23-946e-d3eb23e5d072-kolla-config\") pod \"memcached-0\" (UID: \"309b8d37-1bcd-4f23-946e-d3eb23e5d072\") " pod="openstack/memcached-0" Mar 13 13:08:54.160257 master-0 kubenswrapper[28149]: I0313 13:08:54.159464 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/309b8d37-1bcd-4f23-946e-d3eb23e5d072-combined-ca-bundle\") pod \"memcached-0\" (UID: \"309b8d37-1bcd-4f23-946e-d3eb23e5d072\") " pod="openstack/memcached-0" Mar 13 13:08:54.179868 master-0 kubenswrapper[28149]: I0313 13:08:54.179810 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/309b8d37-1bcd-4f23-946e-d3eb23e5d072-memcached-tls-certs\") pod \"memcached-0\" (UID: \"309b8d37-1bcd-4f23-946e-d3eb23e5d072\") " pod="openstack/memcached-0" Mar 13 13:08:54.186884 master-0 kubenswrapper[28149]: I0313 13:08:54.186808 28149 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-fxfn8\" (UniqueName: \"kubernetes.io/projected/309b8d37-1bcd-4f23-946e-d3eb23e5d072-kube-api-access-fxfn8\") pod \"memcached-0\" (UID: \"309b8d37-1bcd-4f23-946e-d3eb23e5d072\") " pod="openstack/memcached-0" Mar 13 13:08:54.202969 master-0 kubenswrapper[28149]: I0313 13:08:54.202899 28149 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Mar 13 13:08:54.290607 master-0 kubenswrapper[28149]: I0313 13:08:54.288072 28149 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"] Mar 13 13:08:54.440696 master-0 kubenswrapper[28149]: I0313 13:08:54.440557 28149 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0" Mar 13 13:08:54.473883 master-0 kubenswrapper[28149]: I0313 13:08:54.473825 28149 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Mar 13 13:08:54.480991 master-0 kubenswrapper[28149]: I0313 13:08:54.480937 28149 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc" Mar 13 13:08:54.480991 master-0 kubenswrapper[28149]: I0313 13:08:54.480961 28149 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user" Mar 13 13:08:54.481366 master-0 kubenswrapper[28149]: I0313 13:08:54.481295 28149 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf" Mar 13 13:08:54.482991 master-0 kubenswrapper[28149]: I0313 13:08:54.482940 28149 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie" Mar 13 13:08:54.483179 master-0 kubenswrapper[28149]: I0313 13:08:54.483156 28149 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data" Mar 13 13:08:54.484008 master-0 kubenswrapper[28149]: I0313 13:08:54.483989 28149 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openstack"/"rabbitmq-server-conf" Mar 13 13:08:54.620090 master-0 kubenswrapper[28149]: I0313 13:08:54.619990 28149 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Mar 13 13:08:54.667191 master-0 kubenswrapper[28149]: I0313 13:08:54.653845 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/f417d425-c062-40de-a92b-17afe412cfe9-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"f417d425-c062-40de-a92b-17afe412cfe9\") " pod="openstack/rabbitmq-server-0" Mar 13 13:08:54.667191 master-0 kubenswrapper[28149]: I0313 13:08:54.653896 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/f417d425-c062-40de-a92b-17afe412cfe9-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"f417d425-c062-40de-a92b-17afe412cfe9\") " pod="openstack/rabbitmq-server-0" Mar 13 13:08:54.667191 master-0 kubenswrapper[28149]: I0313 13:08:54.653925 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/f417d425-c062-40de-a92b-17afe412cfe9-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"f417d425-c062-40de-a92b-17afe412cfe9\") " pod="openstack/rabbitmq-server-0" Mar 13 13:08:54.667191 master-0 kubenswrapper[28149]: I0313 13:08:54.654004 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/f417d425-c062-40de-a92b-17afe412cfe9-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"f417d425-c062-40de-a92b-17afe412cfe9\") " pod="openstack/rabbitmq-server-0" Mar 13 13:08:54.667191 master-0 kubenswrapper[28149]: I0313 13:08:54.654082 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" 
(UniqueName: \"kubernetes.io/downward-api/f417d425-c062-40de-a92b-17afe412cfe9-pod-info\") pod \"rabbitmq-server-0\" (UID: \"f417d425-c062-40de-a92b-17afe412cfe9\") " pod="openstack/rabbitmq-server-0" Mar 13 13:08:54.667191 master-0 kubenswrapper[28149]: I0313 13:08:54.654120 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/f417d425-c062-40de-a92b-17afe412cfe9-server-conf\") pod \"rabbitmq-server-0\" (UID: \"f417d425-c062-40de-a92b-17afe412cfe9\") " pod="openstack/rabbitmq-server-0" Mar 13 13:08:54.667191 master-0 kubenswrapper[28149]: I0313 13:08:54.654182 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/f417d425-c062-40de-a92b-17afe412cfe9-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"f417d425-c062-40de-a92b-17afe412cfe9\") " pod="openstack/rabbitmq-server-0" Mar 13 13:08:54.667191 master-0 kubenswrapper[28149]: I0313 13:08:54.654314 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/f417d425-c062-40de-a92b-17afe412cfe9-config-data\") pod \"rabbitmq-server-0\" (UID: \"f417d425-c062-40de-a92b-17afe412cfe9\") " pod="openstack/rabbitmq-server-0" Mar 13 13:08:54.667191 master-0 kubenswrapper[28149]: I0313 13:08:54.654364 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/f417d425-c062-40de-a92b-17afe412cfe9-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"f417d425-c062-40de-a92b-17afe412cfe9\") " pod="openstack/rabbitmq-server-0" Mar 13 13:08:54.667191 master-0 kubenswrapper[28149]: I0313 13:08:54.654422 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"pvc-b63801bf-5389-40b0-8051-e87420f2b59e\" (UniqueName: \"kubernetes.io/csi/topolvm.io^56e579f0-350c-4477-9d42-ad4694ba1e68\") pod \"rabbitmq-server-0\" (UID: \"f417d425-c062-40de-a92b-17afe412cfe9\") " pod="openstack/rabbitmq-server-0" Mar 13 13:08:54.667191 master-0 kubenswrapper[28149]: I0313 13:08:54.654505 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xk2cg\" (UniqueName: \"kubernetes.io/projected/f417d425-c062-40de-a92b-17afe412cfe9-kube-api-access-xk2cg\") pod \"rabbitmq-server-0\" (UID: \"f417d425-c062-40de-a92b-17afe412cfe9\") " pod="openstack/rabbitmq-server-0" Mar 13 13:08:54.778158 master-0 kubenswrapper[28149]: I0313 13:08:54.756654 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-b63801bf-5389-40b0-8051-e87420f2b59e\" (UniqueName: \"kubernetes.io/csi/topolvm.io^56e579f0-350c-4477-9d42-ad4694ba1e68\") pod \"rabbitmq-server-0\" (UID: \"f417d425-c062-40de-a92b-17afe412cfe9\") " pod="openstack/rabbitmq-server-0" Mar 13 13:08:54.778158 master-0 kubenswrapper[28149]: I0313 13:08:54.756743 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xk2cg\" (UniqueName: \"kubernetes.io/projected/f417d425-c062-40de-a92b-17afe412cfe9-kube-api-access-xk2cg\") pod \"rabbitmq-server-0\" (UID: \"f417d425-c062-40de-a92b-17afe412cfe9\") " pod="openstack/rabbitmq-server-0" Mar 13 13:08:54.778158 master-0 kubenswrapper[28149]: I0313 13:08:54.756781 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/f417d425-c062-40de-a92b-17afe412cfe9-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"f417d425-c062-40de-a92b-17afe412cfe9\") " pod="openstack/rabbitmq-server-0" Mar 13 13:08:54.778158 master-0 kubenswrapper[28149]: I0313 13:08:54.757003 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" 
(UniqueName: \"kubernetes.io/configmap/f417d425-c062-40de-a92b-17afe412cfe9-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"f417d425-c062-40de-a92b-17afe412cfe9\") " pod="openstack/rabbitmq-server-0" Mar 13 13:08:54.778158 master-0 kubenswrapper[28149]: I0313 13:08:54.757092 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/f417d425-c062-40de-a92b-17afe412cfe9-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"f417d425-c062-40de-a92b-17afe412cfe9\") " pod="openstack/rabbitmq-server-0" Mar 13 13:08:54.778158 master-0 kubenswrapper[28149]: I0313 13:08:54.757222 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/f417d425-c062-40de-a92b-17afe412cfe9-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"f417d425-c062-40de-a92b-17afe412cfe9\") " pod="openstack/rabbitmq-server-0" Mar 13 13:08:54.778158 master-0 kubenswrapper[28149]: I0313 13:08:54.757312 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/f417d425-c062-40de-a92b-17afe412cfe9-pod-info\") pod \"rabbitmq-server-0\" (UID: \"f417d425-c062-40de-a92b-17afe412cfe9\") " pod="openstack/rabbitmq-server-0" Mar 13 13:08:54.778158 master-0 kubenswrapper[28149]: I0313 13:08:54.757344 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/f417d425-c062-40de-a92b-17afe412cfe9-server-conf\") pod \"rabbitmq-server-0\" (UID: \"f417d425-c062-40de-a92b-17afe412cfe9\") " pod="openstack/rabbitmq-server-0" Mar 13 13:08:54.778158 master-0 kubenswrapper[28149]: I0313 13:08:54.757918 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: 
\"kubernetes.io/empty-dir/f417d425-c062-40de-a92b-17afe412cfe9-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"f417d425-c062-40de-a92b-17afe412cfe9\") " pod="openstack/rabbitmq-server-0" Mar 13 13:08:54.778158 master-0 kubenswrapper[28149]: I0313 13:08:54.766854 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/f417d425-c062-40de-a92b-17afe412cfe9-server-conf\") pod \"rabbitmq-server-0\" (UID: \"f417d425-c062-40de-a92b-17afe412cfe9\") " pod="openstack/rabbitmq-server-0" Mar 13 13:08:54.778158 master-0 kubenswrapper[28149]: I0313 13:08:54.770875 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/f417d425-c062-40de-a92b-17afe412cfe9-config-data\") pod \"rabbitmq-server-0\" (UID: \"f417d425-c062-40de-a92b-17afe412cfe9\") " pod="openstack/rabbitmq-server-0" Mar 13 13:08:54.778158 master-0 kubenswrapper[28149]: I0313 13:08:54.770991 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/f417d425-c062-40de-a92b-17afe412cfe9-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"f417d425-c062-40de-a92b-17afe412cfe9\") " pod="openstack/rabbitmq-server-0" Mar 13 13:08:54.778158 master-0 kubenswrapper[28149]: I0313 13:08:54.771087 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/f417d425-c062-40de-a92b-17afe412cfe9-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"f417d425-c062-40de-a92b-17afe412cfe9\") " pod="openstack/rabbitmq-server-0" Mar 13 13:08:54.778158 master-0 kubenswrapper[28149]: I0313 13:08:54.771281 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/f417d425-c062-40de-a92b-17afe412cfe9-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" 
(UID: \"f417d425-c062-40de-a92b-17afe412cfe9\") " pod="openstack/rabbitmq-server-0" Mar 13 13:08:54.778158 master-0 kubenswrapper[28149]: I0313 13:08:54.771703 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/f417d425-c062-40de-a92b-17afe412cfe9-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"f417d425-c062-40de-a92b-17afe412cfe9\") " pod="openstack/rabbitmq-server-0" Mar 13 13:08:54.778158 master-0 kubenswrapper[28149]: I0313 13:08:54.772356 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/f417d425-c062-40de-a92b-17afe412cfe9-config-data\") pod \"rabbitmq-server-0\" (UID: \"f417d425-c062-40de-a92b-17afe412cfe9\") " pod="openstack/rabbitmq-server-0" Mar 13 13:08:54.786688 master-0 kubenswrapper[28149]: I0313 13:08:54.783164 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/f417d425-c062-40de-a92b-17afe412cfe9-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"f417d425-c062-40de-a92b-17afe412cfe9\") " pod="openstack/rabbitmq-server-0" Mar 13 13:08:54.786688 master-0 kubenswrapper[28149]: I0313 13:08:54.786661 28149 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Mar 13 13:08:54.786935 master-0 kubenswrapper[28149]: I0313 13:08:54.786701 28149 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-b63801bf-5389-40b0-8051-e87420f2b59e\" (UniqueName: \"kubernetes.io/csi/topolvm.io^56e579f0-350c-4477-9d42-ad4694ba1e68\") pod \"rabbitmq-server-0\" (UID: \"f417d425-c062-40de-a92b-17afe412cfe9\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/topolvm.io/f4fc8c36a376894812dc8c743aa166b0f8d21e9889b41d39d2c1749608640d35/globalmount\"" pod="openstack/rabbitmq-server-0" Mar 13 13:08:54.788806 master-0 kubenswrapper[28149]: I0313 13:08:54.788766 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/f417d425-c062-40de-a92b-17afe412cfe9-pod-info\") pod \"rabbitmq-server-0\" (UID: \"f417d425-c062-40de-a92b-17afe412cfe9\") " pod="openstack/rabbitmq-server-0" Mar 13 13:08:54.812936 master-0 kubenswrapper[28149]: I0313 13:08:54.812799 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/f417d425-c062-40de-a92b-17afe412cfe9-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"f417d425-c062-40de-a92b-17afe412cfe9\") " pod="openstack/rabbitmq-server-0" Mar 13 13:08:54.817106 master-0 kubenswrapper[28149]: I0313 13:08:54.817056 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xk2cg\" (UniqueName: \"kubernetes.io/projected/f417d425-c062-40de-a92b-17afe412cfe9-kube-api-access-xk2cg\") pod \"rabbitmq-server-0\" (UID: \"f417d425-c062-40de-a92b-17afe412cfe9\") " pod="openstack/rabbitmq-server-0" Mar 13 13:08:54.822180 master-0 kubenswrapper[28149]: I0313 13:08:54.822010 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/f417d425-c062-40de-a92b-17afe412cfe9-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: 
\"f417d425-c062-40de-a92b-17afe412cfe9\") " pod="openstack/rabbitmq-server-0" Mar 13 13:08:54.891266 master-0 kubenswrapper[28149]: I0313 13:08:54.885843 28149 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Mar 13 13:08:54.906121 master-0 kubenswrapper[28149]: I0313 13:08:54.902177 28149 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Mar 13 13:08:54.906121 master-0 kubenswrapper[28149]: I0313 13:08:54.902283 28149 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Mar 13 13:08:54.915555 master-0 kubenswrapper[28149]: I0313 13:08:54.912633 28149 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user" Mar 13 13:08:54.915555 master-0 kubenswrapper[28149]: I0313 13:08:54.912675 28149 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf" Mar 13 13:08:54.915555 master-0 kubenswrapper[28149]: I0313 13:08:54.912658 28149 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf" Mar 13 13:08:54.915555 master-0 kubenswrapper[28149]: I0313 13:08:54.912804 28149 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data" Mar 13 13:08:54.915555 master-0 kubenswrapper[28149]: I0313 13:08:54.913216 28149 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc" Mar 13 13:08:54.919188 master-0 kubenswrapper[28149]: I0313 13:08:54.919094 28149 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie" Mar 13 13:08:54.990428 master-0 kubenswrapper[28149]: I0313 13:08:54.989746 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: 
\"kubernetes.io/empty-dir/668a51dd-c5b3-4531-b707-39a00bfb5eef-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"668a51dd-c5b3-4531-b707-39a00bfb5eef\") " pod="openstack/rabbitmq-cell1-server-0" Mar 13 13:08:54.990428 master-0 kubenswrapper[28149]: I0313 13:08:54.989808 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-9c5228e8-1535-4587-b617-a49397fb64c7\" (UniqueName: \"kubernetes.io/csi/topolvm.io^49b0d5ff-a162-4bad-94c9-4d87be591b35\") pod \"rabbitmq-cell1-server-0\" (UID: \"668a51dd-c5b3-4531-b707-39a00bfb5eef\") " pod="openstack/rabbitmq-cell1-server-0" Mar 13 13:08:54.990428 master-0 kubenswrapper[28149]: I0313 13:08:54.989850 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/668a51dd-c5b3-4531-b707-39a00bfb5eef-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"668a51dd-c5b3-4531-b707-39a00bfb5eef\") " pod="openstack/rabbitmq-cell1-server-0" Mar 13 13:08:54.990428 master-0 kubenswrapper[28149]: I0313 13:08:54.989888 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/668a51dd-c5b3-4531-b707-39a00bfb5eef-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"668a51dd-c5b3-4531-b707-39a00bfb5eef\") " pod="openstack/rabbitmq-cell1-server-0" Mar 13 13:08:54.990428 master-0 kubenswrapper[28149]: I0313 13:08:54.989906 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/668a51dd-c5b3-4531-b707-39a00bfb5eef-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"668a51dd-c5b3-4531-b707-39a00bfb5eef\") " pod="openstack/rabbitmq-cell1-server-0" Mar 13 13:08:54.990428 master-0 kubenswrapper[28149]: I0313 13:08:54.989928 28149 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-56tdp\" (UniqueName: \"kubernetes.io/projected/668a51dd-c5b3-4531-b707-39a00bfb5eef-kube-api-access-56tdp\") pod \"rabbitmq-cell1-server-0\" (UID: \"668a51dd-c5b3-4531-b707-39a00bfb5eef\") " pod="openstack/rabbitmq-cell1-server-0" Mar 13 13:08:54.990428 master-0 kubenswrapper[28149]: I0313 13:08:54.989965 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/668a51dd-c5b3-4531-b707-39a00bfb5eef-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"668a51dd-c5b3-4531-b707-39a00bfb5eef\") " pod="openstack/rabbitmq-cell1-server-0" Mar 13 13:08:54.990428 master-0 kubenswrapper[28149]: I0313 13:08:54.989985 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/668a51dd-c5b3-4531-b707-39a00bfb5eef-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"668a51dd-c5b3-4531-b707-39a00bfb5eef\") " pod="openstack/rabbitmq-cell1-server-0" Mar 13 13:08:54.990428 master-0 kubenswrapper[28149]: I0313 13:08:54.990029 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/668a51dd-c5b3-4531-b707-39a00bfb5eef-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"668a51dd-c5b3-4531-b707-39a00bfb5eef\") " pod="openstack/rabbitmq-cell1-server-0" Mar 13 13:08:54.990428 master-0 kubenswrapper[28149]: I0313 13:08:54.990048 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/668a51dd-c5b3-4531-b707-39a00bfb5eef-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"668a51dd-c5b3-4531-b707-39a00bfb5eef\") " pod="openstack/rabbitmq-cell1-server-0" Mar 13 13:08:54.990428 
master-0 kubenswrapper[28149]: I0313 13:08:54.990065 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/668a51dd-c5b3-4531-b707-39a00bfb5eef-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"668a51dd-c5b3-4531-b707-39a00bfb5eef\") " pod="openstack/rabbitmq-cell1-server-0" Mar 13 13:08:55.099585 master-0 kubenswrapper[28149]: I0313 13:08:55.099440 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/668a51dd-c5b3-4531-b707-39a00bfb5eef-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"668a51dd-c5b3-4531-b707-39a00bfb5eef\") " pod="openstack/rabbitmq-cell1-server-0" Mar 13 13:08:55.099585 master-0 kubenswrapper[28149]: I0313 13:08:55.099528 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/668a51dd-c5b3-4531-b707-39a00bfb5eef-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"668a51dd-c5b3-4531-b707-39a00bfb5eef\") " pod="openstack/rabbitmq-cell1-server-0" Mar 13 13:08:55.099585 master-0 kubenswrapper[28149]: I0313 13:08:55.099563 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-56tdp\" (UniqueName: \"kubernetes.io/projected/668a51dd-c5b3-4531-b707-39a00bfb5eef-kube-api-access-56tdp\") pod \"rabbitmq-cell1-server-0\" (UID: \"668a51dd-c5b3-4531-b707-39a00bfb5eef\") " pod="openstack/rabbitmq-cell1-server-0" Mar 13 13:08:55.099870 master-0 kubenswrapper[28149]: I0313 13:08:55.099602 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/668a51dd-c5b3-4531-b707-39a00bfb5eef-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"668a51dd-c5b3-4531-b707-39a00bfb5eef\") " pod="openstack/rabbitmq-cell1-server-0" Mar 13 13:08:55.099914 master-0 
kubenswrapper[28149]: I0313 13:08:55.099882 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/668a51dd-c5b3-4531-b707-39a00bfb5eef-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"668a51dd-c5b3-4531-b707-39a00bfb5eef\") " pod="openstack/rabbitmq-cell1-server-0" Mar 13 13:08:55.101015 master-0 kubenswrapper[28149]: I0313 13:08:55.100984 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/668a51dd-c5b3-4531-b707-39a00bfb5eef-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"668a51dd-c5b3-4531-b707-39a00bfb5eef\") " pod="openstack/rabbitmq-cell1-server-0" Mar 13 13:08:55.101910 master-0 kubenswrapper[28149]: I0313 13:08:55.101885 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/668a51dd-c5b3-4531-b707-39a00bfb5eef-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"668a51dd-c5b3-4531-b707-39a00bfb5eef\") " pod="openstack/rabbitmq-cell1-server-0" Mar 13 13:08:55.102032 master-0 kubenswrapper[28149]: I0313 13:08:55.101928 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/668a51dd-c5b3-4531-b707-39a00bfb5eef-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"668a51dd-c5b3-4531-b707-39a00bfb5eef\") " pod="openstack/rabbitmq-cell1-server-0" Mar 13 13:08:55.102032 master-0 kubenswrapper[28149]: I0313 13:08:55.101954 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/668a51dd-c5b3-4531-b707-39a00bfb5eef-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"668a51dd-c5b3-4531-b707-39a00bfb5eef\") " pod="openstack/rabbitmq-cell1-server-0" Mar 13 13:08:55.104014 master-0 kubenswrapper[28149]: I0313 13:08:55.103990 
28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/668a51dd-c5b3-4531-b707-39a00bfb5eef-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"668a51dd-c5b3-4531-b707-39a00bfb5eef\") " pod="openstack/rabbitmq-cell1-server-0" Mar 13 13:08:55.104393 master-0 kubenswrapper[28149]: I0313 13:08:55.104334 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/668a51dd-c5b3-4531-b707-39a00bfb5eef-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"668a51dd-c5b3-4531-b707-39a00bfb5eef\") " pod="openstack/rabbitmq-cell1-server-0" Mar 13 13:08:55.104466 master-0 kubenswrapper[28149]: I0313 13:08:55.104401 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/668a51dd-c5b3-4531-b707-39a00bfb5eef-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"668a51dd-c5b3-4531-b707-39a00bfb5eef\") " pod="openstack/rabbitmq-cell1-server-0" Mar 13 13:08:55.104892 master-0 kubenswrapper[28149]: I0313 13:08:55.104849 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/668a51dd-c5b3-4531-b707-39a00bfb5eef-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"668a51dd-c5b3-4531-b707-39a00bfb5eef\") " pod="openstack/rabbitmq-cell1-server-0" Mar 13 13:08:55.105961 master-0 kubenswrapper[28149]: I0313 13:08:55.105930 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/668a51dd-c5b3-4531-b707-39a00bfb5eef-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"668a51dd-c5b3-4531-b707-39a00bfb5eef\") " pod="openstack/rabbitmq-cell1-server-0" Mar 13 13:08:55.105961 master-0 kubenswrapper[28149]: I0313 13:08:55.105836 28149 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/668a51dd-c5b3-4531-b707-39a00bfb5eef-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"668a51dd-c5b3-4531-b707-39a00bfb5eef\") " pod="openstack/rabbitmq-cell1-server-0" Mar 13 13:08:55.106170 master-0 kubenswrapper[28149]: I0313 13:08:55.106101 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/668a51dd-c5b3-4531-b707-39a00bfb5eef-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"668a51dd-c5b3-4531-b707-39a00bfb5eef\") " pod="openstack/rabbitmq-cell1-server-0" Mar 13 13:08:55.106170 master-0 kubenswrapper[28149]: I0313 13:08:55.105441 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/668a51dd-c5b3-4531-b707-39a00bfb5eef-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"668a51dd-c5b3-4531-b707-39a00bfb5eef\") " pod="openstack/rabbitmq-cell1-server-0" Mar 13 13:08:55.110921 master-0 kubenswrapper[28149]: I0313 13:08:55.110863 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/668a51dd-c5b3-4531-b707-39a00bfb5eef-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"668a51dd-c5b3-4531-b707-39a00bfb5eef\") " pod="openstack/rabbitmq-cell1-server-0" Mar 13 13:08:55.124844 master-0 kubenswrapper[28149]: I0313 13:08:55.124784 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/668a51dd-c5b3-4531-b707-39a00bfb5eef-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"668a51dd-c5b3-4531-b707-39a00bfb5eef\") " pod="openstack/rabbitmq-cell1-server-0" Mar 13 13:08:55.208100 master-0 kubenswrapper[28149]: I0313 13:08:55.208048 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"pvc-9c5228e8-1535-4587-b617-a49397fb64c7\" (UniqueName: \"kubernetes.io/csi/topolvm.io^49b0d5ff-a162-4bad-94c9-4d87be591b35\") pod \"rabbitmq-cell1-server-0\" (UID: \"668a51dd-c5b3-4531-b707-39a00bfb5eef\") " pod="openstack/rabbitmq-cell1-server-0" Mar 13 13:08:55.211194 master-0 kubenswrapper[28149]: I0313 13:08:55.209888 28149 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Mar 13 13:08:55.211194 master-0 kubenswrapper[28149]: I0313 13:08:55.209935 28149 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-9c5228e8-1535-4587-b617-a49397fb64c7\" (UniqueName: \"kubernetes.io/csi/topolvm.io^49b0d5ff-a162-4bad-94c9-4d87be591b35\") pod \"rabbitmq-cell1-server-0\" (UID: \"668a51dd-c5b3-4531-b707-39a00bfb5eef\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/topolvm.io/55ff38ac3605df7e51fb5d73842716394a400992b7682a2ef67afe59372deb77/globalmount\"" pod="openstack/rabbitmq-cell1-server-0" Mar 13 13:08:55.317195 master-0 kubenswrapper[28149]: I0313 13:08:55.303047 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-56tdp\" (UniqueName: \"kubernetes.io/projected/668a51dd-c5b3-4531-b707-39a00bfb5eef-kube-api-access-56tdp\") pod \"rabbitmq-cell1-server-0\" (UID: \"668a51dd-c5b3-4531-b707-39a00bfb5eef\") " pod="openstack/rabbitmq-cell1-server-0" Mar 13 13:08:55.437772 master-0 kubenswrapper[28149]: I0313 13:08:55.431548 28149 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Mar 13 13:08:56.617042 master-0 kubenswrapper[28149]: I0313 13:08:56.613261 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"309b8d37-1bcd-4f23-946e-d3eb23e5d072","Type":"ContainerStarted","Data":"4ceb13db563fcf5791c5ce60dfae183ddb92dee43542cb8321ec6fde8159a579"} Mar 13 13:08:56.645873 master-0 kubenswrapper[28149]: I0313 13:08:56.645780 28149 kubelet.go:2421] "SyncLoop 
ADD" source="api" pods=["openstack/openstack-galera-0"] Mar 13 13:08:56.647876 master-0 kubenswrapper[28149]: I0313 13:08:56.647844 28149 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-galera-0" Mar 13 13:08:56.652267 master-0 kubenswrapper[28149]: I0313 13:08:56.651966 28149 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-svc" Mar 13 13:08:56.653701 master-0 kubenswrapper[28149]: I0313 13:08:56.653651 28149 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-scripts" Mar 13 13:08:56.654778 master-0 kubenswrapper[28149]: I0313 13:08:56.654742 28149 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config-data" Mar 13 13:08:56.781786 master-0 kubenswrapper[28149]: I0313 13:08:56.781500 28149 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Mar 13 13:08:56.783738 master-0 kubenswrapper[28149]: I0313 13:08:56.783558 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lf9fg\" (UniqueName: \"kubernetes.io/projected/e95954c2-39c2-46f5-8f22-580ba2880939-kube-api-access-lf9fg\") pod \"openstack-galera-0\" (UID: \"e95954c2-39c2-46f5-8f22-580ba2880939\") " pod="openstack/openstack-galera-0" Mar 13 13:08:56.783738 master-0 kubenswrapper[28149]: I0313 13:08:56.783632 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/e95954c2-39c2-46f5-8f22-580ba2880939-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"e95954c2-39c2-46f5-8f22-580ba2880939\") " pod="openstack/openstack-galera-0" Mar 13 13:08:56.783738 master-0 kubenswrapper[28149]: I0313 13:08:56.783682 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: 
\"kubernetes.io/empty-dir/e95954c2-39c2-46f5-8f22-580ba2880939-config-data-generated\") pod \"openstack-galera-0\" (UID: \"e95954c2-39c2-46f5-8f22-580ba2880939\") " pod="openstack/openstack-galera-0" Mar 13 13:08:56.784210 master-0 kubenswrapper[28149]: I0313 13:08:56.784011 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e95954c2-39c2-46f5-8f22-580ba2880939-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"e95954c2-39c2-46f5-8f22-580ba2880939\") " pod="openstack/openstack-galera-0" Mar 13 13:08:56.784210 master-0 kubenswrapper[28149]: I0313 13:08:56.784046 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/e95954c2-39c2-46f5-8f22-580ba2880939-config-data-default\") pod \"openstack-galera-0\" (UID: \"e95954c2-39c2-46f5-8f22-580ba2880939\") " pod="openstack/openstack-galera-0" Mar 13 13:08:56.784210 master-0 kubenswrapper[28149]: I0313 13:08:56.784114 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e95954c2-39c2-46f5-8f22-580ba2880939-operator-scripts\") pod \"openstack-galera-0\" (UID: \"e95954c2-39c2-46f5-8f22-580ba2880939\") " pod="openstack/openstack-galera-0" Mar 13 13:08:56.784210 master-0 kubenswrapper[28149]: I0313 13:08:56.784151 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-072c0270-a5fc-40cb-9edb-f2d4c3c13438\" (UniqueName: \"kubernetes.io/csi/topolvm.io^d3850668-62a2-4826-8ab8-7173f23fe8dd\") pod \"openstack-galera-0\" (UID: \"e95954c2-39c2-46f5-8f22-580ba2880939\") " pod="openstack/openstack-galera-0" Mar 13 13:08:56.784210 master-0 kubenswrapper[28149]: I0313 13:08:56.784185 28149 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/e95954c2-39c2-46f5-8f22-580ba2880939-kolla-config\") pod \"openstack-galera-0\" (UID: \"e95954c2-39c2-46f5-8f22-580ba2880939\") " pod="openstack/openstack-galera-0" Mar 13 13:08:57.041936 master-0 kubenswrapper[28149]: I0313 13:08:57.040945 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/e95954c2-39c2-46f5-8f22-580ba2880939-config-data-generated\") pod \"openstack-galera-0\" (UID: \"e95954c2-39c2-46f5-8f22-580ba2880939\") " pod="openstack/openstack-galera-0" Mar 13 13:08:57.048179 master-0 kubenswrapper[28149]: I0313 13:08:57.032109 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/e95954c2-39c2-46f5-8f22-580ba2880939-config-data-generated\") pod \"openstack-galera-0\" (UID: \"e95954c2-39c2-46f5-8f22-580ba2880939\") " pod="openstack/openstack-galera-0" Mar 13 13:08:57.048567 master-0 kubenswrapper[28149]: I0313 13:08:57.048507 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e95954c2-39c2-46f5-8f22-580ba2880939-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"e95954c2-39c2-46f5-8f22-580ba2880939\") " pod="openstack/openstack-galera-0" Mar 13 13:08:57.048644 master-0 kubenswrapper[28149]: I0313 13:08:57.048596 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/e95954c2-39c2-46f5-8f22-580ba2880939-config-data-default\") pod \"openstack-galera-0\" (UID: \"e95954c2-39c2-46f5-8f22-580ba2880939\") " pod="openstack/openstack-galera-0" Mar 13 13:08:57.053977 master-0 kubenswrapper[28149]: I0313 13:08:57.052909 28149 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e95954c2-39c2-46f5-8f22-580ba2880939-operator-scripts\") pod \"openstack-galera-0\" (UID: \"e95954c2-39c2-46f5-8f22-580ba2880939\") " pod="openstack/openstack-galera-0" Mar 13 13:08:57.053977 master-0 kubenswrapper[28149]: I0313 13:08:57.053058 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-072c0270-a5fc-40cb-9edb-f2d4c3c13438\" (UniqueName: \"kubernetes.io/csi/topolvm.io^d3850668-62a2-4826-8ab8-7173f23fe8dd\") pod \"openstack-galera-0\" (UID: \"e95954c2-39c2-46f5-8f22-580ba2880939\") " pod="openstack/openstack-galera-0" Mar 13 13:08:57.053977 master-0 kubenswrapper[28149]: I0313 13:08:57.053194 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/e95954c2-39c2-46f5-8f22-580ba2880939-kolla-config\") pod \"openstack-galera-0\" (UID: \"e95954c2-39c2-46f5-8f22-580ba2880939\") " pod="openstack/openstack-galera-0" Mar 13 13:08:57.053977 master-0 kubenswrapper[28149]: I0313 13:08:57.053397 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lf9fg\" (UniqueName: \"kubernetes.io/projected/e95954c2-39c2-46f5-8f22-580ba2880939-kube-api-access-lf9fg\") pod \"openstack-galera-0\" (UID: \"e95954c2-39c2-46f5-8f22-580ba2880939\") " pod="openstack/openstack-galera-0" Mar 13 13:08:57.053977 master-0 kubenswrapper[28149]: I0313 13:08:57.053492 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/e95954c2-39c2-46f5-8f22-580ba2880939-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"e95954c2-39c2-46f5-8f22-580ba2880939\") " pod="openstack/openstack-galera-0" Mar 13 13:08:57.055204 master-0 kubenswrapper[28149]: I0313 13:08:57.055021 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: 
\"kubernetes.io/configmap/e95954c2-39c2-46f5-8f22-580ba2880939-kolla-config\") pod \"openstack-galera-0\" (UID: \"e95954c2-39c2-46f5-8f22-580ba2880939\") " pod="openstack/openstack-galera-0" Mar 13 13:08:57.072356 master-0 kubenswrapper[28149]: I0313 13:08:57.059002 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e95954c2-39c2-46f5-8f22-580ba2880939-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"e95954c2-39c2-46f5-8f22-580ba2880939\") " pod="openstack/openstack-galera-0" Mar 13 13:08:57.072356 master-0 kubenswrapper[28149]: I0313 13:08:57.063202 28149 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Mar 13 13:08:57.072356 master-0 kubenswrapper[28149]: I0313 13:08:57.063248 28149 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-072c0270-a5fc-40cb-9edb-f2d4c3c13438\" (UniqueName: \"kubernetes.io/csi/topolvm.io^d3850668-62a2-4826-8ab8-7173f23fe8dd\") pod \"openstack-galera-0\" (UID: \"e95954c2-39c2-46f5-8f22-580ba2880939\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/topolvm.io/50f8c646034d8c49f3c15eabe167a48fab6a7ee8a6418432972da6c309276894/globalmount\"" pod="openstack/openstack-galera-0" Mar 13 13:08:57.082281 master-0 kubenswrapper[28149]: I0313 13:08:57.079555 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/e95954c2-39c2-46f5-8f22-580ba2880939-config-data-default\") pod \"openstack-galera-0\" (UID: \"e95954c2-39c2-46f5-8f22-580ba2880939\") " pod="openstack/openstack-galera-0" Mar 13 13:08:57.098964 master-0 kubenswrapper[28149]: I0313 13:08:57.083216 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/e95954c2-39c2-46f5-8f22-580ba2880939-galera-tls-certs\") pod 
\"openstack-galera-0\" (UID: \"e95954c2-39c2-46f5-8f22-580ba2880939\") " pod="openstack/openstack-galera-0" Mar 13 13:08:57.098964 master-0 kubenswrapper[28149]: I0313 13:08:57.093303 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e95954c2-39c2-46f5-8f22-580ba2880939-operator-scripts\") pod \"openstack-galera-0\" (UID: \"e95954c2-39c2-46f5-8f22-580ba2880939\") " pod="openstack/openstack-galera-0" Mar 13 13:08:57.137074 master-0 kubenswrapper[28149]: I0313 13:08:57.127788 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lf9fg\" (UniqueName: \"kubernetes.io/projected/e95954c2-39c2-46f5-8f22-580ba2880939-kube-api-access-lf9fg\") pod \"openstack-galera-0\" (UID: \"e95954c2-39c2-46f5-8f22-580ba2880939\") " pod="openstack/openstack-galera-0" Mar 13 13:08:57.225351 master-0 kubenswrapper[28149]: I0313 13:08:57.225286 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-b63801bf-5389-40b0-8051-e87420f2b59e\" (UniqueName: \"kubernetes.io/csi/topolvm.io^56e579f0-350c-4477-9d42-ad4694ba1e68\") pod \"rabbitmq-server-0\" (UID: \"f417d425-c062-40de-a92b-17afe412cfe9\") " pod="openstack/rabbitmq-server-0" Mar 13 13:08:57.335467 master-0 kubenswrapper[28149]: I0313 13:08:57.331826 28149 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Mar 13 13:08:58.483886 master-0 kubenswrapper[28149]: I0313 13:08:58.483811 28149 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Mar 13 13:08:58.857182 master-0 kubenswrapper[28149]: I0313 13:08:58.856569 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"f417d425-c062-40de-a92b-17afe412cfe9","Type":"ContainerStarted","Data":"f9847a650403427e029d8df0aef5591ca5ff1706d55c5497a044a944da906109"} Mar 13 13:08:58.873695 master-0 kubenswrapper[28149]: I0313 13:08:58.873154 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-9c5228e8-1535-4587-b617-a49397fb64c7\" (UniqueName: \"kubernetes.io/csi/topolvm.io^49b0d5ff-a162-4bad-94c9-4d87be591b35\") pod \"rabbitmq-cell1-server-0\" (UID: \"668a51dd-c5b3-4531-b707-39a00bfb5eef\") " pod="openstack/rabbitmq-cell1-server-0" Mar 13 13:08:59.196786 master-0 kubenswrapper[28149]: I0313 13:08:59.194507 28149 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Mar 13 13:08:59.230235 master-0 kubenswrapper[28149]: I0313 13:08:59.230179 28149 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-cell1-galera-0"] Mar 13 13:08:59.259228 master-0 kubenswrapper[28149]: I0313 13:08:59.258113 28149 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-cell1-galera-0"
Mar 13 13:08:59.263471 master-0 kubenswrapper[28149]: I0313 13:08:59.263075 28149 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-cell1-svc"
Mar 13 13:08:59.264574 master-0 kubenswrapper[28149]: I0313 13:08:59.263866 28149 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-config-data"
Mar 13 13:08:59.265122 master-0 kubenswrapper[28149]: I0313 13:08:59.265015 28149 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-scripts"
Mar 13 13:08:59.276796 master-0 kubenswrapper[28149]: I0313 13:08:59.257277 28149 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"]
Mar 13 13:08:59.637957 master-0 kubenswrapper[28149]: I0313 13:08:59.621516 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ca5ac169-5cb8-4eba-8b4a-f54ecdcdd5c4-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"ca5ac169-5cb8-4eba-8b4a-f54ecdcdd5c4\") " pod="openstack/openstack-cell1-galera-0"
Mar 13 13:08:59.637957 master-0 kubenswrapper[28149]: I0313 13:08:59.621639 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ca5ac169-5cb8-4eba-8b4a-f54ecdcdd5c4-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"ca5ac169-5cb8-4eba-8b4a-f54ecdcdd5c4\") " pod="openstack/openstack-cell1-galera-0"
Mar 13 13:08:59.637957 master-0 kubenswrapper[28149]: I0313 13:08:59.621658 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/ca5ac169-5cb8-4eba-8b4a-f54ecdcdd5c4-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"ca5ac169-5cb8-4eba-8b4a-f54ecdcdd5c4\") " pod="openstack/openstack-cell1-galera-0"
Mar 13 13:08:59.637957 master-0 kubenswrapper[28149]: I0313 13:08:59.621703 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/ca5ac169-5cb8-4eba-8b4a-f54ecdcdd5c4-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"ca5ac169-5cb8-4eba-8b4a-f54ecdcdd5c4\") " pod="openstack/openstack-cell1-galera-0"
Mar 13 13:08:59.637957 master-0 kubenswrapper[28149]: I0313 13:08:59.621742 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zs59p\" (UniqueName: \"kubernetes.io/projected/ca5ac169-5cb8-4eba-8b4a-f54ecdcdd5c4-kube-api-access-zs59p\") pod \"openstack-cell1-galera-0\" (UID: \"ca5ac169-5cb8-4eba-8b4a-f54ecdcdd5c4\") " pod="openstack/openstack-cell1-galera-0"
Mar 13 13:08:59.637957 master-0 kubenswrapper[28149]: I0313 13:08:59.621786 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-d30d8e42-ff2c-4b44-a0c6-c951d506d7a3\" (UniqueName: \"kubernetes.io/csi/topolvm.io^dac418b2-fe11-462f-9218-07d86d1b4d00\") pod \"openstack-cell1-galera-0\" (UID: \"ca5ac169-5cb8-4eba-8b4a-f54ecdcdd5c4\") " pod="openstack/openstack-cell1-galera-0"
Mar 13 13:08:59.637957 master-0 kubenswrapper[28149]: I0313 13:08:59.621820 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/ca5ac169-5cb8-4eba-8b4a-f54ecdcdd5c4-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"ca5ac169-5cb8-4eba-8b4a-f54ecdcdd5c4\") " pod="openstack/openstack-cell1-galera-0"
Mar 13 13:08:59.637957 master-0 kubenswrapper[28149]: I0313 13:08:59.621865 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/ca5ac169-5cb8-4eba-8b4a-f54ecdcdd5c4-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"ca5ac169-5cb8-4eba-8b4a-f54ecdcdd5c4\") " pod="openstack/openstack-cell1-galera-0"
Mar 13 13:08:59.724624 master-0 kubenswrapper[28149]: I0313 13:08:59.724471 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ca5ac169-5cb8-4eba-8b4a-f54ecdcdd5c4-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"ca5ac169-5cb8-4eba-8b4a-f54ecdcdd5c4\") " pod="openstack/openstack-cell1-galera-0"
Mar 13 13:08:59.724624 master-0 kubenswrapper[28149]: I0313 13:08:59.724570 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ca5ac169-5cb8-4eba-8b4a-f54ecdcdd5c4-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"ca5ac169-5cb8-4eba-8b4a-f54ecdcdd5c4\") " pod="openstack/openstack-cell1-galera-0"
Mar 13 13:08:59.724624 master-0 kubenswrapper[28149]: I0313 13:08:59.724593 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/ca5ac169-5cb8-4eba-8b4a-f54ecdcdd5c4-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"ca5ac169-5cb8-4eba-8b4a-f54ecdcdd5c4\") " pod="openstack/openstack-cell1-galera-0"
Mar 13 13:08:59.724624 master-0 kubenswrapper[28149]: I0313 13:08:59.724630 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/ca5ac169-5cb8-4eba-8b4a-f54ecdcdd5c4-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"ca5ac169-5cb8-4eba-8b4a-f54ecdcdd5c4\") " pod="openstack/openstack-cell1-galera-0"
Mar 13 13:08:59.724624 master-0 kubenswrapper[28149]: I0313 13:08:59.724668 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zs59p\" (UniqueName: \"kubernetes.io/projected/ca5ac169-5cb8-4eba-8b4a-f54ecdcdd5c4-kube-api-access-zs59p\") pod \"openstack-cell1-galera-0\" (UID: \"ca5ac169-5cb8-4eba-8b4a-f54ecdcdd5c4\") " pod="openstack/openstack-cell1-galera-0"
Mar 13 13:08:59.724624 master-0 kubenswrapper[28149]: I0313 13:08:59.724691 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-d30d8e42-ff2c-4b44-a0c6-c951d506d7a3\" (UniqueName: \"kubernetes.io/csi/topolvm.io^dac418b2-fe11-462f-9218-07d86d1b4d00\") pod \"openstack-cell1-galera-0\" (UID: \"ca5ac169-5cb8-4eba-8b4a-f54ecdcdd5c4\") " pod="openstack/openstack-cell1-galera-0"
Mar 13 13:08:59.724624 master-0 kubenswrapper[28149]: I0313 13:08:59.724715 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/ca5ac169-5cb8-4eba-8b4a-f54ecdcdd5c4-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"ca5ac169-5cb8-4eba-8b4a-f54ecdcdd5c4\") " pod="openstack/openstack-cell1-galera-0"
Mar 13 13:08:59.724624 master-0 kubenswrapper[28149]: I0313 13:08:59.724747 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/ca5ac169-5cb8-4eba-8b4a-f54ecdcdd5c4-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"ca5ac169-5cb8-4eba-8b4a-f54ecdcdd5c4\") " pod="openstack/openstack-cell1-galera-0"
Mar 13 13:08:59.731395 master-0 kubenswrapper[28149]: I0313 13:08:59.725491 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/ca5ac169-5cb8-4eba-8b4a-f54ecdcdd5c4-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"ca5ac169-5cb8-4eba-8b4a-f54ecdcdd5c4\") " pod="openstack/openstack-cell1-galera-0"
Mar 13 13:08:59.731395 master-0 kubenswrapper[28149]: I0313 13:08:59.726095 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/ca5ac169-5cb8-4eba-8b4a-f54ecdcdd5c4-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"ca5ac169-5cb8-4eba-8b4a-f54ecdcdd5c4\") " pod="openstack/openstack-cell1-galera-0"
Mar 13 13:08:59.731395 master-0 kubenswrapper[28149]: I0313 13:08:59.727686 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/ca5ac169-5cb8-4eba-8b4a-f54ecdcdd5c4-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"ca5ac169-5cb8-4eba-8b4a-f54ecdcdd5c4\") " pod="openstack/openstack-cell1-galera-0"
Mar 13 13:08:59.731395 master-0 kubenswrapper[28149]: I0313 13:08:59.727827 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ca5ac169-5cb8-4eba-8b4a-f54ecdcdd5c4-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"ca5ac169-5cb8-4eba-8b4a-f54ecdcdd5c4\") " pod="openstack/openstack-cell1-galera-0"
Mar 13 13:08:59.731395 master-0 kubenswrapper[28149]: I0313 13:08:59.728342 28149 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice...
Mar 13 13:08:59.731395 master-0 kubenswrapper[28149]: I0313 13:08:59.728365 28149 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-d30d8e42-ff2c-4b44-a0c6-c951d506d7a3\" (UniqueName: \"kubernetes.io/csi/topolvm.io^dac418b2-fe11-462f-9218-07d86d1b4d00\") pod \"openstack-cell1-galera-0\" (UID: \"ca5ac169-5cb8-4eba-8b4a-f54ecdcdd5c4\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/topolvm.io/6963c578c8e9045a56cc2df9cb15d4dfed6c35293d89a6d847e1d4ee6d45f005/globalmount\"" pod="openstack/openstack-cell1-galera-0"
Mar 13 13:08:59.768476 master-0 kubenswrapper[28149]: I0313 13:08:59.752355 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/ca5ac169-5cb8-4eba-8b4a-f54ecdcdd5c4-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"ca5ac169-5cb8-4eba-8b4a-f54ecdcdd5c4\") " pod="openstack/openstack-cell1-galera-0"
Mar 13 13:08:59.768476 master-0 kubenswrapper[28149]: I0313 13:08:59.752670 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ca5ac169-5cb8-4eba-8b4a-f54ecdcdd5c4-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"ca5ac169-5cb8-4eba-8b4a-f54ecdcdd5c4\") " pod="openstack/openstack-cell1-galera-0"
Mar 13 13:08:59.768476 master-0 kubenswrapper[28149]: I0313 13:08:59.757206 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zs59p\" (UniqueName: \"kubernetes.io/projected/ca5ac169-5cb8-4eba-8b4a-f54ecdcdd5c4-kube-api-access-zs59p\") pod \"openstack-cell1-galera-0\" (UID: \"ca5ac169-5cb8-4eba-8b4a-f54ecdcdd5c4\") " pod="openstack/openstack-cell1-galera-0"
Mar 13 13:09:00.171159 master-0 kubenswrapper[28149]: I0313 13:09:00.171106 28149 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"]
Mar 13 13:09:00.340729 master-0 kubenswrapper[28149]: I0313 13:09:00.329235 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-072c0270-a5fc-40cb-9edb-f2d4c3c13438\" (UniqueName: \"kubernetes.io/csi/topolvm.io^d3850668-62a2-4826-8ab8-7173f23fe8dd\") pod \"openstack-galera-0\" (UID: \"e95954c2-39c2-46f5-8f22-580ba2880939\") " pod="openstack/openstack-galera-0"
Mar 13 13:09:00.649661 master-0 kubenswrapper[28149]: I0313 13:09:00.649611 28149 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-galera-0"
Mar 13 13:09:01.307757 master-0 kubenswrapper[28149]: I0313 13:09:01.305706 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"668a51dd-c5b3-4531-b707-39a00bfb5eef","Type":"ContainerStarted","Data":"8bfc687778c5db5001df7efb3661a3c30f95b62115ad223e687cd5f5a9202dfd"}
Mar 13 13:09:01.519031 master-0 kubenswrapper[28149]: I0313 13:09:01.518948 28149 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"]
Mar 13 13:09:01.742291 master-0 kubenswrapper[28149]: I0313 13:09:01.742118 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-d30d8e42-ff2c-4b44-a0c6-c951d506d7a3\" (UniqueName: \"kubernetes.io/csi/topolvm.io^dac418b2-fe11-462f-9218-07d86d1b4d00\") pod \"openstack-cell1-galera-0\" (UID: \"ca5ac169-5cb8-4eba-8b4a-f54ecdcdd5c4\") " pod="openstack/openstack-cell1-galera-0"
Mar 13 13:09:01.774255 master-0 kubenswrapper[28149]: I0313 13:09:01.774201 28149 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-cell1-galera-0"
Mar 13 13:09:02.391250 master-0 kubenswrapper[28149]: E0313 13:09:02.391119 28149 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode95954c2_39c2_46f5_8f22_580ba2880939.slice/crio-7395034159b5b32cf3565c6b14e8fc91993f3a1801e888b7c76df24a03e07034\": RecentStats: unable to find data in memory cache]"
Mar 13 13:09:02.400612 master-0 kubenswrapper[28149]: I0313 13:09:02.400461 28149 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-5qn6j"]
Mar 13 13:09:02.410806 master-0 kubenswrapper[28149]: I0313 13:09:02.410729 28149 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-5qn6j"
Mar 13 13:09:02.424546 master-0 kubenswrapper[28149]: I0313 13:09:02.424472 28149 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovncontroller-ovndbs"
Mar 13 13:09:02.446936 master-0 kubenswrapper[28149]: I0313 13:09:02.432507 28149 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-5qn6j"]
Mar 13 13:09:02.462280 master-0 kubenswrapper[28149]: I0313 13:09:02.460345 28149 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-scripts"
Mar 13 13:09:02.465505 master-0 kubenswrapper[28149]: I0313 13:09:02.464070 28149 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-ovs-m2gpt"]
Mar 13 13:09:02.495569 master-0 kubenswrapper[28149]: I0313 13:09:02.487871 28149 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-m2gpt"]
Mar 13 13:09:02.495569 master-0 kubenswrapper[28149]: I0313 13:09:02.488004 28149 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ovs-m2gpt"
Mar 13 13:09:02.506325 master-0 kubenswrapper[28149]: I0313 13:09:02.503327 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4e18d218-b3e4-49a7-b8c0-5e27c6f4e4e2-combined-ca-bundle\") pod \"ovn-controller-5qn6j\" (UID: \"4e18d218-b3e4-49a7-b8c0-5e27c6f4e4e2\") " pod="openstack/ovn-controller-5qn6j"
Mar 13 13:09:02.516239 master-0 kubenswrapper[28149]: I0313 13:09:02.509046 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/4e18d218-b3e4-49a7-b8c0-5e27c6f4e4e2-ovn-controller-tls-certs\") pod \"ovn-controller-5qn6j\" (UID: \"4e18d218-b3e4-49a7-b8c0-5e27c6f4e4e2\") " pod="openstack/ovn-controller-5qn6j"
Mar 13 13:09:02.516239 master-0 kubenswrapper[28149]: I0313 13:09:02.509106 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/4e18d218-b3e4-49a7-b8c0-5e27c6f4e4e2-var-log-ovn\") pod \"ovn-controller-5qn6j\" (UID: \"4e18d218-b3e4-49a7-b8c0-5e27c6f4e4e2\") " pod="openstack/ovn-controller-5qn6j"
Mar 13 13:09:02.516239 master-0 kubenswrapper[28149]: I0313 13:09:02.509238 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/4e18d218-b3e4-49a7-b8c0-5e27c6f4e4e2-var-run-ovn\") pod \"ovn-controller-5qn6j\" (UID: \"4e18d218-b3e4-49a7-b8c0-5e27c6f4e4e2\") " pod="openstack/ovn-controller-5qn6j"
Mar 13 13:09:02.516239 master-0 kubenswrapper[28149]: I0313 13:09:02.509403 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/4e18d218-b3e4-49a7-b8c0-5e27c6f4e4e2-scripts\") pod \"ovn-controller-5qn6j\" (UID: \"4e18d218-b3e4-49a7-b8c0-5e27c6f4e4e2\") " pod="openstack/ovn-controller-5qn6j"
Mar 13 13:09:02.516239 master-0 kubenswrapper[28149]: I0313 13:09:02.509437 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/4e18d218-b3e4-49a7-b8c0-5e27c6f4e4e2-var-run\") pod \"ovn-controller-5qn6j\" (UID: \"4e18d218-b3e4-49a7-b8c0-5e27c6f4e4e2\") " pod="openstack/ovn-controller-5qn6j"
Mar 13 13:09:02.516239 master-0 kubenswrapper[28149]: I0313 13:09:02.509556 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kxtml\" (UniqueName: \"kubernetes.io/projected/4e18d218-b3e4-49a7-b8c0-5e27c6f4e4e2-kube-api-access-kxtml\") pod \"ovn-controller-5qn6j\" (UID: \"4e18d218-b3e4-49a7-b8c0-5e27c6f4e4e2\") " pod="openstack/ovn-controller-5qn6j"
Mar 13 13:09:02.548577 master-0 kubenswrapper[28149]: I0313 13:09:02.548505 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"e95954c2-39c2-46f5-8f22-580ba2880939","Type":"ContainerStarted","Data":"7395034159b5b32cf3565c6b14e8fc91993f3a1801e888b7c76df24a03e07034"}
Mar 13 13:09:02.661589 master-0 kubenswrapper[28149]: I0313 13:09:02.661475 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kxtml\" (UniqueName: \"kubernetes.io/projected/4e18d218-b3e4-49a7-b8c0-5e27c6f4e4e2-kube-api-access-kxtml\") pod \"ovn-controller-5qn6j\" (UID: \"4e18d218-b3e4-49a7-b8c0-5e27c6f4e4e2\") " pod="openstack/ovn-controller-5qn6j"
Mar 13 13:09:02.661822 master-0 kubenswrapper[28149]: I0313 13:09:02.661591 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4e18d218-b3e4-49a7-b8c0-5e27c6f4e4e2-combined-ca-bundle\") pod \"ovn-controller-5qn6j\" (UID: \"4e18d218-b3e4-49a7-b8c0-5e27c6f4e4e2\") " pod="openstack/ovn-controller-5qn6j"
Mar 13 13:09:02.661822 master-0 kubenswrapper[28149]: I0313 13:09:02.661649 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/4e18d218-b3e4-49a7-b8c0-5e27c6f4e4e2-ovn-controller-tls-certs\") pod \"ovn-controller-5qn6j\" (UID: \"4e18d218-b3e4-49a7-b8c0-5e27c6f4e4e2\") " pod="openstack/ovn-controller-5qn6j"
Mar 13 13:09:02.661822 master-0 kubenswrapper[28149]: I0313 13:09:02.661665 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/4e18d218-b3e4-49a7-b8c0-5e27c6f4e4e2-var-log-ovn\") pod \"ovn-controller-5qn6j\" (UID: \"4e18d218-b3e4-49a7-b8c0-5e27c6f4e4e2\") " pod="openstack/ovn-controller-5qn6j"
Mar 13 13:09:02.661822 master-0 kubenswrapper[28149]: I0313 13:09:02.661694 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/4e18d218-b3e4-49a7-b8c0-5e27c6f4e4e2-var-run-ovn\") pod \"ovn-controller-5qn6j\" (UID: \"4e18d218-b3e4-49a7-b8c0-5e27c6f4e4e2\") " pod="openstack/ovn-controller-5qn6j"
Mar 13 13:09:02.661822 master-0 kubenswrapper[28149]: I0313 13:09:02.661734 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/4e18d218-b3e4-49a7-b8c0-5e27c6f4e4e2-scripts\") pod \"ovn-controller-5qn6j\" (UID: \"4e18d218-b3e4-49a7-b8c0-5e27c6f4e4e2\") " pod="openstack/ovn-controller-5qn6j"
Mar 13 13:09:02.661822 master-0 kubenswrapper[28149]: I0313 13:09:02.661761 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/4e18d218-b3e4-49a7-b8c0-5e27c6f4e4e2-var-run\") pod \"ovn-controller-5qn6j\" (UID: \"4e18d218-b3e4-49a7-b8c0-5e27c6f4e4e2\") " pod="openstack/ovn-controller-5qn6j"
Mar 13 13:09:02.663760 master-0 kubenswrapper[28149]: I0313 13:09:02.663702 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/4e18d218-b3e4-49a7-b8c0-5e27c6f4e4e2-var-run-ovn\") pod \"ovn-controller-5qn6j\" (UID: \"4e18d218-b3e4-49a7-b8c0-5e27c6f4e4e2\") " pod="openstack/ovn-controller-5qn6j"
Mar 13 13:09:02.663865 master-0 kubenswrapper[28149]: I0313 13:09:02.663769 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/4e18d218-b3e4-49a7-b8c0-5e27c6f4e4e2-var-run\") pod \"ovn-controller-5qn6j\" (UID: \"4e18d218-b3e4-49a7-b8c0-5e27c6f4e4e2\") " pod="openstack/ovn-controller-5qn6j"
Mar 13 13:09:02.664444 master-0 kubenswrapper[28149]: I0313 13:09:02.664420 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/4e18d218-b3e4-49a7-b8c0-5e27c6f4e4e2-var-log-ovn\") pod \"ovn-controller-5qn6j\" (UID: \"4e18d218-b3e4-49a7-b8c0-5e27c6f4e4e2\") " pod="openstack/ovn-controller-5qn6j"
Mar 13 13:09:02.666848 master-0 kubenswrapper[28149]: I0313 13:09:02.666815 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/4e18d218-b3e4-49a7-b8c0-5e27c6f4e4e2-scripts\") pod \"ovn-controller-5qn6j\" (UID: \"4e18d218-b3e4-49a7-b8c0-5e27c6f4e4e2\") " pod="openstack/ovn-controller-5qn6j"
Mar 13 13:09:02.688289 master-0 kubenswrapper[28149]: I0313 13:09:02.686985 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4e18d218-b3e4-49a7-b8c0-5e27c6f4e4e2-combined-ca-bundle\") pod \"ovn-controller-5qn6j\" (UID: \"4e18d218-b3e4-49a7-b8c0-5e27c6f4e4e2\") " pod="openstack/ovn-controller-5qn6j"
Mar 13 13:09:02.731703 master-0 kubenswrapper[28149]: I0313 13:09:02.727032 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/4e18d218-b3e4-49a7-b8c0-5e27c6f4e4e2-ovn-controller-tls-certs\") pod \"ovn-controller-5qn6j\" (UID: \"4e18d218-b3e4-49a7-b8c0-5e27c6f4e4e2\") " pod="openstack/ovn-controller-5qn6j"
Mar 13 13:09:02.731703 master-0 kubenswrapper[28149]: I0313 13:09:02.728011 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kxtml\" (UniqueName: \"kubernetes.io/projected/4e18d218-b3e4-49a7-b8c0-5e27c6f4e4e2-kube-api-access-kxtml\") pod \"ovn-controller-5qn6j\" (UID: \"4e18d218-b3e4-49a7-b8c0-5e27c6f4e4e2\") " pod="openstack/ovn-controller-5qn6j"
Mar 13 13:09:02.756693 master-0 kubenswrapper[28149]: I0313 13:09:02.755474 28149 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-5qn6j"
Mar 13 13:09:02.770558 master-0 kubenswrapper[28149]: I0313 13:09:02.770429 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/c13ed93f-4fcf-43b8-92ef-f479c5a4af68-var-log\") pod \"ovn-controller-ovs-m2gpt\" (UID: \"c13ed93f-4fcf-43b8-92ef-f479c5a4af68\") " pod="openstack/ovn-controller-ovs-m2gpt"
Mar 13 13:09:02.770558 master-0 kubenswrapper[28149]: I0313 13:09:02.770525 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/c13ed93f-4fcf-43b8-92ef-f479c5a4af68-etc-ovs\") pod \"ovn-controller-ovs-m2gpt\" (UID: \"c13ed93f-4fcf-43b8-92ef-f479c5a4af68\") " pod="openstack/ovn-controller-ovs-m2gpt"
Mar 13 13:09:02.770558 master-0 kubenswrapper[28149]: I0313 13:09:02.770554 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c13ed93f-4fcf-43b8-92ef-f479c5a4af68-scripts\") pod \"ovn-controller-ovs-m2gpt\" (UID: \"c13ed93f-4fcf-43b8-92ef-f479c5a4af68\") " pod="openstack/ovn-controller-ovs-m2gpt"
Mar 13 13:09:02.773040 master-0 kubenswrapper[28149]: I0313 13:09:02.772436 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ttmnf\" (UniqueName: \"kubernetes.io/projected/c13ed93f-4fcf-43b8-92ef-f479c5a4af68-kube-api-access-ttmnf\") pod \"ovn-controller-ovs-m2gpt\" (UID: \"c13ed93f-4fcf-43b8-92ef-f479c5a4af68\") " pod="openstack/ovn-controller-ovs-m2gpt"
Mar 13 13:09:02.777029 master-0 kubenswrapper[28149]: I0313 13:09:02.775287 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/c13ed93f-4fcf-43b8-92ef-f479c5a4af68-var-run\") pod \"ovn-controller-ovs-m2gpt\" (UID: \"c13ed93f-4fcf-43b8-92ef-f479c5a4af68\") " pod="openstack/ovn-controller-ovs-m2gpt"
Mar 13 13:09:02.777029 master-0 kubenswrapper[28149]: I0313 13:09:02.775621 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/c13ed93f-4fcf-43b8-92ef-f479c5a4af68-var-lib\") pod \"ovn-controller-ovs-m2gpt\" (UID: \"c13ed93f-4fcf-43b8-92ef-f479c5a4af68\") " pod="openstack/ovn-controller-ovs-m2gpt"
Mar 13 13:09:02.885680 master-0 kubenswrapper[28149]: I0313 13:09:02.884241 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/c13ed93f-4fcf-43b8-92ef-f479c5a4af68-var-log\") pod \"ovn-controller-ovs-m2gpt\" (UID: \"c13ed93f-4fcf-43b8-92ef-f479c5a4af68\") " pod="openstack/ovn-controller-ovs-m2gpt"
Mar 13 13:09:02.885680 master-0 kubenswrapper[28149]: I0313 13:09:02.884318 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/c13ed93f-4fcf-43b8-92ef-f479c5a4af68-etc-ovs\") pod \"ovn-controller-ovs-m2gpt\" (UID: \"c13ed93f-4fcf-43b8-92ef-f479c5a4af68\") " pod="openstack/ovn-controller-ovs-m2gpt"
Mar 13 13:09:02.885680 master-0 kubenswrapper[28149]: I0313 13:09:02.884339 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c13ed93f-4fcf-43b8-92ef-f479c5a4af68-scripts\") pod \"ovn-controller-ovs-m2gpt\" (UID: \"c13ed93f-4fcf-43b8-92ef-f479c5a4af68\") " pod="openstack/ovn-controller-ovs-m2gpt"
Mar 13 13:09:02.885680 master-0 kubenswrapper[28149]: I0313 13:09:02.884613 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/c13ed93f-4fcf-43b8-92ef-f479c5a4af68-var-log\") pod \"ovn-controller-ovs-m2gpt\" (UID: \"c13ed93f-4fcf-43b8-92ef-f479c5a4af68\") " pod="openstack/ovn-controller-ovs-m2gpt"
Mar 13 13:09:02.885680 master-0 kubenswrapper[28149]: I0313 13:09:02.884646 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ttmnf\" (UniqueName: \"kubernetes.io/projected/c13ed93f-4fcf-43b8-92ef-f479c5a4af68-kube-api-access-ttmnf\") pod \"ovn-controller-ovs-m2gpt\" (UID: \"c13ed93f-4fcf-43b8-92ef-f479c5a4af68\") " pod="openstack/ovn-controller-ovs-m2gpt"
Mar 13 13:09:02.885680 master-0 kubenswrapper[28149]: I0313 13:09:02.885006 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/c13ed93f-4fcf-43b8-92ef-f479c5a4af68-var-run\") pod \"ovn-controller-ovs-m2gpt\" (UID: \"c13ed93f-4fcf-43b8-92ef-f479c5a4af68\") " pod="openstack/ovn-controller-ovs-m2gpt"
Mar 13 13:09:02.885680 master-0 kubenswrapper[28149]: I0313 13:09:02.885357 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/c13ed93f-4fcf-43b8-92ef-f479c5a4af68-etc-ovs\") pod \"ovn-controller-ovs-m2gpt\" (UID: \"c13ed93f-4fcf-43b8-92ef-f479c5a4af68\") " pod="openstack/ovn-controller-ovs-m2gpt"
Mar 13 13:09:02.885680 master-0 kubenswrapper[28149]: I0313 13:09:02.885480 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/c13ed93f-4fcf-43b8-92ef-f479c5a4af68-var-run\") pod \"ovn-controller-ovs-m2gpt\" (UID: \"c13ed93f-4fcf-43b8-92ef-f479c5a4af68\") " pod="openstack/ovn-controller-ovs-m2gpt"
Mar 13 13:09:02.886127 master-0 kubenswrapper[28149]: I0313 13:09:02.885762 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/c13ed93f-4fcf-43b8-92ef-f479c5a4af68-var-lib\") pod \"ovn-controller-ovs-m2gpt\" (UID: \"c13ed93f-4fcf-43b8-92ef-f479c5a4af68\") " pod="openstack/ovn-controller-ovs-m2gpt"
Mar 13 13:09:02.886127 master-0 kubenswrapper[28149]: I0313 13:09:02.886071 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/c13ed93f-4fcf-43b8-92ef-f479c5a4af68-var-lib\") pod \"ovn-controller-ovs-m2gpt\" (UID: \"c13ed93f-4fcf-43b8-92ef-f479c5a4af68\") " pod="openstack/ovn-controller-ovs-m2gpt"
Mar 13 13:09:02.886715 master-0 kubenswrapper[28149]: I0313 13:09:02.886675 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c13ed93f-4fcf-43b8-92ef-f479c5a4af68-scripts\") pod \"ovn-controller-ovs-m2gpt\" (UID: \"c13ed93f-4fcf-43b8-92ef-f479c5a4af68\") " pod="openstack/ovn-controller-ovs-m2gpt"
Mar 13 13:09:02.915323 master-0 kubenswrapper[28149]: I0313 13:09:02.915170 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ttmnf\" (UniqueName: \"kubernetes.io/projected/c13ed93f-4fcf-43b8-92ef-f479c5a4af68-kube-api-access-ttmnf\") pod \"ovn-controller-ovs-m2gpt\" (UID: \"c13ed93f-4fcf-43b8-92ef-f479c5a4af68\") " pod="openstack/ovn-controller-ovs-m2gpt"
Mar 13 13:09:03.087597 master-0 kubenswrapper[28149]: I0313 13:09:03.083326 28149 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"]
Mar 13 13:09:03.117661 master-0 kubenswrapper[28149]: I0313 13:09:03.117600 28149 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ovs-m2gpt"
Mar 13 13:09:05.230867 master-0 kubenswrapper[28149]: I0313 13:09:05.230821 28149 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-sb-0"]
Mar 13 13:09:05.257182 master-0 kubenswrapper[28149]: I0313 13:09:05.257106 28149 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-0"
Mar 13 13:09:05.264688 master-0 kubenswrapper[28149]: I0313 13:09:05.260709 28149 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-scripts"
Mar 13 13:09:05.264688 master-0 kubenswrapper[28149]: I0313 13:09:05.262272 28149 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovn-metrics"
Mar 13 13:09:05.264688 master-0 kubenswrapper[28149]: I0313 13:09:05.262898 28149 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-config"
Mar 13 13:09:05.264688 master-0 kubenswrapper[28149]: I0313 13:09:05.263076 28149 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-sb-ovndbs"
Mar 13 13:09:05.265424 master-0 kubenswrapper[28149]: I0313 13:09:05.265053 28149 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"]
Mar 13 13:09:05.361813 master-0 kubenswrapper[28149]: I0313 13:09:05.361750 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/166fd514-febd-49e6-8e22-bc0faaafc25b-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"166fd514-febd-49e6-8e22-bc0faaafc25b\") " pod="openstack/ovsdbserver-sb-0"
Mar 13 13:09:05.361813 master-0 kubenswrapper[28149]: I0313 13:09:05.361812 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/166fd514-febd-49e6-8e22-bc0faaafc25b-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"166fd514-febd-49e6-8e22-bc0faaafc25b\") " pod="openstack/ovsdbserver-sb-0"
Mar 13 13:09:05.362082 master-0 kubenswrapper[28149]: I0313 13:09:05.361878 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/166fd514-febd-49e6-8e22-bc0faaafc25b-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"166fd514-febd-49e6-8e22-bc0faaafc25b\") " pod="openstack/ovsdbserver-sb-0"
Mar 13 13:09:05.362082 master-0 kubenswrapper[28149]: I0313 13:09:05.361942 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qxj6p\" (UniqueName: \"kubernetes.io/projected/166fd514-febd-49e6-8e22-bc0faaafc25b-kube-api-access-qxj6p\") pod \"ovsdbserver-sb-0\" (UID: \"166fd514-febd-49e6-8e22-bc0faaafc25b\") " pod="openstack/ovsdbserver-sb-0"
Mar 13 13:09:05.362174 master-0 kubenswrapper[28149]: I0313 13:09:05.362082 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-4b6d5607-5f58-430c-9659-641356dad970\" (UniqueName: \"kubernetes.io/csi/topolvm.io^47645a92-c555-4e52-9ec8-83c4d0715805\") pod \"ovsdbserver-sb-0\" (UID: \"166fd514-febd-49e6-8e22-bc0faaafc25b\") " pod="openstack/ovsdbserver-sb-0"
Mar 13 13:09:05.362240 master-0 kubenswrapper[28149]: I0313 13:09:05.362209 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/166fd514-febd-49e6-8e22-bc0faaafc25b-config\") pod \"ovsdbserver-sb-0\" (UID: \"166fd514-febd-49e6-8e22-bc0faaafc25b\") " pod="openstack/ovsdbserver-sb-0"
Mar 13 13:09:05.362299 master-0 kubenswrapper[28149]: I0313 13:09:05.362238 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/166fd514-febd-49e6-8e22-bc0faaafc25b-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"166fd514-febd-49e6-8e22-bc0faaafc25b\") " pod="openstack/ovsdbserver-sb-0"
Mar 13 13:09:05.362299 master-0 kubenswrapper[28149]: I0313 13:09:05.362280 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/166fd514-febd-49e6-8e22-bc0faaafc25b-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"166fd514-febd-49e6-8e22-bc0faaafc25b\") " pod="openstack/ovsdbserver-sb-0"
Mar 13 13:09:05.466074 master-0 kubenswrapper[28149]: I0313 13:09:05.466009 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/166fd514-febd-49e6-8e22-bc0faaafc25b-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"166fd514-febd-49e6-8e22-bc0faaafc25b\") " pod="openstack/ovsdbserver-sb-0"
Mar 13 13:09:05.466356 master-0 kubenswrapper[28149]: I0313 13:09:05.466307 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qxj6p\" (UniqueName: \"kubernetes.io/projected/166fd514-febd-49e6-8e22-bc0faaafc25b-kube-api-access-qxj6p\") pod \"ovsdbserver-sb-0\" (UID: \"166fd514-febd-49e6-8e22-bc0faaafc25b\") " pod="openstack/ovsdbserver-sb-0"
Mar 13 13:09:05.466414 master-0 kubenswrapper[28149]: I0313 13:09:05.466368 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-4b6d5607-5f58-430c-9659-641356dad970\" (UniqueName: \"kubernetes.io/csi/topolvm.io^47645a92-c555-4e52-9ec8-83c4d0715805\") pod \"ovsdbserver-sb-0\" (UID: \"166fd514-febd-49e6-8e22-bc0faaafc25b\") " pod="openstack/ovsdbserver-sb-0"
Mar 13 13:09:05.466495 master-0 kubenswrapper[28149]: I0313 13:09:05.466421 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/166fd514-febd-49e6-8e22-bc0faaafc25b-config\") pod \"ovsdbserver-sb-0\" (UID: \"166fd514-febd-49e6-8e22-bc0faaafc25b\") " pod="openstack/ovsdbserver-sb-0"
Mar 13 13:09:05.466495 master-0 kubenswrapper[28149]: I0313 13:09:05.466448 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/166fd514-febd-49e6-8e22-bc0faaafc25b-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"166fd514-febd-49e6-8e22-bc0faaafc25b\") " pod="openstack/ovsdbserver-sb-0"
Mar 13 13:09:05.466495 master-0 kubenswrapper[28149]: I0313 13:09:05.466480 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/166fd514-febd-49e6-8e22-bc0faaafc25b-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"166fd514-febd-49e6-8e22-bc0faaafc25b\") " pod="openstack/ovsdbserver-sb-0"
Mar 13 13:09:05.466891 master-0 kubenswrapper[28149]: I0313 13:09:05.466845 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/166fd514-febd-49e6-8e22-bc0faaafc25b-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"166fd514-febd-49e6-8e22-bc0faaafc25b\") " pod="openstack/ovsdbserver-sb-0"
Mar 13 13:09:05.467284 master-0 kubenswrapper[28149]: I0313 13:09:05.467233 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/166fd514-febd-49e6-8e22-bc0faaafc25b-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"166fd514-febd-49e6-8e22-bc0faaafc25b\") " pod="openstack/ovsdbserver-sb-0"
Mar 13 13:09:05.467428 master-0 kubenswrapper[28149]: I0313 13:09:05.467373 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/166fd514-febd-49e6-8e22-bc0faaafc25b-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"166fd514-febd-49e6-8e22-bc0faaafc25b\") " pod="openstack/ovsdbserver-sb-0"
Mar 13 13:09:05.468039 master-0 kubenswrapper[28149]: I0313 13:09:05.467998 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/166fd514-febd-49e6-8e22-bc0faaafc25b-config\") pod \"ovsdbserver-sb-0\" (UID: \"166fd514-febd-49e6-8e22-bc0faaafc25b\") " pod="openstack/ovsdbserver-sb-0"
Mar 13 13:09:05.468772 master-0 kubenswrapper[28149]: I0313 13:09:05.468602 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/166fd514-febd-49e6-8e22-bc0faaafc25b-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"166fd514-febd-49e6-8e22-bc0faaafc25b\") " pod="openstack/ovsdbserver-sb-0"
Mar 13 13:09:05.469918 master-0 kubenswrapper[28149]: I0313 13:09:05.469888 28149 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice...
Mar 13 13:09:05.470005 master-0 kubenswrapper[28149]: I0313 13:09:05.469927 28149 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-4b6d5607-5f58-430c-9659-641356dad970\" (UniqueName: \"kubernetes.io/csi/topolvm.io^47645a92-c555-4e52-9ec8-83c4d0715805\") pod \"ovsdbserver-sb-0\" (UID: \"166fd514-febd-49e6-8e22-bc0faaafc25b\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/topolvm.io/87a6876fc4f94b9cc6b5a01949f187cf5af526e15f990c22178138bf52fe28bd/globalmount\"" pod="openstack/ovsdbserver-sb-0" Mar 13 13:09:05.476623 master-0 kubenswrapper[28149]: I0313 13:09:05.473626 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/166fd514-febd-49e6-8e22-bc0faaafc25b-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"166fd514-febd-49e6-8e22-bc0faaafc25b\") " pod="openstack/ovsdbserver-sb-0" Mar 13 13:09:05.478158 master-0 kubenswrapper[28149]: I0313 13:09:05.477974 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/166fd514-febd-49e6-8e22-bc0faaafc25b-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"166fd514-febd-49e6-8e22-bc0faaafc25b\") " pod="openstack/ovsdbserver-sb-0" Mar 13 13:09:05.478329 master-0 kubenswrapper[28149]: I0313 13:09:05.478275 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/166fd514-febd-49e6-8e22-bc0faaafc25b-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"166fd514-febd-49e6-8e22-bc0faaafc25b\") " pod="openstack/ovsdbserver-sb-0" Mar 13 13:09:05.492543 master-0 kubenswrapper[28149]: I0313 13:09:05.491786 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qxj6p\" (UniqueName: \"kubernetes.io/projected/166fd514-febd-49e6-8e22-bc0faaafc25b-kube-api-access-qxj6p\") pod \"ovsdbserver-sb-0\" 
(UID: \"166fd514-febd-49e6-8e22-bc0faaafc25b\") " pod="openstack/ovsdbserver-sb-0" Mar 13 13:09:05.523508 master-0 kubenswrapper[28149]: I0313 13:09:05.523351 28149 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-nb-0"] Mar 13 13:09:05.533724 master-0 kubenswrapper[28149]: I0313 13:09:05.525684 28149 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-0" Mar 13 13:09:05.537317 master-0 kubenswrapper[28149]: I0313 13:09:05.534480 28149 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-config" Mar 13 13:09:05.537317 master-0 kubenswrapper[28149]: I0313 13:09:05.534787 28149 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-scripts" Mar 13 13:09:05.543504 master-0 kubenswrapper[28149]: I0313 13:09:05.541824 28149 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-nb-ovndbs" Mar 13 13:09:05.551245 master-0 kubenswrapper[28149]: I0313 13:09:05.548662 28149 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Mar 13 13:09:05.676667 master-0 kubenswrapper[28149]: I0313 13:09:05.676002 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/256399c4-c376-4836-9483-76a46694994a-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"256399c4-c376-4836-9483-76a46694994a\") " pod="openstack/ovsdbserver-nb-0" Mar 13 13:09:05.676667 master-0 kubenswrapper[28149]: I0313 13:09:05.676067 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t7fp7\" (UniqueName: \"kubernetes.io/projected/256399c4-c376-4836-9483-76a46694994a-kube-api-access-t7fp7\") pod \"ovsdbserver-nb-0\" (UID: \"256399c4-c376-4836-9483-76a46694994a\") " pod="openstack/ovsdbserver-nb-0" Mar 13 
13:09:05.676667 master-0 kubenswrapper[28149]: I0313 13:09:05.676096 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/256399c4-c376-4836-9483-76a46694994a-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"256399c4-c376-4836-9483-76a46694994a\") " pod="openstack/ovsdbserver-nb-0" Mar 13 13:09:05.676667 master-0 kubenswrapper[28149]: I0313 13:09:05.676225 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/256399c4-c376-4836-9483-76a46694994a-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"256399c4-c376-4836-9483-76a46694994a\") " pod="openstack/ovsdbserver-nb-0" Mar 13 13:09:05.676667 master-0 kubenswrapper[28149]: I0313 13:09:05.676246 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/256399c4-c376-4836-9483-76a46694994a-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"256399c4-c376-4836-9483-76a46694994a\") " pod="openstack/ovsdbserver-nb-0" Mar 13 13:09:05.676667 master-0 kubenswrapper[28149]: I0313 13:09:05.676305 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/256399c4-c376-4836-9483-76a46694994a-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"256399c4-c376-4836-9483-76a46694994a\") " pod="openstack/ovsdbserver-nb-0" Mar 13 13:09:05.676667 master-0 kubenswrapper[28149]: I0313 13:09:05.676324 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-e54b255e-11e4-48dc-9de3-e55cd990498f\" (UniqueName: \"kubernetes.io/csi/topolvm.io^37103432-6966-41dc-965c-09cb2fdc7c8b\") pod \"ovsdbserver-nb-0\" (UID: \"256399c4-c376-4836-9483-76a46694994a\") " 
pod="openstack/ovsdbserver-nb-0" Mar 13 13:09:05.676667 master-0 kubenswrapper[28149]: I0313 13:09:05.676354 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/256399c4-c376-4836-9483-76a46694994a-config\") pod \"ovsdbserver-nb-0\" (UID: \"256399c4-c376-4836-9483-76a46694994a\") " pod="openstack/ovsdbserver-nb-0" Mar 13 13:09:05.779766 master-0 kubenswrapper[28149]: I0313 13:09:05.779708 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/256399c4-c376-4836-9483-76a46694994a-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"256399c4-c376-4836-9483-76a46694994a\") " pod="openstack/ovsdbserver-nb-0" Mar 13 13:09:05.779975 master-0 kubenswrapper[28149]: I0313 13:09:05.779891 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/256399c4-c376-4836-9483-76a46694994a-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"256399c4-c376-4836-9483-76a46694994a\") " pod="openstack/ovsdbserver-nb-0" Mar 13 13:09:05.780019 master-0 kubenswrapper[28149]: I0313 13:09:05.779993 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/256399c4-c376-4836-9483-76a46694994a-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"256399c4-c376-4836-9483-76a46694994a\") " pod="openstack/ovsdbserver-nb-0" Mar 13 13:09:05.780054 master-0 kubenswrapper[28149]: I0313 13:09:05.780022 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-e54b255e-11e4-48dc-9de3-e55cd990498f\" (UniqueName: \"kubernetes.io/csi/topolvm.io^37103432-6966-41dc-965c-09cb2fdc7c8b\") pod \"ovsdbserver-nb-0\" (UID: \"256399c4-c376-4836-9483-76a46694994a\") " pod="openstack/ovsdbserver-nb-0" Mar 13 13:09:05.780102 master-0 
kubenswrapper[28149]: I0313 13:09:05.780060 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/256399c4-c376-4836-9483-76a46694994a-config\") pod \"ovsdbserver-nb-0\" (UID: \"256399c4-c376-4836-9483-76a46694994a\") " pod="openstack/ovsdbserver-nb-0" Mar 13 13:09:05.780633 master-0 kubenswrapper[28149]: I0313 13:09:05.780384 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/256399c4-c376-4836-9483-76a46694994a-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"256399c4-c376-4836-9483-76a46694994a\") " pod="openstack/ovsdbserver-nb-0" Mar 13 13:09:05.780688 master-0 kubenswrapper[28149]: I0313 13:09:05.780642 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t7fp7\" (UniqueName: \"kubernetes.io/projected/256399c4-c376-4836-9483-76a46694994a-kube-api-access-t7fp7\") pod \"ovsdbserver-nb-0\" (UID: \"256399c4-c376-4836-9483-76a46694994a\") " pod="openstack/ovsdbserver-nb-0" Mar 13 13:09:05.780688 master-0 kubenswrapper[28149]: I0313 13:09:05.780680 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/256399c4-c376-4836-9483-76a46694994a-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"256399c4-c376-4836-9483-76a46694994a\") " pod="openstack/ovsdbserver-nb-0" Mar 13 13:09:05.781041 master-0 kubenswrapper[28149]: I0313 13:09:05.781013 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/256399c4-c376-4836-9483-76a46694994a-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"256399c4-c376-4836-9483-76a46694994a\") " pod="openstack/ovsdbserver-nb-0" Mar 13 13:09:05.781781 master-0 kubenswrapper[28149]: I0313 13:09:05.781736 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"config\" (UniqueName: \"kubernetes.io/configmap/256399c4-c376-4836-9483-76a46694994a-config\") pod \"ovsdbserver-nb-0\" (UID: \"256399c4-c376-4836-9483-76a46694994a\") " pod="openstack/ovsdbserver-nb-0" Mar 13 13:09:05.783945 master-0 kubenswrapper[28149]: I0313 13:09:05.783909 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/256399c4-c376-4836-9483-76a46694994a-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"256399c4-c376-4836-9483-76a46694994a\") " pod="openstack/ovsdbserver-nb-0" Mar 13 13:09:05.785451 master-0 kubenswrapper[28149]: I0313 13:09:05.785415 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/256399c4-c376-4836-9483-76a46694994a-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"256399c4-c376-4836-9483-76a46694994a\") " pod="openstack/ovsdbserver-nb-0" Mar 13 13:09:05.787236 master-0 kubenswrapper[28149]: I0313 13:09:05.787212 28149 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Mar 13 13:09:05.787326 master-0 kubenswrapper[28149]: I0313 13:09:05.787241 28149 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-e54b255e-11e4-48dc-9de3-e55cd990498f\" (UniqueName: \"kubernetes.io/csi/topolvm.io^37103432-6966-41dc-965c-09cb2fdc7c8b\") pod \"ovsdbserver-nb-0\" (UID: \"256399c4-c376-4836-9483-76a46694994a\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/topolvm.io/15652ee2c46e8e35acddaef9fc63062cfeba58cd8100b647ba3f9a815e3c853b/globalmount\"" pod="openstack/ovsdbserver-nb-0" Mar 13 13:09:05.787541 master-0 kubenswrapper[28149]: I0313 13:09:05.787504 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/256399c4-c376-4836-9483-76a46694994a-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"256399c4-c376-4836-9483-76a46694994a\") " pod="openstack/ovsdbserver-nb-0" Mar 13 13:09:05.788154 master-0 kubenswrapper[28149]: I0313 13:09:05.788115 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/256399c4-c376-4836-9483-76a46694994a-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"256399c4-c376-4836-9483-76a46694994a\") " pod="openstack/ovsdbserver-nb-0" Mar 13 13:09:05.799909 master-0 kubenswrapper[28149]: I0313 13:09:05.799858 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t7fp7\" (UniqueName: \"kubernetes.io/projected/256399c4-c376-4836-9483-76a46694994a-kube-api-access-t7fp7\") pod \"ovsdbserver-nb-0\" (UID: \"256399c4-c376-4836-9483-76a46694994a\") " pod="openstack/ovsdbserver-nb-0" Mar 13 13:09:07.125210 master-0 kubenswrapper[28149]: I0313 13:09:07.125155 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-4b6d5607-5f58-430c-9659-641356dad970\" (UniqueName: \"kubernetes.io/csi/topolvm.io^47645a92-c555-4e52-9ec8-83c4d0715805\") pod \"ovsdbserver-sb-0\" 
(UID: \"166fd514-febd-49e6-8e22-bc0faaafc25b\") " pod="openstack/ovsdbserver-sb-0" Mar 13 13:09:07.398677 master-0 kubenswrapper[28149]: I0313 13:09:07.398550 28149 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-0" Mar 13 13:09:08.462419 master-0 kubenswrapper[28149]: I0313 13:09:08.462284 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-e54b255e-11e4-48dc-9de3-e55cd990498f\" (UniqueName: \"kubernetes.io/csi/topolvm.io^37103432-6966-41dc-965c-09cb2fdc7c8b\") pod \"ovsdbserver-nb-0\" (UID: \"256399c4-c376-4836-9483-76a46694994a\") " pod="openstack/ovsdbserver-nb-0" Mar 13 13:09:08.584606 master-0 kubenswrapper[28149]: I0313 13:09:08.584521 28149 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-0" Mar 13 13:09:08.637040 master-0 kubenswrapper[28149]: I0313 13:09:08.636841 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"ca5ac169-5cb8-4eba-8b4a-f54ecdcdd5c4","Type":"ContainerStarted","Data":"c9ba6454203c902836f205b0377254c9d3a766d0d0731d19a957bdb1f4d4271b"} Mar 13 13:09:23.757064 master-0 kubenswrapper[28149]: I0313 13:09:23.756871 28149 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-5qn6j"] Mar 13 13:09:23.925163 master-0 kubenswrapper[28149]: I0313 13:09:23.909625 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-5qn6j" event={"ID":"4e18d218-b3e4-49a7-b8c0-5e27c6f4e4e2","Type":"ContainerStarted","Data":"ca5e9be2ea419ecfd22c8c61a61951df90729ce0f12cfed00add194150e833eb"} Mar 13 13:09:23.942168 master-0 kubenswrapper[28149]: I0313 13:09:23.937721 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"309b8d37-1bcd-4f23-946e-d3eb23e5d072","Type":"ContainerStarted","Data":"ee1df53807be6143f3f7654c5c01ef2efa00e610003e1036f8dd017702692255"} Mar 13 13:09:23.942168 
master-0 kubenswrapper[28149]: I0313 13:09:23.939316 28149 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/memcached-0" Mar 13 13:09:23.948201 master-0 kubenswrapper[28149]: I0313 13:09:23.944751 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-685c76cf85-q6j75" event={"ID":"2c02a249-c9cd-4660-b69b-cf03f864c992","Type":"ContainerStarted","Data":"7184cc60811d8467e409a47e8f61f248d7937bdada81fb65d3364f6b462a62ea"} Mar 13 13:09:23.955231 master-0 kubenswrapper[28149]: I0313 13:09:23.952561 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"ca5ac169-5cb8-4eba-8b4a-f54ecdcdd5c4","Type":"ContainerStarted","Data":"0fe2ae1bdeb6eb0a16d9d63e09433b442bba4e56f35273943909f235211ebbef"} Mar 13 13:09:23.957275 master-0 kubenswrapper[28149]: I0313 13:09:23.956576 28149 generic.go:334] "Generic (PLEG): container finished" podID="c04eff7d-dbbf-4174-8d0e-71046963aca5" containerID="ecb27f6b8c3e0a56be61b3a4483bf9e880c346d0fcae3221ca6ce3dceea4bf57" exitCode=0 Mar 13 13:09:23.957275 master-0 kubenswrapper[28149]: I0313 13:09:23.956666 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8476fd89bc-ljn2q" event={"ID":"c04eff7d-dbbf-4174-8d0e-71046963aca5","Type":"ContainerDied","Data":"ecb27f6b8c3e0a56be61b3a4483bf9e880c346d0fcae3221ca6ce3dceea4bf57"} Mar 13 13:09:23.987428 master-0 kubenswrapper[28149]: I0313 13:09:23.974543 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"e95954c2-39c2-46f5-8f22-580ba2880939","Type":"ContainerStarted","Data":"0320b22b748045322e24589424897a63dcf309589c46a89ce0b500b988200389"} Mar 13 13:09:23.988343 master-0 kubenswrapper[28149]: I0313 13:09:23.988264 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6ff8fd9d5c-gztb5" 
event={"ID":"afde1d0c-cef9-4fb3-94d0-f88cab0b4e01","Type":"ContainerStarted","Data":"2e93b7c7b28ba7b25377e317ea55e9d6c7f25575950c735bdfeb90a9b6628d32"} Mar 13 13:09:23.989750 master-0 kubenswrapper[28149]: I0313 13:09:23.989625 28149 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/memcached-0" podStartSLOduration=3.431686852 podStartE2EDuration="30.989546721s" podCreationTimestamp="2026-03-13 13:08:53 +0000 UTC" firstStartedPulling="2026-03-13 13:08:55.433879678 +0000 UTC m=+909.087344837" lastFinishedPulling="2026-03-13 13:09:22.991739547 +0000 UTC m=+936.645204706" observedRunningTime="2026-03-13 13:09:23.968700441 +0000 UTC m=+937.622165610" watchObservedRunningTime="2026-03-13 13:09:23.989546721 +0000 UTC m=+937.643011890" Mar 13 13:09:23.997892 master-0 kubenswrapper[28149]: I0313 13:09:23.997763 28149 generic.go:334] "Generic (PLEG): container finished" podID="52e53cd9-c831-4aa8-ae1c-5912efb14c13" containerID="0fb8952012e4c8ddf708c6c1f48b964758311e2d3f623ca5a82a0cdd96320024" exitCode=0 Mar 13 13:09:23.997892 master-0 kubenswrapper[28149]: I0313 13:09:23.997852 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-586dbdbb8c-7z4lg" event={"ID":"52e53cd9-c831-4aa8-ae1c-5912efb14c13","Type":"ContainerDied","Data":"0fb8952012e4c8ddf708c6c1f48b964758311e2d3f623ca5a82a0cdd96320024"} Mar 13 13:09:24.009920 master-0 kubenswrapper[28149]: I0313 13:09:24.002945 28149 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Mar 13 13:09:24.176941 master-0 kubenswrapper[28149]: I0313 13:09:24.176196 28149 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Mar 13 13:09:24.618723 master-0 kubenswrapper[28149]: I0313 13:09:24.618219 28149 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-m2gpt"] Mar 13 13:09:24.702938 master-0 kubenswrapper[28149]: I0313 13:09:24.702814 28149 util.go:48] "No ready sandbox for pod can 
be found. Need to start a new one" pod="openstack/dnsmasq-dns-685c76cf85-q6j75" Mar 13 13:09:24.733199 master-0 kubenswrapper[28149]: I0313 13:09:24.733130 28149 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-8476fd89bc-ljn2q" Mar 13 13:09:25.007243 master-0 kubenswrapper[28149]: I0313 13:09:25.007178 28149 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f65c5\" (UniqueName: \"kubernetes.io/projected/2c02a249-c9cd-4660-b69b-cf03f864c992-kube-api-access-f65c5\") pod \"2c02a249-c9cd-4660-b69b-cf03f864c992\" (UID: \"2c02a249-c9cd-4660-b69b-cf03f864c992\") " Mar 13 13:09:25.007996 master-0 kubenswrapper[28149]: I0313 13:09:25.007329 28149 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c04eff7d-dbbf-4174-8d0e-71046963aca5-config\") pod \"c04eff7d-dbbf-4174-8d0e-71046963aca5\" (UID: \"c04eff7d-dbbf-4174-8d0e-71046963aca5\") " Mar 13 13:09:25.007996 master-0 kubenswrapper[28149]: I0313 13:09:25.007430 28149 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c04eff7d-dbbf-4174-8d0e-71046963aca5-dns-svc\") pod \"c04eff7d-dbbf-4174-8d0e-71046963aca5\" (UID: \"c04eff7d-dbbf-4174-8d0e-71046963aca5\") " Mar 13 13:09:25.007996 master-0 kubenswrapper[28149]: I0313 13:09:25.007515 28149 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2c02a249-c9cd-4660-b69b-cf03f864c992-config\") pod \"2c02a249-c9cd-4660-b69b-cf03f864c992\" (UID: \"2c02a249-c9cd-4660-b69b-cf03f864c992\") " Mar 13 13:09:25.007996 master-0 kubenswrapper[28149]: I0313 13:09:25.007591 28149 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mwpbm\" (UniqueName: 
\"kubernetes.io/projected/c04eff7d-dbbf-4174-8d0e-71046963aca5-kube-api-access-mwpbm\") pod \"c04eff7d-dbbf-4174-8d0e-71046963aca5\" (UID: \"c04eff7d-dbbf-4174-8d0e-71046963aca5\") " Mar 13 13:09:25.025811 master-0 kubenswrapper[28149]: I0313 13:09:25.025745 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8476fd89bc-ljn2q" event={"ID":"c04eff7d-dbbf-4174-8d0e-71046963aca5","Type":"ContainerDied","Data":"3eebabf895fd77da6136e3c4bf0f40ccd20db0ebce526227ae68b0885046e203"} Mar 13 13:09:25.026048 master-0 kubenswrapper[28149]: I0313 13:09:25.025868 28149 scope.go:117] "RemoveContainer" containerID="ecb27f6b8c3e0a56be61b3a4483bf9e880c346d0fcae3221ca6ce3dceea4bf57" Mar 13 13:09:25.026100 master-0 kubenswrapper[28149]: I0313 13:09:25.026064 28149 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-8476fd89bc-ljn2q" Mar 13 13:09:25.031151 master-0 kubenswrapper[28149]: I0313 13:09:25.030925 28149 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2c02a249-c9cd-4660-b69b-cf03f864c992-kube-api-access-f65c5" (OuterVolumeSpecName: "kube-api-access-f65c5") pod "2c02a249-c9cd-4660-b69b-cf03f864c992" (UID: "2c02a249-c9cd-4660-b69b-cf03f864c992"). InnerVolumeSpecName "kube-api-access-f65c5". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 13:09:25.033857 master-0 kubenswrapper[28149]: I0313 13:09:25.033804 28149 generic.go:334] "Generic (PLEG): container finished" podID="afde1d0c-cef9-4fb3-94d0-f88cab0b4e01" containerID="2e93b7c7b28ba7b25377e317ea55e9d6c7f25575950c735bdfeb90a9b6628d32" exitCode=0 Mar 13 13:09:25.033935 master-0 kubenswrapper[28149]: I0313 13:09:25.033917 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6ff8fd9d5c-gztb5" event={"ID":"afde1d0c-cef9-4fb3-94d0-f88cab0b4e01","Type":"ContainerDied","Data":"2e93b7c7b28ba7b25377e317ea55e9d6c7f25575950c735bdfeb90a9b6628d32"} Mar 13 13:09:25.033995 master-0 kubenswrapper[28149]: I0313 13:09:25.033947 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6ff8fd9d5c-gztb5" event={"ID":"afde1d0c-cef9-4fb3-94d0-f88cab0b4e01","Type":"ContainerStarted","Data":"d131bfef4eb105aff051c55f10bdb6b294ee6cf9c636d703df83cfe2951d467e"} Mar 13 13:09:25.035192 master-0 kubenswrapper[28149]: I0313 13:09:25.035172 28149 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-6ff8fd9d5c-gztb5" Mar 13 13:09:25.035845 master-0 kubenswrapper[28149]: I0313 13:09:25.035778 28149 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c04eff7d-dbbf-4174-8d0e-71046963aca5-kube-api-access-mwpbm" (OuterVolumeSpecName: "kube-api-access-mwpbm") pod "c04eff7d-dbbf-4174-8d0e-71046963aca5" (UID: "c04eff7d-dbbf-4174-8d0e-71046963aca5"). InnerVolumeSpecName "kube-api-access-mwpbm". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 13:09:25.042841 master-0 kubenswrapper[28149]: I0313 13:09:25.042451 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"166fd514-febd-49e6-8e22-bc0faaafc25b","Type":"ContainerStarted","Data":"6b797a41567e01429b15e21e4b41dd995785bcf39f7e892c162051d5f831e0dd"} Mar 13 13:09:25.047052 master-0 kubenswrapper[28149]: I0313 13:09:25.046983 28149 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c04eff7d-dbbf-4174-8d0e-71046963aca5-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "c04eff7d-dbbf-4174-8d0e-71046963aca5" (UID: "c04eff7d-dbbf-4174-8d0e-71046963aca5"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 13 13:09:25.049250 master-0 kubenswrapper[28149]: I0313 13:09:25.049155 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"256399c4-c376-4836-9483-76a46694994a","Type":"ContainerStarted","Data":"ccee778a91e552c6743a71dd4cbc907e83a300a485191305e3a9fdfafe2b2e08"} Mar 13 13:09:25.051117 master-0 kubenswrapper[28149]: I0313 13:09:25.050948 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-586dbdbb8c-7z4lg" event={"ID":"52e53cd9-c831-4aa8-ae1c-5912efb14c13","Type":"ContainerStarted","Data":"7a8cf01980f37d8c0c62d9d800f89a4d15d4a004104f7dfff41937618741b9fe"} Mar 13 13:09:25.051117 master-0 kubenswrapper[28149]: I0313 13:09:25.051104 28149 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-586dbdbb8c-7z4lg" Mar 13 13:09:25.052192 master-0 kubenswrapper[28149]: I0313 13:09:25.052155 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-m2gpt" event={"ID":"c13ed93f-4fcf-43b8-92ef-f479c5a4af68","Type":"ContainerStarted","Data":"9c5b44f1dd6f28bc746cb12da931d30e093b4a374bcd49c0f77681bce358ad3a"} Mar 13 13:09:25.053474 
master-0 kubenswrapper[28149]: I0313 13:09:25.053439 28149 generic.go:334] "Generic (PLEG): container finished" podID="2c02a249-c9cd-4660-b69b-cf03f864c992" containerID="7184cc60811d8467e409a47e8f61f248d7937bdada81fb65d3364f6b462a62ea" exitCode=0 Mar 13 13:09:25.053565 master-0 kubenswrapper[28149]: I0313 13:09:25.053533 28149 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-685c76cf85-q6j75" Mar 13 13:09:25.053614 master-0 kubenswrapper[28149]: I0313 13:09:25.053541 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-685c76cf85-q6j75" event={"ID":"2c02a249-c9cd-4660-b69b-cf03f864c992","Type":"ContainerDied","Data":"7184cc60811d8467e409a47e8f61f248d7937bdada81fb65d3364f6b462a62ea"} Mar 13 13:09:25.053646 master-0 kubenswrapper[28149]: I0313 13:09:25.053619 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-685c76cf85-q6j75" event={"ID":"2c02a249-c9cd-4660-b69b-cf03f864c992","Type":"ContainerDied","Data":"004a3f69de1691c71f4bf8470fbe3704b02e780ab5fd9e3278638adfc59b19a7"} Mar 13 13:09:25.077483 master-0 kubenswrapper[28149]: I0313 13:09:25.063385 28149 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2c02a249-c9cd-4660-b69b-cf03f864c992-config" (OuterVolumeSpecName: "config") pod "2c02a249-c9cd-4660-b69b-cf03f864c992" (UID: "2c02a249-c9cd-4660-b69b-cf03f864c992"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 13 13:09:25.077483 master-0 kubenswrapper[28149]: I0313 13:09:25.070086 28149 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-6ff8fd9d5c-gztb5" podStartSLOduration=3.902023712 podStartE2EDuration="35.070062196s" podCreationTimestamp="2026-03-13 13:08:50 +0000 UTC" firstStartedPulling="2026-03-13 13:08:52.180196666 +0000 UTC m=+905.833661835" lastFinishedPulling="2026-03-13 13:09:23.34823516 +0000 UTC m=+937.001700319" observedRunningTime="2026-03-13 13:09:25.058852445 +0000 UTC m=+938.712317604" watchObservedRunningTime="2026-03-13 13:09:25.070062196 +0000 UTC m=+938.723527365" Mar 13 13:09:25.081201 master-0 kubenswrapper[28149]: I0313 13:09:25.080976 28149 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c04eff7d-dbbf-4174-8d0e-71046963aca5-config" (OuterVolumeSpecName: "config") pod "c04eff7d-dbbf-4174-8d0e-71046963aca5" (UID: "c04eff7d-dbbf-4174-8d0e-71046963aca5"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 13 13:09:25.089848 master-0 kubenswrapper[28149]: I0313 13:09:25.089762 28149 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-586dbdbb8c-7z4lg" podStartSLOduration=4.116087878 podStartE2EDuration="36.089738875s" podCreationTimestamp="2026-03-13 13:08:49 +0000 UTC" firstStartedPulling="2026-03-13 13:08:51.247978043 +0000 UTC m=+904.901443202" lastFinishedPulling="2026-03-13 13:09:23.22162904 +0000 UTC m=+936.875094199" observedRunningTime="2026-03-13 13:09:25.086759425 +0000 UTC m=+938.740224584" watchObservedRunningTime="2026-03-13 13:09:25.089738875 +0000 UTC m=+938.743204024" Mar 13 13:09:25.114165 master-0 kubenswrapper[28149]: I0313 13:09:25.114079 28149 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f65c5\" (UniqueName: \"kubernetes.io/projected/2c02a249-c9cd-4660-b69b-cf03f864c992-kube-api-access-f65c5\") on node \"master-0\" DevicePath \"\"" Mar 13 13:09:25.114312 master-0 kubenswrapper[28149]: I0313 13:09:25.114199 28149 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c04eff7d-dbbf-4174-8d0e-71046963aca5-config\") on node \"master-0\" DevicePath \"\"" Mar 13 13:09:25.114312 master-0 kubenswrapper[28149]: I0313 13:09:25.114217 28149 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c04eff7d-dbbf-4174-8d0e-71046963aca5-dns-svc\") on node \"master-0\" DevicePath \"\"" Mar 13 13:09:25.114312 master-0 kubenswrapper[28149]: I0313 13:09:25.114229 28149 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2c02a249-c9cd-4660-b69b-cf03f864c992-config\") on node \"master-0\" DevicePath \"\"" Mar 13 13:09:25.114312 master-0 kubenswrapper[28149]: I0313 13:09:25.114242 28149 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mwpbm\" (UniqueName: 
\"kubernetes.io/projected/c04eff7d-dbbf-4174-8d0e-71046963aca5-kube-api-access-mwpbm\") on node \"master-0\" DevicePath \"\"" Mar 13 13:09:25.130323 master-0 kubenswrapper[28149]: I0313 13:09:25.130230 28149 scope.go:117] "RemoveContainer" containerID="7184cc60811d8467e409a47e8f61f248d7937bdada81fb65d3364f6b462a62ea" Mar 13 13:09:25.169071 master-0 kubenswrapper[28149]: I0313 13:09:25.169013 28149 scope.go:117] "RemoveContainer" containerID="7184cc60811d8467e409a47e8f61f248d7937bdada81fb65d3364f6b462a62ea" Mar 13 13:09:25.169669 master-0 kubenswrapper[28149]: E0313 13:09:25.169627 28149 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7184cc60811d8467e409a47e8f61f248d7937bdada81fb65d3364f6b462a62ea\": container with ID starting with 7184cc60811d8467e409a47e8f61f248d7937bdada81fb65d3364f6b462a62ea not found: ID does not exist" containerID="7184cc60811d8467e409a47e8f61f248d7937bdada81fb65d3364f6b462a62ea" Mar 13 13:09:25.169749 master-0 kubenswrapper[28149]: I0313 13:09:25.169686 28149 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7184cc60811d8467e409a47e8f61f248d7937bdada81fb65d3364f6b462a62ea"} err="failed to get container status \"7184cc60811d8467e409a47e8f61f248d7937bdada81fb65d3364f6b462a62ea\": rpc error: code = NotFound desc = could not find container \"7184cc60811d8467e409a47e8f61f248d7937bdada81fb65d3364f6b462a62ea\": container with ID starting with 7184cc60811d8467e409a47e8f61f248d7937bdada81fb65d3364f6b462a62ea not found: ID does not exist" Mar 13 13:09:25.429396 master-0 kubenswrapper[28149]: I0313 13:09:25.429341 28149 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-685c76cf85-q6j75"] Mar 13 13:09:25.452618 master-0 kubenswrapper[28149]: I0313 13:09:25.452405 28149 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-685c76cf85-q6j75"] Mar 13 13:09:25.492537 master-0 
kubenswrapper[28149]: I0313 13:09:25.490927 28149 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-8476fd89bc-ljn2q"] Mar 13 13:09:25.521204 master-0 kubenswrapper[28149]: I0313 13:09:25.521117 28149 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-8476fd89bc-ljn2q"] Mar 13 13:09:26.722110 master-0 kubenswrapper[28149]: I0313 13:09:26.721993 28149 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2c02a249-c9cd-4660-b69b-cf03f864c992" path="/var/lib/kubelet/pods/2c02a249-c9cd-4660-b69b-cf03f864c992/volumes" Mar 13 13:09:26.723695 master-0 kubenswrapper[28149]: I0313 13:09:26.723656 28149 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c04eff7d-dbbf-4174-8d0e-71046963aca5" path="/var/lib/kubelet/pods/c04eff7d-dbbf-4174-8d0e-71046963aca5/volumes" Mar 13 13:09:29.296024 master-0 kubenswrapper[28149]: I0313 13:09:29.295939 28149 generic.go:334] "Generic (PLEG): container finished" podID="ca5ac169-5cb8-4eba-8b4a-f54ecdcdd5c4" containerID="0fe2ae1bdeb6eb0a16d9d63e09433b442bba4e56f35273943909f235211ebbef" exitCode=0 Mar 13 13:09:29.296688 master-0 kubenswrapper[28149]: I0313 13:09:29.296021 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"ca5ac169-5cb8-4eba-8b4a-f54ecdcdd5c4","Type":"ContainerDied","Data":"0fe2ae1bdeb6eb0a16d9d63e09433b442bba4e56f35273943909f235211ebbef"} Mar 13 13:09:29.299973 master-0 kubenswrapper[28149]: I0313 13:09:29.299926 28149 generic.go:334] "Generic (PLEG): container finished" podID="e95954c2-39c2-46f5-8f22-580ba2880939" containerID="0320b22b748045322e24589424897a63dcf309589c46a89ce0b500b988200389" exitCode=0 Mar 13 13:09:29.299973 master-0 kubenswrapper[28149]: I0313 13:09:29.299971 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" 
event={"ID":"e95954c2-39c2-46f5-8f22-580ba2880939","Type":"ContainerDied","Data":"0320b22b748045322e24589424897a63dcf309589c46a89ce0b500b988200389"} Mar 13 13:09:29.443465 master-0 kubenswrapper[28149]: I0313 13:09:29.443399 28149 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/memcached-0" Mar 13 13:09:30.477523 master-0 kubenswrapper[28149]: I0313 13:09:30.476635 28149 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-586dbdbb8c-7z4lg" Mar 13 13:09:31.278032 master-0 kubenswrapper[28149]: I0313 13:09:31.277966 28149 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-6ff8fd9d5c-gztb5" Mar 13 13:09:31.343361 master-0 kubenswrapper[28149]: I0313 13:09:31.343297 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"e95954c2-39c2-46f5-8f22-580ba2880939","Type":"ContainerStarted","Data":"6e15a4bc8903e3356d6fd9e0b524ba99a5436809d7e0cdb548f3c52f01515617"} Mar 13 13:09:31.345564 master-0 kubenswrapper[28149]: I0313 13:09:31.345532 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"166fd514-febd-49e6-8e22-bc0faaafc25b","Type":"ContainerStarted","Data":"788ea29662fd6095a4461f47563539fe2ee71886a0f50c949f6fd2a5ba7ce9d6"} Mar 13 13:09:31.347720 master-0 kubenswrapper[28149]: I0313 13:09:31.347678 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"256399c4-c376-4836-9483-76a46694994a","Type":"ContainerStarted","Data":"47bb211703228b48a44f9299f28ba06399de9230dc3c276d0e1003dddad2b1f4"} Mar 13 13:09:31.350411 master-0 kubenswrapper[28149]: I0313 13:09:31.350386 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-5qn6j" event={"ID":"4e18d218-b3e4-49a7-b8c0-5e27c6f4e4e2","Type":"ContainerStarted","Data":"1a8e1fd4fcfe0de9de6082d7042208691d8e69a361f41e03f2418d4de098c726"} Mar 
13 13:09:31.351098 master-0 kubenswrapper[28149]: I0313 13:09:31.351055 28149 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-5qn6j" Mar 13 13:09:31.353662 master-0 kubenswrapper[28149]: I0313 13:09:31.353622 28149 generic.go:334] "Generic (PLEG): container finished" podID="c13ed93f-4fcf-43b8-92ef-f479c5a4af68" containerID="5101cf2d75de14e50c7d2877e48853154d771a03289036caee3c8bd6a4a19d6c" exitCode=0 Mar 13 13:09:31.353759 master-0 kubenswrapper[28149]: I0313 13:09:31.353683 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-m2gpt" event={"ID":"c13ed93f-4fcf-43b8-92ef-f479c5a4af68","Type":"ContainerDied","Data":"5101cf2d75de14e50c7d2877e48853154d771a03289036caee3c8bd6a4a19d6c"} Mar 13 13:09:31.435031 master-0 kubenswrapper[28149]: I0313 13:09:31.432298 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"668a51dd-c5b3-4531-b707-39a00bfb5eef","Type":"ContainerStarted","Data":"3537431fdd0e9fbf22bdf2fce2db899caa7332b4f0726076d9cc41f488fe8ee8"} Mar 13 13:09:31.439168 master-0 kubenswrapper[28149]: I0313 13:09:31.437698 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"f417d425-c062-40de-a92b-17afe412cfe9","Type":"ContainerStarted","Data":"8526904de7e21ece4a5d704b8892d7888dd6da85962ab4fee729b25f5076d9e0"} Mar 13 13:09:31.441017 master-0 kubenswrapper[28149]: I0313 13:09:31.440873 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"ca5ac169-5cb8-4eba-8b4a-f54ecdcdd5c4","Type":"ContainerStarted","Data":"dd8205106a7519ba0de46f79cbe693798aeedd9367340dd740f8e36949a57a25"} Mar 13 13:09:31.453335 master-0 kubenswrapper[28149]: I0313 13:09:31.453277 28149 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-586dbdbb8c-7z4lg"] Mar 13 13:09:31.453629 master-0 kubenswrapper[28149]: I0313 13:09:31.453583 28149 
kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-586dbdbb8c-7z4lg" podUID="52e53cd9-c831-4aa8-ae1c-5912efb14c13" containerName="dnsmasq-dns" containerID="cri-o://7a8cf01980f37d8c0c62d9d800f89a4d15d4a004104f7dfff41937618741b9fe" gracePeriod=10 Mar 13 13:09:31.473603 master-0 kubenswrapper[28149]: I0313 13:09:31.473491 28149 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-galera-0" podStartSLOduration=18.886576209 podStartE2EDuration="40.473436477s" podCreationTimestamp="2026-03-13 13:08:51 +0000 UTC" firstStartedPulling="2026-03-13 13:09:01.634388211 +0000 UTC m=+915.287853370" lastFinishedPulling="2026-03-13 13:09:23.221248479 +0000 UTC m=+936.874713638" observedRunningTime="2026-03-13 13:09:31.424658177 +0000 UTC m=+945.078123356" watchObservedRunningTime="2026-03-13 13:09:31.473436477 +0000 UTC m=+945.126901636" Mar 13 13:09:31.602130 master-0 kubenswrapper[28149]: I0313 13:09:31.602055 28149 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-5qn6j" podStartSLOduration=23.503007312 podStartE2EDuration="29.60203671s" podCreationTimestamp="2026-03-13 13:09:02 +0000 UTC" firstStartedPulling="2026-03-13 13:09:23.808098489 +0000 UTC m=+937.461563648" lastFinishedPulling="2026-03-13 13:09:29.907127887 +0000 UTC m=+943.560593046" observedRunningTime="2026-03-13 13:09:31.53088986 +0000 UTC m=+945.184355029" watchObservedRunningTime="2026-03-13 13:09:31.60203671 +0000 UTC m=+945.255501859" Mar 13 13:09:31.618514 master-0 kubenswrapper[28149]: I0313 13:09:31.618434 28149 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-cell1-galera-0" podStartSLOduration=23.055141134 podStartE2EDuration="38.6184146s" podCreationTimestamp="2026-03-13 13:08:53 +0000 UTC" firstStartedPulling="2026-03-13 13:09:07.660500422 +0000 UTC m=+921.313965581" lastFinishedPulling="2026-03-13 13:09:23.223773888 +0000 UTC 
m=+936.877239047" observedRunningTime="2026-03-13 13:09:31.608013971 +0000 UTC m=+945.261479150" watchObservedRunningTime="2026-03-13 13:09:31.6184146 +0000 UTC m=+945.271879759" Mar 13 13:09:31.776022 master-0 kubenswrapper[28149]: I0313 13:09:31.775854 28149 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-cell1-galera-0" Mar 13 13:09:31.776022 master-0 kubenswrapper[28149]: I0313 13:09:31.775917 28149 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-cell1-galera-0" Mar 13 13:09:32.151712 master-0 kubenswrapper[28149]: I0313 13:09:32.151667 28149 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-586dbdbb8c-7z4lg" Mar 13 13:09:32.265197 master-0 kubenswrapper[28149]: I0313 13:09:32.265099 28149 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/52e53cd9-c831-4aa8-ae1c-5912efb14c13-config\") pod \"52e53cd9-c831-4aa8-ae1c-5912efb14c13\" (UID: \"52e53cd9-c831-4aa8-ae1c-5912efb14c13\") " Mar 13 13:09:32.265447 master-0 kubenswrapper[28149]: I0313 13:09:32.265282 28149 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g6cwt\" (UniqueName: \"kubernetes.io/projected/52e53cd9-c831-4aa8-ae1c-5912efb14c13-kube-api-access-g6cwt\") pod \"52e53cd9-c831-4aa8-ae1c-5912efb14c13\" (UID: \"52e53cd9-c831-4aa8-ae1c-5912efb14c13\") " Mar 13 13:09:32.265447 master-0 kubenswrapper[28149]: I0313 13:09:32.265335 28149 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/52e53cd9-c831-4aa8-ae1c-5912efb14c13-dns-svc\") pod \"52e53cd9-c831-4aa8-ae1c-5912efb14c13\" (UID: \"52e53cd9-c831-4aa8-ae1c-5912efb14c13\") " Mar 13 13:09:32.272736 master-0 kubenswrapper[28149]: I0313 13:09:32.271890 28149 operation_generator.go:803] UnmountVolume.TearDown succeeded for 
volume "kubernetes.io/projected/52e53cd9-c831-4aa8-ae1c-5912efb14c13-kube-api-access-g6cwt" (OuterVolumeSpecName: "kube-api-access-g6cwt") pod "52e53cd9-c831-4aa8-ae1c-5912efb14c13" (UID: "52e53cd9-c831-4aa8-ae1c-5912efb14c13"). InnerVolumeSpecName "kube-api-access-g6cwt". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 13:09:32.320414 master-0 kubenswrapper[28149]: I0313 13:09:32.320195 28149 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/52e53cd9-c831-4aa8-ae1c-5912efb14c13-config" (OuterVolumeSpecName: "config") pod "52e53cd9-c831-4aa8-ae1c-5912efb14c13" (UID: "52e53cd9-c831-4aa8-ae1c-5912efb14c13"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 13 13:09:32.320414 master-0 kubenswrapper[28149]: I0313 13:09:32.320231 28149 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/52e53cd9-c831-4aa8-ae1c-5912efb14c13-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "52e53cd9-c831-4aa8-ae1c-5912efb14c13" (UID: "52e53cd9-c831-4aa8-ae1c-5912efb14c13"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 13 13:09:32.371506 master-0 kubenswrapper[28149]: I0313 13:09:32.371355 28149 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/52e53cd9-c831-4aa8-ae1c-5912efb14c13-config\") on node \"master-0\" DevicePath \"\"" Mar 13 13:09:32.371506 master-0 kubenswrapper[28149]: I0313 13:09:32.371399 28149 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g6cwt\" (UniqueName: \"kubernetes.io/projected/52e53cd9-c831-4aa8-ae1c-5912efb14c13-kube-api-access-g6cwt\") on node \"master-0\" DevicePath \"\"" Mar 13 13:09:32.371506 master-0 kubenswrapper[28149]: I0313 13:09:32.371410 28149 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/52e53cd9-c831-4aa8-ae1c-5912efb14c13-dns-svc\") on node \"master-0\" DevicePath \"\"" Mar 13 13:09:32.452096 master-0 kubenswrapper[28149]: I0313 13:09:32.452032 28149 generic.go:334] "Generic (PLEG): container finished" podID="52e53cd9-c831-4aa8-ae1c-5912efb14c13" containerID="7a8cf01980f37d8c0c62d9d800f89a4d15d4a004104f7dfff41937618741b9fe" exitCode=0 Mar 13 13:09:32.452441 master-0 kubenswrapper[28149]: I0313 13:09:32.452218 28149 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-586dbdbb8c-7z4lg" Mar 13 13:09:32.452441 master-0 kubenswrapper[28149]: I0313 13:09:32.452223 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-586dbdbb8c-7z4lg" event={"ID":"52e53cd9-c831-4aa8-ae1c-5912efb14c13","Type":"ContainerDied","Data":"7a8cf01980f37d8c0c62d9d800f89a4d15d4a004104f7dfff41937618741b9fe"} Mar 13 13:09:32.452441 master-0 kubenswrapper[28149]: I0313 13:09:32.452299 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-586dbdbb8c-7z4lg" event={"ID":"52e53cd9-c831-4aa8-ae1c-5912efb14c13","Type":"ContainerDied","Data":"13be907302fd34469bb21369545575fcbc5cddc79bb44a9e67486c6042872824"} Mar 13 13:09:32.452441 master-0 kubenswrapper[28149]: I0313 13:09:32.452328 28149 scope.go:117] "RemoveContainer" containerID="7a8cf01980f37d8c0c62d9d800f89a4d15d4a004104f7dfff41937618741b9fe" Mar 13 13:09:32.457058 master-0 kubenswrapper[28149]: I0313 13:09:32.457026 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-m2gpt" event={"ID":"c13ed93f-4fcf-43b8-92ef-f479c5a4af68","Type":"ContainerStarted","Data":"ce67b60654c99235da5d7095356065b9a23b25fd45be974ea04aeb023e106aa4"} Mar 13 13:09:32.474741 master-0 kubenswrapper[28149]: I0313 13:09:32.474039 28149 scope.go:117] "RemoveContainer" containerID="0fb8952012e4c8ddf708c6c1f48b964758311e2d3f623ca5a82a0cdd96320024" Mar 13 13:09:32.501006 master-0 kubenswrapper[28149]: I0313 13:09:32.500891 28149 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-586dbdbb8c-7z4lg"] Mar 13 13:09:32.509970 master-0 kubenswrapper[28149]: I0313 13:09:32.509909 28149 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-586dbdbb8c-7z4lg"] Mar 13 13:09:32.530570 master-0 kubenswrapper[28149]: I0313 13:09:32.530495 28149 scope.go:117] "RemoveContainer" containerID="7a8cf01980f37d8c0c62d9d800f89a4d15d4a004104f7dfff41937618741b9fe" Mar 13 
13:09:32.531159 master-0 kubenswrapper[28149]: E0313 13:09:32.531057 28149 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7a8cf01980f37d8c0c62d9d800f89a4d15d4a004104f7dfff41937618741b9fe\": container with ID starting with 7a8cf01980f37d8c0c62d9d800f89a4d15d4a004104f7dfff41937618741b9fe not found: ID does not exist" containerID="7a8cf01980f37d8c0c62d9d800f89a4d15d4a004104f7dfff41937618741b9fe" Mar 13 13:09:32.531159 master-0 kubenswrapper[28149]: I0313 13:09:32.531088 28149 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7a8cf01980f37d8c0c62d9d800f89a4d15d4a004104f7dfff41937618741b9fe"} err="failed to get container status \"7a8cf01980f37d8c0c62d9d800f89a4d15d4a004104f7dfff41937618741b9fe\": rpc error: code = NotFound desc = could not find container \"7a8cf01980f37d8c0c62d9d800f89a4d15d4a004104f7dfff41937618741b9fe\": container with ID starting with 7a8cf01980f37d8c0c62d9d800f89a4d15d4a004104f7dfff41937618741b9fe not found: ID does not exist" Mar 13 13:09:32.531159 master-0 kubenswrapper[28149]: I0313 13:09:32.531119 28149 scope.go:117] "RemoveContainer" containerID="0fb8952012e4c8ddf708c6c1f48b964758311e2d3f623ca5a82a0cdd96320024" Mar 13 13:09:32.531418 master-0 kubenswrapper[28149]: E0313 13:09:32.531383 28149 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0fb8952012e4c8ddf708c6c1f48b964758311e2d3f623ca5a82a0cdd96320024\": container with ID starting with 0fb8952012e4c8ddf708c6c1f48b964758311e2d3f623ca5a82a0cdd96320024 not found: ID does not exist" containerID="0fb8952012e4c8ddf708c6c1f48b964758311e2d3f623ca5a82a0cdd96320024" Mar 13 13:09:32.531418 master-0 kubenswrapper[28149]: I0313 13:09:32.531407 28149 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0fb8952012e4c8ddf708c6c1f48b964758311e2d3f623ca5a82a0cdd96320024"} 
err="failed to get container status \"0fb8952012e4c8ddf708c6c1f48b964758311e2d3f623ca5a82a0cdd96320024\": rpc error: code = NotFound desc = could not find container \"0fb8952012e4c8ddf708c6c1f48b964758311e2d3f623ca5a82a0cdd96320024\": container with ID starting with 0fb8952012e4c8ddf708c6c1f48b964758311e2d3f623ca5a82a0cdd96320024 not found: ID does not exist" Mar 13 13:09:32.705614 master-0 kubenswrapper[28149]: I0313 13:09:32.705445 28149 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="52e53cd9-c831-4aa8-ae1c-5912efb14c13" path="/var/lib/kubelet/pods/52e53cd9-c831-4aa8-ae1c-5912efb14c13/volumes" Mar 13 13:09:33.477407 master-0 kubenswrapper[28149]: I0313 13:09:33.477357 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-m2gpt" event={"ID":"c13ed93f-4fcf-43b8-92ef-f479c5a4af68","Type":"ContainerStarted","Data":"192c54e35203a7031137bcd8811100c16b3f6bff0e3bf13d9ec5151fc5d00219"} Mar 13 13:09:33.477684 master-0 kubenswrapper[28149]: I0313 13:09:33.477426 28149 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-m2gpt" Mar 13 13:09:33.477684 master-0 kubenswrapper[28149]: I0313 13:09:33.477449 28149 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-m2gpt" Mar 13 13:09:33.526381 master-0 kubenswrapper[28149]: I0313 13:09:33.521105 28149 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-ovs-m2gpt" podStartSLOduration=26.263967373 podStartE2EDuration="31.521083443s" podCreationTimestamp="2026-03-13 13:09:02 +0000 UTC" firstStartedPulling="2026-03-13 13:09:24.649937545 +0000 UTC m=+938.303402704" lastFinishedPulling="2026-03-13 13:09:29.907053615 +0000 UTC m=+943.560518774" observedRunningTime="2026-03-13 13:09:33.512344658 +0000 UTC m=+947.165809857" watchObservedRunningTime="2026-03-13 13:09:33.521083443 +0000 UTC m=+947.174548612" Mar 13 13:09:35.576459 master-0 
kubenswrapper[28149]: I0313 13:09:35.576397 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"166fd514-febd-49e6-8e22-bc0faaafc25b","Type":"ContainerStarted","Data":"16c7447db91c6e6ba078bba31452522fff4f48bd9806898ea2999847d9dd33c2"} Mar 13 13:09:35.578350 master-0 kubenswrapper[28149]: I0313 13:09:35.578269 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"256399c4-c376-4836-9483-76a46694994a","Type":"ContainerStarted","Data":"a56152815f573df50b30f1d118ca4ef4ab03fe57a637eb8f1b29d4c0c9085a17"} Mar 13 13:09:35.584940 master-0 kubenswrapper[28149]: I0313 13:09:35.584880 28149 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-nb-0" Mar 13 13:09:35.617696 master-0 kubenswrapper[28149]: I0313 13:09:35.615545 28149 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-sb-0" podStartSLOduration=22.527733814 podStartE2EDuration="33.615520526s" podCreationTimestamp="2026-03-13 13:09:02 +0000 UTC" firstStartedPulling="2026-03-13 13:09:24.091382056 +0000 UTC m=+937.744847215" lastFinishedPulling="2026-03-13 13:09:35.179168768 +0000 UTC m=+948.832633927" observedRunningTime="2026-03-13 13:09:35.600581444 +0000 UTC m=+949.254046623" watchObservedRunningTime="2026-03-13 13:09:35.615520526 +0000 UTC m=+949.268985685" Mar 13 13:09:35.632109 master-0 kubenswrapper[28149]: I0313 13:09:35.632002 28149 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-nb-0" podStartSLOduration=22.666040387 podStartE2EDuration="33.631973367s" podCreationTimestamp="2026-03-13 13:09:02 +0000 UTC" firstStartedPulling="2026-03-13 13:09:24.217965374 +0000 UTC m=+937.871430533" lastFinishedPulling="2026-03-13 13:09:35.183898344 +0000 UTC m=+948.837363513" observedRunningTime="2026-03-13 13:09:35.62684705 +0000 UTC m=+949.280312219" watchObservedRunningTime="2026-03-13 
13:09:35.631973367 +0000 UTC m=+949.285438526" Mar 13 13:09:35.635201 master-0 kubenswrapper[28149]: I0313 13:09:35.635160 28149 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-nb-0" Mar 13 13:09:36.591108 master-0 kubenswrapper[28149]: I0313 13:09:36.591017 28149 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-nb-0" Mar 13 13:09:36.635293 master-0 kubenswrapper[28149]: I0313 13:09:36.635216 28149 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-nb-0" Mar 13 13:09:37.112173 master-0 kubenswrapper[28149]: I0313 13:09:37.111619 28149 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5db7b98cb5-nxnx5"] Mar 13 13:09:37.112484 master-0 kubenswrapper[28149]: E0313 13:09:37.112283 28149 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="52e53cd9-c831-4aa8-ae1c-5912efb14c13" containerName="init" Mar 13 13:09:37.112484 master-0 kubenswrapper[28149]: I0313 13:09:37.112323 28149 state_mem.go:107] "Deleted CPUSet assignment" podUID="52e53cd9-c831-4aa8-ae1c-5912efb14c13" containerName="init" Mar 13 13:09:37.112484 master-0 kubenswrapper[28149]: E0313 13:09:37.112371 28149 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c04eff7d-dbbf-4174-8d0e-71046963aca5" containerName="init" Mar 13 13:09:37.112484 master-0 kubenswrapper[28149]: I0313 13:09:37.112377 28149 state_mem.go:107] "Deleted CPUSet assignment" podUID="c04eff7d-dbbf-4174-8d0e-71046963aca5" containerName="init" Mar 13 13:09:37.112484 master-0 kubenswrapper[28149]: E0313 13:09:37.112408 28149 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2c02a249-c9cd-4660-b69b-cf03f864c992" containerName="init" Mar 13 13:09:37.112484 master-0 kubenswrapper[28149]: I0313 13:09:37.112414 28149 state_mem.go:107] "Deleted CPUSet assignment" podUID="2c02a249-c9cd-4660-b69b-cf03f864c992" containerName="init" Mar 13 
13:09:37.112484 master-0 kubenswrapper[28149]: E0313 13:09:37.112439 28149 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="52e53cd9-c831-4aa8-ae1c-5912efb14c13" containerName="dnsmasq-dns" Mar 13 13:09:37.112484 master-0 kubenswrapper[28149]: I0313 13:09:37.112447 28149 state_mem.go:107] "Deleted CPUSet assignment" podUID="52e53cd9-c831-4aa8-ae1c-5912efb14c13" containerName="dnsmasq-dns" Mar 13 13:09:37.112856 master-0 kubenswrapper[28149]: I0313 13:09:37.112754 28149 memory_manager.go:354] "RemoveStaleState removing state" podUID="c04eff7d-dbbf-4174-8d0e-71046963aca5" containerName="init" Mar 13 13:09:37.112856 master-0 kubenswrapper[28149]: I0313 13:09:37.112805 28149 memory_manager.go:354] "RemoveStaleState removing state" podUID="2c02a249-c9cd-4660-b69b-cf03f864c992" containerName="init" Mar 13 13:09:37.112856 master-0 kubenswrapper[28149]: I0313 13:09:37.112823 28149 memory_manager.go:354] "RemoveStaleState removing state" podUID="52e53cd9-c831-4aa8-ae1c-5912efb14c13" containerName="dnsmasq-dns" Mar 13 13:09:37.126197 master-0 kubenswrapper[28149]: I0313 13:09:37.122131 28149 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5db7b98cb5-nxnx5" Mar 13 13:09:37.147162 master-0 kubenswrapper[28149]: I0313 13:09:37.146512 28149 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-nb" Mar 13 13:09:37.173741 master-0 kubenswrapper[28149]: I0313 13:09:37.172455 28149 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5db7b98cb5-nxnx5"] Mar 13 13:09:37.251161 master-0 kubenswrapper[28149]: I0313 13:09:37.248350 28149 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-metrics-xkpnp"] Mar 13 13:09:37.259204 master-0 kubenswrapper[28149]: I0313 13:09:37.251479 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b5ab072a-a7f1-4c41-96f9-1d1cb1a09595-dns-svc\") pod \"dnsmasq-dns-5db7b98cb5-nxnx5\" (UID: \"b5ab072a-a7f1-4c41-96f9-1d1cb1a09595\") " pod="openstack/dnsmasq-dns-5db7b98cb5-nxnx5" Mar 13 13:09:37.259204 master-0 kubenswrapper[28149]: I0313 13:09:37.251653 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b5ab072a-a7f1-4c41-96f9-1d1cb1a09595-config\") pod \"dnsmasq-dns-5db7b98cb5-nxnx5\" (UID: \"b5ab072a-a7f1-4c41-96f9-1d1cb1a09595\") " pod="openstack/dnsmasq-dns-5db7b98cb5-nxnx5" Mar 13 13:09:37.259204 master-0 kubenswrapper[28149]: I0313 13:09:37.251960 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b5ab072a-a7f1-4c41-96f9-1d1cb1a09595-ovsdbserver-nb\") pod \"dnsmasq-dns-5db7b98cb5-nxnx5\" (UID: \"b5ab072a-a7f1-4c41-96f9-1d1cb1a09595\") " pod="openstack/dnsmasq-dns-5db7b98cb5-nxnx5" Mar 13 13:09:37.259204 master-0 kubenswrapper[28149]: I0313 13:09:37.252003 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"kube-api-access-zcmlq\" (UniqueName: \"kubernetes.io/projected/b5ab072a-a7f1-4c41-96f9-1d1cb1a09595-kube-api-access-zcmlq\") pod \"dnsmasq-dns-5db7b98cb5-nxnx5\" (UID: \"b5ab072a-a7f1-4c41-96f9-1d1cb1a09595\") " pod="openstack/dnsmasq-dns-5db7b98cb5-nxnx5" Mar 13 13:09:37.285729 master-0 kubenswrapper[28149]: I0313 13:09:37.282674 28149 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-metrics-xkpnp" Mar 13 13:09:37.291497 master-0 kubenswrapper[28149]: I0313 13:09:37.291396 28149 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-metrics-config" Mar 13 13:09:37.362322 master-0 kubenswrapper[28149]: I0313 13:09:37.360247 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b5ab072a-a7f1-4c41-96f9-1d1cb1a09595-dns-svc\") pod \"dnsmasq-dns-5db7b98cb5-nxnx5\" (UID: \"b5ab072a-a7f1-4c41-96f9-1d1cb1a09595\") " pod="openstack/dnsmasq-dns-5db7b98cb5-nxnx5" Mar 13 13:09:37.362322 master-0 kubenswrapper[28149]: I0313 13:09:37.360311 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b5ab072a-a7f1-4c41-96f9-1d1cb1a09595-config\") pod \"dnsmasq-dns-5db7b98cb5-nxnx5\" (UID: \"b5ab072a-a7f1-4c41-96f9-1d1cb1a09595\") " pod="openstack/dnsmasq-dns-5db7b98cb5-nxnx5" Mar 13 13:09:37.362322 master-0 kubenswrapper[28149]: I0313 13:09:37.360362 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dcddb66a-87bd-4787-987b-1259aa3d45e9-config\") pod \"ovn-controller-metrics-xkpnp\" (UID: \"dcddb66a-87bd-4787-987b-1259aa3d45e9\") " pod="openstack/ovn-controller-metrics-xkpnp" Mar 13 13:09:37.362322 master-0 kubenswrapper[28149]: I0313 13:09:37.360411 28149 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qvd8l\" (UniqueName: \"kubernetes.io/projected/dcddb66a-87bd-4787-987b-1259aa3d45e9-kube-api-access-qvd8l\") pod \"ovn-controller-metrics-xkpnp\" (UID: \"dcddb66a-87bd-4787-987b-1259aa3d45e9\") " pod="openstack/ovn-controller-metrics-xkpnp" Mar 13 13:09:37.362322 master-0 kubenswrapper[28149]: I0313 13:09:37.360438 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/dcddb66a-87bd-4787-987b-1259aa3d45e9-ovs-rundir\") pod \"ovn-controller-metrics-xkpnp\" (UID: \"dcddb66a-87bd-4787-987b-1259aa3d45e9\") " pod="openstack/ovn-controller-metrics-xkpnp" Mar 13 13:09:37.362322 master-0 kubenswrapper[28149]: I0313 13:09:37.360482 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/dcddb66a-87bd-4787-987b-1259aa3d45e9-ovn-rundir\") pod \"ovn-controller-metrics-xkpnp\" (UID: \"dcddb66a-87bd-4787-987b-1259aa3d45e9\") " pod="openstack/ovn-controller-metrics-xkpnp" Mar 13 13:09:37.362322 master-0 kubenswrapper[28149]: I0313 13:09:37.360508 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dcddb66a-87bd-4787-987b-1259aa3d45e9-combined-ca-bundle\") pod \"ovn-controller-metrics-xkpnp\" (UID: \"dcddb66a-87bd-4787-987b-1259aa3d45e9\") " pod="openstack/ovn-controller-metrics-xkpnp" Mar 13 13:09:37.362322 master-0 kubenswrapper[28149]: I0313 13:09:37.360541 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b5ab072a-a7f1-4c41-96f9-1d1cb1a09595-ovsdbserver-nb\") pod \"dnsmasq-dns-5db7b98cb5-nxnx5\" (UID: \"b5ab072a-a7f1-4c41-96f9-1d1cb1a09595\") " pod="openstack/dnsmasq-dns-5db7b98cb5-nxnx5" Mar 13 
13:09:37.362322 master-0 kubenswrapper[28149]: I0313 13:09:37.360563 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zcmlq\" (UniqueName: \"kubernetes.io/projected/b5ab072a-a7f1-4c41-96f9-1d1cb1a09595-kube-api-access-zcmlq\") pod \"dnsmasq-dns-5db7b98cb5-nxnx5\" (UID: \"b5ab072a-a7f1-4c41-96f9-1d1cb1a09595\") " pod="openstack/dnsmasq-dns-5db7b98cb5-nxnx5" Mar 13 13:09:37.362322 master-0 kubenswrapper[28149]: I0313 13:09:37.360614 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/dcddb66a-87bd-4787-987b-1259aa3d45e9-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-xkpnp\" (UID: \"dcddb66a-87bd-4787-987b-1259aa3d45e9\") " pod="openstack/ovn-controller-metrics-xkpnp" Mar 13 13:09:37.362322 master-0 kubenswrapper[28149]: I0313 13:09:37.361525 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b5ab072a-a7f1-4c41-96f9-1d1cb1a09595-dns-svc\") pod \"dnsmasq-dns-5db7b98cb5-nxnx5\" (UID: \"b5ab072a-a7f1-4c41-96f9-1d1cb1a09595\") " pod="openstack/dnsmasq-dns-5db7b98cb5-nxnx5" Mar 13 13:09:37.362322 master-0 kubenswrapper[28149]: I0313 13:09:37.362153 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b5ab072a-a7f1-4c41-96f9-1d1cb1a09595-config\") pod \"dnsmasq-dns-5db7b98cb5-nxnx5\" (UID: \"b5ab072a-a7f1-4c41-96f9-1d1cb1a09595\") " pod="openstack/dnsmasq-dns-5db7b98cb5-nxnx5" Mar 13 13:09:37.367221 master-0 kubenswrapper[28149]: I0313 13:09:37.364112 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b5ab072a-a7f1-4c41-96f9-1d1cb1a09595-ovsdbserver-nb\") pod \"dnsmasq-dns-5db7b98cb5-nxnx5\" (UID: \"b5ab072a-a7f1-4c41-96f9-1d1cb1a09595\") " 
pod="openstack/dnsmasq-dns-5db7b98cb5-nxnx5" Mar 13 13:09:37.372865 master-0 kubenswrapper[28149]: I0313 13:09:37.372067 28149 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-xkpnp"] Mar 13 13:09:37.561254 master-0 kubenswrapper[28149]: I0313 13:09:37.389533 28149 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5db7b98cb5-nxnx5"] Mar 13 13:09:37.562334 master-0 kubenswrapper[28149]: I0313 13:09:37.561620 28149 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-sb-0" Mar 13 13:09:37.572164 master-0 kubenswrapper[28149]: E0313 13:09:37.568664 28149 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[kube-api-access-zcmlq], unattached volumes=[], failed to process volumes=[]: context canceled" pod="openstack/dnsmasq-dns-5db7b98cb5-nxnx5" podUID="b5ab072a-a7f1-4c41-96f9-1d1cb1a09595" Mar 13 13:09:37.572164 master-0 kubenswrapper[28149]: I0313 13:09:37.569530 28149 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-sb-0" Mar 13 13:09:37.572164 master-0 kubenswrapper[28149]: I0313 13:09:37.570109 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/dcddb66a-87bd-4787-987b-1259aa3d45e9-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-xkpnp\" (UID: \"dcddb66a-87bd-4787-987b-1259aa3d45e9\") " pod="openstack/ovn-controller-metrics-xkpnp" Mar 13 13:09:37.572164 master-0 kubenswrapper[28149]: I0313 13:09:37.570224 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dcddb66a-87bd-4787-987b-1259aa3d45e9-config\") pod \"ovn-controller-metrics-xkpnp\" (UID: \"dcddb66a-87bd-4787-987b-1259aa3d45e9\") " pod="openstack/ovn-controller-metrics-xkpnp" Mar 13 13:09:37.572164 master-0 kubenswrapper[28149]: I0313 13:09:37.570273 28149 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qvd8l\" (UniqueName: \"kubernetes.io/projected/dcddb66a-87bd-4787-987b-1259aa3d45e9-kube-api-access-qvd8l\") pod \"ovn-controller-metrics-xkpnp\" (UID: \"dcddb66a-87bd-4787-987b-1259aa3d45e9\") " pod="openstack/ovn-controller-metrics-xkpnp" Mar 13 13:09:37.572164 master-0 kubenswrapper[28149]: I0313 13:09:37.570299 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/dcddb66a-87bd-4787-987b-1259aa3d45e9-ovs-rundir\") pod \"ovn-controller-metrics-xkpnp\" (UID: \"dcddb66a-87bd-4787-987b-1259aa3d45e9\") " pod="openstack/ovn-controller-metrics-xkpnp" Mar 13 13:09:37.572164 master-0 kubenswrapper[28149]: I0313 13:09:37.570337 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/dcddb66a-87bd-4787-987b-1259aa3d45e9-ovn-rundir\") pod \"ovn-controller-metrics-xkpnp\" (UID: \"dcddb66a-87bd-4787-987b-1259aa3d45e9\") " pod="openstack/ovn-controller-metrics-xkpnp" Mar 13 13:09:37.572164 master-0 kubenswrapper[28149]: I0313 13:09:37.570361 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dcddb66a-87bd-4787-987b-1259aa3d45e9-combined-ca-bundle\") pod \"ovn-controller-metrics-xkpnp\" (UID: \"dcddb66a-87bd-4787-987b-1259aa3d45e9\") " pod="openstack/ovn-controller-metrics-xkpnp" Mar 13 13:09:37.573296 master-0 kubenswrapper[28149]: I0313 13:09:37.573247 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/dcddb66a-87bd-4787-987b-1259aa3d45e9-ovs-rundir\") pod \"ovn-controller-metrics-xkpnp\" (UID: \"dcddb66a-87bd-4787-987b-1259aa3d45e9\") " pod="openstack/ovn-controller-metrics-xkpnp" Mar 13 13:09:37.573650 master-0 kubenswrapper[28149]: I0313 13:09:37.573616 
28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dcddb66a-87bd-4787-987b-1259aa3d45e9-combined-ca-bundle\") pod \"ovn-controller-metrics-xkpnp\" (UID: \"dcddb66a-87bd-4787-987b-1259aa3d45e9\") " pod="openstack/ovn-controller-metrics-xkpnp" Mar 13 13:09:37.573736 master-0 kubenswrapper[28149]: I0313 13:09:37.573724 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/dcddb66a-87bd-4787-987b-1259aa3d45e9-ovn-rundir\") pod \"ovn-controller-metrics-xkpnp\" (UID: \"dcddb66a-87bd-4787-987b-1259aa3d45e9\") " pod="openstack/ovn-controller-metrics-xkpnp" Mar 13 13:09:37.574315 master-0 kubenswrapper[28149]: I0313 13:09:37.574288 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dcddb66a-87bd-4787-987b-1259aa3d45e9-config\") pod \"ovn-controller-metrics-xkpnp\" (UID: \"dcddb66a-87bd-4787-987b-1259aa3d45e9\") " pod="openstack/ovn-controller-metrics-xkpnp" Mar 13 13:09:37.583902 master-0 kubenswrapper[28149]: I0313 13:09:37.579953 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/dcddb66a-87bd-4787-987b-1259aa3d45e9-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-xkpnp\" (UID: \"dcddb66a-87bd-4787-987b-1259aa3d45e9\") " pod="openstack/ovn-controller-metrics-xkpnp" Mar 13 13:09:37.599161 master-0 kubenswrapper[28149]: I0313 13:09:37.597265 28149 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-745897cbdc-ncpns"] Mar 13 13:09:37.619057 master-0 kubenswrapper[28149]: I0313 13:09:37.619006 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qvd8l\" (UniqueName: \"kubernetes.io/projected/dcddb66a-87bd-4787-987b-1259aa3d45e9-kube-api-access-qvd8l\") pod \"ovn-controller-metrics-xkpnp\" (UID: 
\"dcddb66a-87bd-4787-987b-1259aa3d45e9\") " pod="openstack/ovn-controller-metrics-xkpnp" Mar 13 13:09:37.619778 master-0 kubenswrapper[28149]: I0313 13:09:37.619740 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zcmlq\" (UniqueName: \"kubernetes.io/projected/b5ab072a-a7f1-4c41-96f9-1d1cb1a09595-kube-api-access-zcmlq\") pod \"dnsmasq-dns-5db7b98cb5-nxnx5\" (UID: \"b5ab072a-a7f1-4c41-96f9-1d1cb1a09595\") " pod="openstack/dnsmasq-dns-5db7b98cb5-nxnx5" Mar 13 13:09:37.645169 master-0 kubenswrapper[28149]: I0313 13:09:37.643452 28149 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5db7b98cb5-nxnx5" Mar 13 13:09:37.664233 master-0 kubenswrapper[28149]: I0313 13:09:37.663296 28149 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-sb-0" Mar 13 13:09:37.664233 master-0 kubenswrapper[28149]: I0313 13:09:37.663349 28149 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-745897cbdc-ncpns"] Mar 13 13:09:37.664233 master-0 kubenswrapper[28149]: I0313 13:09:37.663476 28149 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-745897cbdc-ncpns" Mar 13 13:09:37.674063 master-0 kubenswrapper[28149]: I0313 13:09:37.670878 28149 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-metrics-xkpnp" Mar 13 13:09:37.674063 master-0 kubenswrapper[28149]: I0313 13:09:37.672452 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/727369ba-25ef-4b46-a4f3-f7e9775b3057-ovsdbserver-nb\") pod \"dnsmasq-dns-745897cbdc-ncpns\" (UID: \"727369ba-25ef-4b46-a4f3-f7e9775b3057\") " pod="openstack/dnsmasq-dns-745897cbdc-ncpns" Mar 13 13:09:37.674063 master-0 kubenswrapper[28149]: I0313 13:09:37.672514 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-64kxc\" (UniqueName: \"kubernetes.io/projected/727369ba-25ef-4b46-a4f3-f7e9775b3057-kube-api-access-64kxc\") pod \"dnsmasq-dns-745897cbdc-ncpns\" (UID: \"727369ba-25ef-4b46-a4f3-f7e9775b3057\") " pod="openstack/dnsmasq-dns-745897cbdc-ncpns" Mar 13 13:09:37.674063 master-0 kubenswrapper[28149]: I0313 13:09:37.672583 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/727369ba-25ef-4b46-a4f3-f7e9775b3057-config\") pod \"dnsmasq-dns-745897cbdc-ncpns\" (UID: \"727369ba-25ef-4b46-a4f3-f7e9775b3057\") " pod="openstack/dnsmasq-dns-745897cbdc-ncpns" Mar 13 13:09:37.674063 master-0 kubenswrapper[28149]: I0313 13:09:37.672612 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/727369ba-25ef-4b46-a4f3-f7e9775b3057-dns-svc\") pod \"dnsmasq-dns-745897cbdc-ncpns\" (UID: \"727369ba-25ef-4b46-a4f3-f7e9775b3057\") " pod="openstack/dnsmasq-dns-745897cbdc-ncpns" Mar 13 13:09:37.706837 master-0 kubenswrapper[28149]: I0313 13:09:37.703130 28149 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-sb-0" Mar 13 13:09:37.743473 master-0 kubenswrapper[28149]: I0313 
13:09:37.743428 28149 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5db7b98cb5-nxnx5" Mar 13 13:09:37.777524 master-0 kubenswrapper[28149]: I0313 13:09:37.777053 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/727369ba-25ef-4b46-a4f3-f7e9775b3057-ovsdbserver-nb\") pod \"dnsmasq-dns-745897cbdc-ncpns\" (UID: \"727369ba-25ef-4b46-a4f3-f7e9775b3057\") " pod="openstack/dnsmasq-dns-745897cbdc-ncpns" Mar 13 13:09:37.777524 master-0 kubenswrapper[28149]: I0313 13:09:37.777240 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-64kxc\" (UniqueName: \"kubernetes.io/projected/727369ba-25ef-4b46-a4f3-f7e9775b3057-kube-api-access-64kxc\") pod \"dnsmasq-dns-745897cbdc-ncpns\" (UID: \"727369ba-25ef-4b46-a4f3-f7e9775b3057\") " pod="openstack/dnsmasq-dns-745897cbdc-ncpns" Mar 13 13:09:37.777524 master-0 kubenswrapper[28149]: I0313 13:09:37.777289 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/727369ba-25ef-4b46-a4f3-f7e9775b3057-dns-svc\") pod \"dnsmasq-dns-745897cbdc-ncpns\" (UID: \"727369ba-25ef-4b46-a4f3-f7e9775b3057\") " pod="openstack/dnsmasq-dns-745897cbdc-ncpns" Mar 13 13:09:37.777524 master-0 kubenswrapper[28149]: I0313 13:09:37.777346 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/727369ba-25ef-4b46-a4f3-f7e9775b3057-config\") pod \"dnsmasq-dns-745897cbdc-ncpns\" (UID: \"727369ba-25ef-4b46-a4f3-f7e9775b3057\") " pod="openstack/dnsmasq-dns-745897cbdc-ncpns" Mar 13 13:09:37.778789 master-0 kubenswrapper[28149]: I0313 13:09:37.778742 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/727369ba-25ef-4b46-a4f3-f7e9775b3057-ovsdbserver-nb\") 
pod \"dnsmasq-dns-745897cbdc-ncpns\" (UID: \"727369ba-25ef-4b46-a4f3-f7e9775b3057\") " pod="openstack/dnsmasq-dns-745897cbdc-ncpns" Mar 13 13:09:37.779236 master-0 kubenswrapper[28149]: I0313 13:09:37.779193 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/727369ba-25ef-4b46-a4f3-f7e9775b3057-dns-svc\") pod \"dnsmasq-dns-745897cbdc-ncpns\" (UID: \"727369ba-25ef-4b46-a4f3-f7e9775b3057\") " pod="openstack/dnsmasq-dns-745897cbdc-ncpns" Mar 13 13:09:37.779868 master-0 kubenswrapper[28149]: I0313 13:09:37.779836 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/727369ba-25ef-4b46-a4f3-f7e9775b3057-config\") pod \"dnsmasq-dns-745897cbdc-ncpns\" (UID: \"727369ba-25ef-4b46-a4f3-f7e9775b3057\") " pod="openstack/dnsmasq-dns-745897cbdc-ncpns" Mar 13 13:09:37.814475 master-0 kubenswrapper[28149]: I0313 13:09:37.814341 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-64kxc\" (UniqueName: \"kubernetes.io/projected/727369ba-25ef-4b46-a4f3-f7e9775b3057-kube-api-access-64kxc\") pod \"dnsmasq-dns-745897cbdc-ncpns\" (UID: \"727369ba-25ef-4b46-a4f3-f7e9775b3057\") " pod="openstack/dnsmasq-dns-745897cbdc-ncpns" Mar 13 13:09:37.850893 master-0 kubenswrapper[28149]: I0313 13:09:37.850568 28149 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-745897cbdc-ncpns" Mar 13 13:09:37.886160 master-0 kubenswrapper[28149]: I0313 13:09:37.881462 28149 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b5ab072a-a7f1-4c41-96f9-1d1cb1a09595-config\") pod \"b5ab072a-a7f1-4c41-96f9-1d1cb1a09595\" (UID: \"b5ab072a-a7f1-4c41-96f9-1d1cb1a09595\") " Mar 13 13:09:37.886160 master-0 kubenswrapper[28149]: I0313 13:09:37.881654 28149 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b5ab072a-a7f1-4c41-96f9-1d1cb1a09595-dns-svc\") pod \"b5ab072a-a7f1-4c41-96f9-1d1cb1a09595\" (UID: \"b5ab072a-a7f1-4c41-96f9-1d1cb1a09595\") " Mar 13 13:09:37.886160 master-0 kubenswrapper[28149]: I0313 13:09:37.881749 28149 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b5ab072a-a7f1-4c41-96f9-1d1cb1a09595-ovsdbserver-nb\") pod \"b5ab072a-a7f1-4c41-96f9-1d1cb1a09595\" (UID: \"b5ab072a-a7f1-4c41-96f9-1d1cb1a09595\") " Mar 13 13:09:37.886160 master-0 kubenswrapper[28149]: I0313 13:09:37.881946 28149 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zcmlq\" (UniqueName: \"kubernetes.io/projected/b5ab072a-a7f1-4c41-96f9-1d1cb1a09595-kube-api-access-zcmlq\") pod \"b5ab072a-a7f1-4c41-96f9-1d1cb1a09595\" (UID: \"b5ab072a-a7f1-4c41-96f9-1d1cb1a09595\") " Mar 13 13:09:37.886160 master-0 kubenswrapper[28149]: I0313 13:09:37.883490 28149 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b5ab072a-a7f1-4c41-96f9-1d1cb1a09595-config" (OuterVolumeSpecName: "config") pod "b5ab072a-a7f1-4c41-96f9-1d1cb1a09595" (UID: "b5ab072a-a7f1-4c41-96f9-1d1cb1a09595"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 13 13:09:37.886160 master-0 kubenswrapper[28149]: I0313 13:09:37.883625 28149 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b5ab072a-a7f1-4c41-96f9-1d1cb1a09595-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "b5ab072a-a7f1-4c41-96f9-1d1cb1a09595" (UID: "b5ab072a-a7f1-4c41-96f9-1d1cb1a09595"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 13 13:09:37.886160 master-0 kubenswrapper[28149]: I0313 13:09:37.884082 28149 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b5ab072a-a7f1-4c41-96f9-1d1cb1a09595-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "b5ab072a-a7f1-4c41-96f9-1d1cb1a09595" (UID: "b5ab072a-a7f1-4c41-96f9-1d1cb1a09595"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 13 13:09:37.898605 master-0 kubenswrapper[28149]: I0313 13:09:37.887974 28149 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b5ab072a-a7f1-4c41-96f9-1d1cb1a09595-kube-api-access-zcmlq" (OuterVolumeSpecName: "kube-api-access-zcmlq") pod "b5ab072a-a7f1-4c41-96f9-1d1cb1a09595" (UID: "b5ab072a-a7f1-4c41-96f9-1d1cb1a09595"). InnerVolumeSpecName "kube-api-access-zcmlq". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 13:09:37.987024 master-0 kubenswrapper[28149]: I0313 13:09:37.986964 28149 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b5ab072a-a7f1-4c41-96f9-1d1cb1a09595-config\") on node \"master-0\" DevicePath \"\"" Mar 13 13:09:37.987292 master-0 kubenswrapper[28149]: I0313 13:09:37.987279 28149 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b5ab072a-a7f1-4c41-96f9-1d1cb1a09595-dns-svc\") on node \"master-0\" DevicePath \"\"" Mar 13 13:09:37.987370 master-0 kubenswrapper[28149]: I0313 13:09:37.987355 28149 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b5ab072a-a7f1-4c41-96f9-1d1cb1a09595-ovsdbserver-nb\") on node \"master-0\" DevicePath \"\"" Mar 13 13:09:37.987454 master-0 kubenswrapper[28149]: I0313 13:09:37.987444 28149 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zcmlq\" (UniqueName: \"kubernetes.io/projected/b5ab072a-a7f1-4c41-96f9-1d1cb1a09595-kube-api-access-zcmlq\") on node \"master-0\" DevicePath \"\"" Mar 13 13:09:38.104247 master-0 kubenswrapper[28149]: I0313 13:09:38.104187 28149 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-745897cbdc-ncpns"] Mar 13 13:09:38.151186 master-0 kubenswrapper[28149]: I0313 13:09:38.149345 28149 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-northd-0"] Mar 13 13:09:38.151186 master-0 kubenswrapper[28149]: I0313 13:09:38.151182 28149 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-northd-0" Mar 13 13:09:38.156937 master-0 kubenswrapper[28149]: I0313 13:09:38.156891 28149 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-config" Mar 13 13:09:38.158279 master-0 kubenswrapper[28149]: I0313 13:09:38.156929 28149 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovnnorthd-ovndbs" Mar 13 13:09:38.158556 master-0 kubenswrapper[28149]: I0313 13:09:38.156988 28149 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-scripts" Mar 13 13:09:38.180383 master-0 kubenswrapper[28149]: I0313 13:09:38.177944 28149 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Mar 13 13:09:38.269615 master-0 kubenswrapper[28149]: I0313 13:09:38.255257 28149 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5b8649b7f9-xtxz5"] Mar 13 13:09:38.269615 master-0 kubenswrapper[28149]: I0313 13:09:38.261375 28149 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5b8649b7f9-xtxz5" Mar 13 13:09:38.269615 master-0 kubenswrapper[28149]: I0313 13:09:38.267225 28149 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5b8649b7f9-xtxz5"] Mar 13 13:09:38.271757 master-0 kubenswrapper[28149]: I0313 13:09:38.271692 28149 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-sb" Mar 13 13:09:38.349391 master-0 kubenswrapper[28149]: I0313 13:09:38.343628 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/bd7dbee4-5199-4cc3-897e-64897e098c34-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"bd7dbee4-5199-4cc3-897e-64897e098c34\") " pod="openstack/ovn-northd-0" Mar 13 13:09:38.349391 master-0 kubenswrapper[28149]: I0313 13:09:38.343680 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/bd7dbee4-5199-4cc3-897e-64897e098c34-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"bd7dbee4-5199-4cc3-897e-64897e098c34\") " pod="openstack/ovn-northd-0" Mar 13 13:09:38.349391 master-0 kubenswrapper[28149]: I0313 13:09:38.343702 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bzqbb\" (UniqueName: \"kubernetes.io/projected/bd7dbee4-5199-4cc3-897e-64897e098c34-kube-api-access-bzqbb\") pod \"ovn-northd-0\" (UID: \"bd7dbee4-5199-4cc3-897e-64897e098c34\") " pod="openstack/ovn-northd-0" Mar 13 13:09:38.349391 master-0 kubenswrapper[28149]: I0313 13:09:38.343746 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bd7dbee4-5199-4cc3-897e-64897e098c34-config\") pod \"ovn-northd-0\" (UID: \"bd7dbee4-5199-4cc3-897e-64897e098c34\") " pod="openstack/ovn-northd-0" Mar 13 
13:09:38.349391 master-0 kubenswrapper[28149]: I0313 13:09:38.344093 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bd7dbee4-5199-4cc3-897e-64897e098c34-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"bd7dbee4-5199-4cc3-897e-64897e098c34\") " pod="openstack/ovn-northd-0" Mar 13 13:09:38.349391 master-0 kubenswrapper[28149]: I0313 13:09:38.344188 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/bd7dbee4-5199-4cc3-897e-64897e098c34-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"bd7dbee4-5199-4cc3-897e-64897e098c34\") " pod="openstack/ovn-northd-0" Mar 13 13:09:38.349391 master-0 kubenswrapper[28149]: I0313 13:09:38.344315 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/bd7dbee4-5199-4cc3-897e-64897e098c34-scripts\") pod \"ovn-northd-0\" (UID: \"bd7dbee4-5199-4cc3-897e-64897e098c34\") " pod="openstack/ovn-northd-0" Mar 13 13:09:38.456639 master-0 kubenswrapper[28149]: I0313 13:09:38.451499 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bd7dbee4-5199-4cc3-897e-64897e098c34-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"bd7dbee4-5199-4cc3-897e-64897e098c34\") " pod="openstack/ovn-northd-0" Mar 13 13:09:38.456639 master-0 kubenswrapper[28149]: I0313 13:09:38.451558 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/bd7dbee4-5199-4cc3-897e-64897e098c34-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"bd7dbee4-5199-4cc3-897e-64897e098c34\") " pod="openstack/ovn-northd-0" Mar 13 13:09:38.456639 master-0 kubenswrapper[28149]: I0313 
13:09:38.451619 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/51d60b57-3f2c-4aeb-83c9-e688bb0bb3c6-dns-svc\") pod \"dnsmasq-dns-5b8649b7f9-xtxz5\" (UID: \"51d60b57-3f2c-4aeb-83c9-e688bb0bb3c6\") " pod="openstack/dnsmasq-dns-5b8649b7f9-xtxz5" Mar 13 13:09:38.456639 master-0 kubenswrapper[28149]: I0313 13:09:38.451682 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/bd7dbee4-5199-4cc3-897e-64897e098c34-scripts\") pod \"ovn-northd-0\" (UID: \"bd7dbee4-5199-4cc3-897e-64897e098c34\") " pod="openstack/ovn-northd-0" Mar 13 13:09:38.456639 master-0 kubenswrapper[28149]: I0313 13:09:38.451799 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/51d60b57-3f2c-4aeb-83c9-e688bb0bb3c6-ovsdbserver-sb\") pod \"dnsmasq-dns-5b8649b7f9-xtxz5\" (UID: \"51d60b57-3f2c-4aeb-83c9-e688bb0bb3c6\") " pod="openstack/dnsmasq-dns-5b8649b7f9-xtxz5" Mar 13 13:09:38.456639 master-0 kubenswrapper[28149]: I0313 13:09:38.451826 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/bd7dbee4-5199-4cc3-897e-64897e098c34-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"bd7dbee4-5199-4cc3-897e-64897e098c34\") " pod="openstack/ovn-northd-0" Mar 13 13:09:38.456639 master-0 kubenswrapper[28149]: I0313 13:09:38.451848 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/bd7dbee4-5199-4cc3-897e-64897e098c34-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"bd7dbee4-5199-4cc3-897e-64897e098c34\") " pod="openstack/ovn-northd-0" Mar 13 13:09:38.456639 master-0 kubenswrapper[28149]: I0313 13:09:38.451866 28149 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-bzqbb\" (UniqueName: \"kubernetes.io/projected/bd7dbee4-5199-4cc3-897e-64897e098c34-kube-api-access-bzqbb\") pod \"ovn-northd-0\" (UID: \"bd7dbee4-5199-4cc3-897e-64897e098c34\") " pod="openstack/ovn-northd-0" Mar 13 13:09:38.456639 master-0 kubenswrapper[28149]: I0313 13:09:38.451886 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nv77j\" (UniqueName: \"kubernetes.io/projected/51d60b57-3f2c-4aeb-83c9-e688bb0bb3c6-kube-api-access-nv77j\") pod \"dnsmasq-dns-5b8649b7f9-xtxz5\" (UID: \"51d60b57-3f2c-4aeb-83c9-e688bb0bb3c6\") " pod="openstack/dnsmasq-dns-5b8649b7f9-xtxz5" Mar 13 13:09:38.456639 master-0 kubenswrapper[28149]: I0313 13:09:38.451929 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bd7dbee4-5199-4cc3-897e-64897e098c34-config\") pod \"ovn-northd-0\" (UID: \"bd7dbee4-5199-4cc3-897e-64897e098c34\") " pod="openstack/ovn-northd-0" Mar 13 13:09:38.456639 master-0 kubenswrapper[28149]: I0313 13:09:38.451953 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/51d60b57-3f2c-4aeb-83c9-e688bb0bb3c6-ovsdbserver-nb\") pod \"dnsmasq-dns-5b8649b7f9-xtxz5\" (UID: \"51d60b57-3f2c-4aeb-83c9-e688bb0bb3c6\") " pod="openstack/dnsmasq-dns-5b8649b7f9-xtxz5" Mar 13 13:09:38.456639 master-0 kubenswrapper[28149]: I0313 13:09:38.453291 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/51d60b57-3f2c-4aeb-83c9-e688bb0bb3c6-config\") pod \"dnsmasq-dns-5b8649b7f9-xtxz5\" (UID: \"51d60b57-3f2c-4aeb-83c9-e688bb0bb3c6\") " pod="openstack/dnsmasq-dns-5b8649b7f9-xtxz5" Mar 13 13:09:38.457387 master-0 kubenswrapper[28149]: I0313 13:09:38.457094 28149 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bd7dbee4-5199-4cc3-897e-64897e098c34-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"bd7dbee4-5199-4cc3-897e-64897e098c34\") " pod="openstack/ovn-northd-0" Mar 13 13:09:38.465027 master-0 kubenswrapper[28149]: I0313 13:09:38.464991 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/bd7dbee4-5199-4cc3-897e-64897e098c34-scripts\") pod \"ovn-northd-0\" (UID: \"bd7dbee4-5199-4cc3-897e-64897e098c34\") " pod="openstack/ovn-northd-0" Mar 13 13:09:38.465555 master-0 kubenswrapper[28149]: I0313 13:09:38.465532 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bd7dbee4-5199-4cc3-897e-64897e098c34-config\") pod \"ovn-northd-0\" (UID: \"bd7dbee4-5199-4cc3-897e-64897e098c34\") " pod="openstack/ovn-northd-0" Mar 13 13:09:38.465916 master-0 kubenswrapper[28149]: I0313 13:09:38.465836 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/bd7dbee4-5199-4cc3-897e-64897e098c34-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"bd7dbee4-5199-4cc3-897e-64897e098c34\") " pod="openstack/ovn-northd-0" Mar 13 13:09:38.466817 master-0 kubenswrapper[28149]: I0313 13:09:38.466742 28149 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-xkpnp"] Mar 13 13:09:38.475581 master-0 kubenswrapper[28149]: I0313 13:09:38.469107 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/bd7dbee4-5199-4cc3-897e-64897e098c34-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"bd7dbee4-5199-4cc3-897e-64897e098c34\") " pod="openstack/ovn-northd-0" Mar 13 13:09:38.475581 master-0 kubenswrapper[28149]: I0313 13:09:38.469203 28149 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/bd7dbee4-5199-4cc3-897e-64897e098c34-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"bd7dbee4-5199-4cc3-897e-64897e098c34\") " pod="openstack/ovn-northd-0" Mar 13 13:09:38.516821 master-0 kubenswrapper[28149]: I0313 13:09:38.516714 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bzqbb\" (UniqueName: \"kubernetes.io/projected/bd7dbee4-5199-4cc3-897e-64897e098c34-kube-api-access-bzqbb\") pod \"ovn-northd-0\" (UID: \"bd7dbee4-5199-4cc3-897e-64897e098c34\") " pod="openstack/ovn-northd-0" Mar 13 13:09:38.550913 master-0 kubenswrapper[28149]: I0313 13:09:38.550857 28149 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-northd-0" Mar 13 13:09:38.557374 master-0 kubenswrapper[28149]: I0313 13:09:38.557319 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/51d60b57-3f2c-4aeb-83c9-e688bb0bb3c6-config\") pod \"dnsmasq-dns-5b8649b7f9-xtxz5\" (UID: \"51d60b57-3f2c-4aeb-83c9-e688bb0bb3c6\") " pod="openstack/dnsmasq-dns-5b8649b7f9-xtxz5" Mar 13 13:09:38.558009 master-0 kubenswrapper[28149]: I0313 13:09:38.557985 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/51d60b57-3f2c-4aeb-83c9-e688bb0bb3c6-dns-svc\") pod \"dnsmasq-dns-5b8649b7f9-xtxz5\" (UID: \"51d60b57-3f2c-4aeb-83c9-e688bb0bb3c6\") " pod="openstack/dnsmasq-dns-5b8649b7f9-xtxz5" Mar 13 13:09:38.558247 master-0 kubenswrapper[28149]: I0313 13:09:38.558205 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/51d60b57-3f2c-4aeb-83c9-e688bb0bb3c6-ovsdbserver-sb\") pod \"dnsmasq-dns-5b8649b7f9-xtxz5\" (UID: \"51d60b57-3f2c-4aeb-83c9-e688bb0bb3c6\") " 
pod="openstack/dnsmasq-dns-5b8649b7f9-xtxz5" Mar 13 13:09:38.558448 master-0 kubenswrapper[28149]: I0313 13:09:38.558428 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nv77j\" (UniqueName: \"kubernetes.io/projected/51d60b57-3f2c-4aeb-83c9-e688bb0bb3c6-kube-api-access-nv77j\") pod \"dnsmasq-dns-5b8649b7f9-xtxz5\" (UID: \"51d60b57-3f2c-4aeb-83c9-e688bb0bb3c6\") " pod="openstack/dnsmasq-dns-5b8649b7f9-xtxz5" Mar 13 13:09:38.558612 master-0 kubenswrapper[28149]: I0313 13:09:38.558594 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/51d60b57-3f2c-4aeb-83c9-e688bb0bb3c6-ovsdbserver-nb\") pod \"dnsmasq-dns-5b8649b7f9-xtxz5\" (UID: \"51d60b57-3f2c-4aeb-83c9-e688bb0bb3c6\") " pod="openstack/dnsmasq-dns-5b8649b7f9-xtxz5" Mar 13 13:09:38.559990 master-0 kubenswrapper[28149]: I0313 13:09:38.559853 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/51d60b57-3f2c-4aeb-83c9-e688bb0bb3c6-ovsdbserver-nb\") pod \"dnsmasq-dns-5b8649b7f9-xtxz5\" (UID: \"51d60b57-3f2c-4aeb-83c9-e688bb0bb3c6\") " pod="openstack/dnsmasq-dns-5b8649b7f9-xtxz5" Mar 13 13:09:38.572643 master-0 kubenswrapper[28149]: I0313 13:09:38.564094 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/51d60b57-3f2c-4aeb-83c9-e688bb0bb3c6-config\") pod \"dnsmasq-dns-5b8649b7f9-xtxz5\" (UID: \"51d60b57-3f2c-4aeb-83c9-e688bb0bb3c6\") " pod="openstack/dnsmasq-dns-5b8649b7f9-xtxz5" Mar 13 13:09:38.572643 master-0 kubenswrapper[28149]: I0313 13:09:38.564422 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/51d60b57-3f2c-4aeb-83c9-e688bb0bb3c6-ovsdbserver-sb\") pod \"dnsmasq-dns-5b8649b7f9-xtxz5\" (UID: \"51d60b57-3f2c-4aeb-83c9-e688bb0bb3c6\") " 
pod="openstack/dnsmasq-dns-5b8649b7f9-xtxz5" Mar 13 13:09:38.580197 master-0 kubenswrapper[28149]: I0313 13:09:38.580065 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/51d60b57-3f2c-4aeb-83c9-e688bb0bb3c6-dns-svc\") pod \"dnsmasq-dns-5b8649b7f9-xtxz5\" (UID: \"51d60b57-3f2c-4aeb-83c9-e688bb0bb3c6\") " pod="openstack/dnsmasq-dns-5b8649b7f9-xtxz5" Mar 13 13:09:38.588950 master-0 kubenswrapper[28149]: I0313 13:09:38.588889 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nv77j\" (UniqueName: \"kubernetes.io/projected/51d60b57-3f2c-4aeb-83c9-e688bb0bb3c6-kube-api-access-nv77j\") pod \"dnsmasq-dns-5b8649b7f9-xtxz5\" (UID: \"51d60b57-3f2c-4aeb-83c9-e688bb0bb3c6\") " pod="openstack/dnsmasq-dns-5b8649b7f9-xtxz5" Mar 13 13:09:38.654270 master-0 kubenswrapper[28149]: I0313 13:09:38.654006 28149 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5b8649b7f9-xtxz5" Mar 13 13:09:38.667891 master-0 kubenswrapper[28149]: I0313 13:09:38.667845 28149 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-745897cbdc-ncpns"] Mar 13 13:09:38.672233 master-0 kubenswrapper[28149]: I0313 13:09:38.671560 28149 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5db7b98cb5-nxnx5" Mar 13 13:09:38.672233 master-0 kubenswrapper[28149]: I0313 13:09:38.672055 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-xkpnp" event={"ID":"dcddb66a-87bd-4787-987b-1259aa3d45e9","Type":"ContainerStarted","Data":"3156cbeb4fffd4388eb2bb9aeb8c3988558a6fe1caac40a16f98e0ab6ed3fb01"} Mar 13 13:09:38.933473 master-0 kubenswrapper[28149]: I0313 13:09:38.933410 28149 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5db7b98cb5-nxnx5"] Mar 13 13:09:38.943675 master-0 kubenswrapper[28149]: I0313 13:09:38.943614 28149 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5db7b98cb5-nxnx5"] Mar 13 13:09:39.214918 master-0 kubenswrapper[28149]: I0313 13:09:39.214808 28149 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Mar 13 13:09:39.235533 master-0 kubenswrapper[28149]: W0313 13:09:39.235067 28149 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbd7dbee4_5199_4cc3_897e_64897e098c34.slice/crio-b2766ea7e7eadea7c690be8273650ebcd1bfb4991eca0b69eaefcf8f8175a30d WatchSource:0}: Error finding container b2766ea7e7eadea7c690be8273650ebcd1bfb4991eca0b69eaefcf8f8175a30d: Status 404 returned error can't find the container with id b2766ea7e7eadea7c690be8273650ebcd1bfb4991eca0b69eaefcf8f8175a30d Mar 13 13:09:39.398424 master-0 kubenswrapper[28149]: I0313 13:09:39.397544 28149 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5b8649b7f9-xtxz5"] Mar 13 13:09:39.635454 master-0 kubenswrapper[28149]: I0313 13:09:39.635347 28149 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-storage-0"] Mar 13 13:09:39.655170 master-0 kubenswrapper[28149]: I0313 13:09:39.655024 28149 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-storage-0" Mar 13 13:09:39.666088 master-0 kubenswrapper[28149]: I0313 13:09:39.658273 28149 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-storage-config-data" Mar 13 13:09:39.666088 master-0 kubenswrapper[28149]: I0313 13:09:39.658464 28149 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-files" Mar 13 13:09:39.666088 master-0 kubenswrapper[28149]: I0313 13:09:39.658648 28149 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-conf" Mar 13 13:09:39.671954 master-0 kubenswrapper[28149]: I0313 13:09:39.671886 28149 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"] Mar 13 13:09:39.723444 master-0 kubenswrapper[28149]: I0313 13:09:39.723388 28149 generic.go:334] "Generic (PLEG): container finished" podID="727369ba-25ef-4b46-a4f3-f7e9775b3057" containerID="716c683d35f9d6f57c41f027dee48398f3df17e9a1db29608946a4ad0faa3a81" exitCode=0 Mar 13 13:09:39.724120 master-0 kubenswrapper[28149]: I0313 13:09:39.723469 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-745897cbdc-ncpns" event={"ID":"727369ba-25ef-4b46-a4f3-f7e9775b3057","Type":"ContainerDied","Data":"716c683d35f9d6f57c41f027dee48398f3df17e9a1db29608946a4ad0faa3a81"} Mar 13 13:09:39.724120 master-0 kubenswrapper[28149]: I0313 13:09:39.723500 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-745897cbdc-ncpns" event={"ID":"727369ba-25ef-4b46-a4f3-f7e9775b3057","Type":"ContainerStarted","Data":"4e158fc470bcbac1b4cb56e0e3134fcfec6c043cadaeed164a9325d9703e28fe"} Mar 13 13:09:39.740409 master-0 kubenswrapper[28149]: I0313 13:09:39.740325 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/0e1ffcf0-0cdc-4a69-884c-47edbe0caf50-lock\") pod \"swift-storage-0\" (UID: 
\"0e1ffcf0-0cdc-4a69-884c-47edbe0caf50\") " pod="openstack/swift-storage-0" Mar 13 13:09:39.740682 master-0 kubenswrapper[28149]: I0313 13:09:39.740566 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0e1ffcf0-0cdc-4a69-884c-47edbe0caf50-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"0e1ffcf0-0cdc-4a69-884c-47edbe0caf50\") " pod="openstack/swift-storage-0" Mar 13 13:09:39.740682 master-0 kubenswrapper[28149]: I0313 13:09:39.740619 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p9z22\" (UniqueName: \"kubernetes.io/projected/0e1ffcf0-0cdc-4a69-884c-47edbe0caf50-kube-api-access-p9z22\") pod \"swift-storage-0\" (UID: \"0e1ffcf0-0cdc-4a69-884c-47edbe0caf50\") " pod="openstack/swift-storage-0" Mar 13 13:09:39.740770 master-0 kubenswrapper[28149]: I0313 13:09:39.740706 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/0e1ffcf0-0cdc-4a69-884c-47edbe0caf50-cache\") pod \"swift-storage-0\" (UID: \"0e1ffcf0-0cdc-4a69-884c-47edbe0caf50\") " pod="openstack/swift-storage-0" Mar 13 13:09:39.740836 master-0 kubenswrapper[28149]: I0313 13:09:39.740776 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-77fa2847-6b44-4e76-b53c-20530e0d5762\" (UniqueName: \"kubernetes.io/csi/topolvm.io^94e8267b-2dfa-472b-8e63-158d6d42564c\") pod \"swift-storage-0\" (UID: \"0e1ffcf0-0cdc-4a69-884c-47edbe0caf50\") " pod="openstack/swift-storage-0" Mar 13 13:09:39.740836 master-0 kubenswrapper[28149]: I0313 13:09:39.740803 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/0e1ffcf0-0cdc-4a69-884c-47edbe0caf50-etc-swift\") pod \"swift-storage-0\" (UID: 
\"0e1ffcf0-0cdc-4a69-884c-47edbe0caf50\") " pod="openstack/swift-storage-0" Mar 13 13:09:39.744361 master-0 kubenswrapper[28149]: I0313 13:09:39.744311 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-xkpnp" event={"ID":"dcddb66a-87bd-4787-987b-1259aa3d45e9","Type":"ContainerStarted","Data":"045aa27169661e31e993d2d0a605c9e88d54c1fbadf2d8bb6e4ef05bd0a1cf3a"} Mar 13 13:09:39.745461 master-0 kubenswrapper[28149]: I0313 13:09:39.745423 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"bd7dbee4-5199-4cc3-897e-64897e098c34","Type":"ContainerStarted","Data":"b2766ea7e7eadea7c690be8273650ebcd1bfb4991eca0b69eaefcf8f8175a30d"} Mar 13 13:09:39.747711 master-0 kubenswrapper[28149]: I0313 13:09:39.747670 28149 generic.go:334] "Generic (PLEG): container finished" podID="51d60b57-3f2c-4aeb-83c9-e688bb0bb3c6" containerID="a6ab5ac5fdb36195691e3ca7c0ab96dc11ffd0f231c79551a11396dd516e0620" exitCode=0 Mar 13 13:09:39.747852 master-0 kubenswrapper[28149]: I0313 13:09:39.747824 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b8649b7f9-xtxz5" event={"ID":"51d60b57-3f2c-4aeb-83c9-e688bb0bb3c6","Type":"ContainerDied","Data":"a6ab5ac5fdb36195691e3ca7c0ab96dc11ffd0f231c79551a11396dd516e0620"} Mar 13 13:09:39.747908 master-0 kubenswrapper[28149]: I0313 13:09:39.747882 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b8649b7f9-xtxz5" event={"ID":"51d60b57-3f2c-4aeb-83c9-e688bb0bb3c6","Type":"ContainerStarted","Data":"3e9812eaeeff642a692615495d6058bf4bbc4b86eaad967aa5872454af7101ad"} Mar 13 13:09:39.846163 master-0 kubenswrapper[28149]: I0313 13:09:39.844664 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/0e1ffcf0-0cdc-4a69-884c-47edbe0caf50-cache\") pod \"swift-storage-0\" (UID: \"0e1ffcf0-0cdc-4a69-884c-47edbe0caf50\") " 
pod="openstack/swift-storage-0" Mar 13 13:09:39.846405 master-0 kubenswrapper[28149]: I0313 13:09:39.846288 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-77fa2847-6b44-4e76-b53c-20530e0d5762\" (UniqueName: \"kubernetes.io/csi/topolvm.io^94e8267b-2dfa-472b-8e63-158d6d42564c\") pod \"swift-storage-0\" (UID: \"0e1ffcf0-0cdc-4a69-884c-47edbe0caf50\") " pod="openstack/swift-storage-0" Mar 13 13:09:39.846405 master-0 kubenswrapper[28149]: I0313 13:09:39.846356 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/0e1ffcf0-0cdc-4a69-884c-47edbe0caf50-etc-swift\") pod \"swift-storage-0\" (UID: \"0e1ffcf0-0cdc-4a69-884c-47edbe0caf50\") " pod="openstack/swift-storage-0" Mar 13 13:09:39.846684 master-0 kubenswrapper[28149]: I0313 13:09:39.846542 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/0e1ffcf0-0cdc-4a69-884c-47edbe0caf50-lock\") pod \"swift-storage-0\" (UID: \"0e1ffcf0-0cdc-4a69-884c-47edbe0caf50\") " pod="openstack/swift-storage-0" Mar 13 13:09:39.847267 master-0 kubenswrapper[28149]: I0313 13:09:39.847051 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0e1ffcf0-0cdc-4a69-884c-47edbe0caf50-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"0e1ffcf0-0cdc-4a69-884c-47edbe0caf50\") " pod="openstack/swift-storage-0" Mar 13 13:09:39.847670 master-0 kubenswrapper[28149]: I0313 13:09:39.845337 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/0e1ffcf0-0cdc-4a69-884c-47edbe0caf50-cache\") pod \"swift-storage-0\" (UID: \"0e1ffcf0-0cdc-4a69-884c-47edbe0caf50\") " pod="openstack/swift-storage-0" Mar 13 13:09:39.849553 master-0 kubenswrapper[28149]: I0313 13:09:39.849482 28149 pod_startup_latency_tracker.go:104] 
"Observed pod startup duration" pod="openstack/ovn-controller-metrics-xkpnp" podStartSLOduration=2.84945561 podStartE2EDuration="2.84945561s" podCreationTimestamp="2026-03-13 13:09:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 13:09:39.818715825 +0000 UTC m=+953.472180994" watchObservedRunningTime="2026-03-13 13:09:39.84945561 +0000 UTC m=+953.502920769" Mar 13 13:09:39.853268 master-0 kubenswrapper[28149]: I0313 13:09:39.853228 28149 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Mar 13 13:09:39.853404 master-0 kubenswrapper[28149]: I0313 13:09:39.853282 28149 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-77fa2847-6b44-4e76-b53c-20530e0d5762\" (UniqueName: \"kubernetes.io/csi/topolvm.io^94e8267b-2dfa-472b-8e63-158d6d42564c\") pod \"swift-storage-0\" (UID: \"0e1ffcf0-0cdc-4a69-884c-47edbe0caf50\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/topolvm.io/560f956b82e6dcfa8eab3defc6471aa8e90e8eea19782d90c0b0e37bd7c3c76b/globalmount\"" pod="openstack/swift-storage-0" Mar 13 13:09:39.870869 master-0 kubenswrapper[28149]: E0313 13:09:39.865492 28149 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Mar 13 13:09:39.870869 master-0 kubenswrapper[28149]: E0313 13:09:39.865537 28149 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Mar 13 13:09:39.870869 master-0 kubenswrapper[28149]: E0313 13:09:39.865627 28149 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/0e1ffcf0-0cdc-4a69-884c-47edbe0caf50-etc-swift podName:0e1ffcf0-0cdc-4a69-884c-47edbe0caf50 nodeName:}" failed. 
No retries permitted until 2026-03-13 13:09:40.365587643 +0000 UTC m=+954.019052812 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/0e1ffcf0-0cdc-4a69-884c-47edbe0caf50-etc-swift") pod "swift-storage-0" (UID: "0e1ffcf0-0cdc-4a69-884c-47edbe0caf50") : configmap "swift-ring-files" not found Mar 13 13:09:39.870869 master-0 kubenswrapper[28149]: I0313 13:09:39.866271 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/0e1ffcf0-0cdc-4a69-884c-47edbe0caf50-lock\") pod \"swift-storage-0\" (UID: \"0e1ffcf0-0cdc-4a69-884c-47edbe0caf50\") " pod="openstack/swift-storage-0" Mar 13 13:09:39.870869 master-0 kubenswrapper[28149]: I0313 13:09:39.870291 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p9z22\" (UniqueName: \"kubernetes.io/projected/0e1ffcf0-0cdc-4a69-884c-47edbe0caf50-kube-api-access-p9z22\") pod \"swift-storage-0\" (UID: \"0e1ffcf0-0cdc-4a69-884c-47edbe0caf50\") " pod="openstack/swift-storage-0" Mar 13 13:09:39.880369 master-0 kubenswrapper[28149]: I0313 13:09:39.880162 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0e1ffcf0-0cdc-4a69-884c-47edbe0caf50-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"0e1ffcf0-0cdc-4a69-884c-47edbe0caf50\") " pod="openstack/swift-storage-0" Mar 13 13:09:39.916369 master-0 kubenswrapper[28149]: I0313 13:09:39.916258 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p9z22\" (UniqueName: \"kubernetes.io/projected/0e1ffcf0-0cdc-4a69-884c-47edbe0caf50-kube-api-access-p9z22\") pod \"swift-storage-0\" (UID: \"0e1ffcf0-0cdc-4a69-884c-47edbe0caf50\") " pod="openstack/swift-storage-0" Mar 13 13:09:40.137702 master-0 kubenswrapper[28149]: I0313 13:09:40.135157 28149 kubelet.go:2542] "SyncLoop (probe)" probe="startup" 
status="started" pod="openstack/openstack-cell1-galera-0" Mar 13 13:09:40.399164 master-0 kubenswrapper[28149]: I0313 13:09:40.397348 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/0e1ffcf0-0cdc-4a69-884c-47edbe0caf50-etc-swift\") pod \"swift-storage-0\" (UID: \"0e1ffcf0-0cdc-4a69-884c-47edbe0caf50\") " pod="openstack/swift-storage-0" Mar 13 13:09:40.399164 master-0 kubenswrapper[28149]: E0313 13:09:40.397566 28149 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Mar 13 13:09:40.399164 master-0 kubenswrapper[28149]: E0313 13:09:40.397585 28149 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Mar 13 13:09:40.399164 master-0 kubenswrapper[28149]: E0313 13:09:40.397635 28149 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/0e1ffcf0-0cdc-4a69-884c-47edbe0caf50-etc-swift podName:0e1ffcf0-0cdc-4a69-884c-47edbe0caf50 nodeName:}" failed. No retries permitted until 2026-03-13 13:09:41.39761883 +0000 UTC m=+955.051083989 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/0e1ffcf0-0cdc-4a69-884c-47edbe0caf50-etc-swift") pod "swift-storage-0" (UID: "0e1ffcf0-0cdc-4a69-884c-47edbe0caf50") : configmap "swift-ring-files" not found Mar 13 13:09:40.448159 master-0 kubenswrapper[28149]: I0313 13:09:40.447078 28149 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-745897cbdc-ncpns" Mar 13 13:09:40.498442 master-0 kubenswrapper[28149]: I0313 13:09:40.498390 28149 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/727369ba-25ef-4b46-a4f3-f7e9775b3057-config\") pod \"727369ba-25ef-4b46-a4f3-f7e9775b3057\" (UID: \"727369ba-25ef-4b46-a4f3-f7e9775b3057\") " Mar 13 13:09:40.498733 master-0 kubenswrapper[28149]: I0313 13:09:40.498654 28149 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-64kxc\" (UniqueName: \"kubernetes.io/projected/727369ba-25ef-4b46-a4f3-f7e9775b3057-kube-api-access-64kxc\") pod \"727369ba-25ef-4b46-a4f3-f7e9775b3057\" (UID: \"727369ba-25ef-4b46-a4f3-f7e9775b3057\") " Mar 13 13:09:40.498920 master-0 kubenswrapper[28149]: I0313 13:09:40.498895 28149 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/727369ba-25ef-4b46-a4f3-f7e9775b3057-dns-svc\") pod \"727369ba-25ef-4b46-a4f3-f7e9775b3057\" (UID: \"727369ba-25ef-4b46-a4f3-f7e9775b3057\") " Mar 13 13:09:40.499074 master-0 kubenswrapper[28149]: I0313 13:09:40.499050 28149 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/727369ba-25ef-4b46-a4f3-f7e9775b3057-ovsdbserver-nb\") pod \"727369ba-25ef-4b46-a4f3-f7e9775b3057\" (UID: \"727369ba-25ef-4b46-a4f3-f7e9775b3057\") " Mar 13 13:09:40.505058 master-0 kubenswrapper[28149]: I0313 13:09:40.504837 28149 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/727369ba-25ef-4b46-a4f3-f7e9775b3057-kube-api-access-64kxc" (OuterVolumeSpecName: "kube-api-access-64kxc") pod "727369ba-25ef-4b46-a4f3-f7e9775b3057" (UID: "727369ba-25ef-4b46-a4f3-f7e9775b3057"). InnerVolumeSpecName "kube-api-access-64kxc". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 13:09:40.526671 master-0 kubenswrapper[28149]: I0313 13:09:40.526606 28149 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-cell1-galera-0" Mar 13 13:09:40.530153 master-0 kubenswrapper[28149]: I0313 13:09:40.529621 28149 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/727369ba-25ef-4b46-a4f3-f7e9775b3057-config" (OuterVolumeSpecName: "config") pod "727369ba-25ef-4b46-a4f3-f7e9775b3057" (UID: "727369ba-25ef-4b46-a4f3-f7e9775b3057"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 13 13:09:40.553016 master-0 kubenswrapper[28149]: I0313 13:09:40.552864 28149 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/727369ba-25ef-4b46-a4f3-f7e9775b3057-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "727369ba-25ef-4b46-a4f3-f7e9775b3057" (UID: "727369ba-25ef-4b46-a4f3-f7e9775b3057"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 13 13:09:40.606762 master-0 kubenswrapper[28149]: I0313 13:09:40.606700 28149 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/727369ba-25ef-4b46-a4f3-f7e9775b3057-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "727369ba-25ef-4b46-a4f3-f7e9775b3057" (UID: "727369ba-25ef-4b46-a4f3-f7e9775b3057"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 13 13:09:40.608863 master-0 kubenswrapper[28149]: I0313 13:09:40.608677 28149 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/727369ba-25ef-4b46-a4f3-f7e9775b3057-dns-svc\") on node \"master-0\" DevicePath \"\"" Mar 13 13:09:40.608863 master-0 kubenswrapper[28149]: I0313 13:09:40.608704 28149 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/727369ba-25ef-4b46-a4f3-f7e9775b3057-ovsdbserver-nb\") on node \"master-0\" DevicePath \"\"" Mar 13 13:09:40.608863 master-0 kubenswrapper[28149]: I0313 13:09:40.608714 28149 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/727369ba-25ef-4b46-a4f3-f7e9775b3057-config\") on node \"master-0\" DevicePath \"\"" Mar 13 13:09:40.608863 master-0 kubenswrapper[28149]: I0313 13:09:40.608727 28149 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-64kxc\" (UniqueName: \"kubernetes.io/projected/727369ba-25ef-4b46-a4f3-f7e9775b3057-kube-api-access-64kxc\") on node \"master-0\" DevicePath \"\"" Mar 13 13:09:40.656184 master-0 kubenswrapper[28149]: I0313 13:09:40.650759 28149 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-galera-0" Mar 13 13:09:40.656184 master-0 kubenswrapper[28149]: I0313 13:09:40.650822 28149 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-galera-0" Mar 13 13:09:40.706214 master-0 kubenswrapper[28149]: I0313 13:09:40.705335 28149 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b5ab072a-a7f1-4c41-96f9-1d1cb1a09595" path="/var/lib/kubelet/pods/b5ab072a-a7f1-4c41-96f9-1d1cb1a09595/volumes" Mar 13 13:09:40.736555 master-0 kubenswrapper[28149]: I0313 13:09:40.736514 28149 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" 
pod="openstack/openstack-galera-0" Mar 13 13:09:40.774674 master-0 kubenswrapper[28149]: I0313 13:09:40.774615 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b8649b7f9-xtxz5" event={"ID":"51d60b57-3f2c-4aeb-83c9-e688bb0bb3c6","Type":"ContainerStarted","Data":"ca1e8d8a73405caa71e6289f4a9087d7c82d522efa8916691873a3333d2d6dde"} Mar 13 13:09:40.774870 master-0 kubenswrapper[28149]: I0313 13:09:40.774714 28149 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5b8649b7f9-xtxz5" Mar 13 13:09:40.782038 master-0 kubenswrapper[28149]: I0313 13:09:40.781939 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-745897cbdc-ncpns" event={"ID":"727369ba-25ef-4b46-a4f3-f7e9775b3057","Type":"ContainerDied","Data":"4e158fc470bcbac1b4cb56e0e3134fcfec6c043cadaeed164a9325d9703e28fe"} Mar 13 13:09:40.782038 master-0 kubenswrapper[28149]: I0313 13:09:40.782011 28149 scope.go:117] "RemoveContainer" containerID="716c683d35f9d6f57c41f027dee48398f3df17e9a1db29608946a4ad0faa3a81" Mar 13 13:09:40.782314 master-0 kubenswrapper[28149]: I0313 13:09:40.782207 28149 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-745897cbdc-ncpns" Mar 13 13:09:40.814847 master-0 kubenswrapper[28149]: I0313 13:09:40.814660 28149 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5b8649b7f9-xtxz5" podStartSLOduration=2.814636808 podStartE2EDuration="2.814636808s" podCreationTimestamp="2026-03-13 13:09:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 13:09:40.806342555 +0000 UTC m=+954.459807714" watchObservedRunningTime="2026-03-13 13:09:40.814636808 +0000 UTC m=+954.468101967" Mar 13 13:09:40.906676 master-0 kubenswrapper[28149]: I0313 13:09:40.906596 28149 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-745897cbdc-ncpns"] Mar 13 13:09:40.908101 master-0 kubenswrapper[28149]: I0313 13:09:40.907261 28149 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-galera-0" Mar 13 13:09:40.918330 master-0 kubenswrapper[28149]: I0313 13:09:40.918291 28149 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-745897cbdc-ncpns"] Mar 13 13:09:41.378901 master-0 kubenswrapper[28149]: I0313 13:09:41.378804 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-77fa2847-6b44-4e76-b53c-20530e0d5762\" (UniqueName: \"kubernetes.io/csi/topolvm.io^94e8267b-2dfa-472b-8e63-158d6d42564c\") pod \"swift-storage-0\" (UID: \"0e1ffcf0-0cdc-4a69-884c-47edbe0caf50\") " pod="openstack/swift-storage-0" Mar 13 13:09:41.437726 master-0 kubenswrapper[28149]: I0313 13:09:41.437510 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/0e1ffcf0-0cdc-4a69-884c-47edbe0caf50-etc-swift\") pod \"swift-storage-0\" (UID: \"0e1ffcf0-0cdc-4a69-884c-47edbe0caf50\") " pod="openstack/swift-storage-0" Mar 13 13:09:41.437842 master-0 kubenswrapper[28149]: 
E0313 13:09:41.437685 28149 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Mar 13 13:09:41.437842 master-0 kubenswrapper[28149]: E0313 13:09:41.437806 28149 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Mar 13 13:09:41.437948 master-0 kubenswrapper[28149]: E0313 13:09:41.437860 28149 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/0e1ffcf0-0cdc-4a69-884c-47edbe0caf50-etc-swift podName:0e1ffcf0-0cdc-4a69-884c-47edbe0caf50 nodeName:}" failed. No retries permitted until 2026-03-13 13:09:43.437837743 +0000 UTC m=+957.091302912 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/0e1ffcf0-0cdc-4a69-884c-47edbe0caf50-etc-swift") pod "swift-storage-0" (UID: "0e1ffcf0-0cdc-4a69-884c-47edbe0caf50") : configmap "swift-ring-files" not found Mar 13 13:09:41.806006 master-0 kubenswrapper[28149]: I0313 13:09:41.804608 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"bd7dbee4-5199-4cc3-897e-64897e098c34","Type":"ContainerStarted","Data":"1b9edc418527478b7ce07d7128407498cbc9fd50f3708476032fa86a4d0f006e"} Mar 13 13:09:41.806006 master-0 kubenswrapper[28149]: I0313 13:09:41.804679 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"bd7dbee4-5199-4cc3-897e-64897e098c34","Type":"ContainerStarted","Data":"7270e1da9540b545da5b1338d57e599672efff399a411e7d99af2ae491800fac"} Mar 13 13:09:41.856590 master-0 kubenswrapper[28149]: I0313 13:09:41.856498 28149 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-mzb78"] Mar 13 13:09:41.857258 master-0 kubenswrapper[28149]: E0313 13:09:41.857225 28149 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="727369ba-25ef-4b46-a4f3-f7e9775b3057" 
containerName="init" Mar 13 13:09:41.857258 master-0 kubenswrapper[28149]: I0313 13:09:41.857251 28149 state_mem.go:107] "Deleted CPUSet assignment" podUID="727369ba-25ef-4b46-a4f3-f7e9775b3057" containerName="init" Mar 13 13:09:41.857621 master-0 kubenswrapper[28149]: I0313 13:09:41.857593 28149 memory_manager.go:354] "RemoveStaleState removing state" podUID="727369ba-25ef-4b46-a4f3-f7e9775b3057" containerName="init" Mar 13 13:09:41.858572 master-0 kubenswrapper[28149]: I0313 13:09:41.858539 28149 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-mzb78" Mar 13 13:09:41.868518 master-0 kubenswrapper[28149]: I0313 13:09:41.868445 28149 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-mariadb-root-db-secret" Mar 13 13:09:41.879995 master-0 kubenswrapper[28149]: I0313 13:09:41.879950 28149 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-mzb78"] Mar 13 13:09:41.881678 master-0 kubenswrapper[28149]: I0313 13:09:41.881601 28149 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-northd-0" podStartSLOduration=2.02413248 podStartE2EDuration="3.881577719s" podCreationTimestamp="2026-03-13 13:09:38 +0000 UTC" firstStartedPulling="2026-03-13 13:09:39.237418614 +0000 UTC m=+952.890883773" lastFinishedPulling="2026-03-13 13:09:41.094863853 +0000 UTC m=+954.748329012" observedRunningTime="2026-03-13 13:09:41.871808267 +0000 UTC m=+955.525273436" watchObservedRunningTime="2026-03-13 13:09:41.881577719 +0000 UTC m=+955.535042878" Mar 13 13:09:42.058971 master-0 kubenswrapper[28149]: I0313 13:09:42.058885 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d2f780e9-d28c-478d-9a91-6c87f1ba7d2c-operator-scripts\") pod \"root-account-create-update-mzb78\" (UID: \"d2f780e9-d28c-478d-9a91-6c87f1ba7d2c\") 
" pod="openstack/root-account-create-update-mzb78" Mar 13 13:09:42.059272 master-0 kubenswrapper[28149]: I0313 13:09:42.059080 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-58b4f\" (UniqueName: \"kubernetes.io/projected/d2f780e9-d28c-478d-9a91-6c87f1ba7d2c-kube-api-access-58b4f\") pod \"root-account-create-update-mzb78\" (UID: \"d2f780e9-d28c-478d-9a91-6c87f1ba7d2c\") " pod="openstack/root-account-create-update-mzb78" Mar 13 13:09:42.161930 master-0 kubenswrapper[28149]: I0313 13:09:42.160810 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d2f780e9-d28c-478d-9a91-6c87f1ba7d2c-operator-scripts\") pod \"root-account-create-update-mzb78\" (UID: \"d2f780e9-d28c-478d-9a91-6c87f1ba7d2c\") " pod="openstack/root-account-create-update-mzb78" Mar 13 13:09:42.161930 master-0 kubenswrapper[28149]: I0313 13:09:42.160988 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-58b4f\" (UniqueName: \"kubernetes.io/projected/d2f780e9-d28c-478d-9a91-6c87f1ba7d2c-kube-api-access-58b4f\") pod \"root-account-create-update-mzb78\" (UID: \"d2f780e9-d28c-478d-9a91-6c87f1ba7d2c\") " pod="openstack/root-account-create-update-mzb78" Mar 13 13:09:42.161930 master-0 kubenswrapper[28149]: I0313 13:09:42.161713 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d2f780e9-d28c-478d-9a91-6c87f1ba7d2c-operator-scripts\") pod \"root-account-create-update-mzb78\" (UID: \"d2f780e9-d28c-478d-9a91-6c87f1ba7d2c\") " pod="openstack/root-account-create-update-mzb78" Mar 13 13:09:42.196276 master-0 kubenswrapper[28149]: I0313 13:09:42.196226 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-58b4f\" (UniqueName: 
\"kubernetes.io/projected/d2f780e9-d28c-478d-9a91-6c87f1ba7d2c-kube-api-access-58b4f\") pod \"root-account-create-update-mzb78\" (UID: \"d2f780e9-d28c-478d-9a91-6c87f1ba7d2c\") " pod="openstack/root-account-create-update-mzb78" Mar 13 13:09:42.265231 master-0 kubenswrapper[28149]: I0313 13:09:42.265167 28149 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-ring-rebalance-bv6nc"] Mar 13 13:09:42.267195 master-0 kubenswrapper[28149]: I0313 13:09:42.266808 28149 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-bv6nc" Mar 13 13:09:42.270090 master-0 kubenswrapper[28149]: I0313 13:09:42.269971 28149 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data" Mar 13 13:09:42.270090 master-0 kubenswrapper[28149]: I0313 13:09:42.270000 28149 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-config-data" Mar 13 13:09:42.270625 master-0 kubenswrapper[28149]: I0313 13:09:42.270599 28149 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-scripts" Mar 13 13:09:42.281355 master-0 kubenswrapper[28149]: I0313 13:09:42.280639 28149 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-bv6nc"] Mar 13 13:09:42.369442 master-0 kubenswrapper[28149]: I0313 13:09:42.369377 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/95064747-74f7-4ab6-95a6-677c5e5d8be2-combined-ca-bundle\") pod \"swift-ring-rebalance-bv6nc\" (UID: \"95064747-74f7-4ab6-95a6-677c5e5d8be2\") " pod="openstack/swift-ring-rebalance-bv6nc" Mar 13 13:09:42.369727 master-0 kubenswrapper[28149]: I0313 13:09:42.369608 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4lgm7\" (UniqueName: 
\"kubernetes.io/projected/95064747-74f7-4ab6-95a6-677c5e5d8be2-kube-api-access-4lgm7\") pod \"swift-ring-rebalance-bv6nc\" (UID: \"95064747-74f7-4ab6-95a6-677c5e5d8be2\") " pod="openstack/swift-ring-rebalance-bv6nc" Mar 13 13:09:42.369822 master-0 kubenswrapper[28149]: I0313 13:09:42.369779 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/95064747-74f7-4ab6-95a6-677c5e5d8be2-etc-swift\") pod \"swift-ring-rebalance-bv6nc\" (UID: \"95064747-74f7-4ab6-95a6-677c5e5d8be2\") " pod="openstack/swift-ring-rebalance-bv6nc" Mar 13 13:09:42.370119 master-0 kubenswrapper[28149]: I0313 13:09:42.370051 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/95064747-74f7-4ab6-95a6-677c5e5d8be2-ring-data-devices\") pod \"swift-ring-rebalance-bv6nc\" (UID: \"95064747-74f7-4ab6-95a6-677c5e5d8be2\") " pod="openstack/swift-ring-rebalance-bv6nc" Mar 13 13:09:42.370774 master-0 kubenswrapper[28149]: I0313 13:09:42.370746 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/95064747-74f7-4ab6-95a6-677c5e5d8be2-dispersionconf\") pod \"swift-ring-rebalance-bv6nc\" (UID: \"95064747-74f7-4ab6-95a6-677c5e5d8be2\") " pod="openstack/swift-ring-rebalance-bv6nc" Mar 13 13:09:42.370849 master-0 kubenswrapper[28149]: I0313 13:09:42.370805 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/95064747-74f7-4ab6-95a6-677c5e5d8be2-scripts\") pod \"swift-ring-rebalance-bv6nc\" (UID: \"95064747-74f7-4ab6-95a6-677c5e5d8be2\") " pod="openstack/swift-ring-rebalance-bv6nc" Mar 13 13:09:42.370918 master-0 kubenswrapper[28149]: I0313 13:09:42.370846 28149 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/95064747-74f7-4ab6-95a6-677c5e5d8be2-swiftconf\") pod \"swift-ring-rebalance-bv6nc\" (UID: \"95064747-74f7-4ab6-95a6-677c5e5d8be2\") " pod="openstack/swift-ring-rebalance-bv6nc" Mar 13 13:09:42.472412 master-0 kubenswrapper[28149]: I0313 13:09:42.472324 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/95064747-74f7-4ab6-95a6-677c5e5d8be2-dispersionconf\") pod \"swift-ring-rebalance-bv6nc\" (UID: \"95064747-74f7-4ab6-95a6-677c5e5d8be2\") " pod="openstack/swift-ring-rebalance-bv6nc" Mar 13 13:09:42.472412 master-0 kubenswrapper[28149]: I0313 13:09:42.472391 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/95064747-74f7-4ab6-95a6-677c5e5d8be2-scripts\") pod \"swift-ring-rebalance-bv6nc\" (UID: \"95064747-74f7-4ab6-95a6-677c5e5d8be2\") " pod="openstack/swift-ring-rebalance-bv6nc" Mar 13 13:09:42.472813 master-0 kubenswrapper[28149]: I0313 13:09:42.472617 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/95064747-74f7-4ab6-95a6-677c5e5d8be2-swiftconf\") pod \"swift-ring-rebalance-bv6nc\" (UID: \"95064747-74f7-4ab6-95a6-677c5e5d8be2\") " pod="openstack/swift-ring-rebalance-bv6nc" Mar 13 13:09:42.472906 master-0 kubenswrapper[28149]: I0313 13:09:42.472878 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/95064747-74f7-4ab6-95a6-677c5e5d8be2-combined-ca-bundle\") pod \"swift-ring-rebalance-bv6nc\" (UID: \"95064747-74f7-4ab6-95a6-677c5e5d8be2\") " pod="openstack/swift-ring-rebalance-bv6nc" Mar 13 13:09:42.473066 master-0 kubenswrapper[28149]: I0313 13:09:42.473044 28149 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-4lgm7\" (UniqueName: \"kubernetes.io/projected/95064747-74f7-4ab6-95a6-677c5e5d8be2-kube-api-access-4lgm7\") pod \"swift-ring-rebalance-bv6nc\" (UID: \"95064747-74f7-4ab6-95a6-677c5e5d8be2\") " pod="openstack/swift-ring-rebalance-bv6nc" Mar 13 13:09:42.473174 master-0 kubenswrapper[28149]: I0313 13:09:42.473150 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/95064747-74f7-4ab6-95a6-677c5e5d8be2-etc-swift\") pod \"swift-ring-rebalance-bv6nc\" (UID: \"95064747-74f7-4ab6-95a6-677c5e5d8be2\") " pod="openstack/swift-ring-rebalance-bv6nc" Mar 13 13:09:42.473266 master-0 kubenswrapper[28149]: I0313 13:09:42.473246 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/95064747-74f7-4ab6-95a6-677c5e5d8be2-ring-data-devices\") pod \"swift-ring-rebalance-bv6nc\" (UID: \"95064747-74f7-4ab6-95a6-677c5e5d8be2\") " pod="openstack/swift-ring-rebalance-bv6nc" Mar 13 13:09:42.473695 master-0 kubenswrapper[28149]: I0313 13:09:42.473655 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/95064747-74f7-4ab6-95a6-677c5e5d8be2-etc-swift\") pod \"swift-ring-rebalance-bv6nc\" (UID: \"95064747-74f7-4ab6-95a6-677c5e5d8be2\") " pod="openstack/swift-ring-rebalance-bv6nc" Mar 13 13:09:42.474022 master-0 kubenswrapper[28149]: I0313 13:09:42.473978 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/95064747-74f7-4ab6-95a6-677c5e5d8be2-scripts\") pod \"swift-ring-rebalance-bv6nc\" (UID: \"95064747-74f7-4ab6-95a6-677c5e5d8be2\") " pod="openstack/swift-ring-rebalance-bv6nc" Mar 13 13:09:42.474117 master-0 kubenswrapper[28149]: I0313 13:09:42.474068 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/95064747-74f7-4ab6-95a6-677c5e5d8be2-ring-data-devices\") pod \"swift-ring-rebalance-bv6nc\" (UID: \"95064747-74f7-4ab6-95a6-677c5e5d8be2\") " pod="openstack/swift-ring-rebalance-bv6nc" Mar 13 13:09:42.475506 master-0 kubenswrapper[28149]: I0313 13:09:42.475430 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/95064747-74f7-4ab6-95a6-677c5e5d8be2-dispersionconf\") pod \"swift-ring-rebalance-bv6nc\" (UID: \"95064747-74f7-4ab6-95a6-677c5e5d8be2\") " pod="openstack/swift-ring-rebalance-bv6nc" Mar 13 13:09:42.476670 master-0 kubenswrapper[28149]: I0313 13:09:42.476634 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/95064747-74f7-4ab6-95a6-677c5e5d8be2-combined-ca-bundle\") pod \"swift-ring-rebalance-bv6nc\" (UID: \"95064747-74f7-4ab6-95a6-677c5e5d8be2\") " pod="openstack/swift-ring-rebalance-bv6nc" Mar 13 13:09:42.476900 master-0 kubenswrapper[28149]: I0313 13:09:42.476874 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/95064747-74f7-4ab6-95a6-677c5e5d8be2-swiftconf\") pod \"swift-ring-rebalance-bv6nc\" (UID: \"95064747-74f7-4ab6-95a6-677c5e5d8be2\") " pod="openstack/swift-ring-rebalance-bv6nc" Mar 13 13:09:42.481349 master-0 kubenswrapper[28149]: I0313 13:09:42.481321 28149 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-mzb78" Mar 13 13:09:42.769264 master-0 kubenswrapper[28149]: I0313 13:09:42.768767 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4lgm7\" (UniqueName: \"kubernetes.io/projected/95064747-74f7-4ab6-95a6-677c5e5d8be2-kube-api-access-4lgm7\") pod \"swift-ring-rebalance-bv6nc\" (UID: \"95064747-74f7-4ab6-95a6-677c5e5d8be2\") " pod="openstack/swift-ring-rebalance-bv6nc" Mar 13 13:09:42.770005 master-0 kubenswrapper[28149]: I0313 13:09:42.769958 28149 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="727369ba-25ef-4b46-a4f3-f7e9775b3057" path="/var/lib/kubelet/pods/727369ba-25ef-4b46-a4f3-f7e9775b3057/volumes" Mar 13 13:09:42.821574 master-0 kubenswrapper[28149]: I0313 13:09:42.821521 28149 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-northd-0" Mar 13 13:09:42.939222 master-0 kubenswrapper[28149]: I0313 13:09:42.938946 28149 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-bv6nc" Mar 13 13:09:43.097894 master-0 kubenswrapper[28149]: I0313 13:09:43.097852 28149 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-mzb78"] Mar 13 13:09:43.449397 master-0 kubenswrapper[28149]: I0313 13:09:43.449341 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/0e1ffcf0-0cdc-4a69-884c-47edbe0caf50-etc-swift\") pod \"swift-storage-0\" (UID: \"0e1ffcf0-0cdc-4a69-884c-47edbe0caf50\") " pod="openstack/swift-storage-0" Mar 13 13:09:43.449770 master-0 kubenswrapper[28149]: E0313 13:09:43.449726 28149 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Mar 13 13:09:43.449847 master-0 kubenswrapper[28149]: E0313 13:09:43.449778 28149 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Mar 13 13:09:43.449895 master-0 kubenswrapper[28149]: E0313 13:09:43.449852 28149 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/0e1ffcf0-0cdc-4a69-884c-47edbe0caf50-etc-swift podName:0e1ffcf0-0cdc-4a69-884c-47edbe0caf50 nodeName:}" failed. No retries permitted until 2026-03-13 13:09:47.449828361 +0000 UTC m=+961.103293520 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/0e1ffcf0-0cdc-4a69-884c-47edbe0caf50-etc-swift") pod "swift-storage-0" (UID: "0e1ffcf0-0cdc-4a69-884c-47edbe0caf50") : configmap "swift-ring-files" not found Mar 13 13:09:43.465864 master-0 kubenswrapper[28149]: I0313 13:09:43.465818 28149 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-bv6nc"] Mar 13 13:09:43.835041 master-0 kubenswrapper[28149]: I0313 13:09:43.834968 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-bv6nc" event={"ID":"95064747-74f7-4ab6-95a6-677c5e5d8be2","Type":"ContainerStarted","Data":"ccf46888836d4d6b32f3530f978290a722fc565314b30f27af00e40c1d5077a2"} Mar 13 13:09:43.836578 master-0 kubenswrapper[28149]: I0313 13:09:43.836537 28149 generic.go:334] "Generic (PLEG): container finished" podID="d2f780e9-d28c-478d-9a91-6c87f1ba7d2c" containerID="fd0f55704d8e529fe60229924700911c0c59e1ba7817dbab931e76e86defa07e" exitCode=0 Mar 13 13:09:43.836773 master-0 kubenswrapper[28149]: I0313 13:09:43.836674 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-mzb78" event={"ID":"d2f780e9-d28c-478d-9a91-6c87f1ba7d2c","Type":"ContainerDied","Data":"fd0f55704d8e529fe60229924700911c0c59e1ba7817dbab931e76e86defa07e"} Mar 13 13:09:43.836835 master-0 kubenswrapper[28149]: I0313 13:09:43.836792 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-mzb78" event={"ID":"d2f780e9-d28c-478d-9a91-6c87f1ba7d2c","Type":"ContainerStarted","Data":"f257518388597e6d71f228c50a07b4730442abadd0d0647be34dfd89ebb45c6f"} Mar 13 13:09:45.254163 master-0 kubenswrapper[28149]: I0313 13:09:45.253959 28149 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-create-45kqr"] Mar 13 13:09:45.258359 master-0 kubenswrapper[28149]: I0313 13:09:45.256161 28149 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-create-45kqr" Mar 13 13:09:45.336996 master-0 kubenswrapper[28149]: I0313 13:09:45.336919 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5s56x\" (UniqueName: \"kubernetes.io/projected/08c292f4-ce11-41f5-b9ff-7a40e65cf085-kube-api-access-5s56x\") pod \"keystone-db-create-45kqr\" (UID: \"08c292f4-ce11-41f5-b9ff-7a40e65cf085\") " pod="openstack/keystone-db-create-45kqr" Mar 13 13:09:45.337310 master-0 kubenswrapper[28149]: I0313 13:09:45.337079 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/08c292f4-ce11-41f5-b9ff-7a40e65cf085-operator-scripts\") pod \"keystone-db-create-45kqr\" (UID: \"08c292f4-ce11-41f5-b9ff-7a40e65cf085\") " pod="openstack/keystone-db-create-45kqr" Mar 13 13:09:45.387248 master-0 kubenswrapper[28149]: I0313 13:09:45.378732 28149 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-45kqr"] Mar 13 13:09:45.438336 master-0 kubenswrapper[28149]: I0313 13:09:45.438273 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5s56x\" (UniqueName: \"kubernetes.io/projected/08c292f4-ce11-41f5-b9ff-7a40e65cf085-kube-api-access-5s56x\") pod \"keystone-db-create-45kqr\" (UID: \"08c292f4-ce11-41f5-b9ff-7a40e65cf085\") " pod="openstack/keystone-db-create-45kqr" Mar 13 13:09:45.438588 master-0 kubenswrapper[28149]: I0313 13:09:45.438441 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/08c292f4-ce11-41f5-b9ff-7a40e65cf085-operator-scripts\") pod \"keystone-db-create-45kqr\" (UID: \"08c292f4-ce11-41f5-b9ff-7a40e65cf085\") " pod="openstack/keystone-db-create-45kqr" Mar 13 13:09:45.439231 master-0 kubenswrapper[28149]: I0313 13:09:45.439206 28149 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/08c292f4-ce11-41f5-b9ff-7a40e65cf085-operator-scripts\") pod \"keystone-db-create-45kqr\" (UID: \"08c292f4-ce11-41f5-b9ff-7a40e65cf085\") " pod="openstack/keystone-db-create-45kqr" Mar 13 13:09:45.581423 master-0 kubenswrapper[28149]: I0313 13:09:45.581372 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5s56x\" (UniqueName: \"kubernetes.io/projected/08c292f4-ce11-41f5-b9ff-7a40e65cf085-kube-api-access-5s56x\") pod \"keystone-db-create-45kqr\" (UID: \"08c292f4-ce11-41f5-b9ff-7a40e65cf085\") " pod="openstack/keystone-db-create-45kqr" Mar 13 13:09:45.800001 master-0 kubenswrapper[28149]: I0313 13:09:45.799913 28149 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-7c6e-account-create-update-4tgk8"] Mar 13 13:09:45.801893 master-0 kubenswrapper[28149]: I0313 13:09:45.801807 28149 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-7c6e-account-create-update-4tgk8" Mar 13 13:09:45.817001 master-0 kubenswrapper[28149]: I0313 13:09:45.816795 28149 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-db-secret" Mar 13 13:09:45.863889 master-0 kubenswrapper[28149]: I0313 13:09:45.863672 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/805cfa21-0ee8-4da5-9a9a-1cf852f868c7-operator-scripts\") pod \"keystone-7c6e-account-create-update-4tgk8\" (UID: \"805cfa21-0ee8-4da5-9a9a-1cf852f868c7\") " pod="openstack/keystone-7c6e-account-create-update-4tgk8" Mar 13 13:09:45.864550 master-0 kubenswrapper[28149]: I0313 13:09:45.864421 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sbb5b\" (UniqueName: \"kubernetes.io/projected/805cfa21-0ee8-4da5-9a9a-1cf852f868c7-kube-api-access-sbb5b\") pod \"keystone-7c6e-account-create-update-4tgk8\" (UID: \"805cfa21-0ee8-4da5-9a9a-1cf852f868c7\") " pod="openstack/keystone-7c6e-account-create-update-4tgk8" Mar 13 13:09:45.878704 master-0 kubenswrapper[28149]: I0313 13:09:45.878659 28149 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-create-45kqr" Mar 13 13:09:45.985196 master-0 kubenswrapper[28149]: I0313 13:09:45.967854 28149 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-7c6e-account-create-update-4tgk8"] Mar 13 13:09:45.985196 master-0 kubenswrapper[28149]: I0313 13:09:45.968244 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/805cfa21-0ee8-4da5-9a9a-1cf852f868c7-operator-scripts\") pod \"keystone-7c6e-account-create-update-4tgk8\" (UID: \"805cfa21-0ee8-4da5-9a9a-1cf852f868c7\") " pod="openstack/keystone-7c6e-account-create-update-4tgk8" Mar 13 13:09:45.985196 master-0 kubenswrapper[28149]: I0313 13:09:45.968486 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sbb5b\" (UniqueName: \"kubernetes.io/projected/805cfa21-0ee8-4da5-9a9a-1cf852f868c7-kube-api-access-sbb5b\") pod \"keystone-7c6e-account-create-update-4tgk8\" (UID: \"805cfa21-0ee8-4da5-9a9a-1cf852f868c7\") " pod="openstack/keystone-7c6e-account-create-update-4tgk8" Mar 13 13:09:45.985196 master-0 kubenswrapper[28149]: I0313 13:09:45.969961 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/805cfa21-0ee8-4da5-9a9a-1cf852f868c7-operator-scripts\") pod \"keystone-7c6e-account-create-update-4tgk8\" (UID: \"805cfa21-0ee8-4da5-9a9a-1cf852f868c7\") " pod="openstack/keystone-7c6e-account-create-update-4tgk8" Mar 13 13:09:46.306774 master-0 kubenswrapper[28149]: I0313 13:09:46.306681 28149 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-create-k5hkp"] Mar 13 13:09:46.309111 master-0 kubenswrapper[28149]: I0313 13:09:46.309076 28149 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-create-k5hkp" Mar 13 13:09:46.395994 master-0 kubenswrapper[28149]: I0313 13:09:46.395916 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c4zzr\" (UniqueName: \"kubernetes.io/projected/518ac108-6964-45cd-af8a-d2e8d98cdb39-kube-api-access-c4zzr\") pod \"placement-db-create-k5hkp\" (UID: \"518ac108-6964-45cd-af8a-d2e8d98cdb39\") " pod="openstack/placement-db-create-k5hkp" Mar 13 13:09:46.396171 master-0 kubenswrapper[28149]: I0313 13:09:46.396070 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/518ac108-6964-45cd-af8a-d2e8d98cdb39-operator-scripts\") pod \"placement-db-create-k5hkp\" (UID: \"518ac108-6964-45cd-af8a-d2e8d98cdb39\") " pod="openstack/placement-db-create-k5hkp" Mar 13 13:09:46.438829 master-0 kubenswrapper[28149]: I0313 13:09:46.438762 28149 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-mzb78" Mar 13 13:09:46.498411 master-0 kubenswrapper[28149]: I0313 13:09:46.498335 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c4zzr\" (UniqueName: \"kubernetes.io/projected/518ac108-6964-45cd-af8a-d2e8d98cdb39-kube-api-access-c4zzr\") pod \"placement-db-create-k5hkp\" (UID: \"518ac108-6964-45cd-af8a-d2e8d98cdb39\") " pod="openstack/placement-db-create-k5hkp" Mar 13 13:09:46.498411 master-0 kubenswrapper[28149]: I0313 13:09:46.498399 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/518ac108-6964-45cd-af8a-d2e8d98cdb39-operator-scripts\") pod \"placement-db-create-k5hkp\" (UID: \"518ac108-6964-45cd-af8a-d2e8d98cdb39\") " pod="openstack/placement-db-create-k5hkp" Mar 13 13:09:46.499537 master-0 kubenswrapper[28149]: I0313 13:09:46.499393 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/518ac108-6964-45cd-af8a-d2e8d98cdb39-operator-scripts\") pod \"placement-db-create-k5hkp\" (UID: \"518ac108-6964-45cd-af8a-d2e8d98cdb39\") " pod="openstack/placement-db-create-k5hkp" Mar 13 13:09:46.588229 master-0 kubenswrapper[28149]: I0313 13:09:46.588094 28149 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-k5hkp"] Mar 13 13:09:46.601125 master-0 kubenswrapper[28149]: I0313 13:09:46.600760 28149 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d2f780e9-d28c-478d-9a91-6c87f1ba7d2c-operator-scripts\") pod \"d2f780e9-d28c-478d-9a91-6c87f1ba7d2c\" (UID: \"d2f780e9-d28c-478d-9a91-6c87f1ba7d2c\") " Mar 13 13:09:46.601125 master-0 kubenswrapper[28149]: I0313 13:09:46.600824 28149 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"kube-api-access-58b4f\" (UniqueName: \"kubernetes.io/projected/d2f780e9-d28c-478d-9a91-6c87f1ba7d2c-kube-api-access-58b4f\") pod \"d2f780e9-d28c-478d-9a91-6c87f1ba7d2c\" (UID: \"d2f780e9-d28c-478d-9a91-6c87f1ba7d2c\") " Mar 13 13:09:46.601650 master-0 kubenswrapper[28149]: I0313 13:09:46.601520 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sbb5b\" (UniqueName: \"kubernetes.io/projected/805cfa21-0ee8-4da5-9a9a-1cf852f868c7-kube-api-access-sbb5b\") pod \"keystone-7c6e-account-create-update-4tgk8\" (UID: \"805cfa21-0ee8-4da5-9a9a-1cf852f868c7\") " pod="openstack/keystone-7c6e-account-create-update-4tgk8" Mar 13 13:09:46.602978 master-0 kubenswrapper[28149]: I0313 13:09:46.602905 28149 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d2f780e9-d28c-478d-9a91-6c87f1ba7d2c-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "d2f780e9-d28c-478d-9a91-6c87f1ba7d2c" (UID: "d2f780e9-d28c-478d-9a91-6c87f1ba7d2c"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 13 13:09:46.605670 master-0 kubenswrapper[28149]: I0313 13:09:46.605607 28149 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d2f780e9-d28c-478d-9a91-6c87f1ba7d2c-kube-api-access-58b4f" (OuterVolumeSpecName: "kube-api-access-58b4f") pod "d2f780e9-d28c-478d-9a91-6c87f1ba7d2c" (UID: "d2f780e9-d28c-478d-9a91-6c87f1ba7d2c"). InnerVolumeSpecName "kube-api-access-58b4f". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 13:09:46.704731 master-0 kubenswrapper[28149]: I0313 13:09:46.704656 28149 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d2f780e9-d28c-478d-9a91-6c87f1ba7d2c-operator-scripts\") on node \"master-0\" DevicePath \"\"" Mar 13 13:09:46.704731 master-0 kubenswrapper[28149]: I0313 13:09:46.704712 28149 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-58b4f\" (UniqueName: \"kubernetes.io/projected/d2f780e9-d28c-478d-9a91-6c87f1ba7d2c-kube-api-access-58b4f\") on node \"master-0\" DevicePath \"\"" Mar 13 13:09:46.861022 master-0 kubenswrapper[28149]: I0313 13:09:46.856637 28149 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-7c6e-account-create-update-4tgk8" Mar 13 13:09:46.864924 master-0 kubenswrapper[28149]: I0313 13:09:46.864815 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c4zzr\" (UniqueName: \"kubernetes.io/projected/518ac108-6964-45cd-af8a-d2e8d98cdb39-kube-api-access-c4zzr\") pod \"placement-db-create-k5hkp\" (UID: \"518ac108-6964-45cd-af8a-d2e8d98cdb39\") " pod="openstack/placement-db-create-k5hkp" Mar 13 13:09:46.875297 master-0 kubenswrapper[28149]: I0313 13:09:46.875222 28149 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-f29e-account-create-update-z7cvh"] Mar 13 13:09:46.875958 master-0 kubenswrapper[28149]: E0313 13:09:46.875926 28149 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d2f780e9-d28c-478d-9a91-6c87f1ba7d2c" containerName="mariadb-account-create-update" Mar 13 13:09:46.875958 master-0 kubenswrapper[28149]: I0313 13:09:46.875953 28149 state_mem.go:107] "Deleted CPUSet assignment" podUID="d2f780e9-d28c-478d-9a91-6c87f1ba7d2c" containerName="mariadb-account-create-update" Mar 13 13:09:46.876364 master-0 kubenswrapper[28149]: I0313 13:09:46.876340 28149 
memory_manager.go:354] "RemoveStaleState removing state" podUID="d2f780e9-d28c-478d-9a91-6c87f1ba7d2c" containerName="mariadb-account-create-update" Mar 13 13:09:46.877201 master-0 kubenswrapper[28149]: I0313 13:09:46.877171 28149 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-f29e-account-create-update-z7cvh" Mar 13 13:09:46.879158 master-0 kubenswrapper[28149]: I0313 13:09:46.879106 28149 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-db-secret" Mar 13 13:09:46.927423 master-0 kubenswrapper[28149]: I0313 13:09:46.927367 28149 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-k5hkp" Mar 13 13:09:46.938760 master-0 kubenswrapper[28149]: I0313 13:09:46.938724 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-mzb78" event={"ID":"d2f780e9-d28c-478d-9a91-6c87f1ba7d2c","Type":"ContainerDied","Data":"f257518388597e6d71f228c50a07b4730442abadd0d0647be34dfd89ebb45c6f"} Mar 13 13:09:46.940258 master-0 kubenswrapper[28149]: I0313 13:09:46.940181 28149 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f257518388597e6d71f228c50a07b4730442abadd0d0647be34dfd89ebb45c6f" Mar 13 13:09:46.940374 master-0 kubenswrapper[28149]: I0313 13:09:46.938818 28149 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-mzb78" Mar 13 13:09:47.034557 master-0 kubenswrapper[28149]: I0313 13:09:47.032178 28149 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-f29e-account-create-update-z7cvh"] Mar 13 13:09:47.070080 master-0 kubenswrapper[28149]: I0313 13:09:47.069987 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5wl7d\" (UniqueName: \"kubernetes.io/projected/11ae55ca-4c2e-47f9-9c32-96a9ee19a4e4-kube-api-access-5wl7d\") pod \"placement-f29e-account-create-update-z7cvh\" (UID: \"11ae55ca-4c2e-47f9-9c32-96a9ee19a4e4\") " pod="openstack/placement-f29e-account-create-update-z7cvh" Mar 13 13:09:47.070389 master-0 kubenswrapper[28149]: I0313 13:09:47.070316 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/11ae55ca-4c2e-47f9-9c32-96a9ee19a4e4-operator-scripts\") pod \"placement-f29e-account-create-update-z7cvh\" (UID: \"11ae55ca-4c2e-47f9-9c32-96a9ee19a4e4\") " pod="openstack/placement-f29e-account-create-update-z7cvh" Mar 13 13:09:47.172980 master-0 kubenswrapper[28149]: I0313 13:09:47.172758 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/11ae55ca-4c2e-47f9-9c32-96a9ee19a4e4-operator-scripts\") pod \"placement-f29e-account-create-update-z7cvh\" (UID: \"11ae55ca-4c2e-47f9-9c32-96a9ee19a4e4\") " pod="openstack/placement-f29e-account-create-update-z7cvh" Mar 13 13:09:47.172980 master-0 kubenswrapper[28149]: I0313 13:09:47.172869 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5wl7d\" (UniqueName: \"kubernetes.io/projected/11ae55ca-4c2e-47f9-9c32-96a9ee19a4e4-kube-api-access-5wl7d\") pod \"placement-f29e-account-create-update-z7cvh\" (UID: 
\"11ae55ca-4c2e-47f9-9c32-96a9ee19a4e4\") " pod="openstack/placement-f29e-account-create-update-z7cvh" Mar 13 13:09:47.173776 master-0 kubenswrapper[28149]: I0313 13:09:47.173731 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/11ae55ca-4c2e-47f9-9c32-96a9ee19a4e4-operator-scripts\") pod \"placement-f29e-account-create-update-z7cvh\" (UID: \"11ae55ca-4c2e-47f9-9c32-96a9ee19a4e4\") " pod="openstack/placement-f29e-account-create-update-z7cvh" Mar 13 13:09:47.248424 master-0 kubenswrapper[28149]: I0313 13:09:47.248344 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5wl7d\" (UniqueName: \"kubernetes.io/projected/11ae55ca-4c2e-47f9-9c32-96a9ee19a4e4-kube-api-access-5wl7d\") pod \"placement-f29e-account-create-update-z7cvh\" (UID: \"11ae55ca-4c2e-47f9-9c32-96a9ee19a4e4\") " pod="openstack/placement-f29e-account-create-update-z7cvh" Mar 13 13:09:47.585784 master-0 kubenswrapper[28149]: I0313 13:09:47.584425 28149 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-f29e-account-create-update-z7cvh" Mar 13 13:09:47.588002 master-0 kubenswrapper[28149]: I0313 13:09:47.587934 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/0e1ffcf0-0cdc-4a69-884c-47edbe0caf50-etc-swift\") pod \"swift-storage-0\" (UID: \"0e1ffcf0-0cdc-4a69-884c-47edbe0caf50\") " pod="openstack/swift-storage-0" Mar 13 13:09:47.588261 master-0 kubenswrapper[28149]: E0313 13:09:47.588185 28149 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Mar 13 13:09:47.588261 master-0 kubenswrapper[28149]: E0313 13:09:47.588216 28149 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Mar 13 13:09:47.588388 master-0 kubenswrapper[28149]: E0313 13:09:47.588276 28149 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/0e1ffcf0-0cdc-4a69-884c-47edbe0caf50-etc-swift podName:0e1ffcf0-0cdc-4a69-884c-47edbe0caf50 nodeName:}" failed. No retries permitted until 2026-03-13 13:09:55.588259729 +0000 UTC m=+969.241724888 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/0e1ffcf0-0cdc-4a69-884c-47edbe0caf50-etc-swift") pod "swift-storage-0" (UID: "0e1ffcf0-0cdc-4a69-884c-47edbe0caf50") : configmap "swift-ring-files" not found Mar 13 13:09:48.041647 master-0 kubenswrapper[28149]: I0313 13:09:48.041589 28149 trace.go:236] Trace[1672862185]: "Calculate volume metrics of persistence for pod openstack/rabbitmq-cell1-server-0" (13-Mar-2026 13:09:46.950) (total time: 1091ms): Mar 13 13:09:48.041647 master-0 kubenswrapper[28149]: Trace[1672862185]: [1.091286234s] [1.091286234s] END Mar 13 13:09:48.284405 master-0 kubenswrapper[28149]: I0313 13:09:48.284349 28149 trace.go:236] Trace[175141488]: "Calculate volume metrics of mysql-db for pod openstack/openstack-cell1-galera-0" (13-Mar-2026 13:09:46.951) (total time: 1333ms): Mar 13 13:09:48.284405 master-0 kubenswrapper[28149]: Trace[175141488]: [1.333206231s] [1.333206231s] END Mar 13 13:09:48.661393 master-0 kubenswrapper[28149]: I0313 13:09:48.661328 28149 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-5b8649b7f9-xtxz5" Mar 13 13:09:49.953077 master-0 kubenswrapper[28149]: I0313 13:09:49.953004 28149 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-45kqr"] Mar 13 13:09:49.968806 master-0 kubenswrapper[28149]: I0313 13:09:49.968752 28149 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-f29e-account-create-update-z7cvh"] Mar 13 13:09:49.982341 master-0 kubenswrapper[28149]: I0313 13:09:49.982276 28149 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-7c6e-account-create-update-4tgk8"] Mar 13 13:09:50.527400 master-0 kubenswrapper[28149]: W0313 13:09:50.527336 28149 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod805cfa21_0ee8_4da5_9a9a_1cf852f868c7.slice/crio-169ab003b85a874ae1d5dcfc52745bb1228eda2b01fffa34218ec865502269a7 WatchSource:0}: Error finding container 169ab003b85a874ae1d5dcfc52745bb1228eda2b01fffa34218ec865502269a7: Status 404 returned error can't find the container with id 169ab003b85a874ae1d5dcfc52745bb1228eda2b01fffa34218ec865502269a7 Mar 13 13:09:50.529820 master-0 kubenswrapper[28149]: W0313 13:09:50.529775 28149 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod11ae55ca_4c2e_47f9_9c32_96a9ee19a4e4.slice/crio-551ac9be95bd6693b1e272cdecca333fc7ef376b08400074b601bfca60fdf7a5 WatchSource:0}: Error finding container 551ac9be95bd6693b1e272cdecca333fc7ef376b08400074b601bfca60fdf7a5: Status 404 returned error can't find the container with id 551ac9be95bd6693b1e272cdecca333fc7ef376b08400074b601bfca60fdf7a5 Mar 13 13:09:50.532983 master-0 kubenswrapper[28149]: W0313 13:09:50.532933 28149 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod08c292f4_ce11_41f5_b9ff_7a40e65cf085.slice/crio-0d0823b9d136a088f6dc651e36e902c8da4d4dc8cdda92b8fe16410a29600465 WatchSource:0}: Error finding container 0d0823b9d136a088f6dc651e36e902c8da4d4dc8cdda92b8fe16410a29600465: Status 404 returned error can't find the container with id 0d0823b9d136a088f6dc651e36e902c8da4d4dc8cdda92b8fe16410a29600465 Mar 13 13:09:50.595432 master-0 kubenswrapper[28149]: I0313 13:09:50.595253 28149 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-db-secret" Mar 13 13:09:50.598228 master-0 kubenswrapper[28149]: I0313 13:09:50.598171 28149 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6ff8fd9d5c-gztb5"] Mar 13 13:09:50.600263 master-0 kubenswrapper[28149]: I0313 13:09:50.600209 28149 kuberuntime_container.go:808] "Killing 
container with a grace period" pod="openstack/dnsmasq-dns-6ff8fd9d5c-gztb5" podUID="afde1d0c-cef9-4fb3-94d0-f88cab0b4e01" containerName="dnsmasq-dns" containerID="cri-o://d131bfef4eb105aff051c55f10bdb6b294ee6cf9c636d703df83cfe2951d467e" gracePeriod=10 Mar 13 13:09:50.608588 master-0 kubenswrapper[28149]: W0313 13:09:50.608543 28149 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod518ac108_6964_45cd_af8a_d2e8d98cdb39.slice/crio-bf8be3f7a4e573b7e12c18aecbc5f59db3eab81416e49452e50e4a01be246040 WatchSource:0}: Error finding container bf8be3f7a4e573b7e12c18aecbc5f59db3eab81416e49452e50e4a01be246040: Status 404 returned error can't find the container with id bf8be3f7a4e573b7e12c18aecbc5f59db3eab81416e49452e50e4a01be246040 Mar 13 13:09:50.611316 master-0 kubenswrapper[28149]: I0313 13:09:50.611223 28149 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-k5hkp"] Mar 13 13:09:51.000567 master-0 kubenswrapper[28149]: I0313 13:09:51.000439 28149 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-create-2vrz2"] Mar 13 13:09:51.003083 master-0 kubenswrapper[28149]: I0313 13:09:51.001877 28149 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-create-2vrz2" Mar 13 13:09:51.027208 master-0 kubenswrapper[28149]: I0313 13:09:51.027069 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-f29e-account-create-update-z7cvh" event={"ID":"11ae55ca-4c2e-47f9-9c32-96a9ee19a4e4","Type":"ContainerStarted","Data":"551ac9be95bd6693b1e272cdecca333fc7ef376b08400074b601bfca60fdf7a5"} Mar 13 13:09:51.033232 master-0 kubenswrapper[28149]: I0313 13:09:51.030488 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-7c6e-account-create-update-4tgk8" event={"ID":"805cfa21-0ee8-4da5-9a9a-1cf852f868c7","Type":"ContainerStarted","Data":"169ab003b85a874ae1d5dcfc52745bb1228eda2b01fffa34218ec865502269a7"} Mar 13 13:09:51.033232 master-0 kubenswrapper[28149]: I0313 13:09:51.032877 28149 generic.go:334] "Generic (PLEG): container finished" podID="afde1d0c-cef9-4fb3-94d0-f88cab0b4e01" containerID="d131bfef4eb105aff051c55f10bdb6b294ee6cf9c636d703df83cfe2951d467e" exitCode=0 Mar 13 13:09:51.033232 master-0 kubenswrapper[28149]: I0313 13:09:51.032915 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6ff8fd9d5c-gztb5" event={"ID":"afde1d0c-cef9-4fb3-94d0-f88cab0b4e01","Type":"ContainerDied","Data":"d131bfef4eb105aff051c55f10bdb6b294ee6cf9c636d703df83cfe2951d467e"} Mar 13 13:09:51.033701 master-0 kubenswrapper[28149]: I0313 13:09:51.033672 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-k5hkp" event={"ID":"518ac108-6964-45cd-af8a-d2e8d98cdb39","Type":"ContainerStarted","Data":"bf8be3f7a4e573b7e12c18aecbc5f59db3eab81416e49452e50e4a01be246040"} Mar 13 13:09:51.034526 master-0 kubenswrapper[28149]: I0313 13:09:51.034498 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-45kqr" event={"ID":"08c292f4-ce11-41f5-b9ff-7a40e65cf085","Type":"ContainerStarted","Data":"0d0823b9d136a088f6dc651e36e902c8da4d4dc8cdda92b8fe16410a29600465"} Mar 
13 13:09:51.064700 master-0 kubenswrapper[28149]: I0313 13:09:51.064640 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5d3ef396-7b26-4828-98c2-3d3acd135ed6-operator-scripts\") pod \"glance-db-create-2vrz2\" (UID: \"5d3ef396-7b26-4828-98c2-3d3acd135ed6\") " pod="openstack/glance-db-create-2vrz2" Mar 13 13:09:51.064812 master-0 kubenswrapper[28149]: I0313 13:09:51.064796 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wx5gd\" (UniqueName: \"kubernetes.io/projected/5d3ef396-7b26-4828-98c2-3d3acd135ed6-kube-api-access-wx5gd\") pod \"glance-db-create-2vrz2\" (UID: \"5d3ef396-7b26-4828-98c2-3d3acd135ed6\") " pod="openstack/glance-db-create-2vrz2" Mar 13 13:09:51.247635 master-0 kubenswrapper[28149]: I0313 13:09:51.247575 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5d3ef396-7b26-4828-98c2-3d3acd135ed6-operator-scripts\") pod \"glance-db-create-2vrz2\" (UID: \"5d3ef396-7b26-4828-98c2-3d3acd135ed6\") " pod="openstack/glance-db-create-2vrz2" Mar 13 13:09:51.247635 master-0 kubenswrapper[28149]: I0313 13:09:51.247643 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wx5gd\" (UniqueName: \"kubernetes.io/projected/5d3ef396-7b26-4828-98c2-3d3acd135ed6-kube-api-access-wx5gd\") pod \"glance-db-create-2vrz2\" (UID: \"5d3ef396-7b26-4828-98c2-3d3acd135ed6\") " pod="openstack/glance-db-create-2vrz2" Mar 13 13:09:51.248958 master-0 kubenswrapper[28149]: I0313 13:09:51.248935 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5d3ef396-7b26-4828-98c2-3d3acd135ed6-operator-scripts\") pod \"glance-db-create-2vrz2\" (UID: \"5d3ef396-7b26-4828-98c2-3d3acd135ed6\") " 
pod="openstack/glance-db-create-2vrz2" Mar 13 13:09:51.277066 master-0 kubenswrapper[28149]: I0313 13:09:51.276993 28149 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-6ff8fd9d5c-gztb5" podUID="afde1d0c-cef9-4fb3-94d0-f88cab0b4e01" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.128.0.168:5353: connect: connection refused" Mar 13 13:09:51.516244 master-0 kubenswrapper[28149]: I0313 13:09:51.516171 28149 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-2vrz2"] Mar 13 13:09:51.795006 master-0 kubenswrapper[28149]: I0313 13:09:51.794946 28149 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6ff8fd9d5c-gztb5" Mar 13 13:09:52.058163 master-0 kubenswrapper[28149]: I0313 13:09:52.058105 28149 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/afde1d0c-cef9-4fb3-94d0-f88cab0b4e01-config\") pod \"afde1d0c-cef9-4fb3-94d0-f88cab0b4e01\" (UID: \"afde1d0c-cef9-4fb3-94d0-f88cab0b4e01\") " Mar 13 13:09:52.058761 master-0 kubenswrapper[28149]: I0313 13:09:52.058742 28149 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sbdgf\" (UniqueName: \"kubernetes.io/projected/afde1d0c-cef9-4fb3-94d0-f88cab0b4e01-kube-api-access-sbdgf\") pod \"afde1d0c-cef9-4fb3-94d0-f88cab0b4e01\" (UID: \"afde1d0c-cef9-4fb3-94d0-f88cab0b4e01\") " Mar 13 13:09:52.058860 master-0 kubenswrapper[28149]: I0313 13:09:52.058846 28149 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/afde1d0c-cef9-4fb3-94d0-f88cab0b4e01-dns-svc\") pod \"afde1d0c-cef9-4fb3-94d0-f88cab0b4e01\" (UID: \"afde1d0c-cef9-4fb3-94d0-f88cab0b4e01\") " Mar 13 13:09:52.066319 master-0 kubenswrapper[28149]: I0313 13:09:52.064164 28149 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openstack/glance-4f01-account-create-update-wcwmp"] Mar 13 13:09:52.066319 master-0 kubenswrapper[28149]: E0313 13:09:52.064789 28149 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="afde1d0c-cef9-4fb3-94d0-f88cab0b4e01" containerName="init" Mar 13 13:09:52.066319 master-0 kubenswrapper[28149]: I0313 13:09:52.064807 28149 state_mem.go:107] "Deleted CPUSet assignment" podUID="afde1d0c-cef9-4fb3-94d0-f88cab0b4e01" containerName="init" Mar 13 13:09:52.066319 master-0 kubenswrapper[28149]: E0313 13:09:52.064829 28149 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="afde1d0c-cef9-4fb3-94d0-f88cab0b4e01" containerName="dnsmasq-dns" Mar 13 13:09:52.066319 master-0 kubenswrapper[28149]: I0313 13:09:52.064853 28149 state_mem.go:107] "Deleted CPUSet assignment" podUID="afde1d0c-cef9-4fb3-94d0-f88cab0b4e01" containerName="dnsmasq-dns" Mar 13 13:09:52.066319 master-0 kubenswrapper[28149]: I0313 13:09:52.065263 28149 memory_manager.go:354] "RemoveStaleState removing state" podUID="afde1d0c-cef9-4fb3-94d0-f88cab0b4e01" containerName="dnsmasq-dns" Mar 13 13:09:52.066319 master-0 kubenswrapper[28149]: I0313 13:09:52.066232 28149 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-4f01-account-create-update-wcwmp" Mar 13 13:09:52.083595 master-0 kubenswrapper[28149]: I0313 13:09:52.075637 28149 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-db-secret" Mar 13 13:09:52.083595 master-0 kubenswrapper[28149]: I0313 13:09:52.079717 28149 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/afde1d0c-cef9-4fb3-94d0-f88cab0b4e01-kube-api-access-sbdgf" (OuterVolumeSpecName: "kube-api-access-sbdgf") pod "afde1d0c-cef9-4fb3-94d0-f88cab0b4e01" (UID: "afde1d0c-cef9-4fb3-94d0-f88cab0b4e01"). InnerVolumeSpecName "kube-api-access-sbdgf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 13:09:52.101169 master-0 kubenswrapper[28149]: I0313 13:09:52.087109 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-45kqr" event={"ID":"08c292f4-ce11-41f5-b9ff-7a40e65cf085","Type":"ContainerStarted","Data":"81cac9f39ac0d0fef197556c6635b0b5934ce49c2079ffaab3bcef42ff8b37df"} Mar 13 13:09:52.112304 master-0 kubenswrapper[28149]: I0313 13:09:52.103374 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-f29e-account-create-update-z7cvh" event={"ID":"11ae55ca-4c2e-47f9-9c32-96a9ee19a4e4","Type":"ContainerStarted","Data":"520d807536cb81674b261119b513a944bc9aecd64b99e8d30ab702a455e478c7"} Mar 13 13:09:52.112304 master-0 kubenswrapper[28149]: I0313 13:09:52.108781 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wx5gd\" (UniqueName: \"kubernetes.io/projected/5d3ef396-7b26-4828-98c2-3d3acd135ed6-kube-api-access-wx5gd\") pod \"glance-db-create-2vrz2\" (UID: \"5d3ef396-7b26-4828-98c2-3d3acd135ed6\") " pod="openstack/glance-db-create-2vrz2" Mar 13 13:09:52.120258 master-0 kubenswrapper[28149]: I0313 13:09:52.119442 28149 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/afde1d0c-cef9-4fb3-94d0-f88cab0b4e01-config" (OuterVolumeSpecName: "config") pod "afde1d0c-cef9-4fb3-94d0-f88cab0b4e01" (UID: "afde1d0c-cef9-4fb3-94d0-f88cab0b4e01"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 13 13:09:52.130223 master-0 kubenswrapper[28149]: I0313 13:09:52.128042 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-k5hkp" event={"ID":"518ac108-6964-45cd-af8a-d2e8d98cdb39","Type":"ContainerStarted","Data":"3d0a4a0185e798f6fc54f3fbdcda4ec68ec99dfff280faac11b95eba9ef1cfed"} Mar 13 13:09:52.132526 master-0 kubenswrapper[28149]: I0313 13:09:52.132467 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-7c6e-account-create-update-4tgk8" event={"ID":"805cfa21-0ee8-4da5-9a9a-1cf852f868c7","Type":"ContainerStarted","Data":"407957e257e8da1a98a5c817d538a13906dc183b411240d2308cc40c356bf613"} Mar 13 13:09:52.135867 master-0 kubenswrapper[28149]: I0313 13:09:52.135492 28149 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6ff8fd9d5c-gztb5" Mar 13 13:09:52.135867 master-0 kubenswrapper[28149]: I0313 13:09:52.135524 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6ff8fd9d5c-gztb5" event={"ID":"afde1d0c-cef9-4fb3-94d0-f88cab0b4e01","Type":"ContainerDied","Data":"997cea4faaba4e11c64cbc647f79005f14481b206f8a7f202867c6bf5a6d3d8d"} Mar 13 13:09:52.135867 master-0 kubenswrapper[28149]: I0313 13:09:52.135574 28149 scope.go:117] "RemoveContainer" containerID="d131bfef4eb105aff051c55f10bdb6b294ee6cf9c636d703df83cfe2951d467e" Mar 13 13:09:52.158820 master-0 kubenswrapper[28149]: I0313 13:09:52.158755 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-bv6nc" event={"ID":"95064747-74f7-4ab6-95a6-677c5e5d8be2","Type":"ContainerStarted","Data":"f8607e4541f79ae7c985f58812107bd8dcc45d675115fedadfc353466fdcdfda"} Mar 13 13:09:52.162271 master-0 kubenswrapper[28149]: I0313 13:09:52.162201 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j6cmv\" (UniqueName: 
\"kubernetes.io/projected/5e39d6bc-2e33-484e-ac03-f7b1bb0352c8-kube-api-access-j6cmv\") pod \"glance-4f01-account-create-update-wcwmp\" (UID: \"5e39d6bc-2e33-484e-ac03-f7b1bb0352c8\") " pod="openstack/glance-4f01-account-create-update-wcwmp" Mar 13 13:09:52.162474 master-0 kubenswrapper[28149]: I0313 13:09:52.162331 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5e39d6bc-2e33-484e-ac03-f7b1bb0352c8-operator-scripts\") pod \"glance-4f01-account-create-update-wcwmp\" (UID: \"5e39d6bc-2e33-484e-ac03-f7b1bb0352c8\") " pod="openstack/glance-4f01-account-create-update-wcwmp" Mar 13 13:09:52.162983 master-0 kubenswrapper[28149]: I0313 13:09:52.162802 28149 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/afde1d0c-cef9-4fb3-94d0-f88cab0b4e01-config\") on node \"master-0\" DevicePath \"\"" Mar 13 13:09:52.162983 master-0 kubenswrapper[28149]: I0313 13:09:52.162821 28149 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sbdgf\" (UniqueName: \"kubernetes.io/projected/afde1d0c-cef9-4fb3-94d0-f88cab0b4e01-kube-api-access-sbdgf\") on node \"master-0\" DevicePath \"\"" Mar 13 13:09:52.163801 master-0 kubenswrapper[28149]: I0313 13:09:52.163753 28149 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/afde1d0c-cef9-4fb3-94d0-f88cab0b4e01-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "afde1d0c-cef9-4fb3-94d0-f88cab0b4e01" (UID: "afde1d0c-cef9-4fb3-94d0-f88cab0b4e01"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 13 13:09:52.197209 master-0 kubenswrapper[28149]: I0313 13:09:52.197172 28149 scope.go:117] "RemoveContainer" containerID="2e93b7c7b28ba7b25377e317ea55e9d6c7f25575950c735bdfeb90a9b6628d32" Mar 13 13:09:52.240752 master-0 kubenswrapper[28149]: I0313 13:09:52.240703 28149 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-2vrz2" Mar 13 13:09:52.265645 master-0 kubenswrapper[28149]: I0313 13:09:52.264199 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j6cmv\" (UniqueName: \"kubernetes.io/projected/5e39d6bc-2e33-484e-ac03-f7b1bb0352c8-kube-api-access-j6cmv\") pod \"glance-4f01-account-create-update-wcwmp\" (UID: \"5e39d6bc-2e33-484e-ac03-f7b1bb0352c8\") " pod="openstack/glance-4f01-account-create-update-wcwmp" Mar 13 13:09:52.265645 master-0 kubenswrapper[28149]: I0313 13:09:52.264281 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5e39d6bc-2e33-484e-ac03-f7b1bb0352c8-operator-scripts\") pod \"glance-4f01-account-create-update-wcwmp\" (UID: \"5e39d6bc-2e33-484e-ac03-f7b1bb0352c8\") " pod="openstack/glance-4f01-account-create-update-wcwmp" Mar 13 13:09:52.265645 master-0 kubenswrapper[28149]: I0313 13:09:52.264412 28149 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/afde1d0c-cef9-4fb3-94d0-f88cab0b4e01-dns-svc\") on node \"master-0\" DevicePath \"\"" Mar 13 13:09:52.265645 master-0 kubenswrapper[28149]: I0313 13:09:52.265095 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5e39d6bc-2e33-484e-ac03-f7b1bb0352c8-operator-scripts\") pod \"glance-4f01-account-create-update-wcwmp\" (UID: \"5e39d6bc-2e33-484e-ac03-f7b1bb0352c8\") " 
pod="openstack/glance-4f01-account-create-update-wcwmp" Mar 13 13:09:52.555166 master-0 kubenswrapper[28149]: I0313 13:09:52.554455 28149 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-4f01-account-create-update-wcwmp"] Mar 13 13:09:53.141566 master-0 kubenswrapper[28149]: I0313 13:09:53.141505 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j6cmv\" (UniqueName: \"kubernetes.io/projected/5e39d6bc-2e33-484e-ac03-f7b1bb0352c8-kube-api-access-j6cmv\") pod \"glance-4f01-account-create-update-wcwmp\" (UID: \"5e39d6bc-2e33-484e-ac03-f7b1bb0352c8\") " pod="openstack/glance-4f01-account-create-update-wcwmp" Mar 13 13:09:53.168342 master-0 kubenswrapper[28149]: I0313 13:09:53.168245 28149 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-db-create-45kqr" podStartSLOduration=8.168222611 podStartE2EDuration="8.168222611s" podCreationTimestamp="2026-03-13 13:09:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 13:09:53.143010814 +0000 UTC m=+966.796475973" watchObservedRunningTime="2026-03-13 13:09:53.168222611 +0000 UTC m=+966.821687760" Mar 13 13:09:53.209360 master-0 kubenswrapper[28149]: I0313 13:09:53.206794 28149 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-7c6e-account-create-update-4tgk8" podStartSLOduration=8.206769045 podStartE2EDuration="8.206769045s" podCreationTimestamp="2026-03-13 13:09:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 13:09:53.184755865 +0000 UTC m=+966.838221024" watchObservedRunningTime="2026-03-13 13:09:53.206769045 +0000 UTC m=+966.860234204" Mar 13 13:09:53.236226 master-0 kubenswrapper[28149]: I0313 13:09:53.236097 28149 generic.go:334] "Generic (PLEG): container finished" 
podID="518ac108-6964-45cd-af8a-d2e8d98cdb39" containerID="3d0a4a0185e798f6fc54f3fbdcda4ec68ec99dfff280faac11b95eba9ef1cfed" exitCode=0 Mar 13 13:09:53.236426 master-0 kubenswrapper[28149]: I0313 13:09:53.236240 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-k5hkp" event={"ID":"518ac108-6964-45cd-af8a-d2e8d98cdb39","Type":"ContainerDied","Data":"3d0a4a0185e798f6fc54f3fbdcda4ec68ec99dfff280faac11b95eba9ef1cfed"} Mar 13 13:09:53.238957 master-0 kubenswrapper[28149]: I0313 13:09:53.238894 28149 generic.go:334] "Generic (PLEG): container finished" podID="08c292f4-ce11-41f5-b9ff-7a40e65cf085" containerID="81cac9f39ac0d0fef197556c6635b0b5934ce49c2079ffaab3bcef42ff8b37df" exitCode=0 Mar 13 13:09:53.239108 master-0 kubenswrapper[28149]: I0313 13:09:53.238980 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-45kqr" event={"ID":"08c292f4-ce11-41f5-b9ff-7a40e65cf085","Type":"ContainerDied","Data":"81cac9f39ac0d0fef197556c6635b0b5934ce49c2079ffaab3bcef42ff8b37df"} Mar 13 13:09:53.239108 master-0 kubenswrapper[28149]: I0313 13:09:53.239032 28149 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-2vrz2"] Mar 13 13:09:53.241171 master-0 kubenswrapper[28149]: I0313 13:09:53.241079 28149 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-ring-rebalance-bv6nc" podStartSLOduration=4.07591415 podStartE2EDuration="11.241058877s" podCreationTimestamp="2026-03-13 13:09:42 +0000 UTC" firstStartedPulling="2026-03-13 13:09:43.461360471 +0000 UTC m=+957.114825640" lastFinishedPulling="2026-03-13 13:09:50.626505198 +0000 UTC m=+964.279970367" observedRunningTime="2026-03-13 13:09:53.210850385 +0000 UTC m=+966.864315564" watchObservedRunningTime="2026-03-13 13:09:53.241058877 +0000 UTC m=+966.894524036" Mar 13 13:09:53.255950 master-0 kubenswrapper[28149]: I0313 13:09:53.255690 28149 generic.go:334] "Generic (PLEG): container finished" 
podID="11ae55ca-4c2e-47f9-9c32-96a9ee19a4e4" containerID="520d807536cb81674b261119b513a944bc9aecd64b99e8d30ab702a455e478c7" exitCode=0 Mar 13 13:09:53.255950 master-0 kubenswrapper[28149]: I0313 13:09:53.255792 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-f29e-account-create-update-z7cvh" event={"ID":"11ae55ca-4c2e-47f9-9c32-96a9ee19a4e4","Type":"ContainerDied","Data":"520d807536cb81674b261119b513a944bc9aecd64b99e8d30ab702a455e478c7"} Mar 13 13:09:53.258926 master-0 kubenswrapper[28149]: I0313 13:09:53.258844 28149 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-f29e-account-create-update-z7cvh" podStartSLOduration=7.258820994 podStartE2EDuration="7.258820994s" podCreationTimestamp="2026-03-13 13:09:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 13:09:53.230324238 +0000 UTC m=+966.883789397" watchObservedRunningTime="2026-03-13 13:09:53.258820994 +0000 UTC m=+966.912286153" Mar 13 13:09:53.285265 master-0 kubenswrapper[28149]: I0313 13:09:53.284755 28149 generic.go:334] "Generic (PLEG): container finished" podID="805cfa21-0ee8-4da5-9a9a-1cf852f868c7" containerID="407957e257e8da1a98a5c817d538a13906dc183b411240d2308cc40c356bf613" exitCode=0 Mar 13 13:09:53.286344 master-0 kubenswrapper[28149]: I0313 13:09:53.286315 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-7c6e-account-create-update-4tgk8" event={"ID":"805cfa21-0ee8-4da5-9a9a-1cf852f868c7","Type":"ContainerDied","Data":"407957e257e8da1a98a5c817d538a13906dc183b411240d2308cc40c356bf613"} Mar 13 13:09:53.297868 master-0 kubenswrapper[28149]: I0313 13:09:53.297351 28149 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-db-create-k5hkp" podStartSLOduration=8.297327687 podStartE2EDuration="8.297327687s" podCreationTimestamp="2026-03-13 13:09:45 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 13:09:53.290413912 +0000 UTC m=+966.943879091" watchObservedRunningTime="2026-03-13 13:09:53.297327687 +0000 UTC m=+966.950792846" Mar 13 13:09:53.387654 master-0 kubenswrapper[28149]: I0313 13:09:53.384687 28149 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-4f01-account-create-update-wcwmp" Mar 13 13:09:53.419280 master-0 kubenswrapper[28149]: I0313 13:09:53.412582 28149 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6ff8fd9d5c-gztb5"] Mar 13 13:09:53.446227 master-0 kubenswrapper[28149]: I0313 13:09:53.442252 28149 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6ff8fd9d5c-gztb5"] Mar 13 13:09:53.683732 master-0 kubenswrapper[28149]: I0313 13:09:53.681294 28149 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-mzb78"] Mar 13 13:09:53.700861 master-0 kubenswrapper[28149]: I0313 13:09:53.700805 28149 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-mzb78"] Mar 13 13:09:54.241633 master-0 kubenswrapper[28149]: W0313 13:09:54.241549 28149 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5e39d6bc_2e33_484e_ac03_f7b1bb0352c8.slice/crio-0f7b6a8613f684707467e6956671d25813ef317d800ae72eeb2928aee53505ad WatchSource:0}: Error finding container 0f7b6a8613f684707467e6956671d25813ef317d800ae72eeb2928aee53505ad: Status 404 returned error can't find the container with id 0f7b6a8613f684707467e6956671d25813ef317d800ae72eeb2928aee53505ad Mar 13 13:09:54.242109 master-0 kubenswrapper[28149]: I0313 13:09:54.242060 28149 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-4f01-account-create-update-wcwmp"] Mar 13 13:09:54.311948 master-0 kubenswrapper[28149]: I0313 
13:09:54.311878 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-4f01-account-create-update-wcwmp" event={"ID":"5e39d6bc-2e33-484e-ac03-f7b1bb0352c8","Type":"ContainerStarted","Data":"0f7b6a8613f684707467e6956671d25813ef317d800ae72eeb2928aee53505ad"} Mar 13 13:09:54.316266 master-0 kubenswrapper[28149]: I0313 13:09:54.315948 28149 generic.go:334] "Generic (PLEG): container finished" podID="5d3ef396-7b26-4828-98c2-3d3acd135ed6" containerID="8cea29442a555e878214d7982407688fc382effeb13a0eec24c534dfebd82ca1" exitCode=0 Mar 13 13:09:54.316266 master-0 kubenswrapper[28149]: I0313 13:09:54.316054 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-2vrz2" event={"ID":"5d3ef396-7b26-4828-98c2-3d3acd135ed6","Type":"ContainerDied","Data":"8cea29442a555e878214d7982407688fc382effeb13a0eec24c534dfebd82ca1"} Mar 13 13:09:54.316266 master-0 kubenswrapper[28149]: I0313 13:09:54.316090 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-2vrz2" event={"ID":"5d3ef396-7b26-4828-98c2-3d3acd135ed6","Type":"ContainerStarted","Data":"59e14985c889ed61a035bb0b0fcb71af956dd1863bd210359b36fa2caf9b0a81"} Mar 13 13:09:54.707508 master-0 kubenswrapper[28149]: I0313 13:09:54.707421 28149 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="afde1d0c-cef9-4fb3-94d0-f88cab0b4e01" path="/var/lib/kubelet/pods/afde1d0c-cef9-4fb3-94d0-f88cab0b4e01/volumes" Mar 13 13:09:54.708708 master-0 kubenswrapper[28149]: I0313 13:09:54.708684 28149 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d2f780e9-d28c-478d-9a91-6c87f1ba7d2c" path="/var/lib/kubelet/pods/d2f780e9-d28c-478d-9a91-6c87f1ba7d2c/volumes" Mar 13 13:09:55.107977 master-0 kubenswrapper[28149]: I0313 13:09:55.107914 28149 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-f29e-account-create-update-z7cvh" Mar 13 13:09:55.275106 master-0 kubenswrapper[28149]: I0313 13:09:55.275061 28149 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/11ae55ca-4c2e-47f9-9c32-96a9ee19a4e4-operator-scripts\") pod \"11ae55ca-4c2e-47f9-9c32-96a9ee19a4e4\" (UID: \"11ae55ca-4c2e-47f9-9c32-96a9ee19a4e4\") " Mar 13 13:09:55.275615 master-0 kubenswrapper[28149]: I0313 13:09:55.275287 28149 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5wl7d\" (UniqueName: \"kubernetes.io/projected/11ae55ca-4c2e-47f9-9c32-96a9ee19a4e4-kube-api-access-5wl7d\") pod \"11ae55ca-4c2e-47f9-9c32-96a9ee19a4e4\" (UID: \"11ae55ca-4c2e-47f9-9c32-96a9ee19a4e4\") " Mar 13 13:09:55.276266 master-0 kubenswrapper[28149]: I0313 13:09:55.276228 28149 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/11ae55ca-4c2e-47f9-9c32-96a9ee19a4e4-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "11ae55ca-4c2e-47f9-9c32-96a9ee19a4e4" (UID: "11ae55ca-4c2e-47f9-9c32-96a9ee19a4e4"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 13 13:09:55.277079 master-0 kubenswrapper[28149]: I0313 13:09:55.277040 28149 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/11ae55ca-4c2e-47f9-9c32-96a9ee19a4e4-operator-scripts\") on node \"master-0\" DevicePath \"\"" Mar 13 13:09:55.280603 master-0 kubenswrapper[28149]: I0313 13:09:55.280537 28149 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/11ae55ca-4c2e-47f9-9c32-96a9ee19a4e4-kube-api-access-5wl7d" (OuterVolumeSpecName: "kube-api-access-5wl7d") pod "11ae55ca-4c2e-47f9-9c32-96a9ee19a4e4" (UID: "11ae55ca-4c2e-47f9-9c32-96a9ee19a4e4"). 
InnerVolumeSpecName "kube-api-access-5wl7d". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 13:09:55.336517 master-0 kubenswrapper[28149]: I0313 13:09:55.333416 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-45kqr" event={"ID":"08c292f4-ce11-41f5-b9ff-7a40e65cf085","Type":"ContainerDied","Data":"0d0823b9d136a088f6dc651e36e902c8da4d4dc8cdda92b8fe16410a29600465"} Mar 13 13:09:55.336517 master-0 kubenswrapper[28149]: I0313 13:09:55.333468 28149 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0d0823b9d136a088f6dc651e36e902c8da4d4dc8cdda92b8fe16410a29600465" Mar 13 13:09:55.336517 master-0 kubenswrapper[28149]: I0313 13:09:55.335688 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-f29e-account-create-update-z7cvh" event={"ID":"11ae55ca-4c2e-47f9-9c32-96a9ee19a4e4","Type":"ContainerDied","Data":"551ac9be95bd6693b1e272cdecca333fc7ef376b08400074b601bfca60fdf7a5"} Mar 13 13:09:55.336517 master-0 kubenswrapper[28149]: I0313 13:09:55.335726 28149 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="551ac9be95bd6693b1e272cdecca333fc7ef376b08400074b601bfca60fdf7a5" Mar 13 13:09:55.336517 master-0 kubenswrapper[28149]: I0313 13:09:55.335786 28149 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-f29e-account-create-update-z7cvh" Mar 13 13:09:55.340053 master-0 kubenswrapper[28149]: I0313 13:09:55.340012 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-7c6e-account-create-update-4tgk8" event={"ID":"805cfa21-0ee8-4da5-9a9a-1cf852f868c7","Type":"ContainerDied","Data":"169ab003b85a874ae1d5dcfc52745bb1228eda2b01fffa34218ec865502269a7"} Mar 13 13:09:55.340149 master-0 kubenswrapper[28149]: I0313 13:09:55.340060 28149 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="169ab003b85a874ae1d5dcfc52745bb1228eda2b01fffa34218ec865502269a7" Mar 13 13:09:55.342199 master-0 kubenswrapper[28149]: I0313 13:09:55.342159 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-k5hkp" event={"ID":"518ac108-6964-45cd-af8a-d2e8d98cdb39","Type":"ContainerDied","Data":"bf8be3f7a4e573b7e12c18aecbc5f59db3eab81416e49452e50e4a01be246040"} Mar 13 13:09:55.342268 master-0 kubenswrapper[28149]: I0313 13:09:55.342203 28149 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bf8be3f7a4e573b7e12c18aecbc5f59db3eab81416e49452e50e4a01be246040" Mar 13 13:09:55.344349 master-0 kubenswrapper[28149]: I0313 13:09:55.344306 28149 generic.go:334] "Generic (PLEG): container finished" podID="5e39d6bc-2e33-484e-ac03-f7b1bb0352c8" containerID="de05d0db1863604656929a4f8ed7334e5b3f3ade2693a775963b621247b0a13a" exitCode=0 Mar 13 13:09:55.344669 master-0 kubenswrapper[28149]: I0313 13:09:55.344639 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-4f01-account-create-update-wcwmp" event={"ID":"5e39d6bc-2e33-484e-ac03-f7b1bb0352c8","Type":"ContainerDied","Data":"de05d0db1863604656929a4f8ed7334e5b3f3ade2693a775963b621247b0a13a"} Mar 13 13:09:55.381609 master-0 kubenswrapper[28149]: I0313 13:09:55.379656 28149 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5wl7d\" (UniqueName: 
\"kubernetes.io/projected/11ae55ca-4c2e-47f9-9c32-96a9ee19a4e4-kube-api-access-5wl7d\") on node \"master-0\" DevicePath \"\"" Mar 13 13:09:55.408595 master-0 kubenswrapper[28149]: I0313 13:09:55.408040 28149 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-7c6e-account-create-update-4tgk8" Mar 13 13:09:55.417323 master-0 kubenswrapper[28149]: I0313 13:09:55.417213 28149 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-k5hkp" Mar 13 13:09:55.435096 master-0 kubenswrapper[28149]: I0313 13:09:55.435026 28149 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-45kqr" Mar 13 13:09:55.481041 master-0 kubenswrapper[28149]: I0313 13:09:55.480969 28149 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sbb5b\" (UniqueName: \"kubernetes.io/projected/805cfa21-0ee8-4da5-9a9a-1cf852f868c7-kube-api-access-sbb5b\") pod \"805cfa21-0ee8-4da5-9a9a-1cf852f868c7\" (UID: \"805cfa21-0ee8-4da5-9a9a-1cf852f868c7\") " Mar 13 13:09:55.481329 master-0 kubenswrapper[28149]: I0313 13:09:55.481298 28149 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/518ac108-6964-45cd-af8a-d2e8d98cdb39-operator-scripts\") pod \"518ac108-6964-45cd-af8a-d2e8d98cdb39\" (UID: \"518ac108-6964-45cd-af8a-d2e8d98cdb39\") " Mar 13 13:09:55.481422 master-0 kubenswrapper[28149]: I0313 13:09:55.481383 28149 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c4zzr\" (UniqueName: \"kubernetes.io/projected/518ac108-6964-45cd-af8a-d2e8d98cdb39-kube-api-access-c4zzr\") pod \"518ac108-6964-45cd-af8a-d2e8d98cdb39\" (UID: \"518ac108-6964-45cd-af8a-d2e8d98cdb39\") " Mar 13 13:09:55.481522 master-0 kubenswrapper[28149]: I0313 13:09:55.481493 28149 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/805cfa21-0ee8-4da5-9a9a-1cf852f868c7-operator-scripts\") pod \"805cfa21-0ee8-4da5-9a9a-1cf852f868c7\" (UID: \"805cfa21-0ee8-4da5-9a9a-1cf852f868c7\") " Mar 13 13:09:55.492308 master-0 kubenswrapper[28149]: I0313 13:09:55.492102 28149 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/518ac108-6964-45cd-af8a-d2e8d98cdb39-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "518ac108-6964-45cd-af8a-d2e8d98cdb39" (UID: "518ac108-6964-45cd-af8a-d2e8d98cdb39"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 13 13:09:55.492574 master-0 kubenswrapper[28149]: I0313 13:09:55.492429 28149 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/805cfa21-0ee8-4da5-9a9a-1cf852f868c7-kube-api-access-sbb5b" (OuterVolumeSpecName: "kube-api-access-sbb5b") pod "805cfa21-0ee8-4da5-9a9a-1cf852f868c7" (UID: "805cfa21-0ee8-4da5-9a9a-1cf852f868c7"). InnerVolumeSpecName "kube-api-access-sbb5b". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 13:09:55.502300 master-0 kubenswrapper[28149]: I0313 13:09:55.501795 28149 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/805cfa21-0ee8-4da5-9a9a-1cf852f868c7-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "805cfa21-0ee8-4da5-9a9a-1cf852f868c7" (UID: "805cfa21-0ee8-4da5-9a9a-1cf852f868c7"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 13 13:09:55.503981 master-0 kubenswrapper[28149]: I0313 13:09:55.503563 28149 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/518ac108-6964-45cd-af8a-d2e8d98cdb39-kube-api-access-c4zzr" (OuterVolumeSpecName: "kube-api-access-c4zzr") pod "518ac108-6964-45cd-af8a-d2e8d98cdb39" (UID: "518ac108-6964-45cd-af8a-d2e8d98cdb39"). InnerVolumeSpecName "kube-api-access-c4zzr". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 13:09:55.667500 master-0 kubenswrapper[28149]: I0313 13:09:55.667100 28149 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5s56x\" (UniqueName: \"kubernetes.io/projected/08c292f4-ce11-41f5-b9ff-7a40e65cf085-kube-api-access-5s56x\") pod \"08c292f4-ce11-41f5-b9ff-7a40e65cf085\" (UID: \"08c292f4-ce11-41f5-b9ff-7a40e65cf085\") " Mar 13 13:09:55.667500 master-0 kubenswrapper[28149]: I0313 13:09:55.667173 28149 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/08c292f4-ce11-41f5-b9ff-7a40e65cf085-operator-scripts\") pod \"08c292f4-ce11-41f5-b9ff-7a40e65cf085\" (UID: \"08c292f4-ce11-41f5-b9ff-7a40e65cf085\") " Mar 13 13:09:55.668064 master-0 kubenswrapper[28149]: I0313 13:09:55.667890 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/0e1ffcf0-0cdc-4a69-884c-47edbe0caf50-etc-swift\") pod \"swift-storage-0\" (UID: \"0e1ffcf0-0cdc-4a69-884c-47edbe0caf50\") " pod="openstack/swift-storage-0" Mar 13 13:09:55.668263 master-0 kubenswrapper[28149]: I0313 13:09:55.668226 28149 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/805cfa21-0ee8-4da5-9a9a-1cf852f868c7-operator-scripts\") on node \"master-0\" DevicePath \"\"" Mar 13 13:09:55.668263 master-0 kubenswrapper[28149]: 
I0313 13:09:55.668245 28149 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sbb5b\" (UniqueName: \"kubernetes.io/projected/805cfa21-0ee8-4da5-9a9a-1cf852f868c7-kube-api-access-sbb5b\") on node \"master-0\" DevicePath \"\"" Mar 13 13:09:55.668263 master-0 kubenswrapper[28149]: I0313 13:09:55.668257 28149 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/518ac108-6964-45cd-af8a-d2e8d98cdb39-operator-scripts\") on node \"master-0\" DevicePath \"\"" Mar 13 13:09:55.668263 master-0 kubenswrapper[28149]: I0313 13:09:55.668266 28149 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c4zzr\" (UniqueName: \"kubernetes.io/projected/518ac108-6964-45cd-af8a-d2e8d98cdb39-kube-api-access-c4zzr\") on node \"master-0\" DevicePath \"\"" Mar 13 13:09:55.668425 master-0 kubenswrapper[28149]: E0313 13:09:55.668391 28149 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Mar 13 13:09:55.668425 master-0 kubenswrapper[28149]: E0313 13:09:55.668405 28149 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Mar 13 13:09:55.668491 master-0 kubenswrapper[28149]: E0313 13:09:55.668448 28149 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/0e1ffcf0-0cdc-4a69-884c-47edbe0caf50-etc-swift podName:0e1ffcf0-0cdc-4a69-884c-47edbe0caf50 nodeName:}" failed. No retries permitted until 2026-03-13 13:10:11.668435139 +0000 UTC m=+985.321900298 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/0e1ffcf0-0cdc-4a69-884c-47edbe0caf50-etc-swift") pod "swift-storage-0" (UID: "0e1ffcf0-0cdc-4a69-884c-47edbe0caf50") : configmap "swift-ring-files" not found Mar 13 13:09:55.670057 master-0 kubenswrapper[28149]: I0313 13:09:55.670007 28149 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/08c292f4-ce11-41f5-b9ff-7a40e65cf085-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "08c292f4-ce11-41f5-b9ff-7a40e65cf085" (UID: "08c292f4-ce11-41f5-b9ff-7a40e65cf085"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 13 13:09:55.672681 master-0 kubenswrapper[28149]: I0313 13:09:55.672571 28149 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/08c292f4-ce11-41f5-b9ff-7a40e65cf085-kube-api-access-5s56x" (OuterVolumeSpecName: "kube-api-access-5s56x") pod "08c292f4-ce11-41f5-b9ff-7a40e65cf085" (UID: "08c292f4-ce11-41f5-b9ff-7a40e65cf085"). InnerVolumeSpecName "kube-api-access-5s56x". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 13:09:55.771318 master-0 kubenswrapper[28149]: I0313 13:09:55.771257 28149 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5s56x\" (UniqueName: \"kubernetes.io/projected/08c292f4-ce11-41f5-b9ff-7a40e65cf085-kube-api-access-5s56x\") on node \"master-0\" DevicePath \"\"" Mar 13 13:09:55.771573 master-0 kubenswrapper[28149]: I0313 13:09:55.771337 28149 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/08c292f4-ce11-41f5-b9ff-7a40e65cf085-operator-scripts\") on node \"master-0\" DevicePath \"\"" Mar 13 13:09:55.997132 master-0 kubenswrapper[28149]: I0313 13:09:55.997093 28149 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-create-2vrz2" Mar 13 13:09:56.077533 master-0 kubenswrapper[28149]: I0313 13:09:56.077420 28149 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wx5gd\" (UniqueName: \"kubernetes.io/projected/5d3ef396-7b26-4828-98c2-3d3acd135ed6-kube-api-access-wx5gd\") pod \"5d3ef396-7b26-4828-98c2-3d3acd135ed6\" (UID: \"5d3ef396-7b26-4828-98c2-3d3acd135ed6\") " Mar 13 13:09:56.078263 master-0 kubenswrapper[28149]: I0313 13:09:56.077573 28149 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5d3ef396-7b26-4828-98c2-3d3acd135ed6-operator-scripts\") pod \"5d3ef396-7b26-4828-98c2-3d3acd135ed6\" (UID: \"5d3ef396-7b26-4828-98c2-3d3acd135ed6\") " Mar 13 13:09:56.078338 master-0 kubenswrapper[28149]: I0313 13:09:56.078229 28149 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5d3ef396-7b26-4828-98c2-3d3acd135ed6-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "5d3ef396-7b26-4828-98c2-3d3acd135ed6" (UID: "5d3ef396-7b26-4828-98c2-3d3acd135ed6"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 13 13:09:56.079116 master-0 kubenswrapper[28149]: I0313 13:09:56.079083 28149 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5d3ef396-7b26-4828-98c2-3d3acd135ed6-operator-scripts\") on node \"master-0\" DevicePath \"\"" Mar 13 13:09:56.080996 master-0 kubenswrapper[28149]: I0313 13:09:56.080933 28149 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5d3ef396-7b26-4828-98c2-3d3acd135ed6-kube-api-access-wx5gd" (OuterVolumeSpecName: "kube-api-access-wx5gd") pod "5d3ef396-7b26-4828-98c2-3d3acd135ed6" (UID: "5d3ef396-7b26-4828-98c2-3d3acd135ed6"). InnerVolumeSpecName "kube-api-access-wx5gd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 13:09:56.184861 master-0 kubenswrapper[28149]: I0313 13:09:56.184595 28149 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wx5gd\" (UniqueName: \"kubernetes.io/projected/5d3ef396-7b26-4828-98c2-3d3acd135ed6-kube-api-access-wx5gd\") on node \"master-0\" DevicePath \"\"" Mar 13 13:09:56.357168 master-0 kubenswrapper[28149]: I0313 13:09:56.357091 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-2vrz2" event={"ID":"5d3ef396-7b26-4828-98c2-3d3acd135ed6","Type":"ContainerDied","Data":"59e14985c889ed61a035bb0b0fcb71af956dd1863bd210359b36fa2caf9b0a81"} Mar 13 13:09:56.357682 master-0 kubenswrapper[28149]: I0313 13:09:56.357169 28149 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="59e14985c889ed61a035bb0b0fcb71af956dd1863bd210359b36fa2caf9b0a81" Mar 13 13:09:56.357682 master-0 kubenswrapper[28149]: I0313 13:09:56.357203 28149 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-2vrz2" Mar 13 13:09:56.357682 master-0 kubenswrapper[28149]: I0313 13:09:56.357191 28149 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-k5hkp" Mar 13 13:09:56.357682 master-0 kubenswrapper[28149]: I0313 13:09:56.357252 28149 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-45kqr" Mar 13 13:09:56.357682 master-0 kubenswrapper[28149]: I0313 13:09:56.357368 28149 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-7c6e-account-create-update-4tgk8" Mar 13 13:09:57.030020 master-0 kubenswrapper[28149]: I0313 13:09:57.029973 28149 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-4f01-account-create-update-wcwmp" Mar 13 13:09:57.089632 master-0 kubenswrapper[28149]: I0313 13:09:57.089299 28149 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5e39d6bc-2e33-484e-ac03-f7b1bb0352c8-operator-scripts\") pod \"5e39d6bc-2e33-484e-ac03-f7b1bb0352c8\" (UID: \"5e39d6bc-2e33-484e-ac03-f7b1bb0352c8\") " Mar 13 13:09:57.089931 master-0 kubenswrapper[28149]: I0313 13:09:57.089902 28149 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5e39d6bc-2e33-484e-ac03-f7b1bb0352c8-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "5e39d6bc-2e33-484e-ac03-f7b1bb0352c8" (UID: "5e39d6bc-2e33-484e-ac03-f7b1bb0352c8"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 13 13:09:57.090203 master-0 kubenswrapper[28149]: I0313 13:09:57.090157 28149 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j6cmv\" (UniqueName: \"kubernetes.io/projected/5e39d6bc-2e33-484e-ac03-f7b1bb0352c8-kube-api-access-j6cmv\") pod \"5e39d6bc-2e33-484e-ac03-f7b1bb0352c8\" (UID: \"5e39d6bc-2e33-484e-ac03-f7b1bb0352c8\") " Mar 13 13:09:57.092357 master-0 kubenswrapper[28149]: I0313 13:09:57.092322 28149 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5e39d6bc-2e33-484e-ac03-f7b1bb0352c8-operator-scripts\") on node \"master-0\" DevicePath \"\"" Mar 13 13:09:57.094769 master-0 kubenswrapper[28149]: I0313 13:09:57.094719 28149 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5e39d6bc-2e33-484e-ac03-f7b1bb0352c8-kube-api-access-j6cmv" (OuterVolumeSpecName: "kube-api-access-j6cmv") pod "5e39d6bc-2e33-484e-ac03-f7b1bb0352c8" (UID: "5e39d6bc-2e33-484e-ac03-f7b1bb0352c8"). 
InnerVolumeSpecName "kube-api-access-j6cmv". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 13:09:57.194522 master-0 kubenswrapper[28149]: I0313 13:09:57.194406 28149 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j6cmv\" (UniqueName: \"kubernetes.io/projected/5e39d6bc-2e33-484e-ac03-f7b1bb0352c8-kube-api-access-j6cmv\") on node \"master-0\" DevicePath \"\"" Mar 13 13:09:57.371701 master-0 kubenswrapper[28149]: I0313 13:09:57.371627 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-4f01-account-create-update-wcwmp" event={"ID":"5e39d6bc-2e33-484e-ac03-f7b1bb0352c8","Type":"ContainerDied","Data":"0f7b6a8613f684707467e6956671d25813ef317d800ae72eeb2928aee53505ad"} Mar 13 13:09:57.371701 master-0 kubenswrapper[28149]: I0313 13:09:57.371678 28149 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0f7b6a8613f684707467e6956671d25813ef317d800ae72eeb2928aee53505ad" Mar 13 13:09:57.371701 master-0 kubenswrapper[28149]: I0313 13:09:57.371682 28149 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-4f01-account-create-update-wcwmp" Mar 13 13:09:58.498589 master-0 kubenswrapper[28149]: I0313 13:09:58.498530 28149 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-rkdmk"] Mar 13 13:09:58.499184 master-0 kubenswrapper[28149]: E0313 13:09:58.498994 28149 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="08c292f4-ce11-41f5-b9ff-7a40e65cf085" containerName="mariadb-database-create" Mar 13 13:09:58.499184 master-0 kubenswrapper[28149]: I0313 13:09:58.499008 28149 state_mem.go:107] "Deleted CPUSet assignment" podUID="08c292f4-ce11-41f5-b9ff-7a40e65cf085" containerName="mariadb-database-create" Mar 13 13:09:58.499184 master-0 kubenswrapper[28149]: E0313 13:09:58.499022 28149 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="518ac108-6964-45cd-af8a-d2e8d98cdb39" containerName="mariadb-database-create" Mar 13 13:09:58.499184 master-0 kubenswrapper[28149]: I0313 13:09:58.499028 28149 state_mem.go:107] "Deleted CPUSet assignment" podUID="518ac108-6964-45cd-af8a-d2e8d98cdb39" containerName="mariadb-database-create" Mar 13 13:09:58.499184 master-0 kubenswrapper[28149]: E0313 13:09:58.499058 28149 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5d3ef396-7b26-4828-98c2-3d3acd135ed6" containerName="mariadb-database-create" Mar 13 13:09:58.499184 master-0 kubenswrapper[28149]: I0313 13:09:58.499066 28149 state_mem.go:107] "Deleted CPUSet assignment" podUID="5d3ef396-7b26-4828-98c2-3d3acd135ed6" containerName="mariadb-database-create" Mar 13 13:09:58.499184 master-0 kubenswrapper[28149]: E0313 13:09:58.499087 28149 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5e39d6bc-2e33-484e-ac03-f7b1bb0352c8" containerName="mariadb-account-create-update" Mar 13 13:09:58.499184 master-0 kubenswrapper[28149]: I0313 13:09:58.499093 28149 state_mem.go:107] "Deleted CPUSet assignment" podUID="5e39d6bc-2e33-484e-ac03-f7b1bb0352c8" 
containerName="mariadb-account-create-update" Mar 13 13:09:58.499184 master-0 kubenswrapper[28149]: E0313 13:09:58.499105 28149 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="805cfa21-0ee8-4da5-9a9a-1cf852f868c7" containerName="mariadb-account-create-update" Mar 13 13:09:58.499184 master-0 kubenswrapper[28149]: I0313 13:09:58.499111 28149 state_mem.go:107] "Deleted CPUSet assignment" podUID="805cfa21-0ee8-4da5-9a9a-1cf852f868c7" containerName="mariadb-account-create-update" Mar 13 13:09:58.499184 master-0 kubenswrapper[28149]: E0313 13:09:58.499126 28149 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="11ae55ca-4c2e-47f9-9c32-96a9ee19a4e4" containerName="mariadb-account-create-update" Mar 13 13:09:58.499184 master-0 kubenswrapper[28149]: I0313 13:09:58.499150 28149 state_mem.go:107] "Deleted CPUSet assignment" podUID="11ae55ca-4c2e-47f9-9c32-96a9ee19a4e4" containerName="mariadb-account-create-update" Mar 13 13:09:58.499729 master-0 kubenswrapper[28149]: I0313 13:09:58.499414 28149 memory_manager.go:354] "RemoveStaleState removing state" podUID="08c292f4-ce11-41f5-b9ff-7a40e65cf085" containerName="mariadb-database-create" Mar 13 13:09:58.499729 master-0 kubenswrapper[28149]: I0313 13:09:58.499429 28149 memory_manager.go:354] "RemoveStaleState removing state" podUID="5d3ef396-7b26-4828-98c2-3d3acd135ed6" containerName="mariadb-database-create" Mar 13 13:09:58.499729 master-0 kubenswrapper[28149]: I0313 13:09:58.499443 28149 memory_manager.go:354] "RemoveStaleState removing state" podUID="518ac108-6964-45cd-af8a-d2e8d98cdb39" containerName="mariadb-database-create" Mar 13 13:09:58.499729 master-0 kubenswrapper[28149]: I0313 13:09:58.499461 28149 memory_manager.go:354] "RemoveStaleState removing state" podUID="11ae55ca-4c2e-47f9-9c32-96a9ee19a4e4" containerName="mariadb-account-create-update" Mar 13 13:09:58.499729 master-0 kubenswrapper[28149]: I0313 13:09:58.499471 28149 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="805cfa21-0ee8-4da5-9a9a-1cf852f868c7" containerName="mariadb-account-create-update" Mar 13 13:09:58.499729 master-0 kubenswrapper[28149]: I0313 13:09:58.499484 28149 memory_manager.go:354] "RemoveStaleState removing state" podUID="5e39d6bc-2e33-484e-ac03-f7b1bb0352c8" containerName="mariadb-account-create-update" Mar 13 13:09:58.510389 master-0 kubenswrapper[28149]: I0313 13:09:58.510321 28149 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-rkdmk" Mar 13 13:09:58.517918 master-0 kubenswrapper[28149]: I0313 13:09:58.515192 28149 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-mariadb-root-db-secret" Mar 13 13:09:58.528232 master-0 kubenswrapper[28149]: I0313 13:09:58.526745 28149 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-rkdmk"] Mar 13 13:09:58.625932 master-0 kubenswrapper[28149]: I0313 13:09:58.624649 28149 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-northd-0" Mar 13 13:09:58.644578 master-0 kubenswrapper[28149]: I0313 13:09:58.644365 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/220bdc89-22fc-4966-847c-550dad12dd5a-operator-scripts\") pod \"root-account-create-update-rkdmk\" (UID: \"220bdc89-22fc-4966-847c-550dad12dd5a\") " pod="openstack/root-account-create-update-rkdmk" Mar 13 13:09:58.644578 master-0 kubenswrapper[28149]: I0313 13:09:58.644472 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-52pj7\" (UniqueName: \"kubernetes.io/projected/220bdc89-22fc-4966-847c-550dad12dd5a-kube-api-access-52pj7\") pod \"root-account-create-update-rkdmk\" (UID: \"220bdc89-22fc-4966-847c-550dad12dd5a\") " pod="openstack/root-account-create-update-rkdmk" Mar 13 13:09:58.749803 master-0 
kubenswrapper[28149]: I0313 13:09:58.749670 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/220bdc89-22fc-4966-847c-550dad12dd5a-operator-scripts\") pod \"root-account-create-update-rkdmk\" (UID: \"220bdc89-22fc-4966-847c-550dad12dd5a\") " pod="openstack/root-account-create-update-rkdmk" Mar 13 13:09:58.750545 master-0 kubenswrapper[28149]: I0313 13:09:58.750502 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-52pj7\" (UniqueName: \"kubernetes.io/projected/220bdc89-22fc-4966-847c-550dad12dd5a-kube-api-access-52pj7\") pod \"root-account-create-update-rkdmk\" (UID: \"220bdc89-22fc-4966-847c-550dad12dd5a\") " pod="openstack/root-account-create-update-rkdmk" Mar 13 13:09:58.750864 master-0 kubenswrapper[28149]: I0313 13:09:58.750747 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/220bdc89-22fc-4966-847c-550dad12dd5a-operator-scripts\") pod \"root-account-create-update-rkdmk\" (UID: \"220bdc89-22fc-4966-847c-550dad12dd5a\") " pod="openstack/root-account-create-update-rkdmk" Mar 13 13:09:58.776427 master-0 kubenswrapper[28149]: I0313 13:09:58.776352 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-52pj7\" (UniqueName: \"kubernetes.io/projected/220bdc89-22fc-4966-847c-550dad12dd5a-kube-api-access-52pj7\") pod \"root-account-create-update-rkdmk\" (UID: \"220bdc89-22fc-4966-847c-550dad12dd5a\") " pod="openstack/root-account-create-update-rkdmk" Mar 13 13:09:58.849889 master-0 kubenswrapper[28149]: I0313 13:09:58.842554 28149 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-rkdmk" Mar 13 13:09:59.311505 master-0 kubenswrapper[28149]: W0313 13:09:59.311450 28149 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod220bdc89_22fc_4966_847c_550dad12dd5a.slice/crio-eeeb438c0715f47d094e9c55b2af3cc9ade4aea6cd21a622045c2d848eec2cf4 WatchSource:0}: Error finding container eeeb438c0715f47d094e9c55b2af3cc9ade4aea6cd21a622045c2d848eec2cf4: Status 404 returned error can't find the container with id eeeb438c0715f47d094e9c55b2af3cc9ade4aea6cd21a622045c2d848eec2cf4 Mar 13 13:09:59.323010 master-0 kubenswrapper[28149]: I0313 13:09:59.322951 28149 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-rkdmk"] Mar 13 13:09:59.394659 master-0 kubenswrapper[28149]: I0313 13:09:59.394413 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-rkdmk" event={"ID":"220bdc89-22fc-4966-847c-550dad12dd5a","Type":"ContainerStarted","Data":"eeeb438c0715f47d094e9c55b2af3cc9ade4aea6cd21a622045c2d848eec2cf4"} Mar 13 13:10:00.407855 master-0 kubenswrapper[28149]: I0313 13:10:00.406935 28149 generic.go:334] "Generic (PLEG): container finished" podID="220bdc89-22fc-4966-847c-550dad12dd5a" containerID="1286ed9e45c61d65b56f55ee3a7528c4be82d00c2983313fe54099b397d9ee66" exitCode=0 Mar 13 13:10:00.407855 master-0 kubenswrapper[28149]: I0313 13:10:00.407002 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-rkdmk" event={"ID":"220bdc89-22fc-4966-847c-550dad12dd5a","Type":"ContainerDied","Data":"1286ed9e45c61d65b56f55ee3a7528c4be82d00c2983313fe54099b397d9ee66"} Mar 13 13:10:01.162646 master-0 kubenswrapper[28149]: I0313 13:10:01.162587 28149 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-sync-tfhs6"] Mar 13 13:10:01.164265 master-0 kubenswrapper[28149]: I0313 13:10:01.164037 28149 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-tfhs6" Mar 13 13:10:01.172575 master-0 kubenswrapper[28149]: I0313 13:10:01.172518 28149 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-e6fbd-config-data" Mar 13 13:10:01.221169 master-0 kubenswrapper[28149]: I0313 13:10:01.216596 28149 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-tfhs6"] Mar 13 13:10:01.277273 master-0 kubenswrapper[28149]: I0313 13:10:01.272101 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/e2b3d9c8-1d1e-425b-9780-d9ad7b26318a-db-sync-config-data\") pod \"glance-db-sync-tfhs6\" (UID: \"e2b3d9c8-1d1e-425b-9780-d9ad7b26318a\") " pod="openstack/glance-db-sync-tfhs6" Mar 13 13:10:01.277273 master-0 kubenswrapper[28149]: I0313 13:10:01.272348 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e2b3d9c8-1d1e-425b-9780-d9ad7b26318a-config-data\") pod \"glance-db-sync-tfhs6\" (UID: \"e2b3d9c8-1d1e-425b-9780-d9ad7b26318a\") " pod="openstack/glance-db-sync-tfhs6" Mar 13 13:10:01.277273 master-0 kubenswrapper[28149]: I0313 13:10:01.272398 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6qwm6\" (UniqueName: \"kubernetes.io/projected/e2b3d9c8-1d1e-425b-9780-d9ad7b26318a-kube-api-access-6qwm6\") pod \"glance-db-sync-tfhs6\" (UID: \"e2b3d9c8-1d1e-425b-9780-d9ad7b26318a\") " pod="openstack/glance-db-sync-tfhs6" Mar 13 13:10:01.277273 master-0 kubenswrapper[28149]: I0313 13:10:01.272517 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e2b3d9c8-1d1e-425b-9780-d9ad7b26318a-combined-ca-bundle\") pod 
\"glance-db-sync-tfhs6\" (UID: \"e2b3d9c8-1d1e-425b-9780-d9ad7b26318a\") " pod="openstack/glance-db-sync-tfhs6" Mar 13 13:10:01.600786 master-0 kubenswrapper[28149]: I0313 13:10:01.599167 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6qwm6\" (UniqueName: \"kubernetes.io/projected/e2b3d9c8-1d1e-425b-9780-d9ad7b26318a-kube-api-access-6qwm6\") pod \"glance-db-sync-tfhs6\" (UID: \"e2b3d9c8-1d1e-425b-9780-d9ad7b26318a\") " pod="openstack/glance-db-sync-tfhs6" Mar 13 13:10:01.600786 master-0 kubenswrapper[28149]: I0313 13:10:01.599296 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e2b3d9c8-1d1e-425b-9780-d9ad7b26318a-combined-ca-bundle\") pod \"glance-db-sync-tfhs6\" (UID: \"e2b3d9c8-1d1e-425b-9780-d9ad7b26318a\") " pod="openstack/glance-db-sync-tfhs6" Mar 13 13:10:01.600786 master-0 kubenswrapper[28149]: I0313 13:10:01.599509 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/e2b3d9c8-1d1e-425b-9780-d9ad7b26318a-db-sync-config-data\") pod \"glance-db-sync-tfhs6\" (UID: \"e2b3d9c8-1d1e-425b-9780-d9ad7b26318a\") " pod="openstack/glance-db-sync-tfhs6" Mar 13 13:10:01.600786 master-0 kubenswrapper[28149]: I0313 13:10:01.599577 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e2b3d9c8-1d1e-425b-9780-d9ad7b26318a-config-data\") pod \"glance-db-sync-tfhs6\" (UID: \"e2b3d9c8-1d1e-425b-9780-d9ad7b26318a\") " pod="openstack/glance-db-sync-tfhs6" Mar 13 13:10:01.605208 master-0 kubenswrapper[28149]: I0313 13:10:01.605083 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/e2b3d9c8-1d1e-425b-9780-d9ad7b26318a-db-sync-config-data\") pod \"glance-db-sync-tfhs6\" (UID: 
\"e2b3d9c8-1d1e-425b-9780-d9ad7b26318a\") " pod="openstack/glance-db-sync-tfhs6" Mar 13 13:10:01.605437 master-0 kubenswrapper[28149]: I0313 13:10:01.605372 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e2b3d9c8-1d1e-425b-9780-d9ad7b26318a-config-data\") pod \"glance-db-sync-tfhs6\" (UID: \"e2b3d9c8-1d1e-425b-9780-d9ad7b26318a\") " pod="openstack/glance-db-sync-tfhs6" Mar 13 13:10:01.619200 master-0 kubenswrapper[28149]: I0313 13:10:01.614162 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e2b3d9c8-1d1e-425b-9780-d9ad7b26318a-combined-ca-bundle\") pod \"glance-db-sync-tfhs6\" (UID: \"e2b3d9c8-1d1e-425b-9780-d9ad7b26318a\") " pod="openstack/glance-db-sync-tfhs6" Mar 13 13:10:01.641348 master-0 kubenswrapper[28149]: I0313 13:10:01.640023 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6qwm6\" (UniqueName: \"kubernetes.io/projected/e2b3d9c8-1d1e-425b-9780-d9ad7b26318a-kube-api-access-6qwm6\") pod \"glance-db-sync-tfhs6\" (UID: \"e2b3d9c8-1d1e-425b-9780-d9ad7b26318a\") " pod="openstack/glance-db-sync-tfhs6" Mar 13 13:10:01.874730 master-0 kubenswrapper[28149]: I0313 13:10:01.874616 28149 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-tfhs6" Mar 13 13:10:02.370280 master-0 kubenswrapper[28149]: I0313 13:10:02.370238 28149 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-rkdmk" Mar 13 13:10:02.595190 master-0 kubenswrapper[28149]: I0313 13:10:02.594888 28149 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-52pj7\" (UniqueName: \"kubernetes.io/projected/220bdc89-22fc-4966-847c-550dad12dd5a-kube-api-access-52pj7\") pod \"220bdc89-22fc-4966-847c-550dad12dd5a\" (UID: \"220bdc89-22fc-4966-847c-550dad12dd5a\") " Mar 13 13:10:02.595190 master-0 kubenswrapper[28149]: I0313 13:10:02.595169 28149 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/220bdc89-22fc-4966-847c-550dad12dd5a-operator-scripts\") pod \"220bdc89-22fc-4966-847c-550dad12dd5a\" (UID: \"220bdc89-22fc-4966-847c-550dad12dd5a\") " Mar 13 13:10:02.596784 master-0 kubenswrapper[28149]: I0313 13:10:02.595956 28149 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/220bdc89-22fc-4966-847c-550dad12dd5a-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "220bdc89-22fc-4966-847c-550dad12dd5a" (UID: "220bdc89-22fc-4966-847c-550dad12dd5a"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 13 13:10:02.607244 master-0 kubenswrapper[28149]: I0313 13:10:02.607164 28149 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/220bdc89-22fc-4966-847c-550dad12dd5a-kube-api-access-52pj7" (OuterVolumeSpecName: "kube-api-access-52pj7") pod "220bdc89-22fc-4966-847c-550dad12dd5a" (UID: "220bdc89-22fc-4966-847c-550dad12dd5a"). InnerVolumeSpecName "kube-api-access-52pj7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 13:10:02.674044 master-0 kubenswrapper[28149]: I0313 13:10:02.673983 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-rkdmk" event={"ID":"220bdc89-22fc-4966-847c-550dad12dd5a","Type":"ContainerDied","Data":"eeeb438c0715f47d094e9c55b2af3cc9ade4aea6cd21a622045c2d848eec2cf4"} Mar 13 13:10:02.674044 master-0 kubenswrapper[28149]: I0313 13:10:02.674033 28149 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="eeeb438c0715f47d094e9c55b2af3cc9ade4aea6cd21a622045c2d848eec2cf4" Mar 13 13:10:02.674337 master-0 kubenswrapper[28149]: I0313 13:10:02.674219 28149 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-rkdmk" Mar 13 13:10:02.675952 master-0 kubenswrapper[28149]: I0313 13:10:02.675869 28149 generic.go:334] "Generic (PLEG): container finished" podID="f417d425-c062-40de-a92b-17afe412cfe9" containerID="8526904de7e21ece4a5d704b8892d7888dd6da85962ab4fee729b25f5076d9e0" exitCode=0 Mar 13 13:10:02.676047 master-0 kubenswrapper[28149]: I0313 13:10:02.676017 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"f417d425-c062-40de-a92b-17afe412cfe9","Type":"ContainerDied","Data":"8526904de7e21ece4a5d704b8892d7888dd6da85962ab4fee729b25f5076d9e0"} Mar 13 13:10:02.696968 master-0 kubenswrapper[28149]: I0313 13:10:02.696911 28149 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-52pj7\" (UniqueName: \"kubernetes.io/projected/220bdc89-22fc-4966-847c-550dad12dd5a-kube-api-access-52pj7\") on node \"master-0\" DevicePath \"\"" Mar 13 13:10:02.696968 master-0 kubenswrapper[28149]: I0313 13:10:02.696959 28149 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/220bdc89-22fc-4966-847c-550dad12dd5a-operator-scripts\") on node \"master-0\" DevicePath 
\"\"" Mar 13 13:10:02.786488 master-0 kubenswrapper[28149]: I0313 13:10:02.781632 28149 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-tfhs6"] Mar 13 13:10:02.808495 master-0 kubenswrapper[28149]: I0313 13:10:02.807611 28149 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-5qn6j" podUID="4e18d218-b3e4-49a7-b8c0-5e27c6f4e4e2" containerName="ovn-controller" probeResult="failure" output=< Mar 13 13:10:02.808495 master-0 kubenswrapper[28149]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status Mar 13 13:10:02.808495 master-0 kubenswrapper[28149]: > Mar 13 13:10:03.161348 master-0 kubenswrapper[28149]: I0313 13:10:03.161235 28149 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-m2gpt" Mar 13 13:10:03.165239 master-0 kubenswrapper[28149]: I0313 13:10:03.165102 28149 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-m2gpt" Mar 13 13:10:03.522900 master-0 kubenswrapper[28149]: I0313 13:10:03.522820 28149 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-5qn6j-config-9wwt8"] Mar 13 13:10:03.523758 master-0 kubenswrapper[28149]: E0313 13:10:03.523505 28149 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="220bdc89-22fc-4966-847c-550dad12dd5a" containerName="mariadb-account-create-update" Mar 13 13:10:03.523758 master-0 kubenswrapper[28149]: I0313 13:10:03.523532 28149 state_mem.go:107] "Deleted CPUSet assignment" podUID="220bdc89-22fc-4966-847c-550dad12dd5a" containerName="mariadb-account-create-update" Mar 13 13:10:03.523950 master-0 kubenswrapper[28149]: I0313 13:10:03.523912 28149 memory_manager.go:354] "RemoveStaleState removing state" podUID="220bdc89-22fc-4966-847c-550dad12dd5a" containerName="mariadb-account-create-update" Mar 13 13:10:03.524991 master-0 kubenswrapper[28149]: I0313 13:10:03.524956 28149 util.go:30] 
"No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-5qn6j-config-9wwt8" Mar 13 13:10:03.526851 master-0 kubenswrapper[28149]: I0313 13:10:03.526804 28149 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-extra-scripts" Mar 13 13:10:03.539216 master-0 kubenswrapper[28149]: I0313 13:10:03.539153 28149 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-5qn6j-config-9wwt8"] Mar 13 13:10:03.571027 master-0 kubenswrapper[28149]: I0313 13:10:03.570956 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qd295\" (UniqueName: \"kubernetes.io/projected/b39d916f-ac1c-445c-8f64-43f28c477d65-kube-api-access-qd295\") pod \"ovn-controller-5qn6j-config-9wwt8\" (UID: \"b39d916f-ac1c-445c-8f64-43f28c477d65\") " pod="openstack/ovn-controller-5qn6j-config-9wwt8" Mar 13 13:10:03.571279 master-0 kubenswrapper[28149]: I0313 13:10:03.571086 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/b39d916f-ac1c-445c-8f64-43f28c477d65-var-run-ovn\") pod \"ovn-controller-5qn6j-config-9wwt8\" (UID: \"b39d916f-ac1c-445c-8f64-43f28c477d65\") " pod="openstack/ovn-controller-5qn6j-config-9wwt8" Mar 13 13:10:03.571279 master-0 kubenswrapper[28149]: I0313 13:10:03.571104 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/b39d916f-ac1c-445c-8f64-43f28c477d65-additional-scripts\") pod \"ovn-controller-5qn6j-config-9wwt8\" (UID: \"b39d916f-ac1c-445c-8f64-43f28c477d65\") " pod="openstack/ovn-controller-5qn6j-config-9wwt8" Mar 13 13:10:03.571279 master-0 kubenswrapper[28149]: I0313 13:10:03.571122 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/configmap/b39d916f-ac1c-445c-8f64-43f28c477d65-scripts\") pod \"ovn-controller-5qn6j-config-9wwt8\" (UID: \"b39d916f-ac1c-445c-8f64-43f28c477d65\") " pod="openstack/ovn-controller-5qn6j-config-9wwt8" Mar 13 13:10:03.571279 master-0 kubenswrapper[28149]: I0313 13:10:03.571238 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/b39d916f-ac1c-445c-8f64-43f28c477d65-var-log-ovn\") pod \"ovn-controller-5qn6j-config-9wwt8\" (UID: \"b39d916f-ac1c-445c-8f64-43f28c477d65\") " pod="openstack/ovn-controller-5qn6j-config-9wwt8" Mar 13 13:10:03.571279 master-0 kubenswrapper[28149]: I0313 13:10:03.571257 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/b39d916f-ac1c-445c-8f64-43f28c477d65-var-run\") pod \"ovn-controller-5qn6j-config-9wwt8\" (UID: \"b39d916f-ac1c-445c-8f64-43f28c477d65\") " pod="openstack/ovn-controller-5qn6j-config-9wwt8" Mar 13 13:10:03.857965 master-0 kubenswrapper[28149]: I0313 13:10:03.856565 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qd295\" (UniqueName: \"kubernetes.io/projected/b39d916f-ac1c-445c-8f64-43f28c477d65-kube-api-access-qd295\") pod \"ovn-controller-5qn6j-config-9wwt8\" (UID: \"b39d916f-ac1c-445c-8f64-43f28c477d65\") " pod="openstack/ovn-controller-5qn6j-config-9wwt8" Mar 13 13:10:03.857965 master-0 kubenswrapper[28149]: I0313 13:10:03.856675 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/b39d916f-ac1c-445c-8f64-43f28c477d65-var-run-ovn\") pod \"ovn-controller-5qn6j-config-9wwt8\" (UID: \"b39d916f-ac1c-445c-8f64-43f28c477d65\") " pod="openstack/ovn-controller-5qn6j-config-9wwt8" Mar 13 13:10:03.857965 master-0 kubenswrapper[28149]: I0313 13:10:03.856715 28149 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/b39d916f-ac1c-445c-8f64-43f28c477d65-additional-scripts\") pod \"ovn-controller-5qn6j-config-9wwt8\" (UID: \"b39d916f-ac1c-445c-8f64-43f28c477d65\") " pod="openstack/ovn-controller-5qn6j-config-9wwt8" Mar 13 13:10:03.857965 master-0 kubenswrapper[28149]: I0313 13:10:03.856746 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/b39d916f-ac1c-445c-8f64-43f28c477d65-scripts\") pod \"ovn-controller-5qn6j-config-9wwt8\" (UID: \"b39d916f-ac1c-445c-8f64-43f28c477d65\") " pod="openstack/ovn-controller-5qn6j-config-9wwt8" Mar 13 13:10:03.857965 master-0 kubenswrapper[28149]: I0313 13:10:03.856833 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/b39d916f-ac1c-445c-8f64-43f28c477d65-var-log-ovn\") pod \"ovn-controller-5qn6j-config-9wwt8\" (UID: \"b39d916f-ac1c-445c-8f64-43f28c477d65\") " pod="openstack/ovn-controller-5qn6j-config-9wwt8" Mar 13 13:10:03.857965 master-0 kubenswrapper[28149]: I0313 13:10:03.856876 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/b39d916f-ac1c-445c-8f64-43f28c477d65-var-run\") pod \"ovn-controller-5qn6j-config-9wwt8\" (UID: \"b39d916f-ac1c-445c-8f64-43f28c477d65\") " pod="openstack/ovn-controller-5qn6j-config-9wwt8" Mar 13 13:10:03.863573 master-0 kubenswrapper[28149]: I0313 13:10:03.859117 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/b39d916f-ac1c-445c-8f64-43f28c477d65-var-log-ovn\") pod \"ovn-controller-5qn6j-config-9wwt8\" (UID: \"b39d916f-ac1c-445c-8f64-43f28c477d65\") " pod="openstack/ovn-controller-5qn6j-config-9wwt8" Mar 13 13:10:03.863573 master-0 kubenswrapper[28149]: 
I0313 13:10:03.859216 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/b39d916f-ac1c-445c-8f64-43f28c477d65-var-run\") pod \"ovn-controller-5qn6j-config-9wwt8\" (UID: \"b39d916f-ac1c-445c-8f64-43f28c477d65\") " pod="openstack/ovn-controller-5qn6j-config-9wwt8" Mar 13 13:10:03.863573 master-0 kubenswrapper[28149]: I0313 13:10:03.859406 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/b39d916f-ac1c-445c-8f64-43f28c477d65-var-run-ovn\") pod \"ovn-controller-5qn6j-config-9wwt8\" (UID: \"b39d916f-ac1c-445c-8f64-43f28c477d65\") " pod="openstack/ovn-controller-5qn6j-config-9wwt8" Mar 13 13:10:03.863573 master-0 kubenswrapper[28149]: I0313 13:10:03.860919 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/b39d916f-ac1c-445c-8f64-43f28c477d65-additional-scripts\") pod \"ovn-controller-5qn6j-config-9wwt8\" (UID: \"b39d916f-ac1c-445c-8f64-43f28c477d65\") " pod="openstack/ovn-controller-5qn6j-config-9wwt8" Mar 13 13:10:03.863573 master-0 kubenswrapper[28149]: I0313 13:10:03.862542 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/b39d916f-ac1c-445c-8f64-43f28c477d65-scripts\") pod \"ovn-controller-5qn6j-config-9wwt8\" (UID: \"b39d916f-ac1c-445c-8f64-43f28c477d65\") " pod="openstack/ovn-controller-5qn6j-config-9wwt8" Mar 13 13:10:03.883979 master-0 kubenswrapper[28149]: I0313 13:10:03.882735 28149 generic.go:334] "Generic (PLEG): container finished" podID="95064747-74f7-4ab6-95a6-677c5e5d8be2" containerID="f8607e4541f79ae7c985f58812107bd8dcc45d675115fedadfc353466fdcdfda" exitCode=0 Mar 13 13:10:03.883979 master-0 kubenswrapper[28149]: I0313 13:10:03.882807 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-bv6nc" 
event={"ID":"95064747-74f7-4ab6-95a6-677c5e5d8be2","Type":"ContainerDied","Data":"f8607e4541f79ae7c985f58812107bd8dcc45d675115fedadfc353466fdcdfda"} Mar 13 13:10:03.887909 master-0 kubenswrapper[28149]: I0313 13:10:03.885982 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-tfhs6" event={"ID":"e2b3d9c8-1d1e-425b-9780-d9ad7b26318a","Type":"ContainerStarted","Data":"e9ea5f6065dcf68f302fe88c1dd5ef0c6c66b2db9e4c930cb5449835c8cdbaa0"} Mar 13 13:10:03.887909 master-0 kubenswrapper[28149]: I0313 13:10:03.887520 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qd295\" (UniqueName: \"kubernetes.io/projected/b39d916f-ac1c-445c-8f64-43f28c477d65-kube-api-access-qd295\") pod \"ovn-controller-5qn6j-config-9wwt8\" (UID: \"b39d916f-ac1c-445c-8f64-43f28c477d65\") " pod="openstack/ovn-controller-5qn6j-config-9wwt8" Mar 13 13:10:03.891767 master-0 kubenswrapper[28149]: I0313 13:10:03.891665 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"f417d425-c062-40de-a92b-17afe412cfe9","Type":"ContainerStarted","Data":"b21a4e85acd652763ea9fa68f90c9a249482c4f176613799926791afef8046aa"} Mar 13 13:10:03.892358 master-0 kubenswrapper[28149]: I0313 13:10:03.892338 28149 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0" Mar 13 13:10:03.938125 master-0 kubenswrapper[28149]: I0313 13:10:03.938045 28149 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=49.458524959 podStartE2EDuration="1m13.938021694s" podCreationTimestamp="2026-03-13 13:08:50 +0000 UTC" firstStartedPulling="2026-03-13 13:08:58.516233759 +0000 UTC m=+912.169698918" lastFinishedPulling="2026-03-13 13:09:22.995730494 +0000 UTC m=+936.649195653" observedRunningTime="2026-03-13 13:10:03.935804394 +0000 UTC m=+977.589269563" watchObservedRunningTime="2026-03-13 13:10:03.938021694 +0000 UTC 
m=+977.591486853" Mar 13 13:10:04.157154 master-0 kubenswrapper[28149]: I0313 13:10:04.156984 28149 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-5qn6j-config-9wwt8" Mar 13 13:10:04.791030 master-0 kubenswrapper[28149]: I0313 13:10:04.790996 28149 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-5qn6j-config-9wwt8"] Mar 13 13:10:04.908693 master-0 kubenswrapper[28149]: I0313 13:10:04.908624 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-5qn6j-config-9wwt8" event={"ID":"b39d916f-ac1c-445c-8f64-43f28c477d65","Type":"ContainerStarted","Data":"09b4308e48babdbadb9682e6d7aa8dcf95cab6b8453869ba1709e3d4d05ac165"} Mar 13 13:10:05.547684 master-0 kubenswrapper[28149]: I0313 13:10:05.547645 28149 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-bv6nc" Mar 13 13:10:05.696532 master-0 kubenswrapper[28149]: I0313 13:10:05.696472 28149 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/95064747-74f7-4ab6-95a6-677c5e5d8be2-etc-swift\") pod \"95064747-74f7-4ab6-95a6-677c5e5d8be2\" (UID: \"95064747-74f7-4ab6-95a6-677c5e5d8be2\") " Mar 13 13:10:05.696893 master-0 kubenswrapper[28149]: I0313 13:10:05.696607 28149 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/95064747-74f7-4ab6-95a6-677c5e5d8be2-swiftconf\") pod \"95064747-74f7-4ab6-95a6-677c5e5d8be2\" (UID: \"95064747-74f7-4ab6-95a6-677c5e5d8be2\") " Mar 13 13:10:05.697034 master-0 kubenswrapper[28149]: I0313 13:10:05.696781 28149 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/95064747-74f7-4ab6-95a6-677c5e5d8be2-scripts\") pod \"95064747-74f7-4ab6-95a6-677c5e5d8be2\" (UID: 
\"95064747-74f7-4ab6-95a6-677c5e5d8be2\") " Mar 13 13:10:05.697034 master-0 kubenswrapper[28149]: I0313 13:10:05.696984 28149 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4lgm7\" (UniqueName: \"kubernetes.io/projected/95064747-74f7-4ab6-95a6-677c5e5d8be2-kube-api-access-4lgm7\") pod \"95064747-74f7-4ab6-95a6-677c5e5d8be2\" (UID: \"95064747-74f7-4ab6-95a6-677c5e5d8be2\") " Mar 13 13:10:05.697034 master-0 kubenswrapper[28149]: I0313 13:10:05.697014 28149 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/95064747-74f7-4ab6-95a6-677c5e5d8be2-ring-data-devices\") pod \"95064747-74f7-4ab6-95a6-677c5e5d8be2\" (UID: \"95064747-74f7-4ab6-95a6-677c5e5d8be2\") " Mar 13 13:10:05.698487 master-0 kubenswrapper[28149]: I0313 13:10:05.697054 28149 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/95064747-74f7-4ab6-95a6-677c5e5d8be2-dispersionconf\") pod \"95064747-74f7-4ab6-95a6-677c5e5d8be2\" (UID: \"95064747-74f7-4ab6-95a6-677c5e5d8be2\") " Mar 13 13:10:05.698487 master-0 kubenswrapper[28149]: I0313 13:10:05.697133 28149 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/95064747-74f7-4ab6-95a6-677c5e5d8be2-combined-ca-bundle\") pod \"95064747-74f7-4ab6-95a6-677c5e5d8be2\" (UID: \"95064747-74f7-4ab6-95a6-677c5e5d8be2\") " Mar 13 13:10:05.698487 master-0 kubenswrapper[28149]: I0313 13:10:05.697749 28149 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/95064747-74f7-4ab6-95a6-677c5e5d8be2-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "95064747-74f7-4ab6-95a6-677c5e5d8be2" (UID: "95064747-74f7-4ab6-95a6-677c5e5d8be2"). InnerVolumeSpecName "etc-swift". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 13 13:10:05.698487 master-0 kubenswrapper[28149]: I0313 13:10:05.698032 28149 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/95064747-74f7-4ab6-95a6-677c5e5d8be2-ring-data-devices" (OuterVolumeSpecName: "ring-data-devices") pod "95064747-74f7-4ab6-95a6-677c5e5d8be2" (UID: "95064747-74f7-4ab6-95a6-677c5e5d8be2"). InnerVolumeSpecName "ring-data-devices". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 13 13:10:05.699118 master-0 kubenswrapper[28149]: I0313 13:10:05.698776 28149 reconciler_common.go:293] "Volume detached for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/95064747-74f7-4ab6-95a6-677c5e5d8be2-ring-data-devices\") on node \"master-0\" DevicePath \"\"" Mar 13 13:10:05.699118 master-0 kubenswrapper[28149]: I0313 13:10:05.698802 28149 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/95064747-74f7-4ab6-95a6-677c5e5d8be2-etc-swift\") on node \"master-0\" DevicePath \"\"" Mar 13 13:10:05.702353 master-0 kubenswrapper[28149]: I0313 13:10:05.702310 28149 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/95064747-74f7-4ab6-95a6-677c5e5d8be2-kube-api-access-4lgm7" (OuterVolumeSpecName: "kube-api-access-4lgm7") pod "95064747-74f7-4ab6-95a6-677c5e5d8be2" (UID: "95064747-74f7-4ab6-95a6-677c5e5d8be2"). InnerVolumeSpecName "kube-api-access-4lgm7". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 13:10:05.707816 master-0 kubenswrapper[28149]: I0313 13:10:05.707764 28149 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/95064747-74f7-4ab6-95a6-677c5e5d8be2-dispersionconf" (OuterVolumeSpecName: "dispersionconf") pod "95064747-74f7-4ab6-95a6-677c5e5d8be2" (UID: "95064747-74f7-4ab6-95a6-677c5e5d8be2"). InnerVolumeSpecName "dispersionconf". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 13:10:05.968785 master-0 kubenswrapper[28149]: I0313 13:10:05.968028 28149 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4lgm7\" (UniqueName: \"kubernetes.io/projected/95064747-74f7-4ab6-95a6-677c5e5d8be2-kube-api-access-4lgm7\") on node \"master-0\" DevicePath \"\"" Mar 13 13:10:05.968785 master-0 kubenswrapper[28149]: I0313 13:10:05.968071 28149 reconciler_common.go:293] "Volume detached for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/95064747-74f7-4ab6-95a6-677c5e5d8be2-dispersionconf\") on node \"master-0\" DevicePath \"\"" Mar 13 13:10:05.989524 master-0 kubenswrapper[28149]: I0313 13:10:05.989201 28149 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/95064747-74f7-4ab6-95a6-677c5e5d8be2-swiftconf" (OuterVolumeSpecName: "swiftconf") pod "95064747-74f7-4ab6-95a6-677c5e5d8be2" (UID: "95064747-74f7-4ab6-95a6-677c5e5d8be2"). InnerVolumeSpecName "swiftconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 13:10:05.999055 master-0 kubenswrapper[28149]: I0313 13:10:05.998964 28149 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/95064747-74f7-4ab6-95a6-677c5e5d8be2-scripts" (OuterVolumeSpecName: "scripts") pod "95064747-74f7-4ab6-95a6-677c5e5d8be2" (UID: "95064747-74f7-4ab6-95a6-677c5e5d8be2"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 13 13:10:06.038181 master-0 kubenswrapper[28149]: I0313 13:10:06.036419 28149 generic.go:334] "Generic (PLEG): container finished" podID="b39d916f-ac1c-445c-8f64-43f28c477d65" containerID="2f0ab736eae25f82c43a5d9e5e54f1c5c9b7bb1b519213efb5be7a4963b6b941" exitCode=0 Mar 13 13:10:06.038181 master-0 kubenswrapper[28149]: I0313 13:10:06.036548 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-5qn6j-config-9wwt8" event={"ID":"b39d916f-ac1c-445c-8f64-43f28c477d65","Type":"ContainerDied","Data":"2f0ab736eae25f82c43a5d9e5e54f1c5c9b7bb1b519213efb5be7a4963b6b941"} Mar 13 13:10:06.054799 master-0 kubenswrapper[28149]: I0313 13:10:06.044566 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-bv6nc" event={"ID":"95064747-74f7-4ab6-95a6-677c5e5d8be2","Type":"ContainerDied","Data":"ccf46888836d4d6b32f3530f978290a722fc565314b30f27af00e40c1d5077a2"} Mar 13 13:10:06.054799 master-0 kubenswrapper[28149]: I0313 13:10:06.044616 28149 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ccf46888836d4d6b32f3530f978290a722fc565314b30f27af00e40c1d5077a2" Mar 13 13:10:06.054799 master-0 kubenswrapper[28149]: I0313 13:10:06.044734 28149 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-bv6nc" Mar 13 13:10:06.066187 master-0 kubenswrapper[28149]: I0313 13:10:06.066101 28149 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/95064747-74f7-4ab6-95a6-677c5e5d8be2-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "95064747-74f7-4ab6-95a6-677c5e5d8be2" (UID: "95064747-74f7-4ab6-95a6-677c5e5d8be2"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 13:10:06.074303 master-0 kubenswrapper[28149]: I0313 13:10:06.072922 28149 reconciler_common.go:293] "Volume detached for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/95064747-74f7-4ab6-95a6-677c5e5d8be2-swiftconf\") on node \"master-0\" DevicePath \"\"" Mar 13 13:10:06.074303 master-0 kubenswrapper[28149]: I0313 13:10:06.072965 28149 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/95064747-74f7-4ab6-95a6-677c5e5d8be2-scripts\") on node \"master-0\" DevicePath \"\"" Mar 13 13:10:06.074303 master-0 kubenswrapper[28149]: I0313 13:10:06.072979 28149 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/95064747-74f7-4ab6-95a6-677c5e5d8be2-combined-ca-bundle\") on node \"master-0\" DevicePath \"\"" Mar 13 13:10:07.496188 master-0 kubenswrapper[28149]: I0313 13:10:07.496109 28149 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-5qn6j-config-9wwt8" Mar 13 13:10:07.568046 master-0 kubenswrapper[28149]: I0313 13:10:07.567995 28149 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qd295\" (UniqueName: \"kubernetes.io/projected/b39d916f-ac1c-445c-8f64-43f28c477d65-kube-api-access-qd295\") pod \"b39d916f-ac1c-445c-8f64-43f28c477d65\" (UID: \"b39d916f-ac1c-445c-8f64-43f28c477d65\") " Mar 13 13:10:07.568298 master-0 kubenswrapper[28149]: I0313 13:10:07.568253 28149 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/b39d916f-ac1c-445c-8f64-43f28c477d65-scripts\") pod \"b39d916f-ac1c-445c-8f64-43f28c477d65\" (UID: \"b39d916f-ac1c-445c-8f64-43f28c477d65\") " Mar 13 13:10:07.568298 master-0 kubenswrapper[28149]: I0313 13:10:07.568292 28149 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/b39d916f-ac1c-445c-8f64-43f28c477d65-var-run\") pod \"b39d916f-ac1c-445c-8f64-43f28c477d65\" (UID: \"b39d916f-ac1c-445c-8f64-43f28c477d65\") " Mar 13 13:10:07.568391 master-0 kubenswrapper[28149]: I0313 13:10:07.568331 28149 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/b39d916f-ac1c-445c-8f64-43f28c477d65-var-log-ovn\") pod \"b39d916f-ac1c-445c-8f64-43f28c477d65\" (UID: \"b39d916f-ac1c-445c-8f64-43f28c477d65\") " Mar 13 13:10:07.568391 master-0 kubenswrapper[28149]: I0313 13:10:07.568378 28149 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/b39d916f-ac1c-445c-8f64-43f28c477d65-var-run-ovn\") pod \"b39d916f-ac1c-445c-8f64-43f28c477d65\" (UID: \"b39d916f-ac1c-445c-8f64-43f28c477d65\") " Mar 13 13:10:07.568481 master-0 kubenswrapper[28149]: I0313 13:10:07.568404 28149 operation_generator.go:803] 
UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b39d916f-ac1c-445c-8f64-43f28c477d65-var-run" (OuterVolumeSpecName: "var-run") pod "b39d916f-ac1c-445c-8f64-43f28c477d65" (UID: "b39d916f-ac1c-445c-8f64-43f28c477d65"). InnerVolumeSpecName "var-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 13 13:10:07.568481 master-0 kubenswrapper[28149]: I0313 13:10:07.568466 28149 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/b39d916f-ac1c-445c-8f64-43f28c477d65-additional-scripts\") pod \"b39d916f-ac1c-445c-8f64-43f28c477d65\" (UID: \"b39d916f-ac1c-445c-8f64-43f28c477d65\") " Mar 13 13:10:07.568481 master-0 kubenswrapper[28149]: I0313 13:10:07.568474 28149 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b39d916f-ac1c-445c-8f64-43f28c477d65-var-run-ovn" (OuterVolumeSpecName: "var-run-ovn") pod "b39d916f-ac1c-445c-8f64-43f28c477d65" (UID: "b39d916f-ac1c-445c-8f64-43f28c477d65"). InnerVolumeSpecName "var-run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 13 13:10:07.568617 master-0 kubenswrapper[28149]: I0313 13:10:07.568499 28149 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b39d916f-ac1c-445c-8f64-43f28c477d65-var-log-ovn" (OuterVolumeSpecName: "var-log-ovn") pod "b39d916f-ac1c-445c-8f64-43f28c477d65" (UID: "b39d916f-ac1c-445c-8f64-43f28c477d65"). InnerVolumeSpecName "var-log-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 13 13:10:07.569152 master-0 kubenswrapper[28149]: I0313 13:10:07.569113 28149 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b39d916f-ac1c-445c-8f64-43f28c477d65-additional-scripts" (OuterVolumeSpecName: "additional-scripts") pod "b39d916f-ac1c-445c-8f64-43f28c477d65" (UID: "b39d916f-ac1c-445c-8f64-43f28c477d65"). 
InnerVolumeSpecName "additional-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 13 13:10:07.569371 master-0 kubenswrapper[28149]: I0313 13:10:07.569344 28149 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b39d916f-ac1c-445c-8f64-43f28c477d65-scripts" (OuterVolumeSpecName: "scripts") pod "b39d916f-ac1c-445c-8f64-43f28c477d65" (UID: "b39d916f-ac1c-445c-8f64-43f28c477d65"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 13 13:10:07.569433 master-0 kubenswrapper[28149]: I0313 13:10:07.569416 28149 reconciler_common.go:293] "Volume detached for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/b39d916f-ac1c-445c-8f64-43f28c477d65-additional-scripts\") on node \"master-0\" DevicePath \"\"" Mar 13 13:10:07.569467 master-0 kubenswrapper[28149]: I0313 13:10:07.569434 28149 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/b39d916f-ac1c-445c-8f64-43f28c477d65-var-run\") on node \"master-0\" DevicePath \"\"" Mar 13 13:10:07.569467 master-0 kubenswrapper[28149]: I0313 13:10:07.569444 28149 reconciler_common.go:293] "Volume detached for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/b39d916f-ac1c-445c-8f64-43f28c477d65-var-log-ovn\") on node \"master-0\" DevicePath \"\"" Mar 13 13:10:07.569467 master-0 kubenswrapper[28149]: I0313 13:10:07.569452 28149 reconciler_common.go:293] "Volume detached for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/b39d916f-ac1c-445c-8f64-43f28c477d65-var-run-ovn\") on node \"master-0\" DevicePath \"\"" Mar 13 13:10:07.571347 master-0 kubenswrapper[28149]: I0313 13:10:07.571318 28149 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b39d916f-ac1c-445c-8f64-43f28c477d65-kube-api-access-qd295" (OuterVolumeSpecName: "kube-api-access-qd295") pod "b39d916f-ac1c-445c-8f64-43f28c477d65" (UID: 
"b39d916f-ac1c-445c-8f64-43f28c477d65"). InnerVolumeSpecName "kube-api-access-qd295". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 13:10:07.671558 master-0 kubenswrapper[28149]: I0313 13:10:07.671433 28149 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/b39d916f-ac1c-445c-8f64-43f28c477d65-scripts\") on node \"master-0\" DevicePath \"\"" Mar 13 13:10:07.671558 master-0 kubenswrapper[28149]: I0313 13:10:07.671488 28149 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qd295\" (UniqueName: \"kubernetes.io/projected/b39d916f-ac1c-445c-8f64-43f28c477d65-kube-api-access-qd295\") on node \"master-0\" DevicePath \"\"" Mar 13 13:10:07.796005 master-0 kubenswrapper[28149]: I0313 13:10:07.795863 28149 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-5qn6j" Mar 13 13:10:08.071860 master-0 kubenswrapper[28149]: I0313 13:10:08.071776 28149 generic.go:334] "Generic (PLEG): container finished" podID="668a51dd-c5b3-4531-b707-39a00bfb5eef" containerID="3537431fdd0e9fbf22bdf2fce2db899caa7332b4f0726076d9cc41f488fe8ee8" exitCode=0 Mar 13 13:10:08.072149 master-0 kubenswrapper[28149]: I0313 13:10:08.071962 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"668a51dd-c5b3-4531-b707-39a00bfb5eef","Type":"ContainerDied","Data":"3537431fdd0e9fbf22bdf2fce2db899caa7332b4f0726076d9cc41f488fe8ee8"} Mar 13 13:10:08.366429 master-0 kubenswrapper[28149]: I0313 13:10:08.365358 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-5qn6j-config-9wwt8" event={"ID":"b39d916f-ac1c-445c-8f64-43f28c477d65","Type":"ContainerDied","Data":"09b4308e48babdbadb9682e6d7aa8dcf95cab6b8453869ba1709e3d4d05ac165"} Mar 13 13:10:08.366429 master-0 kubenswrapper[28149]: I0313 13:10:08.365407 28149 pod_container_deletor.go:80] "Container not found in pod's containers" 
containerID="09b4308e48babdbadb9682e6d7aa8dcf95cab6b8453869ba1709e3d4d05ac165" Mar 13 13:10:08.366429 master-0 kubenswrapper[28149]: I0313 13:10:08.365535 28149 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-5qn6j-config-9wwt8" Mar 13 13:10:08.884778 master-0 kubenswrapper[28149]: I0313 13:10:08.884670 28149 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-5qn6j-config-9wwt8"] Mar 13 13:10:08.898663 master-0 kubenswrapper[28149]: I0313 13:10:08.898112 28149 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-5qn6j-config-9wwt8"] Mar 13 13:10:09.009919 master-0 kubenswrapper[28149]: I0313 13:10:09.008883 28149 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-5qn6j-config-wrxhn"] Mar 13 13:10:09.009919 master-0 kubenswrapper[28149]: E0313 13:10:09.009759 28149 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b39d916f-ac1c-445c-8f64-43f28c477d65" containerName="ovn-config" Mar 13 13:10:09.009919 master-0 kubenswrapper[28149]: I0313 13:10:09.009784 28149 state_mem.go:107] "Deleted CPUSet assignment" podUID="b39d916f-ac1c-445c-8f64-43f28c477d65" containerName="ovn-config" Mar 13 13:10:09.009919 master-0 kubenswrapper[28149]: E0313 13:10:09.009804 28149 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="95064747-74f7-4ab6-95a6-677c5e5d8be2" containerName="swift-ring-rebalance" Mar 13 13:10:09.009919 master-0 kubenswrapper[28149]: I0313 13:10:09.009810 28149 state_mem.go:107] "Deleted CPUSet assignment" podUID="95064747-74f7-4ab6-95a6-677c5e5d8be2" containerName="swift-ring-rebalance" Mar 13 13:10:09.010345 master-0 kubenswrapper[28149]: I0313 13:10:09.010149 28149 memory_manager.go:354] "RemoveStaleState removing state" podUID="95064747-74f7-4ab6-95a6-677c5e5d8be2" containerName="swift-ring-rebalance" Mar 13 13:10:09.010345 master-0 kubenswrapper[28149]: I0313 13:10:09.010180 28149 
memory_manager.go:354] "RemoveStaleState removing state" podUID="b39d916f-ac1c-445c-8f64-43f28c477d65" containerName="ovn-config" Mar 13 13:10:09.011403 master-0 kubenswrapper[28149]: I0313 13:10:09.011015 28149 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-5qn6j-config-wrxhn" Mar 13 13:10:09.014336 master-0 kubenswrapper[28149]: I0313 13:10:09.014305 28149 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-extra-scripts" Mar 13 13:10:09.036926 master-0 kubenswrapper[28149]: I0313 13:10:09.036535 28149 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-5qn6j-config-wrxhn"] Mar 13 13:10:09.157602 master-0 kubenswrapper[28149]: I0313 13:10:09.157403 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/99af3e6a-1bb0-4849-80b8-f20eeb35a3e4-var-run\") pod \"ovn-controller-5qn6j-config-wrxhn\" (UID: \"99af3e6a-1bb0-4849-80b8-f20eeb35a3e4\") " pod="openstack/ovn-controller-5qn6j-config-wrxhn" Mar 13 13:10:09.157894 master-0 kubenswrapper[28149]: I0313 13:10:09.157608 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mqzwp\" (UniqueName: \"kubernetes.io/projected/99af3e6a-1bb0-4849-80b8-f20eeb35a3e4-kube-api-access-mqzwp\") pod \"ovn-controller-5qn6j-config-wrxhn\" (UID: \"99af3e6a-1bb0-4849-80b8-f20eeb35a3e4\") " pod="openstack/ovn-controller-5qn6j-config-wrxhn" Mar 13 13:10:09.157894 master-0 kubenswrapper[28149]: I0313 13:10:09.157653 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/99af3e6a-1bb0-4849-80b8-f20eeb35a3e4-var-run-ovn\") pod \"ovn-controller-5qn6j-config-wrxhn\" (UID: \"99af3e6a-1bb0-4849-80b8-f20eeb35a3e4\") " pod="openstack/ovn-controller-5qn6j-config-wrxhn" Mar 
13 13:10:09.157894 master-0 kubenswrapper[28149]: I0313 13:10:09.157699 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/99af3e6a-1bb0-4849-80b8-f20eeb35a3e4-var-log-ovn\") pod \"ovn-controller-5qn6j-config-wrxhn\" (UID: \"99af3e6a-1bb0-4849-80b8-f20eeb35a3e4\") " pod="openstack/ovn-controller-5qn6j-config-wrxhn" Mar 13 13:10:09.157894 master-0 kubenswrapper[28149]: I0313 13:10:09.157784 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/99af3e6a-1bb0-4849-80b8-f20eeb35a3e4-scripts\") pod \"ovn-controller-5qn6j-config-wrxhn\" (UID: \"99af3e6a-1bb0-4849-80b8-f20eeb35a3e4\") " pod="openstack/ovn-controller-5qn6j-config-wrxhn" Mar 13 13:10:09.157894 master-0 kubenswrapper[28149]: I0313 13:10:09.157850 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/99af3e6a-1bb0-4849-80b8-f20eeb35a3e4-additional-scripts\") pod \"ovn-controller-5qn6j-config-wrxhn\" (UID: \"99af3e6a-1bb0-4849-80b8-f20eeb35a3e4\") " pod="openstack/ovn-controller-5qn6j-config-wrxhn" Mar 13 13:10:09.846336 master-0 kubenswrapper[28149]: I0313 13:10:09.846279 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/99af3e6a-1bb0-4849-80b8-f20eeb35a3e4-var-run\") pod \"ovn-controller-5qn6j-config-wrxhn\" (UID: \"99af3e6a-1bb0-4849-80b8-f20eeb35a3e4\") " pod="openstack/ovn-controller-5qn6j-config-wrxhn" Mar 13 13:10:09.847511 master-0 kubenswrapper[28149]: I0313 13:10:09.846445 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mqzwp\" (UniqueName: \"kubernetes.io/projected/99af3e6a-1bb0-4849-80b8-f20eeb35a3e4-kube-api-access-mqzwp\") pod 
\"ovn-controller-5qn6j-config-wrxhn\" (UID: \"99af3e6a-1bb0-4849-80b8-f20eeb35a3e4\") " pod="openstack/ovn-controller-5qn6j-config-wrxhn" Mar 13 13:10:09.847511 master-0 kubenswrapper[28149]: I0313 13:10:09.846467 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/99af3e6a-1bb0-4849-80b8-f20eeb35a3e4-var-run-ovn\") pod \"ovn-controller-5qn6j-config-wrxhn\" (UID: \"99af3e6a-1bb0-4849-80b8-f20eeb35a3e4\") " pod="openstack/ovn-controller-5qn6j-config-wrxhn" Mar 13 13:10:09.847511 master-0 kubenswrapper[28149]: I0313 13:10:09.846490 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/99af3e6a-1bb0-4849-80b8-f20eeb35a3e4-var-log-ovn\") pod \"ovn-controller-5qn6j-config-wrxhn\" (UID: \"99af3e6a-1bb0-4849-80b8-f20eeb35a3e4\") " pod="openstack/ovn-controller-5qn6j-config-wrxhn" Mar 13 13:10:09.847511 master-0 kubenswrapper[28149]: I0313 13:10:09.846552 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/99af3e6a-1bb0-4849-80b8-f20eeb35a3e4-scripts\") pod \"ovn-controller-5qn6j-config-wrxhn\" (UID: \"99af3e6a-1bb0-4849-80b8-f20eeb35a3e4\") " pod="openstack/ovn-controller-5qn6j-config-wrxhn" Mar 13 13:10:09.847511 master-0 kubenswrapper[28149]: I0313 13:10:09.846598 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/99af3e6a-1bb0-4849-80b8-f20eeb35a3e4-additional-scripts\") pod \"ovn-controller-5qn6j-config-wrxhn\" (UID: \"99af3e6a-1bb0-4849-80b8-f20eeb35a3e4\") " pod="openstack/ovn-controller-5qn6j-config-wrxhn" Mar 13 13:10:09.847511 master-0 kubenswrapper[28149]: I0313 13:10:09.846913 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: 
\"kubernetes.io/host-path/99af3e6a-1bb0-4849-80b8-f20eeb35a3e4-var-run-ovn\") pod \"ovn-controller-5qn6j-config-wrxhn\" (UID: \"99af3e6a-1bb0-4849-80b8-f20eeb35a3e4\") " pod="openstack/ovn-controller-5qn6j-config-wrxhn" Mar 13 13:10:09.847511 master-0 kubenswrapper[28149]: I0313 13:10:09.847395 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/99af3e6a-1bb0-4849-80b8-f20eeb35a3e4-additional-scripts\") pod \"ovn-controller-5qn6j-config-wrxhn\" (UID: \"99af3e6a-1bb0-4849-80b8-f20eeb35a3e4\") " pod="openstack/ovn-controller-5qn6j-config-wrxhn" Mar 13 13:10:09.857357 master-0 kubenswrapper[28149]: I0313 13:10:09.851199 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/99af3e6a-1bb0-4849-80b8-f20eeb35a3e4-var-run\") pod \"ovn-controller-5qn6j-config-wrxhn\" (UID: \"99af3e6a-1bb0-4849-80b8-f20eeb35a3e4\") " pod="openstack/ovn-controller-5qn6j-config-wrxhn" Mar 13 13:10:09.857357 master-0 kubenswrapper[28149]: I0313 13:10:09.851441 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/99af3e6a-1bb0-4849-80b8-f20eeb35a3e4-var-log-ovn\") pod \"ovn-controller-5qn6j-config-wrxhn\" (UID: \"99af3e6a-1bb0-4849-80b8-f20eeb35a3e4\") " pod="openstack/ovn-controller-5qn6j-config-wrxhn" Mar 13 13:10:09.898034 master-0 kubenswrapper[28149]: I0313 13:10:09.894812 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mqzwp\" (UniqueName: \"kubernetes.io/projected/99af3e6a-1bb0-4849-80b8-f20eeb35a3e4-kube-api-access-mqzwp\") pod \"ovn-controller-5qn6j-config-wrxhn\" (UID: \"99af3e6a-1bb0-4849-80b8-f20eeb35a3e4\") " pod="openstack/ovn-controller-5qn6j-config-wrxhn" Mar 13 13:10:09.898034 master-0 kubenswrapper[28149]: I0313 13:10:09.897621 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"scripts\" (UniqueName: \"kubernetes.io/configmap/99af3e6a-1bb0-4849-80b8-f20eeb35a3e4-scripts\") pod \"ovn-controller-5qn6j-config-wrxhn\" (UID: \"99af3e6a-1bb0-4849-80b8-f20eeb35a3e4\") " pod="openstack/ovn-controller-5qn6j-config-wrxhn" Mar 13 13:10:09.943414 master-0 kubenswrapper[28149]: I0313 13:10:09.943359 28149 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-5qn6j-config-wrxhn" Mar 13 13:10:09.944911 master-0 kubenswrapper[28149]: I0313 13:10:09.944099 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"668a51dd-c5b3-4531-b707-39a00bfb5eef","Type":"ContainerStarted","Data":"e0935b3876242be1090bf9e9157c2f2b871ef4373096dfeec16f3088bebb885b"} Mar 13 13:10:09.946130 master-0 kubenswrapper[28149]: I0313 13:10:09.945877 28149 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Mar 13 13:10:10.001167 master-0 kubenswrapper[28149]: I0313 13:10:09.998554 28149 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=56.988406069 podStartE2EDuration="1m19.998523297s" podCreationTimestamp="2026-03-13 13:08:50 +0000 UTC" firstStartedPulling="2026-03-13 13:09:00.211922383 +0000 UTC m=+913.865387542" lastFinishedPulling="2026-03-13 13:09:23.222039611 +0000 UTC m=+936.875504770" observedRunningTime="2026-03-13 13:10:09.981527141 +0000 UTC m=+983.634992320" watchObservedRunningTime="2026-03-13 13:10:09.998523297 +0000 UTC m=+983.651988456" Mar 13 13:10:10.714546 master-0 kubenswrapper[28149]: I0313 13:10:10.713676 28149 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b39d916f-ac1c-445c-8f64-43f28c477d65" path="/var/lib/kubelet/pods/b39d916f-ac1c-445c-8f64-43f28c477d65/volumes" Mar 13 13:10:11.097793 master-0 kubenswrapper[28149]: I0313 13:10:11.097700 28149 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack/ovn-controller-5qn6j-config-wrxhn"] Mar 13 13:10:11.772529 master-0 kubenswrapper[28149]: I0313 13:10:11.772468 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/0e1ffcf0-0cdc-4a69-884c-47edbe0caf50-etc-swift\") pod \"swift-storage-0\" (UID: \"0e1ffcf0-0cdc-4a69-884c-47edbe0caf50\") " pod="openstack/swift-storage-0" Mar 13 13:10:11.781889 master-0 kubenswrapper[28149]: I0313 13:10:11.781837 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/0e1ffcf0-0cdc-4a69-884c-47edbe0caf50-etc-swift\") pod \"swift-storage-0\" (UID: \"0e1ffcf0-0cdc-4a69-884c-47edbe0caf50\") " pod="openstack/swift-storage-0" Mar 13 13:10:12.055390 master-0 kubenswrapper[28149]: I0313 13:10:12.055180 28149 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-storage-0" Mar 13 13:10:12.124717 master-0 kubenswrapper[28149]: I0313 13:10:12.123754 28149 generic.go:334] "Generic (PLEG): container finished" podID="99af3e6a-1bb0-4849-80b8-f20eeb35a3e4" containerID="6549c3175765d42b7e813efabb0a1f0603ba4f1d4615804beb48b2ef2bc7accb" exitCode=0 Mar 13 13:10:12.124717 master-0 kubenswrapper[28149]: I0313 13:10:12.123834 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-5qn6j-config-wrxhn" event={"ID":"99af3e6a-1bb0-4849-80b8-f20eeb35a3e4","Type":"ContainerDied","Data":"6549c3175765d42b7e813efabb0a1f0603ba4f1d4615804beb48b2ef2bc7accb"} Mar 13 13:10:12.124717 master-0 kubenswrapper[28149]: I0313 13:10:12.123878 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-5qn6j-config-wrxhn" event={"ID":"99af3e6a-1bb0-4849-80b8-f20eeb35a3e4","Type":"ContainerStarted","Data":"de869badefa05bcdf158b88d227d2dae1b209886bc1c8c8c00e3d6bcb4d89a29"} Mar 13 13:10:13.560473 master-0 kubenswrapper[28149]: W0313 13:10:13.560408 28149 manager.go:1169] 
Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0e1ffcf0_0cdc_4a69_884c_47edbe0caf50.slice/crio-f8a9256b259e0d9b36d4b1d1257d24c77d46048734c60a3c8e51d0269c22a223 WatchSource:0}: Error finding container f8a9256b259e0d9b36d4b1d1257d24c77d46048734c60a3c8e51d0269c22a223: Status 404 returned error can't find the container with id f8a9256b259e0d9b36d4b1d1257d24c77d46048734c60a3c8e51d0269c22a223 Mar 13 13:10:13.566100 master-0 kubenswrapper[28149]: I0313 13:10:13.564736 28149 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"] Mar 13 13:10:14.131072 master-0 kubenswrapper[28149]: I0313 13:10:14.131010 28149 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-5qn6j-config-wrxhn" Mar 13 13:10:14.659554 master-0 kubenswrapper[28149]: I0313 13:10:14.659497 28149 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-5qn6j-config-wrxhn" Mar 13 13:10:14.659554 master-0 kubenswrapper[28149]: I0313 13:10:14.659513 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-5qn6j-config-wrxhn" event={"ID":"99af3e6a-1bb0-4849-80b8-f20eeb35a3e4","Type":"ContainerDied","Data":"de869badefa05bcdf158b88d227d2dae1b209886bc1c8c8c00e3d6bcb4d89a29"} Mar 13 13:10:14.660092 master-0 kubenswrapper[28149]: I0313 13:10:14.659566 28149 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="de869badefa05bcdf158b88d227d2dae1b209886bc1c8c8c00e3d6bcb4d89a29" Mar 13 13:10:14.662734 master-0 kubenswrapper[28149]: I0313 13:10:14.662633 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"0e1ffcf0-0cdc-4a69-884c-47edbe0caf50","Type":"ContainerStarted","Data":"f8a9256b259e0d9b36d4b1d1257d24c77d46048734c60a3c8e51d0269c22a223"} Mar 13 13:10:14.734591 master-0 kubenswrapper[28149]: I0313 13:10:14.734494 28149 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/99af3e6a-1bb0-4849-80b8-f20eeb35a3e4-var-log-ovn\") pod \"99af3e6a-1bb0-4849-80b8-f20eeb35a3e4\" (UID: \"99af3e6a-1bb0-4849-80b8-f20eeb35a3e4\") " Mar 13 13:10:14.734896 master-0 kubenswrapper[28149]: I0313 13:10:14.734624 28149 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mqzwp\" (UniqueName: \"kubernetes.io/projected/99af3e6a-1bb0-4849-80b8-f20eeb35a3e4-kube-api-access-mqzwp\") pod \"99af3e6a-1bb0-4849-80b8-f20eeb35a3e4\" (UID: \"99af3e6a-1bb0-4849-80b8-f20eeb35a3e4\") " Mar 13 13:10:14.734896 master-0 kubenswrapper[28149]: I0313 13:10:14.734699 28149 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/99af3e6a-1bb0-4849-80b8-f20eeb35a3e4-var-run\") pod \"99af3e6a-1bb0-4849-80b8-f20eeb35a3e4\" (UID: \"99af3e6a-1bb0-4849-80b8-f20eeb35a3e4\") " Mar 13 13:10:14.734896 master-0 kubenswrapper[28149]: I0313 13:10:14.734735 28149 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/99af3e6a-1bb0-4849-80b8-f20eeb35a3e4-additional-scripts\") pod \"99af3e6a-1bb0-4849-80b8-f20eeb35a3e4\" (UID: \"99af3e6a-1bb0-4849-80b8-f20eeb35a3e4\") " Mar 13 13:10:14.734896 master-0 kubenswrapper[28149]: I0313 13:10:14.734869 28149 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/99af3e6a-1bb0-4849-80b8-f20eeb35a3e4-scripts\") pod \"99af3e6a-1bb0-4849-80b8-f20eeb35a3e4\" (UID: \"99af3e6a-1bb0-4849-80b8-f20eeb35a3e4\") " Mar 13 13:10:14.735148 master-0 kubenswrapper[28149]: I0313 13:10:14.735013 28149 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-ovn\" (UniqueName: 
\"kubernetes.io/host-path/99af3e6a-1bb0-4849-80b8-f20eeb35a3e4-var-run-ovn\") pod \"99af3e6a-1bb0-4849-80b8-f20eeb35a3e4\" (UID: \"99af3e6a-1bb0-4849-80b8-f20eeb35a3e4\") " Mar 13 13:10:14.736624 master-0 kubenswrapper[28149]: I0313 13:10:14.736572 28149 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/99af3e6a-1bb0-4849-80b8-f20eeb35a3e4-var-log-ovn" (OuterVolumeSpecName: "var-log-ovn") pod "99af3e6a-1bb0-4849-80b8-f20eeb35a3e4" (UID: "99af3e6a-1bb0-4849-80b8-f20eeb35a3e4"). InnerVolumeSpecName "var-log-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 13 13:10:14.739707 master-0 kubenswrapper[28149]: I0313 13:10:14.739620 28149 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/99af3e6a-1bb0-4849-80b8-f20eeb35a3e4-var-run" (OuterVolumeSpecName: "var-run") pod "99af3e6a-1bb0-4849-80b8-f20eeb35a3e4" (UID: "99af3e6a-1bb0-4849-80b8-f20eeb35a3e4"). InnerVolumeSpecName "var-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 13 13:10:14.739822 master-0 kubenswrapper[28149]: I0313 13:10:14.739754 28149 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/99af3e6a-1bb0-4849-80b8-f20eeb35a3e4-var-run-ovn" (OuterVolumeSpecName: "var-run-ovn") pod "99af3e6a-1bb0-4849-80b8-f20eeb35a3e4" (UID: "99af3e6a-1bb0-4849-80b8-f20eeb35a3e4"). InnerVolumeSpecName "var-run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 13 13:10:14.741728 master-0 kubenswrapper[28149]: I0313 13:10:14.741604 28149 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/99af3e6a-1bb0-4849-80b8-f20eeb35a3e4-additional-scripts" (OuterVolumeSpecName: "additional-scripts") pod "99af3e6a-1bb0-4849-80b8-f20eeb35a3e4" (UID: "99af3e6a-1bb0-4849-80b8-f20eeb35a3e4"). InnerVolumeSpecName "additional-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 13 13:10:14.742285 master-0 kubenswrapper[28149]: I0313 13:10:14.742191 28149 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/99af3e6a-1bb0-4849-80b8-f20eeb35a3e4-scripts" (OuterVolumeSpecName: "scripts") pod "99af3e6a-1bb0-4849-80b8-f20eeb35a3e4" (UID: "99af3e6a-1bb0-4849-80b8-f20eeb35a3e4"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 13 13:10:14.743209 master-0 kubenswrapper[28149]: I0313 13:10:14.742803 28149 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/99af3e6a-1bb0-4849-80b8-f20eeb35a3e4-kube-api-access-mqzwp" (OuterVolumeSpecName: "kube-api-access-mqzwp") pod "99af3e6a-1bb0-4849-80b8-f20eeb35a3e4" (UID: "99af3e6a-1bb0-4849-80b8-f20eeb35a3e4"). InnerVolumeSpecName "kube-api-access-mqzwp". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 13:10:14.838577 master-0 kubenswrapper[28149]: I0313 13:10:14.838434 28149 reconciler_common.go:293] "Volume detached for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/99af3e6a-1bb0-4849-80b8-f20eeb35a3e4-var-run-ovn\") on node \"master-0\" DevicePath \"\"" Mar 13 13:10:14.838577 master-0 kubenswrapper[28149]: I0313 13:10:14.838537 28149 reconciler_common.go:293] "Volume detached for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/99af3e6a-1bb0-4849-80b8-f20eeb35a3e4-var-log-ovn\") on node \"master-0\" DevicePath \"\"" Mar 13 13:10:14.838577 master-0 kubenswrapper[28149]: I0313 13:10:14.838553 28149 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mqzwp\" (UniqueName: \"kubernetes.io/projected/99af3e6a-1bb0-4849-80b8-f20eeb35a3e4-kube-api-access-mqzwp\") on node \"master-0\" DevicePath \"\"" Mar 13 13:10:14.838965 master-0 kubenswrapper[28149]: I0313 13:10:14.838631 28149 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: 
\"kubernetes.io/host-path/99af3e6a-1bb0-4849-80b8-f20eeb35a3e4-var-run\") on node \"master-0\" DevicePath \"\"" Mar 13 13:10:14.838965 master-0 kubenswrapper[28149]: I0313 13:10:14.838645 28149 reconciler_common.go:293] "Volume detached for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/99af3e6a-1bb0-4849-80b8-f20eeb35a3e4-additional-scripts\") on node \"master-0\" DevicePath \"\"" Mar 13 13:10:14.838965 master-0 kubenswrapper[28149]: I0313 13:10:14.838655 28149 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/99af3e6a-1bb0-4849-80b8-f20eeb35a3e4-scripts\") on node \"master-0\" DevicePath \"\"" Mar 13 13:10:15.797943 master-0 kubenswrapper[28149]: I0313 13:10:15.797345 28149 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-5qn6j-config-wrxhn"] Mar 13 13:10:15.811972 master-0 kubenswrapper[28149]: I0313 13:10:15.809228 28149 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-5qn6j-config-wrxhn"] Mar 13 13:10:16.733422 master-0 kubenswrapper[28149]: I0313 13:10:16.732043 28149 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="99af3e6a-1bb0-4849-80b8-f20eeb35a3e4" path="/var/lib/kubelet/pods/99af3e6a-1bb0-4849-80b8-f20eeb35a3e4/volumes" Mar 13 13:10:17.336869 master-0 kubenswrapper[28149]: I0313 13:10:17.336702 28149 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0" Mar 13 13:10:17.820167 master-0 kubenswrapper[28149]: I0313 13:10:17.820088 28149 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-create-h89wz"] Mar 13 13:10:17.820834 master-0 kubenswrapper[28149]: E0313 13:10:17.820639 28149 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="99af3e6a-1bb0-4849-80b8-f20eeb35a3e4" containerName="ovn-config" Mar 13 13:10:17.820834 master-0 kubenswrapper[28149]: I0313 13:10:17.820661 28149 state_mem.go:107] "Deleted CPUSet 
assignment" podUID="99af3e6a-1bb0-4849-80b8-f20eeb35a3e4" containerName="ovn-config" Mar 13 13:10:17.820993 master-0 kubenswrapper[28149]: I0313 13:10:17.820915 28149 memory_manager.go:354] "RemoveStaleState removing state" podUID="99af3e6a-1bb0-4849-80b8-f20eeb35a3e4" containerName="ovn-config" Mar 13 13:10:17.822331 master-0 kubenswrapper[28149]: I0313 13:10:17.821700 28149 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-h89wz" Mar 13 13:10:17.889332 master-0 kubenswrapper[28149]: I0313 13:10:17.889184 28149 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-h89wz"] Mar 13 13:10:17.926683 master-0 kubenswrapper[28149]: I0313 13:10:17.925589 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8f589\" (UniqueName: \"kubernetes.io/projected/170af500-fab8-49d0-83fb-16fa86431761-kube-api-access-8f589\") pod \"cinder-db-create-h89wz\" (UID: \"170af500-fab8-49d0-83fb-16fa86431761\") " pod="openstack/cinder-db-create-h89wz" Mar 13 13:10:17.926683 master-0 kubenswrapper[28149]: I0313 13:10:17.925699 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/170af500-fab8-49d0-83fb-16fa86431761-operator-scripts\") pod \"cinder-db-create-h89wz\" (UID: \"170af500-fab8-49d0-83fb-16fa86431761\") " pod="openstack/cinder-db-create-h89wz" Mar 13 13:10:18.027884 master-0 kubenswrapper[28149]: I0313 13:10:18.027804 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8f589\" (UniqueName: \"kubernetes.io/projected/170af500-fab8-49d0-83fb-16fa86431761-kube-api-access-8f589\") pod \"cinder-db-create-h89wz\" (UID: \"170af500-fab8-49d0-83fb-16fa86431761\") " pod="openstack/cinder-db-create-h89wz" Mar 13 13:10:18.028243 master-0 kubenswrapper[28149]: I0313 13:10:18.027934 28149 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/170af500-fab8-49d0-83fb-16fa86431761-operator-scripts\") pod \"cinder-db-create-h89wz\" (UID: \"170af500-fab8-49d0-83fb-16fa86431761\") " pod="openstack/cinder-db-create-h89wz" Mar 13 13:10:18.028806 master-0 kubenswrapper[28149]: I0313 13:10:18.028762 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/170af500-fab8-49d0-83fb-16fa86431761-operator-scripts\") pod \"cinder-db-create-h89wz\" (UID: \"170af500-fab8-49d0-83fb-16fa86431761\") " pod="openstack/cinder-db-create-h89wz" Mar 13 13:10:18.112456 master-0 kubenswrapper[28149]: I0313 13:10:18.112397 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8f589\" (UniqueName: \"kubernetes.io/projected/170af500-fab8-49d0-83fb-16fa86431761-kube-api-access-8f589\") pod \"cinder-db-create-h89wz\" (UID: \"170af500-fab8-49d0-83fb-16fa86431761\") " pod="openstack/cinder-db-create-h89wz" Mar 13 13:10:18.173400 master-0 kubenswrapper[28149]: I0313 13:10:18.173279 28149 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-h89wz" Mar 13 13:10:18.337428 master-0 kubenswrapper[28149]: I0313 13:10:18.324253 28149 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-85ac-account-create-update-x79xz"] Mar 13 13:10:18.337428 master-0 kubenswrapper[28149]: I0313 13:10:18.326019 28149 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-85ac-account-create-update-x79xz" Mar 13 13:10:18.337428 master-0 kubenswrapper[28149]: I0313 13:10:18.329497 28149 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-db-secret" Mar 13 13:10:18.345364 master-0 kubenswrapper[28149]: I0313 13:10:18.345290 28149 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-85ac-account-create-update-x79xz"] Mar 13 13:10:18.436385 master-0 kubenswrapper[28149]: I0313 13:10:18.436239 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kdgzh\" (UniqueName: \"kubernetes.io/projected/fac20de4-3e4b-4934-b153-aff181b435de-kube-api-access-kdgzh\") pod \"cinder-85ac-account-create-update-x79xz\" (UID: \"fac20de4-3e4b-4934-b153-aff181b435de\") " pod="openstack/cinder-85ac-account-create-update-x79xz" Mar 13 13:10:18.436385 master-0 kubenswrapper[28149]: I0313 13:10:18.436352 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fac20de4-3e4b-4934-b153-aff181b435de-operator-scripts\") pod \"cinder-85ac-account-create-update-x79xz\" (UID: \"fac20de4-3e4b-4934-b153-aff181b435de\") " pod="openstack/cinder-85ac-account-create-update-x79xz" Mar 13 13:10:18.451725 master-0 kubenswrapper[28149]: I0313 13:10:18.450726 28149 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-create-b4zxh"] Mar 13 13:10:18.477376 master-0 kubenswrapper[28149]: I0313 13:10:18.469848 28149 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-b4zxh"] Mar 13 13:10:18.477376 master-0 kubenswrapper[28149]: I0313 13:10:18.470030 28149 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-create-b4zxh" Mar 13 13:10:18.541156 master-0 kubenswrapper[28149]: I0313 13:10:18.538770 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kdgzh\" (UniqueName: \"kubernetes.io/projected/fac20de4-3e4b-4934-b153-aff181b435de-kube-api-access-kdgzh\") pod \"cinder-85ac-account-create-update-x79xz\" (UID: \"fac20de4-3e4b-4934-b153-aff181b435de\") " pod="openstack/cinder-85ac-account-create-update-x79xz" Mar 13 13:10:18.541156 master-0 kubenswrapper[28149]: I0313 13:10:18.538869 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fac20de4-3e4b-4934-b153-aff181b435de-operator-scripts\") pod \"cinder-85ac-account-create-update-x79xz\" (UID: \"fac20de4-3e4b-4934-b153-aff181b435de\") " pod="openstack/cinder-85ac-account-create-update-x79xz" Mar 13 13:10:18.541156 master-0 kubenswrapper[28149]: I0313 13:10:18.539738 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fac20de4-3e4b-4934-b153-aff181b435de-operator-scripts\") pod \"cinder-85ac-account-create-update-x79xz\" (UID: \"fac20de4-3e4b-4934-b153-aff181b435de\") " pod="openstack/cinder-85ac-account-create-update-x79xz" Mar 13 13:10:18.579602 master-0 kubenswrapper[28149]: I0313 13:10:18.579555 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kdgzh\" (UniqueName: \"kubernetes.io/projected/fac20de4-3e4b-4934-b153-aff181b435de-kube-api-access-kdgzh\") pod \"cinder-85ac-account-create-update-x79xz\" (UID: \"fac20de4-3e4b-4934-b153-aff181b435de\") " pod="openstack/cinder-85ac-account-create-update-x79xz" Mar 13 13:10:18.596696 master-0 kubenswrapper[28149]: I0313 13:10:18.596601 28149 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-1148-account-create-update-ggwcs"] Mar 13 13:10:18.606569 
master-0 kubenswrapper[28149]: I0313 13:10:18.598638 28149 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-1148-account-create-update-ggwcs" Mar 13 13:10:18.609469 master-0 kubenswrapper[28149]: I0313 13:10:18.606806 28149 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-db-secret" Mar 13 13:10:18.612460 master-0 kubenswrapper[28149]: I0313 13:10:18.612337 28149 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-sync-nxvs5"] Mar 13 13:10:18.619182 master-0 kubenswrapper[28149]: I0313 13:10:18.614274 28149 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-nxvs5" Mar 13 13:10:18.619182 master-0 kubenswrapper[28149]: I0313 13:10:18.616194 28149 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Mar 13 13:10:18.619182 master-0 kubenswrapper[28149]: I0313 13:10:18.616545 28149 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Mar 13 13:10:18.619182 master-0 kubenswrapper[28149]: I0313 13:10:18.616660 28149 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Mar 13 13:10:18.635986 master-0 kubenswrapper[28149]: I0313 13:10:18.628239 28149 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-1148-account-create-update-ggwcs"] Mar 13 13:10:18.646530 master-0 kubenswrapper[28149]: I0313 13:10:18.639661 28149 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-nxvs5"] Mar 13 13:10:18.646530 master-0 kubenswrapper[28149]: I0313 13:10:18.640599 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bfpdp\" (UniqueName: \"kubernetes.io/projected/459c48e6-39bc-4241-9810-a203d2cde587-kube-api-access-bfpdp\") pod \"neutron-db-create-b4zxh\" (UID: \"459c48e6-39bc-4241-9810-a203d2cde587\") " 
pod="openstack/neutron-db-create-b4zxh" Mar 13 13:10:18.646530 master-0 kubenswrapper[28149]: I0313 13:10:18.640698 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/459c48e6-39bc-4241-9810-a203d2cde587-operator-scripts\") pod \"neutron-db-create-b4zxh\" (UID: \"459c48e6-39bc-4241-9810-a203d2cde587\") " pod="openstack/neutron-db-create-b4zxh" Mar 13 13:10:18.664679 master-0 kubenswrapper[28149]: I0313 13:10:18.660178 28149 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-85ac-account-create-update-x79xz" Mar 13 13:10:18.744013 master-0 kubenswrapper[28149]: I0313 13:10:18.743805 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bfpdp\" (UniqueName: \"kubernetes.io/projected/459c48e6-39bc-4241-9810-a203d2cde587-kube-api-access-bfpdp\") pod \"neutron-db-create-b4zxh\" (UID: \"459c48e6-39bc-4241-9810-a203d2cde587\") " pod="openstack/neutron-db-create-b4zxh" Mar 13 13:10:18.744013 master-0 kubenswrapper[28149]: I0313 13:10:18.743919 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ab18fda5-1cb5-4875-9daf-045d6e20138e-combined-ca-bundle\") pod \"keystone-db-sync-nxvs5\" (UID: \"ab18fda5-1cb5-4875-9daf-045d6e20138e\") " pod="openstack/keystone-db-sync-nxvs5" Mar 13 13:10:18.744913 master-0 kubenswrapper[28149]: I0313 13:10:18.744306 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2927z\" (UniqueName: \"kubernetes.io/projected/bb821b74-acb1-49cc-8240-9eb3e2626153-kube-api-access-2927z\") pod \"neutron-1148-account-create-update-ggwcs\" (UID: \"bb821b74-acb1-49cc-8240-9eb3e2626153\") " pod="openstack/neutron-1148-account-create-update-ggwcs" Mar 13 13:10:18.744913 master-0 
kubenswrapper[28149]: I0313 13:10:18.744538 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/459c48e6-39bc-4241-9810-a203d2cde587-operator-scripts\") pod \"neutron-db-create-b4zxh\" (UID: \"459c48e6-39bc-4241-9810-a203d2cde587\") " pod="openstack/neutron-db-create-b4zxh" Mar 13 13:10:18.744913 master-0 kubenswrapper[28149]: I0313 13:10:18.744880 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ab18fda5-1cb5-4875-9daf-045d6e20138e-config-data\") pod \"keystone-db-sync-nxvs5\" (UID: \"ab18fda5-1cb5-4875-9daf-045d6e20138e\") " pod="openstack/keystone-db-sync-nxvs5" Mar 13 13:10:18.749026 master-0 kubenswrapper[28149]: I0313 13:10:18.745158 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bb821b74-acb1-49cc-8240-9eb3e2626153-operator-scripts\") pod \"neutron-1148-account-create-update-ggwcs\" (UID: \"bb821b74-acb1-49cc-8240-9eb3e2626153\") " pod="openstack/neutron-1148-account-create-update-ggwcs" Mar 13 13:10:18.749026 master-0 kubenswrapper[28149]: I0313 13:10:18.745254 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7tqvc\" (UniqueName: \"kubernetes.io/projected/ab18fda5-1cb5-4875-9daf-045d6e20138e-kube-api-access-7tqvc\") pod \"keystone-db-sync-nxvs5\" (UID: \"ab18fda5-1cb5-4875-9daf-045d6e20138e\") " pod="openstack/keystone-db-sync-nxvs5" Mar 13 13:10:18.749026 master-0 kubenswrapper[28149]: I0313 13:10:18.745610 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/459c48e6-39bc-4241-9810-a203d2cde587-operator-scripts\") pod \"neutron-db-create-b4zxh\" (UID: \"459c48e6-39bc-4241-9810-a203d2cde587\") " 
pod="openstack/neutron-db-create-b4zxh" Mar 13 13:10:18.767441 master-0 kubenswrapper[28149]: I0313 13:10:18.767360 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bfpdp\" (UniqueName: \"kubernetes.io/projected/459c48e6-39bc-4241-9810-a203d2cde587-kube-api-access-bfpdp\") pod \"neutron-db-create-b4zxh\" (UID: \"459c48e6-39bc-4241-9810-a203d2cde587\") " pod="openstack/neutron-db-create-b4zxh" Mar 13 13:10:18.787673 master-0 kubenswrapper[28149]: I0313 13:10:18.787608 28149 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-b4zxh" Mar 13 13:10:18.849833 master-0 kubenswrapper[28149]: I0313 13:10:18.849761 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ab18fda5-1cb5-4875-9daf-045d6e20138e-config-data\") pod \"keystone-db-sync-nxvs5\" (UID: \"ab18fda5-1cb5-4875-9daf-045d6e20138e\") " pod="openstack/keystone-db-sync-nxvs5" Mar 13 13:10:18.850090 master-0 kubenswrapper[28149]: I0313 13:10:18.849910 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bb821b74-acb1-49cc-8240-9eb3e2626153-operator-scripts\") pod \"neutron-1148-account-create-update-ggwcs\" (UID: \"bb821b74-acb1-49cc-8240-9eb3e2626153\") " pod="openstack/neutron-1148-account-create-update-ggwcs" Mar 13 13:10:18.850090 master-0 kubenswrapper[28149]: I0313 13:10:18.849974 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7tqvc\" (UniqueName: \"kubernetes.io/projected/ab18fda5-1cb5-4875-9daf-045d6e20138e-kube-api-access-7tqvc\") pod \"keystone-db-sync-nxvs5\" (UID: \"ab18fda5-1cb5-4875-9daf-045d6e20138e\") " pod="openstack/keystone-db-sync-nxvs5" Mar 13 13:10:18.850090 master-0 kubenswrapper[28149]: I0313 13:10:18.850082 28149 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ab18fda5-1cb5-4875-9daf-045d6e20138e-combined-ca-bundle\") pod \"keystone-db-sync-nxvs5\" (UID: \"ab18fda5-1cb5-4875-9daf-045d6e20138e\") " pod="openstack/keystone-db-sync-nxvs5" Mar 13 13:10:18.850472 master-0 kubenswrapper[28149]: I0313 13:10:18.850446 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2927z\" (UniqueName: \"kubernetes.io/projected/bb821b74-acb1-49cc-8240-9eb3e2626153-kube-api-access-2927z\") pod \"neutron-1148-account-create-update-ggwcs\" (UID: \"bb821b74-acb1-49cc-8240-9eb3e2626153\") " pod="openstack/neutron-1148-account-create-update-ggwcs" Mar 13 13:10:18.850963 master-0 kubenswrapper[28149]: I0313 13:10:18.850915 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bb821b74-acb1-49cc-8240-9eb3e2626153-operator-scripts\") pod \"neutron-1148-account-create-update-ggwcs\" (UID: \"bb821b74-acb1-49cc-8240-9eb3e2626153\") " pod="openstack/neutron-1148-account-create-update-ggwcs" Mar 13 13:10:18.854728 master-0 kubenswrapper[28149]: I0313 13:10:18.854667 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ab18fda5-1cb5-4875-9daf-045d6e20138e-config-data\") pod \"keystone-db-sync-nxvs5\" (UID: \"ab18fda5-1cb5-4875-9daf-045d6e20138e\") " pod="openstack/keystone-db-sync-nxvs5" Mar 13 13:10:18.857200 master-0 kubenswrapper[28149]: I0313 13:10:18.857129 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ab18fda5-1cb5-4875-9daf-045d6e20138e-combined-ca-bundle\") pod \"keystone-db-sync-nxvs5\" (UID: \"ab18fda5-1cb5-4875-9daf-045d6e20138e\") " pod="openstack/keystone-db-sync-nxvs5" Mar 13 13:10:18.871708 master-0 kubenswrapper[28149]: I0313 13:10:18.871638 28149 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-7tqvc\" (UniqueName: \"kubernetes.io/projected/ab18fda5-1cb5-4875-9daf-045d6e20138e-kube-api-access-7tqvc\") pod \"keystone-db-sync-nxvs5\" (UID: \"ab18fda5-1cb5-4875-9daf-045d6e20138e\") " pod="openstack/keystone-db-sync-nxvs5" Mar 13 13:10:18.871917 master-0 kubenswrapper[28149]: I0313 13:10:18.871751 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2927z\" (UniqueName: \"kubernetes.io/projected/bb821b74-acb1-49cc-8240-9eb3e2626153-kube-api-access-2927z\") pod \"neutron-1148-account-create-update-ggwcs\" (UID: \"bb821b74-acb1-49cc-8240-9eb3e2626153\") " pod="openstack/neutron-1148-account-create-update-ggwcs" Mar 13 13:10:18.976764 master-0 kubenswrapper[28149]: I0313 13:10:18.976689 28149 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-1148-account-create-update-ggwcs" Mar 13 13:10:18.989187 master-0 kubenswrapper[28149]: I0313 13:10:18.988558 28149 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-sync-nxvs5" Mar 13 13:10:19.197454 master-0 kubenswrapper[28149]: I0313 13:10:19.197369 28149 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-cell1-server-0" podUID="668a51dd-c5b3-4531-b707-39a00bfb5eef" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.128.0.171:5671: connect: connection refused" Mar 13 13:10:26.519752 master-0 kubenswrapper[28149]: I0313 13:10:26.519356 28149 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-b4zxh"] Mar 13 13:10:26.532415 master-0 kubenswrapper[28149]: I0313 13:10:26.530221 28149 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-h89wz"] Mar 13 13:10:26.615526 master-0 kubenswrapper[28149]: I0313 13:10:26.590960 28149 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-85ac-account-create-update-x79xz"] Mar 13 13:10:26.648217 master-0 kubenswrapper[28149]: I0313 13:10:26.644565 28149 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-nxvs5"] Mar 13 13:10:26.661086 master-0 kubenswrapper[28149]: I0313 13:10:26.661025 28149 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-1148-account-create-update-ggwcs"] Mar 13 13:10:26.872482 master-0 kubenswrapper[28149]: I0313 13:10:26.872431 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-b4zxh" event={"ID":"459c48e6-39bc-4241-9810-a203d2cde587","Type":"ContainerStarted","Data":"da326dd4b0137575fd87bafd41b700048d2ae049480c5416ffc34cbda4ede00c"} Mar 13 13:10:26.873988 master-0 kubenswrapper[28149]: I0313 13:10:26.873947 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-h89wz" event={"ID":"170af500-fab8-49d0-83fb-16fa86431761","Type":"ContainerStarted","Data":"fd780b6bb2cde3be7e5886b76b5bbe06435fdf74173164f6e7530802caeae622"} Mar 13 13:10:26.875363 master-0 kubenswrapper[28149]: I0313 
13:10:26.875274 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-85ac-account-create-update-x79xz" event={"ID":"fac20de4-3e4b-4934-b153-aff181b435de","Type":"ContainerStarted","Data":"48deefb234845d15ed94250a29873e0e9e4db72cce0a68ea0062848c0a900987"} Mar 13 13:10:26.876783 master-0 kubenswrapper[28149]: I0313 13:10:26.876753 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-tfhs6" event={"ID":"e2b3d9c8-1d1e-425b-9780-d9ad7b26318a","Type":"ContainerStarted","Data":"16f54102812c62c2c278b6d4858912a7d8f697f03bf80a216128eaf4a1224da2"} Mar 13 13:10:26.877978 master-0 kubenswrapper[28149]: I0313 13:10:26.877944 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-nxvs5" event={"ID":"ab18fda5-1cb5-4875-9daf-045d6e20138e","Type":"ContainerStarted","Data":"34f9234e67ab3d74ca60cd356760ae285c4f51fd1b6ca2929bd13015027697fb"} Mar 13 13:10:26.880079 master-0 kubenswrapper[28149]: I0313 13:10:26.880041 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"0e1ffcf0-0cdc-4a69-884c-47edbe0caf50","Type":"ContainerStarted","Data":"22741d24db675803329c08e38f86204be98fc82784f56d7c12caadbd65093b6b"} Mar 13 13:10:26.880079 master-0 kubenswrapper[28149]: I0313 13:10:26.880071 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"0e1ffcf0-0cdc-4a69-884c-47edbe0caf50","Type":"ContainerStarted","Data":"492b6f4f2af3c70c3e355a165f116d929e4a05001a63e6e14845e0a8aadd9526"} Mar 13 13:10:26.880286 master-0 kubenswrapper[28149]: I0313 13:10:26.880084 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"0e1ffcf0-0cdc-4a69-884c-47edbe0caf50","Type":"ContainerStarted","Data":"d5e07bbe0932769725039e22bac0b5cfa35a17f93fbbea874da8eb2aa040363b"} Mar 13 13:10:26.884097 master-0 kubenswrapper[28149]: I0313 13:10:26.884041 28149 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openstack/neutron-1148-account-create-update-ggwcs" event={"ID":"bb821b74-acb1-49cc-8240-9eb3e2626153","Type":"ContainerStarted","Data":"33600cd3e5cd71e3c33554cd08cab688cb3943cfd9a3482578269164be428bce"} Mar 13 13:10:26.901458 master-0 kubenswrapper[28149]: I0313 13:10:26.900568 28149 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-db-sync-tfhs6" podStartSLOduration=2.895710738 podStartE2EDuration="25.900543791s" podCreationTimestamp="2026-03-13 13:10:01 +0000 UTC" firstStartedPulling="2026-03-13 13:10:02.781812666 +0000 UTC m=+976.435277825" lastFinishedPulling="2026-03-13 13:10:25.786645719 +0000 UTC m=+999.440110878" observedRunningTime="2026-03-13 13:10:26.89941165 +0000 UTC m=+1000.552876829" watchObservedRunningTime="2026-03-13 13:10:26.900543791 +0000 UTC m=+1000.554008950" Mar 13 13:10:27.901678 master-0 kubenswrapper[28149]: I0313 13:10:27.901538 28149 generic.go:334] "Generic (PLEG): container finished" podID="bb821b74-acb1-49cc-8240-9eb3e2626153" containerID="e017cff0d0bc1dfcd2583468be6fc59e574d9884d89fb023195db8041991dd74" exitCode=0 Mar 13 13:10:27.901678 master-0 kubenswrapper[28149]: I0313 13:10:27.901634 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-1148-account-create-update-ggwcs" event={"ID":"bb821b74-acb1-49cc-8240-9eb3e2626153","Type":"ContainerDied","Data":"e017cff0d0bc1dfcd2583468be6fc59e574d9884d89fb023195db8041991dd74"} Mar 13 13:10:27.904730 master-0 kubenswrapper[28149]: I0313 13:10:27.904061 28149 generic.go:334] "Generic (PLEG): container finished" podID="459c48e6-39bc-4241-9810-a203d2cde587" containerID="1202be407b3751faf202a3aa9972be038f7dfde85b6a8542dc53e0b5eef9cd10" exitCode=0 Mar 13 13:10:27.904730 master-0 kubenswrapper[28149]: I0313 13:10:27.904131 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-b4zxh" 
event={"ID":"459c48e6-39bc-4241-9810-a203d2cde587","Type":"ContainerDied","Data":"1202be407b3751faf202a3aa9972be038f7dfde85b6a8542dc53e0b5eef9cd10"} Mar 13 13:10:27.907593 master-0 kubenswrapper[28149]: I0313 13:10:27.907555 28149 generic.go:334] "Generic (PLEG): container finished" podID="170af500-fab8-49d0-83fb-16fa86431761" containerID="6f62b4c153f89e6c18888d51e34d0304564afcafd308de42e708d609bfba3cfc" exitCode=0 Mar 13 13:10:27.907802 master-0 kubenswrapper[28149]: I0313 13:10:27.907755 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-h89wz" event={"ID":"170af500-fab8-49d0-83fb-16fa86431761","Type":"ContainerDied","Data":"6f62b4c153f89e6c18888d51e34d0304564afcafd308de42e708d609bfba3cfc"} Mar 13 13:10:27.917543 master-0 kubenswrapper[28149]: I0313 13:10:27.917491 28149 generic.go:334] "Generic (PLEG): container finished" podID="fac20de4-3e4b-4934-b153-aff181b435de" containerID="d416da7d71758e47f4d8a7e8ddd78e67f39f3ecfb1beaee2ba93600eb4ebf40d" exitCode=0 Mar 13 13:10:27.917763 master-0 kubenswrapper[28149]: I0313 13:10:27.917571 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-85ac-account-create-update-x79xz" event={"ID":"fac20de4-3e4b-4934-b153-aff181b435de","Type":"ContainerDied","Data":"d416da7d71758e47f4d8a7e8ddd78e67f39f3ecfb1beaee2ba93600eb4ebf40d"} Mar 13 13:10:27.943406 master-0 kubenswrapper[28149]: I0313 13:10:27.942825 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"0e1ffcf0-0cdc-4a69-884c-47edbe0caf50","Type":"ContainerStarted","Data":"deff4d361a383a26edbba3e58ed415cc1c3b14a429131fb669c07da6b9c0039e"} Mar 13 13:10:28.965166 master-0 kubenswrapper[28149]: I0313 13:10:28.964844 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"0e1ffcf0-0cdc-4a69-884c-47edbe0caf50","Type":"ContainerStarted","Data":"60638594e754a4b684dcd51f7bf8f7c286575b66274658703b5b6ab4232fc778"} Mar 13 
13:10:28.965166 master-0 kubenswrapper[28149]: I0313 13:10:28.964916 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"0e1ffcf0-0cdc-4a69-884c-47edbe0caf50","Type":"ContainerStarted","Data":"7311e8aa61021ab87c519d10fce12bed2086d4c4d24398d2ebf3f09b31cd17e9"} Mar 13 13:10:29.198379 master-0 kubenswrapper[28149]: I0313 13:10:29.198317 28149 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0" Mar 13 13:10:29.982297 master-0 kubenswrapper[28149]: I0313 13:10:29.982079 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"0e1ffcf0-0cdc-4a69-884c-47edbe0caf50","Type":"ContainerStarted","Data":"8ab77e0eb30c6b31300391542dc1ebf83563c17ffd51418ad5ce45908af4cdd6"} Mar 13 13:10:32.384889 master-0 kubenswrapper[28149]: I0313 13:10:32.384844 28149 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-85ac-account-create-update-x79xz" Mar 13 13:10:32.413619 master-0 kubenswrapper[28149]: I0313 13:10:32.413171 28149 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-h89wz" Mar 13 13:10:32.480593 master-0 kubenswrapper[28149]: I0313 13:10:32.479078 28149 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-1148-account-create-update-ggwcs" Mar 13 13:10:32.508098 master-0 kubenswrapper[28149]: I0313 13:10:32.489243 28149 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fac20de4-3e4b-4934-b153-aff181b435de-operator-scripts\") pod \"fac20de4-3e4b-4934-b153-aff181b435de\" (UID: \"fac20de4-3e4b-4934-b153-aff181b435de\") " Mar 13 13:10:32.508098 master-0 kubenswrapper[28149]: I0313 13:10:32.489518 28149 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/170af500-fab8-49d0-83fb-16fa86431761-operator-scripts\") pod \"170af500-fab8-49d0-83fb-16fa86431761\" (UID: \"170af500-fab8-49d0-83fb-16fa86431761\") " Mar 13 13:10:32.508098 master-0 kubenswrapper[28149]: I0313 13:10:32.489690 28149 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kdgzh\" (UniqueName: \"kubernetes.io/projected/fac20de4-3e4b-4934-b153-aff181b435de-kube-api-access-kdgzh\") pod \"fac20de4-3e4b-4934-b153-aff181b435de\" (UID: \"fac20de4-3e4b-4934-b153-aff181b435de\") " Mar 13 13:10:32.508098 master-0 kubenswrapper[28149]: I0313 13:10:32.489797 28149 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8f589\" (UniqueName: \"kubernetes.io/projected/170af500-fab8-49d0-83fb-16fa86431761-kube-api-access-8f589\") pod \"170af500-fab8-49d0-83fb-16fa86431761\" (UID: \"170af500-fab8-49d0-83fb-16fa86431761\") " Mar 13 13:10:32.508098 master-0 kubenswrapper[28149]: I0313 13:10:32.498244 28149 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/170af500-fab8-49d0-83fb-16fa86431761-kube-api-access-8f589" (OuterVolumeSpecName: "kube-api-access-8f589") pod "170af500-fab8-49d0-83fb-16fa86431761" (UID: "170af500-fab8-49d0-83fb-16fa86431761"). 
InnerVolumeSpecName "kube-api-access-8f589". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 13:10:32.508098 master-0 kubenswrapper[28149]: I0313 13:10:32.499252 28149 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fac20de4-3e4b-4934-b153-aff181b435de-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "fac20de4-3e4b-4934-b153-aff181b435de" (UID: "fac20de4-3e4b-4934-b153-aff181b435de"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 13 13:10:32.512894 master-0 kubenswrapper[28149]: I0313 13:10:32.512764 28149 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fac20de4-3e4b-4934-b153-aff181b435de-kube-api-access-kdgzh" (OuterVolumeSpecName: "kube-api-access-kdgzh") pod "fac20de4-3e4b-4934-b153-aff181b435de" (UID: "fac20de4-3e4b-4934-b153-aff181b435de"). InnerVolumeSpecName "kube-api-access-kdgzh". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 13:10:32.518092 master-0 kubenswrapper[28149]: I0313 13:10:32.517872 28149 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/170af500-fab8-49d0-83fb-16fa86431761-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "170af500-fab8-49d0-83fb-16fa86431761" (UID: "170af500-fab8-49d0-83fb-16fa86431761"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 13 13:10:32.532780 master-0 kubenswrapper[28149]: I0313 13:10:32.532725 28149 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-create-b4zxh" Mar 13 13:10:32.599925 master-0 kubenswrapper[28149]: I0313 13:10:32.596431 28149 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2927z\" (UniqueName: \"kubernetes.io/projected/bb821b74-acb1-49cc-8240-9eb3e2626153-kube-api-access-2927z\") pod \"bb821b74-acb1-49cc-8240-9eb3e2626153\" (UID: \"bb821b74-acb1-49cc-8240-9eb3e2626153\") " Mar 13 13:10:32.599925 master-0 kubenswrapper[28149]: I0313 13:10:32.596504 28149 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/459c48e6-39bc-4241-9810-a203d2cde587-operator-scripts\") pod \"459c48e6-39bc-4241-9810-a203d2cde587\" (UID: \"459c48e6-39bc-4241-9810-a203d2cde587\") " Mar 13 13:10:32.599925 master-0 kubenswrapper[28149]: I0313 13:10:32.596615 28149 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bfpdp\" (UniqueName: \"kubernetes.io/projected/459c48e6-39bc-4241-9810-a203d2cde587-kube-api-access-bfpdp\") pod \"459c48e6-39bc-4241-9810-a203d2cde587\" (UID: \"459c48e6-39bc-4241-9810-a203d2cde587\") " Mar 13 13:10:32.599925 master-0 kubenswrapper[28149]: I0313 13:10:32.596735 28149 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bb821b74-acb1-49cc-8240-9eb3e2626153-operator-scripts\") pod \"bb821b74-acb1-49cc-8240-9eb3e2626153\" (UID: \"bb821b74-acb1-49cc-8240-9eb3e2626153\") " Mar 13 13:10:32.599925 master-0 kubenswrapper[28149]: I0313 13:10:32.597202 28149 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/459c48e6-39bc-4241-9810-a203d2cde587-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "459c48e6-39bc-4241-9810-a203d2cde587" (UID: "459c48e6-39bc-4241-9810-a203d2cde587"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 13 13:10:32.599925 master-0 kubenswrapper[28149]: I0313 13:10:32.597322 28149 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kdgzh\" (UniqueName: \"kubernetes.io/projected/fac20de4-3e4b-4934-b153-aff181b435de-kube-api-access-kdgzh\") on node \"master-0\" DevicePath \"\"" Mar 13 13:10:32.599925 master-0 kubenswrapper[28149]: I0313 13:10:32.597340 28149 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/459c48e6-39bc-4241-9810-a203d2cde587-operator-scripts\") on node \"master-0\" DevicePath \"\"" Mar 13 13:10:32.599925 master-0 kubenswrapper[28149]: I0313 13:10:32.597351 28149 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8f589\" (UniqueName: \"kubernetes.io/projected/170af500-fab8-49d0-83fb-16fa86431761-kube-api-access-8f589\") on node \"master-0\" DevicePath \"\"" Mar 13 13:10:32.599925 master-0 kubenswrapper[28149]: I0313 13:10:32.597360 28149 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fac20de4-3e4b-4934-b153-aff181b435de-operator-scripts\") on node \"master-0\" DevicePath \"\"" Mar 13 13:10:32.599925 master-0 kubenswrapper[28149]: I0313 13:10:32.597369 28149 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/170af500-fab8-49d0-83fb-16fa86431761-operator-scripts\") on node \"master-0\" DevicePath \"\"" Mar 13 13:10:32.599925 master-0 kubenswrapper[28149]: I0313 13:10:32.599837 28149 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bb821b74-acb1-49cc-8240-9eb3e2626153-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "bb821b74-acb1-49cc-8240-9eb3e2626153" (UID: "bb821b74-acb1-49cc-8240-9eb3e2626153"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 13 13:10:32.601253 master-0 kubenswrapper[28149]: I0313 13:10:32.601057 28149 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bb821b74-acb1-49cc-8240-9eb3e2626153-kube-api-access-2927z" (OuterVolumeSpecName: "kube-api-access-2927z") pod "bb821b74-acb1-49cc-8240-9eb3e2626153" (UID: "bb821b74-acb1-49cc-8240-9eb3e2626153"). InnerVolumeSpecName "kube-api-access-2927z". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 13:10:32.603893 master-0 kubenswrapper[28149]: I0313 13:10:32.603301 28149 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/459c48e6-39bc-4241-9810-a203d2cde587-kube-api-access-bfpdp" (OuterVolumeSpecName: "kube-api-access-bfpdp") pod "459c48e6-39bc-4241-9810-a203d2cde587" (UID: "459c48e6-39bc-4241-9810-a203d2cde587"). InnerVolumeSpecName "kube-api-access-bfpdp". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 13:10:32.702387 master-0 kubenswrapper[28149]: I0313 13:10:32.702031 28149 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bfpdp\" (UniqueName: \"kubernetes.io/projected/459c48e6-39bc-4241-9810-a203d2cde587-kube-api-access-bfpdp\") on node \"master-0\" DevicePath \"\"" Mar 13 13:10:32.709281 master-0 kubenswrapper[28149]: I0313 13:10:32.709219 28149 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bb821b74-acb1-49cc-8240-9eb3e2626153-operator-scripts\") on node \"master-0\" DevicePath \"\"" Mar 13 13:10:32.709851 master-0 kubenswrapper[28149]: I0313 13:10:32.709804 28149 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2927z\" (UniqueName: \"kubernetes.io/projected/bb821b74-acb1-49cc-8240-9eb3e2626153-kube-api-access-2927z\") on node \"master-0\" DevicePath \"\"" Mar 13 13:10:32.830164 master-0 kubenswrapper[28149]: I0313 13:10:32.806163 28149 
pod_container_manager_linux.go:210] "Failed to delete cgroup paths" cgroupName=["kubepods","besteffort","pod220bdc89-22fc-4966-847c-550dad12dd5a"] err="unable to destroy cgroup paths for cgroup [kubepods besteffort pod220bdc89-22fc-4966-847c-550dad12dd5a] : Timed out while waiting for systemd to remove kubepods-besteffort-pod220bdc89_22fc_4966_847c_550dad12dd5a.slice" Mar 13 13:10:32.830164 master-0 kubenswrapper[28149]: E0313 13:10:32.806234 28149 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to delete cgroup paths for [kubepods besteffort pod220bdc89-22fc-4966-847c-550dad12dd5a] : unable to destroy cgroup paths for cgroup [kubepods besteffort pod220bdc89-22fc-4966-847c-550dad12dd5a] : Timed out while waiting for systemd to remove kubepods-besteffort-pod220bdc89_22fc_4966_847c_550dad12dd5a.slice" pod="openstack/root-account-create-update-rkdmk" podUID="220bdc89-22fc-4966-847c-550dad12dd5a" Mar 13 13:10:33.058012 master-0 kubenswrapper[28149]: I0313 13:10:33.057952 28149 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-85ac-account-create-update-x79xz" Mar 13 13:10:33.058849 master-0 kubenswrapper[28149]: I0313 13:10:33.058806 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-85ac-account-create-update-x79xz" event={"ID":"fac20de4-3e4b-4934-b153-aff181b435de","Type":"ContainerDied","Data":"48deefb234845d15ed94250a29873e0e9e4db72cce0a68ea0062848c0a900987"} Mar 13 13:10:33.058904 master-0 kubenswrapper[28149]: I0313 13:10:33.058889 28149 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="48deefb234845d15ed94250a29873e0e9e4db72cce0a68ea0062848c0a900987" Mar 13 13:10:33.060719 master-0 kubenswrapper[28149]: I0313 13:10:33.060680 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-nxvs5" event={"ID":"ab18fda5-1cb5-4875-9daf-045d6e20138e","Type":"ContainerStarted","Data":"845fd737298c1b4a262b364af2c609e931ca21fd2e8bf936603da53562dba37d"} Mar 13 13:10:33.069284 master-0 kubenswrapper[28149]: I0313 13:10:33.069221 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"0e1ffcf0-0cdc-4a69-884c-47edbe0caf50","Type":"ContainerStarted","Data":"94bc8e28f4cf3ac952214d7a02a2dd35a3a11aa04a72075a2813b198a3983baa"} Mar 13 13:10:33.071821 master-0 kubenswrapper[28149]: I0313 13:10:33.071778 28149 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-1148-account-create-update-ggwcs" Mar 13 13:10:33.072752 master-0 kubenswrapper[28149]: I0313 13:10:33.072579 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-1148-account-create-update-ggwcs" event={"ID":"bb821b74-acb1-49cc-8240-9eb3e2626153","Type":"ContainerDied","Data":"33600cd3e5cd71e3c33554cd08cab688cb3943cfd9a3482578269164be428bce"} Mar 13 13:10:33.072752 master-0 kubenswrapper[28149]: I0313 13:10:33.072686 28149 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="33600cd3e5cd71e3c33554cd08cab688cb3943cfd9a3482578269164be428bce" Mar 13 13:10:33.075201 master-0 kubenswrapper[28149]: I0313 13:10:33.075155 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-b4zxh" event={"ID":"459c48e6-39bc-4241-9810-a203d2cde587","Type":"ContainerDied","Data":"da326dd4b0137575fd87bafd41b700048d2ae049480c5416ffc34cbda4ede00c"} Mar 13 13:10:33.075292 master-0 kubenswrapper[28149]: I0313 13:10:33.075218 28149 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="da326dd4b0137575fd87bafd41b700048d2ae049480c5416ffc34cbda4ede00c" Mar 13 13:10:33.075292 master-0 kubenswrapper[28149]: I0313 13:10:33.075228 28149 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-b4zxh" Mar 13 13:10:33.087647 master-0 kubenswrapper[28149]: I0313 13:10:33.084788 28149 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-rkdmk" Mar 13 13:10:33.087647 master-0 kubenswrapper[28149]: I0313 13:10:33.084845 28149 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-create-h89wz"
Mar 13 13:10:33.087647 master-0 kubenswrapper[28149]: I0313 13:10:33.084865 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-h89wz" event={"ID":"170af500-fab8-49d0-83fb-16fa86431761","Type":"ContainerDied","Data":"fd780b6bb2cde3be7e5886b76b5bbe06435fdf74173164f6e7530802caeae622"}
Mar 13 13:10:33.087647 master-0 kubenswrapper[28149]: I0313 13:10:33.084892 28149 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fd780b6bb2cde3be7e5886b76b5bbe06435fdf74173164f6e7530802caeae622"
Mar 13 13:10:33.099964 master-0 kubenswrapper[28149]: I0313 13:10:33.099857 28149 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-db-sync-nxvs5" podStartSLOduration=9.414534175 podStartE2EDuration="15.099810172s" podCreationTimestamp="2026-03-13 13:10:18 +0000 UTC" firstStartedPulling="2026-03-13 13:10:26.551748305 +0000 UTC m=+1000.205213464" lastFinishedPulling="2026-03-13 13:10:32.237024302 +0000 UTC m=+1005.890489461" observedRunningTime="2026-03-13 13:10:33.081803839 +0000 UTC m=+1006.735269028" watchObservedRunningTime="2026-03-13 13:10:33.099810172 +0000 UTC m=+1006.753275331"
Mar 13 13:10:36.207189 master-0 kubenswrapper[28149]: I0313 13:10:36.206464 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"0e1ffcf0-0cdc-4a69-884c-47edbe0caf50","Type":"ContainerStarted","Data":"69b6f54fa3dde43aa90f6f209a2b9cba720596f22e532de6669fc3ae6e8bd0c3"}
Mar 13 13:10:36.207189 master-0 kubenswrapper[28149]: I0313 13:10:36.207185 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"0e1ffcf0-0cdc-4a69-884c-47edbe0caf50","Type":"ContainerStarted","Data":"d75a72ba9eb1d912d59f2014977616843ce56faf0c3a7e0092c97b76b07c8b8d"}
Mar 13 13:10:37.233696 master-0 kubenswrapper[28149]: I0313 13:10:37.233066 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"0e1ffcf0-0cdc-4a69-884c-47edbe0caf50","Type":"ContainerStarted","Data":"1bb94c14c53801dd3986047cb368b30b6a485e51ef4bf91cf6d8baa6180087e5"}
Mar 13 13:10:37.233696 master-0 kubenswrapper[28149]: I0313 13:10:37.233169 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"0e1ffcf0-0cdc-4a69-884c-47edbe0caf50","Type":"ContainerStarted","Data":"052331b56259f5517e69fa2edd10c99cd3c681b65b102ffe25347a754d251398"}
Mar 13 13:10:37.233696 master-0 kubenswrapper[28149]: I0313 13:10:37.233184 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"0e1ffcf0-0cdc-4a69-884c-47edbe0caf50","Type":"ContainerStarted","Data":"75b71c3f7bf016834be03b8760923040a7d862a8f489ba02124f3d1c806e24d6"}
Mar 13 13:10:38.250006 master-0 kubenswrapper[28149]: I0313 13:10:38.249947 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"0e1ffcf0-0cdc-4a69-884c-47edbe0caf50","Type":"ContainerStarted","Data":"af9069d3bc3c46d52abfd0f9fc45ef0a07d6c5d82102967ee064e6b3e6536753"}
Mar 13 13:10:38.250006 master-0 kubenswrapper[28149]: I0313 13:10:38.249998 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"0e1ffcf0-0cdc-4a69-884c-47edbe0caf50","Type":"ContainerStarted","Data":"2f5146b28e7d4284c3850bd0da5ecc9271d7d9f9733f1f407f19377c39d57bba"}
Mar 13 13:10:39.138672 master-0 kubenswrapper[28149]: I0313 13:10:39.138589 28149 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-storage-0" podStartSLOduration=39.934882499 podStartE2EDuration="1m2.13855981s" podCreationTimestamp="2026-03-13 13:09:37 +0000 UTC" firstStartedPulling="2026-03-13 13:10:13.563566831 +0000 UTC m=+987.217031990" lastFinishedPulling="2026-03-13 13:10:35.767244142 +0000 UTC m=+1009.420709301" observedRunningTime="2026-03-13 13:10:38.300129295 +0000 UTC m=+1011.953594454" watchObservedRunningTime="2026-03-13 13:10:39.13855981 +0000 UTC m=+1012.792024969"
Mar 13 13:10:39.139862 master-0 kubenswrapper[28149]: I0313 13:10:39.139832 28149 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-75bd79cd5f-hd84t"]
Mar 13 13:10:39.140372 master-0 kubenswrapper[28149]: E0313 13:10:39.140347 28149 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="459c48e6-39bc-4241-9810-a203d2cde587" containerName="mariadb-database-create"
Mar 13 13:10:39.140372 master-0 kubenswrapper[28149]: I0313 13:10:39.140371 28149 state_mem.go:107] "Deleted CPUSet assignment" podUID="459c48e6-39bc-4241-9810-a203d2cde587" containerName="mariadb-database-create"
Mar 13 13:10:39.140490 master-0 kubenswrapper[28149]: E0313 13:10:39.140392 28149 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bb821b74-acb1-49cc-8240-9eb3e2626153" containerName="mariadb-account-create-update"
Mar 13 13:10:39.140490 master-0 kubenswrapper[28149]: I0313 13:10:39.140400 28149 state_mem.go:107] "Deleted CPUSet assignment" podUID="bb821b74-acb1-49cc-8240-9eb3e2626153" containerName="mariadb-account-create-update"
Mar 13 13:10:39.140490 master-0 kubenswrapper[28149]: E0313 13:10:39.140420 28149 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="170af500-fab8-49d0-83fb-16fa86431761" containerName="mariadb-database-create"
Mar 13 13:10:39.140490 master-0 kubenswrapper[28149]: I0313 13:10:39.140426 28149 state_mem.go:107] "Deleted CPUSet assignment" podUID="170af500-fab8-49d0-83fb-16fa86431761" containerName="mariadb-database-create"
Mar 13 13:10:39.140490 master-0 kubenswrapper[28149]: E0313 13:10:39.140434 28149 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fac20de4-3e4b-4934-b153-aff181b435de" containerName="mariadb-account-create-update"
Mar 13 13:10:39.140490 master-0 kubenswrapper[28149]: I0313 13:10:39.140440 28149 state_mem.go:107] "Deleted CPUSet assignment" podUID="fac20de4-3e4b-4934-b153-aff181b435de" containerName="mariadb-account-create-update"
Mar 13 13:10:39.140686 master-0 kubenswrapper[28149]: I0313 13:10:39.140664 28149 memory_manager.go:354] "RemoveStaleState removing state" podUID="fac20de4-3e4b-4934-b153-aff181b435de" containerName="mariadb-account-create-update"
Mar 13 13:10:39.140727 master-0 kubenswrapper[28149]: I0313 13:10:39.140692 28149 memory_manager.go:354] "RemoveStaleState removing state" podUID="459c48e6-39bc-4241-9810-a203d2cde587" containerName="mariadb-database-create"
Mar 13 13:10:39.140727 master-0 kubenswrapper[28149]: I0313 13:10:39.140717 28149 memory_manager.go:354] "RemoveStaleState removing state" podUID="bb821b74-acb1-49cc-8240-9eb3e2626153" containerName="mariadb-account-create-update"
Mar 13 13:10:39.140816 master-0 kubenswrapper[28149]: I0313 13:10:39.140729 28149 memory_manager.go:354] "RemoveStaleState removing state" podUID="170af500-fab8-49d0-83fb-16fa86431761" containerName="mariadb-database-create"
Mar 13 13:10:39.141909 master-0 kubenswrapper[28149]: I0313 13:10:39.141882 28149 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-75bd79cd5f-hd84t"
Mar 13 13:10:39.153933 master-0 kubenswrapper[28149]: I0313 13:10:39.153885 28149 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-swift-storage-0"
Mar 13 13:10:39.169527 master-0 kubenswrapper[28149]: I0313 13:10:39.169467 28149 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-75bd79cd5f-hd84t"]
Mar 13 13:10:39.205615 master-0 kubenswrapper[28149]: I0313 13:10:39.205558 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/74332268-d102-4a07-9298-c7cf2005cee5-ovsdbserver-sb\") pod \"dnsmasq-dns-75bd79cd5f-hd84t\" (UID: \"74332268-d102-4a07-9298-c7cf2005cee5\") " pod="openstack/dnsmasq-dns-75bd79cd5f-hd84t"
Mar 13 13:10:39.205844 master-0 kubenswrapper[28149]: I0313 13:10:39.205632 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/74332268-d102-4a07-9298-c7cf2005cee5-config\") pod \"dnsmasq-dns-75bd79cd5f-hd84t\" (UID: \"74332268-d102-4a07-9298-c7cf2005cee5\") " pod="openstack/dnsmasq-dns-75bd79cd5f-hd84t"
Mar 13 13:10:39.205844 master-0 kubenswrapper[28149]: I0313 13:10:39.205714 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/74332268-d102-4a07-9298-c7cf2005cee5-dns-swift-storage-0\") pod \"dnsmasq-dns-75bd79cd5f-hd84t\" (UID: \"74332268-d102-4a07-9298-c7cf2005cee5\") " pod="openstack/dnsmasq-dns-75bd79cd5f-hd84t"
Mar 13 13:10:39.205844 master-0 kubenswrapper[28149]: I0313 13:10:39.205773 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/74332268-d102-4a07-9298-c7cf2005cee5-dns-svc\") pod \"dnsmasq-dns-75bd79cd5f-hd84t\" (UID: \"74332268-d102-4a07-9298-c7cf2005cee5\") " pod="openstack/dnsmasq-dns-75bd79cd5f-hd84t"
Mar 13 13:10:39.205844 master-0 kubenswrapper[28149]: I0313 13:10:39.205805 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hb4fd\" (UniqueName: \"kubernetes.io/projected/74332268-d102-4a07-9298-c7cf2005cee5-kube-api-access-hb4fd\") pod \"dnsmasq-dns-75bd79cd5f-hd84t\" (UID: \"74332268-d102-4a07-9298-c7cf2005cee5\") " pod="openstack/dnsmasq-dns-75bd79cd5f-hd84t"
Mar 13 13:10:39.205989 master-0 kubenswrapper[28149]: I0313 13:10:39.205882 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/74332268-d102-4a07-9298-c7cf2005cee5-ovsdbserver-nb\") pod \"dnsmasq-dns-75bd79cd5f-hd84t\" (UID: \"74332268-d102-4a07-9298-c7cf2005cee5\") " pod="openstack/dnsmasq-dns-75bd79cd5f-hd84t"
Mar 13 13:10:39.276223 master-0 kubenswrapper[28149]: I0313 13:10:39.275813 28149 generic.go:334] "Generic (PLEG): container finished" podID="ab18fda5-1cb5-4875-9daf-045d6e20138e" containerID="845fd737298c1b4a262b364af2c609e931ca21fd2e8bf936603da53562dba37d" exitCode=0
Mar 13 13:10:39.276743 master-0 kubenswrapper[28149]: I0313 13:10:39.276687 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-nxvs5" event={"ID":"ab18fda5-1cb5-4875-9daf-045d6e20138e","Type":"ContainerDied","Data":"845fd737298c1b4a262b364af2c609e931ca21fd2e8bf936603da53562dba37d"}
Mar 13 13:10:39.308442 master-0 kubenswrapper[28149]: I0313 13:10:39.308368 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/74332268-d102-4a07-9298-c7cf2005cee5-config\") pod \"dnsmasq-dns-75bd79cd5f-hd84t\" (UID: \"74332268-d102-4a07-9298-c7cf2005cee5\") " pod="openstack/dnsmasq-dns-75bd79cd5f-hd84t"
Mar 13 13:10:39.308707 master-0 kubenswrapper[28149]: I0313 13:10:39.308503 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/74332268-d102-4a07-9298-c7cf2005cee5-dns-swift-storage-0\") pod \"dnsmasq-dns-75bd79cd5f-hd84t\" (UID: \"74332268-d102-4a07-9298-c7cf2005cee5\") " pod="openstack/dnsmasq-dns-75bd79cd5f-hd84t"
Mar 13 13:10:39.308707 master-0 kubenswrapper[28149]: I0313 13:10:39.308577 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/74332268-d102-4a07-9298-c7cf2005cee5-dns-svc\") pod \"dnsmasq-dns-75bd79cd5f-hd84t\" (UID: \"74332268-d102-4a07-9298-c7cf2005cee5\") " pod="openstack/dnsmasq-dns-75bd79cd5f-hd84t"
Mar 13 13:10:39.308707 master-0 kubenswrapper[28149]: I0313 13:10:39.308622 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hb4fd\" (UniqueName: \"kubernetes.io/projected/74332268-d102-4a07-9298-c7cf2005cee5-kube-api-access-hb4fd\") pod \"dnsmasq-dns-75bd79cd5f-hd84t\" (UID: \"74332268-d102-4a07-9298-c7cf2005cee5\") " pod="openstack/dnsmasq-dns-75bd79cd5f-hd84t"
Mar 13 13:10:39.308912 master-0 kubenswrapper[28149]: I0313 13:10:39.308713 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/74332268-d102-4a07-9298-c7cf2005cee5-ovsdbserver-nb\") pod \"dnsmasq-dns-75bd79cd5f-hd84t\" (UID: \"74332268-d102-4a07-9298-c7cf2005cee5\") " pod="openstack/dnsmasq-dns-75bd79cd5f-hd84t"
Mar 13 13:10:39.308912 master-0 kubenswrapper[28149]: I0313 13:10:39.308779 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/74332268-d102-4a07-9298-c7cf2005cee5-ovsdbserver-sb\") pod \"dnsmasq-dns-75bd79cd5f-hd84t\" (UID: \"74332268-d102-4a07-9298-c7cf2005cee5\") " pod="openstack/dnsmasq-dns-75bd79cd5f-hd84t"
Mar 13 13:10:39.309766 master-0 kubenswrapper[28149]: I0313 13:10:39.309724 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/74332268-d102-4a07-9298-c7cf2005cee5-ovsdbserver-sb\") pod \"dnsmasq-dns-75bd79cd5f-hd84t\" (UID: \"74332268-d102-4a07-9298-c7cf2005cee5\") " pod="openstack/dnsmasq-dns-75bd79cd5f-hd84t"
Mar 13 13:10:39.313161 master-0 kubenswrapper[28149]: I0313 13:10:39.310173 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/74332268-d102-4a07-9298-c7cf2005cee5-dns-svc\") pod \"dnsmasq-dns-75bd79cd5f-hd84t\" (UID: \"74332268-d102-4a07-9298-c7cf2005cee5\") " pod="openstack/dnsmasq-dns-75bd79cd5f-hd84t"
Mar 13 13:10:39.313161 master-0 kubenswrapper[28149]: I0313 13:10:39.310441 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/74332268-d102-4a07-9298-c7cf2005cee5-ovsdbserver-nb\") pod \"dnsmasq-dns-75bd79cd5f-hd84t\" (UID: \"74332268-d102-4a07-9298-c7cf2005cee5\") " pod="openstack/dnsmasq-dns-75bd79cd5f-hd84t"
Mar 13 13:10:39.313161 master-0 kubenswrapper[28149]: I0313 13:10:39.310727 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/74332268-d102-4a07-9298-c7cf2005cee5-config\") pod \"dnsmasq-dns-75bd79cd5f-hd84t\" (UID: \"74332268-d102-4a07-9298-c7cf2005cee5\") " pod="openstack/dnsmasq-dns-75bd79cd5f-hd84t"
Mar 13 13:10:39.313161 master-0 kubenswrapper[28149]: I0313 13:10:39.310940 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/74332268-d102-4a07-9298-c7cf2005cee5-dns-swift-storage-0\") pod \"dnsmasq-dns-75bd79cd5f-hd84t\" (UID: \"74332268-d102-4a07-9298-c7cf2005cee5\") " pod="openstack/dnsmasq-dns-75bd79cd5f-hd84t"
Mar 13 13:10:39.331341 master-0 kubenswrapper[28149]: I0313 13:10:39.331284 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hb4fd\" (UniqueName: \"kubernetes.io/projected/74332268-d102-4a07-9298-c7cf2005cee5-kube-api-access-hb4fd\") pod \"dnsmasq-dns-75bd79cd5f-hd84t\" (UID: \"74332268-d102-4a07-9298-c7cf2005cee5\") " pod="openstack/dnsmasq-dns-75bd79cd5f-hd84t"
Mar 13 13:10:39.458916 master-0 kubenswrapper[28149]: I0313 13:10:39.458739 28149 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-75bd79cd5f-hd84t"
Mar 13 13:10:40.137344 master-0 kubenswrapper[28149]: I0313 13:10:40.136167 28149 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-75bd79cd5f-hd84t"]
Mar 13 13:10:40.459327 master-0 kubenswrapper[28149]: I0313 13:10:40.459257 28149 generic.go:334] "Generic (PLEG): container finished" podID="e2b3d9c8-1d1e-425b-9780-d9ad7b26318a" containerID="16f54102812c62c2c278b6d4858912a7d8f697f03bf80a216128eaf4a1224da2" exitCode=0
Mar 13 13:10:40.459837 master-0 kubenswrapper[28149]: I0313 13:10:40.459344 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-tfhs6" event={"ID":"e2b3d9c8-1d1e-425b-9780-d9ad7b26318a","Type":"ContainerDied","Data":"16f54102812c62c2c278b6d4858912a7d8f697f03bf80a216128eaf4a1224da2"}
Mar 13 13:10:40.464266 master-0 kubenswrapper[28149]: I0313 13:10:40.461376 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-75bd79cd5f-hd84t" event={"ID":"74332268-d102-4a07-9298-c7cf2005cee5","Type":"ContainerStarted","Data":"dc181376a11d64845e2eca4afeb46c888c29c2cbd8c0d2ba61c30317ad96c8ef"}
Mar 13 13:10:41.064337 master-0 kubenswrapper[28149]: I0313 13:10:41.062192 28149 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-nxvs5"
Mar 13 13:10:41.399632 master-0 kubenswrapper[28149]: I0313 13:10:41.398961 28149 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ab18fda5-1cb5-4875-9daf-045d6e20138e-config-data\") pod \"ab18fda5-1cb5-4875-9daf-045d6e20138e\" (UID: \"ab18fda5-1cb5-4875-9daf-045d6e20138e\") "
Mar 13 13:10:41.399632 master-0 kubenswrapper[28149]: I0313 13:10:41.399183 28149 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ab18fda5-1cb5-4875-9daf-045d6e20138e-combined-ca-bundle\") pod \"ab18fda5-1cb5-4875-9daf-045d6e20138e\" (UID: \"ab18fda5-1cb5-4875-9daf-045d6e20138e\") "
Mar 13 13:10:41.399632 master-0 kubenswrapper[28149]: I0313 13:10:41.399267 28149 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7tqvc\" (UniqueName: \"kubernetes.io/projected/ab18fda5-1cb5-4875-9daf-045d6e20138e-kube-api-access-7tqvc\") pod \"ab18fda5-1cb5-4875-9daf-045d6e20138e\" (UID: \"ab18fda5-1cb5-4875-9daf-045d6e20138e\") "
Mar 13 13:10:41.412165 master-0 kubenswrapper[28149]: I0313 13:10:41.411683 28149 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ab18fda5-1cb5-4875-9daf-045d6e20138e-kube-api-access-7tqvc" (OuterVolumeSpecName: "kube-api-access-7tqvc") pod "ab18fda5-1cb5-4875-9daf-045d6e20138e" (UID: "ab18fda5-1cb5-4875-9daf-045d6e20138e"). InnerVolumeSpecName "kube-api-access-7tqvc". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 13 13:10:41.430165 master-0 kubenswrapper[28149]: I0313 13:10:41.427473 28149 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ab18fda5-1cb5-4875-9daf-045d6e20138e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ab18fda5-1cb5-4875-9daf-045d6e20138e" (UID: "ab18fda5-1cb5-4875-9daf-045d6e20138e"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 13 13:10:41.461893 master-0 kubenswrapper[28149]: I0313 13:10:41.461503 28149 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ab18fda5-1cb5-4875-9daf-045d6e20138e-config-data" (OuterVolumeSpecName: "config-data") pod "ab18fda5-1cb5-4875-9daf-045d6e20138e" (UID: "ab18fda5-1cb5-4875-9daf-045d6e20138e"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 13 13:10:41.487326 master-0 kubenswrapper[28149]: I0313 13:10:41.487262 28149 generic.go:334] "Generic (PLEG): container finished" podID="74332268-d102-4a07-9298-c7cf2005cee5" containerID="5d6fd29691d2926caa8269a5f26e9a1fe1d25ad6d7c3b09a7170af10352aff21" exitCode=0
Mar 13 13:10:41.487579 master-0 kubenswrapper[28149]: I0313 13:10:41.487343 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-75bd79cd5f-hd84t" event={"ID":"74332268-d102-4a07-9298-c7cf2005cee5","Type":"ContainerDied","Data":"5d6fd29691d2926caa8269a5f26e9a1fe1d25ad6d7c3b09a7170af10352aff21"}
Mar 13 13:10:41.494748 master-0 kubenswrapper[28149]: I0313 13:10:41.490598 28149 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-nxvs5"
Mar 13 13:10:41.494748 master-0 kubenswrapper[28149]: I0313 13:10:41.490747 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-nxvs5" event={"ID":"ab18fda5-1cb5-4875-9daf-045d6e20138e","Type":"ContainerDied","Data":"34f9234e67ab3d74ca60cd356760ae285c4f51fd1b6ca2929bd13015027697fb"}
Mar 13 13:10:41.494748 master-0 kubenswrapper[28149]: I0313 13:10:41.490789 28149 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="34f9234e67ab3d74ca60cd356760ae285c4f51fd1b6ca2929bd13015027697fb"
Mar 13 13:10:41.505419 master-0 kubenswrapper[28149]: I0313 13:10:41.504561 28149 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ab18fda5-1cb5-4875-9daf-045d6e20138e-combined-ca-bundle\") on node \"master-0\" DevicePath \"\""
Mar 13 13:10:41.505419 master-0 kubenswrapper[28149]: I0313 13:10:41.504603 28149 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7tqvc\" (UniqueName: \"kubernetes.io/projected/ab18fda5-1cb5-4875-9daf-045d6e20138e-kube-api-access-7tqvc\") on node \"master-0\" DevicePath \"\""
Mar 13 13:10:41.505419 master-0 kubenswrapper[28149]: I0313 13:10:41.504616 28149 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ab18fda5-1cb5-4875-9daf-045d6e20138e-config-data\") on node \"master-0\" DevicePath \"\""
Mar 13 13:10:41.745293 master-0 kubenswrapper[28149]: I0313 13:10:41.740797 28149 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-jpjd2"]
Mar 13 13:10:41.745293 master-0 kubenswrapper[28149]: E0313 13:10:41.742472 28149 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ab18fda5-1cb5-4875-9daf-045d6e20138e" containerName="keystone-db-sync"
Mar 13 13:10:41.745293 master-0 kubenswrapper[28149]: I0313 13:10:41.742574 28149 state_mem.go:107] "Deleted CPUSet assignment" podUID="ab18fda5-1cb5-4875-9daf-045d6e20138e" containerName="keystone-db-sync"
Mar 13 13:10:41.745293 master-0 kubenswrapper[28149]: I0313 13:10:41.744762 28149 memory_manager.go:354] "RemoveStaleState removing state" podUID="ab18fda5-1cb5-4875-9daf-045d6e20138e" containerName="keystone-db-sync"
Mar 13 13:10:41.754087 master-0 kubenswrapper[28149]: I0313 13:10:41.746112 28149 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-jpjd2"
Mar 13 13:10:41.754087 master-0 kubenswrapper[28149]: I0313 13:10:41.751929 28149 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts"
Mar 13 13:10:41.756627 master-0 kubenswrapper[28149]: I0313 13:10:41.755562 28149 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret"
Mar 13 13:10:41.756627 master-0 kubenswrapper[28149]: I0313 13:10:41.755915 28149 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone"
Mar 13 13:10:41.756627 master-0 kubenswrapper[28149]: I0313 13:10:41.755540 28149 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data"
Mar 13 13:10:41.802163 master-0 kubenswrapper[28149]: I0313 13:10:41.798619 28149 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-jpjd2"]
Mar 13 13:10:41.829290 master-0 kubenswrapper[28149]: I0313 13:10:41.827645 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/736c6577-449b-4b8d-8bfa-3dfbcc259e94-credential-keys\") pod \"keystone-bootstrap-jpjd2\" (UID: \"736c6577-449b-4b8d-8bfa-3dfbcc259e94\") " pod="openstack/keystone-bootstrap-jpjd2"
Mar 13 13:10:41.829290 master-0 kubenswrapper[28149]: I0313 13:10:41.827748 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/736c6577-449b-4b8d-8bfa-3dfbcc259e94-fernet-keys\") pod \"keystone-bootstrap-jpjd2\" (UID: \"736c6577-449b-4b8d-8bfa-3dfbcc259e94\") " pod="openstack/keystone-bootstrap-jpjd2"
Mar 13 13:10:41.829290 master-0 kubenswrapper[28149]: I0313 13:10:41.827792 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/736c6577-449b-4b8d-8bfa-3dfbcc259e94-scripts\") pod \"keystone-bootstrap-jpjd2\" (UID: \"736c6577-449b-4b8d-8bfa-3dfbcc259e94\") " pod="openstack/keystone-bootstrap-jpjd2"
Mar 13 13:10:41.829290 master-0 kubenswrapper[28149]: I0313 13:10:41.827952 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lph7g\" (UniqueName: \"kubernetes.io/projected/736c6577-449b-4b8d-8bfa-3dfbcc259e94-kube-api-access-lph7g\") pod \"keystone-bootstrap-jpjd2\" (UID: \"736c6577-449b-4b8d-8bfa-3dfbcc259e94\") " pod="openstack/keystone-bootstrap-jpjd2"
Mar 13 13:10:41.829290 master-0 kubenswrapper[28149]: I0313 13:10:41.828003 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/736c6577-449b-4b8d-8bfa-3dfbcc259e94-config-data\") pod \"keystone-bootstrap-jpjd2\" (UID: \"736c6577-449b-4b8d-8bfa-3dfbcc259e94\") " pod="openstack/keystone-bootstrap-jpjd2"
Mar 13 13:10:41.829290 master-0 kubenswrapper[28149]: I0313 13:10:41.828041 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/736c6577-449b-4b8d-8bfa-3dfbcc259e94-combined-ca-bundle\") pod \"keystone-bootstrap-jpjd2\" (UID: \"736c6577-449b-4b8d-8bfa-3dfbcc259e94\") " pod="openstack/keystone-bootstrap-jpjd2"
Mar 13 13:10:41.925795 master-0 kubenswrapper[28149]: I0313 13:10:41.925474 28149 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-75bd79cd5f-hd84t"]
Mar 13 13:10:41.931178 master-0 kubenswrapper[28149]: I0313 13:10:41.931086 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lph7g\" (UniqueName: \"kubernetes.io/projected/736c6577-449b-4b8d-8bfa-3dfbcc259e94-kube-api-access-lph7g\") pod \"keystone-bootstrap-jpjd2\" (UID: \"736c6577-449b-4b8d-8bfa-3dfbcc259e94\") " pod="openstack/keystone-bootstrap-jpjd2"
Mar 13 13:10:41.931178 master-0 kubenswrapper[28149]: I0313 13:10:41.931173 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/736c6577-449b-4b8d-8bfa-3dfbcc259e94-config-data\") pod \"keystone-bootstrap-jpjd2\" (UID: \"736c6577-449b-4b8d-8bfa-3dfbcc259e94\") " pod="openstack/keystone-bootstrap-jpjd2"
Mar 13 13:10:41.931436 master-0 kubenswrapper[28149]: I0313 13:10:41.931205 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/736c6577-449b-4b8d-8bfa-3dfbcc259e94-combined-ca-bundle\") pod \"keystone-bootstrap-jpjd2\" (UID: \"736c6577-449b-4b8d-8bfa-3dfbcc259e94\") " pod="openstack/keystone-bootstrap-jpjd2"
Mar 13 13:10:41.931436 master-0 kubenswrapper[28149]: I0313 13:10:41.931326 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/736c6577-449b-4b8d-8bfa-3dfbcc259e94-credential-keys\") pod \"keystone-bootstrap-jpjd2\" (UID: \"736c6577-449b-4b8d-8bfa-3dfbcc259e94\") " pod="openstack/keystone-bootstrap-jpjd2"
Mar 13 13:10:41.931684 master-0 kubenswrapper[28149]: I0313 13:10:41.931642 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/736c6577-449b-4b8d-8bfa-3dfbcc259e94-fernet-keys\") pod \"keystone-bootstrap-jpjd2\" (UID: \"736c6577-449b-4b8d-8bfa-3dfbcc259e94\") " pod="openstack/keystone-bootstrap-jpjd2"
Mar 13 13:10:41.931761 master-0 kubenswrapper[28149]: I0313 13:10:41.931725 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/736c6577-449b-4b8d-8bfa-3dfbcc259e94-scripts\") pod \"keystone-bootstrap-jpjd2\" (UID: \"736c6577-449b-4b8d-8bfa-3dfbcc259e94\") " pod="openstack/keystone-bootstrap-jpjd2"
Mar 13 13:10:41.937318 master-0 kubenswrapper[28149]: I0313 13:10:41.935527 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/736c6577-449b-4b8d-8bfa-3dfbcc259e94-fernet-keys\") pod \"keystone-bootstrap-jpjd2\" (UID: \"736c6577-449b-4b8d-8bfa-3dfbcc259e94\") " pod="openstack/keystone-bootstrap-jpjd2"
Mar 13 13:10:41.937318 master-0 kubenswrapper[28149]: I0313 13:10:41.936395 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/736c6577-449b-4b8d-8bfa-3dfbcc259e94-config-data\") pod \"keystone-bootstrap-jpjd2\" (UID: \"736c6577-449b-4b8d-8bfa-3dfbcc259e94\") " pod="openstack/keystone-bootstrap-jpjd2"
Mar 13 13:10:41.937318 master-0 kubenswrapper[28149]: I0313 13:10:41.936652 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/736c6577-449b-4b8d-8bfa-3dfbcc259e94-scripts\") pod \"keystone-bootstrap-jpjd2\" (UID: \"736c6577-449b-4b8d-8bfa-3dfbcc259e94\") " pod="openstack/keystone-bootstrap-jpjd2"
Mar 13 13:10:41.937740 master-0 kubenswrapper[28149]: I0313 13:10:41.937723 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/736c6577-449b-4b8d-8bfa-3dfbcc259e94-combined-ca-bundle\") pod \"keystone-bootstrap-jpjd2\" (UID: \"736c6577-449b-4b8d-8bfa-3dfbcc259e94\") " pod="openstack/keystone-bootstrap-jpjd2"
Mar 13 13:10:41.944563 master-0 kubenswrapper[28149]: I0313 13:10:41.941177 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/736c6577-449b-4b8d-8bfa-3dfbcc259e94-credential-keys\") pod \"keystone-bootstrap-jpjd2\" (UID: \"736c6577-449b-4b8d-8bfa-3dfbcc259e94\") " pod="openstack/keystone-bootstrap-jpjd2"
Mar 13 13:10:41.965561 master-0 kubenswrapper[28149]: I0313 13:10:41.965125 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lph7g\" (UniqueName: \"kubernetes.io/projected/736c6577-449b-4b8d-8bfa-3dfbcc259e94-kube-api-access-lph7g\") pod \"keystone-bootstrap-jpjd2\" (UID: \"736c6577-449b-4b8d-8bfa-3dfbcc259e94\") " pod="openstack/keystone-bootstrap-jpjd2"
Mar 13 13:10:41.965561 master-0 kubenswrapper[28149]: I0313 13:10:41.965254 28149 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7c86f65b7c-hdnsw"]
Mar 13 13:10:42.038602 master-0 kubenswrapper[28149]: I0313 13:10:42.037860 28149 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7c86f65b7c-hdnsw"]
Mar 13 13:10:42.038602 master-0 kubenswrapper[28149]: I0313 13:10:42.037991 28149 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7c86f65b7c-hdnsw"
Mar 13 13:10:42.046581 master-0 kubenswrapper[28149]: I0313 13:10:42.043940 28149 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-ee0a2-db-sync-tv65l"]
Mar 13 13:10:42.046581 master-0 kubenswrapper[28149]: I0313 13:10:42.045609 28149 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-ee0a2-db-sync-tv65l"
Mar 13 13:10:42.051400 master-0 kubenswrapper[28149]: I0313 13:10:42.051377 28149 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-ee0a2-config-data"
Mar 13 13:10:42.051746 master-0 kubenswrapper[28149]: I0313 13:10:42.051732 28149 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-ee0a2-scripts"
Mar 13 13:10:42.073020 master-0 kubenswrapper[28149]: I0313 13:10:42.069330 28149 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ironic-db-create-f5dvd"]
Mar 13 13:10:42.316157 master-0 kubenswrapper[28149]: I0313 13:10:42.315287 28149 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-jpjd2"
Mar 13 13:10:42.317080 master-0 kubenswrapper[28149]: I0313 13:10:42.316851 28149 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ironic-db-create-f5dvd"
Mar 13 13:10:42.334738 master-0 kubenswrapper[28149]: I0313 13:10:42.334670 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/95eb9b96-2f27-4701-b62d-b7026cb009ec-scripts\") pod \"cinder-ee0a2-db-sync-tv65l\" (UID: \"95eb9b96-2f27-4701-b62d-b7026cb009ec\") " pod="openstack/cinder-ee0a2-db-sync-tv65l"
Mar 13 13:10:42.334738 master-0 kubenswrapper[28149]: I0313 13:10:42.334734 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bvlpq\" (UniqueName: \"kubernetes.io/projected/2134b334-b48d-46c6-91b6-a824c323d789-kube-api-access-bvlpq\") pod \"ironic-db-create-f5dvd\" (UID: \"2134b334-b48d-46c6-91b6-a824c323d789\") " pod="openstack/ironic-db-create-f5dvd"
Mar 13 13:10:42.335068 master-0 kubenswrapper[28149]: I0313 13:10:42.334762 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2134b334-b48d-46c6-91b6-a824c323d789-operator-scripts\") pod \"ironic-db-create-f5dvd\" (UID: \"2134b334-b48d-46c6-91b6-a824c323d789\") " pod="openstack/ironic-db-create-f5dvd"
Mar 13 13:10:42.335068 master-0 kubenswrapper[28149]: I0313 13:10:42.334887 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/76e4472a-9fe8-452e-924c-7a88df7c1f7d-config\") pod \"dnsmasq-dns-7c86f65b7c-hdnsw\" (UID: \"76e4472a-9fe8-452e-924c-7a88df7c1f7d\") " pod="openstack/dnsmasq-dns-7c86f65b7c-hdnsw"
Mar 13 13:10:42.335068 master-0 kubenswrapper[28149]: I0313 13:10:42.334911 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pfdmr\" (UniqueName: \"kubernetes.io/projected/76e4472a-9fe8-452e-924c-7a88df7c1f7d-kube-api-access-pfdmr\") pod \"dnsmasq-dns-7c86f65b7c-hdnsw\" (UID: \"76e4472a-9fe8-452e-924c-7a88df7c1f7d\") " pod="openstack/dnsmasq-dns-7c86f65b7c-hdnsw"
Mar 13 13:10:42.335068 master-0 kubenswrapper[28149]: I0313 13:10:42.334940 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/95eb9b96-2f27-4701-b62d-b7026cb009ec-config-data\") pod \"cinder-ee0a2-db-sync-tv65l\" (UID: \"95eb9b96-2f27-4701-b62d-b7026cb009ec\") " pod="openstack/cinder-ee0a2-db-sync-tv65l"
Mar 13 13:10:42.335301 master-0 kubenswrapper[28149]: I0313 13:10:42.335052 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/95eb9b96-2f27-4701-b62d-b7026cb009ec-etc-machine-id\") pod \"cinder-ee0a2-db-sync-tv65l\" (UID: \"95eb9b96-2f27-4701-b62d-b7026cb009ec\") " pod="openstack/cinder-ee0a2-db-sync-tv65l"
Mar 13 13:10:42.335301 master-0 kubenswrapper[28149]: I0313 13:10:42.335114 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n54kt\" (UniqueName: \"kubernetes.io/projected/95eb9b96-2f27-4701-b62d-b7026cb009ec-kube-api-access-n54kt\") pod \"cinder-ee0a2-db-sync-tv65l\" (UID: \"95eb9b96-2f27-4701-b62d-b7026cb009ec\") " pod="openstack/cinder-ee0a2-db-sync-tv65l"
Mar 13 13:10:42.335301 master-0 kubenswrapper[28149]: I0313 13:10:42.335162 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/76e4472a-9fe8-452e-924c-7a88df7c1f7d-ovsdbserver-nb\") pod \"dnsmasq-dns-7c86f65b7c-hdnsw\" (UID: \"76e4472a-9fe8-452e-924c-7a88df7c1f7d\") " pod="openstack/dnsmasq-dns-7c86f65b7c-hdnsw"
Mar 13 13:10:42.335435 master-0 kubenswrapper[28149]: I0313 13:10:42.335312 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/76e4472a-9fe8-452e-924c-7a88df7c1f7d-dns-svc\") pod \"dnsmasq-dns-7c86f65b7c-hdnsw\" (UID: \"76e4472a-9fe8-452e-924c-7a88df7c1f7d\") " pod="openstack/dnsmasq-dns-7c86f65b7c-hdnsw"
Mar 13 13:10:42.335435 master-0 kubenswrapper[28149]: I0313 13:10:42.335344 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/95eb9b96-2f27-4701-b62d-b7026cb009ec-db-sync-config-data\") pod \"cinder-ee0a2-db-sync-tv65l\" (UID: \"95eb9b96-2f27-4701-b62d-b7026cb009ec\") " pod="openstack/cinder-ee0a2-db-sync-tv65l"
Mar 13 13:10:42.335435 master-0 kubenswrapper[28149]: I0313 13:10:42.335377 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/76e4472a-9fe8-452e-924c-7a88df7c1f7d-dns-swift-storage-0\") pod \"dnsmasq-dns-7c86f65b7c-hdnsw\" (UID: \"76e4472a-9fe8-452e-924c-7a88df7c1f7d\") " pod="openstack/dnsmasq-dns-7c86f65b7c-hdnsw"
Mar 13 13:10:42.335578 master-0 kubenswrapper[28149]: I0313 13:10:42.335431 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/95eb9b96-2f27-4701-b62d-b7026cb009ec-combined-ca-bundle\") pod \"cinder-ee0a2-db-sync-tv65l\" (UID: \"95eb9b96-2f27-4701-b62d-b7026cb009ec\") " pod="openstack/cinder-ee0a2-db-sync-tv65l"
Mar 13 13:10:42.335578 master-0 kubenswrapper[28149]: I0313 13:10:42.335510 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/76e4472a-9fe8-452e-924c-7a88df7c1f7d-ovsdbserver-sb\") pod \"dnsmasq-dns-7c86f65b7c-hdnsw\" (UID: \"76e4472a-9fe8-452e-924c-7a88df7c1f7d\") " pod="openstack/dnsmasq-dns-7c86f65b7c-hdnsw"
Mar 13 13:10:42.344247 master-0 kubenswrapper[28149]: I0313 13:10:42.344203 28149 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ironic-db-create-f5dvd"]
Mar 13 13:10:42.366490 master-0 kubenswrapper[28149]: I0313 13:10:42.364906 28149 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-ee0a2-db-sync-tv65l"]
Mar 13 13:10:42.388599 master-0 kubenswrapper[28149]: I0313 13:10:42.387356 28149 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-sync-kdkvp"]
Mar 13 13:10:42.390983 master-0 kubenswrapper[28149]: I0313 13:10:42.389688 28149 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openstack/neutron-db-sync-kdkvp" Mar 13 13:10:42.394150 master-0 kubenswrapper[28149]: I0313 13:10:42.393350 28149 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Mar 13 13:10:42.394150 master-0 kubenswrapper[28149]: I0313 13:10:42.393846 28149 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Mar 13 13:10:42.440606 master-0 kubenswrapper[28149]: I0313 13:10:42.440540 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n54kt\" (UniqueName: \"kubernetes.io/projected/95eb9b96-2f27-4701-b62d-b7026cb009ec-kube-api-access-n54kt\") pod \"cinder-ee0a2-db-sync-tv65l\" (UID: \"95eb9b96-2f27-4701-b62d-b7026cb009ec\") " pod="openstack/cinder-ee0a2-db-sync-tv65l" Mar 13 13:10:42.443121 master-0 kubenswrapper[28149]: I0313 13:10:42.443077 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/76e4472a-9fe8-452e-924c-7a88df7c1f7d-ovsdbserver-nb\") pod \"dnsmasq-dns-7c86f65b7c-hdnsw\" (UID: \"76e4472a-9fe8-452e-924c-7a88df7c1f7d\") " pod="openstack/dnsmasq-dns-7c86f65b7c-hdnsw" Mar 13 13:10:42.449092 master-0 kubenswrapper[28149]: I0313 13:10:42.445567 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/76e4472a-9fe8-452e-924c-7a88df7c1f7d-ovsdbserver-nb\") pod \"dnsmasq-dns-7c86f65b7c-hdnsw\" (UID: \"76e4472a-9fe8-452e-924c-7a88df7c1f7d\") " pod="openstack/dnsmasq-dns-7c86f65b7c-hdnsw" Mar 13 13:10:42.449092 master-0 kubenswrapper[28149]: I0313 13:10:42.446519 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/76e4472a-9fe8-452e-924c-7a88df7c1f7d-dns-svc\") pod \"dnsmasq-dns-7c86f65b7c-hdnsw\" (UID: \"76e4472a-9fe8-452e-924c-7a88df7c1f7d\") " 
pod="openstack/dnsmasq-dns-7c86f65b7c-hdnsw" Mar 13 13:10:42.449092 master-0 kubenswrapper[28149]: I0313 13:10:42.446573 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/95eb9b96-2f27-4701-b62d-b7026cb009ec-db-sync-config-data\") pod \"cinder-ee0a2-db-sync-tv65l\" (UID: \"95eb9b96-2f27-4701-b62d-b7026cb009ec\") " pod="openstack/cinder-ee0a2-db-sync-tv65l" Mar 13 13:10:42.449092 master-0 kubenswrapper[28149]: I0313 13:10:42.446627 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/76e4472a-9fe8-452e-924c-7a88df7c1f7d-dns-swift-storage-0\") pod \"dnsmasq-dns-7c86f65b7c-hdnsw\" (UID: \"76e4472a-9fe8-452e-924c-7a88df7c1f7d\") " pod="openstack/dnsmasq-dns-7c86f65b7c-hdnsw" Mar 13 13:10:42.449092 master-0 kubenswrapper[28149]: I0313 13:10:42.446714 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/95eb9b96-2f27-4701-b62d-b7026cb009ec-combined-ca-bundle\") pod \"cinder-ee0a2-db-sync-tv65l\" (UID: \"95eb9b96-2f27-4701-b62d-b7026cb009ec\") " pod="openstack/cinder-ee0a2-db-sync-tv65l" Mar 13 13:10:42.449092 master-0 kubenswrapper[28149]: I0313 13:10:42.448102 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/76e4472a-9fe8-452e-924c-7a88df7c1f7d-ovsdbserver-sb\") pod \"dnsmasq-dns-7c86f65b7c-hdnsw\" (UID: \"76e4472a-9fe8-452e-924c-7a88df7c1f7d\") " pod="openstack/dnsmasq-dns-7c86f65b7c-hdnsw" Mar 13 13:10:42.449092 master-0 kubenswrapper[28149]: I0313 13:10:42.448228 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/95eb9b96-2f27-4701-b62d-b7026cb009ec-scripts\") pod \"cinder-ee0a2-db-sync-tv65l\" (UID: 
\"95eb9b96-2f27-4701-b62d-b7026cb009ec\") " pod="openstack/cinder-ee0a2-db-sync-tv65l" Mar 13 13:10:42.449092 master-0 kubenswrapper[28149]: I0313 13:10:42.448253 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bvlpq\" (UniqueName: \"kubernetes.io/projected/2134b334-b48d-46c6-91b6-a824c323d789-kube-api-access-bvlpq\") pod \"ironic-db-create-f5dvd\" (UID: \"2134b334-b48d-46c6-91b6-a824c323d789\") " pod="openstack/ironic-db-create-f5dvd" Mar 13 13:10:42.449092 master-0 kubenswrapper[28149]: I0313 13:10:42.448291 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2134b334-b48d-46c6-91b6-a824c323d789-operator-scripts\") pod \"ironic-db-create-f5dvd\" (UID: \"2134b334-b48d-46c6-91b6-a824c323d789\") " pod="openstack/ironic-db-create-f5dvd" Mar 13 13:10:42.450438 master-0 kubenswrapper[28149]: I0313 13:10:42.448694 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/76e4472a-9fe8-452e-924c-7a88df7c1f7d-config\") pod \"dnsmasq-dns-7c86f65b7c-hdnsw\" (UID: \"76e4472a-9fe8-452e-924c-7a88df7c1f7d\") " pod="openstack/dnsmasq-dns-7c86f65b7c-hdnsw" Mar 13 13:10:42.450438 master-0 kubenswrapper[28149]: I0313 13:10:42.450296 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/76e4472a-9fe8-452e-924c-7a88df7c1f7d-config\") pod \"dnsmasq-dns-7c86f65b7c-hdnsw\" (UID: \"76e4472a-9fe8-452e-924c-7a88df7c1f7d\") " pod="openstack/dnsmasq-dns-7c86f65b7c-hdnsw" Mar 13 13:10:42.450438 master-0 kubenswrapper[28149]: I0313 13:10:42.450300 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pfdmr\" (UniqueName: \"kubernetes.io/projected/76e4472a-9fe8-452e-924c-7a88df7c1f7d-kube-api-access-pfdmr\") pod \"dnsmasq-dns-7c86f65b7c-hdnsw\" (UID: 
\"76e4472a-9fe8-452e-924c-7a88df7c1f7d\") " pod="openstack/dnsmasq-dns-7c86f65b7c-hdnsw" Mar 13 13:10:42.450438 master-0 kubenswrapper[28149]: I0313 13:10:42.450412 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/95eb9b96-2f27-4701-b62d-b7026cb009ec-config-data\") pod \"cinder-ee0a2-db-sync-tv65l\" (UID: \"95eb9b96-2f27-4701-b62d-b7026cb009ec\") " pod="openstack/cinder-ee0a2-db-sync-tv65l" Mar 13 13:10:42.450860 master-0 kubenswrapper[28149]: I0313 13:10:42.450530 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/95eb9b96-2f27-4701-b62d-b7026cb009ec-etc-machine-id\") pod \"cinder-ee0a2-db-sync-tv65l\" (UID: \"95eb9b96-2f27-4701-b62d-b7026cb009ec\") " pod="openstack/cinder-ee0a2-db-sync-tv65l" Mar 13 13:10:42.450860 master-0 kubenswrapper[28149]: I0313 13:10:42.450748 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/95eb9b96-2f27-4701-b62d-b7026cb009ec-etc-machine-id\") pod \"cinder-ee0a2-db-sync-tv65l\" (UID: \"95eb9b96-2f27-4701-b62d-b7026cb009ec\") " pod="openstack/cinder-ee0a2-db-sync-tv65l" Mar 13 13:10:42.451871 master-0 kubenswrapper[28149]: I0313 13:10:42.451843 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/76e4472a-9fe8-452e-924c-7a88df7c1f7d-dns-svc\") pod \"dnsmasq-dns-7c86f65b7c-hdnsw\" (UID: \"76e4472a-9fe8-452e-924c-7a88df7c1f7d\") " pod="openstack/dnsmasq-dns-7c86f65b7c-hdnsw" Mar 13 13:10:42.452132 master-0 kubenswrapper[28149]: I0313 13:10:42.452083 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/95eb9b96-2f27-4701-b62d-b7026cb009ec-db-sync-config-data\") pod \"cinder-ee0a2-db-sync-tv65l\" (UID: 
\"95eb9b96-2f27-4701-b62d-b7026cb009ec\") " pod="openstack/cinder-ee0a2-db-sync-tv65l" Mar 13 13:10:42.452911 master-0 kubenswrapper[28149]: I0313 13:10:42.452888 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/76e4472a-9fe8-452e-924c-7a88df7c1f7d-ovsdbserver-sb\") pod \"dnsmasq-dns-7c86f65b7c-hdnsw\" (UID: \"76e4472a-9fe8-452e-924c-7a88df7c1f7d\") " pod="openstack/dnsmasq-dns-7c86f65b7c-hdnsw" Mar 13 13:10:42.453299 master-0 kubenswrapper[28149]: I0313 13:10:42.453255 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2134b334-b48d-46c6-91b6-a824c323d789-operator-scripts\") pod \"ironic-db-create-f5dvd\" (UID: \"2134b334-b48d-46c6-91b6-a824c323d789\") " pod="openstack/ironic-db-create-f5dvd" Mar 13 13:10:42.454600 master-0 kubenswrapper[28149]: I0313 13:10:42.454560 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/95eb9b96-2f27-4701-b62d-b7026cb009ec-combined-ca-bundle\") pod \"cinder-ee0a2-db-sync-tv65l\" (UID: \"95eb9b96-2f27-4701-b62d-b7026cb009ec\") " pod="openstack/cinder-ee0a2-db-sync-tv65l" Mar 13 13:10:42.454677 master-0 kubenswrapper[28149]: I0313 13:10:42.454599 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/76e4472a-9fe8-452e-924c-7a88df7c1f7d-dns-swift-storage-0\") pod \"dnsmasq-dns-7c86f65b7c-hdnsw\" (UID: \"76e4472a-9fe8-452e-924c-7a88df7c1f7d\") " pod="openstack/dnsmasq-dns-7c86f65b7c-hdnsw" Mar 13 13:10:42.455006 master-0 kubenswrapper[28149]: I0313 13:10:42.454968 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/95eb9b96-2f27-4701-b62d-b7026cb009ec-config-data\") pod \"cinder-ee0a2-db-sync-tv65l\" (UID: 
\"95eb9b96-2f27-4701-b62d-b7026cb009ec\") " pod="openstack/cinder-ee0a2-db-sync-tv65l" Mar 13 13:10:42.455817 master-0 kubenswrapper[28149]: I0313 13:10:42.455237 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/95eb9b96-2f27-4701-b62d-b7026cb009ec-scripts\") pod \"cinder-ee0a2-db-sync-tv65l\" (UID: \"95eb9b96-2f27-4701-b62d-b7026cb009ec\") " pod="openstack/cinder-ee0a2-db-sync-tv65l" Mar 13 13:10:42.797399 master-0 kubenswrapper[28149]: I0313 13:10:42.797079 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/c4483640-14d3-42de-bad4-48fe97f66cad-config\") pod \"neutron-db-sync-kdkvp\" (UID: \"c4483640-14d3-42de-bad4-48fe97f66cad\") " pod="openstack/neutron-db-sync-kdkvp" Mar 13 13:10:42.797399 master-0 kubenswrapper[28149]: I0313 13:10:42.797347 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c4483640-14d3-42de-bad4-48fe97f66cad-combined-ca-bundle\") pod \"neutron-db-sync-kdkvp\" (UID: \"c4483640-14d3-42de-bad4-48fe97f66cad\") " pod="openstack/neutron-db-sync-kdkvp" Mar 13 13:10:42.798347 master-0 kubenswrapper[28149]: I0313 13:10:42.797578 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jfq2h\" (UniqueName: \"kubernetes.io/projected/c4483640-14d3-42de-bad4-48fe97f66cad-kube-api-access-jfq2h\") pod \"neutron-db-sync-kdkvp\" (UID: \"c4483640-14d3-42de-bad4-48fe97f66cad\") " pod="openstack/neutron-db-sync-kdkvp" Mar 13 13:10:42.945508 master-0 kubenswrapper[28149]: I0313 13:10:42.945382 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/c4483640-14d3-42de-bad4-48fe97f66cad-config\") pod \"neutron-db-sync-kdkvp\" (UID: 
\"c4483640-14d3-42de-bad4-48fe97f66cad\") " pod="openstack/neutron-db-sync-kdkvp" Mar 13 13:10:42.945508 master-0 kubenswrapper[28149]: I0313 13:10:42.945445 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c4483640-14d3-42de-bad4-48fe97f66cad-combined-ca-bundle\") pod \"neutron-db-sync-kdkvp\" (UID: \"c4483640-14d3-42de-bad4-48fe97f66cad\") " pod="openstack/neutron-db-sync-kdkvp" Mar 13 13:10:42.945758 master-0 kubenswrapper[28149]: I0313 13:10:42.945575 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jfq2h\" (UniqueName: \"kubernetes.io/projected/c4483640-14d3-42de-bad4-48fe97f66cad-kube-api-access-jfq2h\") pod \"neutron-db-sync-kdkvp\" (UID: \"c4483640-14d3-42de-bad4-48fe97f66cad\") " pod="openstack/neutron-db-sync-kdkvp" Mar 13 13:10:42.948863 master-0 kubenswrapper[28149]: I0313 13:10:42.947856 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bvlpq\" (UniqueName: \"kubernetes.io/projected/2134b334-b48d-46c6-91b6-a824c323d789-kube-api-access-bvlpq\") pod \"ironic-db-create-f5dvd\" (UID: \"2134b334-b48d-46c6-91b6-a824c323d789\") " pod="openstack/ironic-db-create-f5dvd" Mar 13 13:10:42.952183 master-0 kubenswrapper[28149]: I0313 13:10:42.952109 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n54kt\" (UniqueName: \"kubernetes.io/projected/95eb9b96-2f27-4701-b62d-b7026cb009ec-kube-api-access-n54kt\") pod \"cinder-ee0a2-db-sync-tv65l\" (UID: \"95eb9b96-2f27-4701-b62d-b7026cb009ec\") " pod="openstack/cinder-ee0a2-db-sync-tv65l" Mar 13 13:10:42.952879 master-0 kubenswrapper[28149]: I0313 13:10:42.952804 28149 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-tfhs6" Mar 13 13:10:42.954315 master-0 kubenswrapper[28149]: I0313 13:10:42.954269 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c4483640-14d3-42de-bad4-48fe97f66cad-combined-ca-bundle\") pod \"neutron-db-sync-kdkvp\" (UID: \"c4483640-14d3-42de-bad4-48fe97f66cad\") " pod="openstack/neutron-db-sync-kdkvp" Mar 13 13:10:42.969660 master-0 kubenswrapper[28149]: I0313 13:10:42.964244 28149 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-kdkvp"] Mar 13 13:10:42.969660 master-0 kubenswrapper[28149]: I0313 13:10:42.969224 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pfdmr\" (UniqueName: \"kubernetes.io/projected/76e4472a-9fe8-452e-924c-7a88df7c1f7d-kube-api-access-pfdmr\") pod \"dnsmasq-dns-7c86f65b7c-hdnsw\" (UID: \"76e4472a-9fe8-452e-924c-7a88df7c1f7d\") " pod="openstack/dnsmasq-dns-7c86f65b7c-hdnsw" Mar 13 13:10:42.977539 master-0 kubenswrapper[28149]: I0313 13:10:42.977111 28149 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7c86f65b7c-hdnsw" Mar 13 13:10:42.998979 master-0 kubenswrapper[28149]: I0313 13:10:42.998837 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/c4483640-14d3-42de-bad4-48fe97f66cad-config\") pod \"neutron-db-sync-kdkvp\" (UID: \"c4483640-14d3-42de-bad4-48fe97f66cad\") " pod="openstack/neutron-db-sync-kdkvp" Mar 13 13:10:42.998979 master-0 kubenswrapper[28149]: I0313 13:10:42.998864 28149 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-ee0a2-db-sync-tv65l" Mar 13 13:10:43.002510 master-0 kubenswrapper[28149]: I0313 13:10:43.002458 28149 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ironic-c179-account-create-update-q66nk"] Mar 13 13:10:43.003016 master-0 kubenswrapper[28149]: E0313 13:10:43.002999 28149 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e2b3d9c8-1d1e-425b-9780-d9ad7b26318a" containerName="glance-db-sync" Mar 13 13:10:43.003080 master-0 kubenswrapper[28149]: I0313 13:10:43.003017 28149 state_mem.go:107] "Deleted CPUSet assignment" podUID="e2b3d9c8-1d1e-425b-9780-d9ad7b26318a" containerName="glance-db-sync" Mar 13 13:10:43.003291 master-0 kubenswrapper[28149]: I0313 13:10:43.003267 28149 memory_manager.go:354] "RemoveStaleState removing state" podUID="e2b3d9c8-1d1e-425b-9780-d9ad7b26318a" containerName="glance-db-sync" Mar 13 13:10:43.004339 master-0 kubenswrapper[28149]: I0313 13:10:43.004320 28149 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ironic-c179-account-create-update-q66nk" Mar 13 13:10:43.019294 master-0 kubenswrapper[28149]: I0313 13:10:43.010971 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jfq2h\" (UniqueName: \"kubernetes.io/projected/c4483640-14d3-42de-bad4-48fe97f66cad-kube-api-access-jfq2h\") pod \"neutron-db-sync-kdkvp\" (UID: \"c4483640-14d3-42de-bad4-48fe97f66cad\") " pod="openstack/neutron-db-sync-kdkvp" Mar 13 13:10:43.019294 master-0 kubenswrapper[28149]: I0313 13:10:43.012894 28149 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ironic-db-secret" Mar 13 13:10:43.019294 master-0 kubenswrapper[28149]: I0313 13:10:43.018678 28149 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-jpjd2"] Mar 13 13:10:43.034652 master-0 kubenswrapper[28149]: I0313 13:10:43.030845 28149 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ironic-c179-account-create-update-q66nk"] Mar 13 13:10:43.296687 master-0 kubenswrapper[28149]: I0313 13:10:43.296628 28149 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-sync-kdkvp" Mar 13 13:10:43.297260 master-0 kubenswrapper[28149]: I0313 13:10:43.296912 28149 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6qwm6\" (UniqueName: \"kubernetes.io/projected/e2b3d9c8-1d1e-425b-9780-d9ad7b26318a-kube-api-access-6qwm6\") pod \"e2b3d9c8-1d1e-425b-9780-d9ad7b26318a\" (UID: \"e2b3d9c8-1d1e-425b-9780-d9ad7b26318a\") " Mar 13 13:10:43.297260 master-0 kubenswrapper[28149]: I0313 13:10:43.297060 28149 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e2b3d9c8-1d1e-425b-9780-d9ad7b26318a-config-data\") pod \"e2b3d9c8-1d1e-425b-9780-d9ad7b26318a\" (UID: \"e2b3d9c8-1d1e-425b-9780-d9ad7b26318a\") " Mar 13 13:10:43.297260 master-0 kubenswrapper[28149]: I0313 13:10:43.297097 28149 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e2b3d9c8-1d1e-425b-9780-d9ad7b26318a-combined-ca-bundle\") pod \"e2b3d9c8-1d1e-425b-9780-d9ad7b26318a\" (UID: \"e2b3d9c8-1d1e-425b-9780-d9ad7b26318a\") " Mar 13 13:10:43.297260 master-0 kubenswrapper[28149]: I0313 13:10:43.297236 28149 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/e2b3d9c8-1d1e-425b-9780-d9ad7b26318a-db-sync-config-data\") pod \"e2b3d9c8-1d1e-425b-9780-d9ad7b26318a\" (UID: \"e2b3d9c8-1d1e-425b-9780-d9ad7b26318a\") " Mar 13 13:10:43.299203 master-0 kubenswrapper[28149]: I0313 13:10:43.298405 28149 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ironic-db-create-f5dvd" Mar 13 13:10:43.329305 master-0 kubenswrapper[28149]: I0313 13:10:43.325735 28149 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e2b3d9c8-1d1e-425b-9780-d9ad7b26318a-kube-api-access-6qwm6" (OuterVolumeSpecName: "kube-api-access-6qwm6") pod "e2b3d9c8-1d1e-425b-9780-d9ad7b26318a" (UID: "e2b3d9c8-1d1e-425b-9780-d9ad7b26318a"). InnerVolumeSpecName "kube-api-access-6qwm6". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 13:10:43.333622 master-0 kubenswrapper[28149]: I0313 13:10:43.333444 28149 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e2b3d9c8-1d1e-425b-9780-d9ad7b26318a-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "e2b3d9c8-1d1e-425b-9780-d9ad7b26318a" (UID: "e2b3d9c8-1d1e-425b-9780-d9ad7b26318a"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 13:10:43.420105 master-0 kubenswrapper[28149]: I0313 13:10:43.401630 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8a77bade-cffa-4d3e-998b-b60a1cabf4f7-operator-scripts\") pod \"ironic-c179-account-create-update-q66nk\" (UID: \"8a77bade-cffa-4d3e-998b-b60a1cabf4f7\") " pod="openstack/ironic-c179-account-create-update-q66nk" Mar 13 13:10:43.420105 master-0 kubenswrapper[28149]: I0313 13:10:43.401961 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xkmp2\" (UniqueName: \"kubernetes.io/projected/8a77bade-cffa-4d3e-998b-b60a1cabf4f7-kube-api-access-xkmp2\") pod \"ironic-c179-account-create-update-q66nk\" (UID: \"8a77bade-cffa-4d3e-998b-b60a1cabf4f7\") " pod="openstack/ironic-c179-account-create-update-q66nk" Mar 13 13:10:43.420105 master-0 kubenswrapper[28149]: I0313 13:10:43.402211 28149 
reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/e2b3d9c8-1d1e-425b-9780-d9ad7b26318a-db-sync-config-data\") on node \"master-0\" DevicePath \"\"" Mar 13 13:10:43.420105 master-0 kubenswrapper[28149]: I0313 13:10:43.402298 28149 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6qwm6\" (UniqueName: \"kubernetes.io/projected/e2b3d9c8-1d1e-425b-9780-d9ad7b26318a-kube-api-access-6qwm6\") on node \"master-0\" DevicePath \"\"" Mar 13 13:10:43.452063 master-0 kubenswrapper[28149]: I0313 13:10:43.447320 28149 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e2b3d9c8-1d1e-425b-9780-d9ad7b26318a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e2b3d9c8-1d1e-425b-9780-d9ad7b26318a" (UID: "e2b3d9c8-1d1e-425b-9780-d9ad7b26318a"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 13:10:43.452063 master-0 kubenswrapper[28149]: I0313 13:10:43.449726 28149 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-sync-wtgql"] Mar 13 13:10:43.452063 master-0 kubenswrapper[28149]: I0313 13:10:43.451146 28149 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-sync-wtgql" Mar 13 13:10:43.462797 master-0 kubenswrapper[28149]: I0313 13:10:43.454318 28149 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Mar 13 13:10:43.462797 master-0 kubenswrapper[28149]: I0313 13:10:43.454624 28149 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Mar 13 13:10:43.462797 master-0 kubenswrapper[28149]: I0313 13:10:43.462731 28149 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-wtgql"] Mar 13 13:10:43.540484 master-0 kubenswrapper[28149]: I0313 13:10:43.540416 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8a77bade-cffa-4d3e-998b-b60a1cabf4f7-operator-scripts\") pod \"ironic-c179-account-create-update-q66nk\" (UID: \"8a77bade-cffa-4d3e-998b-b60a1cabf4f7\") " pod="openstack/ironic-c179-account-create-update-q66nk" Mar 13 13:10:43.540832 master-0 kubenswrapper[28149]: I0313 13:10:43.540668 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xkmp2\" (UniqueName: \"kubernetes.io/projected/8a77bade-cffa-4d3e-998b-b60a1cabf4f7-kube-api-access-xkmp2\") pod \"ironic-c179-account-create-update-q66nk\" (UID: \"8a77bade-cffa-4d3e-998b-b60a1cabf4f7\") " pod="openstack/ironic-c179-account-create-update-q66nk" Mar 13 13:10:43.746922 master-0 kubenswrapper[28149]: I0313 13:10:43.736792 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8a77bade-cffa-4d3e-998b-b60a1cabf4f7-operator-scripts\") pod \"ironic-c179-account-create-update-q66nk\" (UID: \"8a77bade-cffa-4d3e-998b-b60a1cabf4f7\") " pod="openstack/ironic-c179-account-create-update-q66nk" Mar 13 13:10:43.748435 master-0 kubenswrapper[28149]: I0313 13:10:43.748401 28149 reconciler_common.go:293] "Volume 
detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e2b3d9c8-1d1e-425b-9780-d9ad7b26318a-combined-ca-bundle\") on node \"master-0\" DevicePath \"\"" Mar 13 13:10:43.769125 master-0 kubenswrapper[28149]: I0313 13:10:43.767073 28149 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7c86f65b7c-hdnsw"] Mar 13 13:10:43.773369 master-0 kubenswrapper[28149]: I0313 13:10:43.773326 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-tfhs6" event={"ID":"e2b3d9c8-1d1e-425b-9780-d9ad7b26318a","Type":"ContainerDied","Data":"e9ea5f6065dcf68f302fe88c1dd5ef0c6c66b2db9e4c930cb5449835c8cdbaa0"} Mar 13 13:10:43.773369 master-0 kubenswrapper[28149]: I0313 13:10:43.773371 28149 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e9ea5f6065dcf68f302fe88c1dd5ef0c6c66b2db9e4c930cb5449835c8cdbaa0" Mar 13 13:10:43.773526 master-0 kubenswrapper[28149]: I0313 13:10:43.773430 28149 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-tfhs6"
Mar 13 13:10:43.784395 master-0 kubenswrapper[28149]: I0313 13:10:43.784253 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xkmp2\" (UniqueName: \"kubernetes.io/projected/8a77bade-cffa-4d3e-998b-b60a1cabf4f7-kube-api-access-xkmp2\") pod \"ironic-c179-account-create-update-q66nk\" (UID: \"8a77bade-cffa-4d3e-998b-b60a1cabf4f7\") " pod="openstack/ironic-c179-account-create-update-q66nk"
Mar 13 13:10:43.799685 master-0 kubenswrapper[28149]: I0313 13:10:43.799630 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-jpjd2" event={"ID":"736c6577-449b-4b8d-8bfa-3dfbcc259e94","Type":"ContainerStarted","Data":"802a4eb8fd604946a8eb48f1fd0e2593b37275e5f3465a341033c567e666f3ef"}
Mar 13 13:10:43.805028 master-0 kubenswrapper[28149]: I0313 13:10:43.804960 28149 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-57cdddf645-cckjh"]
Mar 13 13:10:43.807392 master-0 kubenswrapper[28149]: I0313 13:10:43.807341 28149 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57cdddf645-cckjh"
Mar 13 13:10:43.815299 master-0 kubenswrapper[28149]: I0313 13:10:43.814520 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-75bd79cd5f-hd84t" event={"ID":"74332268-d102-4a07-9298-c7cf2005cee5","Type":"ContainerStarted","Data":"b4bea7059d8986de008dfd6d8575ed7a91cff200e94dfd56a995cd03e0701810"}
Mar 13 13:10:43.815299 master-0 kubenswrapper[28149]: I0313 13:10:43.814718 28149 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-75bd79cd5f-hd84t" podUID="74332268-d102-4a07-9298-c7cf2005cee5" containerName="dnsmasq-dns" containerID="cri-o://b4bea7059d8986de008dfd6d8575ed7a91cff200e94dfd56a995cd03e0701810" gracePeriod=10
Mar 13 13:10:43.815299 master-0 kubenswrapper[28149]: I0313 13:10:43.815200 28149 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-75bd79cd5f-hd84t"
Mar 13 13:10:43.826612 master-0 kubenswrapper[28149]: I0313 13:10:43.826534 28149 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-57cdddf645-cckjh"]
Mar 13 13:10:43.850037 master-0 kubenswrapper[28149]: I0313 13:10:43.849956 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9d77ebfb-6652-45f8-8bfb-fe1e4344c3a3-combined-ca-bundle\") pod \"placement-db-sync-wtgql\" (UID: \"9d77ebfb-6652-45f8-8bfb-fe1e4344c3a3\") " pod="openstack/placement-db-sync-wtgql"
Mar 13 13:10:43.850866 master-0 kubenswrapper[28149]: I0313 13:10:43.850437 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9d77ebfb-6652-45f8-8bfb-fe1e4344c3a3-scripts\") pod \"placement-db-sync-wtgql\" (UID: \"9d77ebfb-6652-45f8-8bfb-fe1e4344c3a3\") " pod="openstack/placement-db-sync-wtgql"
Mar 13 13:10:43.850866 master-0 kubenswrapper[28149]: I0313 13:10:43.850768 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q2pkf\" (UniqueName: \"kubernetes.io/projected/9d77ebfb-6652-45f8-8bfb-fe1e4344c3a3-kube-api-access-q2pkf\") pod \"placement-db-sync-wtgql\" (UID: \"9d77ebfb-6652-45f8-8bfb-fe1e4344c3a3\") " pod="openstack/placement-db-sync-wtgql"
Mar 13 13:10:43.851394 master-0 kubenswrapper[28149]: I0313 13:10:43.850951 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9d77ebfb-6652-45f8-8bfb-fe1e4344c3a3-config-data\") pod \"placement-db-sync-wtgql\" (UID: \"9d77ebfb-6652-45f8-8bfb-fe1e4344c3a3\") " pod="openstack/placement-db-sync-wtgql"
Mar 13 13:10:43.851394 master-0 kubenswrapper[28149]: I0313 13:10:43.851036 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9d77ebfb-6652-45f8-8bfb-fe1e4344c3a3-logs\") pod \"placement-db-sync-wtgql\" (UID: \"9d77ebfb-6652-45f8-8bfb-fe1e4344c3a3\") " pod="openstack/placement-db-sync-wtgql"
Mar 13 13:10:43.882836 master-0 kubenswrapper[28149]: I0313 13:10:43.882508 28149 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e2b3d9c8-1d1e-425b-9780-d9ad7b26318a-config-data" (OuterVolumeSpecName: "config-data") pod "e2b3d9c8-1d1e-425b-9780-d9ad7b26318a" (UID: "e2b3d9c8-1d1e-425b-9780-d9ad7b26318a"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 13 13:10:43.915314 master-0 kubenswrapper[28149]: I0313 13:10:43.915261 28149 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ironic-c179-account-create-update-q66nk"
Mar 13 13:10:43.956352 master-0 kubenswrapper[28149]: I0313 13:10:43.955312 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/71996c77-565a-4c9a-b654-742f00c3095b-dns-swift-storage-0\") pod \"dnsmasq-dns-57cdddf645-cckjh\" (UID: \"71996c77-565a-4c9a-b654-742f00c3095b\") " pod="openstack/dnsmasq-dns-57cdddf645-cckjh"
Mar 13 13:10:43.956352 master-0 kubenswrapper[28149]: I0313 13:10:43.955387 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/71996c77-565a-4c9a-b654-742f00c3095b-config\") pod \"dnsmasq-dns-57cdddf645-cckjh\" (UID: \"71996c77-565a-4c9a-b654-742f00c3095b\") " pod="openstack/dnsmasq-dns-57cdddf645-cckjh"
Mar 13 13:10:43.956352 master-0 kubenswrapper[28149]: I0313 13:10:43.955435 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9d77ebfb-6652-45f8-8bfb-fe1e4344c3a3-logs\") pod \"placement-db-sync-wtgql\" (UID: \"9d77ebfb-6652-45f8-8bfb-fe1e4344c3a3\") " pod="openstack/placement-db-sync-wtgql"
Mar 13 13:10:43.956352 master-0 kubenswrapper[28149]: I0313 13:10:43.955518 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n8b65\" (UniqueName: \"kubernetes.io/projected/71996c77-565a-4c9a-b654-742f00c3095b-kube-api-access-n8b65\") pod \"dnsmasq-dns-57cdddf645-cckjh\" (UID: \"71996c77-565a-4c9a-b654-742f00c3095b\") " pod="openstack/dnsmasq-dns-57cdddf645-cckjh"
Mar 13 13:10:43.956352 master-0 kubenswrapper[28149]: I0313 13:10:43.955778 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9d77ebfb-6652-45f8-8bfb-fe1e4344c3a3-combined-ca-bundle\") pod \"placement-db-sync-wtgql\" (UID: \"9d77ebfb-6652-45f8-8bfb-fe1e4344c3a3\") " pod="openstack/placement-db-sync-wtgql"
Mar 13 13:10:43.956352 master-0 kubenswrapper[28149]: I0313 13:10:43.955901 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/71996c77-565a-4c9a-b654-742f00c3095b-dns-svc\") pod \"dnsmasq-dns-57cdddf645-cckjh\" (UID: \"71996c77-565a-4c9a-b654-742f00c3095b\") " pod="openstack/dnsmasq-dns-57cdddf645-cckjh"
Mar 13 13:10:43.956352 master-0 kubenswrapper[28149]: I0313 13:10:43.956038 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9d77ebfb-6652-45f8-8bfb-fe1e4344c3a3-scripts\") pod \"placement-db-sync-wtgql\" (UID: \"9d77ebfb-6652-45f8-8bfb-fe1e4344c3a3\") " pod="openstack/placement-db-sync-wtgql"
Mar 13 13:10:43.956352 master-0 kubenswrapper[28149]: I0313 13:10:43.956315 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9d77ebfb-6652-45f8-8bfb-fe1e4344c3a3-logs\") pod \"placement-db-sync-wtgql\" (UID: \"9d77ebfb-6652-45f8-8bfb-fe1e4344c3a3\") " pod="openstack/placement-db-sync-wtgql"
Mar 13 13:10:43.958816 master-0 kubenswrapper[28149]: I0313 13:10:43.958769 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/71996c77-565a-4c9a-b654-742f00c3095b-ovsdbserver-nb\") pod \"dnsmasq-dns-57cdddf645-cckjh\" (UID: \"71996c77-565a-4c9a-b654-742f00c3095b\") " pod="openstack/dnsmasq-dns-57cdddf645-cckjh"
Mar 13 13:10:43.958907 master-0 kubenswrapper[28149]: I0313 13:10:43.958882 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q2pkf\" (UniqueName: \"kubernetes.io/projected/9d77ebfb-6652-45f8-8bfb-fe1e4344c3a3-kube-api-access-q2pkf\") pod \"placement-db-sync-wtgql\" (UID: \"9d77ebfb-6652-45f8-8bfb-fe1e4344c3a3\") " pod="openstack/placement-db-sync-wtgql"
Mar 13 13:10:43.959013 master-0 kubenswrapper[28149]: I0313 13:10:43.958980 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/71996c77-565a-4c9a-b654-742f00c3095b-ovsdbserver-sb\") pod \"dnsmasq-dns-57cdddf645-cckjh\" (UID: \"71996c77-565a-4c9a-b654-742f00c3095b\") " pod="openstack/dnsmasq-dns-57cdddf645-cckjh"
Mar 13 13:10:43.959069 master-0 kubenswrapper[28149]: I0313 13:10:43.959037 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9d77ebfb-6652-45f8-8bfb-fe1e4344c3a3-config-data\") pod \"placement-db-sync-wtgql\" (UID: \"9d77ebfb-6652-45f8-8bfb-fe1e4344c3a3\") " pod="openstack/placement-db-sync-wtgql"
Mar 13 13:10:43.959449 master-0 kubenswrapper[28149]: I0313 13:10:43.959407 28149 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e2b3d9c8-1d1e-425b-9780-d9ad7b26318a-config-data\") on node \"master-0\" DevicePath \"\""
Mar 13 13:10:43.976501 master-0 kubenswrapper[28149]: I0313 13:10:43.975682 28149 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-75bd79cd5f-hd84t" podStartSLOduration=4.975641261 podStartE2EDuration="4.975641261s" podCreationTimestamp="2026-03-13 13:10:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 13:10:43.877174339 +0000 UTC m=+1017.530639498" watchObservedRunningTime="2026-03-13 13:10:43.975641261 +0000 UTC m=+1017.629106440"
Mar 13 13:10:44.313918 master-0 kubenswrapper[28149]: I0313 13:10:44.313436 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9d77ebfb-6652-45f8-8bfb-fe1e4344c3a3-combined-ca-bundle\") pod \"placement-db-sync-wtgql\" (UID: \"9d77ebfb-6652-45f8-8bfb-fe1e4344c3a3\") " pod="openstack/placement-db-sync-wtgql"
Mar 13 13:10:44.320647 master-0 kubenswrapper[28149]: I0313 13:10:44.317292 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q2pkf\" (UniqueName: \"kubernetes.io/projected/9d77ebfb-6652-45f8-8bfb-fe1e4344c3a3-kube-api-access-q2pkf\") pod \"placement-db-sync-wtgql\" (UID: \"9d77ebfb-6652-45f8-8bfb-fe1e4344c3a3\") " pod="openstack/placement-db-sync-wtgql"
Mar 13 13:10:44.325441 master-0 kubenswrapper[28149]: I0313 13:10:44.325387 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/71996c77-565a-4c9a-b654-742f00c3095b-dns-swift-storage-0\") pod \"dnsmasq-dns-57cdddf645-cckjh\" (UID: \"71996c77-565a-4c9a-b654-742f00c3095b\") " pod="openstack/dnsmasq-dns-57cdddf645-cckjh"
Mar 13 13:10:44.325618 master-0 kubenswrapper[28149]: I0313 13:10:44.325605 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/71996c77-565a-4c9a-b654-742f00c3095b-config\") pod \"dnsmasq-dns-57cdddf645-cckjh\" (UID: \"71996c77-565a-4c9a-b654-742f00c3095b\") " pod="openstack/dnsmasq-dns-57cdddf645-cckjh"
Mar 13 13:10:44.325798 master-0 kubenswrapper[28149]: I0313 13:10:44.325784 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n8b65\" (UniqueName: \"kubernetes.io/projected/71996c77-565a-4c9a-b654-742f00c3095b-kube-api-access-n8b65\") pod \"dnsmasq-dns-57cdddf645-cckjh\" (UID: \"71996c77-565a-4c9a-b654-742f00c3095b\") " pod="openstack/dnsmasq-dns-57cdddf645-cckjh"
Mar 13 13:10:44.326160 master-0 kubenswrapper[28149]: I0313 13:10:44.326027 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9d77ebfb-6652-45f8-8bfb-fe1e4344c3a3-scripts\") pod \"placement-db-sync-wtgql\" (UID: \"9d77ebfb-6652-45f8-8bfb-fe1e4344c3a3\") " pod="openstack/placement-db-sync-wtgql"
Mar 13 13:10:44.328121 master-0 kubenswrapper[28149]: I0313 13:10:44.327458 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9d77ebfb-6652-45f8-8bfb-fe1e4344c3a3-config-data\") pod \"placement-db-sync-wtgql\" (UID: \"9d77ebfb-6652-45f8-8bfb-fe1e4344c3a3\") " pod="openstack/placement-db-sync-wtgql"
Mar 13 13:10:44.328495 master-0 kubenswrapper[28149]: I0313 13:10:44.328458 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/71996c77-565a-4c9a-b654-742f00c3095b-dns-svc\") pod \"dnsmasq-dns-57cdddf645-cckjh\" (UID: \"71996c77-565a-4c9a-b654-742f00c3095b\") " pod="openstack/dnsmasq-dns-57cdddf645-cckjh"
Mar 13 13:10:44.328795 master-0 kubenswrapper[28149]: I0313 13:10:44.328688 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/71996c77-565a-4c9a-b654-742f00c3095b-ovsdbserver-nb\") pod \"dnsmasq-dns-57cdddf645-cckjh\" (UID: \"71996c77-565a-4c9a-b654-742f00c3095b\") " pod="openstack/dnsmasq-dns-57cdddf645-cckjh"
Mar 13 13:10:44.330258 master-0 kubenswrapper[28149]: I0313 13:10:44.328865 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/71996c77-565a-4c9a-b654-742f00c3095b-ovsdbserver-sb\") pod \"dnsmasq-dns-57cdddf645-cckjh\" (UID: \"71996c77-565a-4c9a-b654-742f00c3095b\") " pod="openstack/dnsmasq-dns-57cdddf645-cckjh"
Mar 13 13:10:44.331703 master-0 kubenswrapper[28149]: I0313 13:10:44.331637 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/71996c77-565a-4c9a-b654-742f00c3095b-dns-swift-storage-0\") pod \"dnsmasq-dns-57cdddf645-cckjh\" (UID: \"71996c77-565a-4c9a-b654-742f00c3095b\") " pod="openstack/dnsmasq-dns-57cdddf645-cckjh"
Mar 13 13:10:44.338181 master-0 kubenswrapper[28149]: I0313 13:10:44.337421 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/71996c77-565a-4c9a-b654-742f00c3095b-config\") pod \"dnsmasq-dns-57cdddf645-cckjh\" (UID: \"71996c77-565a-4c9a-b654-742f00c3095b\") " pod="openstack/dnsmasq-dns-57cdddf645-cckjh"
Mar 13 13:10:44.338181 master-0 kubenswrapper[28149]: I0313 13:10:44.337820 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/71996c77-565a-4c9a-b654-742f00c3095b-dns-svc\") pod \"dnsmasq-dns-57cdddf645-cckjh\" (UID: \"71996c77-565a-4c9a-b654-742f00c3095b\") " pod="openstack/dnsmasq-dns-57cdddf645-cckjh"
Mar 13 13:10:44.338181 master-0 kubenswrapper[28149]: I0313 13:10:44.337833 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/71996c77-565a-4c9a-b654-742f00c3095b-ovsdbserver-nb\") pod \"dnsmasq-dns-57cdddf645-cckjh\" (UID: \"71996c77-565a-4c9a-b654-742f00c3095b\") " pod="openstack/dnsmasq-dns-57cdddf645-cckjh"
Mar 13 13:10:44.339659 master-0 kubenswrapper[28149]: I0313 13:10:44.339470 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/71996c77-565a-4c9a-b654-742f00c3095b-ovsdbserver-sb\") pod \"dnsmasq-dns-57cdddf645-cckjh\" (UID: \"71996c77-565a-4c9a-b654-742f00c3095b\") " pod="openstack/dnsmasq-dns-57cdddf645-cckjh"
Mar 13 13:10:44.355472 master-0 kubenswrapper[28149]: I0313 13:10:44.353943 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n8b65\" (UniqueName: \"kubernetes.io/projected/71996c77-565a-4c9a-b654-742f00c3095b-kube-api-access-n8b65\") pod \"dnsmasq-dns-57cdddf645-cckjh\" (UID: \"71996c77-565a-4c9a-b654-742f00c3095b\") " pod="openstack/dnsmasq-dns-57cdddf645-cckjh"
Mar 13 13:10:44.397801 master-0 kubenswrapper[28149]: I0313 13:10:44.397755 28149 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-wtgql"
Mar 13 13:10:44.399964 master-0 kubenswrapper[28149]: I0313 13:10:44.398777 28149 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57cdddf645-cckjh"
Mar 13 13:10:44.875574 master-0 kubenswrapper[28149]: I0313 13:10:44.875355 28149 generic.go:334] "Generic (PLEG): container finished" podID="74332268-d102-4a07-9298-c7cf2005cee5" containerID="b4bea7059d8986de008dfd6d8575ed7a91cff200e94dfd56a995cd03e0701810" exitCode=0
Mar 13 13:10:44.875574 master-0 kubenswrapper[28149]: I0313 13:10:44.875435 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-75bd79cd5f-hd84t" event={"ID":"74332268-d102-4a07-9298-c7cf2005cee5","Type":"ContainerDied","Data":"b4bea7059d8986de008dfd6d8575ed7a91cff200e94dfd56a995cd03e0701810"}
Mar 13 13:10:44.896118 master-0 kubenswrapper[28149]: I0313 13:10:44.896053 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-jpjd2" event={"ID":"736c6577-449b-4b8d-8bfa-3dfbcc259e94","Type":"ContainerStarted","Data":"9e9d3431ab722a51a8be3b8442bbcf269a950063e2d61434dfd3760b4c3ccf4f"}
Mar 13 13:10:45.116487 master-0 kubenswrapper[28149]: I0313 13:10:45.111820 28149 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-jpjd2" podStartSLOduration=4.111799026 podStartE2EDuration="4.111799026s" podCreationTimestamp="2026-03-13 13:10:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 13:10:45.081470698 +0000 UTC m=+1018.734935857" watchObservedRunningTime="2026-03-13 13:10:45.111799026 +0000 UTC m=+1018.765264185"
Mar 13 13:10:45.204407 master-0 kubenswrapper[28149]: I0313 13:10:45.204353 28149 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7c86f65b7c-hdnsw"]
Mar 13 13:10:45.237430 master-0 kubenswrapper[28149]: I0313 13:10:45.237379 28149 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-75bd79cd5f-hd84t"
Mar 13 13:10:45.268187 master-0 kubenswrapper[28149]: I0313 13:10:45.266567 28149 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/74332268-d102-4a07-9298-c7cf2005cee5-ovsdbserver-sb\") pod \"74332268-d102-4a07-9298-c7cf2005cee5\" (UID: \"74332268-d102-4a07-9298-c7cf2005cee5\") "
Mar 13 13:10:45.268187 master-0 kubenswrapper[28149]: I0313 13:10:45.266783 28149 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/74332268-d102-4a07-9298-c7cf2005cee5-dns-swift-storage-0\") pod \"74332268-d102-4a07-9298-c7cf2005cee5\" (UID: \"74332268-d102-4a07-9298-c7cf2005cee5\") "
Mar 13 13:10:45.268187 master-0 kubenswrapper[28149]: I0313 13:10:45.266877 28149 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/74332268-d102-4a07-9298-c7cf2005cee5-config\") pod \"74332268-d102-4a07-9298-c7cf2005cee5\" (UID: \"74332268-d102-4a07-9298-c7cf2005cee5\") "
Mar 13 13:10:45.268187 master-0 kubenswrapper[28149]: I0313 13:10:45.266909 28149 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/74332268-d102-4a07-9298-c7cf2005cee5-ovsdbserver-nb\") pod \"74332268-d102-4a07-9298-c7cf2005cee5\" (UID: \"74332268-d102-4a07-9298-c7cf2005cee5\") "
Mar 13 13:10:45.268187 master-0 kubenswrapper[28149]: I0313 13:10:45.267057 28149 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hb4fd\" (UniqueName: \"kubernetes.io/projected/74332268-d102-4a07-9298-c7cf2005cee5-kube-api-access-hb4fd\") pod \"74332268-d102-4a07-9298-c7cf2005cee5\" (UID: \"74332268-d102-4a07-9298-c7cf2005cee5\") "
Mar 13 13:10:45.268187 master-0 kubenswrapper[28149]: I0313 13:10:45.267122 28149 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/74332268-d102-4a07-9298-c7cf2005cee5-dns-svc\") pod \"74332268-d102-4a07-9298-c7cf2005cee5\" (UID: \"74332268-d102-4a07-9298-c7cf2005cee5\") "
Mar 13 13:10:45.469464 master-0 kubenswrapper[28149]: I0313 13:10:45.466655 28149 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/74332268-d102-4a07-9298-c7cf2005cee5-kube-api-access-hb4fd" (OuterVolumeSpecName: "kube-api-access-hb4fd") pod "74332268-d102-4a07-9298-c7cf2005cee5" (UID: "74332268-d102-4a07-9298-c7cf2005cee5"). InnerVolumeSpecName "kube-api-access-hb4fd". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 13 13:10:45.502434 master-0 kubenswrapper[28149]: I0313 13:10:45.494348 28149 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hb4fd\" (UniqueName: \"kubernetes.io/projected/74332268-d102-4a07-9298-c7cf2005cee5-kube-api-access-hb4fd\") on node \"master-0\" DevicePath \"\""
Mar 13 13:10:45.618039 master-0 kubenswrapper[28149]: I0313 13:10:45.586203 28149 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-ee0a2-db-sync-tv65l"]
Mar 13 13:10:45.618039 master-0 kubenswrapper[28149]: I0313 13:10:45.601274 28149 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/74332268-d102-4a07-9298-c7cf2005cee5-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "74332268-d102-4a07-9298-c7cf2005cee5" (UID: "74332268-d102-4a07-9298-c7cf2005cee5"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 13 13:10:45.660746 master-0 kubenswrapper[28149]: I0313 13:10:45.654349 28149 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/74332268-d102-4a07-9298-c7cf2005cee5-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "74332268-d102-4a07-9298-c7cf2005cee5" (UID: "74332268-d102-4a07-9298-c7cf2005cee5"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 13 13:10:45.677126 master-0 kubenswrapper[28149]: I0313 13:10:45.670488 28149 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/74332268-d102-4a07-9298-c7cf2005cee5-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "74332268-d102-4a07-9298-c7cf2005cee5" (UID: "74332268-d102-4a07-9298-c7cf2005cee5"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 13 13:10:45.717524 master-0 kubenswrapper[28149]: I0313 13:10:45.710179 28149 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/74332268-d102-4a07-9298-c7cf2005cee5-config" (OuterVolumeSpecName: "config") pod "74332268-d102-4a07-9298-c7cf2005cee5" (UID: "74332268-d102-4a07-9298-c7cf2005cee5"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 13 13:10:45.813189 master-0 kubenswrapper[28149]: I0313 13:10:45.797969 28149 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ironic-db-create-f5dvd"]
Mar 13 13:10:45.863377 master-0 kubenswrapper[28149]: I0313 13:10:45.861525 28149 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/74332268-d102-4a07-9298-c7cf2005cee5-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "74332268-d102-4a07-9298-c7cf2005cee5" (UID: "74332268-d102-4a07-9298-c7cf2005cee5"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 13 13:10:45.907825 master-0 kubenswrapper[28149]: I0313 13:10:45.905529 28149 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/74332268-d102-4a07-9298-c7cf2005cee5-dns-svc\") on node \"master-0\" DevicePath \"\""
Mar 13 13:10:45.907825 master-0 kubenswrapper[28149]: I0313 13:10:45.905590 28149 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/74332268-d102-4a07-9298-c7cf2005cee5-ovsdbserver-sb\") on node \"master-0\" DevicePath \"\""
Mar 13 13:10:45.907825 master-0 kubenswrapper[28149]: I0313 13:10:45.905605 28149 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/74332268-d102-4a07-9298-c7cf2005cee5-dns-swift-storage-0\") on node \"master-0\" DevicePath \"\""
Mar 13 13:10:45.907825 master-0 kubenswrapper[28149]: I0313 13:10:45.905619 28149 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/74332268-d102-4a07-9298-c7cf2005cee5-config\") on node \"master-0\" DevicePath \"\""
Mar 13 13:10:45.961673 master-0 kubenswrapper[28149]: I0313 13:10:45.961288 28149 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-kdkvp"]
Mar 13 13:10:46.010723 master-0 kubenswrapper[28149]: I0313 13:10:46.009352 28149 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/74332268-d102-4a07-9298-c7cf2005cee5-ovsdbserver-nb\") on node \"master-0\" DevicePath \"\""
Mar 13 13:10:46.015897 master-0 kubenswrapper[28149]: I0313 13:10:46.015840 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-db-create-f5dvd" event={"ID":"2134b334-b48d-46c6-91b6-a824c323d789","Type":"ContainerStarted","Data":"5ff049d719f6398f0a653a55db7cbc234617664ede84f23f0f2a70bcbe02248b"}
Mar 13 13:10:46.032871 master-0 kubenswrapper[28149]: I0313 13:10:46.032789 28149 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57cdddf645-cckjh"]
Mar 13 13:10:46.047862 master-0 kubenswrapper[28149]: I0313 13:10:46.047794 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-75bd79cd5f-hd84t" event={"ID":"74332268-d102-4a07-9298-c7cf2005cee5","Type":"ContainerDied","Data":"dc181376a11d64845e2eca4afeb46c888c29c2cbd8c0d2ba61c30317ad96c8ef"}
Mar 13 13:10:46.047862 master-0 kubenswrapper[28149]: I0313 13:10:46.047880 28149 scope.go:117] "RemoveContainer" containerID="b4bea7059d8986de008dfd6d8575ed7a91cff200e94dfd56a995cd03e0701810"
Mar 13 13:10:46.048438 master-0 kubenswrapper[28149]: I0313 13:10:46.048026 28149 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-75bd79cd5f-hd84t"
Mar 13 13:10:46.052521 master-0 kubenswrapper[28149]: I0313 13:10:46.052037 28149 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-dc5fdb9b9-7mhs2"]
Mar 13 13:10:46.053364 master-0 kubenswrapper[28149]: E0313 13:10:46.053335 28149 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="74332268-d102-4a07-9298-c7cf2005cee5" containerName="init"
Mar 13 13:10:46.053465 master-0 kubenswrapper[28149]: I0313 13:10:46.053389 28149 state_mem.go:107] "Deleted CPUSet assignment" podUID="74332268-d102-4a07-9298-c7cf2005cee5" containerName="init"
Mar 13 13:10:46.053528 master-0 kubenswrapper[28149]: E0313 13:10:46.053480 28149 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="74332268-d102-4a07-9298-c7cf2005cee5" containerName="dnsmasq-dns"
Mar 13 13:10:46.053528 master-0 kubenswrapper[28149]: I0313 13:10:46.053490 28149 state_mem.go:107] "Deleted CPUSet assignment" podUID="74332268-d102-4a07-9298-c7cf2005cee5" containerName="dnsmasq-dns"
Mar 13 13:10:46.053979 master-0 kubenswrapper[28149]: I0313 13:10:46.053918 28149 memory_manager.go:354] "RemoveStaleState removing state" podUID="74332268-d102-4a07-9298-c7cf2005cee5" containerName="dnsmasq-dns"
Mar 13 13:10:46.067250 master-0 kubenswrapper[28149]: I0313 13:10:46.065156 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-kdkvp" event={"ID":"c4483640-14d3-42de-bad4-48fe97f66cad","Type":"ContainerStarted","Data":"6b685ef367473edbbfca91901f442ef04856cb5bb7adfab4f42d001edf21e487"}
Mar 13 13:10:46.067250 master-0 kubenswrapper[28149]: I0313 13:10:46.065269 28149 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-dc5fdb9b9-7mhs2"
Mar 13 13:10:46.074119 master-0 kubenswrapper[28149]: I0313 13:10:46.073504 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-ee0a2-db-sync-tv65l" event={"ID":"95eb9b96-2f27-4701-b62d-b7026cb009ec","Type":"ContainerStarted","Data":"468158cb2ff12896cce5ce56f27fbf1a572294755b65eeb0a543800828f8eec8"}
Mar 13 13:10:46.099301 master-0 kubenswrapper[28149]: I0313 13:10:46.096103 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7c86f65b7c-hdnsw" event={"ID":"76e4472a-9fe8-452e-924c-7a88df7c1f7d","Type":"ContainerStarted","Data":"9b8eadf043a31b04c6595cd04896615c57c6020b5212752085cc5d75f67c81c7"}
Mar 13 13:10:46.121335 master-0 kubenswrapper[28149]: I0313 13:10:46.121211 28149 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-dc5fdb9b9-7mhs2"]
Mar 13 13:10:46.130405 master-0 kubenswrapper[28149]: I0313 13:10:46.130230 28149 scope.go:117] "RemoveContainer" containerID="5d6fd29691d2926caa8269a5f26e9a1fe1d25ad6d7c3b09a7170af10352aff21"
Mar 13 13:10:46.150286 master-0 kubenswrapper[28149]: I0313 13:10:46.147925 28149 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-wtgql"]
Mar 13 13:10:46.155004 master-0 kubenswrapper[28149]: W0313 13:10:46.154951 28149 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod71996c77_565a_4c9a_b654_742f00c3095b.slice/crio-d0e2e75d51a605a616d103297e202cce0560669e2f6136d90cfbece39ca8fc79 WatchSource:0}: Error finding container d0e2e75d51a605a616d103297e202cce0560669e2f6136d90cfbece39ca8fc79: Status 404 returned error can't find the container with id d0e2e75d51a605a616d103297e202cce0560669e2f6136d90cfbece39ca8fc79
Mar 13 13:10:46.161074 master-0 kubenswrapper[28149]: I0313 13:10:46.161041 28149 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ironic-c179-account-create-update-q66nk"]
Mar 13 13:10:46.175166 master-0 kubenswrapper[28149]: I0313 13:10:46.175110 28149 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57cdddf645-cckjh"]
Mar 13 13:10:46.201958 master-0 kubenswrapper[28149]: I0313 13:10:46.201909 28149 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-75bd79cd5f-hd84t"]
Mar 13 13:10:46.211841 master-0 kubenswrapper[28149]: I0313 13:10:46.211736 28149 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-75bd79cd5f-hd84t"]
Mar 13 13:10:46.214393 master-0 kubenswrapper[28149]: I0313 13:10:46.214100 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/3e202aeb-6913-4506-ba76-63feb8748d60-dns-swift-storage-0\") pod \"dnsmasq-dns-dc5fdb9b9-7mhs2\" (UID: \"3e202aeb-6913-4506-ba76-63feb8748d60\") " pod="openstack/dnsmasq-dns-dc5fdb9b9-7mhs2"
Mar 13 13:10:46.214393 master-0 kubenswrapper[28149]: I0313 13:10:46.214201 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3e202aeb-6913-4506-ba76-63feb8748d60-config\") pod \"dnsmasq-dns-dc5fdb9b9-7mhs2\" (UID: \"3e202aeb-6913-4506-ba76-63feb8748d60\") " pod="openstack/dnsmasq-dns-dc5fdb9b9-7mhs2"
Mar 13 13:10:46.214393 master-0 kubenswrapper[28149]: I0313 13:10:46.214274 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3e202aeb-6913-4506-ba76-63feb8748d60-ovsdbserver-nb\") pod \"dnsmasq-dns-dc5fdb9b9-7mhs2\" (UID: \"3e202aeb-6913-4506-ba76-63feb8748d60\") " pod="openstack/dnsmasq-dns-dc5fdb9b9-7mhs2"
Mar 13 13:10:46.214393 master-0 kubenswrapper[28149]: I0313 13:10:46.214311 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3e202aeb-6913-4506-ba76-63feb8748d60-dns-svc\") pod \"dnsmasq-dns-dc5fdb9b9-7mhs2\" (UID: \"3e202aeb-6913-4506-ba76-63feb8748d60\") " pod="openstack/dnsmasq-dns-dc5fdb9b9-7mhs2"
Mar 13 13:10:46.214774 master-0 kubenswrapper[28149]: I0313 13:10:46.214434 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3e202aeb-6913-4506-ba76-63feb8748d60-ovsdbserver-sb\") pod \"dnsmasq-dns-dc5fdb9b9-7mhs2\" (UID: \"3e202aeb-6913-4506-ba76-63feb8748d60\") " pod="openstack/dnsmasq-dns-dc5fdb9b9-7mhs2"
Mar 13 13:10:46.214774 master-0 kubenswrapper[28149]: I0313 13:10:46.214468 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j644f\" (UniqueName: \"kubernetes.io/projected/3e202aeb-6913-4506-ba76-63feb8748d60-kube-api-access-j644f\") pod \"dnsmasq-dns-dc5fdb9b9-7mhs2\" (UID: \"3e202aeb-6913-4506-ba76-63feb8748d60\") " pod="openstack/dnsmasq-dns-dc5fdb9b9-7mhs2"
Mar 13 13:10:46.317402 master-0 kubenswrapper[28149]: W0313 13:10:46.317356 28149 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8a77bade_cffa_4d3e_998b_b60a1cabf4f7.slice/crio-0c7c2d727144447413fcdc43158a03e7483d9ff25ac9cf387661344907761a69 WatchSource:0}: Error finding container 0c7c2d727144447413fcdc43158a03e7483d9ff25ac9cf387661344907761a69: Status 404 returned error can't find the container with id 0c7c2d727144447413fcdc43158a03e7483d9ff25ac9cf387661344907761a69
Mar 13 13:10:46.324412 master-0 kubenswrapper[28149]: I0313 13:10:46.319760 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3e202aeb-6913-4506-ba76-63feb8748d60-config\") pod \"dnsmasq-dns-dc5fdb9b9-7mhs2\" (UID: \"3e202aeb-6913-4506-ba76-63feb8748d60\") " pod="openstack/dnsmasq-dns-dc5fdb9b9-7mhs2"
Mar 13 13:10:46.324412 master-0 kubenswrapper[28149]: I0313 13:10:46.323542 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3e202aeb-6913-4506-ba76-63feb8748d60-ovsdbserver-nb\") pod \"dnsmasq-dns-dc5fdb9b9-7mhs2\" (UID: \"3e202aeb-6913-4506-ba76-63feb8748d60\") " pod="openstack/dnsmasq-dns-dc5fdb9b9-7mhs2"
Mar 13 13:10:46.324412 master-0 kubenswrapper[28149]: I0313 13:10:46.323873 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3e202aeb-6913-4506-ba76-63feb8748d60-config\") pod \"dnsmasq-dns-dc5fdb9b9-7mhs2\" (UID: \"3e202aeb-6913-4506-ba76-63feb8748d60\") " pod="openstack/dnsmasq-dns-dc5fdb9b9-7mhs2"
Mar 13 13:10:46.324412 master-0 kubenswrapper[28149]: I0313 13:10:46.324300 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3e202aeb-6913-4506-ba76-63feb8748d60-ovsdbserver-nb\") pod \"dnsmasq-dns-dc5fdb9b9-7mhs2\" (UID: \"3e202aeb-6913-4506-ba76-63feb8748d60\") " pod="openstack/dnsmasq-dns-dc5fdb9b9-7mhs2"
Mar 13 13:10:46.324412 master-0 kubenswrapper[28149]: I0313 13:10:46.324384 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3e202aeb-6913-4506-ba76-63feb8748d60-dns-svc\") pod \"dnsmasq-dns-dc5fdb9b9-7mhs2\" (UID: \"3e202aeb-6913-4506-ba76-63feb8748d60\") " pod="openstack/dnsmasq-dns-dc5fdb9b9-7mhs2"
Mar 13 13:10:46.324763 master-0 kubenswrapper[28149]: I0313 13:10:46.324670 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3e202aeb-6913-4506-ba76-63feb8748d60-ovsdbserver-sb\") pod \"dnsmasq-dns-dc5fdb9b9-7mhs2\" (UID: \"3e202aeb-6913-4506-ba76-63feb8748d60\") " pod="openstack/dnsmasq-dns-dc5fdb9b9-7mhs2"
Mar 13 13:10:46.325161 master-0 kubenswrapper[28149]: I0313 13:10:46.325019 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3e202aeb-6913-4506-ba76-63feb8748d60-dns-svc\") pod \"dnsmasq-dns-dc5fdb9b9-7mhs2\" (UID: \"3e202aeb-6913-4506-ba76-63feb8748d60\") " pod="openstack/dnsmasq-dns-dc5fdb9b9-7mhs2"
Mar 13 13:10:46.325161 master-0 kubenswrapper[28149]: I0313 13:10:46.325075 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j644f\" (UniqueName: \"kubernetes.io/projected/3e202aeb-6913-4506-ba76-63feb8748d60-kube-api-access-j644f\") pod \"dnsmasq-dns-dc5fdb9b9-7mhs2\" (UID: \"3e202aeb-6913-4506-ba76-63feb8748d60\") " pod="openstack/dnsmasq-dns-dc5fdb9b9-7mhs2"
Mar 13 13:10:46.325290 master-0 kubenswrapper[28149]: I0313 13:10:46.325175 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/3e202aeb-6913-4506-ba76-63feb8748d60-dns-swift-storage-0\") pod \"dnsmasq-dns-dc5fdb9b9-7mhs2\" (UID: \"3e202aeb-6913-4506-ba76-63feb8748d60\") " pod="openstack/dnsmasq-dns-dc5fdb9b9-7mhs2"
Mar 13 13:10:46.326016 master-0 kubenswrapper[28149]: I0313 13:10:46.325952 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3e202aeb-6913-4506-ba76-63feb8748d60-ovsdbserver-sb\") pod \"dnsmasq-dns-dc5fdb9b9-7mhs2\" (UID: \"3e202aeb-6913-4506-ba76-63feb8748d60\") " pod="openstack/dnsmasq-dns-dc5fdb9b9-7mhs2"
Mar 13 13:10:46.326393 master-0 kubenswrapper[28149]: I0313 13:10:46.326365 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/3e202aeb-6913-4506-ba76-63feb8748d60-dns-swift-storage-0\") pod \"dnsmasq-dns-dc5fdb9b9-7mhs2\" (UID: \"3e202aeb-6913-4506-ba76-63feb8748d60\") " pod="openstack/dnsmasq-dns-dc5fdb9b9-7mhs2"
Mar 13 13:10:46.353357 master-0 kubenswrapper[28149]: I0313 13:10:46.353229 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j644f\" (UniqueName: \"kubernetes.io/projected/3e202aeb-6913-4506-ba76-63feb8748d60-kube-api-access-j644f\") pod \"dnsmasq-dns-dc5fdb9b9-7mhs2\" (UID: \"3e202aeb-6913-4506-ba76-63feb8748d60\") " pod="openstack/dnsmasq-dns-dc5fdb9b9-7mhs2"
Mar 13 13:10:46.417912 master-0 kubenswrapper[28149]: I0313 13:10:46.417805 28149 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openstack/dnsmasq-dns-dc5fdb9b9-7mhs2" Mar 13 13:10:46.811174 master-0 kubenswrapper[28149]: I0313 13:10:46.806669 28149 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="74332268-d102-4a07-9298-c7cf2005cee5" path="/var/lib/kubelet/pods/74332268-d102-4a07-9298-c7cf2005cee5/volumes" Mar 13 13:10:47.152113 master-0 kubenswrapper[28149]: I0313 13:10:47.150575 28149 generic.go:334] "Generic (PLEG): container finished" podID="71996c77-565a-4c9a-b654-742f00c3095b" containerID="9a29f373d21dbc262351c413a4356bf73b8f4d099d6380876ffd1235c2620687" exitCode=0 Mar 13 13:10:47.152113 master-0 kubenswrapper[28149]: I0313 13:10:47.150672 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57cdddf645-cckjh" event={"ID":"71996c77-565a-4c9a-b654-742f00c3095b","Type":"ContainerDied","Data":"9a29f373d21dbc262351c413a4356bf73b8f4d099d6380876ffd1235c2620687"} Mar 13 13:10:47.152113 master-0 kubenswrapper[28149]: I0313 13:10:47.150706 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57cdddf645-cckjh" event={"ID":"71996c77-565a-4c9a-b654-742f00c3095b","Type":"ContainerStarted","Data":"d0e2e75d51a605a616d103297e202cce0560669e2f6136d90cfbece39ca8fc79"} Mar 13 13:10:47.167191 master-0 kubenswrapper[28149]: I0313 13:10:47.162249 28149 generic.go:334] "Generic (PLEG): container finished" podID="76e4472a-9fe8-452e-924c-7a88df7c1f7d" containerID="696d7903c9c0059b174435fe552f6bd3214637fac1fd82bae853912ed43df2ec" exitCode=0 Mar 13 13:10:47.167191 master-0 kubenswrapper[28149]: I0313 13:10:47.162346 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7c86f65b7c-hdnsw" event={"ID":"76e4472a-9fe8-452e-924c-7a88df7c1f7d","Type":"ContainerDied","Data":"696d7903c9c0059b174435fe552f6bd3214637fac1fd82bae853912ed43df2ec"} Mar 13 13:10:47.190160 master-0 kubenswrapper[28149]: I0313 13:10:47.189217 28149 generic.go:334] "Generic (PLEG): container finished" 
podID="2134b334-b48d-46c6-91b6-a824c323d789" containerID="f6d10a10829af79cc5c4569bc95ebcfe7e72c0e4729d998dce87daa879d324a0" exitCode=0 Mar 13 13:10:47.190160 master-0 kubenswrapper[28149]: I0313 13:10:47.189472 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-db-create-f5dvd" event={"ID":"2134b334-b48d-46c6-91b6-a824c323d789","Type":"ContainerDied","Data":"f6d10a10829af79cc5c4569bc95ebcfe7e72c0e4729d998dce87daa879d324a0"} Mar 13 13:10:47.223184 master-0 kubenswrapper[28149]: I0313 13:10:47.222806 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-wtgql" event={"ID":"9d77ebfb-6652-45f8-8bfb-fe1e4344c3a3","Type":"ContainerStarted","Data":"f3df2b72a4aed7ba300d62b5f02ffbf6412c3c256f695db7a20f0a771464bf8e"} Mar 13 13:10:47.289412 master-0 kubenswrapper[28149]: I0313 13:10:47.288998 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-c179-account-create-update-q66nk" event={"ID":"8a77bade-cffa-4d3e-998b-b60a1cabf4f7","Type":"ContainerStarted","Data":"0c7c2d727144447413fcdc43158a03e7483d9ff25ac9cf387661344907761a69"} Mar 13 13:10:47.306044 master-0 kubenswrapper[28149]: I0313 13:10:47.304541 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-kdkvp" event={"ID":"c4483640-14d3-42de-bad4-48fe97f66cad","Type":"ContainerStarted","Data":"0e024204c925f81da8f7c22fb0191b551d163019d22ddf6e0606276ae2b14bf3"} Mar 13 13:10:47.356334 master-0 kubenswrapper[28149]: W0313 13:10:47.344206 28149 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3e202aeb_6913_4506_ba76_63feb8748d60.slice/crio-f7b63933386cfdb779c47799d353ad208e69de7ec1064fb5f78f06cfef6511d0 WatchSource:0}: Error finding container f7b63933386cfdb779c47799d353ad208e69de7ec1064fb5f78f06cfef6511d0: Status 404 returned error can't find the container with id f7b63933386cfdb779c47799d353ad208e69de7ec1064fb5f78f06cfef6511d0 Mar 
13 13:10:47.420027 master-0 kubenswrapper[28149]: I0313 13:10:47.419971 28149 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-dc5fdb9b9-7mhs2"] Mar 13 13:10:47.492921 master-0 kubenswrapper[28149]: I0313 13:10:47.492845 28149 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-e6fbd-default-external-api-0"] Mar 13 13:10:47.498161 master-0 kubenswrapper[28149]: I0313 13:10:47.495229 28149 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-e6fbd-default-external-api-0" Mar 13 13:10:47.500208 master-0 kubenswrapper[28149]: I0313 13:10:47.498439 28149 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-e6fbd-default-external-config-data" Mar 13 13:10:47.500208 master-0 kubenswrapper[28149]: I0313 13:10:47.498642 28149 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-scripts" Mar 13 13:10:47.522749 master-0 kubenswrapper[28149]: I0313 13:10:47.522493 28149 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-e6fbd-default-external-api-0"] Mar 13 13:10:47.529562 master-0 kubenswrapper[28149]: I0313 13:10:47.528826 28149 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ironic-c179-account-create-update-q66nk" podStartSLOduration=5.52880169 podStartE2EDuration="5.52880169s" podCreationTimestamp="2026-03-13 13:10:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 13:10:47.355890642 +0000 UTC m=+1021.009355801" watchObservedRunningTime="2026-03-13 13:10:47.52880169 +0000 UTC m=+1021.182266849" Mar 13 13:10:47.553497 master-0 kubenswrapper[28149]: I0313 13:10:47.541759 28149 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-db-sync-kdkvp" podStartSLOduration=5.541736038 podStartE2EDuration="5.541736038s" podCreationTimestamp="2026-03-13 
13:10:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 13:10:47.38698455 +0000 UTC m=+1021.040449729" watchObservedRunningTime="2026-03-13 13:10:47.541736038 +0000 UTC m=+1021.195201207" Mar 13 13:10:47.771068 master-0 kubenswrapper[28149]: I0313 13:10:47.771027 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/55a16176-799c-4d89-bacd-018d4c6f3d5b-scripts\") pod \"glance-e6fbd-default-external-api-0\" (UID: \"55a16176-799c-4d89-bacd-018d4c6f3d5b\") " pod="openstack/glance-e6fbd-default-external-api-0" Mar 13 13:10:47.772286 master-0 kubenswrapper[28149]: I0313 13:10:47.772256 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/55a16176-799c-4d89-bacd-018d4c6f3d5b-combined-ca-bundle\") pod \"glance-e6fbd-default-external-api-0\" (UID: \"55a16176-799c-4d89-bacd-018d4c6f3d5b\") " pod="openstack/glance-e6fbd-default-external-api-0" Mar 13 13:10:47.772714 master-0 kubenswrapper[28149]: I0313 13:10:47.772691 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-12182b6b-d6bb-4e5f-ac3a-df190dba3645\" (UniqueName: \"kubernetes.io/csi/topolvm.io^46c3102e-7a0b-4e07-9a24-444142905798\") pod \"glance-e6fbd-default-external-api-0\" (UID: \"55a16176-799c-4d89-bacd-018d4c6f3d5b\") " pod="openstack/glance-e6fbd-default-external-api-0" Mar 13 13:10:47.772853 master-0 kubenswrapper[28149]: I0313 13:10:47.772838 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/55a16176-799c-4d89-bacd-018d4c6f3d5b-logs\") pod \"glance-e6fbd-default-external-api-0\" (UID: \"55a16176-799c-4d89-bacd-018d4c6f3d5b\") " 
pod="openstack/glance-e6fbd-default-external-api-0" Mar 13 13:10:47.772979 master-0 kubenswrapper[28149]: I0313 13:10:47.772963 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dvfht\" (UniqueName: \"kubernetes.io/projected/55a16176-799c-4d89-bacd-018d4c6f3d5b-kube-api-access-dvfht\") pod \"glance-e6fbd-default-external-api-0\" (UID: \"55a16176-799c-4d89-bacd-018d4c6f3d5b\") " pod="openstack/glance-e6fbd-default-external-api-0" Mar 13 13:10:47.773184 master-0 kubenswrapper[28149]: I0313 13:10:47.773171 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/55a16176-799c-4d89-bacd-018d4c6f3d5b-config-data\") pod \"glance-e6fbd-default-external-api-0\" (UID: \"55a16176-799c-4d89-bacd-018d4c6f3d5b\") " pod="openstack/glance-e6fbd-default-external-api-0" Mar 13 13:10:47.773609 master-0 kubenswrapper[28149]: I0313 13:10:47.773590 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/55a16176-799c-4d89-bacd-018d4c6f3d5b-httpd-run\") pod \"glance-e6fbd-default-external-api-0\" (UID: \"55a16176-799c-4d89-bacd-018d4c6f3d5b\") " pod="openstack/glance-e6fbd-default-external-api-0" Mar 13 13:10:47.831427 master-0 kubenswrapper[28149]: I0313 13:10:47.831272 28149 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-57cdddf645-cckjh" Mar 13 13:10:47.895300 master-0 kubenswrapper[28149]: I0313 13:10:47.895094 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/55a16176-799c-4d89-bacd-018d4c6f3d5b-config-data\") pod \"glance-e6fbd-default-external-api-0\" (UID: \"55a16176-799c-4d89-bacd-018d4c6f3d5b\") " pod="openstack/glance-e6fbd-default-external-api-0" Mar 13 13:10:47.895300 master-0 kubenswrapper[28149]: I0313 13:10:47.895245 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/55a16176-799c-4d89-bacd-018d4c6f3d5b-httpd-run\") pod \"glance-e6fbd-default-external-api-0\" (UID: \"55a16176-799c-4d89-bacd-018d4c6f3d5b\") " pod="openstack/glance-e6fbd-default-external-api-0" Mar 13 13:10:47.899260 master-0 kubenswrapper[28149]: I0313 13:10:47.899220 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/55a16176-799c-4d89-bacd-018d4c6f3d5b-scripts\") pod \"glance-e6fbd-default-external-api-0\" (UID: \"55a16176-799c-4d89-bacd-018d4c6f3d5b\") " pod="openstack/glance-e6fbd-default-external-api-0" Mar 13 13:10:47.900102 master-0 kubenswrapper[28149]: I0313 13:10:47.899286 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/55a16176-799c-4d89-bacd-018d4c6f3d5b-combined-ca-bundle\") pod \"glance-e6fbd-default-external-api-0\" (UID: \"55a16176-799c-4d89-bacd-018d4c6f3d5b\") " pod="openstack/glance-e6fbd-default-external-api-0" Mar 13 13:10:47.902985 master-0 kubenswrapper[28149]: I0313 13:10:47.899565 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-12182b6b-d6bb-4e5f-ac3a-df190dba3645\" (UniqueName: \"kubernetes.io/csi/topolvm.io^46c3102e-7a0b-4e07-9a24-444142905798\") pod 
\"glance-e6fbd-default-external-api-0\" (UID: \"55a16176-799c-4d89-bacd-018d4c6f3d5b\") " pod="openstack/glance-e6fbd-default-external-api-0" Mar 13 13:10:47.902985 master-0 kubenswrapper[28149]: I0313 13:10:47.902535 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/55a16176-799c-4d89-bacd-018d4c6f3d5b-logs\") pod \"glance-e6fbd-default-external-api-0\" (UID: \"55a16176-799c-4d89-bacd-018d4c6f3d5b\") " pod="openstack/glance-e6fbd-default-external-api-0" Mar 13 13:10:47.902985 master-0 kubenswrapper[28149]: I0313 13:10:47.902621 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dvfht\" (UniqueName: \"kubernetes.io/projected/55a16176-799c-4d89-bacd-018d4c6f3d5b-kube-api-access-dvfht\") pod \"glance-e6fbd-default-external-api-0\" (UID: \"55a16176-799c-4d89-bacd-018d4c6f3d5b\") " pod="openstack/glance-e6fbd-default-external-api-0" Mar 13 13:10:47.903291 master-0 kubenswrapper[28149]: I0313 13:10:47.901072 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/55a16176-799c-4d89-bacd-018d4c6f3d5b-httpd-run\") pod \"glance-e6fbd-default-external-api-0\" (UID: \"55a16176-799c-4d89-bacd-018d4c6f3d5b\") " pod="openstack/glance-e6fbd-default-external-api-0" Mar 13 13:10:47.905159 master-0 kubenswrapper[28149]: I0313 13:10:47.905100 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/55a16176-799c-4d89-bacd-018d4c6f3d5b-logs\") pod \"glance-e6fbd-default-external-api-0\" (UID: \"55a16176-799c-4d89-bacd-018d4c6f3d5b\") " pod="openstack/glance-e6fbd-default-external-api-0" Mar 13 13:10:47.909350 master-0 kubenswrapper[28149]: I0313 13:10:47.909309 28149 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Mar 13 13:10:47.909500 master-0 kubenswrapper[28149]: I0313 13:10:47.909368 28149 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-12182b6b-d6bb-4e5f-ac3a-df190dba3645\" (UniqueName: \"kubernetes.io/csi/topolvm.io^46c3102e-7a0b-4e07-9a24-444142905798\") pod \"glance-e6fbd-default-external-api-0\" (UID: \"55a16176-799c-4d89-bacd-018d4c6f3d5b\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/topolvm.io/dd1664ebbf7aebe13570b4d7d33b7a2c8fb2cd6894f8d3c518cd1e549d5c6ec6/globalmount\"" pod="openstack/glance-e6fbd-default-external-api-0" Mar 13 13:10:47.916364 master-0 kubenswrapper[28149]: I0313 13:10:47.916254 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/55a16176-799c-4d89-bacd-018d4c6f3d5b-config-data\") pod \"glance-e6fbd-default-external-api-0\" (UID: \"55a16176-799c-4d89-bacd-018d4c6f3d5b\") " pod="openstack/glance-e6fbd-default-external-api-0" Mar 13 13:10:47.929010 master-0 kubenswrapper[28149]: I0313 13:10:47.928953 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/55a16176-799c-4d89-bacd-018d4c6f3d5b-scripts\") pod \"glance-e6fbd-default-external-api-0\" (UID: \"55a16176-799c-4d89-bacd-018d4c6f3d5b\") " pod="openstack/glance-e6fbd-default-external-api-0" Mar 13 13:10:47.936904 master-0 kubenswrapper[28149]: I0313 13:10:47.936844 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/55a16176-799c-4d89-bacd-018d4c6f3d5b-combined-ca-bundle\") pod \"glance-e6fbd-default-external-api-0\" (UID: \"55a16176-799c-4d89-bacd-018d4c6f3d5b\") " pod="openstack/glance-e6fbd-default-external-api-0" Mar 13 13:10:47.973301 master-0 kubenswrapper[28149]: I0313 13:10:47.971209 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dvfht\" (UniqueName: 
\"kubernetes.io/projected/55a16176-799c-4d89-bacd-018d4c6f3d5b-kube-api-access-dvfht\") pod \"glance-e6fbd-default-external-api-0\" (UID: \"55a16176-799c-4d89-bacd-018d4c6f3d5b\") " pod="openstack/glance-e6fbd-default-external-api-0" Mar 13 13:10:47.981465 master-0 kubenswrapper[28149]: E0313 13:10:47.980716 28149 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8a77bade_cffa_4d3e_998b_b60a1cabf4f7.slice/crio-9a53a3049c78eff8b9ef0b79102c0a8a0feddc05ae925b1c9a75efd4dce4e238.scope\": RecentStats: unable to find data in memory cache]" Mar 13 13:10:48.005129 master-0 kubenswrapper[28149]: I0313 13:10:48.005060 28149 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n8b65\" (UniqueName: \"kubernetes.io/projected/71996c77-565a-4c9a-b654-742f00c3095b-kube-api-access-n8b65\") pod \"71996c77-565a-4c9a-b654-742f00c3095b\" (UID: \"71996c77-565a-4c9a-b654-742f00c3095b\") " Mar 13 13:10:48.015102 master-0 kubenswrapper[28149]: I0313 13:10:48.015037 28149 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-e6fbd-default-internal-api-0"] Mar 13 13:10:48.029034 master-0 kubenswrapper[28149]: I0313 13:10:48.024846 28149 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/71996c77-565a-4c9a-b654-742f00c3095b-config\") pod \"71996c77-565a-4c9a-b654-742f00c3095b\" (UID: \"71996c77-565a-4c9a-b654-742f00c3095b\") " Mar 13 13:10:48.029034 master-0 kubenswrapper[28149]: I0313 13:10:48.027240 28149 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/71996c77-565a-4c9a-b654-742f00c3095b-dns-svc\") pod \"71996c77-565a-4c9a-b654-742f00c3095b\" (UID: \"71996c77-565a-4c9a-b654-742f00c3095b\") " Mar 13 13:10:48.029034 master-0 kubenswrapper[28149]: I0313 
13:10:48.027388 28149 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/71996c77-565a-4c9a-b654-742f00c3095b-dns-swift-storage-0\") pod \"71996c77-565a-4c9a-b654-742f00c3095b\" (UID: \"71996c77-565a-4c9a-b654-742f00c3095b\") " Mar 13 13:10:48.029034 master-0 kubenswrapper[28149]: I0313 13:10:48.027552 28149 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/71996c77-565a-4c9a-b654-742f00c3095b-ovsdbserver-nb\") pod \"71996c77-565a-4c9a-b654-742f00c3095b\" (UID: \"71996c77-565a-4c9a-b654-742f00c3095b\") " Mar 13 13:10:48.029034 master-0 kubenswrapper[28149]: I0313 13:10:48.027643 28149 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/71996c77-565a-4c9a-b654-742f00c3095b-ovsdbserver-sb\") pod \"71996c77-565a-4c9a-b654-742f00c3095b\" (UID: \"71996c77-565a-4c9a-b654-742f00c3095b\") " Mar 13 13:10:48.038839 master-0 kubenswrapper[28149]: E0313 13:10:48.038791 28149 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="71996c77-565a-4c9a-b654-742f00c3095b" containerName="init" Mar 13 13:10:48.039440 master-0 kubenswrapper[28149]: I0313 13:10:48.039420 28149 state_mem.go:107] "Deleted CPUSet assignment" podUID="71996c77-565a-4c9a-b654-742f00c3095b" containerName="init" Mar 13 13:10:48.044293 master-0 kubenswrapper[28149]: I0313 13:10:48.043009 28149 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7c86f65b7c-hdnsw" Mar 13 13:10:48.045733 master-0 kubenswrapper[28149]: I0313 13:10:48.045228 28149 memory_manager.go:354] "RemoveStaleState removing state" podUID="71996c77-565a-4c9a-b654-742f00c3095b" containerName="init" Mar 13 13:10:48.048991 master-0 kubenswrapper[28149]: E0313 13:10:48.048957 28149 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="76e4472a-9fe8-452e-924c-7a88df7c1f7d" containerName="init" Mar 13 13:10:48.049151 master-0 kubenswrapper[28149]: I0313 13:10:48.049119 28149 state_mem.go:107] "Deleted CPUSet assignment" podUID="76e4472a-9fe8-452e-924c-7a88df7c1f7d" containerName="init" Mar 13 13:10:48.049621 master-0 kubenswrapper[28149]: I0313 13:10:48.049602 28149 memory_manager.go:354] "RemoveStaleState removing state" podUID="76e4472a-9fe8-452e-924c-7a88df7c1f7d" containerName="init" Mar 13 13:10:48.050654 master-0 kubenswrapper[28149]: I0313 13:10:48.050631 28149 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-e6fbd-default-internal-api-0"] Mar 13 13:10:48.050895 master-0 kubenswrapper[28149]: I0313 13:10:48.050827 28149 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-e6fbd-default-internal-api-0" Mar 13 13:10:48.058766 master-0 kubenswrapper[28149]: I0313 13:10:48.058662 28149 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-e6fbd-default-internal-config-data" Mar 13 13:10:48.125300 master-0 kubenswrapper[28149]: I0313 13:10:48.125217 28149 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/71996c77-565a-4c9a-b654-742f00c3095b-kube-api-access-n8b65" (OuterVolumeSpecName: "kube-api-access-n8b65") pod "71996c77-565a-4c9a-b654-742f00c3095b" (UID: "71996c77-565a-4c9a-b654-742f00c3095b"). InnerVolumeSpecName "kube-api-access-n8b65". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 13:10:48.132846 master-0 kubenswrapper[28149]: I0313 13:10:48.132784 28149 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n8b65\" (UniqueName: \"kubernetes.io/projected/71996c77-565a-4c9a-b654-742f00c3095b-kube-api-access-n8b65\") on node \"master-0\" DevicePath \"\"" Mar 13 13:10:48.210539 master-0 kubenswrapper[28149]: I0313 13:10:48.206170 28149 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/71996c77-565a-4c9a-b654-742f00c3095b-config" (OuterVolumeSpecName: "config") pod "71996c77-565a-4c9a-b654-742f00c3095b" (UID: "71996c77-565a-4c9a-b654-742f00c3095b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 13 13:10:48.215525 master-0 kubenswrapper[28149]: I0313 13:10:48.215454 28149 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/71996c77-565a-4c9a-b654-742f00c3095b-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "71996c77-565a-4c9a-b654-742f00c3095b" (UID: "71996c77-565a-4c9a-b654-742f00c3095b"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 13 13:10:48.233876 master-0 kubenswrapper[28149]: I0313 13:10:48.233816 28149 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/76e4472a-9fe8-452e-924c-7a88df7c1f7d-dns-swift-storage-0\") pod \"76e4472a-9fe8-452e-924c-7a88df7c1f7d\" (UID: \"76e4472a-9fe8-452e-924c-7a88df7c1f7d\") " Mar 13 13:10:48.234067 master-0 kubenswrapper[28149]: I0313 13:10:48.233981 28149 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/76e4472a-9fe8-452e-924c-7a88df7c1f7d-config\") pod \"76e4472a-9fe8-452e-924c-7a88df7c1f7d\" (UID: \"76e4472a-9fe8-452e-924c-7a88df7c1f7d\") " Mar 13 13:10:48.234067 master-0 kubenswrapper[28149]: I0313 13:10:48.234006 28149 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pfdmr\" (UniqueName: \"kubernetes.io/projected/76e4472a-9fe8-452e-924c-7a88df7c1f7d-kube-api-access-pfdmr\") pod \"76e4472a-9fe8-452e-924c-7a88df7c1f7d\" (UID: \"76e4472a-9fe8-452e-924c-7a88df7c1f7d\") " Mar 13 13:10:48.234067 master-0 kubenswrapper[28149]: I0313 13:10:48.234047 28149 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/76e4472a-9fe8-452e-924c-7a88df7c1f7d-ovsdbserver-sb\") pod \"76e4472a-9fe8-452e-924c-7a88df7c1f7d\" (UID: \"76e4472a-9fe8-452e-924c-7a88df7c1f7d\") " Mar 13 13:10:48.234222 master-0 kubenswrapper[28149]: I0313 13:10:48.234092 28149 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/76e4472a-9fe8-452e-924c-7a88df7c1f7d-ovsdbserver-nb\") pod \"76e4472a-9fe8-452e-924c-7a88df7c1f7d\" (UID: \"76e4472a-9fe8-452e-924c-7a88df7c1f7d\") " Mar 13 13:10:48.234222 master-0 kubenswrapper[28149]: I0313 13:10:48.234119 28149 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/76e4472a-9fe8-452e-924c-7a88df7c1f7d-dns-svc\") pod \"76e4472a-9fe8-452e-924c-7a88df7c1f7d\" (UID: \"76e4472a-9fe8-452e-924c-7a88df7c1f7d\") " Mar 13 13:10:48.234521 master-0 kubenswrapper[28149]: I0313 13:10:48.234479 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3a5fc7a8-dbd8-4455-a5a5-81dac6aeaa22-config-data\") pod \"glance-e6fbd-default-internal-api-0\" (UID: \"3a5fc7a8-dbd8-4455-a5a5-81dac6aeaa22\") " pod="openstack/glance-e6fbd-default-internal-api-0" Mar 13 13:10:48.234638 master-0 kubenswrapper[28149]: I0313 13:10:48.234521 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3a5fc7a8-dbd8-4455-a5a5-81dac6aeaa22-combined-ca-bundle\") pod \"glance-e6fbd-default-internal-api-0\" (UID: \"3a5fc7a8-dbd8-4455-a5a5-81dac6aeaa22\") " pod="openstack/glance-e6fbd-default-internal-api-0" Mar 13 13:10:48.234638 master-0 kubenswrapper[28149]: I0313 13:10:48.234593 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/3a5fc7a8-dbd8-4455-a5a5-81dac6aeaa22-httpd-run\") pod \"glance-e6fbd-default-internal-api-0\" (UID: \"3a5fc7a8-dbd8-4455-a5a5-81dac6aeaa22\") " pod="openstack/glance-e6fbd-default-internal-api-0" Mar 13 13:10:48.234724 master-0 kubenswrapper[28149]: I0313 13:10:48.234635 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m228b\" (UniqueName: \"kubernetes.io/projected/3a5fc7a8-dbd8-4455-a5a5-81dac6aeaa22-kube-api-access-m228b\") pod \"glance-e6fbd-default-internal-api-0\" (UID: \"3a5fc7a8-dbd8-4455-a5a5-81dac6aeaa22\") " 
pod="openstack/glance-e6fbd-default-internal-api-0"
Mar 13 13:10:48.234724 master-0 kubenswrapper[28149]: I0313 13:10:48.234707 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3a5fc7a8-dbd8-4455-a5a5-81dac6aeaa22-logs\") pod \"glance-e6fbd-default-internal-api-0\" (UID: \"3a5fc7a8-dbd8-4455-a5a5-81dac6aeaa22\") " pod="openstack/glance-e6fbd-default-internal-api-0"
Mar 13 13:10:48.235179 master-0 kubenswrapper[28149]: I0313 13:10:48.234740 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-fbfb8b97-dcb8-43d8-a7ca-10f6eee24ec3\" (UniqueName: \"kubernetes.io/csi/topolvm.io^eb822992-ec5e-49c3-b53d-b596568ce401\") pod \"glance-e6fbd-default-internal-api-0\" (UID: \"3a5fc7a8-dbd8-4455-a5a5-81dac6aeaa22\") " pod="openstack/glance-e6fbd-default-internal-api-0"
Mar 13 13:10:48.235179 master-0 kubenswrapper[28149]: I0313 13:10:48.234762 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3a5fc7a8-dbd8-4455-a5a5-81dac6aeaa22-scripts\") pod \"glance-e6fbd-default-internal-api-0\" (UID: \"3a5fc7a8-dbd8-4455-a5a5-81dac6aeaa22\") " pod="openstack/glance-e6fbd-default-internal-api-0"
Mar 13 13:10:48.235179 master-0 kubenswrapper[28149]: I0313 13:10:48.234923 28149 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/71996c77-565a-4c9a-b654-742f00c3095b-config\") on node \"master-0\" DevicePath \"\""
Mar 13 13:10:48.235179 master-0 kubenswrapper[28149]: I0313 13:10:48.234942 28149 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/71996c77-565a-4c9a-b654-742f00c3095b-ovsdbserver-nb\") on node \"master-0\" DevicePath \"\""
Mar 13 13:10:48.238804 master-0 kubenswrapper[28149]: I0313 13:10:48.238717 28149 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/71996c77-565a-4c9a-b654-742f00c3095b-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "71996c77-565a-4c9a-b654-742f00c3095b" (UID: "71996c77-565a-4c9a-b654-742f00c3095b"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 13 13:10:48.250324 master-0 kubenswrapper[28149]: I0313 13:10:48.250268 28149 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/71996c77-565a-4c9a-b654-742f00c3095b-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "71996c77-565a-4c9a-b654-742f00c3095b" (UID: "71996c77-565a-4c9a-b654-742f00c3095b"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 13 13:10:48.250556 master-0 kubenswrapper[28149]: I0313 13:10:48.250459 28149 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/76e4472a-9fe8-452e-924c-7a88df7c1f7d-kube-api-access-pfdmr" (OuterVolumeSpecName: "kube-api-access-pfdmr") pod "76e4472a-9fe8-452e-924c-7a88df7c1f7d" (UID: "76e4472a-9fe8-452e-924c-7a88df7c1f7d"). InnerVolumeSpecName "kube-api-access-pfdmr". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 13 13:10:48.268389 master-0 kubenswrapper[28149]: I0313 13:10:48.268270 28149 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/71996c77-565a-4c9a-b654-742f00c3095b-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "71996c77-565a-4c9a-b654-742f00c3095b" (UID: "71996c77-565a-4c9a-b654-742f00c3095b"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 13 13:10:48.283704 master-0 kubenswrapper[28149]: I0313 13:10:48.283639 28149 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/76e4472a-9fe8-452e-924c-7a88df7c1f7d-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "76e4472a-9fe8-452e-924c-7a88df7c1f7d" (UID: "76e4472a-9fe8-452e-924c-7a88df7c1f7d"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 13 13:10:48.287779 master-0 kubenswrapper[28149]: I0313 13:10:48.287704 28149 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/76e4472a-9fe8-452e-924c-7a88df7c1f7d-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "76e4472a-9fe8-452e-924c-7a88df7c1f7d" (UID: "76e4472a-9fe8-452e-924c-7a88df7c1f7d"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 13 13:10:48.301837 master-0 kubenswrapper[28149]: I0313 13:10:48.301769 28149 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/76e4472a-9fe8-452e-924c-7a88df7c1f7d-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "76e4472a-9fe8-452e-924c-7a88df7c1f7d" (UID: "76e4472a-9fe8-452e-924c-7a88df7c1f7d"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 13 13:10:48.317496 master-0 kubenswrapper[28149]: I0313 13:10:48.317437 28149 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/76e4472a-9fe8-452e-924c-7a88df7c1f7d-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "76e4472a-9fe8-452e-924c-7a88df7c1f7d" (UID: "76e4472a-9fe8-452e-924c-7a88df7c1f7d"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 13 13:10:48.317496 master-0 kubenswrapper[28149]: I0313 13:10:48.317456 28149 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/76e4472a-9fe8-452e-924c-7a88df7c1f7d-config" (OuterVolumeSpecName: "config") pod "76e4472a-9fe8-452e-924c-7a88df7c1f7d" (UID: "76e4472a-9fe8-452e-924c-7a88df7c1f7d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 13 13:10:48.324572 master-0 kubenswrapper[28149]: I0313 13:10:48.324515 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7c86f65b7c-hdnsw" event={"ID":"76e4472a-9fe8-452e-924c-7a88df7c1f7d","Type":"ContainerDied","Data":"9b8eadf043a31b04c6595cd04896615c57c6020b5212752085cc5d75f67c81c7"}
Mar 13 13:10:48.324762 master-0 kubenswrapper[28149]: I0313 13:10:48.324587 28149 scope.go:117] "RemoveContainer" containerID="696d7903c9c0059b174435fe552f6bd3214637fac1fd82bae853912ed43df2ec"
Mar 13 13:10:48.324762 master-0 kubenswrapper[28149]: I0313 13:10:48.324587 28149 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7c86f65b7c-hdnsw"
Mar 13 13:10:48.327794 master-0 kubenswrapper[28149]: I0313 13:10:48.327755 28149 generic.go:334] "Generic (PLEG): container finished" podID="3e202aeb-6913-4506-ba76-63feb8748d60" containerID="8b064974b1c0d3e17c6e1220ecf80ddcf5ddf54886bb12d75941a2f5ecc0cd1d" exitCode=0
Mar 13 13:10:48.327905 master-0 kubenswrapper[28149]: I0313 13:10:48.327843 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-dc5fdb9b9-7mhs2" event={"ID":"3e202aeb-6913-4506-ba76-63feb8748d60","Type":"ContainerDied","Data":"8b064974b1c0d3e17c6e1220ecf80ddcf5ddf54886bb12d75941a2f5ecc0cd1d"}
Mar 13 13:10:48.327905 master-0 kubenswrapper[28149]: I0313 13:10:48.327878 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-dc5fdb9b9-7mhs2" event={"ID":"3e202aeb-6913-4506-ba76-63feb8748d60","Type":"ContainerStarted","Data":"f7b63933386cfdb779c47799d353ad208e69de7ec1064fb5f78f06cfef6511d0"}
Mar 13 13:10:48.330995 master-0 kubenswrapper[28149]: I0313 13:10:48.330961 28149 generic.go:334] "Generic (PLEG): container finished" podID="8a77bade-cffa-4d3e-998b-b60a1cabf4f7" containerID="9a53a3049c78eff8b9ef0b79102c0a8a0feddc05ae925b1c9a75efd4dce4e238" exitCode=0
Mar 13 13:10:48.331102 master-0 kubenswrapper[28149]: I0313 13:10:48.331033 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-c179-account-create-update-q66nk" event={"ID":"8a77bade-cffa-4d3e-998b-b60a1cabf4f7","Type":"ContainerDied","Data":"9a53a3049c78eff8b9ef0b79102c0a8a0feddc05ae925b1c9a75efd4dce4e238"}
Mar 13 13:10:48.334540 master-0 kubenswrapper[28149]: I0313 13:10:48.334495 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57cdddf645-cckjh" event={"ID":"71996c77-565a-4c9a-b654-742f00c3095b","Type":"ContainerDied","Data":"d0e2e75d51a605a616d103297e202cce0560669e2f6136d90cfbece39ca8fc79"}
Mar 13 13:10:48.334725 master-0 kubenswrapper[28149]: I0313 13:10:48.334623 28149 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57cdddf645-cckjh"
Mar 13 13:10:48.344451 master-0 kubenswrapper[28149]: I0313 13:10:48.341476 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-fbfb8b97-dcb8-43d8-a7ca-10f6eee24ec3\" (UniqueName: \"kubernetes.io/csi/topolvm.io^eb822992-ec5e-49c3-b53d-b596568ce401\") pod \"glance-e6fbd-default-internal-api-0\" (UID: \"3a5fc7a8-dbd8-4455-a5a5-81dac6aeaa22\") " pod="openstack/glance-e6fbd-default-internal-api-0"
Mar 13 13:10:48.344451 master-0 kubenswrapper[28149]: I0313 13:10:48.342149 28149 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice...
Mar 13 13:10:48.344451 master-0 kubenswrapper[28149]: I0313 13:10:48.342194 28149 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-fbfb8b97-dcb8-43d8-a7ca-10f6eee24ec3\" (UniqueName: \"kubernetes.io/csi/topolvm.io^eb822992-ec5e-49c3-b53d-b596568ce401\") pod \"glance-e6fbd-default-internal-api-0\" (UID: \"3a5fc7a8-dbd8-4455-a5a5-81dac6aeaa22\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/topolvm.io/820c05b4f9c429c0b1c354ead4f7cbf32abe82e5746431ea131598f6d233206f/globalmount\"" pod="openstack/glance-e6fbd-default-internal-api-0"
Mar 13 13:10:48.344451 master-0 kubenswrapper[28149]: I0313 13:10:48.342461 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3a5fc7a8-dbd8-4455-a5a5-81dac6aeaa22-scripts\") pod \"glance-e6fbd-default-internal-api-0\" (UID: \"3a5fc7a8-dbd8-4455-a5a5-81dac6aeaa22\") " pod="openstack/glance-e6fbd-default-internal-api-0"
Mar 13 13:10:48.344451 master-0 kubenswrapper[28149]: I0313 13:10:48.343091 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3a5fc7a8-dbd8-4455-a5a5-81dac6aeaa22-config-data\") pod \"glance-e6fbd-default-internal-api-0\" (UID: \"3a5fc7a8-dbd8-4455-a5a5-81dac6aeaa22\") " pod="openstack/glance-e6fbd-default-internal-api-0"
Mar 13 13:10:48.344451 master-0 kubenswrapper[28149]: I0313 13:10:48.343177 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3a5fc7a8-dbd8-4455-a5a5-81dac6aeaa22-combined-ca-bundle\") pod \"glance-e6fbd-default-internal-api-0\" (UID: \"3a5fc7a8-dbd8-4455-a5a5-81dac6aeaa22\") " pod="openstack/glance-e6fbd-default-internal-api-0"
Mar 13 13:10:48.344451 master-0 kubenswrapper[28149]: I0313 13:10:48.343391 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/3a5fc7a8-dbd8-4455-a5a5-81dac6aeaa22-httpd-run\") pod \"glance-e6fbd-default-internal-api-0\" (UID: \"3a5fc7a8-dbd8-4455-a5a5-81dac6aeaa22\") " pod="openstack/glance-e6fbd-default-internal-api-0"
Mar 13 13:10:48.344451 master-0 kubenswrapper[28149]: I0313 13:10:48.343539 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m228b\" (UniqueName: \"kubernetes.io/projected/3a5fc7a8-dbd8-4455-a5a5-81dac6aeaa22-kube-api-access-m228b\") pod \"glance-e6fbd-default-internal-api-0\" (UID: \"3a5fc7a8-dbd8-4455-a5a5-81dac6aeaa22\") " pod="openstack/glance-e6fbd-default-internal-api-0"
Mar 13 13:10:48.344451 master-0 kubenswrapper[28149]: I0313 13:10:48.343712 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3a5fc7a8-dbd8-4455-a5a5-81dac6aeaa22-logs\") pod \"glance-e6fbd-default-internal-api-0\" (UID: \"3a5fc7a8-dbd8-4455-a5a5-81dac6aeaa22\") " pod="openstack/glance-e6fbd-default-internal-api-0"
Mar 13 13:10:48.344451 master-0 kubenswrapper[28149]: I0313 13:10:48.343837 28149 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/71996c77-565a-4c9a-b654-742f00c3095b-ovsdbserver-sb\") on node \"master-0\" DevicePath \"\""
Mar 13 13:10:48.344451 master-0 kubenswrapper[28149]: I0313 13:10:48.343852 28149 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/71996c77-565a-4c9a-b654-742f00c3095b-dns-svc\") on node \"master-0\" DevicePath \"\""
Mar 13 13:10:48.344451 master-0 kubenswrapper[28149]: I0313 13:10:48.343862 28149 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/76e4472a-9fe8-452e-924c-7a88df7c1f7d-dns-swift-storage-0\") on node \"master-0\" DevicePath \"\""
Mar 13 13:10:48.344451 master-0 kubenswrapper[28149]: I0313 13:10:48.343873 28149 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/71996c77-565a-4c9a-b654-742f00c3095b-dns-swift-storage-0\") on node \"master-0\" DevicePath \"\""
Mar 13 13:10:48.344451 master-0 kubenswrapper[28149]: I0313 13:10:48.343884 28149 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/76e4472a-9fe8-452e-924c-7a88df7c1f7d-config\") on node \"master-0\" DevicePath \"\""
Mar 13 13:10:48.344451 master-0 kubenswrapper[28149]: I0313 13:10:48.343894 28149 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pfdmr\" (UniqueName: \"kubernetes.io/projected/76e4472a-9fe8-452e-924c-7a88df7c1f7d-kube-api-access-pfdmr\") on node \"master-0\" DevicePath \"\""
Mar 13 13:10:48.344451 master-0 kubenswrapper[28149]: I0313 13:10:48.343904 28149 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/76e4472a-9fe8-452e-924c-7a88df7c1f7d-ovsdbserver-sb\") on node \"master-0\" DevicePath \"\""
Mar 13 13:10:48.345986 master-0 kubenswrapper[28149]: I0313 13:10:48.343912 28149 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/76e4472a-9fe8-452e-924c-7a88df7c1f7d-ovsdbserver-nb\") on node \"master-0\" DevicePath \"\""
Mar 13 13:10:48.345986 master-0 kubenswrapper[28149]: I0313 13:10:48.345185 28149 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/76e4472a-9fe8-452e-924c-7a88df7c1f7d-dns-svc\") on node \"master-0\" DevicePath \"\""
Mar 13 13:10:48.345986 master-0 kubenswrapper[28149]: I0313 13:10:48.344905 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/3a5fc7a8-dbd8-4455-a5a5-81dac6aeaa22-httpd-run\") pod \"glance-e6fbd-default-internal-api-0\" (UID: \"3a5fc7a8-dbd8-4455-a5a5-81dac6aeaa22\") " pod="openstack/glance-e6fbd-default-internal-api-0"
Mar 13 13:10:48.345986 master-0 kubenswrapper[28149]: I0313 13:10:48.345584 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3a5fc7a8-dbd8-4455-a5a5-81dac6aeaa22-logs\") pod \"glance-e6fbd-default-internal-api-0\" (UID: \"3a5fc7a8-dbd8-4455-a5a5-81dac6aeaa22\") " pod="openstack/glance-e6fbd-default-internal-api-0"
Mar 13 13:10:48.349936 master-0 kubenswrapper[28149]: I0313 13:10:48.349493 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3a5fc7a8-dbd8-4455-a5a5-81dac6aeaa22-config-data\") pod \"glance-e6fbd-default-internal-api-0\" (UID: \"3a5fc7a8-dbd8-4455-a5a5-81dac6aeaa22\") " pod="openstack/glance-e6fbd-default-internal-api-0"
Mar 13 13:10:48.354649 master-0 kubenswrapper[28149]: I0313 13:10:48.354587 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3a5fc7a8-dbd8-4455-a5a5-81dac6aeaa22-scripts\") pod \"glance-e6fbd-default-internal-api-0\" (UID: \"3a5fc7a8-dbd8-4455-a5a5-81dac6aeaa22\") " pod="openstack/glance-e6fbd-default-internal-api-0"
Mar 13 13:10:48.362073 master-0 kubenswrapper[28149]: I0313 13:10:48.361983 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3a5fc7a8-dbd8-4455-a5a5-81dac6aeaa22-combined-ca-bundle\") pod \"glance-e6fbd-default-internal-api-0\" (UID: \"3a5fc7a8-dbd8-4455-a5a5-81dac6aeaa22\") " pod="openstack/glance-e6fbd-default-internal-api-0"
Mar 13 13:10:48.424891 master-0 kubenswrapper[28149]: I0313 13:10:48.417643 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m228b\" (UniqueName: \"kubernetes.io/projected/3a5fc7a8-dbd8-4455-a5a5-81dac6aeaa22-kube-api-access-m228b\") pod \"glance-e6fbd-default-internal-api-0\" (UID: \"3a5fc7a8-dbd8-4455-a5a5-81dac6aeaa22\") " pod="openstack/glance-e6fbd-default-internal-api-0"
Mar 13 13:10:48.439295 master-0 kubenswrapper[28149]: I0313 13:10:48.439256 28149 scope.go:117] "RemoveContainer" containerID="9a29f373d21dbc262351c413a4356bf73b8f4d099d6380876ffd1235c2620687"
Mar 13 13:10:48.683757 master-0 kubenswrapper[28149]: I0313 13:10:48.683584 28149 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7c86f65b7c-hdnsw"]
Mar 13 13:10:48.872472 master-0 kubenswrapper[28149]: I0313 13:10:48.872432 28149 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7c86f65b7c-hdnsw"]
Mar 13 13:10:48.956905 master-0 kubenswrapper[28149]: I0313 13:10:48.945809 28149 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57cdddf645-cckjh"]
Mar 13 13:10:48.957920 master-0 kubenswrapper[28149]: I0313 13:10:48.957850 28149 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-57cdddf645-cckjh"]
Mar 13 13:10:49.061544 master-0 kubenswrapper[28149]: I0313 13:10:49.056193 28149 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-e6fbd-default-external-api-0"]
Mar 13 13:10:49.061544 master-0 kubenswrapper[28149]: E0313 13:10:49.059119 28149 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[glance], unattached volumes=[], failed to process volumes=[]: context canceled" pod="openstack/glance-e6fbd-default-external-api-0" podUID="55a16176-799c-4d89-bacd-018d4c6f3d5b"
Mar 13 13:10:49.097169 master-0 kubenswrapper[28149]: I0313 13:10:49.091152 28149 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-e6fbd-default-internal-api-0"]
Mar 13 13:10:49.097169 master-0 kubenswrapper[28149]: E0313 13:10:49.092258 28149 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[glance], unattached volumes=[], failed to process volumes=[]: context canceled" pod="openstack/glance-e6fbd-default-internal-api-0" podUID="3a5fc7a8-dbd8-4455-a5a5-81dac6aeaa22"
Mar 13 13:10:49.184847 master-0 kubenswrapper[28149]: I0313 13:10:49.184681 28149 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ironic-db-create-f5dvd"
Mar 13 13:10:49.521662 master-0 kubenswrapper[28149]: I0313 13:10:49.521593 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-db-create-f5dvd" event={"ID":"2134b334-b48d-46c6-91b6-a824c323d789","Type":"ContainerDied","Data":"5ff049d719f6398f0a653a55db7cbc234617664ede84f23f0f2a70bcbe02248b"}
Mar 13 13:10:49.521662 master-0 kubenswrapper[28149]: I0313 13:10:49.521661 28149 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5ff049d719f6398f0a653a55db7cbc234617664ede84f23f0f2a70bcbe02248b"
Mar 13 13:10:49.523298 master-0 kubenswrapper[28149]: I0313 13:10:49.521743 28149 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ironic-db-create-f5dvd"
Mar 13 13:10:49.551052 master-0 kubenswrapper[28149]: I0313 13:10:49.550012 28149 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-e6fbd-default-internal-api-0"
Mar 13 13:10:49.551839 master-0 kubenswrapper[28149]: I0313 13:10:49.551388 28149 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-e6fbd-default-external-api-0"
Mar 13 13:10:49.572577 master-0 kubenswrapper[28149]: I0313 13:10:49.572259 28149 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2134b334-b48d-46c6-91b6-a824c323d789-operator-scripts\") pod \"2134b334-b48d-46c6-91b6-a824c323d789\" (UID: \"2134b334-b48d-46c6-91b6-a824c323d789\") "
Mar 13 13:10:49.573953 master-0 kubenswrapper[28149]: I0313 13:10:49.572947 28149 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bvlpq\" (UniqueName: \"kubernetes.io/projected/2134b334-b48d-46c6-91b6-a824c323d789-kube-api-access-bvlpq\") pod \"2134b334-b48d-46c6-91b6-a824c323d789\" (UID: \"2134b334-b48d-46c6-91b6-a824c323d789\") "
Mar 13 13:10:49.573953 master-0 kubenswrapper[28149]: I0313 13:10:49.573451 28149 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2134b334-b48d-46c6-91b6-a824c323d789-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "2134b334-b48d-46c6-91b6-a824c323d789" (UID: "2134b334-b48d-46c6-91b6-a824c323d789"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 13 13:10:49.575272 master-0 kubenswrapper[28149]: I0313 13:10:49.574659 28149 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2134b334-b48d-46c6-91b6-a824c323d789-operator-scripts\") on node \"master-0\" DevicePath \"\""
Mar 13 13:10:49.588167 master-0 kubenswrapper[28149]: I0313 13:10:49.583303 28149 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2134b334-b48d-46c6-91b6-a824c323d789-kube-api-access-bvlpq" (OuterVolumeSpecName: "kube-api-access-bvlpq") pod "2134b334-b48d-46c6-91b6-a824c323d789" (UID: "2134b334-b48d-46c6-91b6-a824c323d789"). InnerVolumeSpecName "kube-api-access-bvlpq". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 13 13:10:49.588167 master-0 kubenswrapper[28149]: I0313 13:10:49.587555 28149 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-e6fbd-default-internal-api-0"
Mar 13 13:10:49.655496 master-0 kubenswrapper[28149]: I0313 13:10:49.649885 28149 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-e6fbd-default-external-api-0"
Mar 13 13:10:49.678372 master-0 kubenswrapper[28149]: I0313 13:10:49.678327 28149 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3a5fc7a8-dbd8-4455-a5a5-81dac6aeaa22-config-data\") pod \"3a5fc7a8-dbd8-4455-a5a5-81dac6aeaa22\" (UID: \"3a5fc7a8-dbd8-4455-a5a5-81dac6aeaa22\") "
Mar 13 13:10:49.678600 master-0 kubenswrapper[28149]: I0313 13:10:49.678403 28149 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/55a16176-799c-4d89-bacd-018d4c6f3d5b-config-data\") pod \"55a16176-799c-4d89-bacd-018d4c6f3d5b\" (UID: \"55a16176-799c-4d89-bacd-018d4c6f3d5b\") "
Mar 13 13:10:49.678600 master-0 kubenswrapper[28149]: I0313 13:10:49.678451 28149 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/55a16176-799c-4d89-bacd-018d4c6f3d5b-scripts\") pod \"55a16176-799c-4d89-bacd-018d4c6f3d5b\" (UID: \"55a16176-799c-4d89-bacd-018d4c6f3d5b\") "
Mar 13 13:10:49.678600 master-0 kubenswrapper[28149]: I0313 13:10:49.678472 28149 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m228b\" (UniqueName: \"kubernetes.io/projected/3a5fc7a8-dbd8-4455-a5a5-81dac6aeaa22-kube-api-access-m228b\") pod \"3a5fc7a8-dbd8-4455-a5a5-81dac6aeaa22\" (UID: \"3a5fc7a8-dbd8-4455-a5a5-81dac6aeaa22\") "
Mar 13 13:10:49.678600 master-0 kubenswrapper[28149]: I0313 13:10:49.678512 28149 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/55a16176-799c-4d89-bacd-018d4c6f3d5b-httpd-run\") pod \"55a16176-799c-4d89-bacd-018d4c6f3d5b\" (UID: \"55a16176-799c-4d89-bacd-018d4c6f3d5b\") "
Mar 13 13:10:49.678600 master-0 kubenswrapper[28149]: I0313 13:10:49.678541 28149 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/55a16176-799c-4d89-bacd-018d4c6f3d5b-combined-ca-bundle\") pod \"55a16176-799c-4d89-bacd-018d4c6f3d5b\" (UID: \"55a16176-799c-4d89-bacd-018d4c6f3d5b\") "
Mar 13 13:10:49.678600 master-0 kubenswrapper[28149]: I0313 13:10:49.678565 28149 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3a5fc7a8-dbd8-4455-a5a5-81dac6aeaa22-scripts\") pod \"3a5fc7a8-dbd8-4455-a5a5-81dac6aeaa22\" (UID: \"3a5fc7a8-dbd8-4455-a5a5-81dac6aeaa22\") "
Mar 13 13:10:49.678782 master-0 kubenswrapper[28149]: I0313 13:10:49.678602 28149 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3a5fc7a8-dbd8-4455-a5a5-81dac6aeaa22-logs\") pod \"3a5fc7a8-dbd8-4455-a5a5-81dac6aeaa22\" (UID: \"3a5fc7a8-dbd8-4455-a5a5-81dac6aeaa22\") "
Mar 13 13:10:49.678782 master-0 kubenswrapper[28149]: I0313 13:10:49.678644 28149 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/3a5fc7a8-dbd8-4455-a5a5-81dac6aeaa22-httpd-run\") pod \"3a5fc7a8-dbd8-4455-a5a5-81dac6aeaa22\" (UID: \"3a5fc7a8-dbd8-4455-a5a5-81dac6aeaa22\") "
Mar 13 13:10:49.678782 master-0 kubenswrapper[28149]: I0313 13:10:49.678714 28149 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/55a16176-799c-4d89-bacd-018d4c6f3d5b-logs\") pod \"55a16176-799c-4d89-bacd-018d4c6f3d5b\" (UID: \"55a16176-799c-4d89-bacd-018d4c6f3d5b\") "
Mar 13 13:10:49.678887 master-0 kubenswrapper[28149]: I0313 13:10:49.678823 28149 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dvfht\" (UniqueName: \"kubernetes.io/projected/55a16176-799c-4d89-bacd-018d4c6f3d5b-kube-api-access-dvfht\") pod \"55a16176-799c-4d89-bacd-018d4c6f3d5b\" (UID: \"55a16176-799c-4d89-bacd-018d4c6f3d5b\") "
Mar 13 13:10:49.679465 master-0 kubenswrapper[28149]: I0313 13:10:49.679415 28149 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/55a16176-799c-4d89-bacd-018d4c6f3d5b-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "55a16176-799c-4d89-bacd-018d4c6f3d5b" (UID: "55a16176-799c-4d89-bacd-018d4c6f3d5b"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 13 13:10:49.679549 master-0 kubenswrapper[28149]: I0313 13:10:49.679490 28149 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bvlpq\" (UniqueName: \"kubernetes.io/projected/2134b334-b48d-46c6-91b6-a824c323d789-kube-api-access-bvlpq\") on node \"master-0\" DevicePath \"\""
Mar 13 13:10:49.681688 master-0 kubenswrapper[28149]: I0313 13:10:49.681616 28149 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3a5fc7a8-dbd8-4455-a5a5-81dac6aeaa22-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "3a5fc7a8-dbd8-4455-a5a5-81dac6aeaa22" (UID: "3a5fc7a8-dbd8-4455-a5a5-81dac6aeaa22"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 13 13:10:49.682194 master-0 kubenswrapper[28149]: I0313 13:10:49.682166 28149 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/55a16176-799c-4d89-bacd-018d4c6f3d5b-logs" (OuterVolumeSpecName: "logs") pod "55a16176-799c-4d89-bacd-018d4c6f3d5b" (UID: "55a16176-799c-4d89-bacd-018d4c6f3d5b"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 13 13:10:49.683663 master-0 kubenswrapper[28149]: I0313 13:10:49.683606 28149 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3a5fc7a8-dbd8-4455-a5a5-81dac6aeaa22-config-data" (OuterVolumeSpecName: "config-data") pod "3a5fc7a8-dbd8-4455-a5a5-81dac6aeaa22" (UID: "3a5fc7a8-dbd8-4455-a5a5-81dac6aeaa22"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 13 13:10:49.684995 master-0 kubenswrapper[28149]: I0313 13:10:49.684954 28149 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3a5fc7a8-dbd8-4455-a5a5-81dac6aeaa22-logs" (OuterVolumeSpecName: "logs") pod "3a5fc7a8-dbd8-4455-a5a5-81dac6aeaa22" (UID: "3a5fc7a8-dbd8-4455-a5a5-81dac6aeaa22"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 13 13:10:49.686581 master-0 kubenswrapper[28149]: I0313 13:10:49.686273 28149 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/55a16176-799c-4d89-bacd-018d4c6f3d5b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "55a16176-799c-4d89-bacd-018d4c6f3d5b" (UID: "55a16176-799c-4d89-bacd-018d4c6f3d5b"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 13 13:10:49.687623 master-0 kubenswrapper[28149]: I0313 13:10:49.687589 28149 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3a5fc7a8-dbd8-4455-a5a5-81dac6aeaa22-kube-api-access-m228b" (OuterVolumeSpecName: "kube-api-access-m228b") pod "3a5fc7a8-dbd8-4455-a5a5-81dac6aeaa22" (UID: "3a5fc7a8-dbd8-4455-a5a5-81dac6aeaa22"). InnerVolumeSpecName "kube-api-access-m228b". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 13 13:10:49.688304 master-0 kubenswrapper[28149]: I0313 13:10:49.688066 28149 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/55a16176-799c-4d89-bacd-018d4c6f3d5b-scripts" (OuterVolumeSpecName: "scripts") pod "55a16176-799c-4d89-bacd-018d4c6f3d5b" (UID: "55a16176-799c-4d89-bacd-018d4c6f3d5b"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 13 13:10:49.688555 master-0 kubenswrapper[28149]: I0313 13:10:49.688486 28149 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/55a16176-799c-4d89-bacd-018d4c6f3d5b-kube-api-access-dvfht" (OuterVolumeSpecName: "kube-api-access-dvfht") pod "55a16176-799c-4d89-bacd-018d4c6f3d5b" (UID: "55a16176-799c-4d89-bacd-018d4c6f3d5b"). InnerVolumeSpecName "kube-api-access-dvfht". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 13 13:10:49.700356 master-0 kubenswrapper[28149]: I0313 13:10:49.700299 28149 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/55a16176-799c-4d89-bacd-018d4c6f3d5b-config-data" (OuterVolumeSpecName: "config-data") pod "55a16176-799c-4d89-bacd-018d4c6f3d5b" (UID: "55a16176-799c-4d89-bacd-018d4c6f3d5b"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 13 13:10:49.715108 master-0 kubenswrapper[28149]: I0313 13:10:49.715045 28149 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3a5fc7a8-dbd8-4455-a5a5-81dac6aeaa22-scripts" (OuterVolumeSpecName: "scripts") pod "3a5fc7a8-dbd8-4455-a5a5-81dac6aeaa22" (UID: "3a5fc7a8-dbd8-4455-a5a5-81dac6aeaa22"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 13 13:10:49.814922 master-0 kubenswrapper[28149]: I0313 13:10:49.814736 28149 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3a5fc7a8-dbd8-4455-a5a5-81dac6aeaa22-combined-ca-bundle\") pod \"3a5fc7a8-dbd8-4455-a5a5-81dac6aeaa22\" (UID: \"3a5fc7a8-dbd8-4455-a5a5-81dac6aeaa22\") "
Mar 13 13:10:49.816122 master-0 kubenswrapper[28149]: I0313 13:10:49.816096 28149 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/55a16176-799c-4d89-bacd-018d4c6f3d5b-logs\") on node \"master-0\" DevicePath \"\""
Mar 13 13:10:49.816286 master-0 kubenswrapper[28149]: I0313 13:10:49.816269 28149 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dvfht\" (UniqueName: \"kubernetes.io/projected/55a16176-799c-4d89-bacd-018d4c6f3d5b-kube-api-access-dvfht\") on node \"master-0\" DevicePath \"\""
Mar 13 13:10:49.816406 master-0 kubenswrapper[28149]: I0313 13:10:49.816389 28149 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3a5fc7a8-dbd8-4455-a5a5-81dac6aeaa22-config-data\") on node \"master-0\" DevicePath \"\""
Mar 13 13:10:49.816515 master-0 kubenswrapper[28149]: I0313 13:10:49.816500 28149 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/55a16176-799c-4d89-bacd-018d4c6f3d5b-config-data\") on node \"master-0\" DevicePath \"\""
Mar 13 13:10:49.816613 master-0 kubenswrapper[28149]: I0313 13:10:49.816597 28149 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m228b\" (UniqueName: \"kubernetes.io/projected/3a5fc7a8-dbd8-4455-a5a5-81dac6aeaa22-kube-api-access-m228b\") on node \"master-0\" DevicePath \"\""
Mar 13 13:10:49.816733 master-0 kubenswrapper[28149]: I0313 13:10:49.816715 28149 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/55a16176-799c-4d89-bacd-018d4c6f3d5b-scripts\") on node \"master-0\" DevicePath \"\""
Mar 13 13:10:49.816829 master-0 kubenswrapper[28149]: I0313 13:10:49.816813 28149 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/55a16176-799c-4d89-bacd-018d4c6f3d5b-httpd-run\") on node \"master-0\" DevicePath \"\""
Mar 13 13:10:49.816930 master-0 kubenswrapper[28149]: I0313 13:10:49.816914 28149 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/55a16176-799c-4d89-bacd-018d4c6f3d5b-combined-ca-bundle\") on node \"master-0\" DevicePath \"\""
Mar 13 13:10:49.817309 master-0 kubenswrapper[28149]: I0313 13:10:49.817293 28149 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3a5fc7a8-dbd8-4455-a5a5-81dac6aeaa22-scripts\") on node \"master-0\" DevicePath \"\""
Mar 13 13:10:49.817758 master-0 kubenswrapper[28149]: I0313 13:10:49.817741 28149 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3a5fc7a8-dbd8-4455-a5a5-81dac6aeaa22-logs\") on node \"master-0\" DevicePath \"\""
Mar 13 13:10:49.818084 master-0 kubenswrapper[28149]: I0313 13:10:49.818068 28149 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/3a5fc7a8-dbd8-4455-a5a5-81dac6aeaa22-httpd-run\") on node \"master-0\" DevicePath \"\""
Mar 13 13:10:49.839102 master-0 kubenswrapper[28149]: I0313 13:10:49.839011 28149 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3a5fc7a8-dbd8-4455-a5a5-81dac6aeaa22-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3a5fc7a8-dbd8-4455-a5a5-81dac6aeaa22" (UID: "3a5fc7a8-dbd8-4455-a5a5-81dac6aeaa22"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 13 13:10:49.920528 master-0 kubenswrapper[28149]: I0313 13:10:49.920466 28149 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3a5fc7a8-dbd8-4455-a5a5-81dac6aeaa22-combined-ca-bundle\") on node \"master-0\" DevicePath \"\""
Mar 13 13:10:50.098980 master-0 kubenswrapper[28149]: I0313 13:10:50.098415 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-12182b6b-d6bb-4e5f-ac3a-df190dba3645\" (UniqueName: \"kubernetes.io/csi/topolvm.io^46c3102e-7a0b-4e07-9a24-444142905798\") pod \"glance-e6fbd-default-external-api-0\" (UID: \"55a16176-799c-4d89-bacd-018d4c6f3d5b\") " pod="openstack/glance-e6fbd-default-external-api-0"
Mar 13 13:10:50.208338 master-0 kubenswrapper[28149]: I0313 13:10:50.208290 28149 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ironic-c179-account-create-update-q66nk"
Mar 13 13:10:50.229116 master-0 kubenswrapper[28149]: I0313 13:10:50.229052 28149 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8a77bade-cffa-4d3e-998b-b60a1cabf4f7-operator-scripts\") pod \"8a77bade-cffa-4d3e-998b-b60a1cabf4f7\" (UID: \"8a77bade-cffa-4d3e-998b-b60a1cabf4f7\") "
Mar 13 13:10:50.229431 master-0 kubenswrapper[28149]: I0313 13:10:50.229260 28149 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xkmp2\" (UniqueName: \"kubernetes.io/projected/8a77bade-cffa-4d3e-998b-b60a1cabf4f7-kube-api-access-xkmp2\") pod \"8a77bade-cffa-4d3e-998b-b60a1cabf4f7\" (UID: \"8a77bade-cffa-4d3e-998b-b60a1cabf4f7\") "
Mar 13 13:10:50.229726 master-0 kubenswrapper[28149]: I0313 13:10:50.229678 28149 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8a77bade-cffa-4d3e-998b-b60a1cabf4f7-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "8a77bade-cffa-4d3e-998b-b60a1cabf4f7" (UID: "8a77bade-cffa-4d3e-998b-b60a1cabf4f7"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 13 13:10:50.229879 master-0 kubenswrapper[28149]: I0313 13:10:50.229848 28149 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8a77bade-cffa-4d3e-998b-b60a1cabf4f7-operator-scripts\") on node \"master-0\" DevicePath \"\""
Mar 13 13:10:50.233159 master-0 kubenswrapper[28149]: I0313 13:10:50.233098 28149 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8a77bade-cffa-4d3e-998b-b60a1cabf4f7-kube-api-access-xkmp2" (OuterVolumeSpecName: "kube-api-access-xkmp2") pod "8a77bade-cffa-4d3e-998b-b60a1cabf4f7" (UID: "8a77bade-cffa-4d3e-998b-b60a1cabf4f7"). InnerVolumeSpecName "kube-api-access-xkmp2". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 13 13:10:50.332558 master-0 kubenswrapper[28149]: I0313 13:10:50.332496 28149 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xkmp2\" (UniqueName: \"kubernetes.io/projected/8a77bade-cffa-4d3e-998b-b60a1cabf4f7-kube-api-access-xkmp2\") on node \"master-0\" DevicePath \"\""
Mar 13 13:10:50.566101 master-0 kubenswrapper[28149]: I0313 13:10:50.566005 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-c179-account-create-update-q66nk" event={"ID":"8a77bade-cffa-4d3e-998b-b60a1cabf4f7","Type":"ContainerDied","Data":"0c7c2d727144447413fcdc43158a03e7483d9ff25ac9cf387661344907761a69"}
Mar 13 13:10:50.566101 master-0 kubenswrapper[28149]: I0313 13:10:50.566100 28149 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0c7c2d727144447413fcdc43158a03e7483d9ff25ac9cf387661344907761a69"
Mar 13 13:10:50.566787 master-0 kubenswrapper[28149]: I0313 13:10:50.566750 28149 util.go:48] "No ready sandbox for pod can be found.
Need to start a new one" pod="openstack/ironic-c179-account-create-update-q66nk" Mar 13 13:10:50.572717 master-0 kubenswrapper[28149]: I0313 13:10:50.572663 28149 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-e6fbd-default-external-api-0" Mar 13 13:10:50.573277 master-0 kubenswrapper[28149]: I0313 13:10:50.573228 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-dc5fdb9b9-7mhs2" event={"ID":"3e202aeb-6913-4506-ba76-63feb8748d60","Type":"ContainerStarted","Data":"18a8745e58a1e2ccfb89e849e5546947cae1ce0bdf6f20e7de69b1de1c8e5a3d"} Mar 13 13:10:50.573389 master-0 kubenswrapper[28149]: I0313 13:10:50.573325 28149 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-e6fbd-default-internal-api-0" Mar 13 13:10:50.573647 master-0 kubenswrapper[28149]: I0313 13:10:50.573624 28149 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-dc5fdb9b9-7mhs2" Mar 13 13:10:50.709832 master-0 kubenswrapper[28149]: I0313 13:10:50.709533 28149 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="71996c77-565a-4c9a-b654-742f00c3095b" path="/var/lib/kubelet/pods/71996c77-565a-4c9a-b654-742f00c3095b/volumes" Mar 13 13:10:50.710813 master-0 kubenswrapper[28149]: I0313 13:10:50.710778 28149 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="76e4472a-9fe8-452e-924c-7a88df7c1f7d" path="/var/lib/kubelet/pods/76e4472a-9fe8-452e-924c-7a88df7c1f7d/volumes" Mar 13 13:10:50.751683 master-0 kubenswrapper[28149]: I0313 13:10:50.751632 28149 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/csi/topolvm.io^46c3102e-7a0b-4e07-9a24-444142905798\") pod \"55a16176-799c-4d89-bacd-018d4c6f3d5b\" (UID: \"55a16176-799c-4d89-bacd-018d4c6f3d5b\") " Mar 13 13:10:51.370630 master-0 kubenswrapper[28149]: I0313 13:10:51.370505 28149 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-dc5fdb9b9-7mhs2" podStartSLOduration=6.37048477 podStartE2EDuration="6.37048477s" podCreationTimestamp="2026-03-13 13:10:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 13:10:51.361654451 +0000 UTC m=+1025.015119610" watchObservedRunningTime="2026-03-13 13:10:51.37048477 +0000 UTC m=+1025.023949929" Mar 13 13:10:51.539182 master-0 kubenswrapper[28149]: I0313 13:10:51.539080 28149 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-e6fbd-default-internal-api-0"] Mar 13 13:10:51.548041 master-0 kubenswrapper[28149]: I0313 13:10:51.544608 28149 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-e6fbd-default-internal-api-0"] Mar 13 13:10:51.597237 master-0 kubenswrapper[28149]: I0313 13:10:51.596736 28149 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-e6fbd-default-internal-api-0"] Mar 13 13:10:51.607736 master-0 kubenswrapper[28149]: E0313 13:10:51.597304 28149 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8a77bade-cffa-4d3e-998b-b60a1cabf4f7" containerName="mariadb-account-create-update" Mar 13 13:10:51.607736 master-0 kubenswrapper[28149]: I0313 13:10:51.597320 28149 state_mem.go:107] "Deleted CPUSet assignment" podUID="8a77bade-cffa-4d3e-998b-b60a1cabf4f7" containerName="mariadb-account-create-update" Mar 13 13:10:51.607736 master-0 kubenswrapper[28149]: E0313 13:10:51.597331 28149 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2134b334-b48d-46c6-91b6-a824c323d789" containerName="mariadb-database-create" Mar 13 13:10:51.607736 master-0 kubenswrapper[28149]: I0313 13:10:51.597336 28149 state_mem.go:107] "Deleted CPUSet assignment" podUID="2134b334-b48d-46c6-91b6-a824c323d789" containerName="mariadb-database-create" Mar 13 13:10:51.607736 master-0 kubenswrapper[28149]: 
I0313 13:10:51.597576 28149 memory_manager.go:354] "RemoveStaleState removing state" podUID="8a77bade-cffa-4d3e-998b-b60a1cabf4f7" containerName="mariadb-account-create-update" Mar 13 13:10:51.607736 master-0 kubenswrapper[28149]: I0313 13:10:51.597601 28149 memory_manager.go:354] "RemoveStaleState removing state" podUID="2134b334-b48d-46c6-91b6-a824c323d789" containerName="mariadb-database-create" Mar 13 13:10:51.607736 master-0 kubenswrapper[28149]: I0313 13:10:51.603528 28149 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-e6fbd-default-internal-api-0" Mar 13 13:10:51.607736 master-0 kubenswrapper[28149]: I0313 13:10:51.607514 28149 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-e6fbd-default-internal-api-0"] Mar 13 13:10:51.625616 master-0 kubenswrapper[28149]: I0313 13:10:51.625542 28149 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-e6fbd-default-internal-config-data" Mar 13 13:10:51.713334 master-0 kubenswrapper[28149]: I0313 13:10:51.710408 28149 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/topolvm.io^46c3102e-7a0b-4e07-9a24-444142905798" (OuterVolumeSpecName: "glance") pod "55a16176-799c-4d89-bacd-018d4c6f3d5b" (UID: "55a16176-799c-4d89-bacd-018d4c6f3d5b"). InnerVolumeSpecName "pvc-12182b6b-d6bb-4e5f-ac3a-df190dba3645". 
PluginName "kubernetes.io/csi", VolumeGidValue "" Mar 13 13:10:51.768309 master-0 kubenswrapper[28149]: I0313 13:10:51.768254 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-fbfb8b97-dcb8-43d8-a7ca-10f6eee24ec3\" (UniqueName: \"kubernetes.io/csi/topolvm.io^eb822992-ec5e-49c3-b53d-b596568ce401\") pod \"glance-e6fbd-default-internal-api-0\" (UID: \"3a5fc7a8-dbd8-4455-a5a5-81dac6aeaa22\") " pod="openstack/glance-e6fbd-default-internal-api-0" Mar 13 13:10:51.792476 master-0 kubenswrapper[28149]: I0313 13:10:51.792283 28149 generic.go:334] "Generic (PLEG): container finished" podID="736c6577-449b-4b8d-8bfa-3dfbcc259e94" containerID="9e9d3431ab722a51a8be3b8442bbcf269a950063e2d61434dfd3760b4c3ccf4f" exitCode=0 Mar 13 13:10:51.793309 master-0 kubenswrapper[28149]: I0313 13:10:51.793193 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-jpjd2" event={"ID":"736c6577-449b-4b8d-8bfa-3dfbcc259e94","Type":"ContainerDied","Data":"9e9d3431ab722a51a8be3b8442bbcf269a950063e2d61434dfd3760b4c3ccf4f"} Mar 13 13:10:51.817813 master-0 kubenswrapper[28149]: I0313 13:10:51.814075 28149 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/csi/topolvm.io^eb822992-ec5e-49c3-b53d-b596568ce401\") pod \"3a5fc7a8-dbd8-4455-a5a5-81dac6aeaa22\" (UID: \"3a5fc7a8-dbd8-4455-a5a5-81dac6aeaa22\") " Mar 13 13:10:51.817813 master-0 kubenswrapper[28149]: I0313 13:10:51.814787 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s4zxh\" (UniqueName: \"kubernetes.io/projected/862ca5ea-1489-4696-a698-ab9992caaa78-kube-api-access-s4zxh\") pod \"glance-e6fbd-default-internal-api-0\" (UID: \"862ca5ea-1489-4696-a698-ab9992caaa78\") " pod="openstack/glance-e6fbd-default-internal-api-0" Mar 13 13:10:51.817813 master-0 kubenswrapper[28149]: I0313 13:10:51.814916 28149 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/862ca5ea-1489-4696-a698-ab9992caaa78-combined-ca-bundle\") pod \"glance-e6fbd-default-internal-api-0\" (UID: \"862ca5ea-1489-4696-a698-ab9992caaa78\") " pod="openstack/glance-e6fbd-default-internal-api-0" Mar 13 13:10:51.817813 master-0 kubenswrapper[28149]: I0313 13:10:51.814974 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/862ca5ea-1489-4696-a698-ab9992caaa78-scripts\") pod \"glance-e6fbd-default-internal-api-0\" (UID: \"862ca5ea-1489-4696-a698-ab9992caaa78\") " pod="openstack/glance-e6fbd-default-internal-api-0" Mar 13 13:10:51.817813 master-0 kubenswrapper[28149]: I0313 13:10:51.815058 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/862ca5ea-1489-4696-a698-ab9992caaa78-config-data\") pod \"glance-e6fbd-default-internal-api-0\" (UID: \"862ca5ea-1489-4696-a698-ab9992caaa78\") " pod="openstack/glance-e6fbd-default-internal-api-0" Mar 13 13:10:51.817813 master-0 kubenswrapper[28149]: I0313 13:10:51.815129 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/862ca5ea-1489-4696-a698-ab9992caaa78-logs\") pod \"glance-e6fbd-default-internal-api-0\" (UID: \"862ca5ea-1489-4696-a698-ab9992caaa78\") " pod="openstack/glance-e6fbd-default-internal-api-0" Mar 13 13:10:51.817813 master-0 kubenswrapper[28149]: I0313 13:10:51.815360 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/862ca5ea-1489-4696-a698-ab9992caaa78-httpd-run\") pod \"glance-e6fbd-default-internal-api-0\" (UID: \"862ca5ea-1489-4696-a698-ab9992caaa78\") " 
pod="openstack/glance-e6fbd-default-internal-api-0" Mar 13 13:10:51.817813 master-0 kubenswrapper[28149]: I0313 13:10:51.815592 28149 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-12182b6b-d6bb-4e5f-ac3a-df190dba3645\" (UniqueName: \"kubernetes.io/csi/topolvm.io^46c3102e-7a0b-4e07-9a24-444142905798\") on node \"master-0\" " Mar 13 13:10:51.865429 master-0 kubenswrapper[28149]: I0313 13:10:51.865372 28149 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... Mar 13 13:10:51.865715 master-0 kubenswrapper[28149]: I0313 13:10:51.865671 28149 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-12182b6b-d6bb-4e5f-ac3a-df190dba3645" (UniqueName: "kubernetes.io/csi/topolvm.io^46c3102e-7a0b-4e07-9a24-444142905798") on node "master-0" Mar 13 13:10:51.872335 master-0 kubenswrapper[28149]: I0313 13:10:51.870856 28149 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/topolvm.io^eb822992-ec5e-49c3-b53d-b596568ce401" (OuterVolumeSpecName: "glance") pod "3a5fc7a8-dbd8-4455-a5a5-81dac6aeaa22" (UID: "3a5fc7a8-dbd8-4455-a5a5-81dac6aeaa22"). InnerVolumeSpecName "pvc-fbfb8b97-dcb8-43d8-a7ca-10f6eee24ec3". 
PluginName "kubernetes.io/csi", VolumeGidValue "" Mar 13 13:10:51.935644 master-0 kubenswrapper[28149]: I0313 13:10:51.918148 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/862ca5ea-1489-4696-a698-ab9992caaa78-logs\") pod \"glance-e6fbd-default-internal-api-0\" (UID: \"862ca5ea-1489-4696-a698-ab9992caaa78\") " pod="openstack/glance-e6fbd-default-internal-api-0" Mar 13 13:10:51.935644 master-0 kubenswrapper[28149]: I0313 13:10:51.918252 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-fbfb8b97-dcb8-43d8-a7ca-10f6eee24ec3\" (UniqueName: \"kubernetes.io/csi/topolvm.io^eb822992-ec5e-49c3-b53d-b596568ce401\") pod \"glance-e6fbd-default-internal-api-0\" (UID: \"862ca5ea-1489-4696-a698-ab9992caaa78\") " pod="openstack/glance-e6fbd-default-internal-api-0" Mar 13 13:10:51.935644 master-0 kubenswrapper[28149]: I0313 13:10:51.918365 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/862ca5ea-1489-4696-a698-ab9992caaa78-httpd-run\") pod \"glance-e6fbd-default-internal-api-0\" (UID: \"862ca5ea-1489-4696-a698-ab9992caaa78\") " pod="openstack/glance-e6fbd-default-internal-api-0" Mar 13 13:10:51.935644 master-0 kubenswrapper[28149]: I0313 13:10:51.918530 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s4zxh\" (UniqueName: \"kubernetes.io/projected/862ca5ea-1489-4696-a698-ab9992caaa78-kube-api-access-s4zxh\") pod \"glance-e6fbd-default-internal-api-0\" (UID: \"862ca5ea-1489-4696-a698-ab9992caaa78\") " pod="openstack/glance-e6fbd-default-internal-api-0" Mar 13 13:10:51.935644 master-0 kubenswrapper[28149]: I0313 13:10:51.918598 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/862ca5ea-1489-4696-a698-ab9992caaa78-combined-ca-bundle\") pod 
\"glance-e6fbd-default-internal-api-0\" (UID: \"862ca5ea-1489-4696-a698-ab9992caaa78\") " pod="openstack/glance-e6fbd-default-internal-api-0" Mar 13 13:10:51.935644 master-0 kubenswrapper[28149]: I0313 13:10:51.918642 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/862ca5ea-1489-4696-a698-ab9992caaa78-scripts\") pod \"glance-e6fbd-default-internal-api-0\" (UID: \"862ca5ea-1489-4696-a698-ab9992caaa78\") " pod="openstack/glance-e6fbd-default-internal-api-0" Mar 13 13:10:51.935644 master-0 kubenswrapper[28149]: I0313 13:10:51.918665 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/862ca5ea-1489-4696-a698-ab9992caaa78-logs\") pod \"glance-e6fbd-default-internal-api-0\" (UID: \"862ca5ea-1489-4696-a698-ab9992caaa78\") " pod="openstack/glance-e6fbd-default-internal-api-0" Mar 13 13:10:51.935644 master-0 kubenswrapper[28149]: I0313 13:10:51.918695 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/862ca5ea-1489-4696-a698-ab9992caaa78-config-data\") pod \"glance-e6fbd-default-internal-api-0\" (UID: \"862ca5ea-1489-4696-a698-ab9992caaa78\") " pod="openstack/glance-e6fbd-default-internal-api-0" Mar 13 13:10:51.935644 master-0 kubenswrapper[28149]: I0313 13:10:51.918787 28149 reconciler_common.go:293] "Volume detached for volume \"pvc-12182b6b-d6bb-4e5f-ac3a-df190dba3645\" (UniqueName: \"kubernetes.io/csi/topolvm.io^46c3102e-7a0b-4e07-9a24-444142905798\") on node \"master-0\" DevicePath \"\"" Mar 13 13:10:51.935644 master-0 kubenswrapper[28149]: I0313 13:10:51.920030 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/862ca5ea-1489-4696-a698-ab9992caaa78-httpd-run\") pod \"glance-e6fbd-default-internal-api-0\" (UID: \"862ca5ea-1489-4696-a698-ab9992caaa78\") " 
pod="openstack/glance-e6fbd-default-internal-api-0" Mar 13 13:10:51.935644 master-0 kubenswrapper[28149]: I0313 13:10:51.925262 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/862ca5ea-1489-4696-a698-ab9992caaa78-combined-ca-bundle\") pod \"glance-e6fbd-default-internal-api-0\" (UID: \"862ca5ea-1489-4696-a698-ab9992caaa78\") " pod="openstack/glance-e6fbd-default-internal-api-0" Mar 13 13:10:51.935644 master-0 kubenswrapper[28149]: I0313 13:10:51.926024 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/862ca5ea-1489-4696-a698-ab9992caaa78-config-data\") pod \"glance-e6fbd-default-internal-api-0\" (UID: \"862ca5ea-1489-4696-a698-ab9992caaa78\") " pod="openstack/glance-e6fbd-default-internal-api-0" Mar 13 13:10:51.935644 master-0 kubenswrapper[28149]: I0313 13:10:51.928228 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/862ca5ea-1489-4696-a698-ab9992caaa78-scripts\") pod \"glance-e6fbd-default-internal-api-0\" (UID: \"862ca5ea-1489-4696-a698-ab9992caaa78\") " pod="openstack/glance-e6fbd-default-internal-api-0" Mar 13 13:10:51.955467 master-0 kubenswrapper[28149]: I0313 13:10:51.955380 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s4zxh\" (UniqueName: \"kubernetes.io/projected/862ca5ea-1489-4696-a698-ab9992caaa78-kube-api-access-s4zxh\") pod \"glance-e6fbd-default-internal-api-0\" (UID: \"862ca5ea-1489-4696-a698-ab9992caaa78\") " pod="openstack/glance-e6fbd-default-internal-api-0" Mar 13 13:10:52.028862 master-0 kubenswrapper[28149]: I0313 13:10:52.013969 28149 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-e6fbd-default-external-api-0"] Mar 13 13:10:52.035332 master-0 kubenswrapper[28149]: I0313 13:10:52.034292 28149 kubelet.go:2431] "SyncLoop REMOVE" source="api" 
pods=["openstack/glance-e6fbd-default-external-api-0"] Mar 13 13:10:52.062095 master-0 kubenswrapper[28149]: I0313 13:10:52.061975 28149 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-e6fbd-default-external-api-0"] Mar 13 13:10:52.065203 master-0 kubenswrapper[28149]: I0313 13:10:52.065119 28149 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-e6fbd-default-external-api-0" Mar 13 13:10:52.068069 master-0 kubenswrapper[28149]: I0313 13:10:52.067971 28149 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-e6fbd-default-external-config-data" Mar 13 13:10:52.210741 master-0 kubenswrapper[28149]: I0313 13:10:52.210678 28149 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-e6fbd-default-external-api-0"] Mar 13 13:10:52.264884 master-0 kubenswrapper[28149]: I0313 13:10:52.264827 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cfc459b1-604b-46d5-be0f-792e3a66a7b1-combined-ca-bundle\") pod \"glance-e6fbd-default-external-api-0\" (UID: \"cfc459b1-604b-46d5-be0f-792e3a66a7b1\") " pod="openstack/glance-e6fbd-default-external-api-0" Mar 13 13:10:52.265243 master-0 kubenswrapper[28149]: I0313 13:10:52.265225 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n88hf\" (UniqueName: \"kubernetes.io/projected/cfc459b1-604b-46d5-be0f-792e3a66a7b1-kube-api-access-n88hf\") pod \"glance-e6fbd-default-external-api-0\" (UID: \"cfc459b1-604b-46d5-be0f-792e3a66a7b1\") " pod="openstack/glance-e6fbd-default-external-api-0" Mar 13 13:10:52.265374 master-0 kubenswrapper[28149]: I0313 13:10:52.265357 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cfc459b1-604b-46d5-be0f-792e3a66a7b1-logs\") pod 
\"glance-e6fbd-default-external-api-0\" (UID: \"cfc459b1-604b-46d5-be0f-792e3a66a7b1\") " pod="openstack/glance-e6fbd-default-external-api-0" Mar 13 13:10:52.265534 master-0 kubenswrapper[28149]: I0313 13:10:52.265512 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/cfc459b1-604b-46d5-be0f-792e3a66a7b1-httpd-run\") pod \"glance-e6fbd-default-external-api-0\" (UID: \"cfc459b1-604b-46d5-be0f-792e3a66a7b1\") " pod="openstack/glance-e6fbd-default-external-api-0" Mar 13 13:10:52.265637 master-0 kubenswrapper[28149]: I0313 13:10:52.265624 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cfc459b1-604b-46d5-be0f-792e3a66a7b1-config-data\") pod \"glance-e6fbd-default-external-api-0\" (UID: \"cfc459b1-604b-46d5-be0f-792e3a66a7b1\") " pod="openstack/glance-e6fbd-default-external-api-0" Mar 13 13:10:52.265824 master-0 kubenswrapper[28149]: I0313 13:10:52.265746 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-12182b6b-d6bb-4e5f-ac3a-df190dba3645\" (UniqueName: \"kubernetes.io/csi/topolvm.io^46c3102e-7a0b-4e07-9a24-444142905798\") pod \"glance-e6fbd-default-external-api-0\" (UID: \"cfc459b1-604b-46d5-be0f-792e3a66a7b1\") " pod="openstack/glance-e6fbd-default-external-api-0" Mar 13 13:10:52.267618 master-0 kubenswrapper[28149]: I0313 13:10:52.266029 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cfc459b1-604b-46d5-be0f-792e3a66a7b1-scripts\") pod \"glance-e6fbd-default-external-api-0\" (UID: \"cfc459b1-604b-46d5-be0f-792e3a66a7b1\") " pod="openstack/glance-e6fbd-default-external-api-0" Mar 13 13:10:52.370415 master-0 kubenswrapper[28149]: I0313 13:10:52.369624 28149 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-n88hf\" (UniqueName: \"kubernetes.io/projected/cfc459b1-604b-46d5-be0f-792e3a66a7b1-kube-api-access-n88hf\") pod \"glance-e6fbd-default-external-api-0\" (UID: \"cfc459b1-604b-46d5-be0f-792e3a66a7b1\") " pod="openstack/glance-e6fbd-default-external-api-0" Mar 13 13:10:52.370415 master-0 kubenswrapper[28149]: I0313 13:10:52.369697 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cfc459b1-604b-46d5-be0f-792e3a66a7b1-logs\") pod \"glance-e6fbd-default-external-api-0\" (UID: \"cfc459b1-604b-46d5-be0f-792e3a66a7b1\") " pod="openstack/glance-e6fbd-default-external-api-0" Mar 13 13:10:52.370415 master-0 kubenswrapper[28149]: I0313 13:10:52.369756 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/cfc459b1-604b-46d5-be0f-792e3a66a7b1-httpd-run\") pod \"glance-e6fbd-default-external-api-0\" (UID: \"cfc459b1-604b-46d5-be0f-792e3a66a7b1\") " pod="openstack/glance-e6fbd-default-external-api-0" Mar 13 13:10:52.370415 master-0 kubenswrapper[28149]: I0313 13:10:52.369775 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cfc459b1-604b-46d5-be0f-792e3a66a7b1-config-data\") pod \"glance-e6fbd-default-external-api-0\" (UID: \"cfc459b1-604b-46d5-be0f-792e3a66a7b1\") " pod="openstack/glance-e6fbd-default-external-api-0" Mar 13 13:10:52.370415 master-0 kubenswrapper[28149]: I0313 13:10:52.369801 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-12182b6b-d6bb-4e5f-ac3a-df190dba3645\" (UniqueName: \"kubernetes.io/csi/topolvm.io^46c3102e-7a0b-4e07-9a24-444142905798\") pod \"glance-e6fbd-default-external-api-0\" (UID: \"cfc459b1-604b-46d5-be0f-792e3a66a7b1\") " pod="openstack/glance-e6fbd-default-external-api-0" Mar 13 13:10:52.371696 master-0 
kubenswrapper[28149]: I0313 13:10:52.371339 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cfc459b1-604b-46d5-be0f-792e3a66a7b1-logs\") pod \"glance-e6fbd-default-external-api-0\" (UID: \"cfc459b1-604b-46d5-be0f-792e3a66a7b1\") " pod="openstack/glance-e6fbd-default-external-api-0" Mar 13 13:10:52.372915 master-0 kubenswrapper[28149]: I0313 13:10:52.372042 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cfc459b1-604b-46d5-be0f-792e3a66a7b1-scripts\") pod \"glance-e6fbd-default-external-api-0\" (UID: \"cfc459b1-604b-46d5-be0f-792e3a66a7b1\") " pod="openstack/glance-e6fbd-default-external-api-0" Mar 13 13:10:52.372915 master-0 kubenswrapper[28149]: I0313 13:10:52.372193 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cfc459b1-604b-46d5-be0f-792e3a66a7b1-combined-ca-bundle\") pod \"glance-e6fbd-default-external-api-0\" (UID: \"cfc459b1-604b-46d5-be0f-792e3a66a7b1\") " pod="openstack/glance-e6fbd-default-external-api-0" Mar 13 13:10:52.373704 master-0 kubenswrapper[28149]: I0313 13:10:52.373683 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/cfc459b1-604b-46d5-be0f-792e3a66a7b1-httpd-run\") pod \"glance-e6fbd-default-external-api-0\" (UID: \"cfc459b1-604b-46d5-be0f-792e3a66a7b1\") " pod="openstack/glance-e6fbd-default-external-api-0" Mar 13 13:10:52.377288 master-0 kubenswrapper[28149]: I0313 13:10:52.377248 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cfc459b1-604b-46d5-be0f-792e3a66a7b1-combined-ca-bundle\") pod \"glance-e6fbd-default-external-api-0\" (UID: \"cfc459b1-604b-46d5-be0f-792e3a66a7b1\") " pod="openstack/glance-e6fbd-default-external-api-0" Mar 13 
13:10:52.377961 master-0 kubenswrapper[28149]: I0313 13:10:52.377914 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cfc459b1-604b-46d5-be0f-792e3a66a7b1-scripts\") pod \"glance-e6fbd-default-external-api-0\" (UID: \"cfc459b1-604b-46d5-be0f-792e3a66a7b1\") " pod="openstack/glance-e6fbd-default-external-api-0" Mar 13 13:10:52.378805 master-0 kubenswrapper[28149]: I0313 13:10:52.378785 28149 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Mar 13 13:10:52.378928 master-0 kubenswrapper[28149]: I0313 13:10:52.378908 28149 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-12182b6b-d6bb-4e5f-ac3a-df190dba3645\" (UniqueName: \"kubernetes.io/csi/topolvm.io^46c3102e-7a0b-4e07-9a24-444142905798\") pod \"glance-e6fbd-default-external-api-0\" (UID: \"cfc459b1-604b-46d5-be0f-792e3a66a7b1\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/topolvm.io/dd1664ebbf7aebe13570b4d7d33b7a2c8fb2cd6894f8d3c518cd1e549d5c6ec6/globalmount\"" pod="openstack/glance-e6fbd-default-external-api-0" Mar 13 13:10:52.382959 master-0 kubenswrapper[28149]: I0313 13:10:52.382787 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cfc459b1-604b-46d5-be0f-792e3a66a7b1-config-data\") pod \"glance-e6fbd-default-external-api-0\" (UID: \"cfc459b1-604b-46d5-be0f-792e3a66a7b1\") " pod="openstack/glance-e6fbd-default-external-api-0" Mar 13 13:10:52.400854 master-0 kubenswrapper[28149]: I0313 13:10:52.400464 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n88hf\" (UniqueName: \"kubernetes.io/projected/cfc459b1-604b-46d5-be0f-792e3a66a7b1-kube-api-access-n88hf\") pod \"glance-e6fbd-default-external-api-0\" (UID: \"cfc459b1-604b-46d5-be0f-792e3a66a7b1\") " pod="openstack/glance-e6fbd-default-external-api-0" 
Mar 13 13:10:52.714966 master-0 kubenswrapper[28149]: I0313 13:10:52.714780 28149 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3a5fc7a8-dbd8-4455-a5a5-81dac6aeaa22" path="/var/lib/kubelet/pods/3a5fc7a8-dbd8-4455-a5a5-81dac6aeaa22/volumes"
Mar 13 13:10:52.715575 master-0 kubenswrapper[28149]: I0313 13:10:52.715406 28149 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="55a16176-799c-4d89-bacd-018d4c6f3d5b" path="/var/lib/kubelet/pods/55a16176-799c-4d89-bacd-018d4c6f3d5b/volumes"
Mar 13 13:10:53.380553 master-0 kubenswrapper[28149]: I0313 13:10:53.380167 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-fbfb8b97-dcb8-43d8-a7ca-10f6eee24ec3\" (UniqueName: \"kubernetes.io/csi/topolvm.io^eb822992-ec5e-49c3-b53d-b596568ce401\") pod \"glance-e6fbd-default-internal-api-0\" (UID: \"862ca5ea-1489-4696-a698-ab9992caaa78\") " pod="openstack/glance-e6fbd-default-internal-api-0"
Mar 13 13:10:53.578418 master-0 kubenswrapper[28149]: I0313 13:10:53.578358 28149 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-e6fbd-default-internal-api-0"
Mar 13 13:10:53.578979 master-0 kubenswrapper[28149]: I0313 13:10:53.578953 28149 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ironic-db-sync-h8h9t"]
Mar 13 13:10:53.581564 master-0 kubenswrapper[28149]: I0313 13:10:53.581524 28149 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ironic-db-sync-h8h9t"
Mar 13 13:10:53.584187 master-0 kubenswrapper[28149]: I0313 13:10:53.584151 28149 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ironic-scripts"
Mar 13 13:10:53.584471 master-0 kubenswrapper[28149]: I0313 13:10:53.584364 28149 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ironic-config-data"
Mar 13 13:10:53.612699 master-0 kubenswrapper[28149]: I0313 13:10:53.612535 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0b7e43c1-e19e-4691-a5b4-2a2197764944-combined-ca-bundle\") pod \"ironic-db-sync-h8h9t\" (UID: \"0b7e43c1-e19e-4691-a5b4-2a2197764944\") " pod="openstack/ironic-db-sync-h8h9t"
Mar 13 13:10:53.614399 master-0 kubenswrapper[28149]: I0313 13:10:53.612618 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0b7e43c1-e19e-4691-a5b4-2a2197764944-scripts\") pod \"ironic-db-sync-h8h9t\" (UID: \"0b7e43c1-e19e-4691-a5b4-2a2197764944\") " pod="openstack/ironic-db-sync-h8h9t"
Mar 13 13:10:53.614399 master-0 kubenswrapper[28149]: I0313 13:10:53.613664 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-podinfo\" (UniqueName: \"kubernetes.io/downward-api/0b7e43c1-e19e-4691-a5b4-2a2197764944-etc-podinfo\") pod \"ironic-db-sync-h8h9t\" (UID: \"0b7e43c1-e19e-4691-a5b4-2a2197764944\") " pod="openstack/ironic-db-sync-h8h9t"
Mar 13 13:10:53.614399 master-0 kubenswrapper[28149]: I0313 13:10:53.613717 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wnlf9\" (UniqueName: \"kubernetes.io/projected/0b7e43c1-e19e-4691-a5b4-2a2197764944-kube-api-access-wnlf9\") pod \"ironic-db-sync-h8h9t\" (UID: \"0b7e43c1-e19e-4691-a5b4-2a2197764944\") " pod="openstack/ironic-db-sync-h8h9t"
Mar 13 13:10:53.614399 master-0 kubenswrapper[28149]: I0313 13:10:53.613761 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/0b7e43c1-e19e-4691-a5b4-2a2197764944-config-data-merged\") pod \"ironic-db-sync-h8h9t\" (UID: \"0b7e43c1-e19e-4691-a5b4-2a2197764944\") " pod="openstack/ironic-db-sync-h8h9t"
Mar 13 13:10:53.614399 master-0 kubenswrapper[28149]: I0313 13:10:53.613799 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0b7e43c1-e19e-4691-a5b4-2a2197764944-config-data\") pod \"ironic-db-sync-h8h9t\" (UID: \"0b7e43c1-e19e-4691-a5b4-2a2197764944\") " pod="openstack/ironic-db-sync-h8h9t"
Mar 13 13:10:53.619198 master-0 kubenswrapper[28149]: I0313 13:10:53.619114 28149 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ironic-db-sync-h8h9t"]
Mar 13 13:10:53.715217 master-0 kubenswrapper[28149]: I0313 13:10:53.715109 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0b7e43c1-e19e-4691-a5b4-2a2197764944-combined-ca-bundle\") pod \"ironic-db-sync-h8h9t\" (UID: \"0b7e43c1-e19e-4691-a5b4-2a2197764944\") " pod="openstack/ironic-db-sync-h8h9t"
Mar 13 13:10:53.716304 master-0 kubenswrapper[28149]: I0313 13:10:53.715416 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0b7e43c1-e19e-4691-a5b4-2a2197764944-scripts\") pod \"ironic-db-sync-h8h9t\" (UID: \"0b7e43c1-e19e-4691-a5b4-2a2197764944\") " pod="openstack/ironic-db-sync-h8h9t"
Mar 13 13:10:53.716304 master-0 kubenswrapper[28149]: I0313 13:10:53.715887 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-podinfo\" (UniqueName: \"kubernetes.io/downward-api/0b7e43c1-e19e-4691-a5b4-2a2197764944-etc-podinfo\") pod \"ironic-db-sync-h8h9t\" (UID: \"0b7e43c1-e19e-4691-a5b4-2a2197764944\") " pod="openstack/ironic-db-sync-h8h9t"
Mar 13 13:10:53.717157 master-0 kubenswrapper[28149]: I0313 13:10:53.716798 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wnlf9\" (UniqueName: \"kubernetes.io/projected/0b7e43c1-e19e-4691-a5b4-2a2197764944-kube-api-access-wnlf9\") pod \"ironic-db-sync-h8h9t\" (UID: \"0b7e43c1-e19e-4691-a5b4-2a2197764944\") " pod="openstack/ironic-db-sync-h8h9t"
Mar 13 13:10:53.717157 master-0 kubenswrapper[28149]: I0313 13:10:53.717041 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/0b7e43c1-e19e-4691-a5b4-2a2197764944-config-data-merged\") pod \"ironic-db-sync-h8h9t\" (UID: \"0b7e43c1-e19e-4691-a5b4-2a2197764944\") " pod="openstack/ironic-db-sync-h8h9t"
Mar 13 13:10:53.717526 master-0 kubenswrapper[28149]: I0313 13:10:53.717419 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0b7e43c1-e19e-4691-a5b4-2a2197764944-config-data\") pod \"ironic-db-sync-h8h9t\" (UID: \"0b7e43c1-e19e-4691-a5b4-2a2197764944\") " pod="openstack/ironic-db-sync-h8h9t"
Mar 13 13:10:53.717781 master-0 kubenswrapper[28149]: I0313 13:10:53.717714 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/0b7e43c1-e19e-4691-a5b4-2a2197764944-config-data-merged\") pod \"ironic-db-sync-h8h9t\" (UID: \"0b7e43c1-e19e-4691-a5b4-2a2197764944\") " pod="openstack/ironic-db-sync-h8h9t"
Mar 13 13:10:53.720586 master-0 kubenswrapper[28149]: I0313 13:10:53.720513 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0b7e43c1-e19e-4691-a5b4-2a2197764944-scripts\") pod \"ironic-db-sync-h8h9t\" (UID: \"0b7e43c1-e19e-4691-a5b4-2a2197764944\") " pod="openstack/ironic-db-sync-h8h9t"
Mar 13 13:10:53.721765 master-0 kubenswrapper[28149]: I0313 13:10:53.720920 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0b7e43c1-e19e-4691-a5b4-2a2197764944-config-data\") pod \"ironic-db-sync-h8h9t\" (UID: \"0b7e43c1-e19e-4691-a5b4-2a2197764944\") " pod="openstack/ironic-db-sync-h8h9t"
Mar 13 13:10:53.721765 master-0 kubenswrapper[28149]: I0313 13:10:53.721707 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0b7e43c1-e19e-4691-a5b4-2a2197764944-combined-ca-bundle\") pod \"ironic-db-sync-h8h9t\" (UID: \"0b7e43c1-e19e-4691-a5b4-2a2197764944\") " pod="openstack/ironic-db-sync-h8h9t"
Mar 13 13:10:53.724619 master-0 kubenswrapper[28149]: I0313 13:10:53.724529 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-podinfo\" (UniqueName: \"kubernetes.io/downward-api/0b7e43c1-e19e-4691-a5b4-2a2197764944-etc-podinfo\") pod \"ironic-db-sync-h8h9t\" (UID: \"0b7e43c1-e19e-4691-a5b4-2a2197764944\") " pod="openstack/ironic-db-sync-h8h9t"
Mar 13 13:10:53.743260 master-0 kubenswrapper[28149]: I0313 13:10:53.738081 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wnlf9\" (UniqueName: \"kubernetes.io/projected/0b7e43c1-e19e-4691-a5b4-2a2197764944-kube-api-access-wnlf9\") pod \"ironic-db-sync-h8h9t\" (UID: \"0b7e43c1-e19e-4691-a5b4-2a2197764944\") " pod="openstack/ironic-db-sync-h8h9t"
Mar 13 13:10:53.951029 master-0 kubenswrapper[28149]: I0313 13:10:53.949201 28149 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ironic-db-sync-h8h9t"
Mar 13 13:10:54.565948 master-0 kubenswrapper[28149]: I0313 13:10:54.565872 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-12182b6b-d6bb-4e5f-ac3a-df190dba3645\" (UniqueName: \"kubernetes.io/csi/topolvm.io^46c3102e-7a0b-4e07-9a24-444142905798\") pod \"glance-e6fbd-default-external-api-0\" (UID: \"cfc459b1-604b-46d5-be0f-792e3a66a7b1\") " pod="openstack/glance-e6fbd-default-external-api-0"
Mar 13 13:10:54.583749 master-0 kubenswrapper[28149]: I0313 13:10:54.582147 28149 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-e6fbd-default-external-api-0"
Mar 13 13:10:55.187275 master-0 kubenswrapper[28149]: I0313 13:10:55.185826 28149 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-jpjd2"
Mar 13 13:10:55.306588 master-0 kubenswrapper[28149]: I0313 13:10:55.306520 28149 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/736c6577-449b-4b8d-8bfa-3dfbcc259e94-config-data\") pod \"736c6577-449b-4b8d-8bfa-3dfbcc259e94\" (UID: \"736c6577-449b-4b8d-8bfa-3dfbcc259e94\") "
Mar 13 13:10:55.308413 master-0 kubenswrapper[28149]: I0313 13:10:55.308369 28149 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/736c6577-449b-4b8d-8bfa-3dfbcc259e94-fernet-keys\") pod \"736c6577-449b-4b8d-8bfa-3dfbcc259e94\" (UID: \"736c6577-449b-4b8d-8bfa-3dfbcc259e94\") "
Mar 13 13:10:55.308535 master-0 kubenswrapper[28149]: I0313 13:10:55.308446 28149 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lph7g\" (UniqueName: \"kubernetes.io/projected/736c6577-449b-4b8d-8bfa-3dfbcc259e94-kube-api-access-lph7g\") pod \"736c6577-449b-4b8d-8bfa-3dfbcc259e94\" (UID: \"736c6577-449b-4b8d-8bfa-3dfbcc259e94\") "
Mar 13 13:10:55.308619 master-0 kubenswrapper[28149]: I0313 13:10:55.308565 28149 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/736c6577-449b-4b8d-8bfa-3dfbcc259e94-combined-ca-bundle\") pod \"736c6577-449b-4b8d-8bfa-3dfbcc259e94\" (UID: \"736c6577-449b-4b8d-8bfa-3dfbcc259e94\") "
Mar 13 13:10:55.308756 master-0 kubenswrapper[28149]: I0313 13:10:55.308666 28149 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/736c6577-449b-4b8d-8bfa-3dfbcc259e94-credential-keys\") pod \"736c6577-449b-4b8d-8bfa-3dfbcc259e94\" (UID: \"736c6577-449b-4b8d-8bfa-3dfbcc259e94\") "
Mar 13 13:10:55.308852 master-0 kubenswrapper[28149]: I0313 13:10:55.308756 28149 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/736c6577-449b-4b8d-8bfa-3dfbcc259e94-scripts\") pod \"736c6577-449b-4b8d-8bfa-3dfbcc259e94\" (UID: \"736c6577-449b-4b8d-8bfa-3dfbcc259e94\") "
Mar 13 13:10:55.313562 master-0 kubenswrapper[28149]: I0313 13:10:55.313358 28149 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/736c6577-449b-4b8d-8bfa-3dfbcc259e94-kube-api-access-lph7g" (OuterVolumeSpecName: "kube-api-access-lph7g") pod "736c6577-449b-4b8d-8bfa-3dfbcc259e94" (UID: "736c6577-449b-4b8d-8bfa-3dfbcc259e94"). InnerVolumeSpecName "kube-api-access-lph7g". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 13 13:10:55.313562 master-0 kubenswrapper[28149]: I0313 13:10:55.313502 28149 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/736c6577-449b-4b8d-8bfa-3dfbcc259e94-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "736c6577-449b-4b8d-8bfa-3dfbcc259e94" (UID: "736c6577-449b-4b8d-8bfa-3dfbcc259e94"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 13 13:10:55.313562 master-0 kubenswrapper[28149]: I0313 13:10:55.313498 28149 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/736c6577-449b-4b8d-8bfa-3dfbcc259e94-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "736c6577-449b-4b8d-8bfa-3dfbcc259e94" (UID: "736c6577-449b-4b8d-8bfa-3dfbcc259e94"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 13 13:10:55.332045 master-0 kubenswrapper[28149]: I0313 13:10:55.331936 28149 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/736c6577-449b-4b8d-8bfa-3dfbcc259e94-scripts" (OuterVolumeSpecName: "scripts") pod "736c6577-449b-4b8d-8bfa-3dfbcc259e94" (UID: "736c6577-449b-4b8d-8bfa-3dfbcc259e94"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 13 13:10:55.379477 master-0 kubenswrapper[28149]: I0313 13:10:55.379409 28149 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/736c6577-449b-4b8d-8bfa-3dfbcc259e94-config-data" (OuterVolumeSpecName: "config-data") pod "736c6577-449b-4b8d-8bfa-3dfbcc259e94" (UID: "736c6577-449b-4b8d-8bfa-3dfbcc259e94"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 13 13:10:55.388422 master-0 kubenswrapper[28149]: I0313 13:10:55.388356 28149 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/736c6577-449b-4b8d-8bfa-3dfbcc259e94-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "736c6577-449b-4b8d-8bfa-3dfbcc259e94" (UID: "736c6577-449b-4b8d-8bfa-3dfbcc259e94"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 13 13:10:55.414328 master-0 kubenswrapper[28149]: I0313 13:10:55.412422 28149 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/736c6577-449b-4b8d-8bfa-3dfbcc259e94-scripts\") on node \"master-0\" DevicePath \"\""
Mar 13 13:10:55.414328 master-0 kubenswrapper[28149]: I0313 13:10:55.412486 28149 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/736c6577-449b-4b8d-8bfa-3dfbcc259e94-config-data\") on node \"master-0\" DevicePath \"\""
Mar 13 13:10:55.414328 master-0 kubenswrapper[28149]: I0313 13:10:55.412497 28149 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/736c6577-449b-4b8d-8bfa-3dfbcc259e94-fernet-keys\") on node \"master-0\" DevicePath \"\""
Mar 13 13:10:55.414328 master-0 kubenswrapper[28149]: I0313 13:10:55.412507 28149 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lph7g\" (UniqueName: \"kubernetes.io/projected/736c6577-449b-4b8d-8bfa-3dfbcc259e94-kube-api-access-lph7g\") on node \"master-0\" DevicePath \"\""
Mar 13 13:10:55.414328 master-0 kubenswrapper[28149]: I0313 13:10:55.412517 28149 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/736c6577-449b-4b8d-8bfa-3dfbcc259e94-combined-ca-bundle\") on node \"master-0\" DevicePath \"\""
Mar 13 13:10:55.414328 master-0 kubenswrapper[28149]: I0313 13:10:55.412526 28149 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/736c6577-449b-4b8d-8bfa-3dfbcc259e94-credential-keys\") on node \"master-0\" DevicePath \"\""
Mar 13 13:10:55.558642 master-0 kubenswrapper[28149]: I0313 13:10:55.558209 28149 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-e6fbd-default-internal-api-0"]
Mar 13 13:10:55.560994 master-0 kubenswrapper[28149]: W0313 13:10:55.560923 28149 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod862ca5ea_1489_4696_a698_ab9992caaa78.slice/crio-f64b0f5723d8e88590ac7fa396a5fbe6352548af07c8a54b67fe94e39f3bccd8 WatchSource:0}: Error finding container f64b0f5723d8e88590ac7fa396a5fbe6352548af07c8a54b67fe94e39f3bccd8: Status 404 returned error can't find the container with id f64b0f5723d8e88590ac7fa396a5fbe6352548af07c8a54b67fe94e39f3bccd8
Mar 13 13:10:55.650879 master-0 kubenswrapper[28149]: I0313 13:10:55.650823 28149 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ironic-db-sync-h8h9t"]
Mar 13 13:10:55.745271 master-0 kubenswrapper[28149]: W0313 13:10:55.745209 28149 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podcfc459b1_604b_46d5_be0f_792e3a66a7b1.slice/crio-51fb7735286947c34c168eccf9c34890554a866d7219f2f2b73a669d95246e6b WatchSource:0}: Error finding container 51fb7735286947c34c168eccf9c34890554a866d7219f2f2b73a669d95246e6b: Status 404 returned error can't find the container with id 51fb7735286947c34c168eccf9c34890554a866d7219f2f2b73a669d95246e6b
Mar 13 13:10:55.745617 master-0 kubenswrapper[28149]: I0313 13:10:55.745558 28149 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-e6fbd-default-external-api-0"]
Mar 13 13:10:55.890011 master-0 kubenswrapper[28149]: I0313 13:10:55.888051 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-jpjd2" event={"ID":"736c6577-449b-4b8d-8bfa-3dfbcc259e94","Type":"ContainerDied","Data":"802a4eb8fd604946a8eb48f1fd0e2593b37275e5f3465a341033c567e666f3ef"}
Mar 13 13:10:55.890011 master-0 kubenswrapper[28149]: I0313 13:10:55.888125 28149 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="802a4eb8fd604946a8eb48f1fd0e2593b37275e5f3465a341033c567e666f3ef"
Mar 13 13:10:55.890011 master-0 kubenswrapper[28149]: I0313 13:10:55.888233 28149 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-jpjd2"
Mar 13 13:10:55.891174 master-0 kubenswrapper[28149]: I0313 13:10:55.891085 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-e6fbd-default-internal-api-0" event={"ID":"862ca5ea-1489-4696-a698-ab9992caaa78","Type":"ContainerStarted","Data":"f64b0f5723d8e88590ac7fa396a5fbe6352548af07c8a54b67fe94e39f3bccd8"}
Mar 13 13:10:55.894286 master-0 kubenswrapper[28149]: I0313 13:10:55.893657 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-wtgql" event={"ID":"9d77ebfb-6652-45f8-8bfb-fe1e4344c3a3","Type":"ContainerStarted","Data":"50d6bb62905d2d72748b2a1de7cb2f9566378cd11cb9c196e39eda57c5bd6748"}
Mar 13 13:10:55.895546 master-0 kubenswrapper[28149]: I0313 13:10:55.895472 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-db-sync-h8h9t" event={"ID":"0b7e43c1-e19e-4691-a5b4-2a2197764944","Type":"ContainerStarted","Data":"1dc772d7e244f33638fc3fe5355cf46fcf91c6152e45c7da565bf2a54c7c335e"}
Mar 13 13:10:55.897375 master-0 kubenswrapper[28149]: I0313 13:10:55.897339 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-e6fbd-default-external-api-0" event={"ID":"cfc459b1-604b-46d5-be0f-792e3a66a7b1","Type":"ContainerStarted","Data":"51fb7735286947c34c168eccf9c34890554a866d7219f2f2b73a669d95246e6b"}
Mar 13 13:10:56.265396 master-0 kubenswrapper[28149]: I0313 13:10:56.265305 28149 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-db-sync-wtgql" podStartSLOduration=4.442416533 podStartE2EDuration="13.265235655s" podCreationTimestamp="2026-03-13 13:10:43 +0000 UTC" firstStartedPulling="2026-03-13 13:10:46.22980078 +0000 UTC m=+1019.883265939" lastFinishedPulling="2026-03-13 13:10:55.052619902 +0000 UTC m=+1028.706085061" observedRunningTime="2026-03-13 13:10:55.927465716 +0000 UTC m=+1029.580930875" watchObservedRunningTime="2026-03-13 13:10:56.265235655 +0000 UTC m=+1029.918700814"
Mar 13 13:10:56.364915 master-0 kubenswrapper[28149]: I0313 13:10:56.364825 28149 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-jpjd2"]
Mar 13 13:10:56.377186 master-0 kubenswrapper[28149]: I0313 13:10:56.377109 28149 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-jpjd2"]
Mar 13 13:10:56.417202 master-0 kubenswrapper[28149]: I0313 13:10:56.416434 28149 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-e6fbd-default-external-api-0"]
Mar 13 13:10:56.421523 master-0 kubenswrapper[28149]: I0313 13:10:56.421404 28149 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-dc5fdb9b9-7mhs2"
Mar 13 13:10:56.478661 master-0 kubenswrapper[28149]: I0313 13:10:56.478601 28149 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-smh2q"]
Mar 13 13:10:56.479726 master-0 kubenswrapper[28149]: E0313 13:10:56.479702 28149 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="736c6577-449b-4b8d-8bfa-3dfbcc259e94" containerName="keystone-bootstrap"
Mar 13 13:10:56.479823 master-0 kubenswrapper[28149]: I0313 13:10:56.479811 28149 state_mem.go:107] "Deleted CPUSet assignment" podUID="736c6577-449b-4b8d-8bfa-3dfbcc259e94" containerName="keystone-bootstrap"
Mar 13 13:10:56.480204 master-0 kubenswrapper[28149]: I0313 13:10:56.480184 28149 memory_manager.go:354] "RemoveStaleState removing state" podUID="736c6577-449b-4b8d-8bfa-3dfbcc259e94" containerName="keystone-bootstrap"
Mar 13 13:10:56.481579 master-0 kubenswrapper[28149]: I0313 13:10:56.481558 28149 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-smh2q"
Mar 13 13:10:56.509582 master-0 kubenswrapper[28149]: I0313 13:10:56.509535 28149 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts"
Mar 13 13:10:56.510112 master-0 kubenswrapper[28149]: I0313 13:10:56.510081 28149 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone"
Mar 13 13:10:56.511987 master-0 kubenswrapper[28149]: I0313 13:10:56.511971 28149 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data"
Mar 13 13:10:56.556716 master-0 kubenswrapper[28149]: I0313 13:10:56.554648 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1574918a-8865-4cd3-89c5-a2e9855c8e23-config-data\") pod \"keystone-bootstrap-smh2q\" (UID: \"1574918a-8865-4cd3-89c5-a2e9855c8e23\") " pod="openstack/keystone-bootstrap-smh2q"
Mar 13 13:10:56.556716 master-0 kubenswrapper[28149]: I0313 13:10:56.554726 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1574918a-8865-4cd3-89c5-a2e9855c8e23-scripts\") pod \"keystone-bootstrap-smh2q\" (UID: \"1574918a-8865-4cd3-89c5-a2e9855c8e23\") " pod="openstack/keystone-bootstrap-smh2q"
Mar 13 13:10:56.556716 master-0 kubenswrapper[28149]: I0313 13:10:56.554782 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/1574918a-8865-4cd3-89c5-a2e9855c8e23-fernet-keys\") pod \"keystone-bootstrap-smh2q\" (UID: \"1574918a-8865-4cd3-89c5-a2e9855c8e23\") " pod="openstack/keystone-bootstrap-smh2q"
Mar 13 13:10:56.556716 master-0 kubenswrapper[28149]: I0313 13:10:56.554802 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1574918a-8865-4cd3-89c5-a2e9855c8e23-combined-ca-bundle\") pod \"keystone-bootstrap-smh2q\" (UID: \"1574918a-8865-4cd3-89c5-a2e9855c8e23\") " pod="openstack/keystone-bootstrap-smh2q"
Mar 13 13:10:56.556716 master-0 kubenswrapper[28149]: I0313 13:10:56.554828 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mmrt4\" (UniqueName: \"kubernetes.io/projected/1574918a-8865-4cd3-89c5-a2e9855c8e23-kube-api-access-mmrt4\") pod \"keystone-bootstrap-smh2q\" (UID: \"1574918a-8865-4cd3-89c5-a2e9855c8e23\") " pod="openstack/keystone-bootstrap-smh2q"
Mar 13 13:10:56.556716 master-0 kubenswrapper[28149]: I0313 13:10:56.554900 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/1574918a-8865-4cd3-89c5-a2e9855c8e23-credential-keys\") pod \"keystone-bootstrap-smh2q\" (UID: \"1574918a-8865-4cd3-89c5-a2e9855c8e23\") " pod="openstack/keystone-bootstrap-smh2q"
Mar 13 13:10:56.627259 master-0 kubenswrapper[28149]: I0313 13:10:56.620531 28149 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-smh2q"]
Mar 13 13:10:56.637708 master-0 kubenswrapper[28149]: I0313 13:10:56.637338 28149 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5b8649b7f9-xtxz5"]
Mar 13 13:10:56.637913 master-0 kubenswrapper[28149]: I0313 13:10:56.637709 28149 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5b8649b7f9-xtxz5" podUID="51d60b57-3f2c-4aeb-83c9-e688bb0bb3c6" containerName="dnsmasq-dns" containerID="cri-o://ca1e8d8a73405caa71e6289f4a9087d7c82d522efa8916691873a3333d2d6dde" gracePeriod=10
Mar 13 13:10:56.666941 master-0 kubenswrapper[28149]: I0313 13:10:56.657425 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1574918a-8865-4cd3-89c5-a2e9855c8e23-config-data\") pod \"keystone-bootstrap-smh2q\" (UID: \"1574918a-8865-4cd3-89c5-a2e9855c8e23\") " pod="openstack/keystone-bootstrap-smh2q"
Mar 13 13:10:56.666941 master-0 kubenswrapper[28149]: I0313 13:10:56.657484 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1574918a-8865-4cd3-89c5-a2e9855c8e23-scripts\") pod \"keystone-bootstrap-smh2q\" (UID: \"1574918a-8865-4cd3-89c5-a2e9855c8e23\") " pod="openstack/keystone-bootstrap-smh2q"
Mar 13 13:10:56.666941 master-0 kubenswrapper[28149]: I0313 13:10:56.657535 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/1574918a-8865-4cd3-89c5-a2e9855c8e23-fernet-keys\") pod \"keystone-bootstrap-smh2q\" (UID: \"1574918a-8865-4cd3-89c5-a2e9855c8e23\") " pod="openstack/keystone-bootstrap-smh2q"
Mar 13 13:10:56.666941 master-0 kubenswrapper[28149]: I0313 13:10:56.657558 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1574918a-8865-4cd3-89c5-a2e9855c8e23-combined-ca-bundle\") pod \"keystone-bootstrap-smh2q\" (UID: \"1574918a-8865-4cd3-89c5-a2e9855c8e23\") " pod="openstack/keystone-bootstrap-smh2q"
Mar 13 13:10:56.666941 master-0 kubenswrapper[28149]: I0313 13:10:56.657583 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mmrt4\" (UniqueName: \"kubernetes.io/projected/1574918a-8865-4cd3-89c5-a2e9855c8e23-kube-api-access-mmrt4\") pod \"keystone-bootstrap-smh2q\" (UID: \"1574918a-8865-4cd3-89c5-a2e9855c8e23\") " pod="openstack/keystone-bootstrap-smh2q"
Mar 13 13:10:56.666941 master-0 kubenswrapper[28149]: I0313 13:10:56.657642 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/1574918a-8865-4cd3-89c5-a2e9855c8e23-credential-keys\") pod \"keystone-bootstrap-smh2q\" (UID: \"1574918a-8865-4cd3-89c5-a2e9855c8e23\") " pod="openstack/keystone-bootstrap-smh2q"
Mar 13 13:10:56.672054 master-0 kubenswrapper[28149]: I0313 13:10:56.668961 28149 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-e6fbd-default-internal-api-0"]
Mar 13 13:10:56.672571 master-0 kubenswrapper[28149]: I0313 13:10:56.672434 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1574918a-8865-4cd3-89c5-a2e9855c8e23-config-data\") pod \"keystone-bootstrap-smh2q\" (UID: \"1574918a-8865-4cd3-89c5-a2e9855c8e23\") " pod="openstack/keystone-bootstrap-smh2q"
Mar 13 13:10:56.676969 master-0 kubenswrapper[28149]: I0313 13:10:56.675935 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1574918a-8865-4cd3-89c5-a2e9855c8e23-scripts\") pod \"keystone-bootstrap-smh2q\" (UID: \"1574918a-8865-4cd3-89c5-a2e9855c8e23\") " pod="openstack/keystone-bootstrap-smh2q"
Mar 13 13:10:56.684550 master-0 kubenswrapper[28149]: I0313 13:10:56.681189 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1574918a-8865-4cd3-89c5-a2e9855c8e23-combined-ca-bundle\") pod \"keystone-bootstrap-smh2q\" (UID: \"1574918a-8865-4cd3-89c5-a2e9855c8e23\") " pod="openstack/keystone-bootstrap-smh2q"
Mar 13 13:10:56.689041 master-0 kubenswrapper[28149]: I0313 13:10:56.688985 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/1574918a-8865-4cd3-89c5-a2e9855c8e23-fernet-keys\") pod \"keystone-bootstrap-smh2q\" (UID: \"1574918a-8865-4cd3-89c5-a2e9855c8e23\") " pod="openstack/keystone-bootstrap-smh2q"
Mar 13 13:10:56.702639 master-0 kubenswrapper[28149]: I0313 13:10:56.697921 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mmrt4\" (UniqueName: \"kubernetes.io/projected/1574918a-8865-4cd3-89c5-a2e9855c8e23-kube-api-access-mmrt4\") pod \"keystone-bootstrap-smh2q\" (UID: \"1574918a-8865-4cd3-89c5-a2e9855c8e23\") " pod="openstack/keystone-bootstrap-smh2q"
Mar 13 13:10:56.784183 master-0 kubenswrapper[28149]: I0313 13:10:56.778116 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/1574918a-8865-4cd3-89c5-a2e9855c8e23-credential-keys\") pod \"keystone-bootstrap-smh2q\" (UID: \"1574918a-8865-4cd3-89c5-a2e9855c8e23\") " pod="openstack/keystone-bootstrap-smh2q"
Mar 13 13:10:56.803121 master-0 kubenswrapper[28149]: I0313 13:10:56.803055 28149 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="736c6577-449b-4b8d-8bfa-3dfbcc259e94" path="/var/lib/kubelet/pods/736c6577-449b-4b8d-8bfa-3dfbcc259e94/volumes"
Mar 13 13:10:56.871283 master-0 kubenswrapper[28149]: I0313 13:10:56.871214 28149 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-smh2q"
Mar 13 13:10:56.975677 master-0 kubenswrapper[28149]: I0313 13:10:56.975597 28149 generic.go:334] "Generic (PLEG): container finished" podID="51d60b57-3f2c-4aeb-83c9-e688bb0bb3c6" containerID="ca1e8d8a73405caa71e6289f4a9087d7c82d522efa8916691873a3333d2d6dde" exitCode=0
Mar 13 13:10:56.975677 master-0 kubenswrapper[28149]: I0313 13:10:56.975668 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b8649b7f9-xtxz5" event={"ID":"51d60b57-3f2c-4aeb-83c9-e688bb0bb3c6","Type":"ContainerDied","Data":"ca1e8d8a73405caa71e6289f4a9087d7c82d522efa8916691873a3333d2d6dde"}
Mar 13 13:10:56.978302 master-0 kubenswrapper[28149]: I0313 13:10:56.978271 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-e6fbd-default-internal-api-0" event={"ID":"862ca5ea-1489-4696-a698-ab9992caaa78","Type":"ContainerStarted","Data":"edd9974038af20f79c66d1bb70b2e118d1252e39d71e3d1396ace94aced3d2ac"}
Mar 13 13:10:56.980817 master-0 kubenswrapper[28149]: I0313 13:10:56.980778 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-e6fbd-default-external-api-0" event={"ID":"cfc459b1-604b-46d5-be0f-792e3a66a7b1","Type":"ContainerStarted","Data":"4afcd4d7e11775e451eba815b9d0a93141819eb87818991b5f59410f42e00c3f"}
Mar 13 13:10:57.490609 master-0 kubenswrapper[28149]: I0313 13:10:57.489336 28149 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5b8649b7f9-xtxz5"
Mar 13 13:10:57.625259 master-0 kubenswrapper[28149]: I0313 13:10:57.623483 28149 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-smh2q"]
Mar 13 13:10:57.673327 master-0 kubenswrapper[28149]: I0313 13:10:57.673051 28149 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/51d60b57-3f2c-4aeb-83c9-e688bb0bb3c6-dns-svc\") pod \"51d60b57-3f2c-4aeb-83c9-e688bb0bb3c6\" (UID: \"51d60b57-3f2c-4aeb-83c9-e688bb0bb3c6\") "
Mar 13 13:10:57.673596 master-0 kubenswrapper[28149]: I0313 13:10:57.673368 28149 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/51d60b57-3f2c-4aeb-83c9-e688bb0bb3c6-ovsdbserver-sb\") pod \"51d60b57-3f2c-4aeb-83c9-e688bb0bb3c6\" (UID: \"51d60b57-3f2c-4aeb-83c9-e688bb0bb3c6\") "
Mar 13 13:10:57.673596 master-0 kubenswrapper[28149]: I0313 13:10:57.673429 28149 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nv77j\" (UniqueName: \"kubernetes.io/projected/51d60b57-3f2c-4aeb-83c9-e688bb0bb3c6-kube-api-access-nv77j\") pod \"51d60b57-3f2c-4aeb-83c9-e688bb0bb3c6\" (UID: \"51d60b57-3f2c-4aeb-83c9-e688bb0bb3c6\") "
Mar 13 13:10:57.673596 master-0 kubenswrapper[28149]: I0313 13:10:57.673480 28149 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/51d60b57-3f2c-4aeb-83c9-e688bb0bb3c6-config\") pod \"51d60b57-3f2c-4aeb-83c9-e688bb0bb3c6\" (UID: \"51d60b57-3f2c-4aeb-83c9-e688bb0bb3c6\") "
Mar 13 13:10:57.673596 master-0 kubenswrapper[28149]: I0313 13:10:57.673505 28149 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/51d60b57-3f2c-4aeb-83c9-e688bb0bb3c6-ovsdbserver-nb\") pod \"51d60b57-3f2c-4aeb-83c9-e688bb0bb3c6\" (UID: \"51d60b57-3f2c-4aeb-83c9-e688bb0bb3c6\") "
Mar 13 13:10:57.683184 master-0 kubenswrapper[28149]: I0313 13:10:57.679639 28149 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/51d60b57-3f2c-4aeb-83c9-e688bb0bb3c6-kube-api-access-nv77j" (OuterVolumeSpecName: "kube-api-access-nv77j") pod "51d60b57-3f2c-4aeb-83c9-e688bb0bb3c6" (UID: "51d60b57-3f2c-4aeb-83c9-e688bb0bb3c6"). InnerVolumeSpecName "kube-api-access-nv77j". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 13 13:10:57.744742 master-0 kubenswrapper[28149]: I0313 13:10:57.744622 28149 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/51d60b57-3f2c-4aeb-83c9-e688bb0bb3c6-config" (OuterVolumeSpecName: "config") pod "51d60b57-3f2c-4aeb-83c9-e688bb0bb3c6" (UID: "51d60b57-3f2c-4aeb-83c9-e688bb0bb3c6"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 13 13:10:57.763542 master-0 kubenswrapper[28149]: I0313 13:10:57.763458 28149 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/51d60b57-3f2c-4aeb-83c9-e688bb0bb3c6-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "51d60b57-3f2c-4aeb-83c9-e688bb0bb3c6" (UID: "51d60b57-3f2c-4aeb-83c9-e688bb0bb3c6"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 13 13:10:57.766918 master-0 kubenswrapper[28149]: I0313 13:10:57.764572 28149 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/51d60b57-3f2c-4aeb-83c9-e688bb0bb3c6-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "51d60b57-3f2c-4aeb-83c9-e688bb0bb3c6" (UID: "51d60b57-3f2c-4aeb-83c9-e688bb0bb3c6"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 13 13:10:57.776473 master-0 kubenswrapper[28149]: I0313 13:10:57.776409 28149 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/51d60b57-3f2c-4aeb-83c9-e688bb0bb3c6-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "51d60b57-3f2c-4aeb-83c9-e688bb0bb3c6" (UID: "51d60b57-3f2c-4aeb-83c9-e688bb0bb3c6"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 13 13:10:57.786529 master-0 kubenswrapper[28149]: I0313 13:10:57.786332 28149 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/51d60b57-3f2c-4aeb-83c9-e688bb0bb3c6-dns-svc\") on node \"master-0\" DevicePath \"\""
Mar 13 13:10:57.786529 master-0 kubenswrapper[28149]: I0313 13:10:57.786431 28149 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/51d60b57-3f2c-4aeb-83c9-e688bb0bb3c6-ovsdbserver-sb\") on node \"master-0\" DevicePath \"\""
Mar 13 13:10:57.786529 master-0 kubenswrapper[28149]: I0313 13:10:57.786454 28149 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nv77j\" (UniqueName: \"kubernetes.io/projected/51d60b57-3f2c-4aeb-83c9-e688bb0bb3c6-kube-api-access-nv77j\") on node \"master-0\" DevicePath \"\""
Mar 13 13:10:57.786529 master-0 kubenswrapper[28149]: I0313 13:10:57.786470 28149 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/51d60b57-3f2c-4aeb-83c9-e688bb0bb3c6-config\") on node \"master-0\" DevicePath \"\""
Mar 13 13:10:57.786529 master-0 kubenswrapper[28149]: I0313 13:10:57.786487 28149 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/51d60b57-3f2c-4aeb-83c9-e688bb0bb3c6-ovsdbserver-nb\") on node \"master-0\" DevicePath \"\""
Mar 13 13:10:58.008032 master-0 kubenswrapper[28149]: I0313 13:10:58.007901 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b8649b7f9-xtxz5" event={"ID":"51d60b57-3f2c-4aeb-83c9-e688bb0bb3c6","Type":"ContainerDied","Data":"3e9812eaeeff642a692615495d6058bf4bbc4b86eaad967aa5872454af7101ad"}
Mar 13 13:10:58.008391 master-0 kubenswrapper[28149]: I0313 13:10:58.008339 28149 scope.go:117] "RemoveContainer" containerID="ca1e8d8a73405caa71e6289f4a9087d7c82d522efa8916691873a3333d2d6dde"
Mar 13 13:10:58.008734 master-0 kubenswrapper[28149]: I0313 13:10:58.008684 28149 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5b8649b7f9-xtxz5"
Mar 13 13:10:58.053376 master-0 kubenswrapper[28149]: I0313 13:10:58.053321 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-e6fbd-default-internal-api-0" event={"ID":"862ca5ea-1489-4696-a698-ab9992caaa78","Type":"ContainerStarted","Data":"bdab06293a81b637d4a1e71d4f52bc5a7f2b2384aa7fe0901ab01725e9ebc5a5"}
Mar 13 13:10:58.053650 master-0 kubenswrapper[28149]: I0313 13:10:58.053444 28149 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-e6fbd-default-internal-api-0" podUID="862ca5ea-1489-4696-a698-ab9992caaa78" containerName="glance-log" containerID="cri-o://edd9974038af20f79c66d1bb70b2e118d1252e39d71e3d1396ace94aced3d2ac" gracePeriod=30
Mar 13 13:10:58.053650 master-0 kubenswrapper[28149]: I0313 13:10:58.053573 28149 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-e6fbd-default-internal-api-0" podUID="862ca5ea-1489-4696-a698-ab9992caaa78" containerName="glance-httpd" containerID="cri-o://bdab06293a81b637d4a1e71d4f52bc5a7f2b2384aa7fe0901ab01725e9ebc5a5" gracePeriod=30
Mar 13 13:10:58.058472 master-0 kubenswrapper[28149]: I0313 13:10:58.057517 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-smh2q"
event={"ID":"1574918a-8865-4cd3-89c5-a2e9855c8e23","Type":"ContainerStarted","Data":"4b47fa2b0cb18f2fe096c7ee9a617f03c1eec5f1d6a8e2850bf50b91498a4602"} Mar 13 13:10:58.058472 master-0 kubenswrapper[28149]: I0313 13:10:58.057557 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-smh2q" event={"ID":"1574918a-8865-4cd3-89c5-a2e9855c8e23","Type":"ContainerStarted","Data":"cafe5c7e58c3c7d74833e732e986d755d97f187d8350a30b4b6f25aa95cde146"} Mar 13 13:10:58.064663 master-0 kubenswrapper[28149]: I0313 13:10:58.064299 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-e6fbd-default-external-api-0" event={"ID":"cfc459b1-604b-46d5-be0f-792e3a66a7b1","Type":"ContainerStarted","Data":"ae23445f6ac8cb92903d34a401e1012fde32e867514ef39f42e7ddcc892a0a9f"} Mar 13 13:10:58.064663 master-0 kubenswrapper[28149]: I0313 13:10:58.064497 28149 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-e6fbd-default-external-api-0" podUID="cfc459b1-604b-46d5-be0f-792e3a66a7b1" containerName="glance-log" containerID="cri-o://4afcd4d7e11775e451eba815b9d0a93141819eb87818991b5f59410f42e00c3f" gracePeriod=30 Mar 13 13:10:58.064663 master-0 kubenswrapper[28149]: I0313 13:10:58.064622 28149 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-e6fbd-default-external-api-0" podUID="cfc459b1-604b-46d5-be0f-792e3a66a7b1" containerName="glance-httpd" containerID="cri-o://ae23445f6ac8cb92903d34a401e1012fde32e867514ef39f42e7ddcc892a0a9f" gracePeriod=30 Mar 13 13:10:58.119533 master-0 kubenswrapper[28149]: I0313 13:10:58.119475 28149 scope.go:117] "RemoveContainer" containerID="a6ab5ac5fdb36195691e3ca7c0ab96dc11ffd0f231c79551a11396dd516e0620" Mar 13 13:10:58.204965 master-0 kubenswrapper[28149]: I0313 13:10:58.204902 28149 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5b8649b7f9-xtxz5"] Mar 13 13:10:58.303158 master-0 kubenswrapper[28149]: 
I0313 13:10:58.303037 28149 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5b8649b7f9-xtxz5"] Mar 13 13:10:58.351248 master-0 kubenswrapper[28149]: I0313 13:10:58.351000 28149 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-e6fbd-default-internal-api-0" podStartSLOduration=7.350932365 podStartE2EDuration="7.350932365s" podCreationTimestamp="2026-03-13 13:10:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 13:10:58.336341782 +0000 UTC m=+1031.989806941" watchObservedRunningTime="2026-03-13 13:10:58.350932365 +0000 UTC m=+1032.004397524" Mar 13 13:10:58.360530 master-0 kubenswrapper[28149]: E0313 13:10:58.360473 28149 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podcfc459b1_604b_46d5_be0f_792e3a66a7b1.slice/crio-4afcd4d7e11775e451eba815b9d0a93141819eb87818991b5f59410f42e00c3f.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod862ca5ea_1489_4696_a698_ab9992caaa78.slice/crio-edd9974038af20f79c66d1bb70b2e118d1252e39d71e3d1396ace94aced3d2ac.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod862ca5ea_1489_4696_a698_ab9992caaa78.slice/crio-conmon-edd9974038af20f79c66d1bb70b2e118d1252e39d71e3d1396ace94aced3d2ac.scope\": RecentStats: unable to find data in memory cache]" Mar 13 13:10:58.400074 master-0 kubenswrapper[28149]: I0313 13:10:58.399975 28149 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-e6fbd-default-external-api-0" podStartSLOduration=7.399949135 podStartE2EDuration="7.399949135s" podCreationTimestamp="2026-03-13 13:10:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 
+0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 13:10:58.366663898 +0000 UTC m=+1032.020129057" watchObservedRunningTime="2026-03-13 13:10:58.399949135 +0000 UTC m=+1032.053414304" Mar 13 13:10:58.446371 master-0 kubenswrapper[28149]: I0313 13:10:58.446241 28149 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-smh2q" podStartSLOduration=2.446200871 podStartE2EDuration="2.446200871s" podCreationTimestamp="2026-03-13 13:10:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 13:10:58.393474601 +0000 UTC m=+1032.046939760" watchObservedRunningTime="2026-03-13 13:10:58.446200871 +0000 UTC m=+1032.099666030" Mar 13 13:10:58.718822 master-0 kubenswrapper[28149]: I0313 13:10:58.718660 28149 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="51d60b57-3f2c-4aeb-83c9-e688bb0bb3c6" path="/var/lib/kubelet/pods/51d60b57-3f2c-4aeb-83c9-e688bb0bb3c6/volumes" Mar 13 13:10:58.887021 master-0 kubenswrapper[28149]: I0313 13:10:58.886969 28149 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-e6fbd-default-internal-api-0" Mar 13 13:10:59.053308 master-0 kubenswrapper[28149]: I0313 13:10:59.047844 28149 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/862ca5ea-1489-4696-a698-ab9992caaa78-scripts\") pod \"862ca5ea-1489-4696-a698-ab9992caaa78\" (UID: \"862ca5ea-1489-4696-a698-ab9992caaa78\") " Mar 13 13:10:59.053308 master-0 kubenswrapper[28149]: I0313 13:10:59.048038 28149 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/862ca5ea-1489-4696-a698-ab9992caaa78-logs\") pod \"862ca5ea-1489-4696-a698-ab9992caaa78\" (UID: \"862ca5ea-1489-4696-a698-ab9992caaa78\") " Mar 13 13:10:59.053308 master-0 kubenswrapper[28149]: I0313 13:10:59.048186 28149 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/862ca5ea-1489-4696-a698-ab9992caaa78-combined-ca-bundle\") pod \"862ca5ea-1489-4696-a698-ab9992caaa78\" (UID: \"862ca5ea-1489-4696-a698-ab9992caaa78\") " Mar 13 13:10:59.053308 master-0 kubenswrapper[28149]: I0313 13:10:59.048475 28149 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/csi/topolvm.io^eb822992-ec5e-49c3-b53d-b596568ce401\") pod \"862ca5ea-1489-4696-a698-ab9992caaa78\" (UID: \"862ca5ea-1489-4696-a698-ab9992caaa78\") " Mar 13 13:10:59.053308 master-0 kubenswrapper[28149]: I0313 13:10:59.048583 28149 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s4zxh\" (UniqueName: \"kubernetes.io/projected/862ca5ea-1489-4696-a698-ab9992caaa78-kube-api-access-s4zxh\") pod \"862ca5ea-1489-4696-a698-ab9992caaa78\" (UID: \"862ca5ea-1489-4696-a698-ab9992caaa78\") " Mar 13 13:10:59.053308 master-0 kubenswrapper[28149]: I0313 13:10:59.048686 28149 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/862ca5ea-1489-4696-a698-ab9992caaa78-config-data\") pod \"862ca5ea-1489-4696-a698-ab9992caaa78\" (UID: \"862ca5ea-1489-4696-a698-ab9992caaa78\") " Mar 13 13:10:59.053308 master-0 kubenswrapper[28149]: I0313 13:10:59.048818 28149 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/862ca5ea-1489-4696-a698-ab9992caaa78-httpd-run\") pod \"862ca5ea-1489-4696-a698-ab9992caaa78\" (UID: \"862ca5ea-1489-4696-a698-ab9992caaa78\") " Mar 13 13:10:59.053308 master-0 kubenswrapper[28149]: I0313 13:10:59.052036 28149 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/862ca5ea-1489-4696-a698-ab9992caaa78-logs" (OuterVolumeSpecName: "logs") pod "862ca5ea-1489-4696-a698-ab9992caaa78" (UID: "862ca5ea-1489-4696-a698-ab9992caaa78"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 13 13:10:59.056186 master-0 kubenswrapper[28149]: I0313 13:10:59.053966 28149 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/862ca5ea-1489-4696-a698-ab9992caaa78-scripts" (OuterVolumeSpecName: "scripts") pod "862ca5ea-1489-4696-a698-ab9992caaa78" (UID: "862ca5ea-1489-4696-a698-ab9992caaa78"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 13:10:59.063444 master-0 kubenswrapper[28149]: I0313 13:10:59.063358 28149 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/862ca5ea-1489-4696-a698-ab9992caaa78-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "862ca5ea-1489-4696-a698-ab9992caaa78" (UID: "862ca5ea-1489-4696-a698-ab9992caaa78"). InnerVolumeSpecName "httpd-run". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 13 13:10:59.080418 master-0 kubenswrapper[28149]: I0313 13:10:59.080354 28149 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/862ca5ea-1489-4696-a698-ab9992caaa78-kube-api-access-s4zxh" (OuterVolumeSpecName: "kube-api-access-s4zxh") pod "862ca5ea-1489-4696-a698-ab9992caaa78" (UID: "862ca5ea-1489-4696-a698-ab9992caaa78"). InnerVolumeSpecName "kube-api-access-s4zxh". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 13:10:59.084308 master-0 kubenswrapper[28149]: I0313 13:10:59.084206 28149 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/topolvm.io^eb822992-ec5e-49c3-b53d-b596568ce401" (OuterVolumeSpecName: "glance") pod "862ca5ea-1489-4696-a698-ab9992caaa78" (UID: "862ca5ea-1489-4696-a698-ab9992caaa78"). InnerVolumeSpecName "pvc-fbfb8b97-dcb8-43d8-a7ca-10f6eee24ec3". PluginName "kubernetes.io/csi", VolumeGidValue "" Mar 13 13:10:59.098425 master-0 kubenswrapper[28149]: I0313 13:10:59.098363 28149 generic.go:334] "Generic (PLEG): container finished" podID="862ca5ea-1489-4696-a698-ab9992caaa78" containerID="bdab06293a81b637d4a1e71d4f52bc5a7f2b2384aa7fe0901ab01725e9ebc5a5" exitCode=0 Mar 13 13:10:59.098425 master-0 kubenswrapper[28149]: I0313 13:10:59.098411 28149 generic.go:334] "Generic (PLEG): container finished" podID="862ca5ea-1489-4696-a698-ab9992caaa78" containerID="edd9974038af20f79c66d1bb70b2e118d1252e39d71e3d1396ace94aced3d2ac" exitCode=143 Mar 13 13:10:59.098933 master-0 kubenswrapper[28149]: I0313 13:10:59.098490 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-e6fbd-default-internal-api-0" event={"ID":"862ca5ea-1489-4696-a698-ab9992caaa78","Type":"ContainerDied","Data":"bdab06293a81b637d4a1e71d4f52bc5a7f2b2384aa7fe0901ab01725e9ebc5a5"} Mar 13 13:10:59.098933 master-0 kubenswrapper[28149]: I0313 13:10:59.098644 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/glance-e6fbd-default-internal-api-0" event={"ID":"862ca5ea-1489-4696-a698-ab9992caaa78","Type":"ContainerDied","Data":"edd9974038af20f79c66d1bb70b2e118d1252e39d71e3d1396ace94aced3d2ac"} Mar 13 13:10:59.098933 master-0 kubenswrapper[28149]: I0313 13:10:59.098734 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-e6fbd-default-internal-api-0" event={"ID":"862ca5ea-1489-4696-a698-ab9992caaa78","Type":"ContainerDied","Data":"f64b0f5723d8e88590ac7fa396a5fbe6352548af07c8a54b67fe94e39f3bccd8"} Mar 13 13:10:59.098933 master-0 kubenswrapper[28149]: I0313 13:10:59.098792 28149 scope.go:117] "RemoveContainer" containerID="bdab06293a81b637d4a1e71d4f52bc5a7f2b2384aa7fe0901ab01725e9ebc5a5" Mar 13 13:10:59.099748 master-0 kubenswrapper[28149]: I0313 13:10:59.099284 28149 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-e6fbd-default-internal-api-0" Mar 13 13:10:59.105186 master-0 kubenswrapper[28149]: I0313 13:10:59.105111 28149 generic.go:334] "Generic (PLEG): container finished" podID="cfc459b1-604b-46d5-be0f-792e3a66a7b1" containerID="ae23445f6ac8cb92903d34a401e1012fde32e867514ef39f42e7ddcc892a0a9f" exitCode=0 Mar 13 13:10:59.105186 master-0 kubenswrapper[28149]: I0313 13:10:59.105176 28149 generic.go:334] "Generic (PLEG): container finished" podID="cfc459b1-604b-46d5-be0f-792e3a66a7b1" containerID="4afcd4d7e11775e451eba815b9d0a93141819eb87818991b5f59410f42e00c3f" exitCode=143 Mar 13 13:10:59.105989 master-0 kubenswrapper[28149]: I0313 13:10:59.105378 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-e6fbd-default-external-api-0" event={"ID":"cfc459b1-604b-46d5-be0f-792e3a66a7b1","Type":"ContainerDied","Data":"ae23445f6ac8cb92903d34a401e1012fde32e867514ef39f42e7ddcc892a0a9f"} Mar 13 13:10:59.105989 master-0 kubenswrapper[28149]: I0313 13:10:59.105870 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-e6fbd-default-external-api-0" 
event={"ID":"cfc459b1-604b-46d5-be0f-792e3a66a7b1","Type":"ContainerDied","Data":"4afcd4d7e11775e451eba815b9d0a93141819eb87818991b5f59410f42e00c3f"} Mar 13 13:10:59.105989 master-0 kubenswrapper[28149]: I0313 13:10:59.105921 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-e6fbd-default-external-api-0" event={"ID":"cfc459b1-604b-46d5-be0f-792e3a66a7b1","Type":"ContainerDied","Data":"51fb7735286947c34c168eccf9c34890554a866d7219f2f2b73a669d95246e6b"} Mar 13 13:10:59.105989 master-0 kubenswrapper[28149]: I0313 13:10:59.105935 28149 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="51fb7735286947c34c168eccf9c34890554a866d7219f2f2b73a669d95246e6b" Mar 13 13:10:59.128892 master-0 kubenswrapper[28149]: I0313 13:10:59.128749 28149 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-e6fbd-default-external-api-0" Mar 13 13:10:59.137540 master-0 kubenswrapper[28149]: I0313 13:10:59.134560 28149 scope.go:117] "RemoveContainer" containerID="edd9974038af20f79c66d1bb70b2e118d1252e39d71e3d1396ace94aced3d2ac" Mar 13 13:10:59.160196 master-0 kubenswrapper[28149]: I0313 13:10:59.153095 28149 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/862ca5ea-1489-4696-a698-ab9992caaa78-httpd-run\") on node \"master-0\" DevicePath \"\"" Mar 13 13:10:59.160196 master-0 kubenswrapper[28149]: I0313 13:10:59.153125 28149 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/862ca5ea-1489-4696-a698-ab9992caaa78-scripts\") on node \"master-0\" DevicePath \"\"" Mar 13 13:10:59.160196 master-0 kubenswrapper[28149]: I0313 13:10:59.153153 28149 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/862ca5ea-1489-4696-a698-ab9992caaa78-logs\") on node \"master-0\" DevicePath \"\"" Mar 13 13:10:59.160196 master-0 kubenswrapper[28149]: I0313 
13:10:59.153178 28149 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-fbfb8b97-dcb8-43d8-a7ca-10f6eee24ec3\" (UniqueName: \"kubernetes.io/csi/topolvm.io^eb822992-ec5e-49c3-b53d-b596568ce401\") on node \"master-0\" " Mar 13 13:10:59.160196 master-0 kubenswrapper[28149]: I0313 13:10:59.153191 28149 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s4zxh\" (UniqueName: \"kubernetes.io/projected/862ca5ea-1489-4696-a698-ab9992caaa78-kube-api-access-s4zxh\") on node \"master-0\" DevicePath \"\"" Mar 13 13:10:59.173898 master-0 kubenswrapper[28149]: I0313 13:10:59.173572 28149 scope.go:117] "RemoveContainer" containerID="bdab06293a81b637d4a1e71d4f52bc5a7f2b2384aa7fe0901ab01725e9ebc5a5" Mar 13 13:10:59.173898 master-0 kubenswrapper[28149]: I0313 13:10:59.173809 28149 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/862ca5ea-1489-4696-a698-ab9992caaa78-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "862ca5ea-1489-4696-a698-ab9992caaa78" (UID: "862ca5ea-1489-4696-a698-ab9992caaa78"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 13:10:59.179270 master-0 kubenswrapper[28149]: E0313 13:10:59.179204 28149 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bdab06293a81b637d4a1e71d4f52bc5a7f2b2384aa7fe0901ab01725e9ebc5a5\": container with ID starting with bdab06293a81b637d4a1e71d4f52bc5a7f2b2384aa7fe0901ab01725e9ebc5a5 not found: ID does not exist" containerID="bdab06293a81b637d4a1e71d4f52bc5a7f2b2384aa7fe0901ab01725e9ebc5a5" Mar 13 13:10:59.179477 master-0 kubenswrapper[28149]: I0313 13:10:59.179274 28149 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bdab06293a81b637d4a1e71d4f52bc5a7f2b2384aa7fe0901ab01725e9ebc5a5"} err="failed to get container status \"bdab06293a81b637d4a1e71d4f52bc5a7f2b2384aa7fe0901ab01725e9ebc5a5\": rpc error: code = NotFound desc = could not find container \"bdab06293a81b637d4a1e71d4f52bc5a7f2b2384aa7fe0901ab01725e9ebc5a5\": container with ID starting with bdab06293a81b637d4a1e71d4f52bc5a7f2b2384aa7fe0901ab01725e9ebc5a5 not found: ID does not exist" Mar 13 13:10:59.179477 master-0 kubenswrapper[28149]: I0313 13:10:59.179320 28149 scope.go:117] "RemoveContainer" containerID="edd9974038af20f79c66d1bb70b2e118d1252e39d71e3d1396ace94aced3d2ac" Mar 13 13:10:59.180634 master-0 kubenswrapper[28149]: E0313 13:10:59.180577 28149 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"edd9974038af20f79c66d1bb70b2e118d1252e39d71e3d1396ace94aced3d2ac\": container with ID starting with edd9974038af20f79c66d1bb70b2e118d1252e39d71e3d1396ace94aced3d2ac not found: ID does not exist" containerID="edd9974038af20f79c66d1bb70b2e118d1252e39d71e3d1396ace94aced3d2ac" Mar 13 13:10:59.180634 master-0 kubenswrapper[28149]: I0313 13:10:59.180615 28149 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"edd9974038af20f79c66d1bb70b2e118d1252e39d71e3d1396ace94aced3d2ac"} err="failed to get container status \"edd9974038af20f79c66d1bb70b2e118d1252e39d71e3d1396ace94aced3d2ac\": rpc error: code = NotFound desc = could not find container \"edd9974038af20f79c66d1bb70b2e118d1252e39d71e3d1396ace94aced3d2ac\": container with ID starting with edd9974038af20f79c66d1bb70b2e118d1252e39d71e3d1396ace94aced3d2ac not found: ID does not exist" Mar 13 13:10:59.180776 master-0 kubenswrapper[28149]: I0313 13:10:59.180639 28149 scope.go:117] "RemoveContainer" containerID="bdab06293a81b637d4a1e71d4f52bc5a7f2b2384aa7fe0901ab01725e9ebc5a5" Mar 13 13:10:59.181453 master-0 kubenswrapper[28149]: I0313 13:10:59.181420 28149 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bdab06293a81b637d4a1e71d4f52bc5a7f2b2384aa7fe0901ab01725e9ebc5a5"} err="failed to get container status \"bdab06293a81b637d4a1e71d4f52bc5a7f2b2384aa7fe0901ab01725e9ebc5a5\": rpc error: code = NotFound desc = could not find container \"bdab06293a81b637d4a1e71d4f52bc5a7f2b2384aa7fe0901ab01725e9ebc5a5\": container with ID starting with bdab06293a81b637d4a1e71d4f52bc5a7f2b2384aa7fe0901ab01725e9ebc5a5 not found: ID does not exist" Mar 13 13:10:59.181453 master-0 kubenswrapper[28149]: I0313 13:10:59.181450 28149 scope.go:117] "RemoveContainer" containerID="edd9974038af20f79c66d1bb70b2e118d1252e39d71e3d1396ace94aced3d2ac" Mar 13 13:10:59.181925 master-0 kubenswrapper[28149]: I0313 13:10:59.181833 28149 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"edd9974038af20f79c66d1bb70b2e118d1252e39d71e3d1396ace94aced3d2ac"} err="failed to get container status \"edd9974038af20f79c66d1bb70b2e118d1252e39d71e3d1396ace94aced3d2ac\": rpc error: code = NotFound desc = could not find container \"edd9974038af20f79c66d1bb70b2e118d1252e39d71e3d1396ace94aced3d2ac\": container with ID starting with 
edd9974038af20f79c66d1bb70b2e118d1252e39d71e3d1396ace94aced3d2ac not found: ID does not exist" Mar 13 13:10:59.194178 master-0 kubenswrapper[28149]: I0313 13:10:59.194115 28149 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... Mar 13 13:10:59.194412 master-0 kubenswrapper[28149]: I0313 13:10:59.194385 28149 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-fbfb8b97-dcb8-43d8-a7ca-10f6eee24ec3" (UniqueName: "kubernetes.io/csi/topolvm.io^eb822992-ec5e-49c3-b53d-b596568ce401") on node "master-0" Mar 13 13:10:59.207627 master-0 kubenswrapper[28149]: I0313 13:10:59.207472 28149 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/862ca5ea-1489-4696-a698-ab9992caaa78-config-data" (OuterVolumeSpecName: "config-data") pod "862ca5ea-1489-4696-a698-ab9992caaa78" (UID: "862ca5ea-1489-4696-a698-ab9992caaa78"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 13:10:59.255602 master-0 kubenswrapper[28149]: I0313 13:10:59.254373 28149 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cfc459b1-604b-46d5-be0f-792e3a66a7b1-config-data\") pod \"cfc459b1-604b-46d5-be0f-792e3a66a7b1\" (UID: \"cfc459b1-604b-46d5-be0f-792e3a66a7b1\") " Mar 13 13:10:59.255602 master-0 kubenswrapper[28149]: I0313 13:10:59.254442 28149 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cfc459b1-604b-46d5-be0f-792e3a66a7b1-combined-ca-bundle\") pod \"cfc459b1-604b-46d5-be0f-792e3a66a7b1\" (UID: \"cfc459b1-604b-46d5-be0f-792e3a66a7b1\") " Mar 13 13:10:59.255602 master-0 kubenswrapper[28149]: I0313 13:10:59.254487 28149 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: 
\"kubernetes.io/empty-dir/cfc459b1-604b-46d5-be0f-792e3a66a7b1-httpd-run\") pod \"cfc459b1-604b-46d5-be0f-792e3a66a7b1\" (UID: \"cfc459b1-604b-46d5-be0f-792e3a66a7b1\") " Mar 13 13:10:59.255602 master-0 kubenswrapper[28149]: I0313 13:10:59.254718 28149 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/csi/topolvm.io^46c3102e-7a0b-4e07-9a24-444142905798\") pod \"cfc459b1-604b-46d5-be0f-792e3a66a7b1\" (UID: \"cfc459b1-604b-46d5-be0f-792e3a66a7b1\") " Mar 13 13:10:59.255602 master-0 kubenswrapper[28149]: I0313 13:10:59.254834 28149 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cfc459b1-604b-46d5-be0f-792e3a66a7b1-logs\") pod \"cfc459b1-604b-46d5-be0f-792e3a66a7b1\" (UID: \"cfc459b1-604b-46d5-be0f-792e3a66a7b1\") " Mar 13 13:10:59.255602 master-0 kubenswrapper[28149]: I0313 13:10:59.254872 28149 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n88hf\" (UniqueName: \"kubernetes.io/projected/cfc459b1-604b-46d5-be0f-792e3a66a7b1-kube-api-access-n88hf\") pod \"cfc459b1-604b-46d5-be0f-792e3a66a7b1\" (UID: \"cfc459b1-604b-46d5-be0f-792e3a66a7b1\") " Mar 13 13:10:59.255602 master-0 kubenswrapper[28149]: I0313 13:10:59.254992 28149 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cfc459b1-604b-46d5-be0f-792e3a66a7b1-scripts\") pod \"cfc459b1-604b-46d5-be0f-792e3a66a7b1\" (UID: \"cfc459b1-604b-46d5-be0f-792e3a66a7b1\") " Mar 13 13:10:59.255602 master-0 kubenswrapper[28149]: I0313 13:10:59.255250 28149 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cfc459b1-604b-46d5-be0f-792e3a66a7b1-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "cfc459b1-604b-46d5-be0f-792e3a66a7b1" (UID: "cfc459b1-604b-46d5-be0f-792e3a66a7b1"). InnerVolumeSpecName "httpd-run". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 13 13:10:59.255602 master-0 kubenswrapper[28149]: I0313 13:10:59.255511 28149 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cfc459b1-604b-46d5-be0f-792e3a66a7b1-logs" (OuterVolumeSpecName: "logs") pod "cfc459b1-604b-46d5-be0f-792e3a66a7b1" (UID: "cfc459b1-604b-46d5-be0f-792e3a66a7b1"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 13 13:10:59.257594 master-0 kubenswrapper[28149]: I0313 13:10:59.257451 28149 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cfc459b1-604b-46d5-be0f-792e3a66a7b1-logs\") on node \"master-0\" DevicePath \"\"" Mar 13 13:10:59.257594 master-0 kubenswrapper[28149]: I0313 13:10:59.257482 28149 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/862ca5ea-1489-4696-a698-ab9992caaa78-config-data\") on node \"master-0\" DevicePath \"\"" Mar 13 13:10:59.257594 master-0 kubenswrapper[28149]: I0313 13:10:59.257499 28149 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/862ca5ea-1489-4696-a698-ab9992caaa78-combined-ca-bundle\") on node \"master-0\" DevicePath \"\"" Mar 13 13:10:59.257594 master-0 kubenswrapper[28149]: I0313 13:10:59.257512 28149 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/cfc459b1-604b-46d5-be0f-792e3a66a7b1-httpd-run\") on node \"master-0\" DevicePath \"\"" Mar 13 13:10:59.257594 master-0 kubenswrapper[28149]: I0313 13:10:59.257527 28149 reconciler_common.go:293] "Volume detached for volume \"pvc-fbfb8b97-dcb8-43d8-a7ca-10f6eee24ec3\" (UniqueName: \"kubernetes.io/csi/topolvm.io^eb822992-ec5e-49c3-b53d-b596568ce401\") on node \"master-0\" DevicePath \"\"" Mar 13 13:10:59.260431 master-0 kubenswrapper[28149]: I0313 13:10:59.260170 28149 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cfc459b1-604b-46d5-be0f-792e3a66a7b1-scripts" (OuterVolumeSpecName: "scripts") pod "cfc459b1-604b-46d5-be0f-792e3a66a7b1" (UID: "cfc459b1-604b-46d5-be0f-792e3a66a7b1"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 13:10:59.262341 master-0 kubenswrapper[28149]: I0313 13:10:59.261877 28149 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cfc459b1-604b-46d5-be0f-792e3a66a7b1-kube-api-access-n88hf" (OuterVolumeSpecName: "kube-api-access-n88hf") pod "cfc459b1-604b-46d5-be0f-792e3a66a7b1" (UID: "cfc459b1-604b-46d5-be0f-792e3a66a7b1"). InnerVolumeSpecName "kube-api-access-n88hf". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 13:10:59.287608 master-0 kubenswrapper[28149]: I0313 13:10:59.287544 28149 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/topolvm.io^46c3102e-7a0b-4e07-9a24-444142905798" (OuterVolumeSpecName: "glance") pod "cfc459b1-604b-46d5-be0f-792e3a66a7b1" (UID: "cfc459b1-604b-46d5-be0f-792e3a66a7b1"). InnerVolumeSpecName "pvc-12182b6b-d6bb-4e5f-ac3a-df190dba3645". PluginName "kubernetes.io/csi", VolumeGidValue "" Mar 13 13:10:59.290362 master-0 kubenswrapper[28149]: I0313 13:10:59.290212 28149 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cfc459b1-604b-46d5-be0f-792e3a66a7b1-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "cfc459b1-604b-46d5-be0f-792e3a66a7b1" (UID: "cfc459b1-604b-46d5-be0f-792e3a66a7b1"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 13:10:59.325718 master-0 kubenswrapper[28149]: I0313 13:10:59.325643 28149 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cfc459b1-604b-46d5-be0f-792e3a66a7b1-config-data" (OuterVolumeSpecName: "config-data") pod "cfc459b1-604b-46d5-be0f-792e3a66a7b1" (UID: "cfc459b1-604b-46d5-be0f-792e3a66a7b1"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 13:10:59.359634 master-0 kubenswrapper[28149]: I0313 13:10:59.359567 28149 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-12182b6b-d6bb-4e5f-ac3a-df190dba3645\" (UniqueName: \"kubernetes.io/csi/topolvm.io^46c3102e-7a0b-4e07-9a24-444142905798\") on node \"master-0\" " Mar 13 13:10:59.359634 master-0 kubenswrapper[28149]: I0313 13:10:59.359616 28149 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n88hf\" (UniqueName: \"kubernetes.io/projected/cfc459b1-604b-46d5-be0f-792e3a66a7b1-kube-api-access-n88hf\") on node \"master-0\" DevicePath \"\"" Mar 13 13:10:59.359634 master-0 kubenswrapper[28149]: I0313 13:10:59.359631 28149 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cfc459b1-604b-46d5-be0f-792e3a66a7b1-scripts\") on node \"master-0\" DevicePath \"\"" Mar 13 13:10:59.359634 master-0 kubenswrapper[28149]: I0313 13:10:59.359642 28149 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cfc459b1-604b-46d5-be0f-792e3a66a7b1-config-data\") on node \"master-0\" DevicePath \"\"" Mar 13 13:10:59.359634 master-0 kubenswrapper[28149]: I0313 13:10:59.359651 28149 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cfc459b1-604b-46d5-be0f-792e3a66a7b1-combined-ca-bundle\") on node \"master-0\" DevicePath \"\"" Mar 13 13:10:59.387300 master-0 
kubenswrapper[28149]: I0313 13:10:59.386647 28149 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... Mar 13 13:10:59.387300 master-0 kubenswrapper[28149]: I0313 13:10:59.386834 28149 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-12182b6b-d6bb-4e5f-ac3a-df190dba3645" (UniqueName: "kubernetes.io/csi/topolvm.io^46c3102e-7a0b-4e07-9a24-444142905798") on node "master-0" Mar 13 13:10:59.465083 master-0 kubenswrapper[28149]: I0313 13:10:59.461673 28149 reconciler_common.go:293] "Volume detached for volume \"pvc-12182b6b-d6bb-4e5f-ac3a-df190dba3645\" (UniqueName: \"kubernetes.io/csi/topolvm.io^46c3102e-7a0b-4e07-9a24-444142905798\") on node \"master-0\" DevicePath \"\"" Mar 13 13:10:59.465083 master-0 kubenswrapper[28149]: I0313 13:10:59.464941 28149 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-e6fbd-default-internal-api-0"] Mar 13 13:10:59.506122 master-0 kubenswrapper[28149]: I0313 13:10:59.506035 28149 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-e6fbd-default-internal-api-0"] Mar 13 13:10:59.520242 master-0 kubenswrapper[28149]: I0313 13:10:59.520101 28149 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-e6fbd-default-internal-api-0"] Mar 13 13:10:59.520957 master-0 kubenswrapper[28149]: E0313 13:10:59.520716 28149 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="51d60b57-3f2c-4aeb-83c9-e688bb0bb3c6" containerName="init" Mar 13 13:10:59.520957 master-0 kubenswrapper[28149]: I0313 13:10:59.520745 28149 state_mem.go:107] "Deleted CPUSet assignment" podUID="51d60b57-3f2c-4aeb-83c9-e688bb0bb3c6" containerName="init" Mar 13 13:10:59.520957 master-0 kubenswrapper[28149]: E0313 13:10:59.520791 28149 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cfc459b1-604b-46d5-be0f-792e3a66a7b1" containerName="glance-httpd" Mar 13 13:10:59.520957 master-0 kubenswrapper[28149]: 
I0313 13:10:59.520803 28149 state_mem.go:107] "Deleted CPUSet assignment" podUID="cfc459b1-604b-46d5-be0f-792e3a66a7b1" containerName="glance-httpd" Mar 13 13:10:59.520957 master-0 kubenswrapper[28149]: E0313 13:10:59.520820 28149 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cfc459b1-604b-46d5-be0f-792e3a66a7b1" containerName="glance-log" Mar 13 13:10:59.520957 master-0 kubenswrapper[28149]: I0313 13:10:59.520830 28149 state_mem.go:107] "Deleted CPUSet assignment" podUID="cfc459b1-604b-46d5-be0f-792e3a66a7b1" containerName="glance-log" Mar 13 13:10:59.520957 master-0 kubenswrapper[28149]: E0313 13:10:59.520868 28149 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="862ca5ea-1489-4696-a698-ab9992caaa78" containerName="glance-log" Mar 13 13:10:59.520957 master-0 kubenswrapper[28149]: I0313 13:10:59.520879 28149 state_mem.go:107] "Deleted CPUSet assignment" podUID="862ca5ea-1489-4696-a698-ab9992caaa78" containerName="glance-log" Mar 13 13:10:59.520957 master-0 kubenswrapper[28149]: E0313 13:10:59.520910 28149 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="51d60b57-3f2c-4aeb-83c9-e688bb0bb3c6" containerName="dnsmasq-dns" Mar 13 13:10:59.520957 master-0 kubenswrapper[28149]: I0313 13:10:59.520918 28149 state_mem.go:107] "Deleted CPUSet assignment" podUID="51d60b57-3f2c-4aeb-83c9-e688bb0bb3c6" containerName="dnsmasq-dns" Mar 13 13:10:59.520957 master-0 kubenswrapper[28149]: E0313 13:10:59.520929 28149 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="862ca5ea-1489-4696-a698-ab9992caaa78" containerName="glance-httpd" Mar 13 13:10:59.520957 master-0 kubenswrapper[28149]: I0313 13:10:59.520936 28149 state_mem.go:107] "Deleted CPUSet assignment" podUID="862ca5ea-1489-4696-a698-ab9992caaa78" containerName="glance-httpd" Mar 13 13:10:59.521513 master-0 kubenswrapper[28149]: I0313 13:10:59.521223 28149 memory_manager.go:354] "RemoveStaleState removing state" podUID="51d60b57-3f2c-4aeb-83c9-e688bb0bb3c6" 
containerName="dnsmasq-dns" Mar 13 13:10:59.521513 master-0 kubenswrapper[28149]: I0313 13:10:59.521254 28149 memory_manager.go:354] "RemoveStaleState removing state" podUID="cfc459b1-604b-46d5-be0f-792e3a66a7b1" containerName="glance-log" Mar 13 13:10:59.521513 master-0 kubenswrapper[28149]: I0313 13:10:59.521270 28149 memory_manager.go:354] "RemoveStaleState removing state" podUID="cfc459b1-604b-46d5-be0f-792e3a66a7b1" containerName="glance-httpd" Mar 13 13:10:59.521513 master-0 kubenswrapper[28149]: I0313 13:10:59.521289 28149 memory_manager.go:354] "RemoveStaleState removing state" podUID="862ca5ea-1489-4696-a698-ab9992caaa78" containerName="glance-log" Mar 13 13:10:59.521513 master-0 kubenswrapper[28149]: I0313 13:10:59.521317 28149 memory_manager.go:354] "RemoveStaleState removing state" podUID="862ca5ea-1489-4696-a698-ab9992caaa78" containerName="glance-httpd" Mar 13 13:10:59.522772 master-0 kubenswrapper[28149]: I0313 13:10:59.522736 28149 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-e6fbd-default-internal-api-0" Mar 13 13:10:59.547433 master-0 kubenswrapper[28149]: I0313 13:10:59.530691 28149 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Mar 13 13:10:59.547433 master-0 kubenswrapper[28149]: I0313 13:10:59.530952 28149 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-e6fbd-default-internal-config-data" Mar 13 13:10:59.547433 master-0 kubenswrapper[28149]: I0313 13:10:59.538423 28149 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-e6fbd-default-internal-api-0"] Mar 13 13:10:59.667305 master-0 kubenswrapper[28149]: I0313 13:10:59.667093 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/611eba2b-39d1-43b8-bdce-7b7c5436180c-config-data\") pod \"glance-e6fbd-default-internal-api-0\" (UID: \"611eba2b-39d1-43b8-bdce-7b7c5436180c\") " pod="openstack/glance-e6fbd-default-internal-api-0" Mar 13 13:10:59.667305 master-0 kubenswrapper[28149]: I0313 13:10:59.667224 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/611eba2b-39d1-43b8-bdce-7b7c5436180c-combined-ca-bundle\") pod \"glance-e6fbd-default-internal-api-0\" (UID: \"611eba2b-39d1-43b8-bdce-7b7c5436180c\") " pod="openstack/glance-e6fbd-default-internal-api-0" Mar 13 13:10:59.667305 master-0 kubenswrapper[28149]: I0313 13:10:59.667270 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-fbfb8b97-dcb8-43d8-a7ca-10f6eee24ec3\" (UniqueName: \"kubernetes.io/csi/topolvm.io^eb822992-ec5e-49c3-b53d-b596568ce401\") pod \"glance-e6fbd-default-internal-api-0\" (UID: \"611eba2b-39d1-43b8-bdce-7b7c5436180c\") " pod="openstack/glance-e6fbd-default-internal-api-0" Mar 13 13:10:59.668266 
master-0 kubenswrapper[28149]: I0313 13:10:59.667722 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/611eba2b-39d1-43b8-bdce-7b7c5436180c-logs\") pod \"glance-e6fbd-default-internal-api-0\" (UID: \"611eba2b-39d1-43b8-bdce-7b7c5436180c\") " pod="openstack/glance-e6fbd-default-internal-api-0" Mar 13 13:10:59.668266 master-0 kubenswrapper[28149]: I0313 13:10:59.667843 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/611eba2b-39d1-43b8-bdce-7b7c5436180c-scripts\") pod \"glance-e6fbd-default-internal-api-0\" (UID: \"611eba2b-39d1-43b8-bdce-7b7c5436180c\") " pod="openstack/glance-e6fbd-default-internal-api-0" Mar 13 13:10:59.668266 master-0 kubenswrapper[28149]: I0313 13:10:59.667896 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/611eba2b-39d1-43b8-bdce-7b7c5436180c-httpd-run\") pod \"glance-e6fbd-default-internal-api-0\" (UID: \"611eba2b-39d1-43b8-bdce-7b7c5436180c\") " pod="openstack/glance-e6fbd-default-internal-api-0" Mar 13 13:10:59.668266 master-0 kubenswrapper[28149]: I0313 13:10:59.667934 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/611eba2b-39d1-43b8-bdce-7b7c5436180c-internal-tls-certs\") pod \"glance-e6fbd-default-internal-api-0\" (UID: \"611eba2b-39d1-43b8-bdce-7b7c5436180c\") " pod="openstack/glance-e6fbd-default-internal-api-0" Mar 13 13:10:59.668266 master-0 kubenswrapper[28149]: I0313 13:10:59.668028 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4dgb6\" (UniqueName: \"kubernetes.io/projected/611eba2b-39d1-43b8-bdce-7b7c5436180c-kube-api-access-4dgb6\") pod 
\"glance-e6fbd-default-internal-api-0\" (UID: \"611eba2b-39d1-43b8-bdce-7b7c5436180c\") " pod="openstack/glance-e6fbd-default-internal-api-0" Mar 13 13:10:59.771509 master-0 kubenswrapper[28149]: I0313 13:10:59.770749 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/611eba2b-39d1-43b8-bdce-7b7c5436180c-combined-ca-bundle\") pod \"glance-e6fbd-default-internal-api-0\" (UID: \"611eba2b-39d1-43b8-bdce-7b7c5436180c\") " pod="openstack/glance-e6fbd-default-internal-api-0" Mar 13 13:10:59.771509 master-0 kubenswrapper[28149]: I0313 13:10:59.771069 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-fbfb8b97-dcb8-43d8-a7ca-10f6eee24ec3\" (UniqueName: \"kubernetes.io/csi/topolvm.io^eb822992-ec5e-49c3-b53d-b596568ce401\") pod \"glance-e6fbd-default-internal-api-0\" (UID: \"611eba2b-39d1-43b8-bdce-7b7c5436180c\") " pod="openstack/glance-e6fbd-default-internal-api-0" Mar 13 13:10:59.772123 master-0 kubenswrapper[28149]: I0313 13:10:59.771572 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/611eba2b-39d1-43b8-bdce-7b7c5436180c-logs\") pod \"glance-e6fbd-default-internal-api-0\" (UID: \"611eba2b-39d1-43b8-bdce-7b7c5436180c\") " pod="openstack/glance-e6fbd-default-internal-api-0" Mar 13 13:10:59.772123 master-0 kubenswrapper[28149]: I0313 13:10:59.771712 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/611eba2b-39d1-43b8-bdce-7b7c5436180c-scripts\") pod \"glance-e6fbd-default-internal-api-0\" (UID: \"611eba2b-39d1-43b8-bdce-7b7c5436180c\") " pod="openstack/glance-e6fbd-default-internal-api-0" Mar 13 13:10:59.772123 master-0 kubenswrapper[28149]: I0313 13:10:59.771760 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: 
\"kubernetes.io/empty-dir/611eba2b-39d1-43b8-bdce-7b7c5436180c-httpd-run\") pod \"glance-e6fbd-default-internal-api-0\" (UID: \"611eba2b-39d1-43b8-bdce-7b7c5436180c\") " pod="openstack/glance-e6fbd-default-internal-api-0" Mar 13 13:10:59.772123 master-0 kubenswrapper[28149]: I0313 13:10:59.771832 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/611eba2b-39d1-43b8-bdce-7b7c5436180c-internal-tls-certs\") pod \"glance-e6fbd-default-internal-api-0\" (UID: \"611eba2b-39d1-43b8-bdce-7b7c5436180c\") " pod="openstack/glance-e6fbd-default-internal-api-0" Mar 13 13:10:59.772123 master-0 kubenswrapper[28149]: I0313 13:10:59.772002 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4dgb6\" (UniqueName: \"kubernetes.io/projected/611eba2b-39d1-43b8-bdce-7b7c5436180c-kube-api-access-4dgb6\") pod \"glance-e6fbd-default-internal-api-0\" (UID: \"611eba2b-39d1-43b8-bdce-7b7c5436180c\") " pod="openstack/glance-e6fbd-default-internal-api-0" Mar 13 13:10:59.772909 master-0 kubenswrapper[28149]: I0313 13:10:59.772651 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/611eba2b-39d1-43b8-bdce-7b7c5436180c-config-data\") pod \"glance-e6fbd-default-internal-api-0\" (UID: \"611eba2b-39d1-43b8-bdce-7b7c5436180c\") " pod="openstack/glance-e6fbd-default-internal-api-0" Mar 13 13:10:59.772909 master-0 kubenswrapper[28149]: I0313 13:10:59.772815 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/611eba2b-39d1-43b8-bdce-7b7c5436180c-httpd-run\") pod \"glance-e6fbd-default-internal-api-0\" (UID: \"611eba2b-39d1-43b8-bdce-7b7c5436180c\") " pod="openstack/glance-e6fbd-default-internal-api-0" Mar 13 13:10:59.773647 master-0 kubenswrapper[28149]: I0313 13:10:59.773596 28149 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/611eba2b-39d1-43b8-bdce-7b7c5436180c-logs\") pod \"glance-e6fbd-default-internal-api-0\" (UID: \"611eba2b-39d1-43b8-bdce-7b7c5436180c\") " pod="openstack/glance-e6fbd-default-internal-api-0" Mar 13 13:10:59.777330 master-0 kubenswrapper[28149]: I0313 13:10:59.775337 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/611eba2b-39d1-43b8-bdce-7b7c5436180c-combined-ca-bundle\") pod \"glance-e6fbd-default-internal-api-0\" (UID: \"611eba2b-39d1-43b8-bdce-7b7c5436180c\") " pod="openstack/glance-e6fbd-default-internal-api-0" Mar 13 13:10:59.777330 master-0 kubenswrapper[28149]: I0313 13:10:59.777270 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/611eba2b-39d1-43b8-bdce-7b7c5436180c-internal-tls-certs\") pod \"glance-e6fbd-default-internal-api-0\" (UID: \"611eba2b-39d1-43b8-bdce-7b7c5436180c\") " pod="openstack/glance-e6fbd-default-internal-api-0" Mar 13 13:10:59.778205 master-0 kubenswrapper[28149]: I0313 13:10:59.777704 28149 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Mar 13 13:10:59.778205 master-0 kubenswrapper[28149]: I0313 13:10:59.777737 28149 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-fbfb8b97-dcb8-43d8-a7ca-10f6eee24ec3\" (UniqueName: \"kubernetes.io/csi/topolvm.io^eb822992-ec5e-49c3-b53d-b596568ce401\") pod \"glance-e6fbd-default-internal-api-0\" (UID: \"611eba2b-39d1-43b8-bdce-7b7c5436180c\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/topolvm.io/820c05b4f9c429c0b1c354ead4f7cbf32abe82e5746431ea131598f6d233206f/globalmount\"" pod="openstack/glance-e6fbd-default-internal-api-0" Mar 13 13:10:59.781296 master-0 kubenswrapper[28149]: I0313 13:10:59.781257 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/611eba2b-39d1-43b8-bdce-7b7c5436180c-scripts\") pod \"glance-e6fbd-default-internal-api-0\" (UID: \"611eba2b-39d1-43b8-bdce-7b7c5436180c\") " pod="openstack/glance-e6fbd-default-internal-api-0" Mar 13 13:10:59.782681 master-0 kubenswrapper[28149]: I0313 13:10:59.782652 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/611eba2b-39d1-43b8-bdce-7b7c5436180c-config-data\") pod \"glance-e6fbd-default-internal-api-0\" (UID: \"611eba2b-39d1-43b8-bdce-7b7c5436180c\") " pod="openstack/glance-e6fbd-default-internal-api-0" Mar 13 13:10:59.795421 master-0 kubenswrapper[28149]: I0313 13:10:59.795334 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4dgb6\" (UniqueName: \"kubernetes.io/projected/611eba2b-39d1-43b8-bdce-7b7c5436180c-kube-api-access-4dgb6\") pod \"glance-e6fbd-default-internal-api-0\" (UID: \"611eba2b-39d1-43b8-bdce-7b7c5436180c\") " pod="openstack/glance-e6fbd-default-internal-api-0" Mar 13 13:11:00.125408 master-0 kubenswrapper[28149]: I0313 13:11:00.125288 28149 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-e6fbd-default-external-api-0" Mar 13 13:11:00.169814 master-0 kubenswrapper[28149]: I0313 13:11:00.169727 28149 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-e6fbd-default-external-api-0"] Mar 13 13:11:00.188296 master-0 kubenswrapper[28149]: I0313 13:11:00.188182 28149 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-e6fbd-default-external-api-0"] Mar 13 13:11:00.218664 master-0 kubenswrapper[28149]: I0313 13:11:00.218595 28149 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-e6fbd-default-external-api-0"] Mar 13 13:11:00.221188 master-0 kubenswrapper[28149]: I0313 13:11:00.221129 28149 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-e6fbd-default-external-api-0" Mar 13 13:11:00.224978 master-0 kubenswrapper[28149]: I0313 13:11:00.224936 28149 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-e6fbd-default-external-config-data" Mar 13 13:11:00.225183 master-0 kubenswrapper[28149]: I0313 13:11:00.225108 28149 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Mar 13 13:11:00.256909 master-0 kubenswrapper[28149]: I0313 13:11:00.256845 28149 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-e6fbd-default-external-api-0"] Mar 13 13:11:00.406646 master-0 kubenswrapper[28149]: I0313 13:11:00.406584 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/f9e03bf1-b908-4148-8838-f54eaa369e6a-httpd-run\") pod \"glance-e6fbd-default-external-api-0\" (UID: \"f9e03bf1-b908-4148-8838-f54eaa369e6a\") " pod="openstack/glance-e6fbd-default-external-api-0" Mar 13 13:11:00.406646 master-0 kubenswrapper[28149]: I0313 13:11:00.406637 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f9e03bf1-b908-4148-8838-f54eaa369e6a-public-tls-certs\") pod \"glance-e6fbd-default-external-api-0\" (UID: \"f9e03bf1-b908-4148-8838-f54eaa369e6a\") " pod="openstack/glance-e6fbd-default-external-api-0" Mar 13 13:11:00.406964 master-0 kubenswrapper[28149]: I0313 13:11:00.406728 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-12182b6b-d6bb-4e5f-ac3a-df190dba3645\" (UniqueName: \"kubernetes.io/csi/topolvm.io^46c3102e-7a0b-4e07-9a24-444142905798\") pod \"glance-e6fbd-default-external-api-0\" (UID: \"f9e03bf1-b908-4148-8838-f54eaa369e6a\") " pod="openstack/glance-e6fbd-default-external-api-0" Mar 13 13:11:00.406964 master-0 kubenswrapper[28149]: I0313 13:11:00.406766 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f9e03bf1-b908-4148-8838-f54eaa369e6a-logs\") pod \"glance-e6fbd-default-external-api-0\" (UID: \"f9e03bf1-b908-4148-8838-f54eaa369e6a\") " pod="openstack/glance-e6fbd-default-external-api-0" Mar 13 13:11:00.406964 master-0 kubenswrapper[28149]: I0313 13:11:00.406793 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f9e03bf1-b908-4148-8838-f54eaa369e6a-combined-ca-bundle\") pod \"glance-e6fbd-default-external-api-0\" (UID: \"f9e03bf1-b908-4148-8838-f54eaa369e6a\") " pod="openstack/glance-e6fbd-default-external-api-0" Mar 13 13:11:00.406964 master-0 kubenswrapper[28149]: I0313 13:11:00.406853 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tb9ds\" (UniqueName: \"kubernetes.io/projected/f9e03bf1-b908-4148-8838-f54eaa369e6a-kube-api-access-tb9ds\") pod \"glance-e6fbd-default-external-api-0\" (UID: \"f9e03bf1-b908-4148-8838-f54eaa369e6a\") " 
pod="openstack/glance-e6fbd-default-external-api-0" Mar 13 13:11:00.407551 master-0 kubenswrapper[28149]: I0313 13:11:00.407282 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f9e03bf1-b908-4148-8838-f54eaa369e6a-config-data\") pod \"glance-e6fbd-default-external-api-0\" (UID: \"f9e03bf1-b908-4148-8838-f54eaa369e6a\") " pod="openstack/glance-e6fbd-default-external-api-0" Mar 13 13:11:00.407551 master-0 kubenswrapper[28149]: I0313 13:11:00.407373 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f9e03bf1-b908-4148-8838-f54eaa369e6a-scripts\") pod \"glance-e6fbd-default-external-api-0\" (UID: \"f9e03bf1-b908-4148-8838-f54eaa369e6a\") " pod="openstack/glance-e6fbd-default-external-api-0" Mar 13 13:11:00.509777 master-0 kubenswrapper[28149]: I0313 13:11:00.508910 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f9e03bf1-b908-4148-8838-f54eaa369e6a-config-data\") pod \"glance-e6fbd-default-external-api-0\" (UID: \"f9e03bf1-b908-4148-8838-f54eaa369e6a\") " pod="openstack/glance-e6fbd-default-external-api-0" Mar 13 13:11:00.509777 master-0 kubenswrapper[28149]: I0313 13:11:00.508963 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f9e03bf1-b908-4148-8838-f54eaa369e6a-scripts\") pod \"glance-e6fbd-default-external-api-0\" (UID: \"f9e03bf1-b908-4148-8838-f54eaa369e6a\") " pod="openstack/glance-e6fbd-default-external-api-0" Mar 13 13:11:00.509777 master-0 kubenswrapper[28149]: I0313 13:11:00.509018 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/f9e03bf1-b908-4148-8838-f54eaa369e6a-httpd-run\") pod 
\"glance-e6fbd-default-external-api-0\" (UID: \"f9e03bf1-b908-4148-8838-f54eaa369e6a\") " pod="openstack/glance-e6fbd-default-external-api-0" Mar 13 13:11:00.509777 master-0 kubenswrapper[28149]: I0313 13:11:00.509038 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f9e03bf1-b908-4148-8838-f54eaa369e6a-public-tls-certs\") pod \"glance-e6fbd-default-external-api-0\" (UID: \"f9e03bf1-b908-4148-8838-f54eaa369e6a\") " pod="openstack/glance-e6fbd-default-external-api-0" Mar 13 13:11:00.509777 master-0 kubenswrapper[28149]: I0313 13:11:00.509085 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-12182b6b-d6bb-4e5f-ac3a-df190dba3645\" (UniqueName: \"kubernetes.io/csi/topolvm.io^46c3102e-7a0b-4e07-9a24-444142905798\") pod \"glance-e6fbd-default-external-api-0\" (UID: \"f9e03bf1-b908-4148-8838-f54eaa369e6a\") " pod="openstack/glance-e6fbd-default-external-api-0" Mar 13 13:11:00.509777 master-0 kubenswrapper[28149]: I0313 13:11:00.509323 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f9e03bf1-b908-4148-8838-f54eaa369e6a-logs\") pod \"glance-e6fbd-default-external-api-0\" (UID: \"f9e03bf1-b908-4148-8838-f54eaa369e6a\") " pod="openstack/glance-e6fbd-default-external-api-0" Mar 13 13:11:00.509777 master-0 kubenswrapper[28149]: I0313 13:11:00.509427 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f9e03bf1-b908-4148-8838-f54eaa369e6a-combined-ca-bundle\") pod \"glance-e6fbd-default-external-api-0\" (UID: \"f9e03bf1-b908-4148-8838-f54eaa369e6a\") " pod="openstack/glance-e6fbd-default-external-api-0" Mar 13 13:11:00.509777 master-0 kubenswrapper[28149]: I0313 13:11:00.509604 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tb9ds\" 
(UniqueName: \"kubernetes.io/projected/f9e03bf1-b908-4148-8838-f54eaa369e6a-kube-api-access-tb9ds\") pod \"glance-e6fbd-default-external-api-0\" (UID: \"f9e03bf1-b908-4148-8838-f54eaa369e6a\") " pod="openstack/glance-e6fbd-default-external-api-0" Mar 13 13:11:00.512375 master-0 kubenswrapper[28149]: I0313 13:11:00.510289 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/f9e03bf1-b908-4148-8838-f54eaa369e6a-httpd-run\") pod \"glance-e6fbd-default-external-api-0\" (UID: \"f9e03bf1-b908-4148-8838-f54eaa369e6a\") " pod="openstack/glance-e6fbd-default-external-api-0" Mar 13 13:11:00.513644 master-0 kubenswrapper[28149]: I0313 13:11:00.513608 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f9e03bf1-b908-4148-8838-f54eaa369e6a-logs\") pod \"glance-e6fbd-default-external-api-0\" (UID: \"f9e03bf1-b908-4148-8838-f54eaa369e6a\") " pod="openstack/glance-e6fbd-default-external-api-0" Mar 13 13:11:00.514445 master-0 kubenswrapper[28149]: I0313 13:11:00.514419 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f9e03bf1-b908-4148-8838-f54eaa369e6a-combined-ca-bundle\") pod \"glance-e6fbd-default-external-api-0\" (UID: \"f9e03bf1-b908-4148-8838-f54eaa369e6a\") " pod="openstack/glance-e6fbd-default-external-api-0" Mar 13 13:11:00.515571 master-0 kubenswrapper[28149]: I0313 13:11:00.515495 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f9e03bf1-b908-4148-8838-f54eaa369e6a-scripts\") pod \"glance-e6fbd-default-external-api-0\" (UID: \"f9e03bf1-b908-4148-8838-f54eaa369e6a\") " pod="openstack/glance-e6fbd-default-external-api-0" Mar 13 13:11:00.516105 master-0 kubenswrapper[28149]: I0313 13:11:00.516086 28149 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice 
STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Mar 13 13:11:00.516196 master-0 kubenswrapper[28149]: I0313 13:11:00.516118 28149 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-12182b6b-d6bb-4e5f-ac3a-df190dba3645\" (UniqueName: \"kubernetes.io/csi/topolvm.io^46c3102e-7a0b-4e07-9a24-444142905798\") pod \"glance-e6fbd-default-external-api-0\" (UID: \"f9e03bf1-b908-4148-8838-f54eaa369e6a\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/topolvm.io/dd1664ebbf7aebe13570b4d7d33b7a2c8fb2cd6894f8d3c518cd1e549d5c6ec6/globalmount\"" pod="openstack/glance-e6fbd-default-external-api-0" Mar 13 13:11:00.519882 master-0 kubenswrapper[28149]: I0313 13:11:00.519837 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f9e03bf1-b908-4148-8838-f54eaa369e6a-public-tls-certs\") pod \"glance-e6fbd-default-external-api-0\" (UID: \"f9e03bf1-b908-4148-8838-f54eaa369e6a\") " pod="openstack/glance-e6fbd-default-external-api-0" Mar 13 13:11:00.524562 master-0 kubenswrapper[28149]: I0313 13:11:00.524514 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f9e03bf1-b908-4148-8838-f54eaa369e6a-config-data\") pod \"glance-e6fbd-default-external-api-0\" (UID: \"f9e03bf1-b908-4148-8838-f54eaa369e6a\") " pod="openstack/glance-e6fbd-default-external-api-0" Mar 13 13:11:00.545078 master-0 kubenswrapper[28149]: I0313 13:11:00.545022 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tb9ds\" (UniqueName: \"kubernetes.io/projected/f9e03bf1-b908-4148-8838-f54eaa369e6a-kube-api-access-tb9ds\") pod \"glance-e6fbd-default-external-api-0\" (UID: \"f9e03bf1-b908-4148-8838-f54eaa369e6a\") " pod="openstack/glance-e6fbd-default-external-api-0" Mar 13 13:11:00.722531 master-0 kubenswrapper[28149]: I0313 13:11:00.722478 28149 kubelet_volumes.go:163] "Cleaned up 
orphaned pod volumes dir" podUID="862ca5ea-1489-4696-a698-ab9992caaa78" path="/var/lib/kubelet/pods/862ca5ea-1489-4696-a698-ab9992caaa78/volumes" Mar 13 13:11:00.724065 master-0 kubenswrapper[28149]: I0313 13:11:00.724019 28149 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cfc459b1-604b-46d5-be0f-792e3a66a7b1" path="/var/lib/kubelet/pods/cfc459b1-604b-46d5-be0f-792e3a66a7b1/volumes" Mar 13 13:11:01.182069 master-0 kubenswrapper[28149]: I0313 13:11:01.181959 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-fbfb8b97-dcb8-43d8-a7ca-10f6eee24ec3\" (UniqueName: \"kubernetes.io/csi/topolvm.io^eb822992-ec5e-49c3-b53d-b596568ce401\") pod \"glance-e6fbd-default-internal-api-0\" (UID: \"611eba2b-39d1-43b8-bdce-7b7c5436180c\") " pod="openstack/glance-e6fbd-default-internal-api-0" Mar 13 13:11:01.382829 master-0 kubenswrapper[28149]: I0313 13:11:01.382761 28149 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-e6fbd-default-internal-api-0" Mar 13 13:11:02.575592 master-0 kubenswrapper[28149]: I0313 13:11:02.575538 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-12182b6b-d6bb-4e5f-ac3a-df190dba3645\" (UniqueName: \"kubernetes.io/csi/topolvm.io^46c3102e-7a0b-4e07-9a24-444142905798\") pod \"glance-e6fbd-default-external-api-0\" (UID: \"f9e03bf1-b908-4148-8838-f54eaa369e6a\") " pod="openstack/glance-e6fbd-default-external-api-0" Mar 13 13:11:02.652463 master-0 kubenswrapper[28149]: I0313 13:11:02.652339 28149 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-e6fbd-default-external-api-0" Mar 13 13:11:02.812523 master-0 kubenswrapper[28149]: I0313 13:11:02.811727 28149 trace.go:236] Trace[818911244]: "Calculate volume metrics of mysql-db for pod openstack/openstack-cell1-galera-0" (13-Mar-2026 13:11:01.618) (total time: 1193ms): Mar 13 13:11:02.812523 master-0 kubenswrapper[28149]: Trace[818911244]: [1.193053936s] [1.193053936s] END Mar 13 13:11:13.878325 master-0 kubenswrapper[28149]: I0313 13:11:13.878080 28149 generic.go:334] "Generic (PLEG): container finished" podID="9d77ebfb-6652-45f8-8bfb-fe1e4344c3a3" containerID="50d6bb62905d2d72748b2a1de7cb2f9566378cd11cb9c196e39eda57c5bd6748" exitCode=0 Mar 13 13:11:13.878325 master-0 kubenswrapper[28149]: I0313 13:11:13.878224 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-wtgql" event={"ID":"9d77ebfb-6652-45f8-8bfb-fe1e4344c3a3","Type":"ContainerDied","Data":"50d6bb62905d2d72748b2a1de7cb2f9566378cd11cb9c196e39eda57c5bd6748"} Mar 13 13:11:13.884062 master-0 kubenswrapper[28149]: I0313 13:11:13.883882 28149 generic.go:334] "Generic (PLEG): container finished" podID="1574918a-8865-4cd3-89c5-a2e9855c8e23" containerID="4b47fa2b0cb18f2fe096c7ee9a617f03c1eec5f1d6a8e2850bf50b91498a4602" exitCode=0 Mar 13 13:11:13.884062 master-0 kubenswrapper[28149]: I0313 13:11:13.883932 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-smh2q" event={"ID":"1574918a-8865-4cd3-89c5-a2e9855c8e23","Type":"ContainerDied","Data":"4b47fa2b0cb18f2fe096c7ee9a617f03c1eec5f1d6a8e2850bf50b91498a4602"} Mar 13 13:11:18.577340 master-0 kubenswrapper[28149]: I0313 13:11:18.577272 28149 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-wtgql" Mar 13 13:11:18.588103 master-0 kubenswrapper[28149]: I0313 13:11:18.588060 28149 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-smh2q" Mar 13 13:11:18.624647 master-0 kubenswrapper[28149]: I0313 13:11:18.624177 28149 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mmrt4\" (UniqueName: \"kubernetes.io/projected/1574918a-8865-4cd3-89c5-a2e9855c8e23-kube-api-access-mmrt4\") pod \"1574918a-8865-4cd3-89c5-a2e9855c8e23\" (UID: \"1574918a-8865-4cd3-89c5-a2e9855c8e23\") " Mar 13 13:11:18.624647 master-0 kubenswrapper[28149]: I0313 13:11:18.624256 28149 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1574918a-8865-4cd3-89c5-a2e9855c8e23-combined-ca-bundle\") pod \"1574918a-8865-4cd3-89c5-a2e9855c8e23\" (UID: \"1574918a-8865-4cd3-89c5-a2e9855c8e23\") " Mar 13 13:11:18.624647 master-0 kubenswrapper[28149]: I0313 13:11:18.624306 28149 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9d77ebfb-6652-45f8-8bfb-fe1e4344c3a3-scripts\") pod \"9d77ebfb-6652-45f8-8bfb-fe1e4344c3a3\" (UID: \"9d77ebfb-6652-45f8-8bfb-fe1e4344c3a3\") " Mar 13 13:11:18.624647 master-0 kubenswrapper[28149]: I0313 13:11:18.624378 28149 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1574918a-8865-4cd3-89c5-a2e9855c8e23-scripts\") pod \"1574918a-8865-4cd3-89c5-a2e9855c8e23\" (UID: \"1574918a-8865-4cd3-89c5-a2e9855c8e23\") " Mar 13 13:11:18.624647 master-0 kubenswrapper[28149]: I0313 13:11:18.624432 28149 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q2pkf\" (UniqueName: \"kubernetes.io/projected/9d77ebfb-6652-45f8-8bfb-fe1e4344c3a3-kube-api-access-q2pkf\") pod \"9d77ebfb-6652-45f8-8bfb-fe1e4344c3a3\" (UID: \"9d77ebfb-6652-45f8-8bfb-fe1e4344c3a3\") " Mar 13 13:11:18.624647 master-0 kubenswrapper[28149]: I0313 13:11:18.624505 28149 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1574918a-8865-4cd3-89c5-a2e9855c8e23-config-data\") pod \"1574918a-8865-4cd3-89c5-a2e9855c8e23\" (UID: \"1574918a-8865-4cd3-89c5-a2e9855c8e23\") " Mar 13 13:11:18.624647 master-0 kubenswrapper[28149]: I0313 13:11:18.624574 28149 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9d77ebfb-6652-45f8-8bfb-fe1e4344c3a3-logs\") pod \"9d77ebfb-6652-45f8-8bfb-fe1e4344c3a3\" (UID: \"9d77ebfb-6652-45f8-8bfb-fe1e4344c3a3\") " Mar 13 13:11:18.624647 master-0 kubenswrapper[28149]: I0313 13:11:18.624606 28149 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9d77ebfb-6652-45f8-8bfb-fe1e4344c3a3-config-data\") pod \"9d77ebfb-6652-45f8-8bfb-fe1e4344c3a3\" (UID: \"9d77ebfb-6652-45f8-8bfb-fe1e4344c3a3\") " Mar 13 13:11:18.625277 master-0 kubenswrapper[28149]: I0313 13:11:18.624652 28149 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/1574918a-8865-4cd3-89c5-a2e9855c8e23-credential-keys\") pod \"1574918a-8865-4cd3-89c5-a2e9855c8e23\" (UID: \"1574918a-8865-4cd3-89c5-a2e9855c8e23\") " Mar 13 13:11:18.625277 master-0 kubenswrapper[28149]: I0313 13:11:18.624742 28149 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/1574918a-8865-4cd3-89c5-a2e9855c8e23-fernet-keys\") pod \"1574918a-8865-4cd3-89c5-a2e9855c8e23\" (UID: \"1574918a-8865-4cd3-89c5-a2e9855c8e23\") " Mar 13 13:11:18.625277 master-0 kubenswrapper[28149]: I0313 13:11:18.624778 28149 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9d77ebfb-6652-45f8-8bfb-fe1e4344c3a3-combined-ca-bundle\") pod 
\"9d77ebfb-6652-45f8-8bfb-fe1e4344c3a3\" (UID: \"9d77ebfb-6652-45f8-8bfb-fe1e4344c3a3\") " Mar 13 13:11:18.630623 master-0 kubenswrapper[28149]: I0313 13:11:18.628583 28149 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9d77ebfb-6652-45f8-8bfb-fe1e4344c3a3-logs" (OuterVolumeSpecName: "logs") pod "9d77ebfb-6652-45f8-8bfb-fe1e4344c3a3" (UID: "9d77ebfb-6652-45f8-8bfb-fe1e4344c3a3"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 13 13:11:18.635959 master-0 kubenswrapper[28149]: I0313 13:11:18.635882 28149 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1574918a-8865-4cd3-89c5-a2e9855c8e23-scripts" (OuterVolumeSpecName: "scripts") pod "1574918a-8865-4cd3-89c5-a2e9855c8e23" (UID: "1574918a-8865-4cd3-89c5-a2e9855c8e23"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 13:11:18.642151 master-0 kubenswrapper[28149]: I0313 13:11:18.642078 28149 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1574918a-8865-4cd3-89c5-a2e9855c8e23-kube-api-access-mmrt4" (OuterVolumeSpecName: "kube-api-access-mmrt4") pod "1574918a-8865-4cd3-89c5-a2e9855c8e23" (UID: "1574918a-8865-4cd3-89c5-a2e9855c8e23"). InnerVolumeSpecName "kube-api-access-mmrt4". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 13:11:18.650968 master-0 kubenswrapper[28149]: I0313 13:11:18.647671 28149 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d77ebfb-6652-45f8-8bfb-fe1e4344c3a3-kube-api-access-q2pkf" (OuterVolumeSpecName: "kube-api-access-q2pkf") pod "9d77ebfb-6652-45f8-8bfb-fe1e4344c3a3" (UID: "9d77ebfb-6652-45f8-8bfb-fe1e4344c3a3"). InnerVolumeSpecName "kube-api-access-q2pkf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 13:11:18.650968 master-0 kubenswrapper[28149]: I0313 13:11:18.650049 28149 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d77ebfb-6652-45f8-8bfb-fe1e4344c3a3-scripts" (OuterVolumeSpecName: "scripts") pod "9d77ebfb-6652-45f8-8bfb-fe1e4344c3a3" (UID: "9d77ebfb-6652-45f8-8bfb-fe1e4344c3a3"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 13:11:18.656485 master-0 kubenswrapper[28149]: I0313 13:11:18.654484 28149 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1574918a-8865-4cd3-89c5-a2e9855c8e23-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "1574918a-8865-4cd3-89c5-a2e9855c8e23" (UID: "1574918a-8865-4cd3-89c5-a2e9855c8e23"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 13:11:18.656485 master-0 kubenswrapper[28149]: I0313 13:11:18.655351 28149 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1574918a-8865-4cd3-89c5-a2e9855c8e23-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "1574918a-8865-4cd3-89c5-a2e9855c8e23" (UID: "1574918a-8865-4cd3-89c5-a2e9855c8e23"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 13:11:18.679006 master-0 kubenswrapper[28149]: I0313 13:11:18.678909 28149 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1574918a-8865-4cd3-89c5-a2e9855c8e23-config-data" (OuterVolumeSpecName: "config-data") pod "1574918a-8865-4cd3-89c5-a2e9855c8e23" (UID: "1574918a-8865-4cd3-89c5-a2e9855c8e23"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 13:11:18.680276 master-0 kubenswrapper[28149]: I0313 13:11:18.680247 28149 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d77ebfb-6652-45f8-8bfb-fe1e4344c3a3-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "9d77ebfb-6652-45f8-8bfb-fe1e4344c3a3" (UID: "9d77ebfb-6652-45f8-8bfb-fe1e4344c3a3"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 13:11:18.688766 master-0 kubenswrapper[28149]: I0313 13:11:18.688718 28149 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d77ebfb-6652-45f8-8bfb-fe1e4344c3a3-config-data" (OuterVolumeSpecName: "config-data") pod "9d77ebfb-6652-45f8-8bfb-fe1e4344c3a3" (UID: "9d77ebfb-6652-45f8-8bfb-fe1e4344c3a3"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 13:11:18.695855 master-0 kubenswrapper[28149]: I0313 13:11:18.695813 28149 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1574918a-8865-4cd3-89c5-a2e9855c8e23-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "1574918a-8865-4cd3-89c5-a2e9855c8e23" (UID: "1574918a-8865-4cd3-89c5-a2e9855c8e23"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 13:11:18.728522 master-0 kubenswrapper[28149]: I0313 13:11:18.728423 28149 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q2pkf\" (UniqueName: \"kubernetes.io/projected/9d77ebfb-6652-45f8-8bfb-fe1e4344c3a3-kube-api-access-q2pkf\") on node \"master-0\" DevicePath \"\"" Mar 13 13:11:18.728522 master-0 kubenswrapper[28149]: I0313 13:11:18.728518 28149 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1574918a-8865-4cd3-89c5-a2e9855c8e23-config-data\") on node \"master-0\" DevicePath \"\"" Mar 13 13:11:18.728522 master-0 kubenswrapper[28149]: I0313 13:11:18.728533 28149 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9d77ebfb-6652-45f8-8bfb-fe1e4344c3a3-logs\") on node \"master-0\" DevicePath \"\"" Mar 13 13:11:18.728522 master-0 kubenswrapper[28149]: I0313 13:11:18.728544 28149 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9d77ebfb-6652-45f8-8bfb-fe1e4344c3a3-config-data\") on node \"master-0\" DevicePath \"\"" Mar 13 13:11:18.728522 master-0 kubenswrapper[28149]: I0313 13:11:18.728552 28149 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/1574918a-8865-4cd3-89c5-a2e9855c8e23-credential-keys\") on node \"master-0\" DevicePath \"\"" Mar 13 13:11:18.728522 master-0 kubenswrapper[28149]: I0313 13:11:18.728562 28149 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/1574918a-8865-4cd3-89c5-a2e9855c8e23-fernet-keys\") on node \"master-0\" DevicePath \"\"" Mar 13 13:11:18.729234 master-0 kubenswrapper[28149]: I0313 13:11:18.728571 28149 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9d77ebfb-6652-45f8-8bfb-fe1e4344c3a3-combined-ca-bundle\") on 
node \"master-0\" DevicePath \"\"" Mar 13 13:11:18.729234 master-0 kubenswrapper[28149]: I0313 13:11:18.728584 28149 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mmrt4\" (UniqueName: \"kubernetes.io/projected/1574918a-8865-4cd3-89c5-a2e9855c8e23-kube-api-access-mmrt4\") on node \"master-0\" DevicePath \"\"" Mar 13 13:11:18.729234 master-0 kubenswrapper[28149]: I0313 13:11:18.728596 28149 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1574918a-8865-4cd3-89c5-a2e9855c8e23-combined-ca-bundle\") on node \"master-0\" DevicePath \"\"" Mar 13 13:11:18.729234 master-0 kubenswrapper[28149]: I0313 13:11:18.728607 28149 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9d77ebfb-6652-45f8-8bfb-fe1e4344c3a3-scripts\") on node \"master-0\" DevicePath \"\"" Mar 13 13:11:18.729234 master-0 kubenswrapper[28149]: I0313 13:11:18.728617 28149 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1574918a-8865-4cd3-89c5-a2e9855c8e23-scripts\") on node \"master-0\" DevicePath \"\"" Mar 13 13:11:19.085889 master-0 kubenswrapper[28149]: I0313 13:11:19.085806 28149 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-smh2q" Mar 13 13:11:19.086117 master-0 kubenswrapper[28149]: I0313 13:11:19.085794 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-smh2q" event={"ID":"1574918a-8865-4cd3-89c5-a2e9855c8e23","Type":"ContainerDied","Data":"cafe5c7e58c3c7d74833e732e986d755d97f187d8350a30b4b6f25aa95cde146"} Mar 13 13:11:19.086117 master-0 kubenswrapper[28149]: I0313 13:11:19.085943 28149 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cafe5c7e58c3c7d74833e732e986d755d97f187d8350a30b4b6f25aa95cde146" Mar 13 13:11:19.090125 master-0 kubenswrapper[28149]: I0313 13:11:19.090087 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-wtgql" event={"ID":"9d77ebfb-6652-45f8-8bfb-fe1e4344c3a3","Type":"ContainerDied","Data":"f3df2b72a4aed7ba300d62b5f02ffbf6412c3c256f695db7a20f0a771464bf8e"} Mar 13 13:11:19.090253 master-0 kubenswrapper[28149]: I0313 13:11:19.090148 28149 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f3df2b72a4aed7ba300d62b5f02ffbf6412c3c256f695db7a20f0a771464bf8e" Mar 13 13:11:19.090253 master-0 kubenswrapper[28149]: I0313 13:11:19.090224 28149 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-sync-wtgql" Mar 13 13:11:20.545654 master-0 kubenswrapper[28149]: I0313 13:11:20.545582 28149 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-8b4477b4f-94nmj"] Mar 13 13:11:20.547311 master-0 kubenswrapper[28149]: E0313 13:11:20.546434 28149 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9d77ebfb-6652-45f8-8bfb-fe1e4344c3a3" containerName="placement-db-sync" Mar 13 13:11:20.547311 master-0 kubenswrapper[28149]: I0313 13:11:20.546477 28149 state_mem.go:107] "Deleted CPUSet assignment" podUID="9d77ebfb-6652-45f8-8bfb-fe1e4344c3a3" containerName="placement-db-sync" Mar 13 13:11:20.547311 master-0 kubenswrapper[28149]: E0313 13:11:20.546576 28149 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1574918a-8865-4cd3-89c5-a2e9855c8e23" containerName="keystone-bootstrap" Mar 13 13:11:20.547311 master-0 kubenswrapper[28149]: I0313 13:11:20.546588 28149 state_mem.go:107] "Deleted CPUSet assignment" podUID="1574918a-8865-4cd3-89c5-a2e9855c8e23" containerName="keystone-bootstrap" Mar 13 13:11:20.547311 master-0 kubenswrapper[28149]: I0313 13:11:20.546981 28149 memory_manager.go:354] "RemoveStaleState removing state" podUID="1574918a-8865-4cd3-89c5-a2e9855c8e23" containerName="keystone-bootstrap" Mar 13 13:11:20.547311 master-0 kubenswrapper[28149]: I0313 13:11:20.547021 28149 memory_manager.go:354] "RemoveStaleState removing state" podUID="9d77ebfb-6652-45f8-8bfb-fe1e4344c3a3" containerName="placement-db-sync" Mar 13 13:11:20.548190 master-0 kubenswrapper[28149]: I0313 13:11:20.548167 28149 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-8b4477b4f-94nmj" Mar 13 13:11:20.569770 master-0 kubenswrapper[28149]: I0313 13:11:20.555515 28149 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Mar 13 13:11:20.569770 master-0 kubenswrapper[28149]: I0313 13:11:20.555807 28149 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-internal-svc" Mar 13 13:11:20.569770 master-0 kubenswrapper[28149]: I0313 13:11:20.555918 28149 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-public-svc" Mar 13 13:11:20.569770 master-0 kubenswrapper[28149]: I0313 13:11:20.556150 28149 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Mar 13 13:11:20.569770 master-0 kubenswrapper[28149]: I0313 13:11:20.556302 28149 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Mar 13 13:11:20.626316 master-0 kubenswrapper[28149]: I0313 13:11:20.625254 28149 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-699d7776-9kkdk"] Mar 13 13:11:20.636613 master-0 kubenswrapper[28149]: I0313 13:11:20.636572 28149 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-699d7776-9kkdk" Mar 13 13:11:20.642955 master-0 kubenswrapper[28149]: I0313 13:11:20.641835 28149 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Mar 13 13:11:20.653887 master-0 kubenswrapper[28149]: I0313 13:11:20.651842 28149 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-internal-svc" Mar 13 13:11:20.654348 master-0 kubenswrapper[28149]: I0313 13:11:20.654320 28149 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Mar 13 13:11:20.654566 master-0 kubenswrapper[28149]: I0313 13:11:20.654341 28149 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-public-svc" Mar 13 13:11:20.687894 master-0 kubenswrapper[28149]: I0313 13:11:20.674157 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b0d311a8-b41b-4e58-8085-eec42684fce5-config-data\") pod \"keystone-8b4477b4f-94nmj\" (UID: \"b0d311a8-b41b-4e58-8085-eec42684fce5\") " pod="openstack/keystone-8b4477b4f-94nmj" Mar 13 13:11:20.687894 master-0 kubenswrapper[28149]: I0313 13:11:20.674280 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b0d311a8-b41b-4e58-8085-eec42684fce5-scripts\") pod \"keystone-8b4477b4f-94nmj\" (UID: \"b0d311a8-b41b-4e58-8085-eec42684fce5\") " pod="openstack/keystone-8b4477b4f-94nmj" Mar 13 13:11:20.687894 master-0 kubenswrapper[28149]: I0313 13:11:20.674662 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b7c49\" (UniqueName: \"kubernetes.io/projected/b0d311a8-b41b-4e58-8085-eec42684fce5-kube-api-access-b7c49\") pod \"keystone-8b4477b4f-94nmj\" (UID: \"b0d311a8-b41b-4e58-8085-eec42684fce5\") " 
pod="openstack/keystone-8b4477b4f-94nmj" Mar 13 13:11:20.687894 master-0 kubenswrapper[28149]: I0313 13:11:20.677264 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b0d311a8-b41b-4e58-8085-eec42684fce5-internal-tls-certs\") pod \"keystone-8b4477b4f-94nmj\" (UID: \"b0d311a8-b41b-4e58-8085-eec42684fce5\") " pod="openstack/keystone-8b4477b4f-94nmj" Mar 13 13:11:20.687894 master-0 kubenswrapper[28149]: I0313 13:11:20.677395 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/b0d311a8-b41b-4e58-8085-eec42684fce5-fernet-keys\") pod \"keystone-8b4477b4f-94nmj\" (UID: \"b0d311a8-b41b-4e58-8085-eec42684fce5\") " pod="openstack/keystone-8b4477b4f-94nmj" Mar 13 13:11:20.687894 master-0 kubenswrapper[28149]: I0313 13:11:20.677579 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/b0d311a8-b41b-4e58-8085-eec42684fce5-credential-keys\") pod \"keystone-8b4477b4f-94nmj\" (UID: \"b0d311a8-b41b-4e58-8085-eec42684fce5\") " pod="openstack/keystone-8b4477b4f-94nmj" Mar 13 13:11:20.687894 master-0 kubenswrapper[28149]: I0313 13:11:20.677624 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b0d311a8-b41b-4e58-8085-eec42684fce5-public-tls-certs\") pod \"keystone-8b4477b4f-94nmj\" (UID: \"b0d311a8-b41b-4e58-8085-eec42684fce5\") " pod="openstack/keystone-8b4477b4f-94nmj" Mar 13 13:11:20.687894 master-0 kubenswrapper[28149]: I0313 13:11:20.677671 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b0d311a8-b41b-4e58-8085-eec42684fce5-combined-ca-bundle\") pod 
\"keystone-8b4477b4f-94nmj\" (UID: \"b0d311a8-b41b-4e58-8085-eec42684fce5\") " pod="openstack/keystone-8b4477b4f-94nmj" Mar 13 13:11:20.687894 master-0 kubenswrapper[28149]: I0313 13:11:20.684582 28149 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-8b4477b4f-94nmj"] Mar 13 13:11:20.779577 master-0 kubenswrapper[28149]: I0313 13:11:20.779516 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ea4701c8-792f-4a27-948e-cc2d36ad5739-public-tls-certs\") pod \"placement-699d7776-9kkdk\" (UID: \"ea4701c8-792f-4a27-948e-cc2d36ad5739\") " pod="openstack/placement-699d7776-9kkdk" Mar 13 13:11:20.780002 master-0 kubenswrapper[28149]: I0313 13:11:20.779968 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b0d311a8-b41b-4e58-8085-eec42684fce5-config-data\") pod \"keystone-8b4477b4f-94nmj\" (UID: \"b0d311a8-b41b-4e58-8085-eec42684fce5\") " pod="openstack/keystone-8b4477b4f-94nmj" Mar 13 13:11:20.780094 master-0 kubenswrapper[28149]: I0313 13:11:20.780069 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ea4701c8-792f-4a27-948e-cc2d36ad5739-combined-ca-bundle\") pod \"placement-699d7776-9kkdk\" (UID: \"ea4701c8-792f-4a27-948e-cc2d36ad5739\") " pod="openstack/placement-699d7776-9kkdk" Mar 13 13:11:20.780161 master-0 kubenswrapper[28149]: I0313 13:11:20.780100 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b0d311a8-b41b-4e58-8085-eec42684fce5-scripts\") pod \"keystone-8b4477b4f-94nmj\" (UID: \"b0d311a8-b41b-4e58-8085-eec42684fce5\") " pod="openstack/keystone-8b4477b4f-94nmj" Mar 13 13:11:20.780227 master-0 kubenswrapper[28149]: I0313 13:11:20.780194 28149 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6zpst\" (UniqueName: \"kubernetes.io/projected/ea4701c8-792f-4a27-948e-cc2d36ad5739-kube-api-access-6zpst\") pod \"placement-699d7776-9kkdk\" (UID: \"ea4701c8-792f-4a27-948e-cc2d36ad5739\") " pod="openstack/placement-699d7776-9kkdk" Mar 13 13:11:20.781939 master-0 kubenswrapper[28149]: I0313 13:11:20.781254 28149 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-699d7776-9kkdk"] Mar 13 13:11:20.784244 master-0 kubenswrapper[28149]: I0313 13:11:20.784204 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b0d311a8-b41b-4e58-8085-eec42684fce5-config-data\") pod \"keystone-8b4477b4f-94nmj\" (UID: \"b0d311a8-b41b-4e58-8085-eec42684fce5\") " pod="openstack/keystone-8b4477b4f-94nmj" Mar 13 13:11:20.797003 master-0 kubenswrapper[28149]: I0313 13:11:20.794954 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ea4701c8-792f-4a27-948e-cc2d36ad5739-scripts\") pod \"placement-699d7776-9kkdk\" (UID: \"ea4701c8-792f-4a27-948e-cc2d36ad5739\") " pod="openstack/placement-699d7776-9kkdk" Mar 13 13:11:20.797003 master-0 kubenswrapper[28149]: I0313 13:11:20.795116 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b7c49\" (UniqueName: \"kubernetes.io/projected/b0d311a8-b41b-4e58-8085-eec42684fce5-kube-api-access-b7c49\") pod \"keystone-8b4477b4f-94nmj\" (UID: \"b0d311a8-b41b-4e58-8085-eec42684fce5\") " pod="openstack/keystone-8b4477b4f-94nmj" Mar 13 13:11:20.797003 master-0 kubenswrapper[28149]: I0313 13:11:20.795460 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ea4701c8-792f-4a27-948e-cc2d36ad5739-config-data\") pod \"placement-699d7776-9kkdk\" (UID: 
\"ea4701c8-792f-4a27-948e-cc2d36ad5739\") " pod="openstack/placement-699d7776-9kkdk" Mar 13 13:11:20.797003 master-0 kubenswrapper[28149]: I0313 13:11:20.795975 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b0d311a8-b41b-4e58-8085-eec42684fce5-internal-tls-certs\") pod \"keystone-8b4477b4f-94nmj\" (UID: \"b0d311a8-b41b-4e58-8085-eec42684fce5\") " pod="openstack/keystone-8b4477b4f-94nmj" Mar 13 13:11:20.797003 master-0 kubenswrapper[28149]: I0313 13:11:20.796165 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ea4701c8-792f-4a27-948e-cc2d36ad5739-logs\") pod \"placement-699d7776-9kkdk\" (UID: \"ea4701c8-792f-4a27-948e-cc2d36ad5739\") " pod="openstack/placement-699d7776-9kkdk" Mar 13 13:11:20.797003 master-0 kubenswrapper[28149]: I0313 13:11:20.796192 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/b0d311a8-b41b-4e58-8085-eec42684fce5-fernet-keys\") pod \"keystone-8b4477b4f-94nmj\" (UID: \"b0d311a8-b41b-4e58-8085-eec42684fce5\") " pod="openstack/keystone-8b4477b4f-94nmj" Mar 13 13:11:20.797003 master-0 kubenswrapper[28149]: I0313 13:11:20.796210 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ea4701c8-792f-4a27-948e-cc2d36ad5739-internal-tls-certs\") pod \"placement-699d7776-9kkdk\" (UID: \"ea4701c8-792f-4a27-948e-cc2d36ad5739\") " pod="openstack/placement-699d7776-9kkdk" Mar 13 13:11:20.797003 master-0 kubenswrapper[28149]: I0313 13:11:20.796781 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b0d311a8-b41b-4e58-8085-eec42684fce5-scripts\") pod \"keystone-8b4477b4f-94nmj\" (UID: 
\"b0d311a8-b41b-4e58-8085-eec42684fce5\") " pod="openstack/keystone-8b4477b4f-94nmj" Mar 13 13:11:20.798248 master-0 kubenswrapper[28149]: I0313 13:11:20.797903 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/b0d311a8-b41b-4e58-8085-eec42684fce5-credential-keys\") pod \"keystone-8b4477b4f-94nmj\" (UID: \"b0d311a8-b41b-4e58-8085-eec42684fce5\") " pod="openstack/keystone-8b4477b4f-94nmj" Mar 13 13:11:20.798248 master-0 kubenswrapper[28149]: I0313 13:11:20.797965 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b0d311a8-b41b-4e58-8085-eec42684fce5-public-tls-certs\") pod \"keystone-8b4477b4f-94nmj\" (UID: \"b0d311a8-b41b-4e58-8085-eec42684fce5\") " pod="openstack/keystone-8b4477b4f-94nmj" Mar 13 13:11:20.798248 master-0 kubenswrapper[28149]: I0313 13:11:20.798020 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b0d311a8-b41b-4e58-8085-eec42684fce5-combined-ca-bundle\") pod \"keystone-8b4477b4f-94nmj\" (UID: \"b0d311a8-b41b-4e58-8085-eec42684fce5\") " pod="openstack/keystone-8b4477b4f-94nmj" Mar 13 13:11:20.806952 master-0 kubenswrapper[28149]: I0313 13:11:20.806802 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b0d311a8-b41b-4e58-8085-eec42684fce5-internal-tls-certs\") pod \"keystone-8b4477b4f-94nmj\" (UID: \"b0d311a8-b41b-4e58-8085-eec42684fce5\") " pod="openstack/keystone-8b4477b4f-94nmj" Mar 13 13:11:20.806952 master-0 kubenswrapper[28149]: I0313 13:11:20.806850 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b0d311a8-b41b-4e58-8085-eec42684fce5-combined-ca-bundle\") pod \"keystone-8b4477b4f-94nmj\" (UID: 
\"b0d311a8-b41b-4e58-8085-eec42684fce5\") " pod="openstack/keystone-8b4477b4f-94nmj" Mar 13 13:11:20.811881 master-0 kubenswrapper[28149]: I0313 13:11:20.811830 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/b0d311a8-b41b-4e58-8085-eec42684fce5-fernet-keys\") pod \"keystone-8b4477b4f-94nmj\" (UID: \"b0d311a8-b41b-4e58-8085-eec42684fce5\") " pod="openstack/keystone-8b4477b4f-94nmj" Mar 13 13:11:20.815375 master-0 kubenswrapper[28149]: I0313 13:11:20.815292 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b0d311a8-b41b-4e58-8085-eec42684fce5-public-tls-certs\") pod \"keystone-8b4477b4f-94nmj\" (UID: \"b0d311a8-b41b-4e58-8085-eec42684fce5\") " pod="openstack/keystone-8b4477b4f-94nmj" Mar 13 13:11:20.816792 master-0 kubenswrapper[28149]: I0313 13:11:20.815954 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/b0d311a8-b41b-4e58-8085-eec42684fce5-credential-keys\") pod \"keystone-8b4477b4f-94nmj\" (UID: \"b0d311a8-b41b-4e58-8085-eec42684fce5\") " pod="openstack/keystone-8b4477b4f-94nmj" Mar 13 13:11:20.819999 master-0 kubenswrapper[28149]: I0313 13:11:20.819967 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b7c49\" (UniqueName: \"kubernetes.io/projected/b0d311a8-b41b-4e58-8085-eec42684fce5-kube-api-access-b7c49\") pod \"keystone-8b4477b4f-94nmj\" (UID: \"b0d311a8-b41b-4e58-8085-eec42684fce5\") " pod="openstack/keystone-8b4477b4f-94nmj" Mar 13 13:11:20.886217 master-0 kubenswrapper[28149]: I0313 13:11:20.886112 28149 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-6b96699696-vzrgx"] Mar 13 13:11:20.896864 master-0 kubenswrapper[28149]: I0313 13:11:20.892984 28149 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-6b96699696-vzrgx" Mar 13 13:11:20.909944 master-0 kubenswrapper[28149]: I0313 13:11:20.905869 28149 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-6b96699696-vzrgx"] Mar 13 13:11:20.909944 master-0 kubenswrapper[28149]: I0313 13:11:20.906208 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ea4701c8-792f-4a27-948e-cc2d36ad5739-combined-ca-bundle\") pod \"placement-699d7776-9kkdk\" (UID: \"ea4701c8-792f-4a27-948e-cc2d36ad5739\") " pod="openstack/placement-699d7776-9kkdk" Mar 13 13:11:20.909944 master-0 kubenswrapper[28149]: I0313 13:11:20.906315 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6zpst\" (UniqueName: \"kubernetes.io/projected/ea4701c8-792f-4a27-948e-cc2d36ad5739-kube-api-access-6zpst\") pod \"placement-699d7776-9kkdk\" (UID: \"ea4701c8-792f-4a27-948e-cc2d36ad5739\") " pod="openstack/placement-699d7776-9kkdk" Mar 13 13:11:20.909944 master-0 kubenswrapper[28149]: I0313 13:11:20.906370 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ea4701c8-792f-4a27-948e-cc2d36ad5739-scripts\") pod \"placement-699d7776-9kkdk\" (UID: \"ea4701c8-792f-4a27-948e-cc2d36ad5739\") " pod="openstack/placement-699d7776-9kkdk" Mar 13 13:11:20.909944 master-0 kubenswrapper[28149]: I0313 13:11:20.906425 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ea4701c8-792f-4a27-948e-cc2d36ad5739-config-data\") pod \"placement-699d7776-9kkdk\" (UID: \"ea4701c8-792f-4a27-948e-cc2d36ad5739\") " pod="openstack/placement-699d7776-9kkdk" Mar 13 13:11:20.909944 master-0 kubenswrapper[28149]: I0313 13:11:20.906477 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/ea4701c8-792f-4a27-948e-cc2d36ad5739-logs\") pod \"placement-699d7776-9kkdk\" (UID: \"ea4701c8-792f-4a27-948e-cc2d36ad5739\") " pod="openstack/placement-699d7776-9kkdk" Mar 13 13:11:20.909944 master-0 kubenswrapper[28149]: I0313 13:11:20.909003 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ea4701c8-792f-4a27-948e-cc2d36ad5739-internal-tls-certs\") pod \"placement-699d7776-9kkdk\" (UID: \"ea4701c8-792f-4a27-948e-cc2d36ad5739\") " pod="openstack/placement-699d7776-9kkdk" Mar 13 13:11:20.909944 master-0 kubenswrapper[28149]: I0313 13:11:20.909200 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ea4701c8-792f-4a27-948e-cc2d36ad5739-public-tls-certs\") pod \"placement-699d7776-9kkdk\" (UID: \"ea4701c8-792f-4a27-948e-cc2d36ad5739\") " pod="openstack/placement-699d7776-9kkdk" Mar 13 13:11:20.910527 master-0 kubenswrapper[28149]: I0313 13:11:20.910366 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ea4701c8-792f-4a27-948e-cc2d36ad5739-logs\") pod \"placement-699d7776-9kkdk\" (UID: \"ea4701c8-792f-4a27-948e-cc2d36ad5739\") " pod="openstack/placement-699d7776-9kkdk" Mar 13 13:11:20.921899 master-0 kubenswrapper[28149]: I0313 13:11:20.918949 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ea4701c8-792f-4a27-948e-cc2d36ad5739-scripts\") pod \"placement-699d7776-9kkdk\" (UID: \"ea4701c8-792f-4a27-948e-cc2d36ad5739\") " pod="openstack/placement-699d7776-9kkdk" Mar 13 13:11:20.932791 master-0 kubenswrapper[28149]: I0313 13:11:20.932750 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ea4701c8-792f-4a27-948e-cc2d36ad5739-public-tls-certs\") pod 
\"placement-699d7776-9kkdk\" (UID: \"ea4701c8-792f-4a27-948e-cc2d36ad5739\") " pod="openstack/placement-699d7776-9kkdk" Mar 13 13:11:20.933342 master-0 kubenswrapper[28149]: I0313 13:11:20.933159 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ea4701c8-792f-4a27-948e-cc2d36ad5739-combined-ca-bundle\") pod \"placement-699d7776-9kkdk\" (UID: \"ea4701c8-792f-4a27-948e-cc2d36ad5739\") " pod="openstack/placement-699d7776-9kkdk" Mar 13 13:11:21.203307 master-0 kubenswrapper[28149]: I0313 13:11:21.201643 28149 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-8b4477b4f-94nmj" Mar 13 13:11:21.203307 master-0 kubenswrapper[28149]: I0313 13:11:21.203154 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zpjzk\" (UniqueName: \"kubernetes.io/projected/1b787bf9-478d-410c-88e1-08e5ac30d5b6-kube-api-access-zpjzk\") pod \"placement-6b96699696-vzrgx\" (UID: \"1b787bf9-478d-410c-88e1-08e5ac30d5b6\") " pod="openstack/placement-6b96699696-vzrgx" Mar 13 13:11:21.203307 master-0 kubenswrapper[28149]: I0313 13:11:21.203260 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1b787bf9-478d-410c-88e1-08e5ac30d5b6-scripts\") pod \"placement-6b96699696-vzrgx\" (UID: \"1b787bf9-478d-410c-88e1-08e5ac30d5b6\") " pod="openstack/placement-6b96699696-vzrgx" Mar 13 13:11:21.203655 master-0 kubenswrapper[28149]: I0313 13:11:21.203642 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1b787bf9-478d-410c-88e1-08e5ac30d5b6-logs\") pod \"placement-6b96699696-vzrgx\" (UID: \"1b787bf9-478d-410c-88e1-08e5ac30d5b6\") " pod="openstack/placement-6b96699696-vzrgx" Mar 13 13:11:21.203816 master-0 kubenswrapper[28149]: I0313 
13:11:21.203703 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1b787bf9-478d-410c-88e1-08e5ac30d5b6-config-data\") pod \"placement-6b96699696-vzrgx\" (UID: \"1b787bf9-478d-410c-88e1-08e5ac30d5b6\") " pod="openstack/placement-6b96699696-vzrgx" Mar 13 13:11:21.203886 master-0 kubenswrapper[28149]: I0313 13:11:21.203862 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1b787bf9-478d-410c-88e1-08e5ac30d5b6-combined-ca-bundle\") pod \"placement-6b96699696-vzrgx\" (UID: \"1b787bf9-478d-410c-88e1-08e5ac30d5b6\") " pod="openstack/placement-6b96699696-vzrgx" Mar 13 13:11:21.203925 master-0 kubenswrapper[28149]: I0313 13:11:21.203905 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/1b787bf9-478d-410c-88e1-08e5ac30d5b6-public-tls-certs\") pod \"placement-6b96699696-vzrgx\" (UID: \"1b787bf9-478d-410c-88e1-08e5ac30d5b6\") " pod="openstack/placement-6b96699696-vzrgx" Mar 13 13:11:21.204165 master-0 kubenswrapper[28149]: I0313 13:11:21.203983 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/1b787bf9-478d-410c-88e1-08e5ac30d5b6-internal-tls-certs\") pod \"placement-6b96699696-vzrgx\" (UID: \"1b787bf9-478d-410c-88e1-08e5ac30d5b6\") " pod="openstack/placement-6b96699696-vzrgx" Mar 13 13:11:21.212980 master-0 kubenswrapper[28149]: I0313 13:11:21.211580 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6zpst\" (UniqueName: \"kubernetes.io/projected/ea4701c8-792f-4a27-948e-cc2d36ad5739-kube-api-access-6zpst\") pod \"placement-699d7776-9kkdk\" (UID: \"ea4701c8-792f-4a27-948e-cc2d36ad5739\") " 
pod="openstack/placement-699d7776-9kkdk" Mar 13 13:11:21.212980 master-0 kubenswrapper[28149]: I0313 13:11:21.212067 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ea4701c8-792f-4a27-948e-cc2d36ad5739-internal-tls-certs\") pod \"placement-699d7776-9kkdk\" (UID: \"ea4701c8-792f-4a27-948e-cc2d36ad5739\") " pod="openstack/placement-699d7776-9kkdk" Mar 13 13:11:21.234767 master-0 kubenswrapper[28149]: I0313 13:11:21.229790 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ea4701c8-792f-4a27-948e-cc2d36ad5739-config-data\") pod \"placement-699d7776-9kkdk\" (UID: \"ea4701c8-792f-4a27-948e-cc2d36ad5739\") " pod="openstack/placement-699d7776-9kkdk" Mar 13 13:11:21.311683 master-0 kubenswrapper[28149]: I0313 13:11:21.309884 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/1b787bf9-478d-410c-88e1-08e5ac30d5b6-internal-tls-certs\") pod \"placement-6b96699696-vzrgx\" (UID: \"1b787bf9-478d-410c-88e1-08e5ac30d5b6\") " pod="openstack/placement-6b96699696-vzrgx" Mar 13 13:11:21.311683 master-0 kubenswrapper[28149]: I0313 13:11:21.309997 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zpjzk\" (UniqueName: \"kubernetes.io/projected/1b787bf9-478d-410c-88e1-08e5ac30d5b6-kube-api-access-zpjzk\") pod \"placement-6b96699696-vzrgx\" (UID: \"1b787bf9-478d-410c-88e1-08e5ac30d5b6\") " pod="openstack/placement-6b96699696-vzrgx" Mar 13 13:11:21.311683 master-0 kubenswrapper[28149]: I0313 13:11:21.310218 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1b787bf9-478d-410c-88e1-08e5ac30d5b6-scripts\") pod \"placement-6b96699696-vzrgx\" (UID: \"1b787bf9-478d-410c-88e1-08e5ac30d5b6\") " 
pod="openstack/placement-6b96699696-vzrgx" Mar 13 13:11:21.311683 master-0 kubenswrapper[28149]: I0313 13:11:21.310244 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1b787bf9-478d-410c-88e1-08e5ac30d5b6-logs\") pod \"placement-6b96699696-vzrgx\" (UID: \"1b787bf9-478d-410c-88e1-08e5ac30d5b6\") " pod="openstack/placement-6b96699696-vzrgx" Mar 13 13:11:21.311683 master-0 kubenswrapper[28149]: I0313 13:11:21.310275 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1b787bf9-478d-410c-88e1-08e5ac30d5b6-config-data\") pod \"placement-6b96699696-vzrgx\" (UID: \"1b787bf9-478d-410c-88e1-08e5ac30d5b6\") " pod="openstack/placement-6b96699696-vzrgx" Mar 13 13:11:21.311683 master-0 kubenswrapper[28149]: I0313 13:11:21.310420 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1b787bf9-478d-410c-88e1-08e5ac30d5b6-combined-ca-bundle\") pod \"placement-6b96699696-vzrgx\" (UID: \"1b787bf9-478d-410c-88e1-08e5ac30d5b6\") " pod="openstack/placement-6b96699696-vzrgx" Mar 13 13:11:21.311683 master-0 kubenswrapper[28149]: I0313 13:11:21.310443 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/1b787bf9-478d-410c-88e1-08e5ac30d5b6-public-tls-certs\") pod \"placement-6b96699696-vzrgx\" (UID: \"1b787bf9-478d-410c-88e1-08e5ac30d5b6\") " pod="openstack/placement-6b96699696-vzrgx" Mar 13 13:11:21.314660 master-0 kubenswrapper[28149]: I0313 13:11:21.314623 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1b787bf9-478d-410c-88e1-08e5ac30d5b6-logs\") pod \"placement-6b96699696-vzrgx\" (UID: \"1b787bf9-478d-410c-88e1-08e5ac30d5b6\") " pod="openstack/placement-6b96699696-vzrgx" Mar 13 
13:11:21.316842 master-0 kubenswrapper[28149]: I0313 13:11:21.315603 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/1b787bf9-478d-410c-88e1-08e5ac30d5b6-public-tls-certs\") pod \"placement-6b96699696-vzrgx\" (UID: \"1b787bf9-478d-410c-88e1-08e5ac30d5b6\") " pod="openstack/placement-6b96699696-vzrgx" Mar 13 13:11:21.327063 master-0 kubenswrapper[28149]: I0313 13:11:21.327006 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1b787bf9-478d-410c-88e1-08e5ac30d5b6-scripts\") pod \"placement-6b96699696-vzrgx\" (UID: \"1b787bf9-478d-410c-88e1-08e5ac30d5b6\") " pod="openstack/placement-6b96699696-vzrgx" Mar 13 13:11:21.327310 master-0 kubenswrapper[28149]: I0313 13:11:21.327273 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1b787bf9-478d-410c-88e1-08e5ac30d5b6-combined-ca-bundle\") pod \"placement-6b96699696-vzrgx\" (UID: \"1b787bf9-478d-410c-88e1-08e5ac30d5b6\") " pod="openstack/placement-6b96699696-vzrgx" Mar 13 13:11:21.327888 master-0 kubenswrapper[28149]: I0313 13:11:21.327612 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1b787bf9-478d-410c-88e1-08e5ac30d5b6-config-data\") pod \"placement-6b96699696-vzrgx\" (UID: \"1b787bf9-478d-410c-88e1-08e5ac30d5b6\") " pod="openstack/placement-6b96699696-vzrgx" Mar 13 13:11:21.335740 master-0 kubenswrapper[28149]: I0313 13:11:21.335685 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/1b787bf9-478d-410c-88e1-08e5ac30d5b6-internal-tls-certs\") pod \"placement-6b96699696-vzrgx\" (UID: \"1b787bf9-478d-410c-88e1-08e5ac30d5b6\") " pod="openstack/placement-6b96699696-vzrgx" Mar 13 13:11:21.355240 master-0 kubenswrapper[28149]: I0313 
13:11:21.355174 28149 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-699d7776-9kkdk" Mar 13 13:11:21.366169 master-0 kubenswrapper[28149]: I0313 13:11:21.365497 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zpjzk\" (UniqueName: \"kubernetes.io/projected/1b787bf9-478d-410c-88e1-08e5ac30d5b6-kube-api-access-zpjzk\") pod \"placement-6b96699696-vzrgx\" (UID: \"1b787bf9-478d-410c-88e1-08e5ac30d5b6\") " pod="openstack/placement-6b96699696-vzrgx" Mar 13 13:11:21.445294 master-0 kubenswrapper[28149]: W0313 13:11:21.441574 28149 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf9e03bf1_b908_4148_8838_f54eaa369e6a.slice/crio-5c459f7a4847b0570e04c3439fbf5e0a9b56ca2eb0a245e11864fd5eb5f3425b WatchSource:0}: Error finding container 5c459f7a4847b0570e04c3439fbf5e0a9b56ca2eb0a245e11864fd5eb5f3425b: Status 404 returned error can't find the container with id 5c459f7a4847b0570e04c3439fbf5e0a9b56ca2eb0a245e11864fd5eb5f3425b Mar 13 13:11:21.445294 master-0 kubenswrapper[28149]: I0313 13:11:21.444057 28149 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-e6fbd-default-external-api-0"] Mar 13 13:11:21.574358 master-0 kubenswrapper[28149]: I0313 13:11:21.555825 28149 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-6b96699696-vzrgx" Mar 13 13:11:21.876060 master-0 kubenswrapper[28149]: I0313 13:11:21.875982 28149 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-8b4477b4f-94nmj"] Mar 13 13:11:21.933229 master-0 kubenswrapper[28149]: I0313 13:11:21.932687 28149 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-e6fbd-default-internal-api-0"] Mar 13 13:11:22.482106 master-0 kubenswrapper[28149]: I0313 13:11:22.482025 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-e6fbd-default-external-api-0" event={"ID":"f9e03bf1-b908-4148-8838-f54eaa369e6a","Type":"ContainerStarted","Data":"5c459f7a4847b0570e04c3439fbf5e0a9b56ca2eb0a245e11864fd5eb5f3425b"} Mar 13 13:11:22.486571 master-0 kubenswrapper[28149]: I0313 13:11:22.486509 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-8b4477b4f-94nmj" event={"ID":"b0d311a8-b41b-4e58-8085-eec42684fce5","Type":"ContainerStarted","Data":"59bfad9811cbd5544b8edbe75026370cf7beb89417ad0c57a8dd3d8394aa850d"} Mar 13 13:11:22.504066 master-0 kubenswrapper[28149]: I0313 13:11:22.503951 28149 generic.go:334] "Generic (PLEG): container finished" podID="0b7e43c1-e19e-4691-a5b4-2a2197764944" containerID="1564d23693e24fc0dfb0f09627231d7cfcd94c03db354c86b95875eb37559ef9" exitCode=0 Mar 13 13:11:22.505156 master-0 kubenswrapper[28149]: I0313 13:11:22.504491 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-db-sync-h8h9t" event={"ID":"0b7e43c1-e19e-4691-a5b4-2a2197764944","Type":"ContainerDied","Data":"1564d23693e24fc0dfb0f09627231d7cfcd94c03db354c86b95875eb37559ef9"} Mar 13 13:11:22.522573 master-0 kubenswrapper[28149]: I0313 13:11:22.514453 28149 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-699d7776-9kkdk"] Mar 13 13:11:22.539930 master-0 kubenswrapper[28149]: I0313 13:11:22.529780 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/glance-e6fbd-default-internal-api-0" event={"ID":"611eba2b-39d1-43b8-bdce-7b7c5436180c","Type":"ContainerStarted","Data":"c2864960ae9389fb085c5d1a6210d7d996524f4139d5a6368982290afff235ab"} Mar 13 13:11:22.921184 master-0 kubenswrapper[28149]: I0313 13:11:22.920519 28149 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-6b96699696-vzrgx"] Mar 13 13:11:23.554629 master-0 kubenswrapper[28149]: I0313 13:11:23.551746 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-e6fbd-default-internal-api-0" event={"ID":"611eba2b-39d1-43b8-bdce-7b7c5436180c","Type":"ContainerStarted","Data":"3605fb008c93b616a59c49c14ce99dd33736be58cdf499b88eb71ef7ba777d9a"} Mar 13 13:11:23.555164 master-0 kubenswrapper[28149]: I0313 13:11:23.555002 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-6b96699696-vzrgx" event={"ID":"1b787bf9-478d-410c-88e1-08e5ac30d5b6","Type":"ContainerStarted","Data":"322519bf9a8aa4325a633df5714ecded294a2365b2a8256d1bee346d141894f1"} Mar 13 13:11:23.555164 master-0 kubenswrapper[28149]: I0313 13:11:23.555044 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-6b96699696-vzrgx" event={"ID":"1b787bf9-478d-410c-88e1-08e5ac30d5b6","Type":"ContainerStarted","Data":"105b694ef5e5b520b43eb7a8acc016df68d82ac6a451ddcb3e6f4d7543e450e3"} Mar 13 13:11:23.569165 master-0 kubenswrapper[28149]: I0313 13:11:23.567213 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-699d7776-9kkdk" event={"ID":"ea4701c8-792f-4a27-948e-cc2d36ad5739","Type":"ContainerStarted","Data":"0de68570e0c25f556b25a0d514a40bf6e6b23fdd944c31b42c9a5dee0c0f377f"} Mar 13 13:11:23.569165 master-0 kubenswrapper[28149]: I0313 13:11:23.567274 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-699d7776-9kkdk" 
event={"ID":"ea4701c8-792f-4a27-948e-cc2d36ad5739","Type":"ContainerStarted","Data":"d83b312654b71814018bf82aefcc44782c1b8a50ca051dac7c42951c264b572f"} Mar 13 13:11:23.569165 master-0 kubenswrapper[28149]: I0313 13:11:23.567289 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-699d7776-9kkdk" event={"ID":"ea4701c8-792f-4a27-948e-cc2d36ad5739","Type":"ContainerStarted","Data":"07fe23b1b5b47b55f113f422e7f7413d0f49d322ce63394c42771441224379f3"} Mar 13 13:11:23.569165 master-0 kubenswrapper[28149]: I0313 13:11:23.568245 28149 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-699d7776-9kkdk" Mar 13 13:11:23.569165 master-0 kubenswrapper[28149]: I0313 13:11:23.568304 28149 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-699d7776-9kkdk" Mar 13 13:11:23.580167 master-0 kubenswrapper[28149]: I0313 13:11:23.576591 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-ee0a2-db-sync-tv65l" event={"ID":"95eb9b96-2f27-4701-b62d-b7026cb009ec","Type":"ContainerStarted","Data":"46a940edb344b76f7a7fc8d09b3e0ad7820cc5058ee1f1ba7ab6eab240f4b559"} Mar 13 13:11:23.585957 master-0 kubenswrapper[28149]: I0313 13:11:23.581514 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-e6fbd-default-external-api-0" event={"ID":"f9e03bf1-b908-4148-8838-f54eaa369e6a","Type":"ContainerStarted","Data":"b735b158ae0f1f81c167b2e3ec4bb07208ae9e3e1a523919c59da19d0ac89b38"} Mar 13 13:11:23.585957 master-0 kubenswrapper[28149]: I0313 13:11:23.584253 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-8b4477b4f-94nmj" event={"ID":"b0d311a8-b41b-4e58-8085-eec42684fce5","Type":"ContainerStarted","Data":"7b8811cb85aa77f56fcdd98a28c11ba979c0c2dd828b64ea110f7f170244898c"} Mar 13 13:11:23.592166 master-0 kubenswrapper[28149]: I0313 13:11:23.588471 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/ironic-db-sync-h8h9t" event={"ID":"0b7e43c1-e19e-4691-a5b4-2a2197764944","Type":"ContainerStarted","Data":"543a1839edafb69db2d7d7f2f4c74576b687c77c31b8c1e238dd942ea7d7c4ba"} Mar 13 13:11:23.664233 master-0 kubenswrapper[28149]: I0313 13:11:23.661836 28149 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-699d7776-9kkdk" podStartSLOduration=3.661816061 podStartE2EDuration="3.661816061s" podCreationTimestamp="2026-03-13 13:11:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 13:11:23.631507474 +0000 UTC m=+1057.284972643" watchObservedRunningTime="2026-03-13 13:11:23.661816061 +0000 UTC m=+1057.315281220" Mar 13 13:11:23.678157 master-0 kubenswrapper[28149]: I0313 13:11:23.674739 28149 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ironic-db-sync-h8h9t" podStartSLOduration=5.736526682 podStartE2EDuration="30.674717678s" podCreationTimestamp="2026-03-13 13:10:53 +0000 UTC" firstStartedPulling="2026-03-13 13:10:55.68144314 +0000 UTC m=+1029.334908299" lastFinishedPulling="2026-03-13 13:11:20.619634136 +0000 UTC m=+1054.273099295" observedRunningTime="2026-03-13 13:11:23.669692002 +0000 UTC m=+1057.323157171" watchObservedRunningTime="2026-03-13 13:11:23.674717678 +0000 UTC m=+1057.328182847" Mar 13 13:11:23.762160 master-0 kubenswrapper[28149]: I0313 13:11:23.759567 28149 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-8b4477b4f-94nmj" podStartSLOduration=3.759548422 podStartE2EDuration="3.759548422s" podCreationTimestamp="2026-03-13 13:11:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 13:11:23.716283807 +0000 UTC m=+1057.369748966" watchObservedRunningTime="2026-03-13 13:11:23.759548422 +0000 UTC m=+1057.413013581" Mar 13 
13:11:23.782157 master-0 kubenswrapper[28149]: I0313 13:11:23.777131 28149 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-ee0a2-db-sync-tv65l" podStartSLOduration=7.9620268450000005 podStartE2EDuration="42.777098075s" podCreationTimestamp="2026-03-13 13:10:41 +0000 UTC" firstStartedPulling="2026-03-13 13:10:45.656486137 +0000 UTC m=+1019.309951296" lastFinishedPulling="2026-03-13 13:11:20.471557367 +0000 UTC m=+1054.125022526" observedRunningTime="2026-03-13 13:11:23.738182197 +0000 UTC m=+1057.391647356" watchObservedRunningTime="2026-03-13 13:11:23.777098075 +0000 UTC m=+1057.430563234" Mar 13 13:11:24.607556 master-0 kubenswrapper[28149]: I0313 13:11:24.607482 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-e6fbd-default-internal-api-0" event={"ID":"611eba2b-39d1-43b8-bdce-7b7c5436180c","Type":"ContainerStarted","Data":"7705e241a1083d4cd9858d6d7c541bec846e153fa8108212316ee24486559c75"} Mar 13 13:11:24.611259 master-0 kubenswrapper[28149]: I0313 13:11:24.611199 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-6b96699696-vzrgx" event={"ID":"1b787bf9-478d-410c-88e1-08e5ac30d5b6","Type":"ContainerStarted","Data":"676b354d8467aee6905e3840461008cad6305667a01eda06cfc7dfeb69f891e5"} Mar 13 13:11:24.612060 master-0 kubenswrapper[28149]: I0313 13:11:24.612018 28149 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-6b96699696-vzrgx" Mar 13 13:11:24.612871 master-0 kubenswrapper[28149]: I0313 13:11:24.612842 28149 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-6b96699696-vzrgx" Mar 13 13:11:24.615344 master-0 kubenswrapper[28149]: I0313 13:11:24.615279 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-e6fbd-default-external-api-0" 
event={"ID":"f9e03bf1-b908-4148-8838-f54eaa369e6a","Type":"ContainerStarted","Data":"fa6e60ae3af7814543f74294facd270e11b151ddfa03c5dc99c77d7ed6414b4a"} Mar 13 13:11:24.616885 master-0 kubenswrapper[28149]: I0313 13:11:24.616861 28149 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/keystone-8b4477b4f-94nmj" Mar 13 13:11:25.073289 master-0 kubenswrapper[28149]: I0313 13:11:25.073180 28149 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-6b96699696-vzrgx" podStartSLOduration=5.073105785 podStartE2EDuration="5.073105785s" podCreationTimestamp="2026-03-13 13:11:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 13:11:25.049166269 +0000 UTC m=+1058.702631438" watchObservedRunningTime="2026-03-13 13:11:25.073105785 +0000 UTC m=+1058.726570944" Mar 13 13:11:25.256163 master-0 kubenswrapper[28149]: I0313 13:11:25.253702 28149 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-e6fbd-default-external-api-0" podStartSLOduration=25.253673808 podStartE2EDuration="25.253673808s" podCreationTimestamp="2026-03-13 13:11:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 13:11:25.232703544 +0000 UTC m=+1058.886168713" watchObservedRunningTime="2026-03-13 13:11:25.253673808 +0000 UTC m=+1058.907138977" Mar 13 13:11:25.786684 master-0 kubenswrapper[28149]: I0313 13:11:25.786588 28149 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-e6fbd-default-internal-api-0" podStartSLOduration=26.786560462 podStartE2EDuration="26.786560462s" podCreationTimestamp="2026-03-13 13:10:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 13:11:25.776614664 +0000 
UTC m=+1059.430079823" watchObservedRunningTime="2026-03-13 13:11:25.786560462 +0000 UTC m=+1059.440025621" Mar 13 13:11:27.778061 master-0 kubenswrapper[28149]: I0313 13:11:27.777987 28149 generic.go:334] "Generic (PLEG): container finished" podID="c4483640-14d3-42de-bad4-48fe97f66cad" containerID="0e024204c925f81da8f7c22fb0191b551d163019d22ddf6e0606276ae2b14bf3" exitCode=0 Mar 13 13:11:27.778061 master-0 kubenswrapper[28149]: I0313 13:11:27.778046 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-kdkvp" event={"ID":"c4483640-14d3-42de-bad4-48fe97f66cad","Type":"ContainerDied","Data":"0e024204c925f81da8f7c22fb0191b551d163019d22ddf6e0606276ae2b14bf3"} Mar 13 13:11:29.596613 master-0 kubenswrapper[28149]: I0313 13:11:29.596501 28149 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-kdkvp" Mar 13 13:11:29.689265 master-0 kubenswrapper[28149]: I0313 13:11:29.689112 28149 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/c4483640-14d3-42de-bad4-48fe97f66cad-config\") pod \"c4483640-14d3-42de-bad4-48fe97f66cad\" (UID: \"c4483640-14d3-42de-bad4-48fe97f66cad\") " Mar 13 13:11:29.689265 master-0 kubenswrapper[28149]: I0313 13:11:29.689282 28149 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jfq2h\" (UniqueName: \"kubernetes.io/projected/c4483640-14d3-42de-bad4-48fe97f66cad-kube-api-access-jfq2h\") pod \"c4483640-14d3-42de-bad4-48fe97f66cad\" (UID: \"c4483640-14d3-42de-bad4-48fe97f66cad\") " Mar 13 13:11:29.689610 master-0 kubenswrapper[28149]: I0313 13:11:29.689338 28149 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c4483640-14d3-42de-bad4-48fe97f66cad-combined-ca-bundle\") pod \"c4483640-14d3-42de-bad4-48fe97f66cad\" (UID: \"c4483640-14d3-42de-bad4-48fe97f66cad\") " 
Mar 13 13:11:29.717278 master-0 kubenswrapper[28149]: I0313 13:11:29.716681 28149 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c4483640-14d3-42de-bad4-48fe97f66cad-kube-api-access-jfq2h" (OuterVolumeSpecName: "kube-api-access-jfq2h") pod "c4483640-14d3-42de-bad4-48fe97f66cad" (UID: "c4483640-14d3-42de-bad4-48fe97f66cad"). InnerVolumeSpecName "kube-api-access-jfq2h". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 13:11:29.718706 master-0 kubenswrapper[28149]: I0313 13:11:29.718646 28149 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c4483640-14d3-42de-bad4-48fe97f66cad-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c4483640-14d3-42de-bad4-48fe97f66cad" (UID: "c4483640-14d3-42de-bad4-48fe97f66cad"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 13:11:29.728314 master-0 kubenswrapper[28149]: I0313 13:11:29.728224 28149 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c4483640-14d3-42de-bad4-48fe97f66cad-config" (OuterVolumeSpecName: "config") pod "c4483640-14d3-42de-bad4-48fe97f66cad" (UID: "c4483640-14d3-42de-bad4-48fe97f66cad"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 13:11:29.795477 master-0 kubenswrapper[28149]: I0313 13:11:29.795334 28149 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/c4483640-14d3-42de-bad4-48fe97f66cad-config\") on node \"master-0\" DevicePath \"\"" Mar 13 13:11:29.795477 master-0 kubenswrapper[28149]: I0313 13:11:29.795374 28149 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jfq2h\" (UniqueName: \"kubernetes.io/projected/c4483640-14d3-42de-bad4-48fe97f66cad-kube-api-access-jfq2h\") on node \"master-0\" DevicePath \"\"" Mar 13 13:11:29.795477 master-0 kubenswrapper[28149]: I0313 13:11:29.795390 28149 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c4483640-14d3-42de-bad4-48fe97f66cad-combined-ca-bundle\") on node \"master-0\" DevicePath \"\"" Mar 13 13:11:29.806578 master-0 kubenswrapper[28149]: I0313 13:11:29.806523 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-kdkvp" event={"ID":"c4483640-14d3-42de-bad4-48fe97f66cad","Type":"ContainerDied","Data":"6b685ef367473edbbfca91901f442ef04856cb5bb7adfab4f42d001edf21e487"} Mar 13 13:11:29.806578 master-0 kubenswrapper[28149]: I0313 13:11:29.806567 28149 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6b685ef367473edbbfca91901f442ef04856cb5bb7adfab4f42d001edf21e487" Mar 13 13:11:29.806952 master-0 kubenswrapper[28149]: I0313 13:11:29.806637 28149 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-sync-kdkvp" Mar 13 13:11:31.602487 master-0 kubenswrapper[28149]: I0313 13:11:31.602431 28149 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-e6fbd-default-internal-api-0" Mar 13 13:11:31.603059 master-0 kubenswrapper[28149]: I0313 13:11:31.603044 28149 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-e6fbd-default-internal-api-0" Mar 13 13:11:31.603165 master-0 kubenswrapper[28149]: I0313 13:11:31.603153 28149 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-e6fbd-default-internal-api-0" Mar 13 13:11:31.603256 master-0 kubenswrapper[28149]: I0313 13:11:31.603246 28149 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-e6fbd-default-internal-api-0" Mar 13 13:11:31.647903 master-0 kubenswrapper[28149]: I0313 13:11:31.647557 28149 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-e6fbd-default-internal-api-0" Mar 13 13:11:31.661889 master-0 kubenswrapper[28149]: I0313 13:11:31.661767 28149 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-e6fbd-default-internal-api-0" Mar 13 13:11:32.476401 master-0 kubenswrapper[28149]: I0313 13:11:32.476187 28149 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6cf64fcfbc-mg9tl"] Mar 13 13:11:32.477004 master-0 kubenswrapper[28149]: E0313 13:11:32.476971 28149 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c4483640-14d3-42de-bad4-48fe97f66cad" containerName="neutron-db-sync" Mar 13 13:11:32.477004 master-0 kubenswrapper[28149]: I0313 13:11:32.476996 28149 state_mem.go:107] "Deleted CPUSet assignment" podUID="c4483640-14d3-42de-bad4-48fe97f66cad" containerName="neutron-db-sync" Mar 13 13:11:32.477377 master-0 kubenswrapper[28149]: I0313 13:11:32.477347 28149 memory_manager.go:354] "RemoveStaleState 
removing state" podUID="c4483640-14d3-42de-bad4-48fe97f66cad" containerName="neutron-db-sync" Mar 13 13:11:32.479914 master-0 kubenswrapper[28149]: I0313 13:11:32.479467 28149 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6cf64fcfbc-mg9tl" Mar 13 13:11:32.625174 master-0 kubenswrapper[28149]: I0313 13:11:32.618633 28149 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6cf64fcfbc-mg9tl"] Mar 13 13:11:32.637242 master-0 kubenswrapper[28149]: I0313 13:11:32.632655 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6cqrb\" (UniqueName: \"kubernetes.io/projected/f4d26441-5029-4cf0-9ef3-cba4ed2390e2-kube-api-access-6cqrb\") pod \"dnsmasq-dns-6cf64fcfbc-mg9tl\" (UID: \"f4d26441-5029-4cf0-9ef3-cba4ed2390e2\") " pod="openstack/dnsmasq-dns-6cf64fcfbc-mg9tl" Mar 13 13:11:32.637242 master-0 kubenswrapper[28149]: I0313 13:11:32.632739 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f4d26441-5029-4cf0-9ef3-cba4ed2390e2-dns-swift-storage-0\") pod \"dnsmasq-dns-6cf64fcfbc-mg9tl\" (UID: \"f4d26441-5029-4cf0-9ef3-cba4ed2390e2\") " pod="openstack/dnsmasq-dns-6cf64fcfbc-mg9tl" Mar 13 13:11:32.637242 master-0 kubenswrapper[28149]: I0313 13:11:32.632798 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f4d26441-5029-4cf0-9ef3-cba4ed2390e2-ovsdbserver-nb\") pod \"dnsmasq-dns-6cf64fcfbc-mg9tl\" (UID: \"f4d26441-5029-4cf0-9ef3-cba4ed2390e2\") " pod="openstack/dnsmasq-dns-6cf64fcfbc-mg9tl" Mar 13 13:11:32.637242 master-0 kubenswrapper[28149]: I0313 13:11:32.632882 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: 
\"kubernetes.io/configmap/f4d26441-5029-4cf0-9ef3-cba4ed2390e2-ovsdbserver-sb\") pod \"dnsmasq-dns-6cf64fcfbc-mg9tl\" (UID: \"f4d26441-5029-4cf0-9ef3-cba4ed2390e2\") " pod="openstack/dnsmasq-dns-6cf64fcfbc-mg9tl" Mar 13 13:11:32.637242 master-0 kubenswrapper[28149]: I0313 13:11:32.632993 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f4d26441-5029-4cf0-9ef3-cba4ed2390e2-config\") pod \"dnsmasq-dns-6cf64fcfbc-mg9tl\" (UID: \"f4d26441-5029-4cf0-9ef3-cba4ed2390e2\") " pod="openstack/dnsmasq-dns-6cf64fcfbc-mg9tl" Mar 13 13:11:32.637242 master-0 kubenswrapper[28149]: I0313 13:11:32.633066 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f4d26441-5029-4cf0-9ef3-cba4ed2390e2-dns-svc\") pod \"dnsmasq-dns-6cf64fcfbc-mg9tl\" (UID: \"f4d26441-5029-4cf0-9ef3-cba4ed2390e2\") " pod="openstack/dnsmasq-dns-6cf64fcfbc-mg9tl" Mar 13 13:11:32.652361 master-0 kubenswrapper[28149]: I0313 13:11:32.652282 28149 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-e6fbd-default-external-api-0" Mar 13 13:11:32.652361 master-0 kubenswrapper[28149]: I0313 13:11:32.652364 28149 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-e6fbd-default-external-api-0" Mar 13 13:11:32.652361 master-0 kubenswrapper[28149]: I0313 13:11:32.652378 28149 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-e6fbd-default-external-api-0" Mar 13 13:11:32.652840 master-0 kubenswrapper[28149]: I0313 13:11:32.652801 28149 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-e6fbd-default-external-api-0" Mar 13 13:11:33.004493 master-0 kubenswrapper[28149]: I0313 13:11:33.004423 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"config\" (UniqueName: \"kubernetes.io/configmap/f4d26441-5029-4cf0-9ef3-cba4ed2390e2-config\") pod \"dnsmasq-dns-6cf64fcfbc-mg9tl\" (UID: \"f4d26441-5029-4cf0-9ef3-cba4ed2390e2\") " pod="openstack/dnsmasq-dns-6cf64fcfbc-mg9tl" Mar 13 13:11:33.004812 master-0 kubenswrapper[28149]: I0313 13:11:33.004613 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f4d26441-5029-4cf0-9ef3-cba4ed2390e2-dns-svc\") pod \"dnsmasq-dns-6cf64fcfbc-mg9tl\" (UID: \"f4d26441-5029-4cf0-9ef3-cba4ed2390e2\") " pod="openstack/dnsmasq-dns-6cf64fcfbc-mg9tl" Mar 13 13:11:33.004812 master-0 kubenswrapper[28149]: I0313 13:11:33.004752 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6cqrb\" (UniqueName: \"kubernetes.io/projected/f4d26441-5029-4cf0-9ef3-cba4ed2390e2-kube-api-access-6cqrb\") pod \"dnsmasq-dns-6cf64fcfbc-mg9tl\" (UID: \"f4d26441-5029-4cf0-9ef3-cba4ed2390e2\") " pod="openstack/dnsmasq-dns-6cf64fcfbc-mg9tl" Mar 13 13:11:33.004941 master-0 kubenswrapper[28149]: I0313 13:11:33.004834 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f4d26441-5029-4cf0-9ef3-cba4ed2390e2-dns-swift-storage-0\") pod \"dnsmasq-dns-6cf64fcfbc-mg9tl\" (UID: \"f4d26441-5029-4cf0-9ef3-cba4ed2390e2\") " pod="openstack/dnsmasq-dns-6cf64fcfbc-mg9tl" Mar 13 13:11:33.004941 master-0 kubenswrapper[28149]: I0313 13:11:33.004900 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f4d26441-5029-4cf0-9ef3-cba4ed2390e2-ovsdbserver-nb\") pod \"dnsmasq-dns-6cf64fcfbc-mg9tl\" (UID: \"f4d26441-5029-4cf0-9ef3-cba4ed2390e2\") " pod="openstack/dnsmasq-dns-6cf64fcfbc-mg9tl" Mar 13 13:11:33.005293 master-0 kubenswrapper[28149]: I0313 13:11:33.005155 28149 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f4d26441-5029-4cf0-9ef3-cba4ed2390e2-ovsdbserver-sb\") pod \"dnsmasq-dns-6cf64fcfbc-mg9tl\" (UID: \"f4d26441-5029-4cf0-9ef3-cba4ed2390e2\") " pod="openstack/dnsmasq-dns-6cf64fcfbc-mg9tl" Mar 13 13:11:33.007925 master-0 kubenswrapper[28149]: I0313 13:11:33.006402 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f4d26441-5029-4cf0-9ef3-cba4ed2390e2-dns-svc\") pod \"dnsmasq-dns-6cf64fcfbc-mg9tl\" (UID: \"f4d26441-5029-4cf0-9ef3-cba4ed2390e2\") " pod="openstack/dnsmasq-dns-6cf64fcfbc-mg9tl" Mar 13 13:11:33.011889 master-0 kubenswrapper[28149]: I0313 13:11:33.011832 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f4d26441-5029-4cf0-9ef3-cba4ed2390e2-dns-swift-storage-0\") pod \"dnsmasq-dns-6cf64fcfbc-mg9tl\" (UID: \"f4d26441-5029-4cf0-9ef3-cba4ed2390e2\") " pod="openstack/dnsmasq-dns-6cf64fcfbc-mg9tl" Mar 13 13:11:33.025037 master-0 kubenswrapper[28149]: I0313 13:11:33.018715 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f4d26441-5029-4cf0-9ef3-cba4ed2390e2-config\") pod \"dnsmasq-dns-6cf64fcfbc-mg9tl\" (UID: \"f4d26441-5029-4cf0-9ef3-cba4ed2390e2\") " pod="openstack/dnsmasq-dns-6cf64fcfbc-mg9tl" Mar 13 13:11:33.025037 master-0 kubenswrapper[28149]: I0313 13:11:33.023560 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f4d26441-5029-4cf0-9ef3-cba4ed2390e2-ovsdbserver-sb\") pod \"dnsmasq-dns-6cf64fcfbc-mg9tl\" (UID: \"f4d26441-5029-4cf0-9ef3-cba4ed2390e2\") " pod="openstack/dnsmasq-dns-6cf64fcfbc-mg9tl" Mar 13 13:11:33.055294 master-0 kubenswrapper[28149]: I0313 13:11:33.050359 28149 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" 
pod="openstack/glance-e6fbd-default-external-api-0" Mar 13 13:11:33.069435 master-0 kubenswrapper[28149]: I0313 13:11:33.059243 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f4d26441-5029-4cf0-9ef3-cba4ed2390e2-ovsdbserver-nb\") pod \"dnsmasq-dns-6cf64fcfbc-mg9tl\" (UID: \"f4d26441-5029-4cf0-9ef3-cba4ed2390e2\") " pod="openstack/dnsmasq-dns-6cf64fcfbc-mg9tl" Mar 13 13:11:33.069435 master-0 kubenswrapper[28149]: I0313 13:11:33.062792 28149 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-e6fbd-default-external-api-0" Mar 13 13:11:33.595930 master-0 kubenswrapper[28149]: I0313 13:11:33.595842 28149 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-768869957b-ffkcl"] Mar 13 13:11:33.604982 master-0 kubenswrapper[28149]: I0313 13:11:33.604930 28149 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-768869957b-ffkcl" Mar 13 13:11:33.620044 master-0 kubenswrapper[28149]: I0313 13:11:33.619990 28149 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-ovndbs" Mar 13 13:11:33.620388 master-0 kubenswrapper[28149]: I0313 13:11:33.620359 28149 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Mar 13 13:11:33.620549 master-0 kubenswrapper[28149]: I0313 13:11:33.620523 28149 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Mar 13 13:11:33.626254 master-0 kubenswrapper[28149]: I0313 13:11:33.625870 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6cqrb\" (UniqueName: \"kubernetes.io/projected/f4d26441-5029-4cf0-9ef3-cba4ed2390e2-kube-api-access-6cqrb\") pod \"dnsmasq-dns-6cf64fcfbc-mg9tl\" (UID: \"f4d26441-5029-4cf0-9ef3-cba4ed2390e2\") " pod="openstack/dnsmasq-dns-6cf64fcfbc-mg9tl" Mar 13 13:11:33.657632 master-0 
kubenswrapper[28149]: I0313 13:11:33.657159 28149 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-768869957b-ffkcl"] Mar 13 13:11:33.712806 master-0 kubenswrapper[28149]: I0313 13:11:33.712727 28149 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6cf64fcfbc-mg9tl" Mar 13 13:11:34.294641 master-0 kubenswrapper[28149]: I0313 13:11:34.292630 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/c8579a3e-7e92-42d9-b21f-6339bc1ebb4f-ovndb-tls-certs\") pod \"neutron-768869957b-ffkcl\" (UID: \"c8579a3e-7e92-42d9-b21f-6339bc1ebb4f\") " pod="openstack/neutron-768869957b-ffkcl" Mar 13 13:11:34.294641 master-0 kubenswrapper[28149]: I0313 13:11:34.292775 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/c8579a3e-7e92-42d9-b21f-6339bc1ebb4f-httpd-config\") pod \"neutron-768869957b-ffkcl\" (UID: \"c8579a3e-7e92-42d9-b21f-6339bc1ebb4f\") " pod="openstack/neutron-768869957b-ffkcl" Mar 13 13:11:34.294641 master-0 kubenswrapper[28149]: I0313 13:11:34.292846 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5l2k2\" (UniqueName: \"kubernetes.io/projected/c8579a3e-7e92-42d9-b21f-6339bc1ebb4f-kube-api-access-5l2k2\") pod \"neutron-768869957b-ffkcl\" (UID: \"c8579a3e-7e92-42d9-b21f-6339bc1ebb4f\") " pod="openstack/neutron-768869957b-ffkcl" Mar 13 13:11:34.294641 master-0 kubenswrapper[28149]: I0313 13:11:34.292910 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/c8579a3e-7e92-42d9-b21f-6339bc1ebb4f-config\") pod \"neutron-768869957b-ffkcl\" (UID: \"c8579a3e-7e92-42d9-b21f-6339bc1ebb4f\") " pod="openstack/neutron-768869957b-ffkcl" Mar 13 
13:11:34.300103 master-0 kubenswrapper[28149]: I0313 13:11:34.300025 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c8579a3e-7e92-42d9-b21f-6339bc1ebb4f-combined-ca-bundle\") pod \"neutron-768869957b-ffkcl\" (UID: \"c8579a3e-7e92-42d9-b21f-6339bc1ebb4f\") " pod="openstack/neutron-768869957b-ffkcl" Mar 13 13:11:34.402871 master-0 kubenswrapper[28149]: I0313 13:11:34.402811 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/c8579a3e-7e92-42d9-b21f-6339bc1ebb4f-ovndb-tls-certs\") pod \"neutron-768869957b-ffkcl\" (UID: \"c8579a3e-7e92-42d9-b21f-6339bc1ebb4f\") " pod="openstack/neutron-768869957b-ffkcl" Mar 13 13:11:34.403252 master-0 kubenswrapper[28149]: I0313 13:11:34.403231 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/c8579a3e-7e92-42d9-b21f-6339bc1ebb4f-httpd-config\") pod \"neutron-768869957b-ffkcl\" (UID: \"c8579a3e-7e92-42d9-b21f-6339bc1ebb4f\") " pod="openstack/neutron-768869957b-ffkcl" Mar 13 13:11:34.403373 master-0 kubenswrapper[28149]: I0313 13:11:34.403358 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5l2k2\" (UniqueName: \"kubernetes.io/projected/c8579a3e-7e92-42d9-b21f-6339bc1ebb4f-kube-api-access-5l2k2\") pod \"neutron-768869957b-ffkcl\" (UID: \"c8579a3e-7e92-42d9-b21f-6339bc1ebb4f\") " pod="openstack/neutron-768869957b-ffkcl" Mar 13 13:11:34.403502 master-0 kubenswrapper[28149]: I0313 13:11:34.403486 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/c8579a3e-7e92-42d9-b21f-6339bc1ebb4f-config\") pod \"neutron-768869957b-ffkcl\" (UID: \"c8579a3e-7e92-42d9-b21f-6339bc1ebb4f\") " pod="openstack/neutron-768869957b-ffkcl" Mar 13 
13:11:34.406983 master-0 kubenswrapper[28149]: I0313 13:11:34.403696 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c8579a3e-7e92-42d9-b21f-6339bc1ebb4f-combined-ca-bundle\") pod \"neutron-768869957b-ffkcl\" (UID: \"c8579a3e-7e92-42d9-b21f-6339bc1ebb4f\") " pod="openstack/neutron-768869957b-ffkcl" Mar 13 13:11:34.408860 master-0 kubenswrapper[28149]: I0313 13:11:34.408823 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/c8579a3e-7e92-42d9-b21f-6339bc1ebb4f-httpd-config\") pod \"neutron-768869957b-ffkcl\" (UID: \"c8579a3e-7e92-42d9-b21f-6339bc1ebb4f\") " pod="openstack/neutron-768869957b-ffkcl" Mar 13 13:11:34.412903 master-0 kubenswrapper[28149]: I0313 13:11:34.412870 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c8579a3e-7e92-42d9-b21f-6339bc1ebb4f-combined-ca-bundle\") pod \"neutron-768869957b-ffkcl\" (UID: \"c8579a3e-7e92-42d9-b21f-6339bc1ebb4f\") " pod="openstack/neutron-768869957b-ffkcl" Mar 13 13:11:34.416824 master-0 kubenswrapper[28149]: I0313 13:11:34.416745 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/c8579a3e-7e92-42d9-b21f-6339bc1ebb4f-ovndb-tls-certs\") pod \"neutron-768869957b-ffkcl\" (UID: \"c8579a3e-7e92-42d9-b21f-6339bc1ebb4f\") " pod="openstack/neutron-768869957b-ffkcl" Mar 13 13:11:34.421879 master-0 kubenswrapper[28149]: I0313 13:11:34.421825 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/c8579a3e-7e92-42d9-b21f-6339bc1ebb4f-config\") pod \"neutron-768869957b-ffkcl\" (UID: \"c8579a3e-7e92-42d9-b21f-6339bc1ebb4f\") " pod="openstack/neutron-768869957b-ffkcl" Mar 13 13:11:34.434243 master-0 kubenswrapper[28149]: I0313 13:11:34.434107 
28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5l2k2\" (UniqueName: \"kubernetes.io/projected/c8579a3e-7e92-42d9-b21f-6339bc1ebb4f-kube-api-access-5l2k2\") pod \"neutron-768869957b-ffkcl\" (UID: \"c8579a3e-7e92-42d9-b21f-6339bc1ebb4f\") " pod="openstack/neutron-768869957b-ffkcl" Mar 13 13:11:34.497678 master-0 kubenswrapper[28149]: I0313 13:11:34.495188 28149 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6cf64fcfbc-mg9tl"] Mar 13 13:11:34.606670 master-0 kubenswrapper[28149]: I0313 13:11:34.606592 28149 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-768869957b-ffkcl" Mar 13 13:11:35.391108 master-0 kubenswrapper[28149]: W0313 13:11:35.391050 28149 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc8579a3e_7e92_42d9_b21f_6339bc1ebb4f.slice/crio-bf016be6458a18e3342705fea2bcb0451634318fa601a0baea748d184bf63547 WatchSource:0}: Error finding container bf016be6458a18e3342705fea2bcb0451634318fa601a0baea748d184bf63547: Status 404 returned error can't find the container with id bf016be6458a18e3342705fea2bcb0451634318fa601a0baea748d184bf63547 Mar 13 13:11:35.394680 master-0 kubenswrapper[28149]: I0313 13:11:35.394635 28149 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-768869957b-ffkcl"] Mar 13 13:11:35.504433 master-0 kubenswrapper[28149]: I0313 13:11:35.503840 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-768869957b-ffkcl" event={"ID":"c8579a3e-7e92-42d9-b21f-6339bc1ebb4f","Type":"ContainerStarted","Data":"bf016be6458a18e3342705fea2bcb0451634318fa601a0baea748d184bf63547"} Mar 13 13:11:35.507230 master-0 kubenswrapper[28149]: I0313 13:11:35.507191 28149 generic.go:334] "Generic (PLEG): container finished" podID="f4d26441-5029-4cf0-9ef3-cba4ed2390e2" 
containerID="6ec533589ef5035aea6ec3a82dbb8b38fc7e1f6902182907d154108f40b1c4fb" exitCode=0 Mar 13 13:11:35.507375 master-0 kubenswrapper[28149]: I0313 13:11:35.507236 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6cf64fcfbc-mg9tl" event={"ID":"f4d26441-5029-4cf0-9ef3-cba4ed2390e2","Type":"ContainerDied","Data":"6ec533589ef5035aea6ec3a82dbb8b38fc7e1f6902182907d154108f40b1c4fb"} Mar 13 13:11:35.507375 master-0 kubenswrapper[28149]: I0313 13:11:35.507256 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6cf64fcfbc-mg9tl" event={"ID":"f4d26441-5029-4cf0-9ef3-cba4ed2390e2","Type":"ContainerStarted","Data":"7be309b8464825f7be6066ab4972a4af21ae91870f46f8734f71e7024db08c46"} Mar 13 13:11:36.519964 master-0 kubenswrapper[28149]: I0313 13:11:36.519911 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-768869957b-ffkcl" event={"ID":"c8579a3e-7e92-42d9-b21f-6339bc1ebb4f","Type":"ContainerStarted","Data":"656b9643de464c1cf24dc958794267f1f16c7713d8cd39047d8a4b7430c00e0f"} Mar 13 13:11:36.519964 master-0 kubenswrapper[28149]: I0313 13:11:36.519959 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-768869957b-ffkcl" event={"ID":"c8579a3e-7e92-42d9-b21f-6339bc1ebb4f","Type":"ContainerStarted","Data":"12eeb73a39f53d8e14eda8ec5a01ccf4ca5f504668906bb2b70963fdeddd747e"} Mar 13 13:11:36.520672 master-0 kubenswrapper[28149]: I0313 13:11:36.520017 28149 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-768869957b-ffkcl" Mar 13 13:11:36.523107 master-0 kubenswrapper[28149]: I0313 13:11:36.523005 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6cf64fcfbc-mg9tl" event={"ID":"f4d26441-5029-4cf0-9ef3-cba4ed2390e2","Type":"ContainerStarted","Data":"1c1c82f32a88e93bb5193996990824a47c55ca0804e2af53e07b65c6c5e5849a"} Mar 13 13:11:36.523247 master-0 kubenswrapper[28149]: I0313 13:11:36.523169 28149 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-6cf64fcfbc-mg9tl" Mar 13 13:11:36.542061 master-0 kubenswrapper[28149]: I0313 13:11:36.541984 28149 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-768869957b-ffkcl" podStartSLOduration=4.54195645 podStartE2EDuration="4.54195645s" podCreationTimestamp="2026-03-13 13:11:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 13:11:36.54158992 +0000 UTC m=+1070.195055099" watchObservedRunningTime="2026-03-13 13:11:36.54195645 +0000 UTC m=+1070.195421609" Mar 13 13:11:36.590985 master-0 kubenswrapper[28149]: I0313 13:11:36.590491 28149 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-6cf64fcfbc-mg9tl" podStartSLOduration=5.590461596 podStartE2EDuration="5.590461596s" podCreationTimestamp="2026-03-13 13:11:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 13:11:36.576056159 +0000 UTC m=+1070.229521318" watchObservedRunningTime="2026-03-13 13:11:36.590461596 +0000 UTC m=+1070.243926755" Mar 13 13:11:36.805089 master-0 kubenswrapper[28149]: I0313 13:11:36.804974 28149 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-e6fbd-default-external-api-0" Mar 13 13:11:36.805437 master-0 kubenswrapper[28149]: I0313 13:11:36.805124 28149 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 13 13:11:36.873899 master-0 kubenswrapper[28149]: I0313 13:11:36.873840 28149 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-e6fbd-default-internal-api-0" Mar 13 13:11:36.874794 master-0 kubenswrapper[28149]: I0313 13:11:36.873979 28149 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 13 
13:11:36.957853 master-0 kubenswrapper[28149]: I0313 13:11:36.957598 28149 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-e6fbd-default-internal-api-0" Mar 13 13:11:37.343819 master-0 kubenswrapper[28149]: I0313 13:11:37.343736 28149 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-e6fbd-default-external-api-0" Mar 13 13:11:37.865238 master-0 kubenswrapper[28149]: E0313 13:11:37.863957 28149 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod95eb9b96_2f27_4701_b62d_b7026cb009ec.slice/crio-46a940edb344b76f7a7fc8d09b3e0ad7820cc5058ee1f1ba7ab6eab240f4b559.scope\": RecentStats: unable to find data in memory cache]" Mar 13 13:11:37.865238 master-0 kubenswrapper[28149]: E0313 13:11:37.865033 28149 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod95eb9b96_2f27_4701_b62d_b7026cb009ec.slice/crio-conmon-46a940edb344b76f7a7fc8d09b3e0ad7820cc5058ee1f1ba7ab6eab240f4b559.scope\": RecentStats: unable to find data in memory cache]" Mar 13 13:11:37.958226 master-0 kubenswrapper[28149]: I0313 13:11:37.957206 28149 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-78f868d9fc-8d9cf"] Mar 13 13:11:37.961573 master-0 kubenswrapper[28149]: I0313 13:11:37.959557 28149 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-78f868d9fc-8d9cf" Mar 13 13:11:37.965159 master-0 kubenswrapper[28149]: I0313 13:11:37.963724 28149 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-internal-svc" Mar 13 13:11:37.965159 master-0 kubenswrapper[28149]: I0313 13:11:37.963986 28149 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-public-svc" Mar 13 13:11:37.991402 master-0 kubenswrapper[28149]: I0313 13:11:37.989053 28149 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-78f868d9fc-8d9cf"] Mar 13 13:11:38.031987 master-0 kubenswrapper[28149]: I0313 13:11:38.031920 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/7ea810e1-4c65-410e-a8b1-6f7d0f437ab8-internal-tls-certs\") pod \"neutron-78f868d9fc-8d9cf\" (UID: \"7ea810e1-4c65-410e-a8b1-6f7d0f437ab8\") " pod="openstack/neutron-78f868d9fc-8d9cf" Mar 13 13:11:38.032260 master-0 kubenswrapper[28149]: I0313 13:11:38.032178 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7ea810e1-4c65-410e-a8b1-6f7d0f437ab8-combined-ca-bundle\") pod \"neutron-78f868d9fc-8d9cf\" (UID: \"7ea810e1-4c65-410e-a8b1-6f7d0f437ab8\") " pod="openstack/neutron-78f868d9fc-8d9cf" Mar 13 13:11:38.032311 master-0 kubenswrapper[28149]: I0313 13:11:38.032278 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bd5k7\" (UniqueName: \"kubernetes.io/projected/7ea810e1-4c65-410e-a8b1-6f7d0f437ab8-kube-api-access-bd5k7\") pod \"neutron-78f868d9fc-8d9cf\" (UID: \"7ea810e1-4c65-410e-a8b1-6f7d0f437ab8\") " pod="openstack/neutron-78f868d9fc-8d9cf" Mar 13 13:11:38.032408 master-0 kubenswrapper[28149]: I0313 13:11:38.032368 28149 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/7ea810e1-4c65-410e-a8b1-6f7d0f437ab8-httpd-config\") pod \"neutron-78f868d9fc-8d9cf\" (UID: \"7ea810e1-4c65-410e-a8b1-6f7d0f437ab8\") " pod="openstack/neutron-78f868d9fc-8d9cf" Mar 13 13:11:38.032487 master-0 kubenswrapper[28149]: I0313 13:11:38.032450 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/7ea810e1-4c65-410e-a8b1-6f7d0f437ab8-config\") pod \"neutron-78f868d9fc-8d9cf\" (UID: \"7ea810e1-4c65-410e-a8b1-6f7d0f437ab8\") " pod="openstack/neutron-78f868d9fc-8d9cf" Mar 13 13:11:38.032531 master-0 kubenswrapper[28149]: I0313 13:11:38.032498 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/7ea810e1-4c65-410e-a8b1-6f7d0f437ab8-public-tls-certs\") pod \"neutron-78f868d9fc-8d9cf\" (UID: \"7ea810e1-4c65-410e-a8b1-6f7d0f437ab8\") " pod="openstack/neutron-78f868d9fc-8d9cf" Mar 13 13:11:38.032722 master-0 kubenswrapper[28149]: I0313 13:11:38.032699 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/7ea810e1-4c65-410e-a8b1-6f7d0f437ab8-ovndb-tls-certs\") pod \"neutron-78f868d9fc-8d9cf\" (UID: \"7ea810e1-4c65-410e-a8b1-6f7d0f437ab8\") " pod="openstack/neutron-78f868d9fc-8d9cf" Mar 13 13:11:38.134996 master-0 kubenswrapper[28149]: I0313 13:11:38.134870 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/7ea810e1-4c65-410e-a8b1-6f7d0f437ab8-httpd-config\") pod \"neutron-78f868d9fc-8d9cf\" (UID: \"7ea810e1-4c65-410e-a8b1-6f7d0f437ab8\") " pod="openstack/neutron-78f868d9fc-8d9cf" Mar 13 13:11:38.134996 master-0 kubenswrapper[28149]: I0313 13:11:38.134991 28149 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/7ea810e1-4c65-410e-a8b1-6f7d0f437ab8-config\") pod \"neutron-78f868d9fc-8d9cf\" (UID: \"7ea810e1-4c65-410e-a8b1-6f7d0f437ab8\") " pod="openstack/neutron-78f868d9fc-8d9cf" Mar 13 13:11:38.135327 master-0 kubenswrapper[28149]: I0313 13:11:38.135025 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/7ea810e1-4c65-410e-a8b1-6f7d0f437ab8-public-tls-certs\") pod \"neutron-78f868d9fc-8d9cf\" (UID: \"7ea810e1-4c65-410e-a8b1-6f7d0f437ab8\") " pod="openstack/neutron-78f868d9fc-8d9cf" Mar 13 13:11:38.135327 master-0 kubenswrapper[28149]: I0313 13:11:38.135091 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/7ea810e1-4c65-410e-a8b1-6f7d0f437ab8-ovndb-tls-certs\") pod \"neutron-78f868d9fc-8d9cf\" (UID: \"7ea810e1-4c65-410e-a8b1-6f7d0f437ab8\") " pod="openstack/neutron-78f868d9fc-8d9cf" Mar 13 13:11:38.135327 master-0 kubenswrapper[28149]: I0313 13:11:38.135176 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/7ea810e1-4c65-410e-a8b1-6f7d0f437ab8-internal-tls-certs\") pod \"neutron-78f868d9fc-8d9cf\" (UID: \"7ea810e1-4c65-410e-a8b1-6f7d0f437ab8\") " pod="openstack/neutron-78f868d9fc-8d9cf" Mar 13 13:11:38.135327 master-0 kubenswrapper[28149]: I0313 13:11:38.135302 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7ea810e1-4c65-410e-a8b1-6f7d0f437ab8-combined-ca-bundle\") pod \"neutron-78f868d9fc-8d9cf\" (UID: \"7ea810e1-4c65-410e-a8b1-6f7d0f437ab8\") " pod="openstack/neutron-78f868d9fc-8d9cf" Mar 13 13:11:38.135469 master-0 kubenswrapper[28149]: I0313 13:11:38.135360 28149 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-bd5k7\" (UniqueName: \"kubernetes.io/projected/7ea810e1-4c65-410e-a8b1-6f7d0f437ab8-kube-api-access-bd5k7\") pod \"neutron-78f868d9fc-8d9cf\" (UID: \"7ea810e1-4c65-410e-a8b1-6f7d0f437ab8\") " pod="openstack/neutron-78f868d9fc-8d9cf" Mar 13 13:11:38.148200 master-0 kubenswrapper[28149]: I0313 13:11:38.143474 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/7ea810e1-4c65-410e-a8b1-6f7d0f437ab8-config\") pod \"neutron-78f868d9fc-8d9cf\" (UID: \"7ea810e1-4c65-410e-a8b1-6f7d0f437ab8\") " pod="openstack/neutron-78f868d9fc-8d9cf" Mar 13 13:11:38.148200 master-0 kubenswrapper[28149]: I0313 13:11:38.144739 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/7ea810e1-4c65-410e-a8b1-6f7d0f437ab8-public-tls-certs\") pod \"neutron-78f868d9fc-8d9cf\" (UID: \"7ea810e1-4c65-410e-a8b1-6f7d0f437ab8\") " pod="openstack/neutron-78f868d9fc-8d9cf" Mar 13 13:11:38.161367 master-0 kubenswrapper[28149]: I0313 13:11:38.156947 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/7ea810e1-4c65-410e-a8b1-6f7d0f437ab8-ovndb-tls-certs\") pod \"neutron-78f868d9fc-8d9cf\" (UID: \"7ea810e1-4c65-410e-a8b1-6f7d0f437ab8\") " pod="openstack/neutron-78f868d9fc-8d9cf" Mar 13 13:11:38.173043 master-0 kubenswrapper[28149]: I0313 13:11:38.170314 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/7ea810e1-4c65-410e-a8b1-6f7d0f437ab8-internal-tls-certs\") pod \"neutron-78f868d9fc-8d9cf\" (UID: \"7ea810e1-4c65-410e-a8b1-6f7d0f437ab8\") " pod="openstack/neutron-78f868d9fc-8d9cf" Mar 13 13:11:38.177213 master-0 kubenswrapper[28149]: I0313 13:11:38.172372 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"httpd-config\" (UniqueName: \"kubernetes.io/secret/7ea810e1-4c65-410e-a8b1-6f7d0f437ab8-httpd-config\") pod \"neutron-78f868d9fc-8d9cf\" (UID: \"7ea810e1-4c65-410e-a8b1-6f7d0f437ab8\") " pod="openstack/neutron-78f868d9fc-8d9cf" Mar 13 13:11:38.180048 master-0 kubenswrapper[28149]: I0313 13:11:38.179500 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7ea810e1-4c65-410e-a8b1-6f7d0f437ab8-combined-ca-bundle\") pod \"neutron-78f868d9fc-8d9cf\" (UID: \"7ea810e1-4c65-410e-a8b1-6f7d0f437ab8\") " pod="openstack/neutron-78f868d9fc-8d9cf" Mar 13 13:11:38.196784 master-0 kubenswrapper[28149]: I0313 13:11:38.195654 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bd5k7\" (UniqueName: \"kubernetes.io/projected/7ea810e1-4c65-410e-a8b1-6f7d0f437ab8-kube-api-access-bd5k7\") pod \"neutron-78f868d9fc-8d9cf\" (UID: \"7ea810e1-4c65-410e-a8b1-6f7d0f437ab8\") " pod="openstack/neutron-78f868d9fc-8d9cf" Mar 13 13:11:38.305831 master-0 kubenswrapper[28149]: I0313 13:11:38.305761 28149 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-78f868d9fc-8d9cf"
Mar 13 13:11:38.602237 master-0 kubenswrapper[28149]: I0313 13:11:38.602105 28149 generic.go:334] "Generic (PLEG): container finished" podID="95eb9b96-2f27-4701-b62d-b7026cb009ec" containerID="46a940edb344b76f7a7fc8d09b3e0ad7820cc5058ee1f1ba7ab6eab240f4b559" exitCode=0
Mar 13 13:11:38.602237 master-0 kubenswrapper[28149]: I0313 13:11:38.602180 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-ee0a2-db-sync-tv65l" event={"ID":"95eb9b96-2f27-4701-b62d-b7026cb009ec","Type":"ContainerDied","Data":"46a940edb344b76f7a7fc8d09b3e0ad7820cc5058ee1f1ba7ab6eab240f4b559"}
Mar 13 13:11:39.108190 master-0 kubenswrapper[28149]: W0313 13:11:39.107836 28149 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7ea810e1_4c65_410e_a8b1_6f7d0f437ab8.slice/crio-52ed66b488bcf1cdd4c7521a421d74df472c0d0b07304d0421eb421dab784503 WatchSource:0}: Error finding container 52ed66b488bcf1cdd4c7521a421d74df472c0d0b07304d0421eb421dab784503: Status 404 returned error can't find the container with id 52ed66b488bcf1cdd4c7521a421d74df472c0d0b07304d0421eb421dab784503
Mar 13 13:11:39.110867 master-0 kubenswrapper[28149]: I0313 13:11:39.108954 28149 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-78f868d9fc-8d9cf"]
Mar 13 13:11:39.615873 master-0 kubenswrapper[28149]: I0313 13:11:39.615793 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-78f868d9fc-8d9cf" event={"ID":"7ea810e1-4c65-410e-a8b1-6f7d0f437ab8","Type":"ContainerStarted","Data":"683efaf09e8da3a816b3d6088f67d31e9026e20463a1922abf6cebfb902fa20b"}
Mar 13 13:11:39.615873 master-0 kubenswrapper[28149]: I0313 13:11:39.615863 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-78f868d9fc-8d9cf" event={"ID":"7ea810e1-4c65-410e-a8b1-6f7d0f437ab8","Type":"ContainerStarted","Data":"52ed66b488bcf1cdd4c7521a421d74df472c0d0b07304d0421eb421dab784503"}
Mar 13 13:11:40.228384 master-0 kubenswrapper[28149]: I0313 13:11:40.228061 28149 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-ee0a2-db-sync-tv65l"
Mar 13 13:11:40.359296 master-0 kubenswrapper[28149]: I0313 13:11:40.359196 28149 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/95eb9b96-2f27-4701-b62d-b7026cb009ec-combined-ca-bundle\") pod \"95eb9b96-2f27-4701-b62d-b7026cb009ec\" (UID: \"95eb9b96-2f27-4701-b62d-b7026cb009ec\") "
Mar 13 13:11:40.360651 master-0 kubenswrapper[28149]: I0313 13:11:40.359527 28149 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/95eb9b96-2f27-4701-b62d-b7026cb009ec-scripts\") pod \"95eb9b96-2f27-4701-b62d-b7026cb009ec\" (UID: \"95eb9b96-2f27-4701-b62d-b7026cb009ec\") "
Mar 13 13:11:40.360651 master-0 kubenswrapper[28149]: I0313 13:11:40.359570 28149 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/95eb9b96-2f27-4701-b62d-b7026cb009ec-config-data\") pod \"95eb9b96-2f27-4701-b62d-b7026cb009ec\" (UID: \"95eb9b96-2f27-4701-b62d-b7026cb009ec\") "
Mar 13 13:11:40.360651 master-0 kubenswrapper[28149]: I0313 13:11:40.359622 28149 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/95eb9b96-2f27-4701-b62d-b7026cb009ec-etc-machine-id\") pod \"95eb9b96-2f27-4701-b62d-b7026cb009ec\" (UID: \"95eb9b96-2f27-4701-b62d-b7026cb009ec\") "
Mar 13 13:11:40.360651 master-0 kubenswrapper[28149]: I0313 13:11:40.359648 28149 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n54kt\" (UniqueName: \"kubernetes.io/projected/95eb9b96-2f27-4701-b62d-b7026cb009ec-kube-api-access-n54kt\") pod \"95eb9b96-2f27-4701-b62d-b7026cb009ec\" (UID: \"95eb9b96-2f27-4701-b62d-b7026cb009ec\") "
Mar 13 13:11:40.360651 master-0 kubenswrapper[28149]: I0313 13:11:40.359776 28149 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/95eb9b96-2f27-4701-b62d-b7026cb009ec-db-sync-config-data\") pod \"95eb9b96-2f27-4701-b62d-b7026cb009ec\" (UID: \"95eb9b96-2f27-4701-b62d-b7026cb009ec\") "
Mar 13 13:11:40.360651 master-0 kubenswrapper[28149]: I0313 13:11:40.359801 28149 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/95eb9b96-2f27-4701-b62d-b7026cb009ec-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "95eb9b96-2f27-4701-b62d-b7026cb009ec" (UID: "95eb9b96-2f27-4701-b62d-b7026cb009ec"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 13 13:11:40.360651 master-0 kubenswrapper[28149]: I0313 13:11:40.360352 28149 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/95eb9b96-2f27-4701-b62d-b7026cb009ec-etc-machine-id\") on node \"master-0\" DevicePath \"\""
Mar 13 13:11:40.364006 master-0 kubenswrapper[28149]: I0313 13:11:40.363895 28149 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/95eb9b96-2f27-4701-b62d-b7026cb009ec-kube-api-access-n54kt" (OuterVolumeSpecName: "kube-api-access-n54kt") pod "95eb9b96-2f27-4701-b62d-b7026cb009ec" (UID: "95eb9b96-2f27-4701-b62d-b7026cb009ec"). InnerVolumeSpecName "kube-api-access-n54kt". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 13 13:11:40.366417 master-0 kubenswrapper[28149]: I0313 13:11:40.366352 28149 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/95eb9b96-2f27-4701-b62d-b7026cb009ec-scripts" (OuterVolumeSpecName: "scripts") pod "95eb9b96-2f27-4701-b62d-b7026cb009ec" (UID: "95eb9b96-2f27-4701-b62d-b7026cb009ec"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 13 13:11:40.367536 master-0 kubenswrapper[28149]: I0313 13:11:40.367459 28149 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/95eb9b96-2f27-4701-b62d-b7026cb009ec-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "95eb9b96-2f27-4701-b62d-b7026cb009ec" (UID: "95eb9b96-2f27-4701-b62d-b7026cb009ec"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 13 13:11:40.411262 master-0 kubenswrapper[28149]: I0313 13:11:40.410255 28149 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/95eb9b96-2f27-4701-b62d-b7026cb009ec-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "95eb9b96-2f27-4701-b62d-b7026cb009ec" (UID: "95eb9b96-2f27-4701-b62d-b7026cb009ec"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 13 13:11:40.430300 master-0 kubenswrapper[28149]: I0313 13:11:40.430243 28149 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/95eb9b96-2f27-4701-b62d-b7026cb009ec-config-data" (OuterVolumeSpecName: "config-data") pod "95eb9b96-2f27-4701-b62d-b7026cb009ec" (UID: "95eb9b96-2f27-4701-b62d-b7026cb009ec"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 13 13:11:40.463087 master-0 kubenswrapper[28149]: I0313 13:11:40.462396 28149 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/95eb9b96-2f27-4701-b62d-b7026cb009ec-db-sync-config-data\") on node \"master-0\" DevicePath \"\""
Mar 13 13:11:40.463087 master-0 kubenswrapper[28149]: I0313 13:11:40.462455 28149 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/95eb9b96-2f27-4701-b62d-b7026cb009ec-combined-ca-bundle\") on node \"master-0\" DevicePath \"\""
Mar 13 13:11:40.463087 master-0 kubenswrapper[28149]: I0313 13:11:40.462471 28149 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/95eb9b96-2f27-4701-b62d-b7026cb009ec-scripts\") on node \"master-0\" DevicePath \"\""
Mar 13 13:11:40.463087 master-0 kubenswrapper[28149]: I0313 13:11:40.462483 28149 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/95eb9b96-2f27-4701-b62d-b7026cb009ec-config-data\") on node \"master-0\" DevicePath \"\""
Mar 13 13:11:40.463087 master-0 kubenswrapper[28149]: I0313 13:11:40.462498 28149 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n54kt\" (UniqueName: \"kubernetes.io/projected/95eb9b96-2f27-4701-b62d-b7026cb009ec-kube-api-access-n54kt\") on node \"master-0\" DevicePath \"\""
Mar 13 13:11:40.628291 master-0 kubenswrapper[28149]: I0313 13:11:40.628221 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-78f868d9fc-8d9cf" event={"ID":"7ea810e1-4c65-410e-a8b1-6f7d0f437ab8","Type":"ContainerStarted","Data":"28fa5ab947d02fe365420188a49ec9e2e13a03d870f4f90731ab7c4a5fbe01f7"}
Mar 13 13:11:40.628653 master-0 kubenswrapper[28149]: I0313 13:11:40.628406 28149 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-78f868d9fc-8d9cf"
Mar 13 13:11:40.630875 master-0 kubenswrapper[28149]: I0313 13:11:40.630821 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-ee0a2-db-sync-tv65l" event={"ID":"95eb9b96-2f27-4701-b62d-b7026cb009ec","Type":"ContainerDied","Data":"468158cb2ff12896cce5ce56f27fbf1a572294755b65eeb0a543800828f8eec8"}
Mar 13 13:11:40.630875 master-0 kubenswrapper[28149]: I0313 13:11:40.630871 28149 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="468158cb2ff12896cce5ce56f27fbf1a572294755b65eeb0a543800828f8eec8"
Mar 13 13:11:40.631175 master-0 kubenswrapper[28149]: I0313 13:11:40.630937 28149 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-ee0a2-db-sync-tv65l"
Mar 13 13:11:40.717164 master-0 kubenswrapper[28149]: I0313 13:11:40.715351 28149 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-78f868d9fc-8d9cf" podStartSLOduration=3.715322234 podStartE2EDuration="3.715322234s" podCreationTimestamp="2026-03-13 13:11:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 13:11:40.671189985 +0000 UTC m=+1074.324655144" watchObservedRunningTime="2026-03-13 13:11:40.715322234 +0000 UTC m=+1074.368787393"
Mar 13 13:11:41.207671 master-0 kubenswrapper[28149]: I0313 13:11:41.204493 28149 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-ee0a2-scheduler-0"]
Mar 13 13:11:41.207671 master-0 kubenswrapper[28149]: E0313 13:11:41.205152 28149 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="95eb9b96-2f27-4701-b62d-b7026cb009ec" containerName="cinder-ee0a2-db-sync"
Mar 13 13:11:41.207671 master-0 kubenswrapper[28149]: I0313 13:11:41.205168 28149 state_mem.go:107] "Deleted CPUSet assignment" podUID="95eb9b96-2f27-4701-b62d-b7026cb009ec" containerName="cinder-ee0a2-db-sync"
Mar 13 13:11:41.207671 master-0 kubenswrapper[28149]: I0313 13:11:41.205513 28149 memory_manager.go:354] "RemoveStaleState removing state" podUID="95eb9b96-2f27-4701-b62d-b7026cb009ec" containerName="cinder-ee0a2-db-sync"
Mar 13 13:11:41.230377 master-0 kubenswrapper[28149]: I0313 13:11:41.229684 28149 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-ee0a2-scheduler-0"
Mar 13 13:11:41.235232 master-0 kubenswrapper[28149]: I0313 13:11:41.233506 28149 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-ee0a2-scheduler-config-data"
Mar 13 13:11:41.235232 master-0 kubenswrapper[28149]: I0313 13:11:41.234953 28149 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-ee0a2-scripts"
Mar 13 13:11:41.240436 master-0 kubenswrapper[28149]: I0313 13:11:41.240361 28149 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-ee0a2-config-data"
Mar 13 13:11:41.257178 master-0 kubenswrapper[28149]: I0313 13:11:41.253009 28149 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-ee0a2-scheduler-0"]
Mar 13 13:11:41.317171 master-0 kubenswrapper[28149]: I0313 13:11:41.314061 28149 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6cf64fcfbc-mg9tl"]
Mar 13 13:11:41.317171 master-0 kubenswrapper[28149]: I0313 13:11:41.314403 28149 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-6cf64fcfbc-mg9tl" podUID="f4d26441-5029-4cf0-9ef3-cba4ed2390e2" containerName="dnsmasq-dns" containerID="cri-o://1c1c82f32a88e93bb5193996990824a47c55ca0804e2af53e07b65c6c5e5849a" gracePeriod=10
Mar 13 13:11:41.327165 master-0 kubenswrapper[28149]: I0313 13:11:41.323181 28149 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-6cf64fcfbc-mg9tl"
Mar 13 13:11:41.339202 master-0 kubenswrapper[28149]: I0313 13:11:41.337267 28149 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-f9957b47c-swh76"]
Mar 13 13:11:41.339446 master-0 kubenswrapper[28149]: I0313 13:11:41.339217 28149 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-f9957b47c-swh76"
Mar 13 13:11:41.358168 master-0 kubenswrapper[28149]: I0313 13:11:41.355193 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/8c7dd334-6af4-4528-9a21-d51e946a555b-etc-machine-id\") pod \"cinder-ee0a2-scheduler-0\" (UID: \"8c7dd334-6af4-4528-9a21-d51e946a555b\") " pod="openstack/cinder-ee0a2-scheduler-0"
Mar 13 13:11:41.358168 master-0 kubenswrapper[28149]: I0313 13:11:41.355377 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/8c7dd334-6af4-4528-9a21-d51e946a555b-config-data-custom\") pod \"cinder-ee0a2-scheduler-0\" (UID: \"8c7dd334-6af4-4528-9a21-d51e946a555b\") " pod="openstack/cinder-ee0a2-scheduler-0"
Mar 13 13:11:41.358168 master-0 kubenswrapper[28149]: I0313 13:11:41.355418 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8c7dd334-6af4-4528-9a21-d51e946a555b-config-data\") pod \"cinder-ee0a2-scheduler-0\" (UID: \"8c7dd334-6af4-4528-9a21-d51e946a555b\") " pod="openstack/cinder-ee0a2-scheduler-0"
Mar 13 13:11:41.358168 master-0 kubenswrapper[28149]: I0313 13:11:41.355451 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8c7dd334-6af4-4528-9a21-d51e946a555b-combined-ca-bundle\") pod \"cinder-ee0a2-scheduler-0\" (UID: \"8c7dd334-6af4-4528-9a21-d51e946a555b\") " pod="openstack/cinder-ee0a2-scheduler-0"
Mar 13 13:11:41.358168 master-0 kubenswrapper[28149]: I0313 13:11:41.355534 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2rjqn\" (UniqueName: \"kubernetes.io/projected/8c7dd334-6af4-4528-9a21-d51e946a555b-kube-api-access-2rjqn\") pod \"cinder-ee0a2-scheduler-0\" (UID: \"8c7dd334-6af4-4528-9a21-d51e946a555b\") " pod="openstack/cinder-ee0a2-scheduler-0"
Mar 13 13:11:41.358168 master-0 kubenswrapper[28149]: I0313 13:11:41.355564 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8c7dd334-6af4-4528-9a21-d51e946a555b-scripts\") pod \"cinder-ee0a2-scheduler-0\" (UID: \"8c7dd334-6af4-4528-9a21-d51e946a555b\") " pod="openstack/cinder-ee0a2-scheduler-0"
Mar 13 13:11:41.469226 master-0 kubenswrapper[28149]: I0313 13:11:41.468172 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2rjqn\" (UniqueName: \"kubernetes.io/projected/8c7dd334-6af4-4528-9a21-d51e946a555b-kube-api-access-2rjqn\") pod \"cinder-ee0a2-scheduler-0\" (UID: \"8c7dd334-6af4-4528-9a21-d51e946a555b\") " pod="openstack/cinder-ee0a2-scheduler-0"
Mar 13 13:11:41.469226 master-0 kubenswrapper[28149]: I0313 13:11:41.468269 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8c7dd334-6af4-4528-9a21-d51e946a555b-scripts\") pod \"cinder-ee0a2-scheduler-0\" (UID: \"8c7dd334-6af4-4528-9a21-d51e946a555b\") " pod="openstack/cinder-ee0a2-scheduler-0"
Mar 13 13:11:41.486167 master-0 kubenswrapper[28149]: I0313 13:11:41.483871 28149 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-f9957b47c-swh76"]
Mar 13 13:11:41.502211 master-0 kubenswrapper[28149]: I0313 13:11:41.486434 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/8c7dd334-6af4-4528-9a21-d51e946a555b-etc-machine-id\") pod \"cinder-ee0a2-scheduler-0\" (UID: \"8c7dd334-6af4-4528-9a21-d51e946a555b\") " pod="openstack/cinder-ee0a2-scheduler-0"
Mar 13 13:11:41.502211 master-0 kubenswrapper[28149]: I0313 13:11:41.486779 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1cb05e83-a753-4f24-b578-d7b8996d39b7-ovsdbserver-sb\") pod \"dnsmasq-dns-f9957b47c-swh76\" (UID: \"1cb05e83-a753-4f24-b578-d7b8996d39b7\") " pod="openstack/dnsmasq-dns-f9957b47c-swh76"
Mar 13 13:11:41.502211 master-0 kubenswrapper[28149]: I0313 13:11:41.486907 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1cb05e83-a753-4f24-b578-d7b8996d39b7-dns-svc\") pod \"dnsmasq-dns-f9957b47c-swh76\" (UID: \"1cb05e83-a753-4f24-b578-d7b8996d39b7\") " pod="openstack/dnsmasq-dns-f9957b47c-swh76"
Mar 13 13:11:41.502211 master-0 kubenswrapper[28149]: I0313 13:11:41.487012 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1cb05e83-a753-4f24-b578-d7b8996d39b7-config\") pod \"dnsmasq-dns-f9957b47c-swh76\" (UID: \"1cb05e83-a753-4f24-b578-d7b8996d39b7\") " pod="openstack/dnsmasq-dns-f9957b47c-swh76"
Mar 13 13:11:41.502211 master-0 kubenswrapper[28149]: I0313 13:11:41.487064 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/8c7dd334-6af4-4528-9a21-d51e946a555b-config-data-custom\") pod \"cinder-ee0a2-scheduler-0\" (UID: \"8c7dd334-6af4-4528-9a21-d51e946a555b\") " pod="openstack/cinder-ee0a2-scheduler-0"
Mar 13 13:11:41.502211 master-0 kubenswrapper[28149]: I0313 13:11:41.487098 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/1cb05e83-a753-4f24-b578-d7b8996d39b7-dns-swift-storage-0\") pod \"dnsmasq-dns-f9957b47c-swh76\" (UID: \"1cb05e83-a753-4f24-b578-d7b8996d39b7\") " pod="openstack/dnsmasq-dns-f9957b47c-swh76"
Mar 13 13:11:41.502211 master-0 kubenswrapper[28149]: I0313 13:11:41.487188 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8c7dd334-6af4-4528-9a21-d51e946a555b-config-data\") pod \"cinder-ee0a2-scheduler-0\" (UID: \"8c7dd334-6af4-4528-9a21-d51e946a555b\") " pod="openstack/cinder-ee0a2-scheduler-0"
Mar 13 13:11:41.502211 master-0 kubenswrapper[28149]: I0313 13:11:41.487256 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8c7dd334-6af4-4528-9a21-d51e946a555b-combined-ca-bundle\") pod \"cinder-ee0a2-scheduler-0\" (UID: \"8c7dd334-6af4-4528-9a21-d51e946a555b\") " pod="openstack/cinder-ee0a2-scheduler-0"
Mar 13 13:11:41.502809 master-0 kubenswrapper[28149]: I0313 13:11:41.502267 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1cb05e83-a753-4f24-b578-d7b8996d39b7-ovsdbserver-nb\") pod \"dnsmasq-dns-f9957b47c-swh76\" (UID: \"1cb05e83-a753-4f24-b578-d7b8996d39b7\") " pod="openstack/dnsmasq-dns-f9957b47c-swh76"
Mar 13 13:11:41.502809 master-0 kubenswrapper[28149]: I0313 13:11:41.502353 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p64x9\" (UniqueName: \"kubernetes.io/projected/1cb05e83-a753-4f24-b578-d7b8996d39b7-kube-api-access-p64x9\") pod \"dnsmasq-dns-f9957b47c-swh76\" (UID: \"1cb05e83-a753-4f24-b578-d7b8996d39b7\") " pod="openstack/dnsmasq-dns-f9957b47c-swh76"
Mar 13 13:11:41.502809 master-0 kubenswrapper[28149]: I0313 13:11:41.488869 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/8c7dd334-6af4-4528-9a21-d51e946a555b-etc-machine-id\") pod \"cinder-ee0a2-scheduler-0\" (UID: \"8c7dd334-6af4-4528-9a21-d51e946a555b\") " pod="openstack/cinder-ee0a2-scheduler-0"
Mar 13 13:11:41.502809 master-0 kubenswrapper[28149]: I0313 13:11:41.501441 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/8c7dd334-6af4-4528-9a21-d51e946a555b-config-data-custom\") pod \"cinder-ee0a2-scheduler-0\" (UID: \"8c7dd334-6af4-4528-9a21-d51e946a555b\") " pod="openstack/cinder-ee0a2-scheduler-0"
Mar 13 13:11:41.531980 master-0 kubenswrapper[28149]: I0313 13:11:41.530331 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8c7dd334-6af4-4528-9a21-d51e946a555b-combined-ca-bundle\") pod \"cinder-ee0a2-scheduler-0\" (UID: \"8c7dd334-6af4-4528-9a21-d51e946a555b\") " pod="openstack/cinder-ee0a2-scheduler-0"
Mar 13 13:11:41.533873 master-0 kubenswrapper[28149]: I0313 13:11:41.533812 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8c7dd334-6af4-4528-9a21-d51e946a555b-scripts\") pod \"cinder-ee0a2-scheduler-0\" (UID: \"8c7dd334-6af4-4528-9a21-d51e946a555b\") " pod="openstack/cinder-ee0a2-scheduler-0"
Mar 13 13:11:41.534651 master-0 kubenswrapper[28149]: I0313 13:11:41.534498 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8c7dd334-6af4-4528-9a21-d51e946a555b-config-data\") pod \"cinder-ee0a2-scheduler-0\" (UID: \"8c7dd334-6af4-4528-9a21-d51e946a555b\") " pod="openstack/cinder-ee0a2-scheduler-0"
Mar 13 13:11:41.550170 master-0 kubenswrapper[28149]: I0313 13:11:41.546556 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2rjqn\" (UniqueName: \"kubernetes.io/projected/8c7dd334-6af4-4528-9a21-d51e946a555b-kube-api-access-2rjqn\") pod \"cinder-ee0a2-scheduler-0\" (UID: \"8c7dd334-6af4-4528-9a21-d51e946a555b\") " pod="openstack/cinder-ee0a2-scheduler-0"
Mar 13 13:11:41.582305 master-0 kubenswrapper[28149]: I0313 13:11:41.582246 28149 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-ee0a2-volume-lvm-iscsi-0"]
Mar 13 13:11:41.586091 master-0 kubenswrapper[28149]: I0313 13:11:41.586012 28149 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-ee0a2-volume-lvm-iscsi-0"
Mar 13 13:11:41.591682 master-0 kubenswrapper[28149]: I0313 13:11:41.591630 28149 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-ee0a2-scheduler-0"
Mar 13 13:11:41.601525 master-0 kubenswrapper[28149]: I0313 13:11:41.600638 28149 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-ee0a2-volume-lvm-iscsi-config-data"
Mar 13 13:11:41.606785 master-0 kubenswrapper[28149]: I0313 13:11:41.606687 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1cb05e83-a753-4f24-b578-d7b8996d39b7-config\") pod \"dnsmasq-dns-f9957b47c-swh76\" (UID: \"1cb05e83-a753-4f24-b578-d7b8996d39b7\") " pod="openstack/dnsmasq-dns-f9957b47c-swh76"
Mar 13 13:11:41.607125 master-0 kubenswrapper[28149]: I0313 13:11:41.607103 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/1cb05e83-a753-4f24-b578-d7b8996d39b7-dns-swift-storage-0\") pod \"dnsmasq-dns-f9957b47c-swh76\" (UID: \"1cb05e83-a753-4f24-b578-d7b8996d39b7\") " pod="openstack/dnsmasq-dns-f9957b47c-swh76"
Mar 13 13:11:41.607406 master-0 kubenswrapper[28149]: I0313 13:11:41.607384 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1cb05e83-a753-4f24-b578-d7b8996d39b7-ovsdbserver-nb\") pod \"dnsmasq-dns-f9957b47c-swh76\" (UID: \"1cb05e83-a753-4f24-b578-d7b8996d39b7\") " pod="openstack/dnsmasq-dns-f9957b47c-swh76"
Mar 13 13:11:41.607593 master-0 kubenswrapper[28149]: I0313 13:11:41.607557 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p64x9\" (UniqueName: \"kubernetes.io/projected/1cb05e83-a753-4f24-b578-d7b8996d39b7-kube-api-access-p64x9\") pod \"dnsmasq-dns-f9957b47c-swh76\" (UID: \"1cb05e83-a753-4f24-b578-d7b8996d39b7\") " pod="openstack/dnsmasq-dns-f9957b47c-swh76"
Mar 13 13:11:41.608068 master-0 kubenswrapper[28149]: I0313 13:11:41.608039 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1cb05e83-a753-4f24-b578-d7b8996d39b7-ovsdbserver-sb\") pod \"dnsmasq-dns-f9957b47c-swh76\" (UID: \"1cb05e83-a753-4f24-b578-d7b8996d39b7\") " pod="openstack/dnsmasq-dns-f9957b47c-swh76"
Mar 13 13:11:41.608311 master-0 kubenswrapper[28149]: I0313 13:11:41.608289 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1cb05e83-a753-4f24-b578-d7b8996d39b7-dns-svc\") pod \"dnsmasq-dns-f9957b47c-swh76\" (UID: \"1cb05e83-a753-4f24-b578-d7b8996d39b7\") " pod="openstack/dnsmasq-dns-f9957b47c-swh76"
Mar 13 13:11:41.609839 master-0 kubenswrapper[28149]: I0313 13:11:41.609813 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1cb05e83-a753-4f24-b578-d7b8996d39b7-dns-svc\") pod \"dnsmasq-dns-f9957b47c-swh76\" (UID: \"1cb05e83-a753-4f24-b578-d7b8996d39b7\") " pod="openstack/dnsmasq-dns-f9957b47c-swh76"
Mar 13 13:11:41.610003 master-0 kubenswrapper[28149]: I0313 13:11:41.609955 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1cb05e83-a753-4f24-b578-d7b8996d39b7-ovsdbserver-nb\") pod \"dnsmasq-dns-f9957b47c-swh76\" (UID: \"1cb05e83-a753-4f24-b578-d7b8996d39b7\") " pod="openstack/dnsmasq-dns-f9957b47c-swh76"
Mar 13 13:11:41.610809 master-0 kubenswrapper[28149]: I0313 13:11:41.610755 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1cb05e83-a753-4f24-b578-d7b8996d39b7-config\") pod \"dnsmasq-dns-f9957b47c-swh76\" (UID: \"1cb05e83-a753-4f24-b578-d7b8996d39b7\") " pod="openstack/dnsmasq-dns-f9957b47c-swh76"
Mar 13 13:11:41.611052 master-0 kubenswrapper[28149]: I0313 13:11:41.611026 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1cb05e83-a753-4f24-b578-d7b8996d39b7-ovsdbserver-sb\") pod \"dnsmasq-dns-f9957b47c-swh76\" (UID: \"1cb05e83-a753-4f24-b578-d7b8996d39b7\") " pod="openstack/dnsmasq-dns-f9957b47c-swh76"
Mar 13 13:11:41.612186 master-0 kubenswrapper[28149]: I0313 13:11:41.611650 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/1cb05e83-a753-4f24-b578-d7b8996d39b7-dns-swift-storage-0\") pod \"dnsmasq-dns-f9957b47c-swh76\" (UID: \"1cb05e83-a753-4f24-b578-d7b8996d39b7\") " pod="openstack/dnsmasq-dns-f9957b47c-swh76"
Mar 13 13:11:41.647857 master-0 kubenswrapper[28149]: I0313 13:11:41.645387 28149 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-ee0a2-backup-0"]
Mar 13 13:11:41.691165 master-0 kubenswrapper[28149]: I0313 13:11:41.680810 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p64x9\" (UniqueName: \"kubernetes.io/projected/1cb05e83-a753-4f24-b578-d7b8996d39b7-kube-api-access-p64x9\") pod \"dnsmasq-dns-f9957b47c-swh76\" (UID: \"1cb05e83-a753-4f24-b578-d7b8996d39b7\") " pod="openstack/dnsmasq-dns-f9957b47c-swh76"
Mar 13 13:11:41.691165 master-0 kubenswrapper[28149]: I0313 13:11:41.683959 28149 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-f9957b47c-swh76"
Mar 13 13:11:41.723549 master-0 kubenswrapper[28149]: I0313 13:11:41.721768 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/a08dce85-d5c4-44e4-a3b0-e404c53b62f2-run\") pod \"cinder-ee0a2-volume-lvm-iscsi-0\" (UID: \"a08dce85-d5c4-44e4-a3b0-e404c53b62f2\") " pod="openstack/cinder-ee0a2-volume-lvm-iscsi-0"
Mar 13 13:11:41.723549 master-0 kubenswrapper[28149]: I0313 13:11:41.721858 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/a08dce85-d5c4-44e4-a3b0-e404c53b62f2-etc-nvme\") pod \"cinder-ee0a2-volume-lvm-iscsi-0\" (UID: \"a08dce85-d5c4-44e4-a3b0-e404c53b62f2\") " pod="openstack/cinder-ee0a2-volume-lvm-iscsi-0"
Mar 13 13:11:41.723549 master-0 kubenswrapper[28149]: I0313 13:11:41.721920 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j8j68\" (UniqueName: \"kubernetes.io/projected/a08dce85-d5c4-44e4-a3b0-e404c53b62f2-kube-api-access-j8j68\") pod \"cinder-ee0a2-volume-lvm-iscsi-0\" (UID: \"a08dce85-d5c4-44e4-a3b0-e404c53b62f2\") " pod="openstack/cinder-ee0a2-volume-lvm-iscsi-0"
Mar 13 13:11:41.723549 master-0 kubenswrapper[28149]: I0313 13:11:41.722010 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a08dce85-d5c4-44e4-a3b0-e404c53b62f2-config-data-custom\") pod \"cinder-ee0a2-volume-lvm-iscsi-0\" (UID: \"a08dce85-d5c4-44e4-a3b0-e404c53b62f2\") " pod="openstack/cinder-ee0a2-volume-lvm-iscsi-0"
Mar 13 13:11:41.723549 master-0 kubenswrapper[28149]: I0313 13:11:41.722078 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/a08dce85-d5c4-44e4-a3b0-e404c53b62f2-var-locks-brick\") pod \"cinder-ee0a2-volume-lvm-iscsi-0\" (UID: \"a08dce85-d5c4-44e4-a3b0-e404c53b62f2\") " pod="openstack/cinder-ee0a2-volume-lvm-iscsi-0"
Mar 13 13:11:41.723549 master-0 kubenswrapper[28149]: I0313 13:11:41.722179 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/a08dce85-d5c4-44e4-a3b0-e404c53b62f2-etc-machine-id\") pod \"cinder-ee0a2-volume-lvm-iscsi-0\" (UID: \"a08dce85-d5c4-44e4-a3b0-e404c53b62f2\") " pod="openstack/cinder-ee0a2-volume-lvm-iscsi-0"
Mar 13 13:11:41.723549 master-0 kubenswrapper[28149]: I0313 13:11:41.722241 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a08dce85-d5c4-44e4-a3b0-e404c53b62f2-combined-ca-bundle\") pod \"cinder-ee0a2-volume-lvm-iscsi-0\" (UID: \"a08dce85-d5c4-44e4-a3b0-e404c53b62f2\") " pod="openstack/cinder-ee0a2-volume-lvm-iscsi-0"
Mar 13 13:11:41.723549 master-0 kubenswrapper[28149]: I0313 13:11:41.722331 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/a08dce85-d5c4-44e4-a3b0-e404c53b62f2-var-locks-cinder\") pod \"cinder-ee0a2-volume-lvm-iscsi-0\" (UID: \"a08dce85-d5c4-44e4-a3b0-e404c53b62f2\") " pod="openstack/cinder-ee0a2-volume-lvm-iscsi-0"
Mar 13 13:11:41.723549 master-0 kubenswrapper[28149]: I0313 13:11:41.722408 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/a08dce85-d5c4-44e4-a3b0-e404c53b62f2-etc-iscsi\") pod \"cinder-ee0a2-volume-lvm-iscsi-0\" (UID: \"a08dce85-d5c4-44e4-a3b0-e404c53b62f2\") " pod="openstack/cinder-ee0a2-volume-lvm-iscsi-0"
Mar 13 13:11:41.723549 master-0 kubenswrapper[28149]: I0313 13:11:41.722618 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a08dce85-d5c4-44e4-a3b0-e404c53b62f2-scripts\") pod \"cinder-ee0a2-volume-lvm-iscsi-0\" (UID: \"a08dce85-d5c4-44e4-a3b0-e404c53b62f2\") " pod="openstack/cinder-ee0a2-volume-lvm-iscsi-0"
Mar 13 13:11:41.723549 master-0 kubenswrapper[28149]: I0313 13:11:41.722699 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a08dce85-d5c4-44e4-a3b0-e404c53b62f2-config-data\") pod \"cinder-ee0a2-volume-lvm-iscsi-0\" (UID: \"a08dce85-d5c4-44e4-a3b0-e404c53b62f2\") " pod="openstack/cinder-ee0a2-volume-lvm-iscsi-0"
Mar 13 13:11:41.723549 master-0 kubenswrapper[28149]: I0313 13:11:41.722798 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/a08dce85-d5c4-44e4-a3b0-e404c53b62f2-dev\") pod \"cinder-ee0a2-volume-lvm-iscsi-0\" (UID: \"a08dce85-d5c4-44e4-a3b0-e404c53b62f2\") " pod="openstack/cinder-ee0a2-volume-lvm-iscsi-0"
Mar 13 13:11:41.723549 master-0 kubenswrapper[28149]: I0313 13:11:41.722828 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/a08dce85-d5c4-44e4-a3b0-e404c53b62f2-var-lib-cinder\") pod \"cinder-ee0a2-volume-lvm-iscsi-0\" (UID: \"a08dce85-d5c4-44e4-a3b0-e404c53b62f2\") " pod="openstack/cinder-ee0a2-volume-lvm-iscsi-0"
Mar 13 13:11:41.723549 master-0 kubenswrapper[28149]: I0313 13:11:41.722888 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a08dce85-d5c4-44e4-a3b0-e404c53b62f2-lib-modules\") pod \"cinder-ee0a2-volume-lvm-iscsi-0\" (UID: \"a08dce85-d5c4-44e4-a3b0-e404c53b62f2\") " pod="openstack/cinder-ee0a2-volume-lvm-iscsi-0"
Mar 13 13:11:41.723549 master-0 kubenswrapper[28149]: I0313 13:11:41.722979 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/a08dce85-d5c4-44e4-a3b0-e404c53b62f2-sys\") pod \"cinder-ee0a2-volume-lvm-iscsi-0\" (UID: \"a08dce85-d5c4-44e4-a3b0-e404c53b62f2\") " pod="openstack/cinder-ee0a2-volume-lvm-iscsi-0"
Mar 13 13:11:41.744974 master-0 kubenswrapper[28149]: I0313 13:11:41.742245 28149 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-ee0a2-volume-lvm-iscsi-0"]
Mar 13 13:11:41.744974 master-0 kubenswrapper[28149]: I0313 13:11:41.744612 28149 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-ee0a2-backup-0"
Mar 13 13:11:41.771168 master-0 kubenswrapper[28149]: I0313 13:11:41.768874 28149 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-ee0a2-backup-config-data"
Mar 13 13:11:41.787056 master-0 kubenswrapper[28149]: I0313 13:11:41.779648 28149 generic.go:334] "Generic (PLEG): container finished" podID="f4d26441-5029-4cf0-9ef3-cba4ed2390e2" containerID="1c1c82f32a88e93bb5193996990824a47c55ca0804e2af53e07b65c6c5e5849a" exitCode=0
Mar 13 13:11:41.787056 master-0 kubenswrapper[28149]: I0313 13:11:41.779901 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6cf64fcfbc-mg9tl" event={"ID":"f4d26441-5029-4cf0-9ef3-cba4ed2390e2","Type":"ContainerDied","Data":"1c1c82f32a88e93bb5193996990824a47c55ca0804e2af53e07b65c6c5e5849a"}
Mar 13 13:11:41.823402 master-0 kubenswrapper[28149]: I0313 13:11:41.819883 28149 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-ee0a2-backup-0"]
Mar 13 13:11:41.832167 master-0 kubenswrapper[28149]: I0313 13:11:41.825312 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/a08dce85-d5c4-44e4-a3b0-e404c53b62f2-dev\") pod \"cinder-ee0a2-volume-lvm-iscsi-0\" (UID: \"a08dce85-d5c4-44e4-a3b0-e404c53b62f2\") " pod="openstack/cinder-ee0a2-volume-lvm-iscsi-0"
Mar 13 13:11:41.832167 master-0 kubenswrapper[28149]: I0313 13:11:41.825366 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/a08dce85-d5c4-44e4-a3b0-e404c53b62f2-var-lib-cinder\") pod \"cinder-ee0a2-volume-lvm-iscsi-0\" (UID: \"a08dce85-d5c4-44e4-a3b0-e404c53b62f2\") " pod="openstack/cinder-ee0a2-volume-lvm-iscsi-0"
Mar 13 13:11:41.832167 master-0 kubenswrapper[28149]: I0313 13:11:41.825400 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a08dce85-d5c4-44e4-a3b0-e404c53b62f2-lib-modules\") pod \"cinder-ee0a2-volume-lvm-iscsi-0\" (UID: \"a08dce85-d5c4-44e4-a3b0-e404c53b62f2\") " pod="openstack/cinder-ee0a2-volume-lvm-iscsi-0"
Mar 13 13:11:41.832167 master-0 kubenswrapper[28149]: I0313 13:11:41.825433 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9q5vm\" (UniqueName: \"kubernetes.io/projected/5542dffa-edbf-4133-b7cc-2631121726dc-kube-api-access-9q5vm\") pod \"cinder-ee0a2-backup-0\" (UID: \"5542dffa-edbf-4133-b7cc-2631121726dc\") " pod="openstack/cinder-ee0a2-backup-0"
Mar 13 13:11:41.832167 master-0 kubenswrapper[28149]: I0313 13:11:41.825471 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/5542dffa-edbf-4133-b7cc-2631121726dc-config-data-custom\") pod \"cinder-ee0a2-backup-0\" (UID: \"5542dffa-edbf-4133-b7cc-2631121726dc\") " pod="openstack/cinder-ee0a2-backup-0"
Mar 13 13:11:41.832167 master-0 kubenswrapper[28149]: I0313 13:11:41.825496 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/5542dffa-edbf-4133-b7cc-2631121726dc-lib-modules\") pod \"cinder-ee0a2-backup-0\" (UID: \"5542dffa-edbf-4133-b7cc-2631121726dc\") " pod="openstack/cinder-ee0a2-backup-0"
Mar 13 13:11:41.832167 master-0 kubenswrapper[28149]: I0313 13:11:41.826175 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/a08dce85-d5c4-44e4-a3b0-e404c53b62f2-sys\") pod \"cinder-ee0a2-volume-lvm-iscsi-0\" (UID: \"a08dce85-d5c4-44e4-a3b0-e404c53b62f2\") " pod="openstack/cinder-ee0a2-volume-lvm-iscsi-0"
Mar 13 13:11:41.832167 master-0 kubenswrapper[28149]: I0313 13:11:41.826224 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/5542dffa-edbf-4133-b7cc-2631121726dc-var-locks-cinder\") pod \"cinder-ee0a2-backup-0\" (UID: \"5542dffa-edbf-4133-b7cc-2631121726dc\") " pod="openstack/cinder-ee0a2-backup-0"
Mar 13 13:11:41.832167 master-0 kubenswrapper[28149]: I0313 13:11:41.826279 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/5542dffa-edbf-4133-b7cc-2631121726dc-var-locks-brick\") pod \"cinder-ee0a2-backup-0\" (UID: \"5542dffa-edbf-4133-b7cc-2631121726dc\") " pod="openstack/cinder-ee0a2-backup-0"
Mar 13 13:11:41.832167 master-0 kubenswrapper[28149]: I0313 13:11:41.826340 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/a08dce85-d5c4-44e4-a3b0-e404c53b62f2-run\") pod \"cinder-ee0a2-volume-lvm-iscsi-0\" (UID: \"a08dce85-d5c4-44e4-a3b0-e404c53b62f2\") " pod="openstack/cinder-ee0a2-volume-lvm-iscsi-0"
Mar 13 13:11:41.832167 master-0 kubenswrapper[28149]: I0313
13:11:41.826363 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/a08dce85-d5c4-44e4-a3b0-e404c53b62f2-etc-nvme\") pod \"cinder-ee0a2-volume-lvm-iscsi-0\" (UID: \"a08dce85-d5c4-44e4-a3b0-e404c53b62f2\") " pod="openstack/cinder-ee0a2-volume-lvm-iscsi-0" Mar 13 13:11:41.832167 master-0 kubenswrapper[28149]: I0313 13:11:41.826380 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j8j68\" (UniqueName: \"kubernetes.io/projected/a08dce85-d5c4-44e4-a3b0-e404c53b62f2-kube-api-access-j8j68\") pod \"cinder-ee0a2-volume-lvm-iscsi-0\" (UID: \"a08dce85-d5c4-44e4-a3b0-e404c53b62f2\") " pod="openstack/cinder-ee0a2-volume-lvm-iscsi-0" Mar 13 13:11:41.832167 master-0 kubenswrapper[28149]: I0313 13:11:41.826413 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a08dce85-d5c4-44e4-a3b0-e404c53b62f2-config-data-custom\") pod \"cinder-ee0a2-volume-lvm-iscsi-0\" (UID: \"a08dce85-d5c4-44e4-a3b0-e404c53b62f2\") " pod="openstack/cinder-ee0a2-volume-lvm-iscsi-0" Mar 13 13:11:41.832167 master-0 kubenswrapper[28149]: I0313 13:11:41.826459 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/a08dce85-d5c4-44e4-a3b0-e404c53b62f2-var-locks-brick\") pod \"cinder-ee0a2-volume-lvm-iscsi-0\" (UID: \"a08dce85-d5c4-44e4-a3b0-e404c53b62f2\") " pod="openstack/cinder-ee0a2-volume-lvm-iscsi-0" Mar 13 13:11:41.832167 master-0 kubenswrapper[28149]: I0313 13:11:41.826504 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/a08dce85-d5c4-44e4-a3b0-e404c53b62f2-etc-machine-id\") pod \"cinder-ee0a2-volume-lvm-iscsi-0\" (UID: \"a08dce85-d5c4-44e4-a3b0-e404c53b62f2\") " pod="openstack/cinder-ee0a2-volume-lvm-iscsi-0" Mar 
13 13:11:41.832167 master-0 kubenswrapper[28149]: I0313 13:11:41.826522 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a08dce85-d5c4-44e4-a3b0-e404c53b62f2-combined-ca-bundle\") pod \"cinder-ee0a2-volume-lvm-iscsi-0\" (UID: \"a08dce85-d5c4-44e4-a3b0-e404c53b62f2\") " pod="openstack/cinder-ee0a2-volume-lvm-iscsi-0" Mar 13 13:11:41.832167 master-0 kubenswrapper[28149]: I0313 13:11:41.826542 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/5542dffa-edbf-4133-b7cc-2631121726dc-etc-iscsi\") pod \"cinder-ee0a2-backup-0\" (UID: \"5542dffa-edbf-4133-b7cc-2631121726dc\") " pod="openstack/cinder-ee0a2-backup-0" Mar 13 13:11:41.832167 master-0 kubenswrapper[28149]: I0313 13:11:41.826598 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/5542dffa-edbf-4133-b7cc-2631121726dc-etc-nvme\") pod \"cinder-ee0a2-backup-0\" (UID: \"5542dffa-edbf-4133-b7cc-2631121726dc\") " pod="openstack/cinder-ee0a2-backup-0" Mar 13 13:11:41.832167 master-0 kubenswrapper[28149]: I0313 13:11:41.826648 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/a08dce85-d5c4-44e4-a3b0-e404c53b62f2-var-locks-cinder\") pod \"cinder-ee0a2-volume-lvm-iscsi-0\" (UID: \"a08dce85-d5c4-44e4-a3b0-e404c53b62f2\") " pod="openstack/cinder-ee0a2-volume-lvm-iscsi-0" Mar 13 13:11:41.832167 master-0 kubenswrapper[28149]: I0313 13:11:41.826666 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/5542dffa-edbf-4133-b7cc-2631121726dc-var-lib-cinder\") pod \"cinder-ee0a2-backup-0\" (UID: \"5542dffa-edbf-4133-b7cc-2631121726dc\") " 
pod="openstack/cinder-ee0a2-backup-0" Mar 13 13:11:41.832167 master-0 kubenswrapper[28149]: I0313 13:11:41.826694 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/a08dce85-d5c4-44e4-a3b0-e404c53b62f2-etc-iscsi\") pod \"cinder-ee0a2-volume-lvm-iscsi-0\" (UID: \"a08dce85-d5c4-44e4-a3b0-e404c53b62f2\") " pod="openstack/cinder-ee0a2-volume-lvm-iscsi-0" Mar 13 13:11:41.832167 master-0 kubenswrapper[28149]: I0313 13:11:41.826739 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/5542dffa-edbf-4133-b7cc-2631121726dc-sys\") pod \"cinder-ee0a2-backup-0\" (UID: \"5542dffa-edbf-4133-b7cc-2631121726dc\") " pod="openstack/cinder-ee0a2-backup-0" Mar 13 13:11:41.832167 master-0 kubenswrapper[28149]: I0313 13:11:41.826758 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5542dffa-edbf-4133-b7cc-2631121726dc-config-data\") pod \"cinder-ee0a2-backup-0\" (UID: \"5542dffa-edbf-4133-b7cc-2631121726dc\") " pod="openstack/cinder-ee0a2-backup-0" Mar 13 13:11:41.832167 master-0 kubenswrapper[28149]: I0313 13:11:41.826774 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/5542dffa-edbf-4133-b7cc-2631121726dc-run\") pod \"cinder-ee0a2-backup-0\" (UID: \"5542dffa-edbf-4133-b7cc-2631121726dc\") " pod="openstack/cinder-ee0a2-backup-0" Mar 13 13:11:41.832167 master-0 kubenswrapper[28149]: I0313 13:11:41.826845 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5542dffa-edbf-4133-b7cc-2631121726dc-combined-ca-bundle\") pod \"cinder-ee0a2-backup-0\" (UID: \"5542dffa-edbf-4133-b7cc-2631121726dc\") " 
pod="openstack/cinder-ee0a2-backup-0" Mar 13 13:11:41.832167 master-0 kubenswrapper[28149]: I0313 13:11:41.826891 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a08dce85-d5c4-44e4-a3b0-e404c53b62f2-scripts\") pod \"cinder-ee0a2-volume-lvm-iscsi-0\" (UID: \"a08dce85-d5c4-44e4-a3b0-e404c53b62f2\") " pod="openstack/cinder-ee0a2-volume-lvm-iscsi-0" Mar 13 13:11:41.832167 master-0 kubenswrapper[28149]: I0313 13:11:41.826919 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/5542dffa-edbf-4133-b7cc-2631121726dc-etc-machine-id\") pod \"cinder-ee0a2-backup-0\" (UID: \"5542dffa-edbf-4133-b7cc-2631121726dc\") " pod="openstack/cinder-ee0a2-backup-0" Mar 13 13:11:41.832167 master-0 kubenswrapper[28149]: I0313 13:11:41.826936 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5542dffa-edbf-4133-b7cc-2631121726dc-scripts\") pod \"cinder-ee0a2-backup-0\" (UID: \"5542dffa-edbf-4133-b7cc-2631121726dc\") " pod="openstack/cinder-ee0a2-backup-0" Mar 13 13:11:41.832167 master-0 kubenswrapper[28149]: I0313 13:11:41.826953 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a08dce85-d5c4-44e4-a3b0-e404c53b62f2-config-data\") pod \"cinder-ee0a2-volume-lvm-iscsi-0\" (UID: \"a08dce85-d5c4-44e4-a3b0-e404c53b62f2\") " pod="openstack/cinder-ee0a2-volume-lvm-iscsi-0" Mar 13 13:11:41.832167 master-0 kubenswrapper[28149]: I0313 13:11:41.826982 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/5542dffa-edbf-4133-b7cc-2631121726dc-dev\") pod \"cinder-ee0a2-backup-0\" (UID: \"5542dffa-edbf-4133-b7cc-2631121726dc\") " 
pod="openstack/cinder-ee0a2-backup-0" Mar 13 13:11:41.832167 master-0 kubenswrapper[28149]: I0313 13:11:41.827497 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/a08dce85-d5c4-44e4-a3b0-e404c53b62f2-dev\") pod \"cinder-ee0a2-volume-lvm-iscsi-0\" (UID: \"a08dce85-d5c4-44e4-a3b0-e404c53b62f2\") " pod="openstack/cinder-ee0a2-volume-lvm-iscsi-0" Mar 13 13:11:41.832167 master-0 kubenswrapper[28149]: I0313 13:11:41.827638 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/a08dce85-d5c4-44e4-a3b0-e404c53b62f2-var-lib-cinder\") pod \"cinder-ee0a2-volume-lvm-iscsi-0\" (UID: \"a08dce85-d5c4-44e4-a3b0-e404c53b62f2\") " pod="openstack/cinder-ee0a2-volume-lvm-iscsi-0" Mar 13 13:11:41.832167 master-0 kubenswrapper[28149]: I0313 13:11:41.827663 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a08dce85-d5c4-44e4-a3b0-e404c53b62f2-lib-modules\") pod \"cinder-ee0a2-volume-lvm-iscsi-0\" (UID: \"a08dce85-d5c4-44e4-a3b0-e404c53b62f2\") " pod="openstack/cinder-ee0a2-volume-lvm-iscsi-0" Mar 13 13:11:41.832167 master-0 kubenswrapper[28149]: I0313 13:11:41.828756 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/a08dce85-d5c4-44e4-a3b0-e404c53b62f2-var-locks-cinder\") pod \"cinder-ee0a2-volume-lvm-iscsi-0\" (UID: \"a08dce85-d5c4-44e4-a3b0-e404c53b62f2\") " pod="openstack/cinder-ee0a2-volume-lvm-iscsi-0" Mar 13 13:11:41.832167 master-0 kubenswrapper[28149]: I0313 13:11:41.828851 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/a08dce85-d5c4-44e4-a3b0-e404c53b62f2-sys\") pod \"cinder-ee0a2-volume-lvm-iscsi-0\" (UID: \"a08dce85-d5c4-44e4-a3b0-e404c53b62f2\") " pod="openstack/cinder-ee0a2-volume-lvm-iscsi-0" 
Mar 13 13:11:41.832167 master-0 kubenswrapper[28149]: I0313 13:11:41.829479 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run\" (UniqueName: \"kubernetes.io/host-path/a08dce85-d5c4-44e4-a3b0-e404c53b62f2-run\") pod \"cinder-ee0a2-volume-lvm-iscsi-0\" (UID: \"a08dce85-d5c4-44e4-a3b0-e404c53b62f2\") " pod="openstack/cinder-ee0a2-volume-lvm-iscsi-0" Mar 13 13:11:41.832167 master-0 kubenswrapper[28149]: I0313 13:11:41.829560 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/a08dce85-d5c4-44e4-a3b0-e404c53b62f2-etc-nvme\") pod \"cinder-ee0a2-volume-lvm-iscsi-0\" (UID: \"a08dce85-d5c4-44e4-a3b0-e404c53b62f2\") " pod="openstack/cinder-ee0a2-volume-lvm-iscsi-0" Mar 13 13:11:41.832167 master-0 kubenswrapper[28149]: I0313 13:11:41.829874 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/a08dce85-d5c4-44e4-a3b0-e404c53b62f2-etc-iscsi\") pod \"cinder-ee0a2-volume-lvm-iscsi-0\" (UID: \"a08dce85-d5c4-44e4-a3b0-e404c53b62f2\") " pod="openstack/cinder-ee0a2-volume-lvm-iscsi-0" Mar 13 13:11:41.839163 master-0 kubenswrapper[28149]: I0313 13:11:41.836659 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a08dce85-d5c4-44e4-a3b0-e404c53b62f2-config-data-custom\") pod \"cinder-ee0a2-volume-lvm-iscsi-0\" (UID: \"a08dce85-d5c4-44e4-a3b0-e404c53b62f2\") " pod="openstack/cinder-ee0a2-volume-lvm-iscsi-0" Mar 13 13:11:41.839163 master-0 kubenswrapper[28149]: I0313 13:11:41.836718 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/a08dce85-d5c4-44e4-a3b0-e404c53b62f2-etc-machine-id\") pod \"cinder-ee0a2-volume-lvm-iscsi-0\" (UID: \"a08dce85-d5c4-44e4-a3b0-e404c53b62f2\") " pod="openstack/cinder-ee0a2-volume-lvm-iscsi-0" Mar 13 13:11:41.843604 master-0 
kubenswrapper[28149]: I0313 13:11:41.843557 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/a08dce85-d5c4-44e4-a3b0-e404c53b62f2-var-locks-brick\") pod \"cinder-ee0a2-volume-lvm-iscsi-0\" (UID: \"a08dce85-d5c4-44e4-a3b0-e404c53b62f2\") " pod="openstack/cinder-ee0a2-volume-lvm-iscsi-0" Mar 13 13:11:41.843875 master-0 kubenswrapper[28149]: I0313 13:11:41.843828 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a08dce85-d5c4-44e4-a3b0-e404c53b62f2-combined-ca-bundle\") pod \"cinder-ee0a2-volume-lvm-iscsi-0\" (UID: \"a08dce85-d5c4-44e4-a3b0-e404c53b62f2\") " pod="openstack/cinder-ee0a2-volume-lvm-iscsi-0" Mar 13 13:11:41.969674 master-0 kubenswrapper[28149]: I0313 13:11:41.890411 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j8j68\" (UniqueName: \"kubernetes.io/projected/a08dce85-d5c4-44e4-a3b0-e404c53b62f2-kube-api-access-j8j68\") pod \"cinder-ee0a2-volume-lvm-iscsi-0\" (UID: \"a08dce85-d5c4-44e4-a3b0-e404c53b62f2\") " pod="openstack/cinder-ee0a2-volume-lvm-iscsi-0" Mar 13 13:11:41.969674 master-0 kubenswrapper[28149]: I0313 13:11:41.967297 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a08dce85-d5c4-44e4-a3b0-e404c53b62f2-scripts\") pod \"cinder-ee0a2-volume-lvm-iscsi-0\" (UID: \"a08dce85-d5c4-44e4-a3b0-e404c53b62f2\") " pod="openstack/cinder-ee0a2-volume-lvm-iscsi-0" Mar 13 13:11:41.973162 master-0 kubenswrapper[28149]: I0313 13:11:41.972935 28149 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-ee0a2-api-0"] Mar 13 13:11:41.979392 master-0 kubenswrapper[28149]: I0313 13:11:41.974387 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a08dce85-d5c4-44e4-a3b0-e404c53b62f2-config-data\") pod 
\"cinder-ee0a2-volume-lvm-iscsi-0\" (UID: \"a08dce85-d5c4-44e4-a3b0-e404c53b62f2\") " pod="openstack/cinder-ee0a2-volume-lvm-iscsi-0" Mar 13 13:11:41.979392 master-0 kubenswrapper[28149]: I0313 13:11:41.977035 28149 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-ee0a2-api-0" Mar 13 13:11:41.981762 master-0 kubenswrapper[28149]: I0313 13:11:41.981720 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/5542dffa-edbf-4133-b7cc-2631121726dc-dev\") pod \"cinder-ee0a2-backup-0\" (UID: \"5542dffa-edbf-4133-b7cc-2631121726dc\") " pod="openstack/cinder-ee0a2-backup-0" Mar 13 13:11:42.011021 master-0 kubenswrapper[28149]: I0313 13:11:41.981820 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9q5vm\" (UniqueName: \"kubernetes.io/projected/5542dffa-edbf-4133-b7cc-2631121726dc-kube-api-access-9q5vm\") pod \"cinder-ee0a2-backup-0\" (UID: \"5542dffa-edbf-4133-b7cc-2631121726dc\") " pod="openstack/cinder-ee0a2-backup-0" Mar 13 13:11:42.011021 master-0 kubenswrapper[28149]: I0313 13:11:41.981934 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/5542dffa-edbf-4133-b7cc-2631121726dc-dev\") pod \"cinder-ee0a2-backup-0\" (UID: \"5542dffa-edbf-4133-b7cc-2631121726dc\") " pod="openstack/cinder-ee0a2-backup-0" Mar 13 13:11:42.011021 master-0 kubenswrapper[28149]: I0313 13:11:41.982029 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/5542dffa-edbf-4133-b7cc-2631121726dc-config-data-custom\") pod \"cinder-ee0a2-backup-0\" (UID: \"5542dffa-edbf-4133-b7cc-2631121726dc\") " pod="openstack/cinder-ee0a2-backup-0" Mar 13 13:11:42.011021 master-0 kubenswrapper[28149]: I0313 13:11:41.982784 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/5542dffa-edbf-4133-b7cc-2631121726dc-lib-modules\") pod \"cinder-ee0a2-backup-0\" (UID: \"5542dffa-edbf-4133-b7cc-2631121726dc\") " pod="openstack/cinder-ee0a2-backup-0" Mar 13 13:11:42.011021 master-0 kubenswrapper[28149]: I0313 13:11:41.982858 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/5542dffa-edbf-4133-b7cc-2631121726dc-var-locks-cinder\") pod \"cinder-ee0a2-backup-0\" (UID: \"5542dffa-edbf-4133-b7cc-2631121726dc\") " pod="openstack/cinder-ee0a2-backup-0" Mar 13 13:11:42.011021 master-0 kubenswrapper[28149]: I0313 13:11:41.982907 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/5542dffa-edbf-4133-b7cc-2631121726dc-var-locks-brick\") pod \"cinder-ee0a2-backup-0\" (UID: \"5542dffa-edbf-4133-b7cc-2631121726dc\") " pod="openstack/cinder-ee0a2-backup-0" Mar 13 13:11:42.011021 master-0 kubenswrapper[28149]: I0313 13:11:41.983017 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/5542dffa-edbf-4133-b7cc-2631121726dc-etc-iscsi\") pod \"cinder-ee0a2-backup-0\" (UID: \"5542dffa-edbf-4133-b7cc-2631121726dc\") " pod="openstack/cinder-ee0a2-backup-0" Mar 13 13:11:42.011021 master-0 kubenswrapper[28149]: I0313 13:11:41.983052 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/5542dffa-edbf-4133-b7cc-2631121726dc-etc-nvme\") pod \"cinder-ee0a2-backup-0\" (UID: \"5542dffa-edbf-4133-b7cc-2631121726dc\") " pod="openstack/cinder-ee0a2-backup-0" Mar 13 13:11:42.011021 master-0 kubenswrapper[28149]: I0313 13:11:41.983075 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-cinder\" (UniqueName: 
\"kubernetes.io/host-path/5542dffa-edbf-4133-b7cc-2631121726dc-var-lib-cinder\") pod \"cinder-ee0a2-backup-0\" (UID: \"5542dffa-edbf-4133-b7cc-2631121726dc\") " pod="openstack/cinder-ee0a2-backup-0" Mar 13 13:11:42.011021 master-0 kubenswrapper[28149]: I0313 13:11:41.983122 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/5542dffa-edbf-4133-b7cc-2631121726dc-sys\") pod \"cinder-ee0a2-backup-0\" (UID: \"5542dffa-edbf-4133-b7cc-2631121726dc\") " pod="openstack/cinder-ee0a2-backup-0" Mar 13 13:11:42.011021 master-0 kubenswrapper[28149]: I0313 13:11:41.983153 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/5542dffa-edbf-4133-b7cc-2631121726dc-run\") pod \"cinder-ee0a2-backup-0\" (UID: \"5542dffa-edbf-4133-b7cc-2631121726dc\") " pod="openstack/cinder-ee0a2-backup-0" Mar 13 13:11:42.011021 master-0 kubenswrapper[28149]: I0313 13:11:41.983171 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5542dffa-edbf-4133-b7cc-2631121726dc-config-data\") pod \"cinder-ee0a2-backup-0\" (UID: \"5542dffa-edbf-4133-b7cc-2631121726dc\") " pod="openstack/cinder-ee0a2-backup-0" Mar 13 13:11:42.011021 master-0 kubenswrapper[28149]: I0313 13:11:41.983216 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5542dffa-edbf-4133-b7cc-2631121726dc-combined-ca-bundle\") pod \"cinder-ee0a2-backup-0\" (UID: \"5542dffa-edbf-4133-b7cc-2631121726dc\") " pod="openstack/cinder-ee0a2-backup-0" Mar 13 13:11:42.011021 master-0 kubenswrapper[28149]: I0313 13:11:41.983258 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5542dffa-edbf-4133-b7cc-2631121726dc-scripts\") pod \"cinder-ee0a2-backup-0\" (UID: 
\"5542dffa-edbf-4133-b7cc-2631121726dc\") " pod="openstack/cinder-ee0a2-backup-0" Mar 13 13:11:42.011021 master-0 kubenswrapper[28149]: I0313 13:11:41.983275 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/5542dffa-edbf-4133-b7cc-2631121726dc-etc-machine-id\") pod \"cinder-ee0a2-backup-0\" (UID: \"5542dffa-edbf-4133-b7cc-2631121726dc\") " pod="openstack/cinder-ee0a2-backup-0" Mar 13 13:11:42.011021 master-0 kubenswrapper[28149]: I0313 13:11:41.987307 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/5542dffa-edbf-4133-b7cc-2631121726dc-var-lib-cinder\") pod \"cinder-ee0a2-backup-0\" (UID: \"5542dffa-edbf-4133-b7cc-2631121726dc\") " pod="openstack/cinder-ee0a2-backup-0" Mar 13 13:11:42.011021 master-0 kubenswrapper[28149]: I0313 13:11:41.987380 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/5542dffa-edbf-4133-b7cc-2631121726dc-lib-modules\") pod \"cinder-ee0a2-backup-0\" (UID: \"5542dffa-edbf-4133-b7cc-2631121726dc\") " pod="openstack/cinder-ee0a2-backup-0" Mar 13 13:11:42.011021 master-0 kubenswrapper[28149]: I0313 13:11:41.987606 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/5542dffa-edbf-4133-b7cc-2631121726dc-etc-nvme\") pod \"cinder-ee0a2-backup-0\" (UID: \"5542dffa-edbf-4133-b7cc-2631121726dc\") " pod="openstack/cinder-ee0a2-backup-0" Mar 13 13:11:42.011021 master-0 kubenswrapper[28149]: I0313 13:11:41.987682 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/5542dffa-edbf-4133-b7cc-2631121726dc-etc-iscsi\") pod \"cinder-ee0a2-backup-0\" (UID: \"5542dffa-edbf-4133-b7cc-2631121726dc\") " pod="openstack/cinder-ee0a2-backup-0" Mar 13 13:11:42.011021 master-0 
kubenswrapper[28149]: I0313 13:11:41.987687 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/5542dffa-edbf-4133-b7cc-2631121726dc-var-locks-cinder\") pod \"cinder-ee0a2-backup-0\" (UID: \"5542dffa-edbf-4133-b7cc-2631121726dc\") " pod="openstack/cinder-ee0a2-backup-0" Mar 13 13:11:42.011021 master-0 kubenswrapper[28149]: I0313 13:11:41.987741 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/5542dffa-edbf-4133-b7cc-2631121726dc-etc-machine-id\") pod \"cinder-ee0a2-backup-0\" (UID: \"5542dffa-edbf-4133-b7cc-2631121726dc\") " pod="openstack/cinder-ee0a2-backup-0" Mar 13 13:11:42.011021 master-0 kubenswrapper[28149]: I0313 13:11:41.987896 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/5542dffa-edbf-4133-b7cc-2631121726dc-var-locks-brick\") pod \"cinder-ee0a2-backup-0\" (UID: \"5542dffa-edbf-4133-b7cc-2631121726dc\") " pod="openstack/cinder-ee0a2-backup-0" Mar 13 13:11:42.011021 master-0 kubenswrapper[28149]: I0313 13:11:41.988256 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run\" (UniqueName: \"kubernetes.io/host-path/5542dffa-edbf-4133-b7cc-2631121726dc-run\") pod \"cinder-ee0a2-backup-0\" (UID: \"5542dffa-edbf-4133-b7cc-2631121726dc\") " pod="openstack/cinder-ee0a2-backup-0" Mar 13 13:11:42.011021 master-0 kubenswrapper[28149]: I0313 13:11:41.988297 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/5542dffa-edbf-4133-b7cc-2631121726dc-sys\") pod \"cinder-ee0a2-backup-0\" (UID: \"5542dffa-edbf-4133-b7cc-2631121726dc\") " pod="openstack/cinder-ee0a2-backup-0" Mar 13 13:11:42.011021 master-0 kubenswrapper[28149]: I0313 13:11:41.990413 28149 reflector.go:368] Caches populated for *v1.Secret from 
object-"openstack"/"cinder-ee0a2-api-config-data" Mar 13 13:11:42.022017 master-0 kubenswrapper[28149]: I0313 13:11:42.014093 28149 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-ee0a2-api-0"] Mar 13 13:11:42.092407 master-0 kubenswrapper[28149]: I0313 13:11:42.091656 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xn4jh\" (UniqueName: \"kubernetes.io/projected/e4f641ac-5f11-4ad6-b2d1-430b1fef36dd-kube-api-access-xn4jh\") pod \"cinder-ee0a2-api-0\" (UID: \"e4f641ac-5f11-4ad6-b2d1-430b1fef36dd\") " pod="openstack/cinder-ee0a2-api-0" Mar 13 13:11:42.092407 master-0 kubenswrapper[28149]: I0313 13:11:42.091839 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e4f641ac-5f11-4ad6-b2d1-430b1fef36dd-combined-ca-bundle\") pod \"cinder-ee0a2-api-0\" (UID: \"e4f641ac-5f11-4ad6-b2d1-430b1fef36dd\") " pod="openstack/cinder-ee0a2-api-0" Mar 13 13:11:42.092407 master-0 kubenswrapper[28149]: I0313 13:11:42.091906 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/e4f641ac-5f11-4ad6-b2d1-430b1fef36dd-etc-machine-id\") pod \"cinder-ee0a2-api-0\" (UID: \"e4f641ac-5f11-4ad6-b2d1-430b1fef36dd\") " pod="openstack/cinder-ee0a2-api-0" Mar 13 13:11:42.092407 master-0 kubenswrapper[28149]: I0313 13:11:42.091935 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e4f641ac-5f11-4ad6-b2d1-430b1fef36dd-config-data-custom\") pod \"cinder-ee0a2-api-0\" (UID: \"e4f641ac-5f11-4ad6-b2d1-430b1fef36dd\") " pod="openstack/cinder-ee0a2-api-0" Mar 13 13:11:42.092407 master-0 kubenswrapper[28149]: I0313 13:11:42.092042 28149 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e4f641ac-5f11-4ad6-b2d1-430b1fef36dd-scripts\") pod \"cinder-ee0a2-api-0\" (UID: \"e4f641ac-5f11-4ad6-b2d1-430b1fef36dd\") " pod="openstack/cinder-ee0a2-api-0" Mar 13 13:11:42.092407 master-0 kubenswrapper[28149]: I0313 13:11:42.092174 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e4f641ac-5f11-4ad6-b2d1-430b1fef36dd-logs\") pod \"cinder-ee0a2-api-0\" (UID: \"e4f641ac-5f11-4ad6-b2d1-430b1fef36dd\") " pod="openstack/cinder-ee0a2-api-0" Mar 13 13:11:42.092407 master-0 kubenswrapper[28149]: I0313 13:11:42.092216 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e4f641ac-5f11-4ad6-b2d1-430b1fef36dd-config-data\") pod \"cinder-ee0a2-api-0\" (UID: \"e4f641ac-5f11-4ad6-b2d1-430b1fef36dd\") " pod="openstack/cinder-ee0a2-api-0" Mar 13 13:11:42.165371 master-0 kubenswrapper[28149]: I0313 13:11:42.165107 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/5542dffa-edbf-4133-b7cc-2631121726dc-config-data-custom\") pod \"cinder-ee0a2-backup-0\" (UID: \"5542dffa-edbf-4133-b7cc-2631121726dc\") " pod="openstack/cinder-ee0a2-backup-0" Mar 13 13:11:42.168306 master-0 kubenswrapper[28149]: I0313 13:11:42.168080 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5542dffa-edbf-4133-b7cc-2631121726dc-combined-ca-bundle\") pod \"cinder-ee0a2-backup-0\" (UID: \"5542dffa-edbf-4133-b7cc-2631121726dc\") " pod="openstack/cinder-ee0a2-backup-0" Mar 13 13:11:42.186161 master-0 kubenswrapper[28149]: I0313 13:11:42.181001 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9q5vm\" 
(UniqueName: \"kubernetes.io/projected/5542dffa-edbf-4133-b7cc-2631121726dc-kube-api-access-9q5vm\") pod \"cinder-ee0a2-backup-0\" (UID: \"5542dffa-edbf-4133-b7cc-2631121726dc\") " pod="openstack/cinder-ee0a2-backup-0" Mar 13 13:11:42.194717 master-0 kubenswrapper[28149]: I0313 13:11:42.194170 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e4f641ac-5f11-4ad6-b2d1-430b1fef36dd-combined-ca-bundle\") pod \"cinder-ee0a2-api-0\" (UID: \"e4f641ac-5f11-4ad6-b2d1-430b1fef36dd\") " pod="openstack/cinder-ee0a2-api-0" Mar 13 13:11:42.194717 master-0 kubenswrapper[28149]: I0313 13:11:42.194248 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/e4f641ac-5f11-4ad6-b2d1-430b1fef36dd-etc-machine-id\") pod \"cinder-ee0a2-api-0\" (UID: \"e4f641ac-5f11-4ad6-b2d1-430b1fef36dd\") " pod="openstack/cinder-ee0a2-api-0" Mar 13 13:11:42.194717 master-0 kubenswrapper[28149]: I0313 13:11:42.194286 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e4f641ac-5f11-4ad6-b2d1-430b1fef36dd-config-data-custom\") pod \"cinder-ee0a2-api-0\" (UID: \"e4f641ac-5f11-4ad6-b2d1-430b1fef36dd\") " pod="openstack/cinder-ee0a2-api-0" Mar 13 13:11:42.194717 master-0 kubenswrapper[28149]: I0313 13:11:42.194393 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e4f641ac-5f11-4ad6-b2d1-430b1fef36dd-scripts\") pod \"cinder-ee0a2-api-0\" (UID: \"e4f641ac-5f11-4ad6-b2d1-430b1fef36dd\") " pod="openstack/cinder-ee0a2-api-0" Mar 13 13:11:42.194717 master-0 kubenswrapper[28149]: I0313 13:11:42.194516 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e4f641ac-5f11-4ad6-b2d1-430b1fef36dd-logs\") pod 
\"cinder-ee0a2-api-0\" (UID: \"e4f641ac-5f11-4ad6-b2d1-430b1fef36dd\") " pod="openstack/cinder-ee0a2-api-0" Mar 13 13:11:42.194717 master-0 kubenswrapper[28149]: I0313 13:11:42.194553 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e4f641ac-5f11-4ad6-b2d1-430b1fef36dd-config-data\") pod \"cinder-ee0a2-api-0\" (UID: \"e4f641ac-5f11-4ad6-b2d1-430b1fef36dd\") " pod="openstack/cinder-ee0a2-api-0" Mar 13 13:11:42.200196 master-0 kubenswrapper[28149]: I0313 13:11:42.196332 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/e4f641ac-5f11-4ad6-b2d1-430b1fef36dd-etc-machine-id\") pod \"cinder-ee0a2-api-0\" (UID: \"e4f641ac-5f11-4ad6-b2d1-430b1fef36dd\") " pod="openstack/cinder-ee0a2-api-0" Mar 13 13:11:42.204156 master-0 kubenswrapper[28149]: I0313 13:11:42.202340 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5542dffa-edbf-4133-b7cc-2631121726dc-config-data\") pod \"cinder-ee0a2-backup-0\" (UID: \"5542dffa-edbf-4133-b7cc-2631121726dc\") " pod="openstack/cinder-ee0a2-backup-0" Mar 13 13:11:42.213243 master-0 kubenswrapper[28149]: I0313 13:11:42.213067 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e4f641ac-5f11-4ad6-b2d1-430b1fef36dd-config-data\") pod \"cinder-ee0a2-api-0\" (UID: \"e4f641ac-5f11-4ad6-b2d1-430b1fef36dd\") " pod="openstack/cinder-ee0a2-api-0" Mar 13 13:11:42.215693 master-0 kubenswrapper[28149]: I0313 13:11:42.214357 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xn4jh\" (UniqueName: \"kubernetes.io/projected/e4f641ac-5f11-4ad6-b2d1-430b1fef36dd-kube-api-access-xn4jh\") pod \"cinder-ee0a2-api-0\" (UID: \"e4f641ac-5f11-4ad6-b2d1-430b1fef36dd\") " pod="openstack/cinder-ee0a2-api-0" Mar 13 
13:11:42.215693 master-0 kubenswrapper[28149]: I0313 13:11:42.214631 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e4f641ac-5f11-4ad6-b2d1-430b1fef36dd-logs\") pod \"cinder-ee0a2-api-0\" (UID: \"e4f641ac-5f11-4ad6-b2d1-430b1fef36dd\") " pod="openstack/cinder-ee0a2-api-0" Mar 13 13:11:42.227159 master-0 kubenswrapper[28149]: I0313 13:11:42.223994 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5542dffa-edbf-4133-b7cc-2631121726dc-scripts\") pod \"cinder-ee0a2-backup-0\" (UID: \"5542dffa-edbf-4133-b7cc-2631121726dc\") " pod="openstack/cinder-ee0a2-backup-0" Mar 13 13:11:42.231323 master-0 kubenswrapper[28149]: I0313 13:11:42.229959 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e4f641ac-5f11-4ad6-b2d1-430b1fef36dd-config-data-custom\") pod \"cinder-ee0a2-api-0\" (UID: \"e4f641ac-5f11-4ad6-b2d1-430b1fef36dd\") " pod="openstack/cinder-ee0a2-api-0" Mar 13 13:11:42.240870 master-0 kubenswrapper[28149]: I0313 13:11:42.240750 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e4f641ac-5f11-4ad6-b2d1-430b1fef36dd-scripts\") pod \"cinder-ee0a2-api-0\" (UID: \"e4f641ac-5f11-4ad6-b2d1-430b1fef36dd\") " pod="openstack/cinder-ee0a2-api-0" Mar 13 13:11:42.241194 master-0 kubenswrapper[28149]: I0313 13:11:42.240938 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e4f641ac-5f11-4ad6-b2d1-430b1fef36dd-combined-ca-bundle\") pod \"cinder-ee0a2-api-0\" (UID: \"e4f641ac-5f11-4ad6-b2d1-430b1fef36dd\") " pod="openstack/cinder-ee0a2-api-0" Mar 13 13:11:42.244404 master-0 kubenswrapper[28149]: I0313 13:11:42.241344 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-xn4jh\" (UniqueName: \"kubernetes.io/projected/e4f641ac-5f11-4ad6-b2d1-430b1fef36dd-kube-api-access-xn4jh\") pod \"cinder-ee0a2-api-0\" (UID: \"e4f641ac-5f11-4ad6-b2d1-430b1fef36dd\") " pod="openstack/cinder-ee0a2-api-0" Mar 13 13:11:42.252900 master-0 kubenswrapper[28149]: I0313 13:11:42.252854 28149 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-ee0a2-volume-lvm-iscsi-0" Mar 13 13:11:42.287169 master-0 kubenswrapper[28149]: I0313 13:11:42.287065 28149 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-ee0a2-backup-0" Mar 13 13:11:42.327056 master-0 kubenswrapper[28149]: I0313 13:11:42.327008 28149 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-ee0a2-api-0" Mar 13 13:11:42.822508 master-0 kubenswrapper[28149]: I0313 13:11:42.822432 28149 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6cf64fcfbc-mg9tl" Mar 13 13:11:42.833073 master-0 kubenswrapper[28149]: I0313 13:11:42.832637 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6cf64fcfbc-mg9tl" event={"ID":"f4d26441-5029-4cf0-9ef3-cba4ed2390e2","Type":"ContainerDied","Data":"7be309b8464825f7be6066ab4972a4af21ae91870f46f8734f71e7024db08c46"} Mar 13 13:11:42.833073 master-0 kubenswrapper[28149]: I0313 13:11:42.832707 28149 scope.go:117] "RemoveContainer" containerID="1c1c82f32a88e93bb5193996990824a47c55ca0804e2af53e07b65c6c5e5849a" Mar 13 13:11:42.899621 master-0 kubenswrapper[28149]: I0313 13:11:42.899475 28149 scope.go:117] "RemoveContainer" containerID="6ec533589ef5035aea6ec3a82dbb8b38fc7e1f6902182907d154108f40b1c4fb" Mar 13 13:11:42.939634 master-0 kubenswrapper[28149]: I0313 13:11:42.936251 28149 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6cqrb\" (UniqueName: 
\"kubernetes.io/projected/f4d26441-5029-4cf0-9ef3-cba4ed2390e2-kube-api-access-6cqrb\") pod \"f4d26441-5029-4cf0-9ef3-cba4ed2390e2\" (UID: \"f4d26441-5029-4cf0-9ef3-cba4ed2390e2\") " Mar 13 13:11:42.939634 master-0 kubenswrapper[28149]: I0313 13:11:42.936344 28149 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f4d26441-5029-4cf0-9ef3-cba4ed2390e2-config\") pod \"f4d26441-5029-4cf0-9ef3-cba4ed2390e2\" (UID: \"f4d26441-5029-4cf0-9ef3-cba4ed2390e2\") " Mar 13 13:11:42.939634 master-0 kubenswrapper[28149]: I0313 13:11:42.936454 28149 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f4d26441-5029-4cf0-9ef3-cba4ed2390e2-ovsdbserver-nb\") pod \"f4d26441-5029-4cf0-9ef3-cba4ed2390e2\" (UID: \"f4d26441-5029-4cf0-9ef3-cba4ed2390e2\") " Mar 13 13:11:42.939634 master-0 kubenswrapper[28149]: I0313 13:11:42.937496 28149 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f4d26441-5029-4cf0-9ef3-cba4ed2390e2-dns-svc\") pod \"f4d26441-5029-4cf0-9ef3-cba4ed2390e2\" (UID: \"f4d26441-5029-4cf0-9ef3-cba4ed2390e2\") " Mar 13 13:11:42.939634 master-0 kubenswrapper[28149]: I0313 13:11:42.937565 28149 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f4d26441-5029-4cf0-9ef3-cba4ed2390e2-ovsdbserver-sb\") pod \"f4d26441-5029-4cf0-9ef3-cba4ed2390e2\" (UID: \"f4d26441-5029-4cf0-9ef3-cba4ed2390e2\") " Mar 13 13:11:42.939634 master-0 kubenswrapper[28149]: I0313 13:11:42.937787 28149 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f4d26441-5029-4cf0-9ef3-cba4ed2390e2-dns-swift-storage-0\") pod \"f4d26441-5029-4cf0-9ef3-cba4ed2390e2\" (UID: 
\"f4d26441-5029-4cf0-9ef3-cba4ed2390e2\") " Mar 13 13:11:42.953478 master-0 kubenswrapper[28149]: I0313 13:11:42.951449 28149 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f4d26441-5029-4cf0-9ef3-cba4ed2390e2-kube-api-access-6cqrb" (OuterVolumeSpecName: "kube-api-access-6cqrb") pod "f4d26441-5029-4cf0-9ef3-cba4ed2390e2" (UID: "f4d26441-5029-4cf0-9ef3-cba4ed2390e2"). InnerVolumeSpecName "kube-api-access-6cqrb". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 13:11:43.017749 master-0 kubenswrapper[28149]: I0313 13:11:43.017682 28149 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f4d26441-5029-4cf0-9ef3-cba4ed2390e2-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "f4d26441-5029-4cf0-9ef3-cba4ed2390e2" (UID: "f4d26441-5029-4cf0-9ef3-cba4ed2390e2"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 13 13:11:43.055546 master-0 kubenswrapper[28149]: I0313 13:11:43.055037 28149 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6cqrb\" (UniqueName: \"kubernetes.io/projected/f4d26441-5029-4cf0-9ef3-cba4ed2390e2-kube-api-access-6cqrb\") on node \"master-0\" DevicePath \"\"" Mar 13 13:11:43.055546 master-0 kubenswrapper[28149]: I0313 13:11:43.055075 28149 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f4d26441-5029-4cf0-9ef3-cba4ed2390e2-dns-swift-storage-0\") on node \"master-0\" DevicePath \"\"" Mar 13 13:11:43.058131 master-0 kubenswrapper[28149]: I0313 13:11:43.057648 28149 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f4d26441-5029-4cf0-9ef3-cba4ed2390e2-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "f4d26441-5029-4cf0-9ef3-cba4ed2390e2" (UID: "f4d26441-5029-4cf0-9ef3-cba4ed2390e2"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 13 13:11:43.113712 master-0 kubenswrapper[28149]: I0313 13:11:43.113648 28149 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-ee0a2-scheduler-0"] Mar 13 13:11:43.118451 master-0 kubenswrapper[28149]: W0313 13:11:43.118376 28149 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8c7dd334_6af4_4528_9a21_d51e946a555b.slice/crio-c8c26f8b0a103db78db5b6d06aa5731da2cc96bac7f7d349aa25689ba78a8daf WatchSource:0}: Error finding container c8c26f8b0a103db78db5b6d06aa5731da2cc96bac7f7d349aa25689ba78a8daf: Status 404 returned error can't find the container with id c8c26f8b0a103db78db5b6d06aa5731da2cc96bac7f7d349aa25689ba78a8daf Mar 13 13:11:43.124451 master-0 kubenswrapper[28149]: I0313 13:11:43.124358 28149 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f4d26441-5029-4cf0-9ef3-cba4ed2390e2-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "f4d26441-5029-4cf0-9ef3-cba4ed2390e2" (UID: "f4d26441-5029-4cf0-9ef3-cba4ed2390e2"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 13 13:11:43.132225 master-0 kubenswrapper[28149]: I0313 13:11:43.130680 28149 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-f9957b47c-swh76"] Mar 13 13:11:43.164088 master-0 kubenswrapper[28149]: I0313 13:11:43.164016 28149 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f4d26441-5029-4cf0-9ef3-cba4ed2390e2-config" (OuterVolumeSpecName: "config") pod "f4d26441-5029-4cf0-9ef3-cba4ed2390e2" (UID: "f4d26441-5029-4cf0-9ef3-cba4ed2390e2"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 13 13:11:43.166608 master-0 kubenswrapper[28149]: I0313 13:11:43.166539 28149 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f4d26441-5029-4cf0-9ef3-cba4ed2390e2-config\") on node \"master-0\" DevicePath \"\"" Mar 13 13:11:43.166776 master-0 kubenswrapper[28149]: I0313 13:11:43.166711 28149 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f4d26441-5029-4cf0-9ef3-cba4ed2390e2-ovsdbserver-nb\") on node \"master-0\" DevicePath \"\"" Mar 13 13:11:43.166883 master-0 kubenswrapper[28149]: I0313 13:11:43.166859 28149 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f4d26441-5029-4cf0-9ef3-cba4ed2390e2-ovsdbserver-sb\") on node \"master-0\" DevicePath \"\"" Mar 13 13:11:43.257372 master-0 kubenswrapper[28149]: I0313 13:11:43.256972 28149 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f4d26441-5029-4cf0-9ef3-cba4ed2390e2-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "f4d26441-5029-4cf0-9ef3-cba4ed2390e2" (UID: "f4d26441-5029-4cf0-9ef3-cba4ed2390e2"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 13 13:11:43.272279 master-0 kubenswrapper[28149]: I0313 13:11:43.270313 28149 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f4d26441-5029-4cf0-9ef3-cba4ed2390e2-dns-svc\") on node \"master-0\" DevicePath \"\"" Mar 13 13:11:43.406559 master-0 kubenswrapper[28149]: I0313 13:11:43.400785 28149 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-ee0a2-volume-lvm-iscsi-0"] Mar 13 13:11:43.458337 master-0 kubenswrapper[28149]: W0313 13:11:43.456735 28149 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda08dce85_d5c4_44e4_a3b0_e404c53b62f2.slice/crio-d71a32d6e9f43a8aafe5e50b1c5491f10c5a6b159510944541ceeef269227306 WatchSource:0}: Error finding container d71a32d6e9f43a8aafe5e50b1c5491f10c5a6b159510944541ceeef269227306: Status 404 returned error can't find the container with id d71a32d6e9f43a8aafe5e50b1c5491f10c5a6b159510944541ceeef269227306 Mar 13 13:11:43.540894 master-0 kubenswrapper[28149]: I0313 13:11:43.540374 28149 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-ee0a2-backup-0"] Mar 13 13:11:43.732279 master-0 kubenswrapper[28149]: I0313 13:11:43.732111 28149 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-ee0a2-api-0"] Mar 13 13:11:43.890018 master-0 kubenswrapper[28149]: I0313 13:11:43.889865 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-ee0a2-volume-lvm-iscsi-0" event={"ID":"a08dce85-d5c4-44e4-a3b0-e404c53b62f2","Type":"ContainerStarted","Data":"d71a32d6e9f43a8aafe5e50b1c5491f10c5a6b159510944541ceeef269227306"} Mar 13 13:11:43.904727 master-0 kubenswrapper[28149]: I0313 13:11:43.896673 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-f9957b47c-swh76" 
event={"ID":"1cb05e83-a753-4f24-b578-d7b8996d39b7","Type":"ContainerStarted","Data":"2df5670ee4f92554c68eaff9789f5bde338b7233a3c34e4948d3e41245ce2405"} Mar 13 13:11:43.930790 master-0 kubenswrapper[28149]: I0313 13:11:43.930728 28149 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6cf64fcfbc-mg9tl" Mar 13 13:11:43.942365 master-0 kubenswrapper[28149]: I0313 13:11:43.941057 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-ee0a2-backup-0" event={"ID":"5542dffa-edbf-4133-b7cc-2631121726dc","Type":"ContainerStarted","Data":"6080e0f6c170fbe145f0f4df0fdc2681a4dafe0493c86efc6339e21cb7c4b3bc"} Mar 13 13:11:43.945207 master-0 kubenswrapper[28149]: I0313 13:11:43.945076 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-ee0a2-scheduler-0" event={"ID":"8c7dd334-6af4-4528-9a21-d51e946a555b","Type":"ContainerStarted","Data":"c8c26f8b0a103db78db5b6d06aa5731da2cc96bac7f7d349aa25689ba78a8daf"} Mar 13 13:11:44.027792 master-0 kubenswrapper[28149]: I0313 13:11:44.025226 28149 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6cf64fcfbc-mg9tl"] Mar 13 13:11:44.046821 master-0 kubenswrapper[28149]: I0313 13:11:44.040159 28149 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6cf64fcfbc-mg9tl"] Mar 13 13:11:44.783784 master-0 kubenswrapper[28149]: I0313 13:11:44.783717 28149 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f4d26441-5029-4cf0-9ef3-cba4ed2390e2" path="/var/lib/kubelet/pods/f4d26441-5029-4cf0-9ef3-cba4ed2390e2/volumes" Mar 13 13:11:44.975708 master-0 kubenswrapper[28149]: I0313 13:11:44.975636 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-ee0a2-api-0" event={"ID":"e4f641ac-5f11-4ad6-b2d1-430b1fef36dd","Type":"ContainerStarted","Data":"1917d95354f3d581e3f468fb891eba59f828caa62e7b1e3c64e15f16aac33b17"} Mar 13 13:11:44.981874 master-0 kubenswrapper[28149]: 
I0313 13:11:44.981819 28149 generic.go:334] "Generic (PLEG): container finished" podID="1cb05e83-a753-4f24-b578-d7b8996d39b7" containerID="20262da21e1a31039d8959e8d536a7f86da3b9256937f0a84f87e44540289318" exitCode=0 Mar 13 13:11:44.982042 master-0 kubenswrapper[28149]: I0313 13:11:44.981879 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-f9957b47c-swh76" event={"ID":"1cb05e83-a753-4f24-b578-d7b8996d39b7","Type":"ContainerDied","Data":"20262da21e1a31039d8959e8d536a7f86da3b9256937f0a84f87e44540289318"} Mar 13 13:11:45.348380 master-0 kubenswrapper[28149]: I0313 13:11:45.343594 28149 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-ee0a2-api-0"] Mar 13 13:11:46.003991 master-0 kubenswrapper[28149]: I0313 13:11:46.003905 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-ee0a2-api-0" event={"ID":"e4f641ac-5f11-4ad6-b2d1-430b1fef36dd","Type":"ContainerStarted","Data":"ef9db93a43a4bcf2a58d8eaf7c496a10a9961fb641f216e5b2f84182384e6287"} Mar 13 13:11:46.014828 master-0 kubenswrapper[28149]: I0313 13:11:46.014642 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-ee0a2-volume-lvm-iscsi-0" event={"ID":"a08dce85-d5c4-44e4-a3b0-e404c53b62f2","Type":"ContainerStarted","Data":"9526c59f3a2cba871431711a9cbbc5eb3ada1bdd4f300b13618a2aaf534ea349"} Mar 13 13:11:46.051257 master-0 kubenswrapper[28149]: I0313 13:11:46.051198 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-f9957b47c-swh76" event={"ID":"1cb05e83-a753-4f24-b578-d7b8996d39b7","Type":"ContainerStarted","Data":"65e50548c2a68ea8d377a853fa87803a8ba573510bcf5bbc4b25f130ca9f0d8b"} Mar 13 13:11:46.053108 master-0 kubenswrapper[28149]: I0313 13:11:46.053075 28149 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-f9957b47c-swh76" Mar 13 13:11:46.058858 master-0 kubenswrapper[28149]: I0313 13:11:46.058779 28149 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openstack/cinder-ee0a2-backup-0" event={"ID":"5542dffa-edbf-4133-b7cc-2631121726dc","Type":"ContainerStarted","Data":"727d8e30c5ed7c69b55a98f2363e3c8df4840dde2b23de1158ff5b34eb4d3617"} Mar 13 13:11:46.061481 master-0 kubenswrapper[28149]: I0313 13:11:46.061434 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-ee0a2-scheduler-0" event={"ID":"8c7dd334-6af4-4528-9a21-d51e946a555b","Type":"ContainerStarted","Data":"3cb0d27e3219bc2532b415baa34498fa0a4a28387194a658dc55600c976399d5"} Mar 13 13:11:46.760182 master-0 kubenswrapper[28149]: I0313 13:11:46.759322 28149 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-f9957b47c-swh76" podStartSLOduration=5.759286534 podStartE2EDuration="5.759286534s" podCreationTimestamp="2026-03-13 13:11:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 13:11:46.092020151 +0000 UTC m=+1079.745485310" watchObservedRunningTime="2026-03-13 13:11:46.759286534 +0000 UTC m=+1080.412751703" Mar 13 13:11:47.098174 master-0 kubenswrapper[28149]: I0313 13:11:47.094280 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-ee0a2-volume-lvm-iscsi-0" event={"ID":"a08dce85-d5c4-44e4-a3b0-e404c53b62f2","Type":"ContainerStarted","Data":"7bb569460a6f2eb1cef8e8cce8c284a41fd3b9d31bee3e4adf73f698d6c89770"} Mar 13 13:11:47.108740 master-0 kubenswrapper[28149]: I0313 13:11:47.107765 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-ee0a2-backup-0" event={"ID":"5542dffa-edbf-4133-b7cc-2631121726dc","Type":"ContainerStarted","Data":"8d866a757c903f73361a05a85d828507b5d29c24c685ef09179cf6eb95a3969f"} Mar 13 13:11:47.128167 master-0 kubenswrapper[28149]: I0313 13:11:47.124144 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-ee0a2-scheduler-0" 
event={"ID":"8c7dd334-6af4-4528-9a21-d51e946a555b","Type":"ContainerStarted","Data":"6c3428e0b54511966213f010f1f12cc38d9cc7f5452604763576c7d35c1098f8"} Mar 13 13:11:47.159096 master-0 kubenswrapper[28149]: I0313 13:11:47.156912 28149 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-ee0a2-volume-lvm-iscsi-0" podStartSLOduration=4.256252978 podStartE2EDuration="6.156887994s" podCreationTimestamp="2026-03-13 13:11:41 +0000 UTC" firstStartedPulling="2026-03-13 13:11:43.460794466 +0000 UTC m=+1077.114259625" lastFinishedPulling="2026-03-13 13:11:45.361429482 +0000 UTC m=+1079.014894641" observedRunningTime="2026-03-13 13:11:47.138757466 +0000 UTC m=+1080.792222625" watchObservedRunningTime="2026-03-13 13:11:47.156887994 +0000 UTC m=+1080.810353163" Mar 13 13:11:47.217502 master-0 kubenswrapper[28149]: I0313 13:11:47.217393 28149 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-ee0a2-backup-0" podStartSLOduration=4.473015287 podStartE2EDuration="6.217362593s" podCreationTimestamp="2026-03-13 13:11:41 +0000 UTC" firstStartedPulling="2026-03-13 13:11:43.546978928 +0000 UTC m=+1077.200444087" lastFinishedPulling="2026-03-13 13:11:45.291326244 +0000 UTC m=+1078.944791393" observedRunningTime="2026-03-13 13:11:47.189075721 +0000 UTC m=+1080.842540880" watchObservedRunningTime="2026-03-13 13:11:47.217362593 +0000 UTC m=+1080.870827752" Mar 13 13:11:47.229871 master-0 kubenswrapper[28149]: I0313 13:11:47.229392 28149 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-ee0a2-scheduler-0" podStartSLOduration=4.811037972 podStartE2EDuration="6.229348666s" podCreationTimestamp="2026-03-13 13:11:41 +0000 UTC" firstStartedPulling="2026-03-13 13:11:43.126126861 +0000 UTC m=+1076.779592020" lastFinishedPulling="2026-03-13 13:11:44.544437555 +0000 UTC m=+1078.197902714" observedRunningTime="2026-03-13 13:11:47.22057697 +0000 UTC m=+1080.874042119" 
watchObservedRunningTime="2026-03-13 13:11:47.229348666 +0000 UTC m=+1080.882813835" Mar 13 13:11:47.255423 master-0 kubenswrapper[28149]: I0313 13:11:47.255360 28149 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-ee0a2-volume-lvm-iscsi-0" Mar 13 13:11:47.287956 master-0 kubenswrapper[28149]: I0313 13:11:47.287812 28149 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-ee0a2-backup-0" Mar 13 13:11:48.225632 master-0 kubenswrapper[28149]: I0313 13:11:48.151492 28149 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-ee0a2-api-0" podUID="e4f641ac-5f11-4ad6-b2d1-430b1fef36dd" containerName="cinder-ee0a2-api-log" containerID="cri-o://ef9db93a43a4bcf2a58d8eaf7c496a10a9961fb641f216e5b2f84182384e6287" gracePeriod=30 Mar 13 13:11:48.225632 master-0 kubenswrapper[28149]: I0313 13:11:48.151881 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-ee0a2-api-0" event={"ID":"e4f641ac-5f11-4ad6-b2d1-430b1fef36dd","Type":"ContainerStarted","Data":"33c31329400496fed349596fa5f92f2238ea2da5908785e42511946da5e90bb2"} Mar 13 13:11:48.225632 master-0 kubenswrapper[28149]: I0313 13:11:48.155418 28149 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-ee0a2-api-0" Mar 13 13:11:48.225632 master-0 kubenswrapper[28149]: I0313 13:11:48.155455 28149 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-ee0a2-api-0" podUID="e4f641ac-5f11-4ad6-b2d1-430b1fef36dd" containerName="cinder-api" containerID="cri-o://33c31329400496fed349596fa5f92f2238ea2da5908785e42511946da5e90bb2" gracePeriod=30 Mar 13 13:11:48.225632 master-0 kubenswrapper[28149]: I0313 13:11:48.205245 28149 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-ee0a2-api-0" podStartSLOduration=7.205224123 podStartE2EDuration="7.205224123s" podCreationTimestamp="2026-03-13 
13:11:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 13:11:48.201614115 +0000 UTC m=+1081.855079274" watchObservedRunningTime="2026-03-13 13:11:48.205224123 +0000 UTC m=+1081.858689282" Mar 13 13:11:49.162081 master-0 kubenswrapper[28149]: I0313 13:11:49.161539 28149 generic.go:334] "Generic (PLEG): container finished" podID="e4f641ac-5f11-4ad6-b2d1-430b1fef36dd" containerID="33c31329400496fed349596fa5f92f2238ea2da5908785e42511946da5e90bb2" exitCode=0 Mar 13 13:11:49.162081 master-0 kubenswrapper[28149]: I0313 13:11:49.161617 28149 generic.go:334] "Generic (PLEG): container finished" podID="e4f641ac-5f11-4ad6-b2d1-430b1fef36dd" containerID="ef9db93a43a4bcf2a58d8eaf7c496a10a9961fb641f216e5b2f84182384e6287" exitCode=143 Mar 13 13:11:49.162081 master-0 kubenswrapper[28149]: I0313 13:11:49.161713 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-ee0a2-api-0" event={"ID":"e4f641ac-5f11-4ad6-b2d1-430b1fef36dd","Type":"ContainerDied","Data":"33c31329400496fed349596fa5f92f2238ea2da5908785e42511946da5e90bb2"} Mar 13 13:11:49.162081 master-0 kubenswrapper[28149]: I0313 13:11:49.161787 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-ee0a2-api-0" event={"ID":"e4f641ac-5f11-4ad6-b2d1-430b1fef36dd","Type":"ContainerDied","Data":"ef9db93a43a4bcf2a58d8eaf7c496a10a9961fb641f216e5b2f84182384e6287"} Mar 13 13:11:49.162081 master-0 kubenswrapper[28149]: I0313 13:11:49.161805 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-ee0a2-api-0" event={"ID":"e4f641ac-5f11-4ad6-b2d1-430b1fef36dd","Type":"ContainerDied","Data":"1917d95354f3d581e3f468fb891eba59f828caa62e7b1e3c64e15f16aac33b17"} Mar 13 13:11:49.162081 master-0 kubenswrapper[28149]: I0313 13:11:49.161818 28149 pod_container_deletor.go:80] "Container not found in pod's containers" 
containerID="1917d95354f3d581e3f468fb891eba59f828caa62e7b1e3c64e15f16aac33b17" Mar 13 13:11:49.164064 master-0 kubenswrapper[28149]: I0313 13:11:49.164020 28149 generic.go:334] "Generic (PLEG): container finished" podID="0b7e43c1-e19e-4691-a5b4-2a2197764944" containerID="543a1839edafb69db2d7d7f2f4c74576b687c77c31b8c1e238dd942ea7d7c4ba" exitCode=0 Mar 13 13:11:49.165514 master-0 kubenswrapper[28149]: I0313 13:11:49.165477 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-db-sync-h8h9t" event={"ID":"0b7e43c1-e19e-4691-a5b4-2a2197764944","Type":"ContainerDied","Data":"543a1839edafb69db2d7d7f2f4c74576b687c77c31b8c1e238dd942ea7d7c4ba"} Mar 13 13:11:49.262428 master-0 kubenswrapper[28149]: I0313 13:11:49.261880 28149 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-ee0a2-api-0" Mar 13 13:11:49.404109 master-0 kubenswrapper[28149]: I0313 13:11:49.403866 28149 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e4f641ac-5f11-4ad6-b2d1-430b1fef36dd-scripts\") pod \"e4f641ac-5f11-4ad6-b2d1-430b1fef36dd\" (UID: \"e4f641ac-5f11-4ad6-b2d1-430b1fef36dd\") " Mar 13 13:11:49.404109 master-0 kubenswrapper[28149]: I0313 13:11:49.403979 28149 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e4f641ac-5f11-4ad6-b2d1-430b1fef36dd-combined-ca-bundle\") pod \"e4f641ac-5f11-4ad6-b2d1-430b1fef36dd\" (UID: \"e4f641ac-5f11-4ad6-b2d1-430b1fef36dd\") " Mar 13 13:11:49.404109 master-0 kubenswrapper[28149]: I0313 13:11:49.404070 28149 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/e4f641ac-5f11-4ad6-b2d1-430b1fef36dd-etc-machine-id\") pod \"e4f641ac-5f11-4ad6-b2d1-430b1fef36dd\" (UID: \"e4f641ac-5f11-4ad6-b2d1-430b1fef36dd\") " Mar 13 13:11:49.404498 master-0 
kubenswrapper[28149]: I0313 13:11:49.404159 28149 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e4f641ac-5f11-4ad6-b2d1-430b1fef36dd-config-data-custom\") pod \"e4f641ac-5f11-4ad6-b2d1-430b1fef36dd\" (UID: \"e4f641ac-5f11-4ad6-b2d1-430b1fef36dd\") "
Mar 13 13:11:49.404498 master-0 kubenswrapper[28149]: I0313 13:11:49.404340 28149 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e4f641ac-5f11-4ad6-b2d1-430b1fef36dd-config-data\") pod \"e4f641ac-5f11-4ad6-b2d1-430b1fef36dd\" (UID: \"e4f641ac-5f11-4ad6-b2d1-430b1fef36dd\") "
Mar 13 13:11:49.404498 master-0 kubenswrapper[28149]: I0313 13:11:49.404389 28149 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xn4jh\" (UniqueName: \"kubernetes.io/projected/e4f641ac-5f11-4ad6-b2d1-430b1fef36dd-kube-api-access-xn4jh\") pod \"e4f641ac-5f11-4ad6-b2d1-430b1fef36dd\" (UID: \"e4f641ac-5f11-4ad6-b2d1-430b1fef36dd\") "
Mar 13 13:11:49.404498 master-0 kubenswrapper[28149]: I0313 13:11:49.404444 28149 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e4f641ac-5f11-4ad6-b2d1-430b1fef36dd-logs\") pod \"e4f641ac-5f11-4ad6-b2d1-430b1fef36dd\" (UID: \"e4f641ac-5f11-4ad6-b2d1-430b1fef36dd\") "
Mar 13 13:11:49.405299 master-0 kubenswrapper[28149]: I0313 13:11:49.405062 28149 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e4f641ac-5f11-4ad6-b2d1-430b1fef36dd-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "e4f641ac-5f11-4ad6-b2d1-430b1fef36dd" (UID: "e4f641ac-5f11-4ad6-b2d1-430b1fef36dd"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 13 13:11:49.406172 master-0 kubenswrapper[28149]: I0313 13:11:49.405873 28149 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e4f641ac-5f11-4ad6-b2d1-430b1fef36dd-logs" (OuterVolumeSpecName: "logs") pod "e4f641ac-5f11-4ad6-b2d1-430b1fef36dd" (UID: "e4f641ac-5f11-4ad6-b2d1-430b1fef36dd"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 13 13:11:49.406261 master-0 kubenswrapper[28149]: I0313 13:11:49.406188 28149 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e4f641ac-5f11-4ad6-b2d1-430b1fef36dd-logs\") on node \"master-0\" DevicePath \"\""
Mar 13 13:11:49.406261 master-0 kubenswrapper[28149]: I0313 13:11:49.406204 28149 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/e4f641ac-5f11-4ad6-b2d1-430b1fef36dd-etc-machine-id\") on node \"master-0\" DevicePath \"\""
Mar 13 13:11:49.411212 master-0 kubenswrapper[28149]: I0313 13:11:49.411042 28149 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e4f641ac-5f11-4ad6-b2d1-430b1fef36dd-kube-api-access-xn4jh" (OuterVolumeSpecName: "kube-api-access-xn4jh") pod "e4f641ac-5f11-4ad6-b2d1-430b1fef36dd" (UID: "e4f641ac-5f11-4ad6-b2d1-430b1fef36dd"). InnerVolumeSpecName "kube-api-access-xn4jh". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 13 13:11:49.411963 master-0 kubenswrapper[28149]: I0313 13:11:49.411831 28149 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e4f641ac-5f11-4ad6-b2d1-430b1fef36dd-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "e4f641ac-5f11-4ad6-b2d1-430b1fef36dd" (UID: "e4f641ac-5f11-4ad6-b2d1-430b1fef36dd"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 13 13:11:49.426877 master-0 kubenswrapper[28149]: I0313 13:11:49.426817 28149 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e4f641ac-5f11-4ad6-b2d1-430b1fef36dd-scripts" (OuterVolumeSpecName: "scripts") pod "e4f641ac-5f11-4ad6-b2d1-430b1fef36dd" (UID: "e4f641ac-5f11-4ad6-b2d1-430b1fef36dd"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 13 13:11:49.436309 master-0 kubenswrapper[28149]: I0313 13:11:49.435975 28149 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e4f641ac-5f11-4ad6-b2d1-430b1fef36dd-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e4f641ac-5f11-4ad6-b2d1-430b1fef36dd" (UID: "e4f641ac-5f11-4ad6-b2d1-430b1fef36dd"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 13 13:11:49.468398 master-0 kubenswrapper[28149]: I0313 13:11:49.467772 28149 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e4f641ac-5f11-4ad6-b2d1-430b1fef36dd-config-data" (OuterVolumeSpecName: "config-data") pod "e4f641ac-5f11-4ad6-b2d1-430b1fef36dd" (UID: "e4f641ac-5f11-4ad6-b2d1-430b1fef36dd"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 13 13:11:49.508451 master-0 kubenswrapper[28149]: I0313 13:11:49.508089 28149 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e4f641ac-5f11-4ad6-b2d1-430b1fef36dd-config-data\") on node \"master-0\" DevicePath \"\""
Mar 13 13:11:49.508451 master-0 kubenswrapper[28149]: I0313 13:11:49.508154 28149 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xn4jh\" (UniqueName: \"kubernetes.io/projected/e4f641ac-5f11-4ad6-b2d1-430b1fef36dd-kube-api-access-xn4jh\") on node \"master-0\" DevicePath \"\""
Mar 13 13:11:49.508451 master-0 kubenswrapper[28149]: I0313 13:11:49.508170 28149 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e4f641ac-5f11-4ad6-b2d1-430b1fef36dd-scripts\") on node \"master-0\" DevicePath \"\""
Mar 13 13:11:49.508451 master-0 kubenswrapper[28149]: I0313 13:11:49.508182 28149 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e4f641ac-5f11-4ad6-b2d1-430b1fef36dd-combined-ca-bundle\") on node \"master-0\" DevicePath \"\""
Mar 13 13:11:49.508451 master-0 kubenswrapper[28149]: I0313 13:11:49.508193 28149 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e4f641ac-5f11-4ad6-b2d1-430b1fef36dd-config-data-custom\") on node \"master-0\" DevicePath \"\""
Mar 13 13:11:50.178032 master-0 kubenswrapper[28149]: I0313 13:11:50.177953 28149 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-ee0a2-api-0"
Mar 13 13:11:50.246284 master-0 kubenswrapper[28149]: I0313 13:11:50.245626 28149 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-ee0a2-api-0"]
Mar 13 13:11:50.266325 master-0 kubenswrapper[28149]: I0313 13:11:50.266225 28149 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-ee0a2-api-0"]
Mar 13 13:11:50.322513 master-0 kubenswrapper[28149]: I0313 13:11:50.322451 28149 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-ee0a2-api-0"]
Mar 13 13:11:50.323091 master-0 kubenswrapper[28149]: E0313 13:11:50.323061 28149 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e4f641ac-5f11-4ad6-b2d1-430b1fef36dd" containerName="cinder-ee0a2-api-log"
Mar 13 13:11:50.323091 master-0 kubenswrapper[28149]: I0313 13:11:50.323087 28149 state_mem.go:107] "Deleted CPUSet assignment" podUID="e4f641ac-5f11-4ad6-b2d1-430b1fef36dd" containerName="cinder-ee0a2-api-log"
Mar 13 13:11:50.323218 master-0 kubenswrapper[28149]: E0313 13:11:50.323105 28149 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e4f641ac-5f11-4ad6-b2d1-430b1fef36dd" containerName="cinder-api"
Mar 13 13:11:50.323218 master-0 kubenswrapper[28149]: I0313 13:11:50.323114 28149 state_mem.go:107] "Deleted CPUSet assignment" podUID="e4f641ac-5f11-4ad6-b2d1-430b1fef36dd" containerName="cinder-api"
Mar 13 13:11:50.323218 master-0 kubenswrapper[28149]: E0313 13:11:50.323168 28149 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4d26441-5029-4cf0-9ef3-cba4ed2390e2" containerName="init"
Mar 13 13:11:50.323218 master-0 kubenswrapper[28149]: I0313 13:11:50.323178 28149 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4d26441-5029-4cf0-9ef3-cba4ed2390e2" containerName="init"
Mar 13 13:11:50.323218 master-0 kubenswrapper[28149]: E0313 13:11:50.323208 28149 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4d26441-5029-4cf0-9ef3-cba4ed2390e2" containerName="dnsmasq-dns"
Mar 13 13:11:50.323218 master-0 kubenswrapper[28149]: I0313 13:11:50.323216 28149 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4d26441-5029-4cf0-9ef3-cba4ed2390e2" containerName="dnsmasq-dns"
Mar 13 13:11:50.323500 master-0 kubenswrapper[28149]: I0313 13:11:50.323472 28149 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4d26441-5029-4cf0-9ef3-cba4ed2390e2" containerName="dnsmasq-dns"
Mar 13 13:11:50.323535 master-0 kubenswrapper[28149]: I0313 13:11:50.323521 28149 memory_manager.go:354] "RemoveStaleState removing state" podUID="e4f641ac-5f11-4ad6-b2d1-430b1fef36dd" containerName="cinder-ee0a2-api-log"
Mar 13 13:11:50.323565 master-0 kubenswrapper[28149]: I0313 13:11:50.323558 28149 memory_manager.go:354] "RemoveStaleState removing state" podUID="e4f641ac-5f11-4ad6-b2d1-430b1fef36dd" containerName="cinder-api"
Mar 13 13:11:50.327167 master-0 kubenswrapper[28149]: I0313 13:11:50.326099 28149 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-ee0a2-api-0"
Mar 13 13:11:50.335418 master-0 kubenswrapper[28149]: I0313 13:11:50.334812 28149 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-ee0a2-api-config-data"
Mar 13 13:11:50.335418 master-0 kubenswrapper[28149]: I0313 13:11:50.335047 28149 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-internal-svc"
Mar 13 13:11:50.335418 master-0 kubenswrapper[28149]: I0313 13:11:50.335215 28149 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-public-svc"
Mar 13 13:11:50.342374 master-0 kubenswrapper[28149]: I0313 13:11:50.339873 28149 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-ee0a2-api-0"]
Mar 13 13:11:50.443076 master-0 kubenswrapper[28149]: I0313 13:11:50.442856 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/7a950989-4934-4057-8476-2476cd99542e-public-tls-certs\") pod \"cinder-ee0a2-api-0\" (UID: \"7a950989-4934-4057-8476-2476cd99542e\") " pod="openstack/cinder-ee0a2-api-0"
Mar 13 13:11:50.443076 master-0 kubenswrapper[28149]: I0313 13:11:50.442909 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7a950989-4934-4057-8476-2476cd99542e-config-data\") pod \"cinder-ee0a2-api-0\" (UID: \"7a950989-4934-4057-8476-2476cd99542e\") " pod="openstack/cinder-ee0a2-api-0"
Mar 13 13:11:50.443076 master-0 kubenswrapper[28149]: I0313 13:11:50.442969 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/7a950989-4934-4057-8476-2476cd99542e-internal-tls-certs\") pod \"cinder-ee0a2-api-0\" (UID: \"7a950989-4934-4057-8476-2476cd99542e\") " pod="openstack/cinder-ee0a2-api-0"
Mar 13 13:11:50.443076 master-0 kubenswrapper[28149]: I0313 13:11:50.443002 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zbvff\" (UniqueName: \"kubernetes.io/projected/7a950989-4934-4057-8476-2476cd99542e-kube-api-access-zbvff\") pod \"cinder-ee0a2-api-0\" (UID: \"7a950989-4934-4057-8476-2476cd99542e\") " pod="openstack/cinder-ee0a2-api-0"
Mar 13 13:11:50.443076 master-0 kubenswrapper[28149]: I0313 13:11:50.443045 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7a950989-4934-4057-8476-2476cd99542e-logs\") pod \"cinder-ee0a2-api-0\" (UID: \"7a950989-4934-4057-8476-2476cd99542e\") " pod="openstack/cinder-ee0a2-api-0"
Mar 13 13:11:50.443076 master-0 kubenswrapper[28149]: I0313 13:11:50.443083 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/7a950989-4934-4057-8476-2476cd99542e-etc-machine-id\") pod \"cinder-ee0a2-api-0\" (UID: \"7a950989-4934-4057-8476-2476cd99542e\") " pod="openstack/cinder-ee0a2-api-0"
Mar 13 13:11:50.443568 master-0 kubenswrapper[28149]: I0313 13:11:50.443129 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7a950989-4934-4057-8476-2476cd99542e-scripts\") pod \"cinder-ee0a2-api-0\" (UID: \"7a950989-4934-4057-8476-2476cd99542e\") " pod="openstack/cinder-ee0a2-api-0"
Mar 13 13:11:50.443568 master-0 kubenswrapper[28149]: I0313 13:11:50.443223 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/7a950989-4934-4057-8476-2476cd99542e-config-data-custom\") pod \"cinder-ee0a2-api-0\" (UID: \"7a950989-4934-4057-8476-2476cd99542e\") " pod="openstack/cinder-ee0a2-api-0"
Mar 13 13:11:50.443568 master-0 kubenswrapper[28149]: I0313 13:11:50.443328 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7a950989-4934-4057-8476-2476cd99542e-combined-ca-bundle\") pod \"cinder-ee0a2-api-0\" (UID: \"7a950989-4934-4057-8476-2476cd99542e\") " pod="openstack/cinder-ee0a2-api-0"
Mar 13 13:11:50.567021 master-0 kubenswrapper[28149]: I0313 13:11:50.565052 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7a950989-4934-4057-8476-2476cd99542e-scripts\") pod \"cinder-ee0a2-api-0\" (UID: \"7a950989-4934-4057-8476-2476cd99542e\") " pod="openstack/cinder-ee0a2-api-0"
Mar 13 13:11:50.567021 master-0 kubenswrapper[28149]: I0313 13:11:50.565206 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/7a950989-4934-4057-8476-2476cd99542e-config-data-custom\") pod \"cinder-ee0a2-api-0\" (UID: \"7a950989-4934-4057-8476-2476cd99542e\") " pod="openstack/cinder-ee0a2-api-0"
Mar 13 13:11:50.567021 master-0 kubenswrapper[28149]: I0313 13:11:50.565315 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7a950989-4934-4057-8476-2476cd99542e-combined-ca-bundle\") pod \"cinder-ee0a2-api-0\" (UID: \"7a950989-4934-4057-8476-2476cd99542e\") " pod="openstack/cinder-ee0a2-api-0"
Mar 13 13:11:50.567021 master-0 kubenswrapper[28149]: I0313 13:11:50.565379 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/7a950989-4934-4057-8476-2476cd99542e-public-tls-certs\") pod \"cinder-ee0a2-api-0\" (UID: \"7a950989-4934-4057-8476-2476cd99542e\") " pod="openstack/cinder-ee0a2-api-0"
Mar 13 13:11:50.567021 master-0 kubenswrapper[28149]: I0313 13:11:50.565401 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7a950989-4934-4057-8476-2476cd99542e-config-data\") pod \"cinder-ee0a2-api-0\" (UID: \"7a950989-4934-4057-8476-2476cd99542e\") " pod="openstack/cinder-ee0a2-api-0"
Mar 13 13:11:50.567021 master-0 kubenswrapper[28149]: I0313 13:11:50.565432 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/7a950989-4934-4057-8476-2476cd99542e-internal-tls-certs\") pod \"cinder-ee0a2-api-0\" (UID: \"7a950989-4934-4057-8476-2476cd99542e\") " pod="openstack/cinder-ee0a2-api-0"
Mar 13 13:11:50.567021 master-0 kubenswrapper[28149]: I0313 13:11:50.565462 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zbvff\" (UniqueName: \"kubernetes.io/projected/7a950989-4934-4057-8476-2476cd99542e-kube-api-access-zbvff\") pod \"cinder-ee0a2-api-0\" (UID: \"7a950989-4934-4057-8476-2476cd99542e\") " pod="openstack/cinder-ee0a2-api-0"
Mar 13 13:11:50.567021 master-0 kubenswrapper[28149]: I0313 13:11:50.565496 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7a950989-4934-4057-8476-2476cd99542e-logs\") pod \"cinder-ee0a2-api-0\" (UID: \"7a950989-4934-4057-8476-2476cd99542e\") " pod="openstack/cinder-ee0a2-api-0"
Mar 13 13:11:50.567021 master-0 kubenswrapper[28149]: I0313 13:11:50.565518 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/7a950989-4934-4057-8476-2476cd99542e-etc-machine-id\") pod \"cinder-ee0a2-api-0\" (UID: \"7a950989-4934-4057-8476-2476cd99542e\") " pod="openstack/cinder-ee0a2-api-0"
Mar 13 13:11:50.567021 master-0 kubenswrapper[28149]: I0313 13:11:50.565643 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/7a950989-4934-4057-8476-2476cd99542e-etc-machine-id\") pod \"cinder-ee0a2-api-0\" (UID: \"7a950989-4934-4057-8476-2476cd99542e\") " pod="openstack/cinder-ee0a2-api-0"
Mar 13 13:11:50.570206 master-0 kubenswrapper[28149]: I0313 13:11:50.570118 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7a950989-4934-4057-8476-2476cd99542e-logs\") pod \"cinder-ee0a2-api-0\" (UID: \"7a950989-4934-4057-8476-2476cd99542e\") " pod="openstack/cinder-ee0a2-api-0"
Mar 13 13:11:50.576368 master-0 kubenswrapper[28149]: I0313 13:11:50.573787 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7a950989-4934-4057-8476-2476cd99542e-config-data\") pod \"cinder-ee0a2-api-0\" (UID: \"7a950989-4934-4057-8476-2476cd99542e\") " pod="openstack/cinder-ee0a2-api-0"
Mar 13 13:11:50.576368 master-0 kubenswrapper[28149]: I0313 13:11:50.574427 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7a950989-4934-4057-8476-2476cd99542e-combined-ca-bundle\") pod \"cinder-ee0a2-api-0\" (UID: \"7a950989-4934-4057-8476-2476cd99542e\") " pod="openstack/cinder-ee0a2-api-0"
Mar 13 13:11:50.576368 master-0 kubenswrapper[28149]: I0313 13:11:50.574701 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/7a950989-4934-4057-8476-2476cd99542e-public-tls-certs\") pod \"cinder-ee0a2-api-0\" (UID: \"7a950989-4934-4057-8476-2476cd99542e\") " pod="openstack/cinder-ee0a2-api-0"
Mar 13 13:11:50.576368 master-0 kubenswrapper[28149]: I0313 13:11:50.575705 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/7a950989-4934-4057-8476-2476cd99542e-config-data-custom\") pod \"cinder-ee0a2-api-0\" (UID: \"7a950989-4934-4057-8476-2476cd99542e\") " pod="openstack/cinder-ee0a2-api-0"
Mar 13 13:11:50.577891 master-0 kubenswrapper[28149]: I0313 13:11:50.577855 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7a950989-4934-4057-8476-2476cd99542e-scripts\") pod \"cinder-ee0a2-api-0\" (UID: \"7a950989-4934-4057-8476-2476cd99542e\") " pod="openstack/cinder-ee0a2-api-0"
Mar 13 13:11:50.578637 master-0 kubenswrapper[28149]: I0313 13:11:50.578605 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/7a950989-4934-4057-8476-2476cd99542e-internal-tls-certs\") pod \"cinder-ee0a2-api-0\" (UID: \"7a950989-4934-4057-8476-2476cd99542e\") " pod="openstack/cinder-ee0a2-api-0"
Mar 13 13:11:50.594300 master-0 kubenswrapper[28149]: I0313 13:11:50.594241 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zbvff\" (UniqueName: \"kubernetes.io/projected/7a950989-4934-4057-8476-2476cd99542e-kube-api-access-zbvff\") pod \"cinder-ee0a2-api-0\" (UID: \"7a950989-4934-4057-8476-2476cd99542e\") " pod="openstack/cinder-ee0a2-api-0"
Mar 13 13:11:50.721477 master-0 kubenswrapper[28149]: I0313 13:11:50.719747 28149 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-ee0a2-api-0"
Mar 13 13:11:50.721477 master-0 kubenswrapper[28149]: I0313 13:11:50.720313 28149 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e4f641ac-5f11-4ad6-b2d1-430b1fef36dd" path="/var/lib/kubelet/pods/e4f641ac-5f11-4ad6-b2d1-430b1fef36dd/volumes"
Mar 13 13:11:50.735243 master-0 kubenswrapper[28149]: I0313 13:11:50.735197 28149 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ironic-db-sync-h8h9t"
Mar 13 13:11:50.915779 master-0 kubenswrapper[28149]: I0313 13:11:50.915726 28149 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0b7e43c1-e19e-4691-a5b4-2a2197764944-combined-ca-bundle\") pod \"0b7e43c1-e19e-4691-a5b4-2a2197764944\" (UID: \"0b7e43c1-e19e-4691-a5b4-2a2197764944\") "
Mar 13 13:11:50.916248 master-0 kubenswrapper[28149]: I0313 13:11:50.916218 28149 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0b7e43c1-e19e-4691-a5b4-2a2197764944-scripts\") pod \"0b7e43c1-e19e-4691-a5b4-2a2197764944\" (UID: \"0b7e43c1-e19e-4691-a5b4-2a2197764944\") "
Mar 13 13:11:50.916432 master-0 kubenswrapper[28149]: I0313 13:11:50.916412 28149 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-podinfo\" (UniqueName: \"kubernetes.io/downward-api/0b7e43c1-e19e-4691-a5b4-2a2197764944-etc-podinfo\") pod \"0b7e43c1-e19e-4691-a5b4-2a2197764944\" (UID: \"0b7e43c1-e19e-4691-a5b4-2a2197764944\") "
Mar 13 13:11:50.916583 master-0 kubenswrapper[28149]: I0313 13:11:50.916565 28149 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/0b7e43c1-e19e-4691-a5b4-2a2197764944-config-data-merged\") pod \"0b7e43c1-e19e-4691-a5b4-2a2197764944\" (UID: \"0b7e43c1-e19e-4691-a5b4-2a2197764944\") "
Mar 13 13:11:50.916794 master-0 kubenswrapper[28149]: I0313 13:11:50.916775 28149 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0b7e43c1-e19e-4691-a5b4-2a2197764944-config-data\") pod \"0b7e43c1-e19e-4691-a5b4-2a2197764944\" (UID: \"0b7e43c1-e19e-4691-a5b4-2a2197764944\") "
Mar 13 13:11:50.917022 master-0 kubenswrapper[28149]: I0313 13:11:50.917002 28149 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wnlf9\" (UniqueName: \"kubernetes.io/projected/0b7e43c1-e19e-4691-a5b4-2a2197764944-kube-api-access-wnlf9\") pod \"0b7e43c1-e19e-4691-a5b4-2a2197764944\" (UID: \"0b7e43c1-e19e-4691-a5b4-2a2197764944\") "
Mar 13 13:11:50.921716 master-0 kubenswrapper[28149]: I0313 13:11:50.921347 28149 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/0b7e43c1-e19e-4691-a5b4-2a2197764944-etc-podinfo" (OuterVolumeSpecName: "etc-podinfo") pod "0b7e43c1-e19e-4691-a5b4-2a2197764944" (UID: "0b7e43c1-e19e-4691-a5b4-2a2197764944"). InnerVolumeSpecName "etc-podinfo". PluginName "kubernetes.io/downward-api", VolumeGidValue ""
Mar 13 13:11:50.923923 master-0 kubenswrapper[28149]: I0313 13:11:50.923777 28149 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b7e43c1-e19e-4691-a5b4-2a2197764944-kube-api-access-wnlf9" (OuterVolumeSpecName: "kube-api-access-wnlf9") pod "0b7e43c1-e19e-4691-a5b4-2a2197764944" (UID: "0b7e43c1-e19e-4691-a5b4-2a2197764944"). InnerVolumeSpecName "kube-api-access-wnlf9". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 13 13:11:50.925712 master-0 kubenswrapper[28149]: I0313 13:11:50.925652 28149 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0b7e43c1-e19e-4691-a5b4-2a2197764944-config-data-merged" (OuterVolumeSpecName: "config-data-merged") pod "0b7e43c1-e19e-4691-a5b4-2a2197764944" (UID: "0b7e43c1-e19e-4691-a5b4-2a2197764944"). InnerVolumeSpecName "config-data-merged". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 13 13:11:50.929309 master-0 kubenswrapper[28149]: I0313 13:11:50.929260 28149 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b7e43c1-e19e-4691-a5b4-2a2197764944-scripts" (OuterVolumeSpecName: "scripts") pod "0b7e43c1-e19e-4691-a5b4-2a2197764944" (UID: "0b7e43c1-e19e-4691-a5b4-2a2197764944"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 13 13:11:50.973594 master-0 kubenswrapper[28149]: I0313 13:11:50.973411 28149 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b7e43c1-e19e-4691-a5b4-2a2197764944-config-data" (OuterVolumeSpecName: "config-data") pod "0b7e43c1-e19e-4691-a5b4-2a2197764944" (UID: "0b7e43c1-e19e-4691-a5b4-2a2197764944"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 13 13:11:51.006333 master-0 kubenswrapper[28149]: I0313 13:11:51.006249 28149 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b7e43c1-e19e-4691-a5b4-2a2197764944-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "0b7e43c1-e19e-4691-a5b4-2a2197764944" (UID: "0b7e43c1-e19e-4691-a5b4-2a2197764944"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 13 13:11:51.021431 master-0 kubenswrapper[28149]: I0313 13:11:51.021045 28149 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wnlf9\" (UniqueName: \"kubernetes.io/projected/0b7e43c1-e19e-4691-a5b4-2a2197764944-kube-api-access-wnlf9\") on node \"master-0\" DevicePath \"\""
Mar 13 13:11:51.021431 master-0 kubenswrapper[28149]: I0313 13:11:51.021093 28149 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0b7e43c1-e19e-4691-a5b4-2a2197764944-combined-ca-bundle\") on node \"master-0\" DevicePath \"\""
Mar 13 13:11:51.021431 master-0 kubenswrapper[28149]: I0313 13:11:51.021106 28149 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0b7e43c1-e19e-4691-a5b4-2a2197764944-scripts\") on node \"master-0\" DevicePath \"\""
Mar 13 13:11:51.021431 master-0 kubenswrapper[28149]: I0313 13:11:51.021121 28149 reconciler_common.go:293] "Volume detached for volume \"etc-podinfo\" (UniqueName: \"kubernetes.io/downward-api/0b7e43c1-e19e-4691-a5b4-2a2197764944-etc-podinfo\") on node \"master-0\" DevicePath \"\""
Mar 13 13:11:51.021431 master-0 kubenswrapper[28149]: I0313 13:11:51.021131 28149 reconciler_common.go:293] "Volume detached for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/0b7e43c1-e19e-4691-a5b4-2a2197764944-config-data-merged\") on node \"master-0\" DevicePath \"\""
Mar 13 13:11:51.021431 master-0 kubenswrapper[28149]: I0313 13:11:51.021157 28149 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0b7e43c1-e19e-4691-a5b4-2a2197764944-config-data\") on node \"master-0\" DevicePath \"\""
Mar 13 13:11:51.223548 master-0 kubenswrapper[28149]: I0313 13:11:51.222636 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-db-sync-h8h9t" event={"ID":"0b7e43c1-e19e-4691-a5b4-2a2197764944","Type":"ContainerDied","Data":"1dc772d7e244f33638fc3fe5355cf46fcf91c6152e45c7da565bf2a54c7c335e"}
Mar 13 13:11:51.223548 master-0 kubenswrapper[28149]: I0313 13:11:51.222689 28149 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1dc772d7e244f33638fc3fe5355cf46fcf91c6152e45c7da565bf2a54c7c335e"
Mar 13 13:11:51.223548 master-0 kubenswrapper[28149]: I0313 13:11:51.222787 28149 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ironic-db-sync-h8h9t"
Mar 13 13:11:51.302291 master-0 kubenswrapper[28149]: I0313 13:11:51.302246 28149 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-ee0a2-api-0"]
Mar 13 13:11:51.622025 master-0 kubenswrapper[28149]: I0313 13:11:51.620467 28149 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-ee0a2-scheduler-0"
Mar 13 13:11:51.701161 master-0 kubenswrapper[28149]: I0313 13:11:51.700797 28149 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-f9957b47c-swh76"
Mar 13 13:11:51.882669 master-0 kubenswrapper[28149]: I0313 13:11:51.882538 28149 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-dc5fdb9b9-7mhs2"]
Mar 13 13:11:51.882889 master-0 kubenswrapper[28149]: I0313 13:11:51.882787 28149 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-dc5fdb9b9-7mhs2" podUID="3e202aeb-6913-4506-ba76-63feb8748d60" containerName="dnsmasq-dns" containerID="cri-o://18a8745e58a1e2ccfb89e849e5546947cae1ce0bdf6f20e7de69b1de1c8e5a3d" gracePeriod=10
Mar 13 13:11:52.295323 master-0 kubenswrapper[28149]: I0313 13:11:52.294612 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-ee0a2-api-0" event={"ID":"7a950989-4934-4057-8476-2476cd99542e","Type":"ContainerStarted","Data":"896ce5e5e6d072e6f5712f9b378dc864cb7e0bbd464dfbedce6b72fd980d9da8"}
Mar 13 13:11:52.750160 master-0 kubenswrapper[28149]: I0313 13:11:52.748577 28149 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-ee0a2-scheduler-0"
Mar 13 13:11:53.378684 master-0 kubenswrapper[28149]: I0313 13:11:53.376940 28149 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-ee0a2-scheduler-0"]
Mar 13 13:11:53.389105 master-0 kubenswrapper[28149]: I0313 13:11:53.389042 28149 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ironic-inspector-db-create-vdc8v"]
Mar 13 13:11:53.389825 master-0 kubenswrapper[28149]: E0313 13:11:53.389796 28149 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0b7e43c1-e19e-4691-a5b4-2a2197764944" containerName="ironic-db-sync"
Mar 13 13:11:53.389825 master-0 kubenswrapper[28149]: I0313 13:11:53.389818 28149 state_mem.go:107] "Deleted CPUSet assignment" podUID="0b7e43c1-e19e-4691-a5b4-2a2197764944" containerName="ironic-db-sync"
Mar 13 13:11:53.389914 master-0 kubenswrapper[28149]: E0313 13:11:53.389838 28149 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0b7e43c1-e19e-4691-a5b4-2a2197764944" containerName="init"
Mar 13 13:11:53.389914 master-0 kubenswrapper[28149]: I0313 13:11:53.389845 28149 state_mem.go:107] "Deleted CPUSet assignment" podUID="0b7e43c1-e19e-4691-a5b4-2a2197764944" containerName="init"
Mar 13 13:11:53.390179 master-0 kubenswrapper[28149]: I0313 13:11:53.390150 28149 memory_manager.go:354] "RemoveStaleState removing state" podUID="0b7e43c1-e19e-4691-a5b4-2a2197764944" containerName="ironic-db-sync"
Mar 13 13:11:53.391200 master-0 kubenswrapper[28149]: I0313 13:11:53.391182 28149 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ironic-inspector-db-create-vdc8v"
Mar 13 13:11:53.404921 master-0 kubenswrapper[28149]: I0313 13:11:53.404857 28149 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ironic-inspector-db-create-vdc8v"]
Mar 13 13:11:53.471356 master-0 kubenswrapper[28149]: I0313 13:11:53.466860 28149 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-ee0a2-volume-lvm-iscsi-0"
Mar 13 13:11:53.501998 master-0 kubenswrapper[28149]: I0313 13:11:53.501944 28149 generic.go:334] "Generic (PLEG): container finished" podID="3e202aeb-6913-4506-ba76-63feb8748d60" containerID="18a8745e58a1e2ccfb89e849e5546947cae1ce0bdf6f20e7de69b1de1c8e5a3d" exitCode=0
Mar 13 13:11:53.502496 master-0 kubenswrapper[28149]: I0313 13:11:53.502110 28149 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ironic-neutron-agent-8454cbf95d-4wvx9"]
Mar 13 13:11:53.502496 master-0 kubenswrapper[28149]: I0313 13:11:53.502457 28149 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-ee0a2-scheduler-0" podUID="8c7dd334-6af4-4528-9a21-d51e946a555b" containerName="cinder-scheduler" containerID="cri-o://3cb0d27e3219bc2532b415baa34498fa0a4a28387194a658dc55600c976399d5" gracePeriod=30
Mar 13 13:11:53.507057 master-0 kubenswrapper[28149]: I0313 13:11:53.506954 28149 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-ee0a2-scheduler-0" podUID="8c7dd334-6af4-4528-9a21-d51e946a555b" containerName="probe" containerID="cri-o://6c3428e0b54511966213f010f1f12cc38d9cc7f5452604763576c7d35c1098f8" gracePeriod=30
Mar 13 13:11:53.515058 master-0 kubenswrapper[28149]: I0313 13:11:53.512987 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-dc5fdb9b9-7mhs2" event={"ID":"3e202aeb-6913-4506-ba76-63feb8748d60","Type":"ContainerDied","Data":"18a8745e58a1e2ccfb89e849e5546947cae1ce0bdf6f20e7de69b1de1c8e5a3d"}
Mar 13 13:11:53.515058 master-0 kubenswrapper[28149]: I0313 13:11:53.513128 28149 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ironic-neutron-agent-8454cbf95d-4wvx9"
Mar 13 13:11:53.526119 master-0 kubenswrapper[28149]: I0313 13:11:53.526081 28149 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ironic-ironic-neutron-agent-config-data"
Mar 13 13:11:53.586324 master-0 kubenswrapper[28149]: I0313 13:11:53.573572 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/50829c53-48ab-48a0-a68a-655b740ac823-operator-scripts\") pod \"ironic-inspector-db-create-vdc8v\" (UID: \"50829c53-48ab-48a0-a68a-655b740ac823\") " pod="openstack/ironic-inspector-db-create-vdc8v"
Mar 13 13:11:53.586324 master-0 kubenswrapper[28149]: I0313 13:11:53.573709 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lhvzh\" (UniqueName: \"kubernetes.io/projected/50829c53-48ab-48a0-a68a-655b740ac823-kube-api-access-lhvzh\") pod \"ironic-inspector-db-create-vdc8v\" (UID: \"50829c53-48ab-48a0-a68a-655b740ac823\") " pod="openstack/ironic-inspector-db-create-vdc8v"
Mar 13 13:11:53.636816 master-0 kubenswrapper[28149]: I0313 13:11:53.634675 28149 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ironic-inspector-0a87-account-create-update-8qh6t"]
Mar 13 13:11:53.649750 master-0 kubenswrapper[28149]: I0313 13:11:53.646325 28149 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ironic-inspector-0a87-account-create-update-8qh6t"
Mar 13 13:11:53.655788 master-0 kubenswrapper[28149]: I0313 13:11:53.655573 28149 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ironic-inspector-db-secret"
Mar 13 13:11:53.864766 master-0 kubenswrapper[28149]: I0313 13:11:53.862164 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/52f4f9dd-4956-4c8b-9a8d-c832a8049c3a-config\") pod \"ironic-neutron-agent-8454cbf95d-4wvx9\" (UID: \"52f4f9dd-4956-4c8b-9a8d-c832a8049c3a\") " pod="openstack/ironic-neutron-agent-8454cbf95d-4wvx9"
Mar 13 13:11:53.864766 master-0 kubenswrapper[28149]: I0313 13:11:53.862219 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kxvtw\" (UniqueName: \"kubernetes.io/projected/52f4f9dd-4956-4c8b-9a8d-c832a8049c3a-kube-api-access-kxvtw\") pod \"ironic-neutron-agent-8454cbf95d-4wvx9\" (UID: \"52f4f9dd-4956-4c8b-9a8d-c832a8049c3a\") " pod="openstack/ironic-neutron-agent-8454cbf95d-4wvx9"
Mar 13 13:11:53.864766 master-0 kubenswrapper[28149]: I0313 13:11:53.862571 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lhvzh\" (UniqueName: \"kubernetes.io/projected/50829c53-48ab-48a0-a68a-655b740ac823-kube-api-access-lhvzh\") pod \"ironic-inspector-db-create-vdc8v\" (UID: \"50829c53-48ab-48a0-a68a-655b740ac823\") " pod="openstack/ironic-inspector-db-create-vdc8v"
Mar 13 13:11:53.864766 master-0 kubenswrapper[28149]: I0313 13:11:53.863038 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/52f4f9dd-4956-4c8b-9a8d-c832a8049c3a-combined-ca-bundle\") pod \"ironic-neutron-agent-8454cbf95d-4wvx9\" (UID: \"52f4f9dd-4956-4c8b-9a8d-c832a8049c3a\") " pod="openstack/ironic-neutron-agent-8454cbf95d-4wvx9"
Mar 13 13:11:53.864766 master-0 kubenswrapper[28149]: I0313 13:11:53.863183 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/50829c53-48ab-48a0-a68a-655b740ac823-operator-scripts\") pod \"ironic-inspector-db-create-vdc8v\" (UID: \"50829c53-48ab-48a0-a68a-655b740ac823\") " pod="openstack/ironic-inspector-db-create-vdc8v"
Mar 13 13:11:53.864766 master-0 kubenswrapper[28149]: I0313 13:11:53.863422 28149 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ironic-neutron-agent-8454cbf95d-4wvx9"]
Mar 13 13:11:53.864766 master-0 kubenswrapper[28149]: I0313 13:11:53.864372 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/50829c53-48ab-48a0-a68a-655b740ac823-operator-scripts\") pod \"ironic-inspector-db-create-vdc8v\" (UID: \"50829c53-48ab-48a0-a68a-655b740ac823\") " pod="openstack/ironic-inspector-db-create-vdc8v"
Mar 13 13:11:53.901617 master-0 kubenswrapper[28149]: I0313 13:11:53.901483 28149 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-dc5fdb9b9-7mhs2"
Mar 13 13:11:53.932265 master-0 kubenswrapper[28149]: I0313 13:11:53.930024 28149 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ironic-inspector-0a87-account-create-update-8qh6t"]
Mar 13 13:11:53.965165 master-0 kubenswrapper[28149]: I0313 13:11:53.963752 28149 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-ee0a2-volume-lvm-iscsi-0"]
Mar 13 13:11:53.972032 master-0 kubenswrapper[28149]: I0313 13:11:53.971977 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vphwj\" (UniqueName: \"kubernetes.io/projected/b574739d-3b02-4434-a7cd-c75404b73fd3-kube-api-access-vphwj\") pod \"ironic-inspector-0a87-account-create-update-8qh6t\" (UID: \"b574739d-3b02-4434-a7cd-c75404b73fd3\") " pod="openstack/ironic-inspector-0a87-account-create-update-8qh6t"
Mar 13 13:11:53.973625 master-0 kubenswrapper[28149]: I0313 13:11:53.972367 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/52f4f9dd-4956-4c8b-9a8d-c832a8049c3a-combined-ca-bundle\") pod \"ironic-neutron-agent-8454cbf95d-4wvx9\" (UID: \"52f4f9dd-4956-4c8b-9a8d-c832a8049c3a\") " pod="openstack/ironic-neutron-agent-8454cbf95d-4wvx9"
Mar 13 13:11:53.973625 master-0 kubenswrapper[28149]: I0313 13:11:53.972412 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b574739d-3b02-4434-a7cd-c75404b73fd3-operator-scripts\") pod \"ironic-inspector-0a87-account-create-update-8qh6t\" (UID: \"b574739d-3b02-4434-a7cd-c75404b73fd3\") " pod="openstack/ironic-inspector-0a87-account-create-update-8qh6t"
Mar 13 13:11:53.973625 master-0 kubenswrapper[28149]: I0313 13:11:53.972515 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\"
(UniqueName: \"kubernetes.io/secret/52f4f9dd-4956-4c8b-9a8d-c832a8049c3a-config\") pod \"ironic-neutron-agent-8454cbf95d-4wvx9\" (UID: \"52f4f9dd-4956-4c8b-9a8d-c832a8049c3a\") " pod="openstack/ironic-neutron-agent-8454cbf95d-4wvx9" Mar 13 13:11:53.973625 master-0 kubenswrapper[28149]: I0313 13:11:53.972550 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kxvtw\" (UniqueName: \"kubernetes.io/projected/52f4f9dd-4956-4c8b-9a8d-c832a8049c3a-kube-api-access-kxvtw\") pod \"ironic-neutron-agent-8454cbf95d-4wvx9\" (UID: \"52f4f9dd-4956-4c8b-9a8d-c832a8049c3a\") " pod="openstack/ironic-neutron-agent-8454cbf95d-4wvx9" Mar 13 13:11:54.043287 master-0 kubenswrapper[28149]: I0313 13:11:54.040370 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/52f4f9dd-4956-4c8b-9a8d-c832a8049c3a-config\") pod \"ironic-neutron-agent-8454cbf95d-4wvx9\" (UID: \"52f4f9dd-4956-4c8b-9a8d-c832a8049c3a\") " pod="openstack/ironic-neutron-agent-8454cbf95d-4wvx9" Mar 13 13:11:54.043287 master-0 kubenswrapper[28149]: I0313 13:11:54.041041 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lhvzh\" (UniqueName: \"kubernetes.io/projected/50829c53-48ab-48a0-a68a-655b740ac823-kube-api-access-lhvzh\") pod \"ironic-inspector-db-create-vdc8v\" (UID: \"50829c53-48ab-48a0-a68a-655b740ac823\") " pod="openstack/ironic-inspector-db-create-vdc8v" Mar 13 13:11:54.043287 master-0 kubenswrapper[28149]: I0313 13:11:54.041867 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/52f4f9dd-4956-4c8b-9a8d-c832a8049c3a-combined-ca-bundle\") pod \"ironic-neutron-agent-8454cbf95d-4wvx9\" (UID: \"52f4f9dd-4956-4c8b-9a8d-c832a8049c3a\") " pod="openstack/ironic-neutron-agent-8454cbf95d-4wvx9" Mar 13 13:11:54.054250 master-0 kubenswrapper[28149]: I0313 13:11:54.053449 28149 util.go:30] "No 
sandbox for pod can be found. Need to start a new one" pod="openstack/ironic-inspector-db-create-vdc8v" Mar 13 13:11:54.090288 master-0 kubenswrapper[28149]: I0313 13:11:54.076964 28149 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3e202aeb-6913-4506-ba76-63feb8748d60-ovsdbserver-nb\") pod \"3e202aeb-6913-4506-ba76-63feb8748d60\" (UID: \"3e202aeb-6913-4506-ba76-63feb8748d60\") " Mar 13 13:11:54.090288 master-0 kubenswrapper[28149]: I0313 13:11:54.077166 28149 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3e202aeb-6913-4506-ba76-63feb8748d60-ovsdbserver-sb\") pod \"3e202aeb-6913-4506-ba76-63feb8748d60\" (UID: \"3e202aeb-6913-4506-ba76-63feb8748d60\") " Mar 13 13:11:54.090288 master-0 kubenswrapper[28149]: I0313 13:11:54.077242 28149 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3e202aeb-6913-4506-ba76-63feb8748d60-config\") pod \"3e202aeb-6913-4506-ba76-63feb8748d60\" (UID: \"3e202aeb-6913-4506-ba76-63feb8748d60\") " Mar 13 13:11:54.090288 master-0 kubenswrapper[28149]: I0313 13:11:54.077340 28149 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j644f\" (UniqueName: \"kubernetes.io/projected/3e202aeb-6913-4506-ba76-63feb8748d60-kube-api-access-j644f\") pod \"3e202aeb-6913-4506-ba76-63feb8748d60\" (UID: \"3e202aeb-6913-4506-ba76-63feb8748d60\") " Mar 13 13:11:54.090288 master-0 kubenswrapper[28149]: I0313 13:11:54.077389 28149 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3e202aeb-6913-4506-ba76-63feb8748d60-dns-svc\") pod \"3e202aeb-6913-4506-ba76-63feb8748d60\" (UID: \"3e202aeb-6913-4506-ba76-63feb8748d60\") " Mar 13 13:11:54.090288 master-0 kubenswrapper[28149]: I0313 
13:11:54.077414 28149 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/3e202aeb-6913-4506-ba76-63feb8748d60-dns-swift-storage-0\") pod \"3e202aeb-6913-4506-ba76-63feb8748d60\" (UID: \"3e202aeb-6913-4506-ba76-63feb8748d60\") " Mar 13 13:11:54.090288 master-0 kubenswrapper[28149]: I0313 13:11:54.078473 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vphwj\" (UniqueName: \"kubernetes.io/projected/b574739d-3b02-4434-a7cd-c75404b73fd3-kube-api-access-vphwj\") pod \"ironic-inspector-0a87-account-create-update-8qh6t\" (UID: \"b574739d-3b02-4434-a7cd-c75404b73fd3\") " pod="openstack/ironic-inspector-0a87-account-create-update-8qh6t" Mar 13 13:11:54.090288 master-0 kubenswrapper[28149]: I0313 13:11:54.078627 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b574739d-3b02-4434-a7cd-c75404b73fd3-operator-scripts\") pod \"ironic-inspector-0a87-account-create-update-8qh6t\" (UID: \"b574739d-3b02-4434-a7cd-c75404b73fd3\") " pod="openstack/ironic-inspector-0a87-account-create-update-8qh6t" Mar 13 13:11:54.090288 master-0 kubenswrapper[28149]: I0313 13:11:54.080464 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b574739d-3b02-4434-a7cd-c75404b73fd3-operator-scripts\") pod \"ironic-inspector-0a87-account-create-update-8qh6t\" (UID: \"b574739d-3b02-4434-a7cd-c75404b73fd3\") " pod="openstack/ironic-inspector-0a87-account-create-update-8qh6t" Mar 13 13:11:54.910647 master-0 kubenswrapper[28149]: I0313 13:11:54.902642 28149 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-67b494447c-js6kq"] Mar 13 13:11:54.910647 master-0 kubenswrapper[28149]: E0313 13:11:54.904262 28149 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="3e202aeb-6913-4506-ba76-63feb8748d60" containerName="init" Mar 13 13:11:54.910647 master-0 kubenswrapper[28149]: I0313 13:11:54.904296 28149 state_mem.go:107] "Deleted CPUSet assignment" podUID="3e202aeb-6913-4506-ba76-63feb8748d60" containerName="init" Mar 13 13:11:54.910647 master-0 kubenswrapper[28149]: E0313 13:11:54.904326 28149 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3e202aeb-6913-4506-ba76-63feb8748d60" containerName="dnsmasq-dns" Mar 13 13:11:54.910647 master-0 kubenswrapper[28149]: I0313 13:11:54.904333 28149 state_mem.go:107] "Deleted CPUSet assignment" podUID="3e202aeb-6913-4506-ba76-63feb8748d60" containerName="dnsmasq-dns" Mar 13 13:11:54.910647 master-0 kubenswrapper[28149]: I0313 13:11:54.905267 28149 memory_manager.go:354] "RemoveStaleState removing state" podUID="3e202aeb-6913-4506-ba76-63feb8748d60" containerName="dnsmasq-dns" Mar 13 13:11:54.959867 master-0 kubenswrapper[28149]: I0313 13:11:54.942111 28149 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-67b494447c-js6kq" Mar 13 13:11:55.453172 master-0 kubenswrapper[28149]: I0313 13:11:55.452671 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e3d4527d-7eb8-488f-bcc7-8c6bd2be3351-ovsdbserver-nb\") pod \"dnsmasq-dns-67b494447c-js6kq\" (UID: \"e3d4527d-7eb8-488f-bcc7-8c6bd2be3351\") " pod="openstack/dnsmasq-dns-67b494447c-js6kq" Mar 13 13:11:55.453172 master-0 kubenswrapper[28149]: I0313 13:11:55.452994 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kxvtw\" (UniqueName: \"kubernetes.io/projected/52f4f9dd-4956-4c8b-9a8d-c832a8049c3a-kube-api-access-kxvtw\") pod \"ironic-neutron-agent-8454cbf95d-4wvx9\" (UID: \"52f4f9dd-4956-4c8b-9a8d-c832a8049c3a\") " pod="openstack/ironic-neutron-agent-8454cbf95d-4wvx9" Mar 13 13:11:55.460725 master-0 kubenswrapper[28149]: I0313 13:11:55.454957 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e3d4527d-7eb8-488f-bcc7-8c6bd2be3351-ovsdbserver-sb\") pod \"dnsmasq-dns-67b494447c-js6kq\" (UID: \"e3d4527d-7eb8-488f-bcc7-8c6bd2be3351\") " pod="openstack/dnsmasq-dns-67b494447c-js6kq" Mar 13 13:11:55.513175 master-0 kubenswrapper[28149]: I0313 13:11:55.510379 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e3d4527d-7eb8-488f-bcc7-8c6bd2be3351-dns-svc\") pod \"dnsmasq-dns-67b494447c-js6kq\" (UID: \"e3d4527d-7eb8-488f-bcc7-8c6bd2be3351\") " pod="openstack/dnsmasq-dns-67b494447c-js6kq" Mar 13 13:11:55.513175 master-0 kubenswrapper[28149]: I0313 13:11:55.510493 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vtcqf\" (UniqueName: 
\"kubernetes.io/projected/e3d4527d-7eb8-488f-bcc7-8c6bd2be3351-kube-api-access-vtcqf\") pod \"dnsmasq-dns-67b494447c-js6kq\" (UID: \"e3d4527d-7eb8-488f-bcc7-8c6bd2be3351\") " pod="openstack/dnsmasq-dns-67b494447c-js6kq" Mar 13 13:11:55.513175 master-0 kubenswrapper[28149]: I0313 13:11:55.510747 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e3d4527d-7eb8-488f-bcc7-8c6bd2be3351-config\") pod \"dnsmasq-dns-67b494447c-js6kq\" (UID: \"e3d4527d-7eb8-488f-bcc7-8c6bd2be3351\") " pod="openstack/dnsmasq-dns-67b494447c-js6kq" Mar 13 13:11:55.513175 master-0 kubenswrapper[28149]: I0313 13:11:55.510955 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/e3d4527d-7eb8-488f-bcc7-8c6bd2be3351-dns-swift-storage-0\") pod \"dnsmasq-dns-67b494447c-js6kq\" (UID: \"e3d4527d-7eb8-488f-bcc7-8c6bd2be3351\") " pod="openstack/dnsmasq-dns-67b494447c-js6kq" Mar 13 13:11:55.621182 master-0 kubenswrapper[28149]: I0313 13:11:55.569523 28149 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3e202aeb-6913-4506-ba76-63feb8748d60-kube-api-access-j644f" (OuterVolumeSpecName: "kube-api-access-j644f") pod "3e202aeb-6913-4506-ba76-63feb8748d60" (UID: "3e202aeb-6913-4506-ba76-63feb8748d60"). InnerVolumeSpecName "kube-api-access-j644f". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 13:11:55.621182 master-0 kubenswrapper[28149]: I0313 13:11:55.611782 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vphwj\" (UniqueName: \"kubernetes.io/projected/b574739d-3b02-4434-a7cd-c75404b73fd3-kube-api-access-vphwj\") pod \"ironic-inspector-0a87-account-create-update-8qh6t\" (UID: \"b574739d-3b02-4434-a7cd-c75404b73fd3\") " pod="openstack/ironic-inspector-0a87-account-create-update-8qh6t" Mar 13 13:11:55.733337 master-0 kubenswrapper[28149]: I0313 13:11:55.732883 28149 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ironic-neutron-agent-8454cbf95d-4wvx9" Mar 13 13:11:55.744387 master-0 kubenswrapper[28149]: I0313 13:11:55.734764 28149 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ironic-inspector-0a87-account-create-update-8qh6t" Mar 13 13:11:55.780168 master-0 kubenswrapper[28149]: I0313 13:11:55.776369 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e3d4527d-7eb8-488f-bcc7-8c6bd2be3351-config\") pod \"dnsmasq-dns-67b494447c-js6kq\" (UID: \"e3d4527d-7eb8-488f-bcc7-8c6bd2be3351\") " pod="openstack/dnsmasq-dns-67b494447c-js6kq" Mar 13 13:11:55.780168 master-0 kubenswrapper[28149]: I0313 13:11:55.776606 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/e3d4527d-7eb8-488f-bcc7-8c6bd2be3351-dns-swift-storage-0\") pod \"dnsmasq-dns-67b494447c-js6kq\" (UID: \"e3d4527d-7eb8-488f-bcc7-8c6bd2be3351\") " pod="openstack/dnsmasq-dns-67b494447c-js6kq" Mar 13 13:11:55.780168 master-0 kubenswrapper[28149]: I0313 13:11:55.776703 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: 
\"kubernetes.io/configmap/e3d4527d-7eb8-488f-bcc7-8c6bd2be3351-ovsdbserver-nb\") pod \"dnsmasq-dns-67b494447c-js6kq\" (UID: \"e3d4527d-7eb8-488f-bcc7-8c6bd2be3351\") " pod="openstack/dnsmasq-dns-67b494447c-js6kq" Mar 13 13:11:55.780168 master-0 kubenswrapper[28149]: I0313 13:11:55.776811 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e3d4527d-7eb8-488f-bcc7-8c6bd2be3351-ovsdbserver-sb\") pod \"dnsmasq-dns-67b494447c-js6kq\" (UID: \"e3d4527d-7eb8-488f-bcc7-8c6bd2be3351\") " pod="openstack/dnsmasq-dns-67b494447c-js6kq" Mar 13 13:11:55.780168 master-0 kubenswrapper[28149]: I0313 13:11:55.776906 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e3d4527d-7eb8-488f-bcc7-8c6bd2be3351-dns-svc\") pod \"dnsmasq-dns-67b494447c-js6kq\" (UID: \"e3d4527d-7eb8-488f-bcc7-8c6bd2be3351\") " pod="openstack/dnsmasq-dns-67b494447c-js6kq" Mar 13 13:11:55.780168 master-0 kubenswrapper[28149]: I0313 13:11:55.776941 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vtcqf\" (UniqueName: \"kubernetes.io/projected/e3d4527d-7eb8-488f-bcc7-8c6bd2be3351-kube-api-access-vtcqf\") pod \"dnsmasq-dns-67b494447c-js6kq\" (UID: \"e3d4527d-7eb8-488f-bcc7-8c6bd2be3351\") " pod="openstack/dnsmasq-dns-67b494447c-js6kq" Mar 13 13:11:55.780168 master-0 kubenswrapper[28149]: I0313 13:11:55.777052 28149 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j644f\" (UniqueName: \"kubernetes.io/projected/3e202aeb-6913-4506-ba76-63feb8748d60-kube-api-access-j644f\") on node \"master-0\" DevicePath \"\"" Mar 13 13:11:55.780168 master-0 kubenswrapper[28149]: I0313 13:11:55.778181 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e3d4527d-7eb8-488f-bcc7-8c6bd2be3351-config\") pod 
\"dnsmasq-dns-67b494447c-js6kq\" (UID: \"e3d4527d-7eb8-488f-bcc7-8c6bd2be3351\") " pod="openstack/dnsmasq-dns-67b494447c-js6kq" Mar 13 13:11:55.780168 master-0 kubenswrapper[28149]: I0313 13:11:55.778724 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/e3d4527d-7eb8-488f-bcc7-8c6bd2be3351-dns-swift-storage-0\") pod \"dnsmasq-dns-67b494447c-js6kq\" (UID: \"e3d4527d-7eb8-488f-bcc7-8c6bd2be3351\") " pod="openstack/dnsmasq-dns-67b494447c-js6kq" Mar 13 13:11:55.780820 master-0 kubenswrapper[28149]: I0313 13:11:55.780735 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e3d4527d-7eb8-488f-bcc7-8c6bd2be3351-ovsdbserver-nb\") pod \"dnsmasq-dns-67b494447c-js6kq\" (UID: \"e3d4527d-7eb8-488f-bcc7-8c6bd2be3351\") " pod="openstack/dnsmasq-dns-67b494447c-js6kq" Mar 13 13:11:55.785153 master-0 kubenswrapper[28149]: I0313 13:11:55.781379 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e3d4527d-7eb8-488f-bcc7-8c6bd2be3351-ovsdbserver-sb\") pod \"dnsmasq-dns-67b494447c-js6kq\" (UID: \"e3d4527d-7eb8-488f-bcc7-8c6bd2be3351\") " pod="openstack/dnsmasq-dns-67b494447c-js6kq" Mar 13 13:11:55.785153 master-0 kubenswrapper[28149]: I0313 13:11:55.781937 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e3d4527d-7eb8-488f-bcc7-8c6bd2be3351-dns-svc\") pod \"dnsmasq-dns-67b494447c-js6kq\" (UID: \"e3d4527d-7eb8-488f-bcc7-8c6bd2be3351\") " pod="openstack/dnsmasq-dns-67b494447c-js6kq" Mar 13 13:11:55.842886 master-0 kubenswrapper[28149]: I0313 13:11:55.840386 28149 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-ee0a2-volume-lvm-iscsi-0" podUID="a08dce85-d5c4-44e4-a3b0-e404c53b62f2" containerName="cinder-volume" 
containerID="cri-o://9526c59f3a2cba871431711a9cbbc5eb3ada1bdd4f300b13618a2aaf534ea349" gracePeriod=30 Mar 13 13:11:55.842886 master-0 kubenswrapper[28149]: I0313 13:11:55.840574 28149 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-ee0a2-volume-lvm-iscsi-0" podUID="a08dce85-d5c4-44e4-a3b0-e404c53b62f2" containerName="probe" containerID="cri-o://7bb569460a6f2eb1cef8e8cce8c284a41fd3b9d31bee3e4adf73f698d6c89770" gracePeriod=30 Mar 13 13:11:55.842886 master-0 kubenswrapper[28149]: I0313 13:11:55.840656 28149 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-dc5fdb9b9-7mhs2" Mar 13 13:11:55.990257 master-0 kubenswrapper[28149]: I0313 13:11:55.989054 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vtcqf\" (UniqueName: \"kubernetes.io/projected/e3d4527d-7eb8-488f-bcc7-8c6bd2be3351-kube-api-access-vtcqf\") pod \"dnsmasq-dns-67b494447c-js6kq\" (UID: \"e3d4527d-7eb8-488f-bcc7-8c6bd2be3351\") " pod="openstack/dnsmasq-dns-67b494447c-js6kq" Mar 13 13:11:55.990257 master-0 kubenswrapper[28149]: I0313 13:11:55.989862 28149 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-67b494447c-js6kq" Mar 13 13:11:56.070667 master-0 kubenswrapper[28149]: E0313 13:11:56.070537 28149 kubelet.go:2526] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.128s" Mar 13 13:11:56.070667 master-0 kubenswrapper[28149]: I0313 13:11:56.070680 28149 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-ee0a2-backup-0" Mar 13 13:11:56.070973 master-0 kubenswrapper[28149]: I0313 13:11:56.070844 28149 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-67b494447c-js6kq"] Mar 13 13:11:56.070973 master-0 kubenswrapper[28149]: I0313 13:11:56.070871 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-dc5fdb9b9-7mhs2" event={"ID":"3e202aeb-6913-4506-ba76-63feb8748d60","Type":"ContainerDied","Data":"f7b63933386cfdb779c47799d353ad208e69de7ec1064fb5f78f06cfef6511d0"} Mar 13 13:11:56.070973 master-0 kubenswrapper[28149]: I0313 13:11:56.070917 28149 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ironic-6cf5cb77b5-nrxbr"] Mar 13 13:11:56.077728 master-0 kubenswrapper[28149]: I0313 13:11:56.077574 28149 scope.go:117] "RemoveContainer" containerID="18a8745e58a1e2ccfb89e849e5546947cae1ce0bdf6f20e7de69b1de1c8e5a3d" Mar 13 13:11:56.118204 master-0 kubenswrapper[28149]: I0313 13:11:56.115219 28149 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ironic-6cf5cb77b5-nrxbr" Mar 13 13:11:56.122048 master-0 kubenswrapper[28149]: I0313 13:11:56.120061 28149 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ironic-6cf5cb77b5-nrxbr"] Mar 13 13:11:56.125785 master-0 kubenswrapper[28149]: I0313 13:11:56.123303 28149 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Mar 13 13:11:56.125785 master-0 kubenswrapper[28149]: I0313 13:11:56.123625 28149 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-transport-url-ironic-transport" Mar 13 13:11:56.125785 master-0 kubenswrapper[28149]: I0313 13:11:56.123662 28149 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ironic-config-data" Mar 13 13:11:56.125785 master-0 kubenswrapper[28149]: I0313 13:11:56.123797 28149 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ironic-api-scripts" Mar 13 13:11:56.125785 master-0 kubenswrapper[28149]: I0313 13:11:56.123986 28149 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ironic-api-config-data" Mar 13 13:11:56.202552 master-0 kubenswrapper[28149]: I0313 13:11:56.190209 28149 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3e202aeb-6913-4506-ba76-63feb8748d60-config" (OuterVolumeSpecName: "config") pod "3e202aeb-6913-4506-ba76-63feb8748d60" (UID: "3e202aeb-6913-4506-ba76-63feb8748d60"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 13 13:11:56.228210 master-0 kubenswrapper[28149]: I0313 13:11:56.224063 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kpbw6\" (UniqueName: \"kubernetes.io/projected/d6997886-21ba-4767-a3f9-82bb99c7c39a-kube-api-access-kpbw6\") pod \"ironic-6cf5cb77b5-nrxbr\" (UID: \"d6997886-21ba-4767-a3f9-82bb99c7c39a\") " pod="openstack/ironic-6cf5cb77b5-nrxbr" Mar 13 13:11:56.228210 master-0 kubenswrapper[28149]: I0313 13:11:56.224203 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d6997886-21ba-4767-a3f9-82bb99c7c39a-scripts\") pod \"ironic-6cf5cb77b5-nrxbr\" (UID: \"d6997886-21ba-4767-a3f9-82bb99c7c39a\") " pod="openstack/ironic-6cf5cb77b5-nrxbr" Mar 13 13:11:56.228210 master-0 kubenswrapper[28149]: I0313 13:11:56.224279 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d6997886-21ba-4767-a3f9-82bb99c7c39a-logs\") pod \"ironic-6cf5cb77b5-nrxbr\" (UID: \"d6997886-21ba-4767-a3f9-82bb99c7c39a\") " pod="openstack/ironic-6cf5cb77b5-nrxbr" Mar 13 13:11:56.228210 master-0 kubenswrapper[28149]: I0313 13:11:56.224377 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d6997886-21ba-4767-a3f9-82bb99c7c39a-config-data\") pod \"ironic-6cf5cb77b5-nrxbr\" (UID: \"d6997886-21ba-4767-a3f9-82bb99c7c39a\") " pod="openstack/ironic-6cf5cb77b5-nrxbr" Mar 13 13:11:56.228210 master-0 kubenswrapper[28149]: I0313 13:11:56.224470 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/d6997886-21ba-4767-a3f9-82bb99c7c39a-config-data-merged\") pod 
\"ironic-6cf5cb77b5-nrxbr\" (UID: \"d6997886-21ba-4767-a3f9-82bb99c7c39a\") " pod="openstack/ironic-6cf5cb77b5-nrxbr" Mar 13 13:11:56.228210 master-0 kubenswrapper[28149]: I0313 13:11:56.224536 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d6997886-21ba-4767-a3f9-82bb99c7c39a-config-data-custom\") pod \"ironic-6cf5cb77b5-nrxbr\" (UID: \"d6997886-21ba-4767-a3f9-82bb99c7c39a\") " pod="openstack/ironic-6cf5cb77b5-nrxbr" Mar 13 13:11:56.228210 master-0 kubenswrapper[28149]: I0313 13:11:56.224583 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d6997886-21ba-4767-a3f9-82bb99c7c39a-combined-ca-bundle\") pod \"ironic-6cf5cb77b5-nrxbr\" (UID: \"d6997886-21ba-4767-a3f9-82bb99c7c39a\") " pod="openstack/ironic-6cf5cb77b5-nrxbr" Mar 13 13:11:56.228210 master-0 kubenswrapper[28149]: I0313 13:11:56.224662 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-podinfo\" (UniqueName: \"kubernetes.io/downward-api/d6997886-21ba-4767-a3f9-82bb99c7c39a-etc-podinfo\") pod \"ironic-6cf5cb77b5-nrxbr\" (UID: \"d6997886-21ba-4767-a3f9-82bb99c7c39a\") " pod="openstack/ironic-6cf5cb77b5-nrxbr" Mar 13 13:11:56.228210 master-0 kubenswrapper[28149]: I0313 13:11:56.224739 28149 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3e202aeb-6913-4506-ba76-63feb8748d60-config\") on node \"master-0\" DevicePath \"\"" Mar 13 13:11:56.242491 master-0 kubenswrapper[28149]: I0313 13:11:56.238010 28149 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-ee0a2-backup-0"] Mar 13 13:11:56.242491 master-0 kubenswrapper[28149]: I0313 13:11:56.240402 28149 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/configmap/3e202aeb-6913-4506-ba76-63feb8748d60-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "3e202aeb-6913-4506-ba76-63feb8748d60" (UID: "3e202aeb-6913-4506-ba76-63feb8748d60"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 13 13:11:56.252211 master-0 kubenswrapper[28149]: I0313 13:11:56.246888 28149 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3e202aeb-6913-4506-ba76-63feb8748d60-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "3e202aeb-6913-4506-ba76-63feb8748d60" (UID: "3e202aeb-6913-4506-ba76-63feb8748d60"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 13 13:11:56.329939 master-0 kubenswrapper[28149]: I0313 13:11:56.328529 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-podinfo\" (UniqueName: \"kubernetes.io/downward-api/d6997886-21ba-4767-a3f9-82bb99c7c39a-etc-podinfo\") pod \"ironic-6cf5cb77b5-nrxbr\" (UID: \"d6997886-21ba-4767-a3f9-82bb99c7c39a\") " pod="openstack/ironic-6cf5cb77b5-nrxbr" Mar 13 13:11:56.329939 master-0 kubenswrapper[28149]: I0313 13:11:56.328677 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kpbw6\" (UniqueName: \"kubernetes.io/projected/d6997886-21ba-4767-a3f9-82bb99c7c39a-kube-api-access-kpbw6\") pod \"ironic-6cf5cb77b5-nrxbr\" (UID: \"d6997886-21ba-4767-a3f9-82bb99c7c39a\") " pod="openstack/ironic-6cf5cb77b5-nrxbr" Mar 13 13:11:56.329939 master-0 kubenswrapper[28149]: I0313 13:11:56.328711 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d6997886-21ba-4767-a3f9-82bb99c7c39a-scripts\") pod \"ironic-6cf5cb77b5-nrxbr\" (UID: \"d6997886-21ba-4767-a3f9-82bb99c7c39a\") " pod="openstack/ironic-6cf5cb77b5-nrxbr" Mar 13 13:11:56.329939 master-0 kubenswrapper[28149]: 
I0313 13:11:56.328769 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d6997886-21ba-4767-a3f9-82bb99c7c39a-logs\") pod \"ironic-6cf5cb77b5-nrxbr\" (UID: \"d6997886-21ba-4767-a3f9-82bb99c7c39a\") " pod="openstack/ironic-6cf5cb77b5-nrxbr"
Mar 13 13:11:56.329939 master-0 kubenswrapper[28149]: I0313 13:11:56.329253 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d6997886-21ba-4767-a3f9-82bb99c7c39a-config-data\") pod \"ironic-6cf5cb77b5-nrxbr\" (UID: \"d6997886-21ba-4767-a3f9-82bb99c7c39a\") " pod="openstack/ironic-6cf5cb77b5-nrxbr"
Mar 13 13:11:56.329939 master-0 kubenswrapper[28149]: I0313 13:11:56.329846 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d6997886-21ba-4767-a3f9-82bb99c7c39a-logs\") pod \"ironic-6cf5cb77b5-nrxbr\" (UID: \"d6997886-21ba-4767-a3f9-82bb99c7c39a\") " pod="openstack/ironic-6cf5cb77b5-nrxbr"
Mar 13 13:11:56.331183 master-0 kubenswrapper[28149]: I0313 13:11:56.330413 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/d6997886-21ba-4767-a3f9-82bb99c7c39a-config-data-merged\") pod \"ironic-6cf5cb77b5-nrxbr\" (UID: \"d6997886-21ba-4767-a3f9-82bb99c7c39a\") " pod="openstack/ironic-6cf5cb77b5-nrxbr"
Mar 13 13:11:56.331183 master-0 kubenswrapper[28149]: I0313 13:11:56.330575 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d6997886-21ba-4767-a3f9-82bb99c7c39a-config-data-custom\") pod \"ironic-6cf5cb77b5-nrxbr\" (UID: \"d6997886-21ba-4767-a3f9-82bb99c7c39a\") " pod="openstack/ironic-6cf5cb77b5-nrxbr"
Mar 13 13:11:56.331183 master-0 kubenswrapper[28149]: I0313 13:11:56.330604 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d6997886-21ba-4767-a3f9-82bb99c7c39a-combined-ca-bundle\") pod \"ironic-6cf5cb77b5-nrxbr\" (UID: \"d6997886-21ba-4767-a3f9-82bb99c7c39a\") " pod="openstack/ironic-6cf5cb77b5-nrxbr"
Mar 13 13:11:56.331183 master-0 kubenswrapper[28149]: I0313 13:11:56.330760 28149 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3e202aeb-6913-4506-ba76-63feb8748d60-ovsdbserver-sb\") on node \"master-0\" DevicePath \"\""
Mar 13 13:11:56.331183 master-0 kubenswrapper[28149]: I0313 13:11:56.330774 28149 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/3e202aeb-6913-4506-ba76-63feb8748d60-dns-swift-storage-0\") on node \"master-0\" DevicePath \"\""
Mar 13 13:11:56.337024 master-0 kubenswrapper[28149]: I0313 13:11:56.336888 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/d6997886-21ba-4767-a3f9-82bb99c7c39a-config-data-merged\") pod \"ironic-6cf5cb77b5-nrxbr\" (UID: \"d6997886-21ba-4767-a3f9-82bb99c7c39a\") " pod="openstack/ironic-6cf5cb77b5-nrxbr"
Mar 13 13:11:56.351692 master-0 kubenswrapper[28149]: I0313 13:11:56.351433 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d6997886-21ba-4767-a3f9-82bb99c7c39a-combined-ca-bundle\") pod \"ironic-6cf5cb77b5-nrxbr\" (UID: \"d6997886-21ba-4767-a3f9-82bb99c7c39a\") " pod="openstack/ironic-6cf5cb77b5-nrxbr"
Mar 13 13:11:56.359981 master-0 kubenswrapper[28149]: I0313 13:11:56.357588 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d6997886-21ba-4767-a3f9-82bb99c7c39a-config-data-custom\") pod \"ironic-6cf5cb77b5-nrxbr\" (UID: \"d6997886-21ba-4767-a3f9-82bb99c7c39a\") " pod="openstack/ironic-6cf5cb77b5-nrxbr"
Mar 13 13:11:56.419018 master-0 kubenswrapper[28149]: I0313 13:11:56.385537 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-podinfo\" (UniqueName: \"kubernetes.io/downward-api/d6997886-21ba-4767-a3f9-82bb99c7c39a-etc-podinfo\") pod \"ironic-6cf5cb77b5-nrxbr\" (UID: \"d6997886-21ba-4767-a3f9-82bb99c7c39a\") " pod="openstack/ironic-6cf5cb77b5-nrxbr"
Mar 13 13:11:56.419018 master-0 kubenswrapper[28149]: I0313 13:11:56.385551 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d6997886-21ba-4767-a3f9-82bb99c7c39a-scripts\") pod \"ironic-6cf5cb77b5-nrxbr\" (UID: \"d6997886-21ba-4767-a3f9-82bb99c7c39a\") " pod="openstack/ironic-6cf5cb77b5-nrxbr"
Mar 13 13:11:56.419018 master-0 kubenswrapper[28149]: I0313 13:11:56.386489 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d6997886-21ba-4767-a3f9-82bb99c7c39a-config-data\") pod \"ironic-6cf5cb77b5-nrxbr\" (UID: \"d6997886-21ba-4767-a3f9-82bb99c7c39a\") " pod="openstack/ironic-6cf5cb77b5-nrxbr"
Mar 13 13:11:56.419018 master-0 kubenswrapper[28149]: I0313 13:11:56.400891 28149 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3e202aeb-6913-4506-ba76-63feb8748d60-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "3e202aeb-6913-4506-ba76-63feb8748d60" (UID: "3e202aeb-6913-4506-ba76-63feb8748d60"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 13 13:11:56.435754 master-0 kubenswrapper[28149]: I0313 13:11:56.434140 28149 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3e202aeb-6913-4506-ba76-63feb8748d60-dns-svc\") on node \"master-0\" DevicePath \"\""
Mar 13 13:11:56.448297 master-0 kubenswrapper[28149]: I0313 13:11:56.446018 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kpbw6\" (UniqueName: \"kubernetes.io/projected/d6997886-21ba-4767-a3f9-82bb99c7c39a-kube-api-access-kpbw6\") pod \"ironic-6cf5cb77b5-nrxbr\" (UID: \"d6997886-21ba-4767-a3f9-82bb99c7c39a\") " pod="openstack/ironic-6cf5cb77b5-nrxbr"
Mar 13 13:11:56.826579 master-0 kubenswrapper[28149]: I0313 13:11:56.826540 28149 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ironic-6cf5cb77b5-nrxbr"
Mar 13 13:11:56.877836 master-0 kubenswrapper[28149]: I0313 13:11:56.875342 28149 scope.go:117] "RemoveContainer" containerID="8b064974b1c0d3e17c6e1220ecf80ddcf5ddf54886bb12d75941a2f5ecc0cd1d"
Mar 13 13:11:57.142242 master-0 kubenswrapper[28149]: I0313 13:11:57.141404 28149 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3e202aeb-6913-4506-ba76-63feb8748d60-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "3e202aeb-6913-4506-ba76-63feb8748d60" (UID: "3e202aeb-6913-4506-ba76-63feb8748d60"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 13 13:11:57.175961 master-0 kubenswrapper[28149]: I0313 13:11:57.161043 28149 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3e202aeb-6913-4506-ba76-63feb8748d60-ovsdbserver-nb\") on node \"master-0\" DevicePath \"\""
Mar 13 13:11:57.192150 master-0 kubenswrapper[28149]: I0313 13:11:57.190783 28149 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-ee0a2-backup-0" podUID="5542dffa-edbf-4133-b7cc-2631121726dc" containerName="cinder-backup" containerID="cri-o://727d8e30c5ed7c69b55a98f2363e3c8df4840dde2b23de1158ff5b34eb4d3617" gracePeriod=30
Mar 13 13:11:57.192150 master-0 kubenswrapper[28149]: I0313 13:11:57.191639 28149 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-ee0a2-backup-0" podUID="5542dffa-edbf-4133-b7cc-2631121726dc" containerName="probe" containerID="cri-o://8d866a757c903f73361a05a85d828507b5d29c24c685ef09179cf6eb95a3969f" gracePeriod=30
Mar 13 13:11:57.317328 master-0 kubenswrapper[28149]: I0313 13:11:57.304089 28149 trace.go:236] Trace[1825472298]: "Calculate volume metrics of cache for pod openshift-operator-controller/operator-controller-controller-manager-6598bfb6c4-dv8rj" (13-Mar-2026 13:11:56.075) (total time: 1228ms):
Mar 13 13:11:57.317328 master-0 kubenswrapper[28149]: Trace[1825472298]: [1.228611403s] [1.228611403s] END
Mar 13 13:11:57.347261 master-0 kubenswrapper[28149]: I0313 13:11:57.347196 28149 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ironic-inspector-db-create-vdc8v"]
Mar 13 13:11:57.661355 master-0 kubenswrapper[28149]: I0313 13:11:57.661247 28149 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ironic-neutron-agent-8454cbf95d-4wvx9"]
Mar 13 13:11:57.710982 master-0 kubenswrapper[28149]: I0313 13:11:57.710928 28149 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-dc5fdb9b9-7mhs2"]
Mar 13 13:11:58.053487 master-0 kubenswrapper[28149]: I0313 13:11:58.053426 28149 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-dc5fdb9b9-7mhs2"]
Mar 13 13:11:58.248087 master-0 kubenswrapper[28149]: I0313 13:11:58.247993 28149 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-67b494447c-js6kq"]
Mar 13 13:11:58.331852 master-0 kubenswrapper[28149]: I0313 13:11:58.331788 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-neutron-agent-8454cbf95d-4wvx9" event={"ID":"52f4f9dd-4956-4c8b-9a8d-c832a8049c3a","Type":"ContainerStarted","Data":"2b4c38d58d11a0a7ddf49a6188d353d97a504f880c09cb0ada79acb51b616b83"}
Mar 13 13:11:58.342405 master-0 kubenswrapper[28149]: I0313 13:11:58.337481 28149 generic.go:334] "Generic (PLEG): container finished" podID="8c7dd334-6af4-4528-9a21-d51e946a555b" containerID="6c3428e0b54511966213f010f1f12cc38d9cc7f5452604763576c7d35c1098f8" exitCode=0
Mar 13 13:11:58.342405 master-0 kubenswrapper[28149]: I0313 13:11:58.337532 28149 generic.go:334] "Generic (PLEG): container finished" podID="8c7dd334-6af4-4528-9a21-d51e946a555b" containerID="3cb0d27e3219bc2532b415baa34498fa0a4a28387194a658dc55600c976399d5" exitCode=0
Mar 13 13:11:58.342405 master-0 kubenswrapper[28149]: I0313 13:11:58.337650 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-ee0a2-scheduler-0" event={"ID":"8c7dd334-6af4-4528-9a21-d51e946a555b","Type":"ContainerDied","Data":"6c3428e0b54511966213f010f1f12cc38d9cc7f5452604763576c7d35c1098f8"}
Mar 13 13:11:58.342405 master-0 kubenswrapper[28149]: I0313 13:11:58.337695 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-ee0a2-scheduler-0" event={"ID":"8c7dd334-6af4-4528-9a21-d51e946a555b","Type":"ContainerDied","Data":"3cb0d27e3219bc2532b415baa34498fa0a4a28387194a658dc55600c976399d5"}
Mar 13 13:11:58.347352 master-0 kubenswrapper[28149]: I0313 13:11:58.347229 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-ee0a2-api-0" event={"ID":"7a950989-4934-4057-8476-2476cd99542e","Type":"ContainerStarted","Data":"d86a78b3b227ac02fa65ae55d527664d236c1d5a51a6c16c226499d0af6c4cc4"}
Mar 13 13:11:58.386781 master-0 kubenswrapper[28149]: I0313 13:11:58.385520 28149 generic.go:334] "Generic (PLEG): container finished" podID="a08dce85-d5c4-44e4-a3b0-e404c53b62f2" containerID="9526c59f3a2cba871431711a9cbbc5eb3ada1bdd4f300b13618a2aaf534ea349" exitCode=0
Mar 13 13:11:58.386781 master-0 kubenswrapper[28149]: I0313 13:11:58.385661 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-ee0a2-volume-lvm-iscsi-0" event={"ID":"a08dce85-d5c4-44e4-a3b0-e404c53b62f2","Type":"ContainerDied","Data":"9526c59f3a2cba871431711a9cbbc5eb3ada1bdd4f300b13618a2aaf534ea349"}
Mar 13 13:11:58.402346 master-0 kubenswrapper[28149]: I0313 13:11:58.397247 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-inspector-db-create-vdc8v" event={"ID":"50829c53-48ab-48a0-a68a-655b740ac823","Type":"ContainerStarted","Data":"574ff0a472dbbb11c12271648ce872ad524da18b99776decb15d31096f860a89"}
Mar 13 13:11:58.460474 master-0 kubenswrapper[28149]: I0313 13:11:58.460411 28149 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ironic-inspector-0a87-account-create-update-8qh6t"]
Mar 13 13:11:58.774308 master-0 kubenswrapper[28149]: I0313 13:11:58.765508 28149 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3e202aeb-6913-4506-ba76-63feb8748d60" path="/var/lib/kubelet/pods/3e202aeb-6913-4506-ba76-63feb8748d60/volumes"
Mar 13 13:11:58.961347 master-0 kubenswrapper[28149]: I0313 13:11:58.956104 28149 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ironic-6cf5cb77b5-nrxbr"]
Mar 13 13:11:59.079666 master-0 kubenswrapper[28149]: I0313 13:11:59.079253 28149 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-6b96699696-vzrgx"
Mar 13 13:11:59.086897 master-0 kubenswrapper[28149]: I0313 13:11:59.081596 28149 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-6b96699696-vzrgx"
Mar 13 13:11:59.191205 master-0 kubenswrapper[28149]: I0313 13:11:59.186513 28149 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-ee0a2-scheduler-0"
Mar 13 13:11:59.242905 master-0 kubenswrapper[28149]: I0313 13:11:59.239792 28149 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8c7dd334-6af4-4528-9a21-d51e946a555b-config-data\") pod \"8c7dd334-6af4-4528-9a21-d51e946a555b\" (UID: \"8c7dd334-6af4-4528-9a21-d51e946a555b\") "
Mar 13 13:11:59.242905 master-0 kubenswrapper[28149]: I0313 13:11:59.240038 28149 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2rjqn\" (UniqueName: \"kubernetes.io/projected/8c7dd334-6af4-4528-9a21-d51e946a555b-kube-api-access-2rjqn\") pod \"8c7dd334-6af4-4528-9a21-d51e946a555b\" (UID: \"8c7dd334-6af4-4528-9a21-d51e946a555b\") "
Mar 13 13:11:59.242905 master-0 kubenswrapper[28149]: I0313 13:11:59.240074 28149 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/8c7dd334-6af4-4528-9a21-d51e946a555b-config-data-custom\") pod \"8c7dd334-6af4-4528-9a21-d51e946a555b\" (UID: \"8c7dd334-6af4-4528-9a21-d51e946a555b\") "
Mar 13 13:11:59.242905 master-0 kubenswrapper[28149]: I0313 13:11:59.240202 28149 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8c7dd334-6af4-4528-9a21-d51e946a555b-scripts\") pod \"8c7dd334-6af4-4528-9a21-d51e946a555b\" (UID: \"8c7dd334-6af4-4528-9a21-d51e946a555b\") "
Mar 13 13:11:59.242905 master-0 kubenswrapper[28149]: I0313 13:11:59.240277 28149 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8c7dd334-6af4-4528-9a21-d51e946a555b-combined-ca-bundle\") pod \"8c7dd334-6af4-4528-9a21-d51e946a555b\" (UID: \"8c7dd334-6af4-4528-9a21-d51e946a555b\") "
Mar 13 13:11:59.242905 master-0 kubenswrapper[28149]: I0313 13:11:59.240329 28149 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/8c7dd334-6af4-4528-9a21-d51e946a555b-etc-machine-id\") pod \"8c7dd334-6af4-4528-9a21-d51e946a555b\" (UID: \"8c7dd334-6af4-4528-9a21-d51e946a555b\") "
Mar 13 13:11:59.242905 master-0 kubenswrapper[28149]: I0313 13:11:59.240965 28149 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8c7dd334-6af4-4528-9a21-d51e946a555b-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "8c7dd334-6af4-4528-9a21-d51e946a555b" (UID: "8c7dd334-6af4-4528-9a21-d51e946a555b"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 13 13:11:59.307595 master-0 kubenswrapper[28149]: I0313 13:11:59.279029 28149 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8c7dd334-6af4-4528-9a21-d51e946a555b-scripts" (OuterVolumeSpecName: "scripts") pod "8c7dd334-6af4-4528-9a21-d51e946a555b" (UID: "8c7dd334-6af4-4528-9a21-d51e946a555b"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 13 13:11:59.307595 master-0 kubenswrapper[28149]: I0313 13:11:59.293584 28149 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8c7dd334-6af4-4528-9a21-d51e946a555b-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "8c7dd334-6af4-4528-9a21-d51e946a555b" (UID: "8c7dd334-6af4-4528-9a21-d51e946a555b"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 13 13:11:59.307595 master-0 kubenswrapper[28149]: I0313 13:11:59.304447 28149 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8c7dd334-6af4-4528-9a21-d51e946a555b-kube-api-access-2rjqn" (OuterVolumeSpecName: "kube-api-access-2rjqn") pod "8c7dd334-6af4-4528-9a21-d51e946a555b" (UID: "8c7dd334-6af4-4528-9a21-d51e946a555b"). InnerVolumeSpecName "kube-api-access-2rjqn". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 13 13:11:59.354697 master-0 kubenswrapper[28149]: I0313 13:11:59.343794 28149 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-699d7776-9kkdk"]
Mar 13 13:11:59.354697 master-0 kubenswrapper[28149]: I0313 13:11:59.344144 28149 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/placement-699d7776-9kkdk" podUID="ea4701c8-792f-4a27-948e-cc2d36ad5739" containerName="placement-log" containerID="cri-o://d83b312654b71814018bf82aefcc44782c1b8a50ca051dac7c42951c264b572f" gracePeriod=30
Mar 13 13:11:59.354697 master-0 kubenswrapper[28149]: I0313 13:11:59.344685 28149 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/placement-699d7776-9kkdk" podUID="ea4701c8-792f-4a27-948e-cc2d36ad5739" containerName="placement-api" containerID="cri-o://0de68570e0c25f556b25a0d514a40bf6e6b23fdd944c31b42c9a5dee0c0f377f" gracePeriod=30
Mar 13 13:11:59.354697 master-0 kubenswrapper[28149]: I0313 13:11:59.350020 28149 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/8c7dd334-6af4-4528-9a21-d51e946a555b-etc-machine-id\") on node \"master-0\" DevicePath \"\""
Mar 13 13:11:59.354697 master-0 kubenswrapper[28149]: I0313 13:11:59.350091 28149 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2rjqn\" (UniqueName: \"kubernetes.io/projected/8c7dd334-6af4-4528-9a21-d51e946a555b-kube-api-access-2rjqn\") on node \"master-0\" DevicePath \"\""
Mar 13 13:11:59.354697 master-0 kubenswrapper[28149]: I0313 13:11:59.350108 28149 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/8c7dd334-6af4-4528-9a21-d51e946a555b-config-data-custom\") on node \"master-0\" DevicePath \"\""
Mar 13 13:11:59.354697 master-0 kubenswrapper[28149]: I0313 13:11:59.350179 28149 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8c7dd334-6af4-4528-9a21-d51e946a555b-scripts\") on node \"master-0\" DevicePath \"\""
Mar 13 13:11:59.393469 master-0 kubenswrapper[28149]: I0313 13:11:59.368652 28149 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/placement-699d7776-9kkdk" podUID="ea4701c8-792f-4a27-948e-cc2d36ad5739" containerName="placement-log" probeResult="failure" output="Get \"https://10.128.0.220:8778/\": EOF"
Mar 13 13:11:59.393469 master-0 kubenswrapper[28149]: I0313 13:11:59.368755 28149 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/placement-699d7776-9kkdk" podUID="ea4701c8-792f-4a27-948e-cc2d36ad5739" containerName="placement-api" probeResult="failure" output="Get \"https://10.128.0.220:8778/\": EOF"
Mar 13 13:11:59.393469 master-0 kubenswrapper[28149]: I0313 13:11:59.368843 28149 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/placement-699d7776-9kkdk" podUID="ea4701c8-792f-4a27-948e-cc2d36ad5739" containerName="placement-api" probeResult="failure" output="Get \"https://10.128.0.220:8778/\": EOF"
Mar 13 13:11:59.393469 master-0 kubenswrapper[28149]: I0313 13:11:59.369559 28149 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/placement-699d7776-9kkdk" podUID="ea4701c8-792f-4a27-948e-cc2d36ad5739" containerName="placement-log" probeResult="failure" output="Get \"https://10.128.0.220:8778/\": EOF"
Mar 13 13:11:59.493920 master-0 kubenswrapper[28149]: I0313 13:11:59.493830 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-inspector-0a87-account-create-update-8qh6t" event={"ID":"b574739d-3b02-4434-a7cd-c75404b73fd3","Type":"ContainerStarted","Data":"74f01847610f8475403d891f52537cab7feae2ea704616a6818bc2120a91880c"}
Mar 13 13:11:59.493920 master-0 kubenswrapper[28149]: I0313 13:11:59.493895 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-inspector-0a87-account-create-update-8qh6t" event={"ID":"b574739d-3b02-4434-a7cd-c75404b73fd3","Type":"ContainerStarted","Data":"70bfb2b84c7e92744ed11ea7347cc414aa4a2a8e1df550010c923e9d46be7f66"}
Mar 13 13:11:59.551271 master-0 kubenswrapper[28149]: I0313 13:11:59.546620 28149 generic.go:334] "Generic (PLEG): container finished" podID="5542dffa-edbf-4133-b7cc-2631121726dc" containerID="8d866a757c903f73361a05a85d828507b5d29c24c685ef09179cf6eb95a3969f" exitCode=0
Mar 13 13:11:59.551271 master-0 kubenswrapper[28149]: I0313 13:11:59.546702 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-ee0a2-backup-0" event={"ID":"5542dffa-edbf-4133-b7cc-2631121726dc","Type":"ContainerDied","Data":"8d866a757c903f73361a05a85d828507b5d29c24c685ef09179cf6eb95a3969f"}
Mar 13 13:11:59.551271 master-0 kubenswrapper[28149]: I0313 13:11:59.548404 28149 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ironic-inspector-0a87-account-create-update-8qh6t" podStartSLOduration=6.548390632 podStartE2EDuration="6.548390632s" podCreationTimestamp="2026-03-13 13:11:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 13:11:59.542140844 +0000 UTC m=+1093.195606013" watchObservedRunningTime="2026-03-13 13:11:59.548390632 +0000 UTC m=+1093.201855791"
Mar 13 13:11:59.551271 master-0 kubenswrapper[28149]: I0313 13:11:59.550013 28149 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8c7dd334-6af4-4528-9a21-d51e946a555b-config-data" (OuterVolumeSpecName: "config-data") pod "8c7dd334-6af4-4528-9a21-d51e946a555b" (UID: "8c7dd334-6af4-4528-9a21-d51e946a555b"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 13 13:11:59.551271 master-0 kubenswrapper[28149]: I0313 13:11:59.550513 28149 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8c7dd334-6af4-4528-9a21-d51e946a555b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "8c7dd334-6af4-4528-9a21-d51e946a555b" (UID: "8c7dd334-6af4-4528-9a21-d51e946a555b"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 13 13:11:59.592592 master-0 kubenswrapper[28149]: I0313 13:11:59.590593 28149 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8c7dd334-6af4-4528-9a21-d51e946a555b-combined-ca-bundle\") on node \"master-0\" DevicePath \"\""
Mar 13 13:11:59.592592 master-0 kubenswrapper[28149]: I0313 13:11:59.590635 28149 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8c7dd334-6af4-4528-9a21-d51e946a555b-config-data\") on node \"master-0\" DevicePath \"\""
Mar 13 13:11:59.610983 master-0 kubenswrapper[28149]: I0313 13:11:59.610926 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-ee0a2-scheduler-0" event={"ID":"8c7dd334-6af4-4528-9a21-d51e946a555b","Type":"ContainerDied","Data":"c8c26f8b0a103db78db5b6d06aa5731da2cc96bac7f7d349aa25689ba78a8daf"}
Mar 13 13:11:59.610983 master-0 kubenswrapper[28149]: I0313 13:11:59.610987 28149 scope.go:117] "RemoveContainer" containerID="6c3428e0b54511966213f010f1f12cc38d9cc7f5452604763576c7d35c1098f8"
Mar 13 13:11:59.611256 master-0 kubenswrapper[28149]: I0313 13:11:59.611168 28149 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-ee0a2-scheduler-0"
Mar 13 13:11:59.629549 master-0 kubenswrapper[28149]: I0313 13:11:59.629037 28149 generic.go:334] "Generic (PLEG): container finished" podID="e3d4527d-7eb8-488f-bcc7-8c6bd2be3351" containerID="c0c546ec0079f497909d97704c6b31fac8273a6fc1c7a904d8fa831d5f497489" exitCode=0
Mar 13 13:11:59.629549 master-0 kubenswrapper[28149]: I0313 13:11:59.629112 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-67b494447c-js6kq" event={"ID":"e3d4527d-7eb8-488f-bcc7-8c6bd2be3351","Type":"ContainerDied","Data":"c0c546ec0079f497909d97704c6b31fac8273a6fc1c7a904d8fa831d5f497489"}
Mar 13 13:11:59.629549 master-0 kubenswrapper[28149]: I0313 13:11:59.629223 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-67b494447c-js6kq" event={"ID":"e3d4527d-7eb8-488f-bcc7-8c6bd2be3351","Type":"ContainerStarted","Data":"349a6d0dcd3e766f2c0ab69da701d19a276ae70393994f71f6734dc0179d3dec"}
Mar 13 13:11:59.637034 master-0 kubenswrapper[28149]: I0313 13:11:59.636992 28149 generic.go:334] "Generic (PLEG): container finished" podID="50829c53-48ab-48a0-a68a-655b740ac823" containerID="28b4e9b9371b9831b35968c866916c675a8dfc7d7658378a79f9913173858767" exitCode=0
Mar 13 13:11:59.637338 master-0 kubenswrapper[28149]: I0313 13:11:59.637068 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-inspector-db-create-vdc8v" event={"ID":"50829c53-48ab-48a0-a68a-655b740ac823","Type":"ContainerDied","Data":"28b4e9b9371b9831b35968c866916c675a8dfc7d7658378a79f9913173858767"}
Mar 13 13:11:59.641566 master-0 kubenswrapper[28149]: I0313 13:11:59.640947 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-6cf5cb77b5-nrxbr" event={"ID":"d6997886-21ba-4767-a3f9-82bb99c7c39a","Type":"ContainerStarted","Data":"b5a57f19d5db8dcaa5cc2f8fcced92aa060fec9baa20abcc56e29135193d0088"}
Mar 13 13:11:59.817217 master-0 kubenswrapper[28149]: I0313 13:11:59.817131 28149 scope.go:117] "RemoveContainer" containerID="3cb0d27e3219bc2532b415baa34498fa0a4a28387194a658dc55600c976399d5"
Mar 13 13:11:59.878307 master-0 kubenswrapper[28149]: I0313 13:11:59.878232 28149 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-ee0a2-scheduler-0"]
Mar 13 13:11:59.906988 master-0 kubenswrapper[28149]: I0313 13:11:59.906283 28149 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-ee0a2-scheduler-0"]
Mar 13 13:11:59.922786 master-0 kubenswrapper[28149]: I0313 13:11:59.922739 28149 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-ee0a2-scheduler-0"]
Mar 13 13:11:59.923378 master-0 kubenswrapper[28149]: E0313 13:11:59.923356 28149 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8c7dd334-6af4-4528-9a21-d51e946a555b" containerName="cinder-scheduler"
Mar 13 13:11:59.923378 master-0 kubenswrapper[28149]: I0313 13:11:59.923379 28149 state_mem.go:107] "Deleted CPUSet assignment" podUID="8c7dd334-6af4-4528-9a21-d51e946a555b" containerName="cinder-scheduler"
Mar 13 13:11:59.923512 master-0 kubenswrapper[28149]: E0313 13:11:59.923415 28149 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8c7dd334-6af4-4528-9a21-d51e946a555b" containerName="probe"
Mar 13 13:11:59.923512 master-0 kubenswrapper[28149]: I0313 13:11:59.923424 28149 state_mem.go:107] "Deleted CPUSet assignment" podUID="8c7dd334-6af4-4528-9a21-d51e946a555b" containerName="probe"
Mar 13 13:11:59.923881 master-0 kubenswrapper[28149]: I0313 13:11:59.923850 28149 memory_manager.go:354] "RemoveStaleState removing state" podUID="8c7dd334-6af4-4528-9a21-d51e946a555b" containerName="cinder-scheduler"
Mar 13 13:11:59.923974 master-0 kubenswrapper[28149]: I0313 13:11:59.923888 28149 memory_manager.go:354] "RemoveStaleState removing state" podUID="8c7dd334-6af4-4528-9a21-d51e946a555b" containerName="probe"
Mar 13 13:11:59.926413 master-0 kubenswrapper[28149]: I0313 13:11:59.926316 28149 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-ee0a2-scheduler-0"
Mar 13 13:11:59.935738 master-0 kubenswrapper[28149]: I0313 13:11:59.935679 28149 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-ee0a2-scheduler-0"]
Mar 13 13:11:59.954570 master-0 kubenswrapper[28149]: I0313 13:11:59.953336 28149 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-ee0a2-scheduler-config-data"
Mar 13 13:12:00.014039 master-0 kubenswrapper[28149]: I0313 13:12:00.013972 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/db83bac9-e722-4e4f-aad6-eba4fdcbaedb-scripts\") pod \"cinder-ee0a2-scheduler-0\" (UID: \"db83bac9-e722-4e4f-aad6-eba4fdcbaedb\") " pod="openstack/cinder-ee0a2-scheduler-0"
Mar 13 13:12:00.014386 master-0 kubenswrapper[28149]: I0313 13:12:00.014098 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/db83bac9-e722-4e4f-aad6-eba4fdcbaedb-combined-ca-bundle\") pod \"cinder-ee0a2-scheduler-0\" (UID: \"db83bac9-e722-4e4f-aad6-eba4fdcbaedb\") " pod="openstack/cinder-ee0a2-scheduler-0"
Mar 13 13:12:00.014386 master-0 kubenswrapper[28149]: I0313 13:12:00.014145 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/db83bac9-e722-4e4f-aad6-eba4fdcbaedb-config-data\") pod \"cinder-ee0a2-scheduler-0\" (UID: \"db83bac9-e722-4e4f-aad6-eba4fdcbaedb\") " pod="openstack/cinder-ee0a2-scheduler-0"
Mar 13 13:12:00.014386 master-0 kubenswrapper[28149]: I0313 13:12:00.014266 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/db83bac9-e722-4e4f-aad6-eba4fdcbaedb-config-data-custom\") pod \"cinder-ee0a2-scheduler-0\" (UID: \"db83bac9-e722-4e4f-aad6-eba4fdcbaedb\") " pod="openstack/cinder-ee0a2-scheduler-0"
Mar 13 13:12:00.014386 master-0 kubenswrapper[28149]: I0313 13:12:00.014323 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/db83bac9-e722-4e4f-aad6-eba4fdcbaedb-etc-machine-id\") pod \"cinder-ee0a2-scheduler-0\" (UID: \"db83bac9-e722-4e4f-aad6-eba4fdcbaedb\") " pod="openstack/cinder-ee0a2-scheduler-0"
Mar 13 13:12:00.014597 master-0 kubenswrapper[28149]: I0313 13:12:00.014450 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jt5bj\" (UniqueName: \"kubernetes.io/projected/db83bac9-e722-4e4f-aad6-eba4fdcbaedb-kube-api-access-jt5bj\") pod \"cinder-ee0a2-scheduler-0\" (UID: \"db83bac9-e722-4e4f-aad6-eba4fdcbaedb\") " pod="openstack/cinder-ee0a2-scheduler-0"
Mar 13 13:12:00.121840 master-0 kubenswrapper[28149]: I0313 13:12:00.121760 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/db83bac9-e722-4e4f-aad6-eba4fdcbaedb-config-data-custom\") pod \"cinder-ee0a2-scheduler-0\" (UID: \"db83bac9-e722-4e4f-aad6-eba4fdcbaedb\") " pod="openstack/cinder-ee0a2-scheduler-0"
Mar 13 13:12:00.125654 master-0 kubenswrapper[28149]: I0313 13:12:00.125555 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/db83bac9-e722-4e4f-aad6-eba4fdcbaedb-etc-machine-id\") pod \"cinder-ee0a2-scheduler-0\" (UID: \"db83bac9-e722-4e4f-aad6-eba4fdcbaedb\") " pod="openstack/cinder-ee0a2-scheduler-0"
Mar 13 13:12:00.140250 master-0 kubenswrapper[28149]: I0313 13:12:00.131188 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/db83bac9-e722-4e4f-aad6-eba4fdcbaedb-etc-machine-id\") pod \"cinder-ee0a2-scheduler-0\" (UID: \"db83bac9-e722-4e4f-aad6-eba4fdcbaedb\") " pod="openstack/cinder-ee0a2-scheduler-0"
Mar 13 13:12:00.140250 master-0 kubenswrapper[28149]: I0313 13:12:00.132200 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jt5bj\" (UniqueName: \"kubernetes.io/projected/db83bac9-e722-4e4f-aad6-eba4fdcbaedb-kube-api-access-jt5bj\") pod \"cinder-ee0a2-scheduler-0\" (UID: \"db83bac9-e722-4e4f-aad6-eba4fdcbaedb\") " pod="openstack/cinder-ee0a2-scheduler-0"
Mar 13 13:12:00.140250 master-0 kubenswrapper[28149]: I0313 13:12:00.132535 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/db83bac9-e722-4e4f-aad6-eba4fdcbaedb-scripts\") pod \"cinder-ee0a2-scheduler-0\" (UID: \"db83bac9-e722-4e4f-aad6-eba4fdcbaedb\") " pod="openstack/cinder-ee0a2-scheduler-0"
Mar 13 13:12:00.140250 master-0 kubenswrapper[28149]: I0313 13:12:00.133290 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/db83bac9-e722-4e4f-aad6-eba4fdcbaedb-combined-ca-bundle\") pod \"cinder-ee0a2-scheduler-0\" (UID: \"db83bac9-e722-4e4f-aad6-eba4fdcbaedb\") " pod="openstack/cinder-ee0a2-scheduler-0"
Mar 13 13:12:00.140250 master-0 kubenswrapper[28149]: I0313 13:12:00.133634 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/db83bac9-e722-4e4f-aad6-eba4fdcbaedb-config-data\") pod \"cinder-ee0a2-scheduler-0\" (UID: \"db83bac9-e722-4e4f-aad6-eba4fdcbaedb\") " pod="openstack/cinder-ee0a2-scheduler-0"
Mar 13 13:12:00.413335 master-0 kubenswrapper[28149]: I0313 13:12:00.412334 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/db83bac9-e722-4e4f-aad6-eba4fdcbaedb-combined-ca-bundle\") pod \"cinder-ee0a2-scheduler-0\" (UID: \"db83bac9-e722-4e4f-aad6-eba4fdcbaedb\") " pod="openstack/cinder-ee0a2-scheduler-0"
Mar 13 13:12:00.435560 master-0 kubenswrapper[28149]: I0313 13:12:00.431947 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/db83bac9-e722-4e4f-aad6-eba4fdcbaedb-scripts\") pod \"cinder-ee0a2-scheduler-0\" (UID: \"db83bac9-e722-4e4f-aad6-eba4fdcbaedb\") " pod="openstack/cinder-ee0a2-scheduler-0"
Mar 13 13:12:00.455603 master-0 kubenswrapper[28149]: I0313 13:12:00.455451 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jt5bj\" (UniqueName: \"kubernetes.io/projected/db83bac9-e722-4e4f-aad6-eba4fdcbaedb-kube-api-access-jt5bj\") pod \"cinder-ee0a2-scheduler-0\" (UID: \"db83bac9-e722-4e4f-aad6-eba4fdcbaedb\") " pod="openstack/cinder-ee0a2-scheduler-0"
Mar 13 13:12:00.869827 master-0 kubenswrapper[28149]: I0313 13:12:00.512785 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/db83bac9-e722-4e4f-aad6-eba4fdcbaedb-config-data\") pod \"cinder-ee0a2-scheduler-0\" (UID: \"db83bac9-e722-4e4f-aad6-eba4fdcbaedb\") " pod="openstack/cinder-ee0a2-scheduler-0"
Mar 13 13:12:00.869827 master-0 kubenswrapper[28149]: I0313 13:12:00.615980 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/db83bac9-e722-4e4f-aad6-eba4fdcbaedb-config-data-custom\") pod \"cinder-ee0a2-scheduler-0\" (UID: \"db83bac9-e722-4e4f-aad6-eba4fdcbaedb\") " pod="openstack/cinder-ee0a2-scheduler-0"
Mar 13 13:12:00.923212 master-0 kubenswrapper[28149]: I0313 13:12:00.923048 28149 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-ee0a2-scheduler-0"
Mar 13 13:12:01.022612 master-0 kubenswrapper[28149]: I0313 13:12:00.937555 28149 generic.go:334] "Generic (PLEG): container finished" podID="b574739d-3b02-4434-a7cd-c75404b73fd3" containerID="74f01847610f8475403d891f52537cab7feae2ea704616a6818bc2120a91880c" exitCode=0
Mar 13 13:12:01.022612 master-0 kubenswrapper[28149]: I0313 13:12:01.020266 28149 generic.go:334] "Generic (PLEG): container finished" podID="ea4701c8-792f-4a27-948e-cc2d36ad5739" containerID="d83b312654b71814018bf82aefcc44782c1b8a50ca051dac7c42951c264b572f" exitCode=143
Mar 13 13:12:01.307849 master-0 kubenswrapper[28149]: I0313 13:12:01.307722 28149 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-67b494447c-js6kq" podStartSLOduration=8.30618504 podStartE2EDuration="8.30618504s" podCreationTimestamp="2026-03-13 13:11:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 13:12:01.006610391 +0000 UTC m=+1094.660075570" watchObservedRunningTime="2026-03-13 13:12:01.30618504 +0000 UTC m=+1094.959650199"
Mar 13 13:12:01.312129 master-0 kubenswrapper[28149]: I0313 13:12:01.312073 28149 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8c7dd334-6af4-4528-9a21-d51e946a555b" path="/var/lib/kubelet/pods/8c7dd334-6af4-4528-9a21-d51e946a555b/volumes"
Mar 13 13:12:01.313793 master-0 kubenswrapper[28149]: I0313 13:12:01.313320 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-inspector-0a87-account-create-update-8qh6t" event={"ID":"b574739d-3b02-4434-a7cd-c75404b73fd3","Type":"ContainerDied","Data":"74f01847610f8475403d891f52537cab7feae2ea704616a6818bc2120a91880c"}
Mar 13 13:12:01.313793 master-0 kubenswrapper[28149]: I0313 13:12:01.313378 28149 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-67b494447c-js6kq"
Mar 13 13:12:01.313793 master-0 kubenswrapper[28149]: I0313 13:12:01.313395 28149 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-ee0a2-api-0"
Mar 13 13:12:01.313793 master-0 kubenswrapper[28149]: I0313 13:12:01.313405 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-67b494447c-js6kq" event={"ID":"e3d4527d-7eb8-488f-bcc7-8c6bd2be3351","Type":"ContainerStarted","Data":"41e944b5a3a665882314566d7961c0aceb5f3ff1a1b5eccece67f417667863d7"}
Mar 13 13:12:01.313793 master-0 kubenswrapper[28149]: I0313 13:12:01.313419 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-ee0a2-api-0" event={"ID":"7a950989-4934-4057-8476-2476cd99542e","Type":"ContainerStarted","Data":"ab1926851afbfdc418777de55c79072dff61e90d6e19340745be5e89bf0a3230"}
Mar 13 13:12:01.313793 master-0 kubenswrapper[28149]: I0313 13:12:01.313428 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-699d7776-9kkdk" event={"ID":"ea4701c8-792f-4a27-948e-cc2d36ad5739","Type":"ContainerDied","Data":"d83b312654b71814018bf82aefcc44782c1b8a50ca051dac7c42951c264b572f"}
Mar 13 13:12:01.376192 master-0 kubenswrapper[28149]: I0313 13:12:01.375197 28149 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-ee0a2-api-0" podStartSLOduration=11.375174278 podStartE2EDuration="11.375174278s" podCreationTimestamp="2026-03-13 13:11:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 13:12:01.282478242 +0000 UTC m=+1094.935943401" watchObservedRunningTime="2026-03-13 13:12:01.375174278 +0000 UTC m=+1095.028639437"
Mar 13 13:12:01.796601 master-0 kubenswrapper[28149]: E0313 13:12:01.785468 28149 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures:
[\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda08dce85_d5c4_44e4_a3b0_e404c53b62f2.slice/crio-7bb569460a6f2eb1cef8e8cce8c284a41fd3b9d31bee3e4adf73f698d6c89770.scope\": RecentStats: unable to find data in memory cache]" Mar 13 13:12:01.819681 master-0 kubenswrapper[28149]: I0313 13:12:01.819622 28149 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ironic-conductor-0"] Mar 13 13:12:01.825245 master-0 kubenswrapper[28149]: I0313 13:12:01.825180 28149 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ironic-conductor-0" Mar 13 13:12:01.841868 master-0 kubenswrapper[28149]: I0313 13:12:01.841827 28149 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ironic-conductor-scripts" Mar 13 13:12:01.842921 master-0 kubenswrapper[28149]: I0313 13:12:01.842601 28149 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ironic-conductor-config-data" Mar 13 13:12:01.855302 master-0 kubenswrapper[28149]: I0313 13:12:01.853658 28149 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ironic-conductor-0"] Mar 13 13:12:01.972175 master-0 kubenswrapper[28149]: I0313 13:12:01.971986 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-podinfo\" (UniqueName: \"kubernetes.io/downward-api/8fdaa161-cf3d-465a-8e70-c2af73f96711-etc-podinfo\") pod \"ironic-conductor-0\" (UID: \"8fdaa161-cf3d-465a-8e70-c2af73f96711\") " pod="openstack/ironic-conductor-0" Mar 13 13:12:01.972175 master-0 kubenswrapper[28149]: I0313 13:12:01.972062 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/8fdaa161-cf3d-465a-8e70-c2af73f96711-config-data-custom\") pod \"ironic-conductor-0\" (UID: \"8fdaa161-cf3d-465a-8e70-c2af73f96711\") " pod="openstack/ironic-conductor-0" Mar 13 13:12:01.973096 master-0 kubenswrapper[28149]: I0313 
13:12:01.972140 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8fdaa161-cf3d-465a-8e70-c2af73f96711-scripts\") pod \"ironic-conductor-0\" (UID: \"8fdaa161-cf3d-465a-8e70-c2af73f96711\") " pod="openstack/ironic-conductor-0" Mar 13 13:12:01.973096 master-0 kubenswrapper[28149]: I0313 13:12:01.972578 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tn2wg\" (UniqueName: \"kubernetes.io/projected/8fdaa161-cf3d-465a-8e70-c2af73f96711-kube-api-access-tn2wg\") pod \"ironic-conductor-0\" (UID: \"8fdaa161-cf3d-465a-8e70-c2af73f96711\") " pod="openstack/ironic-conductor-0" Mar 13 13:12:01.973096 master-0 kubenswrapper[28149]: I0313 13:12:01.972608 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8fdaa161-cf3d-465a-8e70-c2af73f96711-config-data\") pod \"ironic-conductor-0\" (UID: \"8fdaa161-cf3d-465a-8e70-c2af73f96711\") " pod="openstack/ironic-conductor-0" Mar 13 13:12:01.973096 master-0 kubenswrapper[28149]: I0313 13:12:01.972635 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8fdaa161-cf3d-465a-8e70-c2af73f96711-combined-ca-bundle\") pod \"ironic-conductor-0\" (UID: \"8fdaa161-cf3d-465a-8e70-c2af73f96711\") " pod="openstack/ironic-conductor-0" Mar 13 13:12:01.973096 master-0 kubenswrapper[28149]: I0313 13:12:01.972719 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/8fdaa161-cf3d-465a-8e70-c2af73f96711-config-data-merged\") pod \"ironic-conductor-0\" (UID: \"8fdaa161-cf3d-465a-8e70-c2af73f96711\") " pod="openstack/ironic-conductor-0" Mar 13 13:12:01.973096 master-0 
kubenswrapper[28149]: I0313 13:12:01.972776 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-6afc6f24-0a1e-4202-9272-4fc5dfde8266\" (UniqueName: \"kubernetes.io/csi/topolvm.io^f200b0f5-b391-4e00-afc6-41cbb2815ca5\") pod \"ironic-conductor-0\" (UID: \"8fdaa161-cf3d-465a-8e70-c2af73f96711\") " pod="openstack/ironic-conductor-0" Mar 13 13:12:02.072261 master-0 kubenswrapper[28149]: I0313 13:12:02.072128 28149 generic.go:334] "Generic (PLEG): container finished" podID="a08dce85-d5c4-44e4-a3b0-e404c53b62f2" containerID="7bb569460a6f2eb1cef8e8cce8c284a41fd3b9d31bee3e4adf73f698d6c89770" exitCode=0 Mar 13 13:12:02.072750 master-0 kubenswrapper[28149]: I0313 13:12:02.072721 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-ee0a2-volume-lvm-iscsi-0" event={"ID":"a08dce85-d5c4-44e4-a3b0-e404c53b62f2","Type":"ContainerDied","Data":"7bb569460a6f2eb1cef8e8cce8c284a41fd3b9d31bee3e4adf73f698d6c89770"} Mar 13 13:12:02.092840 master-0 kubenswrapper[28149]: I0313 13:12:02.092807 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-podinfo\" (UniqueName: \"kubernetes.io/downward-api/8fdaa161-cf3d-465a-8e70-c2af73f96711-etc-podinfo\") pod \"ironic-conductor-0\" (UID: \"8fdaa161-cf3d-465a-8e70-c2af73f96711\") " pod="openstack/ironic-conductor-0" Mar 13 13:12:02.093492 master-0 kubenswrapper[28149]: I0313 13:12:02.093474 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/8fdaa161-cf3d-465a-8e70-c2af73f96711-config-data-custom\") pod \"ironic-conductor-0\" (UID: \"8fdaa161-cf3d-465a-8e70-c2af73f96711\") " pod="openstack/ironic-conductor-0" Mar 13 13:12:02.094196 master-0 kubenswrapper[28149]: I0313 13:12:02.094177 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/8fdaa161-cf3d-465a-8e70-c2af73f96711-scripts\") pod \"ironic-conductor-0\" (UID: \"8fdaa161-cf3d-465a-8e70-c2af73f96711\") " pod="openstack/ironic-conductor-0" Mar 13 13:12:02.094347 master-0 kubenswrapper[28149]: I0313 13:12:02.094331 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tn2wg\" (UniqueName: \"kubernetes.io/projected/8fdaa161-cf3d-465a-8e70-c2af73f96711-kube-api-access-tn2wg\") pod \"ironic-conductor-0\" (UID: \"8fdaa161-cf3d-465a-8e70-c2af73f96711\") " pod="openstack/ironic-conductor-0" Mar 13 13:12:02.094484 master-0 kubenswrapper[28149]: I0313 13:12:02.094472 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8fdaa161-cf3d-465a-8e70-c2af73f96711-config-data\") pod \"ironic-conductor-0\" (UID: \"8fdaa161-cf3d-465a-8e70-c2af73f96711\") " pod="openstack/ironic-conductor-0" Mar 13 13:12:02.094606 master-0 kubenswrapper[28149]: I0313 13:12:02.094580 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8fdaa161-cf3d-465a-8e70-c2af73f96711-combined-ca-bundle\") pod \"ironic-conductor-0\" (UID: \"8fdaa161-cf3d-465a-8e70-c2af73f96711\") " pod="openstack/ironic-conductor-0" Mar 13 13:12:02.094853 master-0 kubenswrapper[28149]: I0313 13:12:02.094838 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/8fdaa161-cf3d-465a-8e70-c2af73f96711-config-data-merged\") pod \"ironic-conductor-0\" (UID: \"8fdaa161-cf3d-465a-8e70-c2af73f96711\") " pod="openstack/ironic-conductor-0" Mar 13 13:12:02.095063 master-0 kubenswrapper[28149]: I0313 13:12:02.095045 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-6afc6f24-0a1e-4202-9272-4fc5dfde8266\" (UniqueName: 
\"kubernetes.io/csi/topolvm.io^f200b0f5-b391-4e00-afc6-41cbb2815ca5\") pod \"ironic-conductor-0\" (UID: \"8fdaa161-cf3d-465a-8e70-c2af73f96711\") " pod="openstack/ironic-conductor-0" Mar 13 13:12:02.099265 master-0 kubenswrapper[28149]: I0313 13:12:02.099226 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-podinfo\" (UniqueName: \"kubernetes.io/downward-api/8fdaa161-cf3d-465a-8e70-c2af73f96711-etc-podinfo\") pod \"ironic-conductor-0\" (UID: \"8fdaa161-cf3d-465a-8e70-c2af73f96711\") " pod="openstack/ironic-conductor-0" Mar 13 13:12:02.122366 master-0 kubenswrapper[28149]: I0313 13:12:02.109093 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/8fdaa161-cf3d-465a-8e70-c2af73f96711-config-data-merged\") pod \"ironic-conductor-0\" (UID: \"8fdaa161-cf3d-465a-8e70-c2af73f96711\") " pod="openstack/ironic-conductor-0" Mar 13 13:12:02.122940 master-0 kubenswrapper[28149]: I0313 13:12:02.122595 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8fdaa161-cf3d-465a-8e70-c2af73f96711-scripts\") pod \"ironic-conductor-0\" (UID: \"8fdaa161-cf3d-465a-8e70-c2af73f96711\") " pod="openstack/ironic-conductor-0" Mar 13 13:12:02.123812 master-0 kubenswrapper[28149]: I0313 13:12:02.123408 28149 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Mar 13 13:12:02.123812 master-0 kubenswrapper[28149]: I0313 13:12:02.123448 28149 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-6afc6f24-0a1e-4202-9272-4fc5dfde8266\" (UniqueName: \"kubernetes.io/csi/topolvm.io^f200b0f5-b391-4e00-afc6-41cbb2815ca5\") pod \"ironic-conductor-0\" (UID: \"8fdaa161-cf3d-465a-8e70-c2af73f96711\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/topolvm.io/efe702582dd9e8b9b51f8b2dd8b4d9b11a8adfdafd1e537f93931824aa78af7d/globalmount\"" pod="openstack/ironic-conductor-0" Mar 13 13:12:02.126193 master-0 kubenswrapper[28149]: I0313 13:12:02.126166 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8fdaa161-cf3d-465a-8e70-c2af73f96711-combined-ca-bundle\") pod \"ironic-conductor-0\" (UID: \"8fdaa161-cf3d-465a-8e70-c2af73f96711\") " pod="openstack/ironic-conductor-0" Mar 13 13:12:02.128573 master-0 kubenswrapper[28149]: I0313 13:12:02.128544 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8fdaa161-cf3d-465a-8e70-c2af73f96711-config-data\") pod \"ironic-conductor-0\" (UID: \"8fdaa161-cf3d-465a-8e70-c2af73f96711\") " pod="openstack/ironic-conductor-0" Mar 13 13:12:02.395308 master-0 kubenswrapper[28149]: I0313 13:12:02.395036 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/8fdaa161-cf3d-465a-8e70-c2af73f96711-config-data-custom\") pod \"ironic-conductor-0\" (UID: \"8fdaa161-cf3d-465a-8e70-c2af73f96711\") " pod="openstack/ironic-conductor-0" Mar 13 13:12:02.441654 master-0 kubenswrapper[28149]: I0313 13:12:02.441592 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tn2wg\" (UniqueName: \"kubernetes.io/projected/8fdaa161-cf3d-465a-8e70-c2af73f96711-kube-api-access-tn2wg\") pod \"ironic-conductor-0\" (UID: 
\"8fdaa161-cf3d-465a-8e70-c2af73f96711\") " pod="openstack/ironic-conductor-0" Mar 13 13:12:03.106520 master-0 kubenswrapper[28149]: I0313 13:12:03.105733 28149 generic.go:334] "Generic (PLEG): container finished" podID="5542dffa-edbf-4133-b7cc-2631121726dc" containerID="727d8e30c5ed7c69b55a98f2363e3c8df4840dde2b23de1158ff5b34eb4d3617" exitCode=0 Mar 13 13:12:03.287483 master-0 kubenswrapper[28149]: I0313 13:12:03.283761 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-ee0a2-backup-0" event={"ID":"5542dffa-edbf-4133-b7cc-2631121726dc","Type":"ContainerDied","Data":"727d8e30c5ed7c69b55a98f2363e3c8df4840dde2b23de1158ff5b34eb4d3617"} Mar 13 13:12:03.287483 master-0 kubenswrapper[28149]: I0313 13:12:03.283823 28149 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-ee0a2-scheduler-0"] Mar 13 13:12:03.287483 master-0 kubenswrapper[28149]: I0313 13:12:03.283841 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-ee0a2-volume-lvm-iscsi-0" event={"ID":"a08dce85-d5c4-44e4-a3b0-e404c53b62f2","Type":"ContainerDied","Data":"d71a32d6e9f43a8aafe5e50b1c5491f10c5a6b159510944541ceeef269227306"} Mar 13 13:12:03.287483 master-0 kubenswrapper[28149]: I0313 13:12:03.283855 28149 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d71a32d6e9f43a8aafe5e50b1c5491f10c5a6b159510944541ceeef269227306" Mar 13 13:12:03.287483 master-0 kubenswrapper[28149]: I0313 13:12:03.283865 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-inspector-db-create-vdc8v" event={"ID":"50829c53-48ab-48a0-a68a-655b740ac823","Type":"ContainerDied","Data":"574ff0a472dbbb11c12271648ce872ad524da18b99776decb15d31096f860a89"} Mar 13 13:12:03.287483 master-0 kubenswrapper[28149]: I0313 13:12:03.283890 28149 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="574ff0a472dbbb11c12271648ce872ad524da18b99776decb15d31096f860a89" Mar 13 13:12:04.074283 master-0 
kubenswrapper[28149]: I0313 13:12:04.073888 28149 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ironic-inspector-db-create-vdc8v" Mar 13 13:12:04.076536 master-0 kubenswrapper[28149]: I0313 13:12:04.076470 28149 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-ee0a2-volume-lvm-iscsi-0" Mar 13 13:12:04.194313 master-0 kubenswrapper[28149]: I0313 13:12:04.177726 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-ee0a2-scheduler-0" event={"ID":"db83bac9-e722-4e4f-aad6-eba4fdcbaedb","Type":"ContainerStarted","Data":"c765aa39c4452f6d238cee76ab6587eb8244899ad9fe6d0415cf176611288c33"} Mar 13 13:12:04.196916 master-0 kubenswrapper[28149]: I0313 13:12:04.196876 28149 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-ee0a2-volume-lvm-iscsi-0" Mar 13 13:12:04.198022 master-0 kubenswrapper[28149]: I0313 13:12:04.197983 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-inspector-0a87-account-create-update-8qh6t" event={"ID":"b574739d-3b02-4434-a7cd-c75404b73fd3","Type":"ContainerDied","Data":"70bfb2b84c7e92744ed11ea7347cc414aa4a2a8e1df550010c923e9d46be7f66"} Mar 13 13:12:04.198123 master-0 kubenswrapper[28149]: I0313 13:12:04.198022 28149 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="70bfb2b84c7e92744ed11ea7347cc414aa4a2a8e1df550010c923e9d46be7f66" Mar 13 13:12:04.198123 master-0 kubenswrapper[28149]: I0313 13:12:04.198073 28149 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ironic-inspector-db-create-vdc8v" Mar 13 13:12:04.210756 master-0 kubenswrapper[28149]: I0313 13:12:04.203756 28149 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/keystone-8b4477b4f-94nmj" Mar 13 13:12:04.256311 master-0 kubenswrapper[28149]: I0313 13:12:04.256260 28149 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/a08dce85-d5c4-44e4-a3b0-e404c53b62f2-dev\") pod \"a08dce85-d5c4-44e4-a3b0-e404c53b62f2\" (UID: \"a08dce85-d5c4-44e4-a3b0-e404c53b62f2\") " Mar 13 13:12:04.256311 master-0 kubenswrapper[28149]: I0313 13:12:04.256319 28149 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a08dce85-d5c4-44e4-a3b0-e404c53b62f2-lib-modules\") pod \"a08dce85-d5c4-44e4-a3b0-e404c53b62f2\" (UID: \"a08dce85-d5c4-44e4-a3b0-e404c53b62f2\") " Mar 13 13:12:04.256760 master-0 kubenswrapper[28149]: I0313 13:12:04.256363 28149 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a08dce85-d5c4-44e4-a3b0-e404c53b62f2-scripts\") pod \"a08dce85-d5c4-44e4-a3b0-e404c53b62f2\" (UID: \"a08dce85-d5c4-44e4-a3b0-e404c53b62f2\") " Mar 13 13:12:04.256760 master-0 kubenswrapper[28149]: I0313 13:12:04.256431 28149 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/a08dce85-d5c4-44e4-a3b0-e404c53b62f2-var-locks-brick\") pod \"a08dce85-d5c4-44e4-a3b0-e404c53b62f2\" (UID: \"a08dce85-d5c4-44e4-a3b0-e404c53b62f2\") " Mar 13 13:12:04.256760 master-0 kubenswrapper[28149]: I0313 13:12:04.256461 28149 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/a08dce85-d5c4-44e4-a3b0-e404c53b62f2-etc-iscsi\") pod 
\"a08dce85-d5c4-44e4-a3b0-e404c53b62f2\" (UID: \"a08dce85-d5c4-44e4-a3b0-e404c53b62f2\") " Mar 13 13:12:04.256760 master-0 kubenswrapper[28149]: I0313 13:12:04.256500 28149 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lhvzh\" (UniqueName: \"kubernetes.io/projected/50829c53-48ab-48a0-a68a-655b740ac823-kube-api-access-lhvzh\") pod \"50829c53-48ab-48a0-a68a-655b740ac823\" (UID: \"50829c53-48ab-48a0-a68a-655b740ac823\") " Mar 13 13:12:04.256760 master-0 kubenswrapper[28149]: I0313 13:12:04.256526 28149 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/a08dce85-d5c4-44e4-a3b0-e404c53b62f2-var-lib-cinder\") pod \"a08dce85-d5c4-44e4-a3b0-e404c53b62f2\" (UID: \"a08dce85-d5c4-44e4-a3b0-e404c53b62f2\") " Mar 13 13:12:04.256760 master-0 kubenswrapper[28149]: I0313 13:12:04.256576 28149 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a08dce85-d5c4-44e4-a3b0-e404c53b62f2-config-data-custom\") pod \"a08dce85-d5c4-44e4-a3b0-e404c53b62f2\" (UID: \"a08dce85-d5c4-44e4-a3b0-e404c53b62f2\") " Mar 13 13:12:04.256760 master-0 kubenswrapper[28149]: I0313 13:12:04.256623 28149 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a08dce85-d5c4-44e4-a3b0-e404c53b62f2-combined-ca-bundle\") pod \"a08dce85-d5c4-44e4-a3b0-e404c53b62f2\" (UID: \"a08dce85-d5c4-44e4-a3b0-e404c53b62f2\") " Mar 13 13:12:04.256760 master-0 kubenswrapper[28149]: I0313 13:12:04.256692 28149 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/a08dce85-d5c4-44e4-a3b0-e404c53b62f2-etc-machine-id\") pod \"a08dce85-d5c4-44e4-a3b0-e404c53b62f2\" (UID: \"a08dce85-d5c4-44e4-a3b0-e404c53b62f2\") " Mar 13 13:12:04.256760 master-0 
kubenswrapper[28149]: I0313 13:12:04.256720 28149 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/a08dce85-d5c4-44e4-a3b0-e404c53b62f2-sys\") pod \"a08dce85-d5c4-44e4-a3b0-e404c53b62f2\" (UID: \"a08dce85-d5c4-44e4-a3b0-e404c53b62f2\") " Mar 13 13:12:04.256760 master-0 kubenswrapper[28149]: I0313 13:12:04.256743 28149 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/a08dce85-d5c4-44e4-a3b0-e404c53b62f2-var-locks-cinder\") pod \"a08dce85-d5c4-44e4-a3b0-e404c53b62f2\" (UID: \"a08dce85-d5c4-44e4-a3b0-e404c53b62f2\") " Mar 13 13:12:04.257700 master-0 kubenswrapper[28149]: I0313 13:12:04.256797 28149 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a08dce85-d5c4-44e4-a3b0-e404c53b62f2-config-data\") pod \"a08dce85-d5c4-44e4-a3b0-e404c53b62f2\" (UID: \"a08dce85-d5c4-44e4-a3b0-e404c53b62f2\") " Mar 13 13:12:04.257700 master-0 kubenswrapper[28149]: I0313 13:12:04.256875 28149 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j8j68\" (UniqueName: \"kubernetes.io/projected/a08dce85-d5c4-44e4-a3b0-e404c53b62f2-kube-api-access-j8j68\") pod \"a08dce85-d5c4-44e4-a3b0-e404c53b62f2\" (UID: \"a08dce85-d5c4-44e4-a3b0-e404c53b62f2\") " Mar 13 13:12:04.257700 master-0 kubenswrapper[28149]: I0313 13:12:04.256900 28149 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/a08dce85-d5c4-44e4-a3b0-e404c53b62f2-run\") pod \"a08dce85-d5c4-44e4-a3b0-e404c53b62f2\" (UID: \"a08dce85-d5c4-44e4-a3b0-e404c53b62f2\") " Mar 13 13:12:04.257700 master-0 kubenswrapper[28149]: I0313 13:12:04.256924 28149 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/50829c53-48ab-48a0-a68a-655b740ac823-operator-scripts\") pod \"50829c53-48ab-48a0-a68a-655b740ac823\" (UID: \"50829c53-48ab-48a0-a68a-655b740ac823\") " Mar 13 13:12:04.257700 master-0 kubenswrapper[28149]: I0313 13:12:04.256992 28149 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/a08dce85-d5c4-44e4-a3b0-e404c53b62f2-etc-nvme\") pod \"a08dce85-d5c4-44e4-a3b0-e404c53b62f2\" (UID: \"a08dce85-d5c4-44e4-a3b0-e404c53b62f2\") " Mar 13 13:12:04.257700 master-0 kubenswrapper[28149]: I0313 13:12:04.257611 28149 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a08dce85-d5c4-44e4-a3b0-e404c53b62f2-etc-nvme" (OuterVolumeSpecName: "etc-nvme") pod "a08dce85-d5c4-44e4-a3b0-e404c53b62f2" (UID: "a08dce85-d5c4-44e4-a3b0-e404c53b62f2"). InnerVolumeSpecName "etc-nvme". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 13 13:12:04.257700 master-0 kubenswrapper[28149]: I0313 13:12:04.257661 28149 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a08dce85-d5c4-44e4-a3b0-e404c53b62f2-dev" (OuterVolumeSpecName: "dev") pod "a08dce85-d5c4-44e4-a3b0-e404c53b62f2" (UID: "a08dce85-d5c4-44e4-a3b0-e404c53b62f2"). InnerVolumeSpecName "dev". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 13 13:12:04.257700 master-0 kubenswrapper[28149]: I0313 13:12:04.257680 28149 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a08dce85-d5c4-44e4-a3b0-e404c53b62f2-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "a08dce85-d5c4-44e4-a3b0-e404c53b62f2" (UID: "a08dce85-d5c4-44e4-a3b0-e404c53b62f2"). InnerVolumeSpecName "lib-modules". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 13 13:12:04.262945 master-0 kubenswrapper[28149]: I0313 13:12:04.262877 28149 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a08dce85-d5c4-44e4-a3b0-e404c53b62f2-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "a08dce85-d5c4-44e4-a3b0-e404c53b62f2" (UID: "a08dce85-d5c4-44e4-a3b0-e404c53b62f2"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 13 13:12:04.262945 master-0 kubenswrapper[28149]: I0313 13:12:04.262941 28149 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a08dce85-d5c4-44e4-a3b0-e404c53b62f2-var-locks-brick" (OuterVolumeSpecName: "var-locks-brick") pod "a08dce85-d5c4-44e4-a3b0-e404c53b62f2" (UID: "a08dce85-d5c4-44e4-a3b0-e404c53b62f2"). InnerVolumeSpecName "var-locks-brick". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 13 13:12:04.263273 master-0 kubenswrapper[28149]: I0313 13:12:04.262963 28149 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a08dce85-d5c4-44e4-a3b0-e404c53b62f2-etc-iscsi" (OuterVolumeSpecName: "etc-iscsi") pod "a08dce85-d5c4-44e4-a3b0-e404c53b62f2" (UID: "a08dce85-d5c4-44e4-a3b0-e404c53b62f2"). InnerVolumeSpecName "etc-iscsi". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 13 13:12:04.263273 master-0 kubenswrapper[28149]: I0313 13:12:04.263105 28149 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a08dce85-d5c4-44e4-a3b0-e404c53b62f2-scripts" (OuterVolumeSpecName: "scripts") pod "a08dce85-d5c4-44e4-a3b0-e404c53b62f2" (UID: "a08dce85-d5c4-44e4-a3b0-e404c53b62f2"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 13:12:04.263273 master-0 kubenswrapper[28149]: I0313 13:12:04.263217 28149 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a08dce85-d5c4-44e4-a3b0-e404c53b62f2-sys" (OuterVolumeSpecName: "sys") pod "a08dce85-d5c4-44e4-a3b0-e404c53b62f2" (UID: "a08dce85-d5c4-44e4-a3b0-e404c53b62f2"). InnerVolumeSpecName "sys". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 13 13:12:04.263273 master-0 kubenswrapper[28149]: I0313 13:12:04.263256 28149 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a08dce85-d5c4-44e4-a3b0-e404c53b62f2-var-locks-cinder" (OuterVolumeSpecName: "var-locks-cinder") pod "a08dce85-d5c4-44e4-a3b0-e404c53b62f2" (UID: "a08dce85-d5c4-44e4-a3b0-e404c53b62f2"). InnerVolumeSpecName "var-locks-cinder". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 13 13:12:04.267946 master-0 kubenswrapper[28149]: I0313 13:12:04.267895 28149 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a08dce85-d5c4-44e4-a3b0-e404c53b62f2-var-lib-cinder" (OuterVolumeSpecName: "var-lib-cinder") pod "a08dce85-d5c4-44e4-a3b0-e404c53b62f2" (UID: "a08dce85-d5c4-44e4-a3b0-e404c53b62f2"). InnerVolumeSpecName "var-lib-cinder". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 13 13:12:04.272174 master-0 kubenswrapper[28149]: I0313 13:12:04.270853 28149 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/50829c53-48ab-48a0-a68a-655b740ac823-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "50829c53-48ab-48a0-a68a-655b740ac823" (UID: "50829c53-48ab-48a0-a68a-655b740ac823"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 13 13:12:04.272174 master-0 kubenswrapper[28149]: I0313 13:12:04.271783 28149 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a08dce85-d5c4-44e4-a3b0-e404c53b62f2-run" (OuterVolumeSpecName: "run") pod "a08dce85-d5c4-44e4-a3b0-e404c53b62f2" (UID: "a08dce85-d5c4-44e4-a3b0-e404c53b62f2"). InnerVolumeSpecName "run". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 13 13:12:04.276463 master-0 kubenswrapper[28149]: I0313 13:12:04.275738 28149 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ironic-inspector-0a87-account-create-update-8qh6t"
Mar 13 13:12:04.358537 master-0 kubenswrapper[28149]: I0313 13:12:04.358119 28149 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/50829c53-48ab-48a0-a68a-655b740ac823-kube-api-access-lhvzh" (OuterVolumeSpecName: "kube-api-access-lhvzh") pod "50829c53-48ab-48a0-a68a-655b740ac823" (UID: "50829c53-48ab-48a0-a68a-655b740ac823"). InnerVolumeSpecName "kube-api-access-lhvzh". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 13 13:12:04.358537 master-0 kubenswrapper[28149]: I0313 13:12:04.358398 28149 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lhvzh\" (UniqueName: \"kubernetes.io/projected/50829c53-48ab-48a0-a68a-655b740ac823-kube-api-access-lhvzh\") pod \"50829c53-48ab-48a0-a68a-655b740ac823\" (UID: \"50829c53-48ab-48a0-a68a-655b740ac823\") "
Mar 13 13:12:04.358849 master-0 kubenswrapper[28149]: W0313 13:12:04.358761 28149 empty_dir.go:500] Warning: Unmount skipped because path does not exist: /var/lib/kubelet/pods/50829c53-48ab-48a0-a68a-655b740ac823/volumes/kubernetes.io~projected/kube-api-access-lhvzh
Mar 13 13:12:04.359383 master-0 kubenswrapper[28149]: I0313 13:12:04.359329 28149 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/50829c53-48ab-48a0-a68a-655b740ac823-kube-api-access-lhvzh" (OuterVolumeSpecName: "kube-api-access-lhvzh") pod "50829c53-48ab-48a0-a68a-655b740ac823" (UID: "50829c53-48ab-48a0-a68a-655b740ac823"). InnerVolumeSpecName "kube-api-access-lhvzh". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 13 13:12:04.364271 master-0 kubenswrapper[28149]: I0313 13:12:04.364225 28149 reconciler_common.go:293] "Volume detached for volume \"run\" (UniqueName: \"kubernetes.io/host-path/a08dce85-d5c4-44e4-a3b0-e404c53b62f2-run\") on node \"master-0\" DevicePath \"\""
Mar 13 13:12:04.364571 master-0 kubenswrapper[28149]: I0313 13:12:04.364551 28149 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/50829c53-48ab-48a0-a68a-655b740ac823-operator-scripts\") on node \"master-0\" DevicePath \"\""
Mar 13 13:12:04.364690 master-0 kubenswrapper[28149]: I0313 13:12:04.364655 28149 reconciler_common.go:293] "Volume detached for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/a08dce85-d5c4-44e4-a3b0-e404c53b62f2-etc-nvme\") on node \"master-0\" DevicePath \"\""
Mar 13 13:12:04.369315 master-0 kubenswrapper[28149]: I0313 13:12:04.369280 28149 reconciler_common.go:293] "Volume detached for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/a08dce85-d5c4-44e4-a3b0-e404c53b62f2-dev\") on node \"master-0\" DevicePath \"\""
Mar 13 13:12:04.369639 master-0 kubenswrapper[28149]: I0313 13:12:04.369624 28149 reconciler_common.go:293] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a08dce85-d5c4-44e4-a3b0-e404c53b62f2-lib-modules\") on node \"master-0\" DevicePath \"\""
Mar 13 13:12:04.369814 master-0 kubenswrapper[28149]: I0313 13:12:04.369800 28149 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a08dce85-d5c4-44e4-a3b0-e404c53b62f2-scripts\") on node \"master-0\" DevicePath \"\""
Mar 13 13:12:04.370038 master-0 kubenswrapper[28149]: I0313 13:12:04.369965 28149 reconciler_common.go:293] "Volume detached for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/a08dce85-d5c4-44e4-a3b0-e404c53b62f2-var-locks-brick\") on node \"master-0\" DevicePath \"\""
Mar 13 13:12:04.370224 master-0 kubenswrapper[28149]: I0313 13:12:04.370206 28149 reconciler_common.go:293] "Volume detached for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/a08dce85-d5c4-44e4-a3b0-e404c53b62f2-etc-iscsi\") on node \"master-0\" DevicePath \"\""
Mar 13 13:12:04.370339 master-0 kubenswrapper[28149]: I0313 13:12:04.370323 28149 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lhvzh\" (UniqueName: \"kubernetes.io/projected/50829c53-48ab-48a0-a68a-655b740ac823-kube-api-access-lhvzh\") on node \"master-0\" DevicePath \"\""
Mar 13 13:12:04.370431 master-0 kubenswrapper[28149]: I0313 13:12:04.370417 28149 reconciler_common.go:293] "Volume detached for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/a08dce85-d5c4-44e4-a3b0-e404c53b62f2-var-lib-cinder\") on node \"master-0\" DevicePath \"\""
Mar 13 13:12:04.370552 master-0 kubenswrapper[28149]: I0313 13:12:04.370537 28149 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/a08dce85-d5c4-44e4-a3b0-e404c53b62f2-etc-machine-id\") on node \"master-0\" DevicePath \"\""
Mar 13 13:12:04.370931 master-0 kubenswrapper[28149]: I0313 13:12:04.370900 28149 reconciler_common.go:293] "Volume detached for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/a08dce85-d5c4-44e4-a3b0-e404c53b62f2-sys\") on node \"master-0\" DevicePath \"\""
Mar 13 13:12:04.371037 master-0 kubenswrapper[28149]: I0313 13:12:04.371017 28149 reconciler_common.go:293] "Volume detached for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/a08dce85-d5c4-44e4-a3b0-e404c53b62f2-var-locks-cinder\") on node \"master-0\" DevicePath \"\""
Mar 13 13:12:04.371159 master-0 kubenswrapper[28149]: I0313 13:12:04.369895 28149 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a08dce85-d5c4-44e4-a3b0-e404c53b62f2-kube-api-access-j8j68" (OuterVolumeSpecName: "kube-api-access-j8j68") pod "a08dce85-d5c4-44e4-a3b0-e404c53b62f2" (UID: "a08dce85-d5c4-44e4-a3b0-e404c53b62f2"). InnerVolumeSpecName "kube-api-access-j8j68". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 13 13:12:04.425333 master-0 kubenswrapper[28149]: I0313 13:12:04.420730 28149 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a08dce85-d5c4-44e4-a3b0-e404c53b62f2-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "a08dce85-d5c4-44e4-a3b0-e404c53b62f2" (UID: "a08dce85-d5c4-44e4-a3b0-e404c53b62f2"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 13 13:12:04.487190 master-0 kubenswrapper[28149]: I0313 13:12:04.486462 28149 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b574739d-3b02-4434-a7cd-c75404b73fd3-operator-scripts\") pod \"b574739d-3b02-4434-a7cd-c75404b73fd3\" (UID: \"b574739d-3b02-4434-a7cd-c75404b73fd3\") "
Mar 13 13:12:04.487190 master-0 kubenswrapper[28149]: I0313 13:12:04.486919 28149 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vphwj\" (UniqueName: \"kubernetes.io/projected/b574739d-3b02-4434-a7cd-c75404b73fd3-kube-api-access-vphwj\") pod \"b574739d-3b02-4434-a7cd-c75404b73fd3\" (UID: \"b574739d-3b02-4434-a7cd-c75404b73fd3\") "
Mar 13 13:12:04.488090 master-0 kubenswrapper[28149]: I0313 13:12:04.488037 28149 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b574739d-3b02-4434-a7cd-c75404b73fd3-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "b574739d-3b02-4434-a7cd-c75404b73fd3" (UID: "b574739d-3b02-4434-a7cd-c75404b73fd3"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 13 13:12:04.511281 master-0 kubenswrapper[28149]: I0313 13:12:04.503524 28149 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b574739d-3b02-4434-a7cd-c75404b73fd3-operator-scripts\") on node \"master-0\" DevicePath \"\""
Mar 13 13:12:04.511281 master-0 kubenswrapper[28149]: I0313 13:12:04.503593 28149 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j8j68\" (UniqueName: \"kubernetes.io/projected/a08dce85-d5c4-44e4-a3b0-e404c53b62f2-kube-api-access-j8j68\") on node \"master-0\" DevicePath \"\""
Mar 13 13:12:04.511281 master-0 kubenswrapper[28149]: I0313 13:12:04.503614 28149 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a08dce85-d5c4-44e4-a3b0-e404c53b62f2-config-data-custom\") on node \"master-0\" DevicePath \"\""
Mar 13 13:12:04.517801 master-0 kubenswrapper[28149]: I0313 13:12:04.512034 28149 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b574739d-3b02-4434-a7cd-c75404b73fd3-kube-api-access-vphwj" (OuterVolumeSpecName: "kube-api-access-vphwj") pod "b574739d-3b02-4434-a7cd-c75404b73fd3" (UID: "b574739d-3b02-4434-a7cd-c75404b73fd3"). InnerVolumeSpecName "kube-api-access-vphwj". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 13 13:12:04.536810 master-0 kubenswrapper[28149]: I0313 13:12:04.536672 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-6afc6f24-0a1e-4202-9272-4fc5dfde8266\" (UniqueName: \"kubernetes.io/csi/topolvm.io^f200b0f5-b391-4e00-afc6-41cbb2815ca5\") pod \"ironic-conductor-0\" (UID: \"8fdaa161-cf3d-465a-8e70-c2af73f96711\") " pod="openstack/ironic-conductor-0"
Mar 13 13:12:04.540613 master-0 kubenswrapper[28149]: I0313 13:12:04.540428 28149 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a08dce85-d5c4-44e4-a3b0-e404c53b62f2-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a08dce85-d5c4-44e4-a3b0-e404c53b62f2" (UID: "a08dce85-d5c4-44e4-a3b0-e404c53b62f2"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 13 13:12:04.541702 master-0 kubenswrapper[28149]: I0313 13:12:04.541555 28149 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-ee0a2-backup-0"
Mar 13 13:12:04.614591 master-0 kubenswrapper[28149]: I0313 13:12:04.613874 28149 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vphwj\" (UniqueName: \"kubernetes.io/projected/b574739d-3b02-4434-a7cd-c75404b73fd3-kube-api-access-vphwj\") on node \"master-0\" DevicePath \"\""
Mar 13 13:12:04.614591 master-0 kubenswrapper[28149]: I0313 13:12:04.613908 28149 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a08dce85-d5c4-44e4-a3b0-e404c53b62f2-combined-ca-bundle\") on node \"master-0\" DevicePath \"\""
Mar 13 13:12:04.622382 master-0 kubenswrapper[28149]: I0313 13:12:04.621749 28149 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ironic-conductor-0"
Mar 13 13:12:04.644366 master-0 kubenswrapper[28149]: I0313 13:12:04.642617 28149 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-768869957b-ffkcl"
Mar 13 13:12:04.650492 master-0 kubenswrapper[28149]: I0313 13:12:04.650055 28149 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a08dce85-d5c4-44e4-a3b0-e404c53b62f2-config-data" (OuterVolumeSpecName: "config-data") pod "a08dce85-d5c4-44e4-a3b0-e404c53b62f2" (UID: "a08dce85-d5c4-44e4-a3b0-e404c53b62f2"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 13 13:12:04.672971 master-0 kubenswrapper[28149]: I0313 13:12:04.671976 28149 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ironic-58d4ff778c-wbbt4"]
Mar 13 13:12:04.673288 master-0 kubenswrapper[28149]: E0313 13:12:04.673240 28149 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b574739d-3b02-4434-a7cd-c75404b73fd3" containerName="mariadb-account-create-update"
Mar 13 13:12:04.673288 master-0 kubenswrapper[28149]: I0313 13:12:04.673280 28149 state_mem.go:107] "Deleted CPUSet assignment" podUID="b574739d-3b02-4434-a7cd-c75404b73fd3" containerName="mariadb-account-create-update"
Mar 13 13:12:04.673418 master-0 kubenswrapper[28149]: E0313 13:12:04.673304 28149 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5542dffa-edbf-4133-b7cc-2631121726dc" containerName="cinder-backup"
Mar 13 13:12:04.673418 master-0 kubenswrapper[28149]: I0313 13:12:04.673312 28149 state_mem.go:107] "Deleted CPUSet assignment" podUID="5542dffa-edbf-4133-b7cc-2631121726dc" containerName="cinder-backup"
Mar 13 13:12:04.673418 master-0 kubenswrapper[28149]: E0313 13:12:04.673347 28149 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a08dce85-d5c4-44e4-a3b0-e404c53b62f2" containerName="cinder-volume"
Mar 13 13:12:04.673418 master-0 kubenswrapper[28149]: I0313 13:12:04.673354 28149 state_mem.go:107] "Deleted CPUSet assignment" podUID="a08dce85-d5c4-44e4-a3b0-e404c53b62f2" containerName="cinder-volume"
Mar 13 13:12:04.673418 master-0 kubenswrapper[28149]: E0313 13:12:04.673386 28149 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5542dffa-edbf-4133-b7cc-2631121726dc" containerName="probe"
Mar 13 13:12:04.673418 master-0 kubenswrapper[28149]: I0313 13:12:04.673392 28149 state_mem.go:107] "Deleted CPUSet assignment" podUID="5542dffa-edbf-4133-b7cc-2631121726dc" containerName="probe"
Mar 13 13:12:04.673418 master-0 kubenswrapper[28149]: E0313 13:12:04.673411 28149 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="50829c53-48ab-48a0-a68a-655b740ac823" containerName="mariadb-database-create"
Mar 13 13:12:04.673418 master-0 kubenswrapper[28149]: I0313 13:12:04.673417 28149 state_mem.go:107] "Deleted CPUSet assignment" podUID="50829c53-48ab-48a0-a68a-655b740ac823" containerName="mariadb-database-create"
Mar 13 13:12:04.673418 master-0 kubenswrapper[28149]: E0313 13:12:04.673427 28149 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a08dce85-d5c4-44e4-a3b0-e404c53b62f2" containerName="probe"
Mar 13 13:12:04.673757 master-0 kubenswrapper[28149]: I0313 13:12:04.673436 28149 state_mem.go:107] "Deleted CPUSet assignment" podUID="a08dce85-d5c4-44e4-a3b0-e404c53b62f2" containerName="probe"
Mar 13 13:12:04.673757 master-0 kubenswrapper[28149]: I0313 13:12:04.673686 28149 memory_manager.go:354] "RemoveStaleState removing state" podUID="a08dce85-d5c4-44e4-a3b0-e404c53b62f2" containerName="cinder-volume"
Mar 13 13:12:04.673757 master-0 kubenswrapper[28149]: I0313 13:12:04.673715 28149 memory_manager.go:354] "RemoveStaleState removing state" podUID="b574739d-3b02-4434-a7cd-c75404b73fd3" containerName="mariadb-account-create-update"
Mar 13 13:12:04.673757 master-0 kubenswrapper[28149]: I0313 13:12:04.673726 28149 memory_manager.go:354] "RemoveStaleState removing state" podUID="5542dffa-edbf-4133-b7cc-2631121726dc" containerName="probe"
Mar 13 13:12:04.673757 master-0 kubenswrapper[28149]: I0313 13:12:04.673744 28149 memory_manager.go:354] "RemoveStaleState removing state" podUID="5542dffa-edbf-4133-b7cc-2631121726dc" containerName="cinder-backup"
Mar 13 13:12:04.673757 master-0 kubenswrapper[28149]: I0313 13:12:04.673759 28149 memory_manager.go:354] "RemoveStaleState removing state" podUID="50829c53-48ab-48a0-a68a-655b740ac823" containerName="mariadb-database-create"
Mar 13 13:12:04.674018 master-0 kubenswrapper[28149]: I0313 13:12:04.673779 28149 memory_manager.go:354] "RemoveStaleState removing state" podUID="a08dce85-d5c4-44e4-a3b0-e404c53b62f2" containerName="probe"
Mar 13 13:12:04.676393 master-0 kubenswrapper[28149]: I0313 13:12:04.675895 28149 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ironic-58d4ff778c-wbbt4"
Mar 13 13:12:04.679668 master-0 kubenswrapper[28149]: I0313 13:12:04.678378 28149 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/placement-699d7776-9kkdk" podUID="ea4701c8-792f-4a27-948e-cc2d36ad5739" containerName="placement-api" probeResult="failure" output="Get \"https://10.128.0.220:8778/\": read tcp 10.128.0.2:46806->10.128.0.220:8778: read: connection reset by peer"
Mar 13 13:12:04.679668 master-0 kubenswrapper[28149]: I0313 13:12:04.678602 28149 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/placement-699d7776-9kkdk" podUID="ea4701c8-792f-4a27-948e-cc2d36ad5739" containerName="placement-log" probeResult="failure" output="Get \"https://10.128.0.220:8778/\": read tcp 10.128.0.2:46818->10.128.0.220:8778: read: connection reset by peer"
Mar 13 13:12:04.679668 master-0 kubenswrapper[28149]: I0313 13:12:04.678801 28149 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ironic-internal-svc"
Mar 13 13:12:04.679668 master-0 kubenswrapper[28149]: I0313 13:12:04.679001 28149 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ironic-public-svc"
Mar 13 13:12:04.781010 master-0 kubenswrapper[28149]: I0313 13:12:04.752863 28149 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/5542dffa-edbf-4133-b7cc-2631121726dc-config-data-custom\") pod \"5542dffa-edbf-4133-b7cc-2631121726dc\" (UID: \"5542dffa-edbf-4133-b7cc-2631121726dc\") "
Mar 13 13:12:04.781010 master-0 kubenswrapper[28149]: I0313 13:12:04.752969 28149 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/5542dffa-edbf-4133-b7cc-2631121726dc-etc-nvme\") pod \"5542dffa-edbf-4133-b7cc-2631121726dc\" (UID: \"5542dffa-edbf-4133-b7cc-2631121726dc\") "
Mar 13 13:12:04.781010 master-0 kubenswrapper[28149]: I0313 13:12:04.753006 28149 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/5542dffa-edbf-4133-b7cc-2631121726dc-etc-machine-id\") pod \"5542dffa-edbf-4133-b7cc-2631121726dc\" (UID: \"5542dffa-edbf-4133-b7cc-2631121726dc\") "
Mar 13 13:12:04.781010 master-0 kubenswrapper[28149]: I0313 13:12:04.753052 28149 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/5542dffa-edbf-4133-b7cc-2631121726dc-var-lib-cinder\") pod \"5542dffa-edbf-4133-b7cc-2631121726dc\" (UID: \"5542dffa-edbf-4133-b7cc-2631121726dc\") "
Mar 13 13:12:04.781010 master-0 kubenswrapper[28149]: I0313 13:12:04.753090 28149 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/5542dffa-edbf-4133-b7cc-2631121726dc-sys\") pod \"5542dffa-edbf-4133-b7cc-2631121726dc\" (UID: \"5542dffa-edbf-4133-b7cc-2631121726dc\") "
Mar 13 13:12:04.781010 master-0 kubenswrapper[28149]: I0313 13:12:04.753163 28149 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/5542dffa-edbf-4133-b7cc-2631121726dc-lib-modules\") pod \"5542dffa-edbf-4133-b7cc-2631121726dc\" (UID: \"5542dffa-edbf-4133-b7cc-2631121726dc\") "
Mar 13 13:12:04.781010 master-0 kubenswrapper[28149]: I0313 13:12:04.753209 28149 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/5542dffa-edbf-4133-b7cc-2631121726dc-var-locks-brick\") pod \"5542dffa-edbf-4133-b7cc-2631121726dc\" (UID: \"5542dffa-edbf-4133-b7cc-2631121726dc\") "
Mar 13 13:12:04.781010 master-0 kubenswrapper[28149]: I0313 13:12:04.753254 28149 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/5542dffa-edbf-4133-b7cc-2631121726dc-run\") pod \"5542dffa-edbf-4133-b7cc-2631121726dc\" (UID: \"5542dffa-edbf-4133-b7cc-2631121726dc\") "
Mar 13 13:12:04.781010 master-0 kubenswrapper[28149]: I0313 13:12:04.753319 28149 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/5542dffa-edbf-4133-b7cc-2631121726dc-var-locks-cinder\") pod \"5542dffa-edbf-4133-b7cc-2631121726dc\" (UID: \"5542dffa-edbf-4133-b7cc-2631121726dc\") "
Mar 13 13:12:04.781010 master-0 kubenswrapper[28149]: I0313 13:12:04.753395 28149 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5542dffa-edbf-4133-b7cc-2631121726dc-var-locks-cinder" (OuterVolumeSpecName: "var-locks-cinder") pod "5542dffa-edbf-4133-b7cc-2631121726dc" (UID: "5542dffa-edbf-4133-b7cc-2631121726dc"). InnerVolumeSpecName "var-locks-cinder". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 13 13:12:04.781010 master-0 kubenswrapper[28149]: I0313 13:12:04.753435 28149 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5542dffa-edbf-4133-b7cc-2631121726dc-sys" (OuterVolumeSpecName: "sys") pod "5542dffa-edbf-4133-b7cc-2631121726dc" (UID: "5542dffa-edbf-4133-b7cc-2631121726dc"). InnerVolumeSpecName "sys". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 13 13:12:04.781010 master-0 kubenswrapper[28149]: I0313 13:12:04.753461 28149 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5542dffa-edbf-4133-b7cc-2631121726dc-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "5542dffa-edbf-4133-b7cc-2631121726dc" (UID: "5542dffa-edbf-4133-b7cc-2631121726dc"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 13 13:12:04.781010 master-0 kubenswrapper[28149]: I0313 13:12:04.753484 28149 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5542dffa-edbf-4133-b7cc-2631121726dc-var-locks-brick" (OuterVolumeSpecName: "var-locks-brick") pod "5542dffa-edbf-4133-b7cc-2631121726dc" (UID: "5542dffa-edbf-4133-b7cc-2631121726dc"). InnerVolumeSpecName "var-locks-brick". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 13 13:12:04.781010 master-0 kubenswrapper[28149]: I0313 13:12:04.753492 28149 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5542dffa-edbf-4133-b7cc-2631121726dc-var-lib-cinder" (OuterVolumeSpecName: "var-lib-cinder") pod "5542dffa-edbf-4133-b7cc-2631121726dc" (UID: "5542dffa-edbf-4133-b7cc-2631121726dc"). InnerVolumeSpecName "var-lib-cinder". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 13 13:12:04.781010 master-0 kubenswrapper[28149]: I0313 13:12:04.753511 28149 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5542dffa-edbf-4133-b7cc-2631121726dc-etc-nvme" (OuterVolumeSpecName: "etc-nvme") pod "5542dffa-edbf-4133-b7cc-2631121726dc" (UID: "5542dffa-edbf-4133-b7cc-2631121726dc"). InnerVolumeSpecName "etc-nvme". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 13 13:12:04.781010 master-0 kubenswrapper[28149]: I0313 13:12:04.753550 28149 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5542dffa-edbf-4133-b7cc-2631121726dc-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "5542dffa-edbf-4133-b7cc-2631121726dc" (UID: "5542dffa-edbf-4133-b7cc-2631121726dc"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 13 13:12:04.781010 master-0 kubenswrapper[28149]: I0313 13:12:04.753532 28149 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5542dffa-edbf-4133-b7cc-2631121726dc-run" (OuterVolumeSpecName: "run") pod "5542dffa-edbf-4133-b7cc-2631121726dc" (UID: "5542dffa-edbf-4133-b7cc-2631121726dc"). InnerVolumeSpecName "run". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 13 13:12:04.781010 master-0 kubenswrapper[28149]: I0313 13:12:04.755776 28149 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5542dffa-edbf-4133-b7cc-2631121726dc-combined-ca-bundle\") pod \"5542dffa-edbf-4133-b7cc-2631121726dc\" (UID: \"5542dffa-edbf-4133-b7cc-2631121726dc\") "
Mar 13 13:12:04.781010 master-0 kubenswrapper[28149]: I0313 13:12:04.755858 28149 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5542dffa-edbf-4133-b7cc-2631121726dc-scripts\") pod \"5542dffa-edbf-4133-b7cc-2631121726dc\" (UID: \"5542dffa-edbf-4133-b7cc-2631121726dc\") "
Mar 13 13:12:04.781010 master-0 kubenswrapper[28149]: I0313 13:12:04.755920 28149 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9q5vm\" (UniqueName: \"kubernetes.io/projected/5542dffa-edbf-4133-b7cc-2631121726dc-kube-api-access-9q5vm\") pod \"5542dffa-edbf-4133-b7cc-2631121726dc\" (UID: \"5542dffa-edbf-4133-b7cc-2631121726dc\") "
Mar 13 13:12:04.781010 master-0 kubenswrapper[28149]: I0313 13:12:04.755962 28149 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/5542dffa-edbf-4133-b7cc-2631121726dc-dev\") pod \"5542dffa-edbf-4133-b7cc-2631121726dc\" (UID: \"5542dffa-edbf-4133-b7cc-2631121726dc\") "
Mar 13 13:12:04.781010 master-0 kubenswrapper[28149]: I0313 13:12:04.755993 28149 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/5542dffa-edbf-4133-b7cc-2631121726dc-etc-iscsi\") pod \"5542dffa-edbf-4133-b7cc-2631121726dc\" (UID: \"5542dffa-edbf-4133-b7cc-2631121726dc\") "
Mar 13 13:12:04.781010 master-0 kubenswrapper[28149]: I0313 13:12:04.756031 28149 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5542dffa-edbf-4133-b7cc-2631121726dc-config-data\") pod \"5542dffa-edbf-4133-b7cc-2631121726dc\" (UID: \"5542dffa-edbf-4133-b7cc-2631121726dc\") "
Mar 13 13:12:04.781010 master-0 kubenswrapper[28149]: I0313 13:12:04.756498 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e28006c9-0e25-4845-abae-e6407165a9dc-logs\") pod \"ironic-58d4ff778c-wbbt4\" (UID: \"e28006c9-0e25-4845-abae-e6407165a9dc\") " pod="openstack/ironic-58d4ff778c-wbbt4"
Mar 13 13:12:04.781010 master-0 kubenswrapper[28149]: I0313 13:12:04.756541 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/e28006c9-0e25-4845-abae-e6407165a9dc-config-data-merged\") pod \"ironic-58d4ff778c-wbbt4\" (UID: \"e28006c9-0e25-4845-abae-e6407165a9dc\") " pod="openstack/ironic-58d4ff778c-wbbt4"
Mar 13 13:12:04.781010 master-0 kubenswrapper[28149]: I0313 13:12:04.760231 28149 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5542dffa-edbf-4133-b7cc-2631121726dc-dev" (OuterVolumeSpecName: "dev") pod "5542dffa-edbf-4133-b7cc-2631121726dc" (UID: "5542dffa-edbf-4133-b7cc-2631121726dc"). InnerVolumeSpecName "dev". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 13 13:12:04.781010 master-0 kubenswrapper[28149]: I0313 13:12:04.760658 28149 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5542dffa-edbf-4133-b7cc-2631121726dc-etc-iscsi" (OuterVolumeSpecName: "etc-iscsi") pod "5542dffa-edbf-4133-b7cc-2631121726dc" (UID: "5542dffa-edbf-4133-b7cc-2631121726dc"). InnerVolumeSpecName "etc-iscsi". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 13 13:12:04.781010 master-0 kubenswrapper[28149]: I0313 13:12:04.760821 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/e28006c9-0e25-4845-abae-e6407165a9dc-internal-tls-certs\") pod \"ironic-58d4ff778c-wbbt4\" (UID: \"e28006c9-0e25-4845-abae-e6407165a9dc\") " pod="openstack/ironic-58d4ff778c-wbbt4"
Mar 13 13:12:04.781010 master-0 kubenswrapper[28149]: I0313 13:12:04.761068 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m58z7\" (UniqueName: \"kubernetes.io/projected/e28006c9-0e25-4845-abae-e6407165a9dc-kube-api-access-m58z7\") pod \"ironic-58d4ff778c-wbbt4\" (UID: \"e28006c9-0e25-4845-abae-e6407165a9dc\") " pod="openstack/ironic-58d4ff778c-wbbt4"
Mar 13 13:12:04.781010 master-0 kubenswrapper[28149]: I0313 13:12:04.761176 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e28006c9-0e25-4845-abae-e6407165a9dc-scripts\") pod \"ironic-58d4ff778c-wbbt4\" (UID: \"e28006c9-0e25-4845-abae-e6407165a9dc\") " pod="openstack/ironic-58d4ff778c-wbbt4"
Mar 13 13:12:04.781010 master-0 kubenswrapper[28149]: I0313 13:12:04.761309 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e28006c9-0e25-4845-abae-e6407165a9dc-config-data\") pod \"ironic-58d4ff778c-wbbt4\" (UID: \"e28006c9-0e25-4845-abae-e6407165a9dc\") " pod="openstack/ironic-58d4ff778c-wbbt4"
Mar 13 13:12:04.781010 master-0 kubenswrapper[28149]: I0313 13:12:04.761523 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-podinfo\" (UniqueName: \"kubernetes.io/downward-api/e28006c9-0e25-4845-abae-e6407165a9dc-etc-podinfo\") pod \"ironic-58d4ff778c-wbbt4\" (UID: \"e28006c9-0e25-4845-abae-e6407165a9dc\") " pod="openstack/ironic-58d4ff778c-wbbt4"
Mar 13 13:12:04.781010 master-0 kubenswrapper[28149]: I0313 13:12:04.761567 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e28006c9-0e25-4845-abae-e6407165a9dc-combined-ca-bundle\") pod \"ironic-58d4ff778c-wbbt4\" (UID: \"e28006c9-0e25-4845-abae-e6407165a9dc\") " pod="openstack/ironic-58d4ff778c-wbbt4"
Mar 13 13:12:04.781010 master-0 kubenswrapper[28149]: I0313 13:12:04.761736 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e28006c9-0e25-4845-abae-e6407165a9dc-config-data-custom\") pod \"ironic-58d4ff778c-wbbt4\" (UID: \"e28006c9-0e25-4845-abae-e6407165a9dc\") " pod="openstack/ironic-58d4ff778c-wbbt4"
Mar 13 13:12:04.781010 master-0 kubenswrapper[28149]: I0313 13:12:04.761860 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/e28006c9-0e25-4845-abae-e6407165a9dc-public-tls-certs\") pod \"ironic-58d4ff778c-wbbt4\" (UID: \"e28006c9-0e25-4845-abae-e6407165a9dc\") " pod="openstack/ironic-58d4ff778c-wbbt4"
Mar 13 13:12:04.781010 master-0 kubenswrapper[28149]: I0313 13:12:04.762035 28149 reconciler_common.go:293] "Volume detached for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/5542dffa-edbf-4133-b7cc-2631121726dc-var-locks-brick\") on node \"master-0\" DevicePath \"\""
Mar 13 13:12:04.781010 master-0 kubenswrapper[28149]: I0313 13:12:04.762160 28149 reconciler_common.go:293] "Volume detached for volume \"run\" (UniqueName: \"kubernetes.io/host-path/5542dffa-edbf-4133-b7cc-2631121726dc-run\") on node \"master-0\" DevicePath \"\""
Mar 13 13:12:04.781010 master-0 kubenswrapper[28149]: I0313 13:12:04.762742 28149 reconciler_common.go:293] "Volume detached for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/5542dffa-edbf-4133-b7cc-2631121726dc-var-locks-cinder\") on node \"master-0\" DevicePath \"\""
Mar 13 13:12:04.781010 master-0 kubenswrapper[28149]: I0313 13:12:04.764113 28149 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a08dce85-d5c4-44e4-a3b0-e404c53b62f2-config-data\") on node \"master-0\" DevicePath \"\""
Mar 13 13:12:04.781010 master-0 kubenswrapper[28149]: I0313 13:12:04.764142 28149 reconciler_common.go:293] "Volume detached for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/5542dffa-edbf-4133-b7cc-2631121726dc-dev\") on node \"master-0\" DevicePath \"\""
Mar 13 13:12:04.781010 master-0 kubenswrapper[28149]: I0313 13:12:04.764176 28149 reconciler_common.go:293] "Volume detached for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/5542dffa-edbf-4133-b7cc-2631121726dc-etc-iscsi\") on node \"master-0\" DevicePath \"\""
Mar 13 13:12:04.781010 master-0 kubenswrapper[28149]: I0313 13:12:04.764190 28149 reconciler_common.go:293] "Volume detached for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/5542dffa-edbf-4133-b7cc-2631121726dc-etc-nvme\") on node \"master-0\" DevicePath \"\""
Mar 13 13:12:04.781010 master-0 kubenswrapper[28149]: I0313 13:12:04.764203 28149 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/5542dffa-edbf-4133-b7cc-2631121726dc-etc-machine-id\") on node \"master-0\" DevicePath \"\""
Mar 13 13:12:04.781010 master-0 kubenswrapper[28149]: I0313 13:12:04.764217 28149 reconciler_common.go:293] "Volume detached for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/5542dffa-edbf-4133-b7cc-2631121726dc-var-lib-cinder\") on node \"master-0\" DevicePath \"\""
Mar 13 13:12:04.781010 master-0 kubenswrapper[28149]: I0313 13:12:04.764229 28149 reconciler_common.go:293] "Volume detached for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/5542dffa-edbf-4133-b7cc-2631121726dc-sys\") on node \"master-0\" DevicePath \"\""
Mar 13 13:12:04.781010 master-0 kubenswrapper[28149]: I0313 13:12:04.764244 28149 reconciler_common.go:293] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/5542dffa-edbf-4133-b7cc-2631121726dc-lib-modules\") on node \"master-0\" DevicePath \"\""
Mar 13 13:12:04.781010 master-0 kubenswrapper[28149]: I0313 13:12:04.774862 28149 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5542dffa-edbf-4133-b7cc-2631121726dc-kube-api-access-9q5vm" (OuterVolumeSpecName: "kube-api-access-9q5vm") pod "5542dffa-edbf-4133-b7cc-2631121726dc" (UID: "5542dffa-edbf-4133-b7cc-2631121726dc"). InnerVolumeSpecName "kube-api-access-9q5vm". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 13 13:12:04.814172 master-0 kubenswrapper[28149]: I0313 13:12:04.808982 28149 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5542dffa-edbf-4133-b7cc-2631121726dc-scripts" (OuterVolumeSpecName: "scripts") pod "5542dffa-edbf-4133-b7cc-2631121726dc" (UID: "5542dffa-edbf-4133-b7cc-2631121726dc"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 13 13:12:04.814172 master-0 kubenswrapper[28149]: I0313 13:12:04.809081 28149 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5542dffa-edbf-4133-b7cc-2631121726dc-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "5542dffa-edbf-4133-b7cc-2631121726dc" (UID: "5542dffa-edbf-4133-b7cc-2631121726dc"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 13 13:12:04.814172 master-0 kubenswrapper[28149]: I0313 13:12:04.810260 28149 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ironic-58d4ff778c-wbbt4"]
Mar 13 13:12:04.869255 master-0 kubenswrapper[28149]: I0313 13:12:04.867725 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e28006c9-0e25-4845-abae-e6407165a9dc-scripts\") pod \"ironic-58d4ff778c-wbbt4\" (UID: \"e28006c9-0e25-4845-abae-e6407165a9dc\") " pod="openstack/ironic-58d4ff778c-wbbt4"
Mar 13 13:12:04.869255 master-0 kubenswrapper[28149]: I0313 13:12:04.867875 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e28006c9-0e25-4845-abae-e6407165a9dc-config-data\") pod \"ironic-58d4ff778c-wbbt4\" (UID: \"e28006c9-0e25-4845-abae-e6407165a9dc\") " pod="openstack/ironic-58d4ff778c-wbbt4"
Mar 13 13:12:04.869255 master-0 kubenswrapper[28149]: I0313 13:12:04.868039 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-podinfo\" (UniqueName: \"kubernetes.io/downward-api/e28006c9-0e25-4845-abae-e6407165a9dc-etc-podinfo\") pod \"ironic-58d4ff778c-wbbt4\" (UID: \"e28006c9-0e25-4845-abae-e6407165a9dc\") " pod="openstack/ironic-58d4ff778c-wbbt4"
Mar 13 13:12:04.869255 master-0 kubenswrapper[28149]: I0313 13:12:04.868085 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e28006c9-0e25-4845-abae-e6407165a9dc-combined-ca-bundle\") pod \"ironic-58d4ff778c-wbbt4\" (UID: \"e28006c9-0e25-4845-abae-e6407165a9dc\") " pod="openstack/ironic-58d4ff778c-wbbt4"
Mar 13 13:12:04.869255 master-0 kubenswrapper[28149]: I0313 13:12:04.868136 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e28006c9-0e25-4845-abae-e6407165a9dc-config-data-custom\") pod \"ironic-58d4ff778c-wbbt4\" (UID: \"e28006c9-0e25-4845-abae-e6407165a9dc\") " pod="openstack/ironic-58d4ff778c-wbbt4"
Mar 13 13:12:04.869255 master-0 kubenswrapper[28149]: I0313 13:12:04.868222 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/e28006c9-0e25-4845-abae-e6407165a9dc-public-tls-certs\") pod \"ironic-58d4ff778c-wbbt4\" (UID: \"e28006c9-0e25-4845-abae-e6407165a9dc\") " pod="openstack/ironic-58d4ff778c-wbbt4"
Mar 13 13:12:04.869255 master-0 kubenswrapper[28149]: I0313 13:12:04.868321 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e28006c9-0e25-4845-abae-e6407165a9dc-logs\") pod \"ironic-58d4ff778c-wbbt4\" (UID: \"e28006c9-0e25-4845-abae-e6407165a9dc\") " pod="openstack/ironic-58d4ff778c-wbbt4"
Mar 13 13:12:04.869255 master-0 kubenswrapper[28149]: I0313 13:12:04.868358 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/e28006c9-0e25-4845-abae-e6407165a9dc-config-data-merged\") pod \"ironic-58d4ff778c-wbbt4\" (UID: \"e28006c9-0e25-4845-abae-e6407165a9dc\") " pod="openstack/ironic-58d4ff778c-wbbt4"
Mar 13 13:12:04.869255 master-0 kubenswrapper[28149]: I0313 13:12:04.868403 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/e28006c9-0e25-4845-abae-e6407165a9dc-internal-tls-certs\") pod \"ironic-58d4ff778c-wbbt4\" (UID: \"e28006c9-0e25-4845-abae-e6407165a9dc\") " pod="openstack/ironic-58d4ff778c-wbbt4"
Mar 13 13:12:04.869255 master-0 kubenswrapper[28149]: I0313 13:12:04.868521 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m58z7\" (UniqueName:
\"kubernetes.io/projected/e28006c9-0e25-4845-abae-e6407165a9dc-kube-api-access-m58z7\") pod \"ironic-58d4ff778c-wbbt4\" (UID: \"e28006c9-0e25-4845-abae-e6407165a9dc\") " pod="openstack/ironic-58d4ff778c-wbbt4" Mar 13 13:12:04.869255 master-0 kubenswrapper[28149]: I0313 13:12:04.868653 28149 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5542dffa-edbf-4133-b7cc-2631121726dc-scripts\") on node \"master-0\" DevicePath \"\"" Mar 13 13:12:04.869255 master-0 kubenswrapper[28149]: I0313 13:12:04.868682 28149 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9q5vm\" (UniqueName: \"kubernetes.io/projected/5542dffa-edbf-4133-b7cc-2631121726dc-kube-api-access-9q5vm\") on node \"master-0\" DevicePath \"\"" Mar 13 13:12:04.869255 master-0 kubenswrapper[28149]: I0313 13:12:04.868696 28149 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/5542dffa-edbf-4133-b7cc-2631121726dc-config-data-custom\") on node \"master-0\" DevicePath \"\"" Mar 13 13:12:04.870013 master-0 kubenswrapper[28149]: I0313 13:12:04.869665 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/e28006c9-0e25-4845-abae-e6407165a9dc-config-data-merged\") pod \"ironic-58d4ff778c-wbbt4\" (UID: \"e28006c9-0e25-4845-abae-e6407165a9dc\") " pod="openstack/ironic-58d4ff778c-wbbt4" Mar 13 13:12:04.870013 master-0 kubenswrapper[28149]: I0313 13:12:04.869943 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e28006c9-0e25-4845-abae-e6407165a9dc-logs\") pod \"ironic-58d4ff778c-wbbt4\" (UID: \"e28006c9-0e25-4845-abae-e6407165a9dc\") " pod="openstack/ironic-58d4ff778c-wbbt4" Mar 13 13:12:04.884339 master-0 kubenswrapper[28149]: I0313 13:12:04.872493 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" 
(UniqueName: \"kubernetes.io/secret/e28006c9-0e25-4845-abae-e6407165a9dc-scripts\") pod \"ironic-58d4ff778c-wbbt4\" (UID: \"e28006c9-0e25-4845-abae-e6407165a9dc\") " pod="openstack/ironic-58d4ff778c-wbbt4" Mar 13 13:12:04.884339 master-0 kubenswrapper[28149]: I0313 13:12:04.881854 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/e28006c9-0e25-4845-abae-e6407165a9dc-internal-tls-certs\") pod \"ironic-58d4ff778c-wbbt4\" (UID: \"e28006c9-0e25-4845-abae-e6407165a9dc\") " pod="openstack/ironic-58d4ff778c-wbbt4" Mar 13 13:12:04.884339 master-0 kubenswrapper[28149]: I0313 13:12:04.882420 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-podinfo\" (UniqueName: \"kubernetes.io/downward-api/e28006c9-0e25-4845-abae-e6407165a9dc-etc-podinfo\") pod \"ironic-58d4ff778c-wbbt4\" (UID: \"e28006c9-0e25-4845-abae-e6407165a9dc\") " pod="openstack/ironic-58d4ff778c-wbbt4" Mar 13 13:12:04.884941 master-0 kubenswrapper[28149]: I0313 13:12:04.884907 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e28006c9-0e25-4845-abae-e6407165a9dc-config-data-custom\") pod \"ironic-58d4ff778c-wbbt4\" (UID: \"e28006c9-0e25-4845-abae-e6407165a9dc\") " pod="openstack/ironic-58d4ff778c-wbbt4" Mar 13 13:12:04.894832 master-0 kubenswrapper[28149]: I0313 13:12:04.889085 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/e28006c9-0e25-4845-abae-e6407165a9dc-public-tls-certs\") pod \"ironic-58d4ff778c-wbbt4\" (UID: \"e28006c9-0e25-4845-abae-e6407165a9dc\") " pod="openstack/ironic-58d4ff778c-wbbt4" Mar 13 13:12:04.900264 master-0 kubenswrapper[28149]: I0313 13:12:04.899940 28149 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-ee0a2-volume-lvm-iscsi-0"] Mar 13 13:12:04.904041 master-0 kubenswrapper[28149]: I0313 
13:12:04.903992 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e28006c9-0e25-4845-abae-e6407165a9dc-config-data\") pod \"ironic-58d4ff778c-wbbt4\" (UID: \"e28006c9-0e25-4845-abae-e6407165a9dc\") " pod="openstack/ironic-58d4ff778c-wbbt4" Mar 13 13:12:04.936654 master-0 kubenswrapper[28149]: I0313 13:12:04.936503 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e28006c9-0e25-4845-abae-e6407165a9dc-combined-ca-bundle\") pod \"ironic-58d4ff778c-wbbt4\" (UID: \"e28006c9-0e25-4845-abae-e6407165a9dc\") " pod="openstack/ironic-58d4ff778c-wbbt4" Mar 13 13:12:04.939505 master-0 kubenswrapper[28149]: I0313 13:12:04.937903 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m58z7\" (UniqueName: \"kubernetes.io/projected/e28006c9-0e25-4845-abae-e6407165a9dc-kube-api-access-m58z7\") pod \"ironic-58d4ff778c-wbbt4\" (UID: \"e28006c9-0e25-4845-abae-e6407165a9dc\") " pod="openstack/ironic-58d4ff778c-wbbt4" Mar 13 13:12:04.997010 master-0 kubenswrapper[28149]: I0313 13:12:04.996950 28149 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5542dffa-edbf-4133-b7cc-2631121726dc-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "5542dffa-edbf-4133-b7cc-2631121726dc" (UID: "5542dffa-edbf-4133-b7cc-2631121726dc"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 13:12:05.004011 master-0 kubenswrapper[28149]: I0313 13:12:05.003919 28149 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-ee0a2-volume-lvm-iscsi-0"] Mar 13 13:12:05.021800 master-0 kubenswrapper[28149]: I0313 13:12:05.021548 28149 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-ee0a2-volume-lvm-iscsi-0"] Mar 13 13:12:05.025229 master-0 kubenswrapper[28149]: I0313 13:12:05.024530 28149 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-ee0a2-volume-lvm-iscsi-0" Mar 13 13:12:05.028073 master-0 kubenswrapper[28149]: I0313 13:12:05.026648 28149 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-ee0a2-volume-lvm-iscsi-config-data" Mar 13 13:12:05.033872 master-0 kubenswrapper[28149]: I0313 13:12:05.031717 28149 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5542dffa-edbf-4133-b7cc-2631121726dc-config-data" (OuterVolumeSpecName: "config-data") pod "5542dffa-edbf-4133-b7cc-2631121726dc" (UID: "5542dffa-edbf-4133-b7cc-2631121726dc"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 13:12:05.052115 master-0 kubenswrapper[28149]: I0313 13:12:05.050128 28149 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-ee0a2-volume-lvm-iscsi-0"] Mar 13 13:12:05.077235 master-0 kubenswrapper[28149]: I0313 13:12:05.076343 28149 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5542dffa-edbf-4133-b7cc-2631121726dc-combined-ca-bundle\") on node \"master-0\" DevicePath \"\"" Mar 13 13:12:05.077235 master-0 kubenswrapper[28149]: I0313 13:12:05.076399 28149 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5542dffa-edbf-4133-b7cc-2631121726dc-config-data\") on node \"master-0\" DevicePath \"\"" Mar 13 13:12:05.113814 master-0 kubenswrapper[28149]: I0313 13:12:05.110553 28149 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ironic-58d4ff778c-wbbt4" Mar 13 13:12:05.192242 master-0 kubenswrapper[28149]: I0313 13:12:05.178701 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/b0b0e08b-0a29-40e5-9cd6-3609aa630650-etc-iscsi\") pod \"cinder-ee0a2-volume-lvm-iscsi-0\" (UID: \"b0b0e08b-0a29-40e5-9cd6-3609aa630650\") " pod="openstack/cinder-ee0a2-volume-lvm-iscsi-0" Mar 13 13:12:05.192242 master-0 kubenswrapper[28149]: I0313 13:12:05.178812 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b0b0e08b-0a29-40e5-9cd6-3609aa630650-lib-modules\") pod \"cinder-ee0a2-volume-lvm-iscsi-0\" (UID: \"b0b0e08b-0a29-40e5-9cd6-3609aa630650\") " pod="openstack/cinder-ee0a2-volume-lvm-iscsi-0" Mar 13 13:12:05.192242 master-0 kubenswrapper[28149]: I0313 13:12:05.178861 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/b0b0e08b-0a29-40e5-9cd6-3609aa630650-dev\") pod \"cinder-ee0a2-volume-lvm-iscsi-0\" (UID: \"b0b0e08b-0a29-40e5-9cd6-3609aa630650\") " pod="openstack/cinder-ee0a2-volume-lvm-iscsi-0" Mar 13 13:12:05.192242 master-0 kubenswrapper[28149]: I0313 13:12:05.178890 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4x7t7\" (UniqueName: \"kubernetes.io/projected/b0b0e08b-0a29-40e5-9cd6-3609aa630650-kube-api-access-4x7t7\") pod \"cinder-ee0a2-volume-lvm-iscsi-0\" (UID: \"b0b0e08b-0a29-40e5-9cd6-3609aa630650\") " pod="openstack/cinder-ee0a2-volume-lvm-iscsi-0" Mar 13 13:12:05.192242 master-0 kubenswrapper[28149]: I0313 13:12:05.178969 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b0b0e08b-0a29-40e5-9cd6-3609aa630650-combined-ca-bundle\") pod \"cinder-ee0a2-volume-lvm-iscsi-0\" (UID: \"b0b0e08b-0a29-40e5-9cd6-3609aa630650\") " pod="openstack/cinder-ee0a2-volume-lvm-iscsi-0" Mar 13 13:12:05.192242 master-0 kubenswrapper[28149]: I0313 13:12:05.179051 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b0b0e08b-0a29-40e5-9cd6-3609aa630650-config-data\") pod \"cinder-ee0a2-volume-lvm-iscsi-0\" (UID: \"b0b0e08b-0a29-40e5-9cd6-3609aa630650\") " pod="openstack/cinder-ee0a2-volume-lvm-iscsi-0" Mar 13 13:12:05.192242 master-0 kubenswrapper[28149]: I0313 13:12:05.179126 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/b0b0e08b-0a29-40e5-9cd6-3609aa630650-etc-nvme\") pod \"cinder-ee0a2-volume-lvm-iscsi-0\" (UID: \"b0b0e08b-0a29-40e5-9cd6-3609aa630650\") " pod="openstack/cinder-ee0a2-volume-lvm-iscsi-0" Mar 13 13:12:05.192242 master-0 
kubenswrapper[28149]: I0313 13:12:05.179160 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/b0b0e08b-0a29-40e5-9cd6-3609aa630650-sys\") pod \"cinder-ee0a2-volume-lvm-iscsi-0\" (UID: \"b0b0e08b-0a29-40e5-9cd6-3609aa630650\") " pod="openstack/cinder-ee0a2-volume-lvm-iscsi-0" Mar 13 13:12:05.192242 master-0 kubenswrapper[28149]: I0313 13:12:05.179203 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b0b0e08b-0a29-40e5-9cd6-3609aa630650-scripts\") pod \"cinder-ee0a2-volume-lvm-iscsi-0\" (UID: \"b0b0e08b-0a29-40e5-9cd6-3609aa630650\") " pod="openstack/cinder-ee0a2-volume-lvm-iscsi-0" Mar 13 13:12:05.192242 master-0 kubenswrapper[28149]: I0313 13:12:05.179226 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/b0b0e08b-0a29-40e5-9cd6-3609aa630650-run\") pod \"cinder-ee0a2-volume-lvm-iscsi-0\" (UID: \"b0b0e08b-0a29-40e5-9cd6-3609aa630650\") " pod="openstack/cinder-ee0a2-volume-lvm-iscsi-0" Mar 13 13:12:05.192242 master-0 kubenswrapper[28149]: I0313 13:12:05.179251 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/b0b0e08b-0a29-40e5-9cd6-3609aa630650-var-locks-brick\") pod \"cinder-ee0a2-volume-lvm-iscsi-0\" (UID: \"b0b0e08b-0a29-40e5-9cd6-3609aa630650\") " pod="openstack/cinder-ee0a2-volume-lvm-iscsi-0" Mar 13 13:12:05.192242 master-0 kubenswrapper[28149]: I0313 13:12:05.179311 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b0b0e08b-0a29-40e5-9cd6-3609aa630650-config-data-custom\") pod \"cinder-ee0a2-volume-lvm-iscsi-0\" (UID: 
\"b0b0e08b-0a29-40e5-9cd6-3609aa630650\") " pod="openstack/cinder-ee0a2-volume-lvm-iscsi-0" Mar 13 13:12:05.192242 master-0 kubenswrapper[28149]: I0313 13:12:05.179325 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/b0b0e08b-0a29-40e5-9cd6-3609aa630650-var-locks-cinder\") pod \"cinder-ee0a2-volume-lvm-iscsi-0\" (UID: \"b0b0e08b-0a29-40e5-9cd6-3609aa630650\") " pod="openstack/cinder-ee0a2-volume-lvm-iscsi-0" Mar 13 13:12:05.192242 master-0 kubenswrapper[28149]: I0313 13:12:05.179440 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/b0b0e08b-0a29-40e5-9cd6-3609aa630650-var-lib-cinder\") pod \"cinder-ee0a2-volume-lvm-iscsi-0\" (UID: \"b0b0e08b-0a29-40e5-9cd6-3609aa630650\") " pod="openstack/cinder-ee0a2-volume-lvm-iscsi-0" Mar 13 13:12:05.192242 master-0 kubenswrapper[28149]: I0313 13:12:05.179499 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/b0b0e08b-0a29-40e5-9cd6-3609aa630650-etc-machine-id\") pod \"cinder-ee0a2-volume-lvm-iscsi-0\" (UID: \"b0b0e08b-0a29-40e5-9cd6-3609aa630650\") " pod="openstack/cinder-ee0a2-volume-lvm-iscsi-0" Mar 13 13:12:05.224163 master-0 kubenswrapper[28149]: I0313 13:12:05.218484 28149 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ironic-inspector-0a87-account-create-update-8qh6t" Mar 13 13:12:05.224163 master-0 kubenswrapper[28149]: I0313 13:12:05.219290 28149 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-ee0a2-backup-0" Mar 13 13:12:05.224163 master-0 kubenswrapper[28149]: I0313 13:12:05.221167 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-ee0a2-backup-0" event={"ID":"5542dffa-edbf-4133-b7cc-2631121726dc","Type":"ContainerDied","Data":"6080e0f6c170fbe145f0f4df0fdc2681a4dafe0493c86efc6339e21cb7c4b3bc"} Mar 13 13:12:05.224163 master-0 kubenswrapper[28149]: I0313 13:12:05.221247 28149 scope.go:117] "RemoveContainer" containerID="8d866a757c903f73361a05a85d828507b5d29c24c685ef09179cf6eb95a3969f" Mar 13 13:12:05.282761 master-0 kubenswrapper[28149]: I0313 13:12:05.282699 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b0b0e08b-0a29-40e5-9cd6-3609aa630650-combined-ca-bundle\") pod \"cinder-ee0a2-volume-lvm-iscsi-0\" (UID: \"b0b0e08b-0a29-40e5-9cd6-3609aa630650\") " pod="openstack/cinder-ee0a2-volume-lvm-iscsi-0" Mar 13 13:12:05.282941 master-0 kubenswrapper[28149]: I0313 13:12:05.282841 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b0b0e08b-0a29-40e5-9cd6-3609aa630650-config-data\") pod \"cinder-ee0a2-volume-lvm-iscsi-0\" (UID: \"b0b0e08b-0a29-40e5-9cd6-3609aa630650\") " pod="openstack/cinder-ee0a2-volume-lvm-iscsi-0" Mar 13 13:12:05.282941 master-0 kubenswrapper[28149]: I0313 13:12:05.282927 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/b0b0e08b-0a29-40e5-9cd6-3609aa630650-etc-nvme\") pod \"cinder-ee0a2-volume-lvm-iscsi-0\" (UID: \"b0b0e08b-0a29-40e5-9cd6-3609aa630650\") " pod="openstack/cinder-ee0a2-volume-lvm-iscsi-0" Mar 13 13:12:05.283062 master-0 kubenswrapper[28149]: I0313 13:12:05.282947 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: 
\"kubernetes.io/host-path/b0b0e08b-0a29-40e5-9cd6-3609aa630650-sys\") pod \"cinder-ee0a2-volume-lvm-iscsi-0\" (UID: \"b0b0e08b-0a29-40e5-9cd6-3609aa630650\") " pod="openstack/cinder-ee0a2-volume-lvm-iscsi-0" Mar 13 13:12:05.283118 master-0 kubenswrapper[28149]: I0313 13:12:05.283065 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/b0b0e08b-0a29-40e5-9cd6-3609aa630650-etc-nvme\") pod \"cinder-ee0a2-volume-lvm-iscsi-0\" (UID: \"b0b0e08b-0a29-40e5-9cd6-3609aa630650\") " pod="openstack/cinder-ee0a2-volume-lvm-iscsi-0" Mar 13 13:12:05.283118 master-0 kubenswrapper[28149]: I0313 13:12:05.283110 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b0b0e08b-0a29-40e5-9cd6-3609aa630650-scripts\") pod \"cinder-ee0a2-volume-lvm-iscsi-0\" (UID: \"b0b0e08b-0a29-40e5-9cd6-3609aa630650\") " pod="openstack/cinder-ee0a2-volume-lvm-iscsi-0" Mar 13 13:12:05.283354 master-0 kubenswrapper[28149]: I0313 13:12:05.283315 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/b0b0e08b-0a29-40e5-9cd6-3609aa630650-run\") pod \"cinder-ee0a2-volume-lvm-iscsi-0\" (UID: \"b0b0e08b-0a29-40e5-9cd6-3609aa630650\") " pod="openstack/cinder-ee0a2-volume-lvm-iscsi-0" Mar 13 13:12:05.283411 master-0 kubenswrapper[28149]: I0313 13:12:05.283375 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/b0b0e08b-0a29-40e5-9cd6-3609aa630650-var-locks-brick\") pod \"cinder-ee0a2-volume-lvm-iscsi-0\" (UID: \"b0b0e08b-0a29-40e5-9cd6-3609aa630650\") " pod="openstack/cinder-ee0a2-volume-lvm-iscsi-0" Mar 13 13:12:05.283448 master-0 kubenswrapper[28149]: I0313 13:12:05.283411 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: 
\"kubernetes.io/secret/b0b0e08b-0a29-40e5-9cd6-3609aa630650-config-data-custom\") pod \"cinder-ee0a2-volume-lvm-iscsi-0\" (UID: \"b0b0e08b-0a29-40e5-9cd6-3609aa630650\") " pod="openstack/cinder-ee0a2-volume-lvm-iscsi-0" Mar 13 13:12:05.283448 master-0 kubenswrapper[28149]: I0313 13:12:05.283434 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/b0b0e08b-0a29-40e5-9cd6-3609aa630650-var-locks-cinder\") pod \"cinder-ee0a2-volume-lvm-iscsi-0\" (UID: \"b0b0e08b-0a29-40e5-9cd6-3609aa630650\") " pod="openstack/cinder-ee0a2-volume-lvm-iscsi-0" Mar 13 13:12:05.283549 master-0 kubenswrapper[28149]: I0313 13:12:05.283524 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/b0b0e08b-0a29-40e5-9cd6-3609aa630650-var-lib-cinder\") pod \"cinder-ee0a2-volume-lvm-iscsi-0\" (UID: \"b0b0e08b-0a29-40e5-9cd6-3609aa630650\") " pod="openstack/cinder-ee0a2-volume-lvm-iscsi-0" Mar 13 13:12:05.283653 master-0 kubenswrapper[28149]: I0313 13:12:05.283618 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/b0b0e08b-0a29-40e5-9cd6-3609aa630650-etc-machine-id\") pod \"cinder-ee0a2-volume-lvm-iscsi-0\" (UID: \"b0b0e08b-0a29-40e5-9cd6-3609aa630650\") " pod="openstack/cinder-ee0a2-volume-lvm-iscsi-0" Mar 13 13:12:05.283717 master-0 kubenswrapper[28149]: I0313 13:12:05.283685 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/b0b0e08b-0a29-40e5-9cd6-3609aa630650-etc-iscsi\") pod \"cinder-ee0a2-volume-lvm-iscsi-0\" (UID: \"b0b0e08b-0a29-40e5-9cd6-3609aa630650\") " pod="openstack/cinder-ee0a2-volume-lvm-iscsi-0" Mar 13 13:12:05.283776 master-0 kubenswrapper[28149]: I0313 13:12:05.283747 28149 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b0b0e08b-0a29-40e5-9cd6-3609aa630650-lib-modules\") pod \"cinder-ee0a2-volume-lvm-iscsi-0\" (UID: \"b0b0e08b-0a29-40e5-9cd6-3609aa630650\") " pod="openstack/cinder-ee0a2-volume-lvm-iscsi-0" Mar 13 13:12:05.283859 master-0 kubenswrapper[28149]: I0313 13:12:05.283836 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/b0b0e08b-0a29-40e5-9cd6-3609aa630650-dev\") pod \"cinder-ee0a2-volume-lvm-iscsi-0\" (UID: \"b0b0e08b-0a29-40e5-9cd6-3609aa630650\") " pod="openstack/cinder-ee0a2-volume-lvm-iscsi-0" Mar 13 13:12:05.283942 master-0 kubenswrapper[28149]: I0313 13:12:05.283919 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4x7t7\" (UniqueName: \"kubernetes.io/projected/b0b0e08b-0a29-40e5-9cd6-3609aa630650-kube-api-access-4x7t7\") pod \"cinder-ee0a2-volume-lvm-iscsi-0\" (UID: \"b0b0e08b-0a29-40e5-9cd6-3609aa630650\") " pod="openstack/cinder-ee0a2-volume-lvm-iscsi-0" Mar 13 13:12:05.290719 master-0 kubenswrapper[28149]: I0313 13:12:05.287848 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/b0b0e08b-0a29-40e5-9cd6-3609aa630650-var-locks-cinder\") pod \"cinder-ee0a2-volume-lvm-iscsi-0\" (UID: \"b0b0e08b-0a29-40e5-9cd6-3609aa630650\") " pod="openstack/cinder-ee0a2-volume-lvm-iscsi-0" Mar 13 13:12:05.290719 master-0 kubenswrapper[28149]: I0313 13:12:05.287957 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/b0b0e08b-0a29-40e5-9cd6-3609aa630650-sys\") pod \"cinder-ee0a2-volume-lvm-iscsi-0\" (UID: \"b0b0e08b-0a29-40e5-9cd6-3609aa630650\") " pod="openstack/cinder-ee0a2-volume-lvm-iscsi-0" Mar 13 13:12:05.290719 master-0 kubenswrapper[28149]: I0313 13:12:05.289517 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"run\" (UniqueName: \"kubernetes.io/host-path/b0b0e08b-0a29-40e5-9cd6-3609aa630650-run\") pod \"cinder-ee0a2-volume-lvm-iscsi-0\" (UID: \"b0b0e08b-0a29-40e5-9cd6-3609aa630650\") " pod="openstack/cinder-ee0a2-volume-lvm-iscsi-0" Mar 13 13:12:05.290719 master-0 kubenswrapper[28149]: I0313 13:12:05.289630 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/b0b0e08b-0a29-40e5-9cd6-3609aa630650-var-locks-brick\") pod \"cinder-ee0a2-volume-lvm-iscsi-0\" (UID: \"b0b0e08b-0a29-40e5-9cd6-3609aa630650\") " pod="openstack/cinder-ee0a2-volume-lvm-iscsi-0" Mar 13 13:12:05.290719 master-0 kubenswrapper[28149]: I0313 13:12:05.290688 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/b0b0e08b-0a29-40e5-9cd6-3609aa630650-etc-iscsi\") pod \"cinder-ee0a2-volume-lvm-iscsi-0\" (UID: \"b0b0e08b-0a29-40e5-9cd6-3609aa630650\") " pod="openstack/cinder-ee0a2-volume-lvm-iscsi-0" Mar 13 13:12:05.291129 master-0 kubenswrapper[28149]: I0313 13:12:05.290776 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/b0b0e08b-0a29-40e5-9cd6-3609aa630650-var-lib-cinder\") pod \"cinder-ee0a2-volume-lvm-iscsi-0\" (UID: \"b0b0e08b-0a29-40e5-9cd6-3609aa630650\") " pod="openstack/cinder-ee0a2-volume-lvm-iscsi-0" Mar 13 13:12:05.291129 master-0 kubenswrapper[28149]: I0313 13:12:05.290810 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/b0b0e08b-0a29-40e5-9cd6-3609aa630650-etc-machine-id\") pod \"cinder-ee0a2-volume-lvm-iscsi-0\" (UID: \"b0b0e08b-0a29-40e5-9cd6-3609aa630650\") " pod="openstack/cinder-ee0a2-volume-lvm-iscsi-0" Mar 13 13:12:05.291129 master-0 kubenswrapper[28149]: I0313 13:12:05.290839 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lib-modules\" 
(UniqueName: \"kubernetes.io/host-path/b0b0e08b-0a29-40e5-9cd6-3609aa630650-lib-modules\") pod \"cinder-ee0a2-volume-lvm-iscsi-0\" (UID: \"b0b0e08b-0a29-40e5-9cd6-3609aa630650\") " pod="openstack/cinder-ee0a2-volume-lvm-iscsi-0" Mar 13 13:12:05.291129 master-0 kubenswrapper[28149]: I0313 13:12:05.290870 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/b0b0e08b-0a29-40e5-9cd6-3609aa630650-dev\") pod \"cinder-ee0a2-volume-lvm-iscsi-0\" (UID: \"b0b0e08b-0a29-40e5-9cd6-3609aa630650\") " pod="openstack/cinder-ee0a2-volume-lvm-iscsi-0" Mar 13 13:12:05.294354 master-0 kubenswrapper[28149]: I0313 13:12:05.292037 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b0b0e08b-0a29-40e5-9cd6-3609aa630650-config-data\") pod \"cinder-ee0a2-volume-lvm-iscsi-0\" (UID: \"b0b0e08b-0a29-40e5-9cd6-3609aa630650\") " pod="openstack/cinder-ee0a2-volume-lvm-iscsi-0" Mar 13 13:12:05.294354 master-0 kubenswrapper[28149]: I0313 13:12:05.292521 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b0b0e08b-0a29-40e5-9cd6-3609aa630650-combined-ca-bundle\") pod \"cinder-ee0a2-volume-lvm-iscsi-0\" (UID: \"b0b0e08b-0a29-40e5-9cd6-3609aa630650\") " pod="openstack/cinder-ee0a2-volume-lvm-iscsi-0" Mar 13 13:12:05.312307 master-0 kubenswrapper[28149]: I0313 13:12:05.307601 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b0b0e08b-0a29-40e5-9cd6-3609aa630650-config-data-custom\") pod \"cinder-ee0a2-volume-lvm-iscsi-0\" (UID: \"b0b0e08b-0a29-40e5-9cd6-3609aa630650\") " pod="openstack/cinder-ee0a2-volume-lvm-iscsi-0" Mar 13 13:12:05.312307 master-0 kubenswrapper[28149]: I0313 13:12:05.309362 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/b0b0e08b-0a29-40e5-9cd6-3609aa630650-scripts\") pod \"cinder-ee0a2-volume-lvm-iscsi-0\" (UID: \"b0b0e08b-0a29-40e5-9cd6-3609aa630650\") " pod="openstack/cinder-ee0a2-volume-lvm-iscsi-0" Mar 13 13:12:05.476396 master-0 kubenswrapper[28149]: I0313 13:12:05.446799 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4x7t7\" (UniqueName: \"kubernetes.io/projected/b0b0e08b-0a29-40e5-9cd6-3609aa630650-kube-api-access-4x7t7\") pod \"cinder-ee0a2-volume-lvm-iscsi-0\" (UID: \"b0b0e08b-0a29-40e5-9cd6-3609aa630650\") " pod="openstack/cinder-ee0a2-volume-lvm-iscsi-0" Mar 13 13:12:05.476396 master-0 kubenswrapper[28149]: I0313 13:12:05.472663 28149 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-ee0a2-volume-lvm-iscsi-0" Mar 13 13:12:05.497357 master-0 kubenswrapper[28149]: I0313 13:12:05.497233 28149 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-ee0a2-backup-0"] Mar 13 13:12:05.534010 master-0 kubenswrapper[28149]: I0313 13:12:05.515387 28149 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-ee0a2-backup-0"] Mar 13 13:12:05.593102 master-0 kubenswrapper[28149]: I0313 13:12:05.589107 28149 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-ee0a2-backup-0"] Mar 13 13:12:05.594620 master-0 kubenswrapper[28149]: I0313 13:12:05.594340 28149 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-ee0a2-backup-0" Mar 13 13:12:05.599834 master-0 kubenswrapper[28149]: I0313 13:12:05.597177 28149 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-ee0a2-backup-config-data" Mar 13 13:12:05.630240 master-0 kubenswrapper[28149]: I0313 13:12:05.629921 28149 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-ee0a2-backup-0"] Mar 13 13:12:05.654292 master-0 kubenswrapper[28149]: I0313 13:12:05.645166 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j55nz\" (UniqueName: \"kubernetes.io/projected/aeef3e73-d29d-456c-a41a-8f478df6e975-kube-api-access-j55nz\") pod \"cinder-ee0a2-backup-0\" (UID: \"aeef3e73-d29d-456c-a41a-8f478df6e975\") " pod="openstack/cinder-ee0a2-backup-0" Mar 13 13:12:05.654292 master-0 kubenswrapper[28149]: I0313 13:12:05.645249 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/aeef3e73-d29d-456c-a41a-8f478df6e975-scripts\") pod \"cinder-ee0a2-backup-0\" (UID: \"aeef3e73-d29d-456c-a41a-8f478df6e975\") " pod="openstack/cinder-ee0a2-backup-0" Mar 13 13:12:05.654292 master-0 kubenswrapper[28149]: I0313 13:12:05.645321 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aeef3e73-d29d-456c-a41a-8f478df6e975-combined-ca-bundle\") pod \"cinder-ee0a2-backup-0\" (UID: \"aeef3e73-d29d-456c-a41a-8f478df6e975\") " pod="openstack/cinder-ee0a2-backup-0" Mar 13 13:12:05.654292 master-0 kubenswrapper[28149]: I0313 13:12:05.645348 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/aeef3e73-d29d-456c-a41a-8f478df6e975-etc-machine-id\") pod \"cinder-ee0a2-backup-0\" (UID: 
\"aeef3e73-d29d-456c-a41a-8f478df6e975\") " pod="openstack/cinder-ee0a2-backup-0" Mar 13 13:12:05.654292 master-0 kubenswrapper[28149]: I0313 13:12:05.645391 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/aeef3e73-d29d-456c-a41a-8f478df6e975-config-data-custom\") pod \"cinder-ee0a2-backup-0\" (UID: \"aeef3e73-d29d-456c-a41a-8f478df6e975\") " pod="openstack/cinder-ee0a2-backup-0" Mar 13 13:12:05.654292 master-0 kubenswrapper[28149]: I0313 13:12:05.645431 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/aeef3e73-d29d-456c-a41a-8f478df6e975-dev\") pod \"cinder-ee0a2-backup-0\" (UID: \"aeef3e73-d29d-456c-a41a-8f478df6e975\") " pod="openstack/cinder-ee0a2-backup-0" Mar 13 13:12:05.654292 master-0 kubenswrapper[28149]: I0313 13:12:05.645521 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/aeef3e73-d29d-456c-a41a-8f478df6e975-var-locks-brick\") pod \"cinder-ee0a2-backup-0\" (UID: \"aeef3e73-d29d-456c-a41a-8f478df6e975\") " pod="openstack/cinder-ee0a2-backup-0" Mar 13 13:12:05.654292 master-0 kubenswrapper[28149]: I0313 13:12:05.645549 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/aeef3e73-d29d-456c-a41a-8f478df6e975-run\") pod \"cinder-ee0a2-backup-0\" (UID: \"aeef3e73-d29d-456c-a41a-8f478df6e975\") " pod="openstack/cinder-ee0a2-backup-0" Mar 13 13:12:05.654292 master-0 kubenswrapper[28149]: I0313 13:12:05.645638 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/aeef3e73-d29d-456c-a41a-8f478df6e975-sys\") pod \"cinder-ee0a2-backup-0\" (UID: 
\"aeef3e73-d29d-456c-a41a-8f478df6e975\") " pod="openstack/cinder-ee0a2-backup-0" Mar 13 13:12:05.654292 master-0 kubenswrapper[28149]: I0313 13:12:05.645664 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/aeef3e73-d29d-456c-a41a-8f478df6e975-var-locks-cinder\") pod \"cinder-ee0a2-backup-0\" (UID: \"aeef3e73-d29d-456c-a41a-8f478df6e975\") " pod="openstack/cinder-ee0a2-backup-0" Mar 13 13:12:05.654292 master-0 kubenswrapper[28149]: I0313 13:12:05.645752 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aeef3e73-d29d-456c-a41a-8f478df6e975-config-data\") pod \"cinder-ee0a2-backup-0\" (UID: \"aeef3e73-d29d-456c-a41a-8f478df6e975\") " pod="openstack/cinder-ee0a2-backup-0" Mar 13 13:12:05.654292 master-0 kubenswrapper[28149]: I0313 13:12:05.645773 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/aeef3e73-d29d-456c-a41a-8f478df6e975-etc-iscsi\") pod \"cinder-ee0a2-backup-0\" (UID: \"aeef3e73-d29d-456c-a41a-8f478df6e975\") " pod="openstack/cinder-ee0a2-backup-0" Mar 13 13:12:05.654292 master-0 kubenswrapper[28149]: I0313 13:12:05.645811 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/aeef3e73-d29d-456c-a41a-8f478df6e975-etc-nvme\") pod \"cinder-ee0a2-backup-0\" (UID: \"aeef3e73-d29d-456c-a41a-8f478df6e975\") " pod="openstack/cinder-ee0a2-backup-0" Mar 13 13:12:05.654292 master-0 kubenswrapper[28149]: I0313 13:12:05.645908 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/aeef3e73-d29d-456c-a41a-8f478df6e975-var-lib-cinder\") pod 
\"cinder-ee0a2-backup-0\" (UID: \"aeef3e73-d29d-456c-a41a-8f478df6e975\") " pod="openstack/cinder-ee0a2-backup-0" Mar 13 13:12:05.654292 master-0 kubenswrapper[28149]: I0313 13:12:05.646008 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/aeef3e73-d29d-456c-a41a-8f478df6e975-lib-modules\") pod \"cinder-ee0a2-backup-0\" (UID: \"aeef3e73-d29d-456c-a41a-8f478df6e975\") " pod="openstack/cinder-ee0a2-backup-0" Mar 13 13:12:05.748062 master-0 kubenswrapper[28149]: I0313 13:12:05.747933 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/aeef3e73-d29d-456c-a41a-8f478df6e975-sys\") pod \"cinder-ee0a2-backup-0\" (UID: \"aeef3e73-d29d-456c-a41a-8f478df6e975\") " pod="openstack/cinder-ee0a2-backup-0" Mar 13 13:12:05.748481 master-0 kubenswrapper[28149]: I0313 13:12:05.748463 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/aeef3e73-d29d-456c-a41a-8f478df6e975-var-locks-cinder\") pod \"cinder-ee0a2-backup-0\" (UID: \"aeef3e73-d29d-456c-a41a-8f478df6e975\") " pod="openstack/cinder-ee0a2-backup-0" Mar 13 13:12:05.748619 master-0 kubenswrapper[28149]: I0313 13:12:05.748604 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aeef3e73-d29d-456c-a41a-8f478df6e975-config-data\") pod \"cinder-ee0a2-backup-0\" (UID: \"aeef3e73-d29d-456c-a41a-8f478df6e975\") " pod="openstack/cinder-ee0a2-backup-0" Mar 13 13:12:05.748712 master-0 kubenswrapper[28149]: I0313 13:12:05.748698 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/aeef3e73-d29d-456c-a41a-8f478df6e975-etc-iscsi\") pod \"cinder-ee0a2-backup-0\" (UID: \"aeef3e73-d29d-456c-a41a-8f478df6e975\") " 
pod="openstack/cinder-ee0a2-backup-0" Mar 13 13:12:05.748818 master-0 kubenswrapper[28149]: I0313 13:12:05.748805 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/aeef3e73-d29d-456c-a41a-8f478df6e975-etc-nvme\") pod \"cinder-ee0a2-backup-0\" (UID: \"aeef3e73-d29d-456c-a41a-8f478df6e975\") " pod="openstack/cinder-ee0a2-backup-0" Mar 13 13:12:05.748975 master-0 kubenswrapper[28149]: I0313 13:12:05.748952 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/aeef3e73-d29d-456c-a41a-8f478df6e975-var-lib-cinder\") pod \"cinder-ee0a2-backup-0\" (UID: \"aeef3e73-d29d-456c-a41a-8f478df6e975\") " pod="openstack/cinder-ee0a2-backup-0" Mar 13 13:12:05.749091 master-0 kubenswrapper[28149]: I0313 13:12:05.749076 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/aeef3e73-d29d-456c-a41a-8f478df6e975-lib-modules\") pod \"cinder-ee0a2-backup-0\" (UID: \"aeef3e73-d29d-456c-a41a-8f478df6e975\") " pod="openstack/cinder-ee0a2-backup-0" Mar 13 13:12:05.749217 master-0 kubenswrapper[28149]: I0313 13:12:05.749203 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j55nz\" (UniqueName: \"kubernetes.io/projected/aeef3e73-d29d-456c-a41a-8f478df6e975-kube-api-access-j55nz\") pod \"cinder-ee0a2-backup-0\" (UID: \"aeef3e73-d29d-456c-a41a-8f478df6e975\") " pod="openstack/cinder-ee0a2-backup-0" Mar 13 13:12:05.749330 master-0 kubenswrapper[28149]: I0313 13:12:05.749317 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/aeef3e73-d29d-456c-a41a-8f478df6e975-scripts\") pod \"cinder-ee0a2-backup-0\" (UID: \"aeef3e73-d29d-456c-a41a-8f478df6e975\") " pod="openstack/cinder-ee0a2-backup-0" Mar 13 13:12:05.749443 master-0 
kubenswrapper[28149]: I0313 13:12:05.749430 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aeef3e73-d29d-456c-a41a-8f478df6e975-combined-ca-bundle\") pod \"cinder-ee0a2-backup-0\" (UID: \"aeef3e73-d29d-456c-a41a-8f478df6e975\") " pod="openstack/cinder-ee0a2-backup-0" Mar 13 13:12:05.749523 master-0 kubenswrapper[28149]: I0313 13:12:05.749508 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/aeef3e73-d29d-456c-a41a-8f478df6e975-etc-machine-id\") pod \"cinder-ee0a2-backup-0\" (UID: \"aeef3e73-d29d-456c-a41a-8f478df6e975\") " pod="openstack/cinder-ee0a2-backup-0" Mar 13 13:12:05.749624 master-0 kubenswrapper[28149]: I0313 13:12:05.749610 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/aeef3e73-d29d-456c-a41a-8f478df6e975-config-data-custom\") pod \"cinder-ee0a2-backup-0\" (UID: \"aeef3e73-d29d-456c-a41a-8f478df6e975\") " pod="openstack/cinder-ee0a2-backup-0" Mar 13 13:12:05.749710 master-0 kubenswrapper[28149]: I0313 13:12:05.749697 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/aeef3e73-d29d-456c-a41a-8f478df6e975-dev\") pod \"cinder-ee0a2-backup-0\" (UID: \"aeef3e73-d29d-456c-a41a-8f478df6e975\") " pod="openstack/cinder-ee0a2-backup-0" Mar 13 13:12:05.749840 master-0 kubenswrapper[28149]: I0313 13:12:05.749826 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/aeef3e73-d29d-456c-a41a-8f478df6e975-var-locks-brick\") pod \"cinder-ee0a2-backup-0\" (UID: \"aeef3e73-d29d-456c-a41a-8f478df6e975\") " pod="openstack/cinder-ee0a2-backup-0" Mar 13 13:12:05.749932 master-0 kubenswrapper[28149]: I0313 13:12:05.749919 28149 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/aeef3e73-d29d-456c-a41a-8f478df6e975-run\") pod \"cinder-ee0a2-backup-0\" (UID: \"aeef3e73-d29d-456c-a41a-8f478df6e975\") " pod="openstack/cinder-ee0a2-backup-0" Mar 13 13:12:05.750104 master-0 kubenswrapper[28149]: I0313 13:12:05.750091 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run\" (UniqueName: \"kubernetes.io/host-path/aeef3e73-d29d-456c-a41a-8f478df6e975-run\") pod \"cinder-ee0a2-backup-0\" (UID: \"aeef3e73-d29d-456c-a41a-8f478df6e975\") " pod="openstack/cinder-ee0a2-backup-0" Mar 13 13:12:05.759679 master-0 kubenswrapper[28149]: I0313 13:12:05.750156 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/aeef3e73-d29d-456c-a41a-8f478df6e975-lib-modules\") pod \"cinder-ee0a2-backup-0\" (UID: \"aeef3e73-d29d-456c-a41a-8f478df6e975\") " pod="openstack/cinder-ee0a2-backup-0" Mar 13 13:12:05.759857 master-0 kubenswrapper[28149]: I0313 13:12:05.750194 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/aeef3e73-d29d-456c-a41a-8f478df6e975-sys\") pod \"cinder-ee0a2-backup-0\" (UID: \"aeef3e73-d29d-456c-a41a-8f478df6e975\") " pod="openstack/cinder-ee0a2-backup-0" Mar 13 13:12:05.759976 master-0 kubenswrapper[28149]: I0313 13:12:05.750235 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/aeef3e73-d29d-456c-a41a-8f478df6e975-var-locks-cinder\") pod \"cinder-ee0a2-backup-0\" (UID: \"aeef3e73-d29d-456c-a41a-8f478df6e975\") " pod="openstack/cinder-ee0a2-backup-0" Mar 13 13:12:05.760064 master-0 kubenswrapper[28149]: I0313 13:12:05.751217 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-nvme\" (UniqueName: 
\"kubernetes.io/host-path/aeef3e73-d29d-456c-a41a-8f478df6e975-etc-nvme\") pod \"cinder-ee0a2-backup-0\" (UID: \"aeef3e73-d29d-456c-a41a-8f478df6e975\") " pod="openstack/cinder-ee0a2-backup-0" Mar 13 13:12:05.760140 master-0 kubenswrapper[28149]: I0313 13:12:05.751239 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/aeef3e73-d29d-456c-a41a-8f478df6e975-etc-iscsi\") pod \"cinder-ee0a2-backup-0\" (UID: \"aeef3e73-d29d-456c-a41a-8f478df6e975\") " pod="openstack/cinder-ee0a2-backup-0" Mar 13 13:12:05.760230 master-0 kubenswrapper[28149]: I0313 13:12:05.751258 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/aeef3e73-d29d-456c-a41a-8f478df6e975-etc-machine-id\") pod \"cinder-ee0a2-backup-0\" (UID: \"aeef3e73-d29d-456c-a41a-8f478df6e975\") " pod="openstack/cinder-ee0a2-backup-0" Mar 13 13:12:05.760317 master-0 kubenswrapper[28149]: I0313 13:12:05.752383 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/aeef3e73-d29d-456c-a41a-8f478df6e975-var-lib-cinder\") pod \"cinder-ee0a2-backup-0\" (UID: \"aeef3e73-d29d-456c-a41a-8f478df6e975\") " pod="openstack/cinder-ee0a2-backup-0" Mar 13 13:12:05.760408 master-0 kubenswrapper[28149]: I0313 13:12:05.752454 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/aeef3e73-d29d-456c-a41a-8f478df6e975-dev\") pod \"cinder-ee0a2-backup-0\" (UID: \"aeef3e73-d29d-456c-a41a-8f478df6e975\") " pod="openstack/cinder-ee0a2-backup-0" Mar 13 13:12:05.771051 master-0 kubenswrapper[28149]: I0313 13:12:05.757295 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/aeef3e73-d29d-456c-a41a-8f478df6e975-var-locks-brick\") pod \"cinder-ee0a2-backup-0\" (UID: 
\"aeef3e73-d29d-456c-a41a-8f478df6e975\") " pod="openstack/cinder-ee0a2-backup-0" Mar 13 13:12:05.771343 master-0 kubenswrapper[28149]: I0313 13:12:05.757444 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/aeef3e73-d29d-456c-a41a-8f478df6e975-config-data-custom\") pod \"cinder-ee0a2-backup-0\" (UID: \"aeef3e73-d29d-456c-a41a-8f478df6e975\") " pod="openstack/cinder-ee0a2-backup-0" Mar 13 13:12:05.771436 master-0 kubenswrapper[28149]: I0313 13:12:05.765097 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/aeef3e73-d29d-456c-a41a-8f478df6e975-scripts\") pod \"cinder-ee0a2-backup-0\" (UID: \"aeef3e73-d29d-456c-a41a-8f478df6e975\") " pod="openstack/cinder-ee0a2-backup-0" Mar 13 13:12:05.783107 master-0 kubenswrapper[28149]: I0313 13:12:05.783065 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aeef3e73-d29d-456c-a41a-8f478df6e975-combined-ca-bundle\") pod \"cinder-ee0a2-backup-0\" (UID: \"aeef3e73-d29d-456c-a41a-8f478df6e975\") " pod="openstack/cinder-ee0a2-backup-0" Mar 13 13:12:05.783431 master-0 kubenswrapper[28149]: I0313 13:12:05.783377 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aeef3e73-d29d-456c-a41a-8f478df6e975-config-data\") pod \"cinder-ee0a2-backup-0\" (UID: \"aeef3e73-d29d-456c-a41a-8f478df6e975\") " pod="openstack/cinder-ee0a2-backup-0" Mar 13 13:12:05.789201 master-0 kubenswrapper[28149]: I0313 13:12:05.788065 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j55nz\" (UniqueName: \"kubernetes.io/projected/aeef3e73-d29d-456c-a41a-8f478df6e975-kube-api-access-j55nz\") pod \"cinder-ee0a2-backup-0\" (UID: \"aeef3e73-d29d-456c-a41a-8f478df6e975\") " pod="openstack/cinder-ee0a2-backup-0" Mar 13 
13:12:05.995495 master-0 kubenswrapper[28149]: I0313 13:12:05.995401 28149 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-67b494447c-js6kq" Mar 13 13:12:06.047378 master-0 kubenswrapper[28149]: I0313 13:12:06.047305 28149 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-ee0a2-backup-0" Mar 13 13:12:06.175354 master-0 kubenswrapper[28149]: I0313 13:12:06.174126 28149 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-f9957b47c-swh76"] Mar 13 13:12:06.175354 master-0 kubenswrapper[28149]: I0313 13:12:06.174378 28149 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-f9957b47c-swh76" podUID="1cb05e83-a753-4f24-b578-d7b8996d39b7" containerName="dnsmasq-dns" containerID="cri-o://65e50548c2a68ea8d377a853fa87803a8ba573510bcf5bbc4b25f130ca9f0d8b" gracePeriod=10 Mar 13 13:12:06.288731 master-0 kubenswrapper[28149]: I0313 13:12:06.288680 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-ee0a2-scheduler-0" event={"ID":"db83bac9-e722-4e4f-aad6-eba4fdcbaedb","Type":"ContainerStarted","Data":"3af212b612fff67cc418287f3b38f3ef516620d84d48479036f950bca08759d0"} Mar 13 13:12:06.688171 master-0 kubenswrapper[28149]: I0313 13:12:06.687468 28149 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-f9957b47c-swh76" podUID="1cb05e83-a753-4f24-b578-d7b8996d39b7" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.128.0.226:5353: connect: connection refused" Mar 13 13:12:06.733745 master-0 kubenswrapper[28149]: I0313 13:12:06.733626 28149 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5542dffa-edbf-4133-b7cc-2631121726dc" path="/var/lib/kubelet/pods/5542dffa-edbf-4133-b7cc-2631121726dc/volumes" Mar 13 13:12:06.737202 master-0 kubenswrapper[28149]: I0313 13:12:06.737123 28149 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes 
dir" podUID="a08dce85-d5c4-44e4-a3b0-e404c53b62f2" path="/var/lib/kubelet/pods/a08dce85-d5c4-44e4-a3b0-e404c53b62f2/volumes" Mar 13 13:12:07.704212 master-0 kubenswrapper[28149]: I0313 13:12:07.702510 28149 scope.go:117] "RemoveContainer" containerID="727d8e30c5ed7c69b55a98f2363e3c8df4840dde2b23de1158ff5b34eb4d3617" Mar 13 13:12:08.339238 master-0 kubenswrapper[28149]: I0313 13:12:08.329433 28149 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-78f868d9fc-8d9cf" Mar 13 13:12:08.456420 master-0 kubenswrapper[28149]: I0313 13:12:08.427997 28149 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstackclient"] Mar 13 13:12:08.456420 master-0 kubenswrapper[28149]: I0313 13:12:08.433754 28149 generic.go:334] "Generic (PLEG): container finished" podID="ea4701c8-792f-4a27-948e-cc2d36ad5739" containerID="0de68570e0c25f556b25a0d514a40bf6e6b23fdd944c31b42c9a5dee0c0f377f" exitCode=0 Mar 13 13:12:08.472010 master-0 kubenswrapper[28149]: I0313 13:12:08.464631 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-699d7776-9kkdk" event={"ID":"ea4701c8-792f-4a27-948e-cc2d36ad5739","Type":"ContainerDied","Data":"0de68570e0c25f556b25a0d514a40bf6e6b23fdd944c31b42c9a5dee0c0f377f"} Mar 13 13:12:08.472010 master-0 kubenswrapper[28149]: I0313 13:12:08.464789 28149 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Mar 13 13:12:08.475249 master-0 kubenswrapper[28149]: I0313 13:12:08.473586 28149 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-config-secret" Mar 13 13:12:08.475249 master-0 kubenswrapper[28149]: I0313 13:12:08.473845 28149 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config" Mar 13 13:12:08.501345 master-0 kubenswrapper[28149]: I0313 13:12:08.483882 28149 generic.go:334] "Generic (PLEG): container finished" podID="1cb05e83-a753-4f24-b578-d7b8996d39b7" containerID="65e50548c2a68ea8d377a853fa87803a8ba573510bcf5bbc4b25f130ca9f0d8b" exitCode=0 Mar 13 13:12:08.501345 master-0 kubenswrapper[28149]: I0313 13:12:08.483934 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-f9957b47c-swh76" event={"ID":"1cb05e83-a753-4f24-b578-d7b8996d39b7","Type":"ContainerDied","Data":"65e50548c2a68ea8d377a853fa87803a8ba573510bcf5bbc4b25f130ca9f0d8b"} Mar 13 13:12:08.509203 master-0 kubenswrapper[28149]: I0313 13:12:08.508323 28149 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Mar 13 13:12:08.617300 master-0 kubenswrapper[28149]: I0313 13:12:08.616312 28149 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-768869957b-ffkcl"] Mar 13 13:12:08.617300 master-0 kubenswrapper[28149]: I0313 13:12:08.616760 28149 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-768869957b-ffkcl" podUID="c8579a3e-7e92-42d9-b21f-6339bc1ebb4f" containerName="neutron-api" containerID="cri-o://12eeb73a39f53d8e14eda8ec5a01ccf4ca5f504668906bb2b70963fdeddd747e" gracePeriod=30 Mar 13 13:12:08.617691 master-0 kubenswrapper[28149]: I0313 13:12:08.617493 28149 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-768869957b-ffkcl" podUID="c8579a3e-7e92-42d9-b21f-6339bc1ebb4f" containerName="neutron-httpd" 
containerID="cri-o://656b9643de464c1cf24dc958794267f1f16c7713d8cd39047d8a4b7430c00e0f" gracePeriod=30 Mar 13 13:12:08.632395 master-0 kubenswrapper[28149]: I0313 13:12:08.630687 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/648b68a5-c28c-4322-893b-c1ac80172c6f-openstack-config-secret\") pod \"openstackclient\" (UID: \"648b68a5-c28c-4322-893b-c1ac80172c6f\") " pod="openstack/openstackclient" Mar 13 13:12:08.632395 master-0 kubenswrapper[28149]: I0313 13:12:08.630783 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/648b68a5-c28c-4322-893b-c1ac80172c6f-openstack-config\") pod \"openstackclient\" (UID: \"648b68a5-c28c-4322-893b-c1ac80172c6f\") " pod="openstack/openstackclient" Mar 13 13:12:08.632395 master-0 kubenswrapper[28149]: I0313 13:12:08.630865 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6lxfb\" (UniqueName: \"kubernetes.io/projected/648b68a5-c28c-4322-893b-c1ac80172c6f-kube-api-access-6lxfb\") pod \"openstackclient\" (UID: \"648b68a5-c28c-4322-893b-c1ac80172c6f\") " pod="openstack/openstackclient" Mar 13 13:12:08.632395 master-0 kubenswrapper[28149]: I0313 13:12:08.631015 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/648b68a5-c28c-4322-893b-c1ac80172c6f-combined-ca-bundle\") pod \"openstackclient\" (UID: \"648b68a5-c28c-4322-893b-c1ac80172c6f\") " pod="openstack/openstackclient" Mar 13 13:12:08.735188 master-0 kubenswrapper[28149]: I0313 13:12:08.733631 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6lxfb\" (UniqueName: 
\"kubernetes.io/projected/648b68a5-c28c-4322-893b-c1ac80172c6f-kube-api-access-6lxfb\") pod \"openstackclient\" (UID: \"648b68a5-c28c-4322-893b-c1ac80172c6f\") " pod="openstack/openstackclient" Mar 13 13:12:08.735188 master-0 kubenswrapper[28149]: I0313 13:12:08.733788 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/648b68a5-c28c-4322-893b-c1ac80172c6f-combined-ca-bundle\") pod \"openstackclient\" (UID: \"648b68a5-c28c-4322-893b-c1ac80172c6f\") " pod="openstack/openstackclient" Mar 13 13:12:08.735188 master-0 kubenswrapper[28149]: I0313 13:12:08.733934 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/648b68a5-c28c-4322-893b-c1ac80172c6f-openstack-config-secret\") pod \"openstackclient\" (UID: \"648b68a5-c28c-4322-893b-c1ac80172c6f\") " pod="openstack/openstackclient" Mar 13 13:12:08.735188 master-0 kubenswrapper[28149]: I0313 13:12:08.733975 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/648b68a5-c28c-4322-893b-c1ac80172c6f-openstack-config\") pod \"openstackclient\" (UID: \"648b68a5-c28c-4322-893b-c1ac80172c6f\") " pod="openstack/openstackclient" Mar 13 13:12:08.735188 master-0 kubenswrapper[28149]: I0313 13:12:08.734907 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/648b68a5-c28c-4322-893b-c1ac80172c6f-openstack-config\") pod \"openstackclient\" (UID: \"648b68a5-c28c-4322-893b-c1ac80172c6f\") " pod="openstack/openstackclient" Mar 13 13:12:08.739079 master-0 kubenswrapper[28149]: I0313 13:12:08.739021 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/648b68a5-c28c-4322-893b-c1ac80172c6f-openstack-config-secret\") 
pod \"openstackclient\" (UID: \"648b68a5-c28c-4322-893b-c1ac80172c6f\") " pod="openstack/openstackclient" Mar 13 13:12:08.746785 master-0 kubenswrapper[28149]: I0313 13:12:08.746727 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/648b68a5-c28c-4322-893b-c1ac80172c6f-combined-ca-bundle\") pod \"openstackclient\" (UID: \"648b68a5-c28c-4322-893b-c1ac80172c6f\") " pod="openstack/openstackclient" Mar 13 13:12:08.754362 master-0 kubenswrapper[28149]: I0313 13:12:08.754291 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6lxfb\" (UniqueName: \"kubernetes.io/projected/648b68a5-c28c-4322-893b-c1ac80172c6f-kube-api-access-6lxfb\") pod \"openstackclient\" (UID: \"648b68a5-c28c-4322-893b-c1ac80172c6f\") " pod="openstack/openstackclient" Mar 13 13:12:08.871884 master-0 kubenswrapper[28149]: I0313 13:12:08.871746 28149 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Mar 13 13:12:09.384082 master-0 kubenswrapper[28149]: I0313 13:12:09.373441 28149 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-699d7776-9kkdk" Mar 13 13:12:09.432324 master-0 kubenswrapper[28149]: I0313 13:12:09.432067 28149 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-f9957b47c-swh76" Mar 13 13:12:09.736804 master-0 kubenswrapper[28149]: I0313 13:12:09.722240 28149 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ea4701c8-792f-4a27-948e-cc2d36ad5739-logs\") pod \"ea4701c8-792f-4a27-948e-cc2d36ad5739\" (UID: \"ea4701c8-792f-4a27-948e-cc2d36ad5739\") " Mar 13 13:12:09.778682 master-0 kubenswrapper[28149]: I0313 13:12:09.742860 28149 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ea4701c8-792f-4a27-948e-cc2d36ad5739-internal-tls-certs\") pod \"ea4701c8-792f-4a27-948e-cc2d36ad5739\" (UID: \"ea4701c8-792f-4a27-948e-cc2d36ad5739\") " Mar 13 13:12:09.778682 master-0 kubenswrapper[28149]: I0313 13:12:09.743102 28149 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6zpst\" (UniqueName: \"kubernetes.io/projected/ea4701c8-792f-4a27-948e-cc2d36ad5739-kube-api-access-6zpst\") pod \"ea4701c8-792f-4a27-948e-cc2d36ad5739\" (UID: \"ea4701c8-792f-4a27-948e-cc2d36ad5739\") " Mar 13 13:12:09.778682 master-0 kubenswrapper[28149]: I0313 13:12:09.743606 28149 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p64x9\" (UniqueName: \"kubernetes.io/projected/1cb05e83-a753-4f24-b578-d7b8996d39b7-kube-api-access-p64x9\") pod \"1cb05e83-a753-4f24-b578-d7b8996d39b7\" (UID: \"1cb05e83-a753-4f24-b578-d7b8996d39b7\") " Mar 13 13:12:09.778682 master-0 kubenswrapper[28149]: I0313 13:12:09.743687 28149 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ea4701c8-792f-4a27-948e-cc2d36ad5739-public-tls-certs\") pod \"ea4701c8-792f-4a27-948e-cc2d36ad5739\" (UID: \"ea4701c8-792f-4a27-948e-cc2d36ad5739\") " Mar 13 13:12:09.778682 master-0 kubenswrapper[28149]: I0313 13:12:09.743728 
28149 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1cb05e83-a753-4f24-b578-d7b8996d39b7-config\") pod \"1cb05e83-a753-4f24-b578-d7b8996d39b7\" (UID: \"1cb05e83-a753-4f24-b578-d7b8996d39b7\") " Mar 13 13:12:09.778682 master-0 kubenswrapper[28149]: I0313 13:12:09.743759 28149 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1cb05e83-a753-4f24-b578-d7b8996d39b7-dns-svc\") pod \"1cb05e83-a753-4f24-b578-d7b8996d39b7\" (UID: \"1cb05e83-a753-4f24-b578-d7b8996d39b7\") " Mar 13 13:12:09.778682 master-0 kubenswrapper[28149]: I0313 13:12:09.743861 28149 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1cb05e83-a753-4f24-b578-d7b8996d39b7-ovsdbserver-sb\") pod \"1cb05e83-a753-4f24-b578-d7b8996d39b7\" (UID: \"1cb05e83-a753-4f24-b578-d7b8996d39b7\") " Mar 13 13:12:09.778682 master-0 kubenswrapper[28149]: I0313 13:12:09.743895 28149 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ea4701c8-792f-4a27-948e-cc2d36ad5739-config-data\") pod \"ea4701c8-792f-4a27-948e-cc2d36ad5739\" (UID: \"ea4701c8-792f-4a27-948e-cc2d36ad5739\") " Mar 13 13:12:09.778682 master-0 kubenswrapper[28149]: I0313 13:12:09.743919 28149 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1cb05e83-a753-4f24-b578-d7b8996d39b7-ovsdbserver-nb\") pod \"1cb05e83-a753-4f24-b578-d7b8996d39b7\" (UID: \"1cb05e83-a753-4f24-b578-d7b8996d39b7\") " Mar 13 13:12:09.778682 master-0 kubenswrapper[28149]: I0313 13:12:09.743966 28149 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: 
\"kubernetes.io/configmap/1cb05e83-a753-4f24-b578-d7b8996d39b7-dns-swift-storage-0\") pod \"1cb05e83-a753-4f24-b578-d7b8996d39b7\" (UID: \"1cb05e83-a753-4f24-b578-d7b8996d39b7\") " Mar 13 13:12:09.778682 master-0 kubenswrapper[28149]: I0313 13:12:09.744021 28149 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ea4701c8-792f-4a27-948e-cc2d36ad5739-combined-ca-bundle\") pod \"ea4701c8-792f-4a27-948e-cc2d36ad5739\" (UID: \"ea4701c8-792f-4a27-948e-cc2d36ad5739\") " Mar 13 13:12:09.778682 master-0 kubenswrapper[28149]: I0313 13:12:09.744050 28149 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ea4701c8-792f-4a27-948e-cc2d36ad5739-scripts\") pod \"ea4701c8-792f-4a27-948e-cc2d36ad5739\" (UID: \"ea4701c8-792f-4a27-948e-cc2d36ad5739\") " Mar 13 13:12:09.778682 master-0 kubenswrapper[28149]: I0313 13:12:09.751330 28149 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ea4701c8-792f-4a27-948e-cc2d36ad5739-logs" (OuterVolumeSpecName: "logs") pod "ea4701c8-792f-4a27-948e-cc2d36ad5739" (UID: "ea4701c8-792f-4a27-948e-cc2d36ad5739"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 13 13:12:09.818158 master-0 kubenswrapper[28149]: I0313 13:12:09.818053 28149 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1cb05e83-a753-4f24-b578-d7b8996d39b7-kube-api-access-p64x9" (OuterVolumeSpecName: "kube-api-access-p64x9") pod "1cb05e83-a753-4f24-b578-d7b8996d39b7" (UID: "1cb05e83-a753-4f24-b578-d7b8996d39b7"). InnerVolumeSpecName "kube-api-access-p64x9". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 13:12:09.836054 master-0 kubenswrapper[28149]: I0313 13:12:09.820506 28149 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ea4701c8-792f-4a27-948e-cc2d36ad5739-scripts" (OuterVolumeSpecName: "scripts") pod "ea4701c8-792f-4a27-948e-cc2d36ad5739" (UID: "ea4701c8-792f-4a27-948e-cc2d36ad5739"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 13:12:09.836054 master-0 kubenswrapper[28149]: I0313 13:12:09.820526 28149 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ea4701c8-792f-4a27-948e-cc2d36ad5739-kube-api-access-6zpst" (OuterVolumeSpecName: "kube-api-access-6zpst") pod "ea4701c8-792f-4a27-948e-cc2d36ad5739" (UID: "ea4701c8-792f-4a27-948e-cc2d36ad5739"). InnerVolumeSpecName "kube-api-access-6zpst". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 13:12:09.865458 master-0 kubenswrapper[28149]: I0313 13:12:09.862373 28149 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p64x9\" (UniqueName: \"kubernetes.io/projected/1cb05e83-a753-4f24-b578-d7b8996d39b7-kube-api-access-p64x9\") on node \"master-0\" DevicePath \"\"" Mar 13 13:12:09.867616 master-0 kubenswrapper[28149]: I0313 13:12:09.866780 28149 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ea4701c8-792f-4a27-948e-cc2d36ad5739-scripts\") on node \"master-0\" DevicePath \"\"" Mar 13 13:12:09.867616 master-0 kubenswrapper[28149]: I0313 13:12:09.866848 28149 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ea4701c8-792f-4a27-948e-cc2d36ad5739-logs\") on node \"master-0\" DevicePath \"\"" Mar 13 13:12:09.867616 master-0 kubenswrapper[28149]: I0313 13:12:09.866865 28149 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6zpst\" (UniqueName: 
\"kubernetes.io/projected/ea4701c8-792f-4a27-948e-cc2d36ad5739-kube-api-access-6zpst\") on node \"master-0\" DevicePath \"\"" Mar 13 13:12:09.882300 master-0 kubenswrapper[28149]: I0313 13:12:09.881580 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-f9957b47c-swh76" event={"ID":"1cb05e83-a753-4f24-b578-d7b8996d39b7","Type":"ContainerDied","Data":"2df5670ee4f92554c68eaff9789f5bde338b7233a3c34e4948d3e41245ce2405"} Mar 13 13:12:09.882300 master-0 kubenswrapper[28149]: I0313 13:12:09.881638 28149 scope.go:117] "RemoveContainer" containerID="65e50548c2a68ea8d377a853fa87803a8ba573510bcf5bbc4b25f130ca9f0d8b" Mar 13 13:12:09.882300 master-0 kubenswrapper[28149]: I0313 13:12:09.881825 28149 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-f9957b47c-swh76" Mar 13 13:12:09.908009 master-0 kubenswrapper[28149]: I0313 13:12:09.907839 28149 generic.go:334] "Generic (PLEG): container finished" podID="c8579a3e-7e92-42d9-b21f-6339bc1ebb4f" containerID="656b9643de464c1cf24dc958794267f1f16c7713d8cd39047d8a4b7430c00e0f" exitCode=0 Mar 13 13:12:09.908009 master-0 kubenswrapper[28149]: I0313 13:12:09.907962 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-768869957b-ffkcl" event={"ID":"c8579a3e-7e92-42d9-b21f-6339bc1ebb4f","Type":"ContainerDied","Data":"656b9643de464c1cf24dc958794267f1f16c7713d8cd39047d8a4b7430c00e0f"} Mar 13 13:12:09.938550 master-0 kubenswrapper[28149]: I0313 13:12:09.937377 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-699d7776-9kkdk" event={"ID":"ea4701c8-792f-4a27-948e-cc2d36ad5739","Type":"ContainerDied","Data":"07fe23b1b5b47b55f113f422e7f7413d0f49d322ce63394c42771441224379f3"} Mar 13 13:12:09.938550 master-0 kubenswrapper[28149]: I0313 13:12:09.937489 28149 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-699d7776-9kkdk" Mar 13 13:12:10.345652 master-0 kubenswrapper[28149]: I0313 13:12:10.345197 28149 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ironic-58d4ff778c-wbbt4"] Mar 13 13:12:10.363522 master-0 kubenswrapper[28149]: I0313 13:12:10.357733 28149 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-ee0a2-volume-lvm-iscsi-0"] Mar 13 13:12:10.404674 master-0 kubenswrapper[28149]: I0313 13:12:10.397784 28149 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1cb05e83-a753-4f24-b578-d7b8996d39b7-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "1cb05e83-a753-4f24-b578-d7b8996d39b7" (UID: "1cb05e83-a753-4f24-b578-d7b8996d39b7"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 13 13:12:10.404674 master-0 kubenswrapper[28149]: I0313 13:12:10.398518 28149 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1cb05e83-a753-4f24-b578-d7b8996d39b7-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "1cb05e83-a753-4f24-b578-d7b8996d39b7" (UID: "1cb05e83-a753-4f24-b578-d7b8996d39b7"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 13 13:12:10.404674 master-0 kubenswrapper[28149]: I0313 13:12:10.402489 28149 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1cb05e83-a753-4f24-b578-d7b8996d39b7-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "1cb05e83-a753-4f24-b578-d7b8996d39b7" (UID: "1cb05e83-a753-4f24-b578-d7b8996d39b7"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 13 13:12:10.429168 master-0 kubenswrapper[28149]: I0313 13:12:10.428939 28149 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1cb05e83-a753-4f24-b578-d7b8996d39b7-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "1cb05e83-a753-4f24-b578-d7b8996d39b7" (UID: "1cb05e83-a753-4f24-b578-d7b8996d39b7"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 13 13:12:10.459766 master-0 kubenswrapper[28149]: I0313 13:12:10.450102 28149 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1cb05e83-a753-4f24-b578-d7b8996d39b7-ovsdbserver-nb\") on node \"master-0\" DevicePath \"\"" Mar 13 13:12:10.459766 master-0 kubenswrapper[28149]: I0313 13:12:10.450153 28149 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/1cb05e83-a753-4f24-b578-d7b8996d39b7-dns-swift-storage-0\") on node \"master-0\" DevicePath \"\"" Mar 13 13:12:10.459766 master-0 kubenswrapper[28149]: I0313 13:12:10.450174 28149 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1cb05e83-a753-4f24-b578-d7b8996d39b7-dns-svc\") on node \"master-0\" DevicePath \"\"" Mar 13 13:12:10.459766 master-0 kubenswrapper[28149]: I0313 13:12:10.450183 28149 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1cb05e83-a753-4f24-b578-d7b8996d39b7-ovsdbserver-sb\") on node \"master-0\" DevicePath \"\"" Mar 13 13:12:10.552738 master-0 kubenswrapper[28149]: I0313 13:12:10.537646 28149 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ea4701c8-792f-4a27-948e-cc2d36ad5739-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ea4701c8-792f-4a27-948e-cc2d36ad5739" (UID: 
"ea4701c8-792f-4a27-948e-cc2d36ad5739"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 13:12:10.552738 master-0 kubenswrapper[28149]: I0313 13:12:10.552634 28149 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1cb05e83-a753-4f24-b578-d7b8996d39b7-config" (OuterVolumeSpecName: "config") pod "1cb05e83-a753-4f24-b578-d7b8996d39b7" (UID: "1cb05e83-a753-4f24-b578-d7b8996d39b7"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 13 13:12:10.555345 master-0 kubenswrapper[28149]: I0313 13:12:10.554833 28149 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1cb05e83-a753-4f24-b578-d7b8996d39b7-config\") on node \"master-0\" DevicePath \"\"" Mar 13 13:12:10.555345 master-0 kubenswrapper[28149]: I0313 13:12:10.554876 28149 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ea4701c8-792f-4a27-948e-cc2d36ad5739-combined-ca-bundle\") on node \"master-0\" DevicePath \"\"" Mar 13 13:12:10.577571 master-0 kubenswrapper[28149]: I0313 13:12:10.576561 28149 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ea4701c8-792f-4a27-948e-cc2d36ad5739-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "ea4701c8-792f-4a27-948e-cc2d36ad5739" (UID: "ea4701c8-792f-4a27-948e-cc2d36ad5739"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 13:12:10.605541 master-0 kubenswrapper[28149]: I0313 13:12:10.604337 28149 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ea4701c8-792f-4a27-948e-cc2d36ad5739-config-data" (OuterVolumeSpecName: "config-data") pod "ea4701c8-792f-4a27-948e-cc2d36ad5739" (UID: "ea4701c8-792f-4a27-948e-cc2d36ad5739"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 13:12:10.627101 master-0 kubenswrapper[28149]: I0313 13:12:10.622315 28149 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cinder-ee0a2-api-0" Mar 13 13:12:10.630982 master-0 kubenswrapper[28149]: I0313 13:12:10.629755 28149 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ea4701c8-792f-4a27-948e-cc2d36ad5739-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "ea4701c8-792f-4a27-948e-cc2d36ad5739" (UID: "ea4701c8-792f-4a27-948e-cc2d36ad5739"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 13:12:10.671349 master-0 kubenswrapper[28149]: I0313 13:12:10.667947 28149 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ea4701c8-792f-4a27-948e-cc2d36ad5739-config-data\") on node \"master-0\" DevicePath \"\"" Mar 13 13:12:10.671349 master-0 kubenswrapper[28149]: I0313 13:12:10.668441 28149 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ea4701c8-792f-4a27-948e-cc2d36ad5739-internal-tls-certs\") on node \"master-0\" DevicePath \"\"" Mar 13 13:12:10.671349 master-0 kubenswrapper[28149]: I0313 13:12:10.668454 28149 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ea4701c8-792f-4a27-948e-cc2d36ad5739-public-tls-certs\") on node \"master-0\" DevicePath \"\"" Mar 13 13:12:10.750474 master-0 kubenswrapper[28149]: I0313 13:12:10.744494 28149 scope.go:117] "RemoveContainer" containerID="20262da21e1a31039d8959e8d536a7f86da3b9256937f0a84f87e44540289318" Mar 13 13:12:10.872178 master-0 kubenswrapper[28149]: I0313 13:12:10.864279 28149 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-f9957b47c-swh76"] Mar 13 13:12:10.908291 master-0 kubenswrapper[28149]: I0313 13:12:10.907527 
28149 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-f9957b47c-swh76"] Mar 13 13:12:10.917158 master-0 kubenswrapper[28149]: I0313 13:12:10.915122 28149 scope.go:117] "RemoveContainer" containerID="0de68570e0c25f556b25a0d514a40bf6e6b23fdd944c31b42c9a5dee0c0f377f" Mar 13 13:12:11.008216 master-0 kubenswrapper[28149]: I0313 13:12:11.008052 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-58d4ff778c-wbbt4" event={"ID":"e28006c9-0e25-4845-abae-e6407165a9dc","Type":"ContainerStarted","Data":"1bba460f5ec9cf553744eb866df8b0b3c0206005b4c5be25cfe5d3655747aa0f"} Mar 13 13:12:11.010277 master-0 kubenswrapper[28149]: I0313 13:12:11.010048 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-ee0a2-volume-lvm-iscsi-0" event={"ID":"b0b0e08b-0a29-40e5-9cd6-3609aa630650","Type":"ContainerStarted","Data":"2873978b2255cffca98740bc83c22edb51524d1cd2d53e3541ca091390a2b6ca"} Mar 13 13:12:11.037358 master-0 kubenswrapper[28149]: I0313 13:12:11.037298 28149 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Mar 13 13:12:11.040639 master-0 kubenswrapper[28149]: I0313 13:12:11.038848 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-neutron-agent-8454cbf95d-4wvx9" event={"ID":"52f4f9dd-4956-4c8b-9a8d-c832a8049c3a","Type":"ContainerStarted","Data":"716c7266d1a65297f2c11d428adef1d649e2162559ae5398d63184935ba678fb"} Mar 13 13:12:11.040639 master-0 kubenswrapper[28149]: I0313 13:12:11.039245 28149 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ironic-neutron-agent-8454cbf95d-4wvx9" Mar 13 13:12:11.058820 master-0 kubenswrapper[28149]: I0313 13:12:11.048253 28149 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-ee0a2-backup-0"] Mar 13 13:12:11.213639 master-0 kubenswrapper[28149]: I0313 13:12:11.211915 28149 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack/ironic-neutron-agent-8454cbf95d-4wvx9" podStartSLOduration=6.936640131 podStartE2EDuration="18.210269127s" podCreationTimestamp="2026-03-13 13:11:53 +0000 UTC" firstStartedPulling="2026-03-13 13:11:58.164106775 +0000 UTC m=+1091.817571934" lastFinishedPulling="2026-03-13 13:12:09.437735771 +0000 UTC m=+1103.091200930" observedRunningTime="2026-03-13 13:12:11.076378501 +0000 UTC m=+1104.729843660" watchObservedRunningTime="2026-03-13 13:12:11.210269127 +0000 UTC m=+1104.863734296" Mar 13 13:12:11.243931 master-0 kubenswrapper[28149]: I0313 13:12:11.243887 28149 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ironic-conductor-0"] Mar 13 13:12:11.252285 master-0 kubenswrapper[28149]: I0313 13:12:11.250671 28149 scope.go:117] "RemoveContainer" containerID="d83b312654b71814018bf82aefcc44782c1b8a50ca051dac7c42951c264b572f" Mar 13 13:12:11.261688 master-0 kubenswrapper[28149]: W0313 13:12:11.261626 28149 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8fdaa161_cf3d_465a_8e70_c2af73f96711.slice/crio-d3aa4b293003e67b261fcac93c9a82e900ae8b5fdcdd4bb6b00f25a2ed324c3b WatchSource:0}: Error finding container d3aa4b293003e67b261fcac93c9a82e900ae8b5fdcdd4bb6b00f25a2ed324c3b: Status 404 returned error can't find the container with id d3aa4b293003e67b261fcac93c9a82e900ae8b5fdcdd4bb6b00f25a2ed324c3b Mar 13 13:12:11.288666 master-0 kubenswrapper[28149]: I0313 13:12:11.288592 28149 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-699d7776-9kkdk"] Mar 13 13:12:11.358161 master-0 kubenswrapper[28149]: I0313 13:12:11.358089 28149 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-699d7776-9kkdk"] Mar 13 13:12:12.126166 master-0 kubenswrapper[28149]: I0313 13:12:12.126077 28149 generic.go:334] "Generic (PLEG): container finished" podID="d6997886-21ba-4767-a3f9-82bb99c7c39a" 
containerID="cb2318dc776758162d16efe58599c50c0569785482ddae99d3e78b7fa7cc0b56" exitCode=0 Mar 13 13:12:12.126567 master-0 kubenswrapper[28149]: I0313 13:12:12.126278 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-6cf5cb77b5-nrxbr" event={"ID":"d6997886-21ba-4767-a3f9-82bb99c7c39a","Type":"ContainerDied","Data":"cb2318dc776758162d16efe58599c50c0569785482ddae99d3e78b7fa7cc0b56"} Mar 13 13:12:12.131652 master-0 kubenswrapper[28149]: I0313 13:12:12.131610 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-ee0a2-scheduler-0" event={"ID":"db83bac9-e722-4e4f-aad6-eba4fdcbaedb","Type":"ContainerStarted","Data":"7883679deb8d56dfd0045628b8fbca1973b34cc56b826c280b82b58d9429dd08"} Mar 13 13:12:12.134453 master-0 kubenswrapper[28149]: I0313 13:12:12.134421 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-conductor-0" event={"ID":"8fdaa161-cf3d-465a-8e70-c2af73f96711","Type":"ContainerStarted","Data":"d3aa4b293003e67b261fcac93c9a82e900ae8b5fdcdd4bb6b00f25a2ed324c3b"} Mar 13 13:12:12.141153 master-0 kubenswrapper[28149]: I0313 13:12:12.141082 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-58d4ff778c-wbbt4" event={"ID":"e28006c9-0e25-4845-abae-e6407165a9dc","Type":"ContainerStarted","Data":"ee63556a0221ea6b60d1dec9bad15576f9452bac996beeccbe725cf1bced3956"} Mar 13 13:12:12.146941 master-0 kubenswrapper[28149]: I0313 13:12:12.146903 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"648b68a5-c28c-4322-893b-c1ac80172c6f","Type":"ContainerStarted","Data":"974d8a9be36f832b4e4455167d0d3b2c4c3af26da93bf73e89a90fbb347e4a01"} Mar 13 13:12:12.159771 master-0 kubenswrapper[28149]: I0313 13:12:12.159346 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-ee0a2-volume-lvm-iscsi-0" 
event={"ID":"b0b0e08b-0a29-40e5-9cd6-3609aa630650","Type":"ContainerStarted","Data":"91d7958cd558947da173d455a27b77bcc394915e33827c400adbe395882d44af"} Mar 13 13:12:12.159771 master-0 kubenswrapper[28149]: I0313 13:12:12.159405 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-ee0a2-volume-lvm-iscsi-0" event={"ID":"b0b0e08b-0a29-40e5-9cd6-3609aa630650","Type":"ContainerStarted","Data":"d22787438d65a65ef5033d9a9812c54f45ad3733d7bba51adda55258e12c198a"} Mar 13 13:12:12.197627 master-0 kubenswrapper[28149]: I0313 13:12:12.197568 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-ee0a2-backup-0" event={"ID":"aeef3e73-d29d-456c-a41a-8f478df6e975","Type":"ContainerStarted","Data":"4043b06da1139f98d4d834c73c395e0e61d319e057b10f6860aeaf469340329d"} Mar 13 13:12:12.197627 master-0 kubenswrapper[28149]: I0313 13:12:12.197628 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-ee0a2-backup-0" event={"ID":"aeef3e73-d29d-456c-a41a-8f478df6e975","Type":"ContainerStarted","Data":"cfb398b29e0db38e574609ca0db19d2aa442822b8bfe501e61e112b902c4a898"} Mar 13 13:12:12.228370 master-0 kubenswrapper[28149]: I0313 13:12:12.228279 28149 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-ee0a2-scheduler-0" podStartSLOduration=13.228259587 podStartE2EDuration="13.228259587s" podCreationTimestamp="2026-03-13 13:11:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 13:12:12.220054396 +0000 UTC m=+1105.873519555" watchObservedRunningTime="2026-03-13 13:12:12.228259587 +0000 UTC m=+1105.881724746" Mar 13 13:12:12.253673 master-0 kubenswrapper[28149]: I0313 13:12:12.253601 28149 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-ee0a2-volume-lvm-iscsi-0" podStartSLOduration=8.253581419 podStartE2EDuration="8.253581419s" 
podCreationTimestamp="2026-03-13 13:12:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 13:12:12.252803119 +0000 UTC m=+1105.906268278" watchObservedRunningTime="2026-03-13 13:12:12.253581419 +0000 UTC m=+1105.907046588" Mar 13 13:12:12.529175 master-0 kubenswrapper[28149]: E0313 13:12:12.526715 28149 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd6997886_21ba_4767_a3f9_82bb99c7c39a.slice/crio-conmon-cb2318dc776758162d16efe58599c50c0569785482ddae99d3e78b7fa7cc0b56.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode28006c9_0e25_4845_abae_e6407165a9dc.slice/crio-conmon-ee63556a0221ea6b60d1dec9bad15576f9452bac996beeccbe725cf1bced3956.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode28006c9_0e25_4845_abae_e6407165a9dc.slice/crio-ee63556a0221ea6b60d1dec9bad15576f9452bac996beeccbe725cf1bced3956.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd6997886_21ba_4767_a3f9_82bb99c7c39a.slice/crio-cb2318dc776758162d16efe58599c50c0569785482ddae99d3e78b7fa7cc0b56.scope\": RecentStats: unable to find data in memory cache]" Mar 13 13:12:12.708231 master-0 kubenswrapper[28149]: I0313 13:12:12.707544 28149 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1cb05e83-a753-4f24-b578-d7b8996d39b7" path="/var/lib/kubelet/pods/1cb05e83-a753-4f24-b578-d7b8996d39b7/volumes" Mar 13 13:12:12.712180 master-0 kubenswrapper[28149]: I0313 13:12:12.708566 28149 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ea4701c8-792f-4a27-948e-cc2d36ad5739" 
path="/var/lib/kubelet/pods/ea4701c8-792f-4a27-948e-cc2d36ad5739/volumes" Mar 13 13:12:13.474015 master-0 kubenswrapper[28149]: I0313 13:12:13.473906 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-ee0a2-backup-0" event={"ID":"aeef3e73-d29d-456c-a41a-8f478df6e975","Type":"ContainerStarted","Data":"88a973ec8fc7f106ceb64ae6d9ed4e8bdf8c7d2426e85dbcb16a3506fbbf9802"} Mar 13 13:12:13.485429 master-0 kubenswrapper[28149]: I0313 13:12:13.485388 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-6cf5cb77b5-nrxbr" event={"ID":"d6997886-21ba-4767-a3f9-82bb99c7c39a","Type":"ContainerStarted","Data":"da4edd3d9461af138752ec2a89ea72ea8d44f9669fe84abb321900c8f9ecf741"} Mar 13 13:12:13.505085 master-0 kubenswrapper[28149]: I0313 13:12:13.500219 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-conductor-0" event={"ID":"8fdaa161-cf3d-465a-8e70-c2af73f96711","Type":"ContainerStarted","Data":"decc068dda1f42fd52e6cef77d423cd57612ca5bec7fd1129637d4e338c2dea4"} Mar 13 13:12:13.515067 master-0 kubenswrapper[28149]: I0313 13:12:13.514995 28149 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-ee0a2-backup-0" podStartSLOduration=8.514971496 podStartE2EDuration="8.514971496s" podCreationTimestamp="2026-03-13 13:12:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 13:12:13.507890085 +0000 UTC m=+1107.161355244" watchObservedRunningTime="2026-03-13 13:12:13.514971496 +0000 UTC m=+1107.168436655" Mar 13 13:12:13.536351 master-0 kubenswrapper[28149]: I0313 13:12:13.535482 28149 generic.go:334] "Generic (PLEG): container finished" podID="e28006c9-0e25-4845-abae-e6407165a9dc" containerID="ee63556a0221ea6b60d1dec9bad15576f9452bac996beeccbe725cf1bced3956" exitCode=0 Mar 13 13:12:13.536351 master-0 kubenswrapper[28149]: I0313 13:12:13.536259 28149 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openstack/ironic-58d4ff778c-wbbt4" event={"ID":"e28006c9-0e25-4845-abae-e6407165a9dc","Type":"ContainerDied","Data":"ee63556a0221ea6b60d1dec9bad15576f9452bac996beeccbe725cf1bced3956"} Mar 13 13:12:13.536351 master-0 kubenswrapper[28149]: I0313 13:12:13.536333 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-58d4ff778c-wbbt4" event={"ID":"e28006c9-0e25-4845-abae-e6407165a9dc","Type":"ContainerStarted","Data":"f92d25bdf9a29d095bf3c165a6bd48b82a4a57d79fd4c1aebcea7157d1d33129"} Mar 13 13:12:14.340314 master-0 kubenswrapper[28149]: I0313 13:12:14.340073 28149 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-proxy-5b4d4977b6-gw8dl"] Mar 13 13:12:14.340799 master-0 kubenswrapper[28149]: E0313 13:12:14.340771 28149 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1cb05e83-a753-4f24-b578-d7b8996d39b7" containerName="dnsmasq-dns" Mar 13 13:12:14.340799 master-0 kubenswrapper[28149]: I0313 13:12:14.340793 28149 state_mem.go:107] "Deleted CPUSet assignment" podUID="1cb05e83-a753-4f24-b578-d7b8996d39b7" containerName="dnsmasq-dns" Mar 13 13:12:14.340949 master-0 kubenswrapper[28149]: E0313 13:12:14.340816 28149 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ea4701c8-792f-4a27-948e-cc2d36ad5739" containerName="placement-api" Mar 13 13:12:14.340949 master-0 kubenswrapper[28149]: I0313 13:12:14.340823 28149 state_mem.go:107] "Deleted CPUSet assignment" podUID="ea4701c8-792f-4a27-948e-cc2d36ad5739" containerName="placement-api" Mar 13 13:12:14.340949 master-0 kubenswrapper[28149]: E0313 13:12:14.340845 28149 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ea4701c8-792f-4a27-948e-cc2d36ad5739" containerName="placement-log" Mar 13 13:12:14.340949 master-0 kubenswrapper[28149]: I0313 13:12:14.340852 28149 state_mem.go:107] "Deleted CPUSet assignment" podUID="ea4701c8-792f-4a27-948e-cc2d36ad5739" containerName="placement-log" Mar 13 13:12:14.340949 
master-0 kubenswrapper[28149]: E0313 13:12:14.340932 28149 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1cb05e83-a753-4f24-b578-d7b8996d39b7" containerName="init" Mar 13 13:12:14.340949 master-0 kubenswrapper[28149]: I0313 13:12:14.340944 28149 state_mem.go:107] "Deleted CPUSet assignment" podUID="1cb05e83-a753-4f24-b578-d7b8996d39b7" containerName="init" Mar 13 13:12:14.341586 master-0 kubenswrapper[28149]: I0313 13:12:14.341480 28149 memory_manager.go:354] "RemoveStaleState removing state" podUID="ea4701c8-792f-4a27-948e-cc2d36ad5739" containerName="placement-api" Mar 13 13:12:14.341586 master-0 kubenswrapper[28149]: I0313 13:12:14.341518 28149 memory_manager.go:354] "RemoveStaleState removing state" podUID="1cb05e83-a753-4f24-b578-d7b8996d39b7" containerName="dnsmasq-dns" Mar 13 13:12:14.341586 master-0 kubenswrapper[28149]: I0313 13:12:14.341551 28149 memory_manager.go:354] "RemoveStaleState removing state" podUID="ea4701c8-792f-4a27-948e-cc2d36ad5739" containerName="placement-log" Mar 13 13:12:14.342961 master-0 kubenswrapper[28149]: I0313 13:12:14.342921 28149 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-proxy-5b4d4977b6-gw8dl" Mar 13 13:12:14.355396 master-0 kubenswrapper[28149]: I0313 13:12:14.354343 28149 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-internal-svc" Mar 13 13:12:14.355396 master-0 kubenswrapper[28149]: I0313 13:12:14.354686 28149 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-public-svc" Mar 13 13:12:14.356299 master-0 kubenswrapper[28149]: I0313 13:12:14.356261 28149 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data" Mar 13 13:12:14.388744 master-0 kubenswrapper[28149]: I0313 13:12:14.375047 28149 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-5b4d4977b6-gw8dl"] Mar 13 13:12:14.414063 master-0 kubenswrapper[28149]: I0313 13:12:14.413741 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ea9a069c-650d-43a7-be96-75412ae3b7c9-internal-tls-certs\") pod \"swift-proxy-5b4d4977b6-gw8dl\" (UID: \"ea9a069c-650d-43a7-be96-75412ae3b7c9\") " pod="openstack/swift-proxy-5b4d4977b6-gw8dl" Mar 13 13:12:14.414063 master-0 kubenswrapper[28149]: I0313 13:12:14.413847 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ea9a069c-650d-43a7-be96-75412ae3b7c9-run-httpd\") pod \"swift-proxy-5b4d4977b6-gw8dl\" (UID: \"ea9a069c-650d-43a7-be96-75412ae3b7c9\") " pod="openstack/swift-proxy-5b4d4977b6-gw8dl" Mar 13 13:12:14.414063 master-0 kubenswrapper[28149]: I0313 13:12:14.413901 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ea9a069c-650d-43a7-be96-75412ae3b7c9-config-data\") pod \"swift-proxy-5b4d4977b6-gw8dl\" (UID: \"ea9a069c-650d-43a7-be96-75412ae3b7c9\") " 
pod="openstack/swift-proxy-5b4d4977b6-gw8dl"
Mar 13 13:12:14.414063 master-0 kubenswrapper[28149]: I0313 13:12:14.413974 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ea9a069c-650d-43a7-be96-75412ae3b7c9-log-httpd\") pod \"swift-proxy-5b4d4977b6-gw8dl\" (UID: \"ea9a069c-650d-43a7-be96-75412ae3b7c9\") " pod="openstack/swift-proxy-5b4d4977b6-gw8dl"
Mar 13 13:12:14.414063 master-0 kubenswrapper[28149]: I0313 13:12:14.414014 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ea9a069c-650d-43a7-be96-75412ae3b7c9-public-tls-certs\") pod \"swift-proxy-5b4d4977b6-gw8dl\" (UID: \"ea9a069c-650d-43a7-be96-75412ae3b7c9\") " pod="openstack/swift-proxy-5b4d4977b6-gw8dl"
Mar 13 13:12:14.414063 master-0 kubenswrapper[28149]: I0313 13:12:14.414046 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/ea9a069c-650d-43a7-be96-75412ae3b7c9-etc-swift\") pod \"swift-proxy-5b4d4977b6-gw8dl\" (UID: \"ea9a069c-650d-43a7-be96-75412ae3b7c9\") " pod="openstack/swift-proxy-5b4d4977b6-gw8dl"
Mar 13 13:12:14.414063 master-0 kubenswrapper[28149]: I0313 13:12:14.414073 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tt6wc\" (UniqueName: \"kubernetes.io/projected/ea9a069c-650d-43a7-be96-75412ae3b7c9-kube-api-access-tt6wc\") pod \"swift-proxy-5b4d4977b6-gw8dl\" (UID: \"ea9a069c-650d-43a7-be96-75412ae3b7c9\") " pod="openstack/swift-proxy-5b4d4977b6-gw8dl"
Mar 13 13:12:14.414645 master-0 kubenswrapper[28149]: I0313 13:12:14.414163 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ea9a069c-650d-43a7-be96-75412ae3b7c9-combined-ca-bundle\") pod \"swift-proxy-5b4d4977b6-gw8dl\" (UID: \"ea9a069c-650d-43a7-be96-75412ae3b7c9\") " pod="openstack/swift-proxy-5b4d4977b6-gw8dl"
Mar 13 13:12:14.524280 master-0 kubenswrapper[28149]: I0313 13:12:14.524093 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ea9a069c-650d-43a7-be96-75412ae3b7c9-log-httpd\") pod \"swift-proxy-5b4d4977b6-gw8dl\" (UID: \"ea9a069c-650d-43a7-be96-75412ae3b7c9\") " pod="openstack/swift-proxy-5b4d4977b6-gw8dl"
Mar 13 13:12:14.524911 master-0 kubenswrapper[28149]: I0313 13:12:14.524309 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ea9a069c-650d-43a7-be96-75412ae3b7c9-public-tls-certs\") pod \"swift-proxy-5b4d4977b6-gw8dl\" (UID: \"ea9a069c-650d-43a7-be96-75412ae3b7c9\") " pod="openstack/swift-proxy-5b4d4977b6-gw8dl"
Mar 13 13:12:14.524911 master-0 kubenswrapper[28149]: I0313 13:12:14.524346 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/ea9a069c-650d-43a7-be96-75412ae3b7c9-etc-swift\") pod \"swift-proxy-5b4d4977b6-gw8dl\" (UID: \"ea9a069c-650d-43a7-be96-75412ae3b7c9\") " pod="openstack/swift-proxy-5b4d4977b6-gw8dl"
Mar 13 13:12:14.524911 master-0 kubenswrapper[28149]: I0313 13:12:14.524375 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tt6wc\" (UniqueName: \"kubernetes.io/projected/ea9a069c-650d-43a7-be96-75412ae3b7c9-kube-api-access-tt6wc\") pod \"swift-proxy-5b4d4977b6-gw8dl\" (UID: \"ea9a069c-650d-43a7-be96-75412ae3b7c9\") " pod="openstack/swift-proxy-5b4d4977b6-gw8dl"
Mar 13 13:12:14.524911 master-0 kubenswrapper[28149]: I0313 13:12:14.524435 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ea9a069c-650d-43a7-be96-75412ae3b7c9-combined-ca-bundle\") pod \"swift-proxy-5b4d4977b6-gw8dl\" (UID: \"ea9a069c-650d-43a7-be96-75412ae3b7c9\") " pod="openstack/swift-proxy-5b4d4977b6-gw8dl"
Mar 13 13:12:14.524911 master-0 kubenswrapper[28149]: I0313 13:12:14.524565 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ea9a069c-650d-43a7-be96-75412ae3b7c9-internal-tls-certs\") pod \"swift-proxy-5b4d4977b6-gw8dl\" (UID: \"ea9a069c-650d-43a7-be96-75412ae3b7c9\") " pod="openstack/swift-proxy-5b4d4977b6-gw8dl"
Mar 13 13:12:14.524911 master-0 kubenswrapper[28149]: I0313 13:12:14.524611 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ea9a069c-650d-43a7-be96-75412ae3b7c9-run-httpd\") pod \"swift-proxy-5b4d4977b6-gw8dl\" (UID: \"ea9a069c-650d-43a7-be96-75412ae3b7c9\") " pod="openstack/swift-proxy-5b4d4977b6-gw8dl"
Mar 13 13:12:14.524911 master-0 kubenswrapper[28149]: I0313 13:12:14.524661 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ea9a069c-650d-43a7-be96-75412ae3b7c9-config-data\") pod \"swift-proxy-5b4d4977b6-gw8dl\" (UID: \"ea9a069c-650d-43a7-be96-75412ae3b7c9\") " pod="openstack/swift-proxy-5b4d4977b6-gw8dl"
Mar 13 13:12:14.525911 master-0 kubenswrapper[28149]: I0313 13:12:14.525754 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ea9a069c-650d-43a7-be96-75412ae3b7c9-log-httpd\") pod \"swift-proxy-5b4d4977b6-gw8dl\" (UID: \"ea9a069c-650d-43a7-be96-75412ae3b7c9\") " pod="openstack/swift-proxy-5b4d4977b6-gw8dl"
Mar 13 13:12:14.532342 master-0 kubenswrapper[28149]: I0313 13:12:14.532301 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ea9a069c-650d-43a7-be96-75412ae3b7c9-run-httpd\") pod \"swift-proxy-5b4d4977b6-gw8dl\" (UID: \"ea9a069c-650d-43a7-be96-75412ae3b7c9\") " pod="openstack/swift-proxy-5b4d4977b6-gw8dl"
Mar 13 13:12:14.532707 master-0 kubenswrapper[28149]: I0313 13:12:14.532675 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ea9a069c-650d-43a7-be96-75412ae3b7c9-public-tls-certs\") pod \"swift-proxy-5b4d4977b6-gw8dl\" (UID: \"ea9a069c-650d-43a7-be96-75412ae3b7c9\") " pod="openstack/swift-proxy-5b4d4977b6-gw8dl"
Mar 13 13:12:14.536127 master-0 kubenswrapper[28149]: I0313 13:12:14.535072 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ea9a069c-650d-43a7-be96-75412ae3b7c9-config-data\") pod \"swift-proxy-5b4d4977b6-gw8dl\" (UID: \"ea9a069c-650d-43a7-be96-75412ae3b7c9\") " pod="openstack/swift-proxy-5b4d4977b6-gw8dl"
Mar 13 13:12:14.537342 master-0 kubenswrapper[28149]: I0313 13:12:14.537233 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ea9a069c-650d-43a7-be96-75412ae3b7c9-internal-tls-certs\") pod \"swift-proxy-5b4d4977b6-gw8dl\" (UID: \"ea9a069c-650d-43a7-be96-75412ae3b7c9\") " pod="openstack/swift-proxy-5b4d4977b6-gw8dl"
Mar 13 13:12:14.538409 master-0 kubenswrapper[28149]: I0313 13:12:14.538300 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ea9a069c-650d-43a7-be96-75412ae3b7c9-combined-ca-bundle\") pod \"swift-proxy-5b4d4977b6-gw8dl\" (UID: \"ea9a069c-650d-43a7-be96-75412ae3b7c9\") " pod="openstack/swift-proxy-5b4d4977b6-gw8dl"
Mar 13 13:12:14.544216 master-0 kubenswrapper[28149]: I0313 13:12:14.543762 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/ea9a069c-650d-43a7-be96-75412ae3b7c9-etc-swift\") pod \"swift-proxy-5b4d4977b6-gw8dl\" (UID: \"ea9a069c-650d-43a7-be96-75412ae3b7c9\") " pod="openstack/swift-proxy-5b4d4977b6-gw8dl"
Mar 13 13:12:14.547302 master-0 kubenswrapper[28149]: I0313 13:12:14.546386 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tt6wc\" (UniqueName: \"kubernetes.io/projected/ea9a069c-650d-43a7-be96-75412ae3b7c9-kube-api-access-tt6wc\") pod \"swift-proxy-5b4d4977b6-gw8dl\" (UID: \"ea9a069c-650d-43a7-be96-75412ae3b7c9\") " pod="openstack/swift-proxy-5b4d4977b6-gw8dl"
Mar 13 13:12:14.587535 master-0 kubenswrapper[28149]: I0313 13:12:14.587469 28149 generic.go:334] "Generic (PLEG): container finished" podID="d6997886-21ba-4767-a3f9-82bb99c7c39a" containerID="8da343858979eb8052acf2f30ee8ba929161f64681fe9cc2c47a919345ce841e" exitCode=1
Mar 13 13:12:14.588055 master-0 kubenswrapper[28149]: I0313 13:12:14.587568 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-6cf5cb77b5-nrxbr" event={"ID":"d6997886-21ba-4767-a3f9-82bb99c7c39a","Type":"ContainerDied","Data":"8da343858979eb8052acf2f30ee8ba929161f64681fe9cc2c47a919345ce841e"}
Mar 13 13:12:14.595984 master-0 kubenswrapper[28149]: I0313 13:12:14.595847 28149 scope.go:117] "RemoveContainer" containerID="8da343858979eb8052acf2f30ee8ba929161f64681fe9cc2c47a919345ce841e"
Mar 13 13:12:14.618409 master-0 kubenswrapper[28149]: I0313 13:12:14.618330 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-58d4ff778c-wbbt4" event={"ID":"e28006c9-0e25-4845-abae-e6407165a9dc","Type":"ContainerStarted","Data":"63337953157f794846bcf0fe16e466402fcedbb37163d9c6e33442ad76087e7d"}
Mar 13 13:12:14.622440 master-0 kubenswrapper[28149]: I0313 13:12:14.622410 28149 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ironic-58d4ff778c-wbbt4"
Mar 13 13:12:14.745838 master-0 kubenswrapper[28149]: I0313 13:12:14.745771 28149 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-proxy-5b4d4977b6-gw8dl"
Mar 13 13:12:15.448792 master-0 kubenswrapper[28149]: I0313 13:12:15.445824 28149 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ironic-58d4ff778c-wbbt4" podStartSLOduration=11.445805935 podStartE2EDuration="11.445805935s" podCreationTimestamp="2026-03-13 13:12:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 13:12:14.687449749 +0000 UTC m=+1108.340914908" watchObservedRunningTime="2026-03-13 13:12:15.445805935 +0000 UTC m=+1109.099271094"
Mar 13 13:12:15.459023 master-0 kubenswrapper[28149]: I0313 13:12:15.458954 28149 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-5b4d4977b6-gw8dl"]
Mar 13 13:12:15.485998 master-0 kubenswrapper[28149]: I0313 13:12:15.484315 28149 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-ee0a2-volume-lvm-iscsi-0"
Mar 13 13:12:15.670255 master-0 kubenswrapper[28149]: I0313 13:12:15.662938 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-5b4d4977b6-gw8dl" event={"ID":"ea9a069c-650d-43a7-be96-75412ae3b7c9","Type":"ContainerStarted","Data":"ce5b8cbe21c958203c786d5c938373ef677434ff20cd604408f1625fb44a42a9"}
Mar 13 13:12:15.684359 master-0 kubenswrapper[28149]: I0313 13:12:15.683890 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-6cf5cb77b5-nrxbr" event={"ID":"d6997886-21ba-4767-a3f9-82bb99c7c39a","Type":"ContainerStarted","Data":"9da47da0c1d68defafda498c9391044c1b2f679960a3bb4597aeef1033c4cce3"}
Mar 13 13:12:15.684359 master-0 kubenswrapper[28149]: I0313 13:12:15.683992 28149 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ironic-6cf5cb77b5-nrxbr"
Mar 13 13:12:15.737591 master-0 kubenswrapper[28149]: I0313 13:12:15.734259 28149 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ironic-6cf5cb77b5-nrxbr" podStartSLOduration=10.474135397 podStartE2EDuration="20.734231744s" podCreationTimestamp="2026-03-13 13:11:55 +0000 UTC" firstStartedPulling="2026-03-13 13:11:59.00557437 +0000 UTC m=+1092.659039529" lastFinishedPulling="2026-03-13 13:12:09.265670717 +0000 UTC m=+1102.919135876" observedRunningTime="2026-03-13 13:12:15.712393587 +0000 UTC m=+1109.365858746" watchObservedRunningTime="2026-03-13 13:12:15.734231744 +0000 UTC m=+1109.387696903"
Mar 13 13:12:15.800762 master-0 kubenswrapper[28149]: I0313 13:12:15.800693 28149 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ironic-neutron-agent-8454cbf95d-4wvx9"
Mar 13 13:12:15.930569 master-0 kubenswrapper[28149]: I0313 13:12:15.924284 28149 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-ee0a2-scheduler-0"
Mar 13 13:12:16.049300 master-0 kubenswrapper[28149]: I0313 13:12:16.048577 28149 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-ee0a2-backup-0"
Mar 13 13:12:16.340313 master-0 kubenswrapper[28149]: I0313 13:12:16.322207 28149 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-ee0a2-scheduler-0"
Mar 13 13:12:16.601776 master-0 kubenswrapper[28149]: I0313 13:12:16.596455 28149 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-ee0a2-backup-0"
Mar 13 13:12:16.728290 master-0 kubenswrapper[28149]: I0313 13:12:16.727901 28149 generic.go:334] "Generic (PLEG): container finished" podID="c8579a3e-7e92-42d9-b21f-6339bc1ebb4f" containerID="12eeb73a39f53d8e14eda8ec5a01ccf4ca5f504668906bb2b70963fdeddd747e" exitCode=0
Mar 13 13:12:16.742981 master-0 kubenswrapper[28149]: I0313 13:12:16.742906 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-768869957b-ffkcl" event={"ID":"c8579a3e-7e92-42d9-b21f-6339bc1ebb4f","Type":"ContainerDied","Data":"12eeb73a39f53d8e14eda8ec5a01ccf4ca5f504668906bb2b70963fdeddd747e"}
Mar 13 13:12:16.759246 master-0 kubenswrapper[28149]: I0313 13:12:16.752963 28149 generic.go:334] "Generic (PLEG): container finished" podID="d6997886-21ba-4767-a3f9-82bb99c7c39a" containerID="9da47da0c1d68defafda498c9391044c1b2f679960a3bb4597aeef1033c4cce3" exitCode=1
Mar 13 13:12:16.759246 master-0 kubenswrapper[28149]: I0313 13:12:16.753045 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-6cf5cb77b5-nrxbr" event={"ID":"d6997886-21ba-4767-a3f9-82bb99c7c39a","Type":"ContainerDied","Data":"9da47da0c1d68defafda498c9391044c1b2f679960a3bb4597aeef1033c4cce3"}
Mar 13 13:12:16.759246 master-0 kubenswrapper[28149]: I0313 13:12:16.753119 28149 scope.go:117] "RemoveContainer" containerID="8da343858979eb8052acf2f30ee8ba929161f64681fe9cc2c47a919345ce841e"
Mar 13 13:12:16.759246 master-0 kubenswrapper[28149]: I0313 13:12:16.753784 28149 scope.go:117] "RemoveContainer" containerID="9da47da0c1d68defafda498c9391044c1b2f679960a3bb4597aeef1033c4cce3"
Mar 13 13:12:16.759246 master-0 kubenswrapper[28149]: E0313 13:12:16.754156 28149 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ironic-api\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ironic-api pod=ironic-6cf5cb77b5-nrxbr_openstack(d6997886-21ba-4767-a3f9-82bb99c7c39a)\"" pod="openstack/ironic-6cf5cb77b5-nrxbr" podUID="d6997886-21ba-4767-a3f9-82bb99c7c39a"
Mar 13 13:12:16.763046 master-0 kubenswrapper[28149]: I0313 13:12:16.762718 28149 generic.go:334] "Generic (PLEG): container finished" podID="8fdaa161-cf3d-465a-8e70-c2af73f96711" containerID="decc068dda1f42fd52e6cef77d423cd57612ca5bec7fd1129637d4e338c2dea4" exitCode=0
Mar 13 13:12:16.763046 master-0 kubenswrapper[28149]: I0313 13:12:16.762805 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-conductor-0" event={"ID":"8fdaa161-cf3d-465a-8e70-c2af73f96711","Type":"ContainerDied","Data":"decc068dda1f42fd52e6cef77d423cd57612ca5bec7fd1129637d4e338c2dea4"}
Mar 13 13:12:16.772278 master-0 kubenswrapper[28149]: I0313 13:12:16.770265 28149 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Mar 13 13:12:16.778263 master-0 kubenswrapper[28149]: I0313 13:12:16.778152 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-5b4d4977b6-gw8dl" event={"ID":"ea9a069c-650d-43a7-be96-75412ae3b7c9","Type":"ContainerStarted","Data":"875e3705e1b0df8294d2e493800e310aa659e53234c88310b300debf4e52e982"}
Mar 13 13:12:16.778263 master-0 kubenswrapper[28149]: I0313 13:12:16.778211 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-5b4d4977b6-gw8dl" event={"ID":"ea9a069c-650d-43a7-be96-75412ae3b7c9","Type":"ContainerStarted","Data":"a277e67be1651aacf9c37ded044877075342e2b796b59b3da2ccf345dcf89352"}
Mar 13 13:12:16.782179 master-0 kubenswrapper[28149]: I0313 13:12:16.782127 28149 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-5b4d4977b6-gw8dl"
Mar 13 13:12:16.782701 master-0 kubenswrapper[28149]: I0313 13:12:16.782641 28149 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-5b4d4977b6-gw8dl"
Mar 13 13:12:16.834939 master-0 kubenswrapper[28149]: I0313 13:12:16.834859 28149 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/ironic-6cf5cb77b5-nrxbr"
Mar 13 13:12:16.867854 master-0 kubenswrapper[28149]: I0313 13:12:16.867795 28149 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-768869957b-ffkcl"
Mar 13 13:12:16.951962 master-0 kubenswrapper[28149]: I0313 13:12:16.951893 28149 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/c8579a3e-7e92-42d9-b21f-6339bc1ebb4f-config\") pod \"c8579a3e-7e92-42d9-b21f-6339bc1ebb4f\" (UID: \"c8579a3e-7e92-42d9-b21f-6339bc1ebb4f\") "
Mar 13 13:12:16.953704 master-0 kubenswrapper[28149]: I0313 13:12:16.953628 28149 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/c8579a3e-7e92-42d9-b21f-6339bc1ebb4f-httpd-config\") pod \"c8579a3e-7e92-42d9-b21f-6339bc1ebb4f\" (UID: \"c8579a3e-7e92-42d9-b21f-6339bc1ebb4f\") "
Mar 13 13:12:16.959954 master-0 kubenswrapper[28149]: I0313 13:12:16.954357 28149 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c8579a3e-7e92-42d9-b21f-6339bc1ebb4f-combined-ca-bundle\") pod \"c8579a3e-7e92-42d9-b21f-6339bc1ebb4f\" (UID: \"c8579a3e-7e92-42d9-b21f-6339bc1ebb4f\") "
Mar 13 13:12:16.959954 master-0 kubenswrapper[28149]: I0313 13:12:16.954432 28149 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5l2k2\" (UniqueName: \"kubernetes.io/projected/c8579a3e-7e92-42d9-b21f-6339bc1ebb4f-kube-api-access-5l2k2\") pod \"c8579a3e-7e92-42d9-b21f-6339bc1ebb4f\" (UID: \"c8579a3e-7e92-42d9-b21f-6339bc1ebb4f\") "
Mar 13 13:12:16.959954 master-0 kubenswrapper[28149]: I0313 13:12:16.954535 28149 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/c8579a3e-7e92-42d9-b21f-6339bc1ebb4f-ovndb-tls-certs\") pod \"c8579a3e-7e92-42d9-b21f-6339bc1ebb4f\" (UID: \"c8579a3e-7e92-42d9-b21f-6339bc1ebb4f\") "
Mar 13 13:12:16.981177 master-0 kubenswrapper[28149]: I0313 13:12:16.978668 28149 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-proxy-5b4d4977b6-gw8dl" podStartSLOduration=2.9786384740000003 podStartE2EDuration="2.978638474s" podCreationTimestamp="2026-03-13 13:12:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 13:12:16.943659722 +0000 UTC m=+1110.597124871" watchObservedRunningTime="2026-03-13 13:12:16.978638474 +0000 UTC m=+1110.632103633"
Mar 13 13:12:16.992333 master-0 kubenswrapper[28149]: I0313 13:12:16.990838 28149 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c8579a3e-7e92-42d9-b21f-6339bc1ebb4f-httpd-config" (OuterVolumeSpecName: "httpd-config") pod "c8579a3e-7e92-42d9-b21f-6339bc1ebb4f" (UID: "c8579a3e-7e92-42d9-b21f-6339bc1ebb4f"). InnerVolumeSpecName "httpd-config". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 13 13:12:17.020278 master-0 kubenswrapper[28149]: I0313 13:12:17.017472 28149 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c8579a3e-7e92-42d9-b21f-6339bc1ebb4f-kube-api-access-5l2k2" (OuterVolumeSpecName: "kube-api-access-5l2k2") pod "c8579a3e-7e92-42d9-b21f-6339bc1ebb4f" (UID: "c8579a3e-7e92-42d9-b21f-6339bc1ebb4f"). InnerVolumeSpecName "kube-api-access-5l2k2". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 13 13:12:17.062165 master-0 kubenswrapper[28149]: I0313 13:12:17.057958 28149 reconciler_common.go:293] "Volume detached for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/c8579a3e-7e92-42d9-b21f-6339bc1ebb4f-httpd-config\") on node \"master-0\" DevicePath \"\""
Mar 13 13:12:17.062165 master-0 kubenswrapper[28149]: I0313 13:12:17.058002 28149 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5l2k2\" (UniqueName: \"kubernetes.io/projected/c8579a3e-7e92-42d9-b21f-6339bc1ebb4f-kube-api-access-5l2k2\") on node \"master-0\" DevicePath \"\""
Mar 13 13:12:17.080006 master-0 kubenswrapper[28149]: I0313 13:12:17.076352 28149 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c8579a3e-7e92-42d9-b21f-6339bc1ebb4f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c8579a3e-7e92-42d9-b21f-6339bc1ebb4f" (UID: "c8579a3e-7e92-42d9-b21f-6339bc1ebb4f"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 13 13:12:17.167938 master-0 kubenswrapper[28149]: I0313 13:12:17.164660 28149 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c8579a3e-7e92-42d9-b21f-6339bc1ebb4f-combined-ca-bundle\") on node \"master-0\" DevicePath \"\""
Mar 13 13:12:17.173399 master-0 kubenswrapper[28149]: I0313 13:12:17.173279 28149 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c8579a3e-7e92-42d9-b21f-6339bc1ebb4f-ovndb-tls-certs" (OuterVolumeSpecName: "ovndb-tls-certs") pod "c8579a3e-7e92-42d9-b21f-6339bc1ebb4f" (UID: "c8579a3e-7e92-42d9-b21f-6339bc1ebb4f"). InnerVolumeSpecName "ovndb-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 13 13:12:17.189211 master-0 kubenswrapper[28149]: I0313 13:12:17.183837 28149 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c8579a3e-7e92-42d9-b21f-6339bc1ebb4f-config" (OuterVolumeSpecName: "config") pod "c8579a3e-7e92-42d9-b21f-6339bc1ebb4f" (UID: "c8579a3e-7e92-42d9-b21f-6339bc1ebb4f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 13 13:12:17.271500 master-0 kubenswrapper[28149]: I0313 13:12:17.270329 28149 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/c8579a3e-7e92-42d9-b21f-6339bc1ebb4f-config\") on node \"master-0\" DevicePath \"\""
Mar 13 13:12:17.271500 master-0 kubenswrapper[28149]: I0313 13:12:17.270373 28149 reconciler_common.go:293] "Volume detached for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/c8579a3e-7e92-42d9-b21f-6339bc1ebb4f-ovndb-tls-certs\") on node \"master-0\" DevicePath \"\""
Mar 13 13:12:17.791688 master-0 kubenswrapper[28149]: I0313 13:12:17.791615 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-768869957b-ffkcl" event={"ID":"c8579a3e-7e92-42d9-b21f-6339bc1ebb4f","Type":"ContainerDied","Data":"bf016be6458a18e3342705fea2bcb0451634318fa601a0baea748d184bf63547"}
Mar 13 13:12:17.791688 master-0 kubenswrapper[28149]: I0313 13:12:17.791691 28149 scope.go:117] "RemoveContainer" containerID="656b9643de464c1cf24dc958794267f1f16c7713d8cd39047d8a4b7430c00e0f"
Mar 13 13:12:17.792795 master-0 kubenswrapper[28149]: I0313 13:12:17.791956 28149 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-768869957b-ffkcl"
Mar 13 13:12:17.809291 master-0 kubenswrapper[28149]: I0313 13:12:17.806442 28149 scope.go:117] "RemoveContainer" containerID="9da47da0c1d68defafda498c9391044c1b2f679960a3bb4597aeef1033c4cce3"
Mar 13 13:12:17.809291 master-0 kubenswrapper[28149]: E0313 13:12:17.806734 28149 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ironic-api\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ironic-api pod=ironic-6cf5cb77b5-nrxbr_openstack(d6997886-21ba-4767-a3f9-82bb99c7c39a)\"" pod="openstack/ironic-6cf5cb77b5-nrxbr" podUID="d6997886-21ba-4767-a3f9-82bb99c7c39a"
Mar 13 13:12:17.840725 master-0 kubenswrapper[28149]: I0313 13:12:17.836461 28149 scope.go:117] "RemoveContainer" containerID="12eeb73a39f53d8e14eda8ec5a01ccf4ca5f504668906bb2b70963fdeddd747e"
Mar 13 13:12:17.876734 master-0 kubenswrapper[28149]: I0313 13:12:17.876678 28149 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-768869957b-ffkcl"]
Mar 13 13:12:17.888252 master-0 kubenswrapper[28149]: I0313 13:12:17.888086 28149 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-768869957b-ffkcl"]
Mar 13 13:12:18.646606 master-0 kubenswrapper[28149]: I0313 13:12:18.646381 28149 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ironic-inspector-db-sync-4wfp7"]
Mar 13 13:12:18.646961 master-0 kubenswrapper[28149]: E0313 13:12:18.646938 28149 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c8579a3e-7e92-42d9-b21f-6339bc1ebb4f" containerName="neutron-httpd"
Mar 13 13:12:18.646961 master-0 kubenswrapper[28149]: I0313 13:12:18.646957 28149 state_mem.go:107] "Deleted CPUSet assignment" podUID="c8579a3e-7e92-42d9-b21f-6339bc1ebb4f" containerName="neutron-httpd"
Mar 13 13:12:18.647162 master-0 kubenswrapper[28149]: E0313 13:12:18.647015 28149 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c8579a3e-7e92-42d9-b21f-6339bc1ebb4f" containerName="neutron-api"
Mar 13 13:12:18.647162 master-0 kubenswrapper[28149]: I0313 13:12:18.647024 28149 state_mem.go:107] "Deleted CPUSet assignment" podUID="c8579a3e-7e92-42d9-b21f-6339bc1ebb4f" containerName="neutron-api"
Mar 13 13:12:18.647645 master-0 kubenswrapper[28149]: I0313 13:12:18.647538 28149 memory_manager.go:354] "RemoveStaleState removing state" podUID="c8579a3e-7e92-42d9-b21f-6339bc1ebb4f" containerName="neutron-httpd"
Mar 13 13:12:18.647645 master-0 kubenswrapper[28149]: I0313 13:12:18.647585 28149 memory_manager.go:354] "RemoveStaleState removing state" podUID="c8579a3e-7e92-42d9-b21f-6339bc1ebb4f" containerName="neutron-api"
Mar 13 13:12:18.663853 master-0 kubenswrapper[28149]: I0313 13:12:18.654914 28149 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ironic-inspector-db-sync-4wfp7"
Mar 13 13:12:18.666405 master-0 kubenswrapper[28149]: I0313 13:12:18.666202 28149 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ironic-inspector-config-data"
Mar 13 13:12:18.674367 master-0 kubenswrapper[28149]: I0313 13:12:18.674328 28149 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ironic-inspector-scripts"
Mar 13 13:12:18.742802 master-0 kubenswrapper[28149]: I0313 13:12:18.742602 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3181ad18-51bf-4620-b629-5e1a05bab0e0-combined-ca-bundle\") pod \"ironic-inspector-db-sync-4wfp7\" (UID: \"3181ad18-51bf-4620-b629-5e1a05bab0e0\") " pod="openstack/ironic-inspector-db-sync-4wfp7"
Mar 13 13:12:18.742802 master-0 kubenswrapper[28149]: I0313 13:12:18.742703 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3181ad18-51bf-4620-b629-5e1a05bab0e0-scripts\") pod \"ironic-inspector-db-sync-4wfp7\" (UID: \"3181ad18-51bf-4620-b629-5e1a05bab0e0\") " pod="openstack/ironic-inspector-db-sync-4wfp7"
Mar 13 13:12:18.742802 master-0 kubenswrapper[28149]: I0313 13:12:18.742787 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/3181ad18-51bf-4620-b629-5e1a05bab0e0-config\") pod \"ironic-inspector-db-sync-4wfp7\" (UID: \"3181ad18-51bf-4620-b629-5e1a05bab0e0\") " pod="openstack/ironic-inspector-db-sync-4wfp7"
Mar 13 13:12:18.743121 master-0 kubenswrapper[28149]: I0313 13:12:18.742870 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-ironic-inspector-dhcp-hostsdir\" (UniqueName: \"kubernetes.io/empty-dir/3181ad18-51bf-4620-b629-5e1a05bab0e0-var-lib-ironic-inspector-dhcp-hostsdir\") pod \"ironic-inspector-db-sync-4wfp7\" (UID: \"3181ad18-51bf-4620-b629-5e1a05bab0e0\") " pod="openstack/ironic-inspector-db-sync-4wfp7"
Mar 13 13:12:18.743121 master-0 kubenswrapper[28149]: I0313 13:12:18.742928 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-ironic\" (UniqueName: \"kubernetes.io/empty-dir/3181ad18-51bf-4620-b629-5e1a05bab0e0-var-lib-ironic\") pod \"ironic-inspector-db-sync-4wfp7\" (UID: \"3181ad18-51bf-4620-b629-5e1a05bab0e0\") " pod="openstack/ironic-inspector-db-sync-4wfp7"
Mar 13 13:12:18.743121 master-0 kubenswrapper[28149]: I0313 13:12:18.743036 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-podinfo\" (UniqueName: \"kubernetes.io/downward-api/3181ad18-51bf-4620-b629-5e1a05bab0e0-etc-podinfo\") pod \"ironic-inspector-db-sync-4wfp7\" (UID: \"3181ad18-51bf-4620-b629-5e1a05bab0e0\") " pod="openstack/ironic-inspector-db-sync-4wfp7"
Mar 13 13:12:18.743121 master-0 kubenswrapper[28149]: I0313 13:12:18.743110 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nrq9m\" (UniqueName: \"kubernetes.io/projected/3181ad18-51bf-4620-b629-5e1a05bab0e0-kube-api-access-nrq9m\") pod \"ironic-inspector-db-sync-4wfp7\" (UID: \"3181ad18-51bf-4620-b629-5e1a05bab0e0\") " pod="openstack/ironic-inspector-db-sync-4wfp7"
Mar 13 13:12:18.773220 master-0 kubenswrapper[28149]: I0313 13:12:18.773106 28149 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c8579a3e-7e92-42d9-b21f-6339bc1ebb4f" path="/var/lib/kubelet/pods/c8579a3e-7e92-42d9-b21f-6339bc1ebb4f/volumes"
Mar 13 13:12:18.775688 master-0 kubenswrapper[28149]: I0313 13:12:18.774208 28149 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ironic-inspector-db-sync-4wfp7"]
Mar 13 13:12:18.836989 master-0 kubenswrapper[28149]: I0313 13:12:18.833692 28149 scope.go:117] "RemoveContainer" containerID="9da47da0c1d68defafda498c9391044c1b2f679960a3bb4597aeef1033c4cce3"
Mar 13 13:12:18.837633 master-0 kubenswrapper[28149]: E0313 13:12:18.837228 28149 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ironic-api\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ironic-api pod=ironic-6cf5cb77b5-nrxbr_openstack(d6997886-21ba-4767-a3f9-82bb99c7c39a)\"" pod="openstack/ironic-6cf5cb77b5-nrxbr" podUID="d6997886-21ba-4767-a3f9-82bb99c7c39a"
Mar 13 13:12:18.847546 master-0 kubenswrapper[28149]: I0313 13:12:18.847150 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3181ad18-51bf-4620-b629-5e1a05bab0e0-combined-ca-bundle\") pod \"ironic-inspector-db-sync-4wfp7\" (UID: \"3181ad18-51bf-4620-b629-5e1a05bab0e0\") " pod="openstack/ironic-inspector-db-sync-4wfp7"
Mar 13 13:12:18.847546 master-0 kubenswrapper[28149]: I0313 13:12:18.847296 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3181ad18-51bf-4620-b629-5e1a05bab0e0-scripts\") pod \"ironic-inspector-db-sync-4wfp7\" (UID: \"3181ad18-51bf-4620-b629-5e1a05bab0e0\") " pod="openstack/ironic-inspector-db-sync-4wfp7"
Mar 13 13:12:18.847546 master-0 kubenswrapper[28149]: I0313 13:12:18.847386 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/3181ad18-51bf-4620-b629-5e1a05bab0e0-config\") pod \"ironic-inspector-db-sync-4wfp7\" (UID: \"3181ad18-51bf-4620-b629-5e1a05bab0e0\") " pod="openstack/ironic-inspector-db-sync-4wfp7"
Mar 13 13:12:18.847546 master-0 kubenswrapper[28149]: I0313 13:12:18.847518 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-ironic-inspector-dhcp-hostsdir\" (UniqueName: \"kubernetes.io/empty-dir/3181ad18-51bf-4620-b629-5e1a05bab0e0-var-lib-ironic-inspector-dhcp-hostsdir\") pod \"ironic-inspector-db-sync-4wfp7\" (UID: \"3181ad18-51bf-4620-b629-5e1a05bab0e0\") " pod="openstack/ironic-inspector-db-sync-4wfp7"
Mar 13 13:12:18.848250 master-0 kubenswrapper[28149]: I0313 13:12:18.847572 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-ironic\" (UniqueName: \"kubernetes.io/empty-dir/3181ad18-51bf-4620-b629-5e1a05bab0e0-var-lib-ironic\") pod \"ironic-inspector-db-sync-4wfp7\" (UID: \"3181ad18-51bf-4620-b629-5e1a05bab0e0\") " pod="openstack/ironic-inspector-db-sync-4wfp7"
Mar 13 13:12:18.848250 master-0 kubenswrapper[28149]: I0313 13:12:18.847723 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-podinfo\" (UniqueName: \"kubernetes.io/downward-api/3181ad18-51bf-4620-b629-5e1a05bab0e0-etc-podinfo\") pod \"ironic-inspector-db-sync-4wfp7\" (UID: \"3181ad18-51bf-4620-b629-5e1a05bab0e0\") " pod="openstack/ironic-inspector-db-sync-4wfp7"
Mar 13 13:12:18.848250 master-0 kubenswrapper[28149]: I0313 13:12:18.847859 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nrq9m\" (UniqueName: \"kubernetes.io/projected/3181ad18-51bf-4620-b629-5e1a05bab0e0-kube-api-access-nrq9m\") pod \"ironic-inspector-db-sync-4wfp7\" (UID: \"3181ad18-51bf-4620-b629-5e1a05bab0e0\") " pod="openstack/ironic-inspector-db-sync-4wfp7"
Mar 13 13:12:18.848869 master-0 kubenswrapper[28149]: I0313 13:12:18.848793 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-ironic-inspector-dhcp-hostsdir\" (UniqueName: \"kubernetes.io/empty-dir/3181ad18-51bf-4620-b629-5e1a05bab0e0-var-lib-ironic-inspector-dhcp-hostsdir\") pod \"ironic-inspector-db-sync-4wfp7\" (UID: \"3181ad18-51bf-4620-b629-5e1a05bab0e0\") " pod="openstack/ironic-inspector-db-sync-4wfp7"
Mar 13 13:12:18.848869 master-0 kubenswrapper[28149]: I0313 13:12:18.848860 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-ironic\" (UniqueName: \"kubernetes.io/empty-dir/3181ad18-51bf-4620-b629-5e1a05bab0e0-var-lib-ironic\") pod \"ironic-inspector-db-sync-4wfp7\" (UID: \"3181ad18-51bf-4620-b629-5e1a05bab0e0\") " pod="openstack/ironic-inspector-db-sync-4wfp7"
Mar 13 13:12:18.853690 master-0 kubenswrapper[28149]: I0313 13:12:18.852898 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-podinfo\" (UniqueName: \"kubernetes.io/downward-api/3181ad18-51bf-4620-b629-5e1a05bab0e0-etc-podinfo\") pod \"ironic-inspector-db-sync-4wfp7\" (UID: \"3181ad18-51bf-4620-b629-5e1a05bab0e0\") " pod="openstack/ironic-inspector-db-sync-4wfp7"
Mar 13 13:12:18.858055 master-0 kubenswrapper[28149]: I0313 13:12:18.857676 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3181ad18-51bf-4620-b629-5e1a05bab0e0-scripts\") pod \"ironic-inspector-db-sync-4wfp7\" (UID: \"3181ad18-51bf-4620-b629-5e1a05bab0e0\") " pod="openstack/ironic-inspector-db-sync-4wfp7"
Mar 13 13:12:18.861572 master-0 kubenswrapper[28149]: I0313 13:12:18.861483 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3181ad18-51bf-4620-b629-5e1a05bab0e0-combined-ca-bundle\") pod \"ironic-inspector-db-sync-4wfp7\" (UID: \"3181ad18-51bf-4620-b629-5e1a05bab0e0\") " pod="openstack/ironic-inspector-db-sync-4wfp7"
Mar 13 13:12:18.864835 master-0 kubenswrapper[28149]: I0313 13:12:18.864611 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/3181ad18-51bf-4620-b629-5e1a05bab0e0-config\") pod \"ironic-inspector-db-sync-4wfp7\" (UID: \"3181ad18-51bf-4620-b629-5e1a05bab0e0\") " pod="openstack/ironic-inspector-db-sync-4wfp7"
Mar 13 13:12:18.884830 master-0 kubenswrapper[28149]: I0313 13:12:18.884516 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nrq9m\" (UniqueName: \"kubernetes.io/projected/3181ad18-51bf-4620-b629-5e1a05bab0e0-kube-api-access-nrq9m\") pod \"ironic-inspector-db-sync-4wfp7\" (UID: \"3181ad18-51bf-4620-b629-5e1a05bab0e0\") " pod="openstack/ironic-inspector-db-sync-4wfp7"
Mar 13 13:12:19.044368 master-0 kubenswrapper[28149]: I0313 13:12:19.044283 28149 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ironic-inspector-db-sync-4wfp7"
Mar 13 13:12:19.252497 master-0 kubenswrapper[28149]: I0313 13:12:19.252117 28149 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ironic-58d4ff778c-wbbt4"
Mar 13 13:12:19.360066 master-0 kubenswrapper[28149]: I0313 13:12:19.356066 28149 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ironic-6cf5cb77b5-nrxbr"]
Mar 13 13:12:19.721447 master-0 kubenswrapper[28149]: I0313 13:12:19.721396 28149 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ironic-inspector-db-sync-4wfp7"]
Mar 13 13:12:19.849118 master-0 kubenswrapper[28149]: I0313 13:12:19.848355 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-inspector-db-sync-4wfp7" event={"ID":"3181ad18-51bf-4620-b629-5e1a05bab0e0","Type":"ContainerStarted","Data":"4c960a46d2ead0ef3d2b85e7f7558f0f84357c6aa4fa6cef75391e4c8ca247e1"}
Mar 13 13:12:19.849118 master-0 kubenswrapper[28149]: I0313 13:12:19.848676 28149 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ironic-6cf5cb77b5-nrxbr" podUID="d6997886-21ba-4767-a3f9-82bb99c7c39a" containerName="ironic-api-log" containerID="cri-o://da4edd3d9461af138752ec2a89ea72ea8d44f9669fe84abb321900c8f9ecf741" gracePeriod=60
Mar 13 13:12:20.450570 master-0 kubenswrapper[28149]: I0313 13:12:20.450521 28149 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ironic-6cf5cb77b5-nrxbr" Mar 13 13:12:20.713605 master-0 kubenswrapper[28149]: I0313 13:12:20.713334 28149 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d6997886-21ba-4767-a3f9-82bb99c7c39a-scripts\") pod \"d6997886-21ba-4767-a3f9-82bb99c7c39a\" (UID: \"d6997886-21ba-4767-a3f9-82bb99c7c39a\") " Mar 13 13:12:20.713605 master-0 kubenswrapper[28149]: I0313 13:12:20.713411 28149 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d6997886-21ba-4767-a3f9-82bb99c7c39a-combined-ca-bundle\") pod \"d6997886-21ba-4767-a3f9-82bb99c7c39a\" (UID: \"d6997886-21ba-4767-a3f9-82bb99c7c39a\") " Mar 13 13:12:20.713605 master-0 kubenswrapper[28149]: I0313 13:12:20.713488 28149 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d6997886-21ba-4767-a3f9-82bb99c7c39a-logs\") pod \"d6997886-21ba-4767-a3f9-82bb99c7c39a\" (UID: \"d6997886-21ba-4767-a3f9-82bb99c7c39a\") " Mar 13 13:12:20.714108 master-0 kubenswrapper[28149]: I0313 13:12:20.713662 28149 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d6997886-21ba-4767-a3f9-82bb99c7c39a-config-data\") pod \"d6997886-21ba-4767-a3f9-82bb99c7c39a\" (UID: \"d6997886-21ba-4767-a3f9-82bb99c7c39a\") " Mar 13 13:12:20.714108 master-0 kubenswrapper[28149]: I0313 13:12:20.713723 28149 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-podinfo\" (UniqueName: \"kubernetes.io/downward-api/d6997886-21ba-4767-a3f9-82bb99c7c39a-etc-podinfo\") pod \"d6997886-21ba-4767-a3f9-82bb99c7c39a\" (UID: \"d6997886-21ba-4767-a3f9-82bb99c7c39a\") " Mar 13 13:12:20.714108 master-0 kubenswrapper[28149]: I0313 13:12:20.713821 28149 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d6997886-21ba-4767-a3f9-82bb99c7c39a-config-data-custom\") pod \"d6997886-21ba-4767-a3f9-82bb99c7c39a\" (UID: \"d6997886-21ba-4767-a3f9-82bb99c7c39a\") " Mar 13 13:12:20.714108 master-0 kubenswrapper[28149]: I0313 13:12:20.713868 28149 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kpbw6\" (UniqueName: \"kubernetes.io/projected/d6997886-21ba-4767-a3f9-82bb99c7c39a-kube-api-access-kpbw6\") pod \"d6997886-21ba-4767-a3f9-82bb99c7c39a\" (UID: \"d6997886-21ba-4767-a3f9-82bb99c7c39a\") " Mar 13 13:12:20.714108 master-0 kubenswrapper[28149]: I0313 13:12:20.713974 28149 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/d6997886-21ba-4767-a3f9-82bb99c7c39a-config-data-merged\") pod \"d6997886-21ba-4767-a3f9-82bb99c7c39a\" (UID: \"d6997886-21ba-4767-a3f9-82bb99c7c39a\") " Mar 13 13:12:20.714394 master-0 kubenswrapper[28149]: I0313 13:12:20.714316 28149 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d6997886-21ba-4767-a3f9-82bb99c7c39a-logs" (OuterVolumeSpecName: "logs") pod "d6997886-21ba-4767-a3f9-82bb99c7c39a" (UID: "d6997886-21ba-4767-a3f9-82bb99c7c39a"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 13 13:12:20.715272 master-0 kubenswrapper[28149]: I0313 13:12:20.715241 28149 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d6997886-21ba-4767-a3f9-82bb99c7c39a-logs\") on node \"master-0\" DevicePath \"\"" Mar 13 13:12:20.720174 master-0 kubenswrapper[28149]: I0313 13:12:20.717989 28149 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d6997886-21ba-4767-a3f9-82bb99c7c39a-config-data-merged" (OuterVolumeSpecName: "config-data-merged") pod "d6997886-21ba-4767-a3f9-82bb99c7c39a" (UID: "d6997886-21ba-4767-a3f9-82bb99c7c39a"). InnerVolumeSpecName "config-data-merged". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 13 13:12:20.737337 master-0 kubenswrapper[28149]: I0313 13:12:20.728557 28149 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d6997886-21ba-4767-a3f9-82bb99c7c39a-kube-api-access-kpbw6" (OuterVolumeSpecName: "kube-api-access-kpbw6") pod "d6997886-21ba-4767-a3f9-82bb99c7c39a" (UID: "d6997886-21ba-4767-a3f9-82bb99c7c39a"). InnerVolumeSpecName "kube-api-access-kpbw6". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 13:12:20.737337 master-0 kubenswrapper[28149]: I0313 13:12:20.731376 28149 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/d6997886-21ba-4767-a3f9-82bb99c7c39a-etc-podinfo" (OuterVolumeSpecName: "etc-podinfo") pod "d6997886-21ba-4767-a3f9-82bb99c7c39a" (UID: "d6997886-21ba-4767-a3f9-82bb99c7c39a"). InnerVolumeSpecName "etc-podinfo". 
PluginName "kubernetes.io/downward-api", VolumeGidValue "" Mar 13 13:12:20.737337 master-0 kubenswrapper[28149]: I0313 13:12:20.731580 28149 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d6997886-21ba-4767-a3f9-82bb99c7c39a-scripts" (OuterVolumeSpecName: "scripts") pod "d6997886-21ba-4767-a3f9-82bb99c7c39a" (UID: "d6997886-21ba-4767-a3f9-82bb99c7c39a"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 13:12:20.737337 master-0 kubenswrapper[28149]: I0313 13:12:20.731916 28149 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d6997886-21ba-4767-a3f9-82bb99c7c39a-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "d6997886-21ba-4767-a3f9-82bb99c7c39a" (UID: "d6997886-21ba-4767-a3f9-82bb99c7c39a"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 13:12:20.752179 master-0 kubenswrapper[28149]: E0313 13:12:20.740012 28149 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 716c7266d1a65297f2c11d428adef1d649e2162559ae5398d63184935ba678fb is running failed: container process not found" containerID="716c7266d1a65297f2c11d428adef1d649e2162559ae5398d63184935ba678fb" cmd=["/bin/true"] Mar 13 13:12:20.752179 master-0 kubenswrapper[28149]: E0313 13:12:20.743562 28149 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 716c7266d1a65297f2c11d428adef1d649e2162559ae5398d63184935ba678fb is running failed: container process not found" containerID="716c7266d1a65297f2c11d428adef1d649e2162559ae5398d63184935ba678fb" cmd=["/bin/true"] Mar 13 13:12:20.752179 master-0 kubenswrapper[28149]: E0313 13:12:20.743633 28149 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound 
desc = container is not created or running: checking if PID of 716c7266d1a65297f2c11d428adef1d649e2162559ae5398d63184935ba678fb is running failed: container process not found" containerID="716c7266d1a65297f2c11d428adef1d649e2162559ae5398d63184935ba678fb" cmd=["/bin/true"] Mar 13 13:12:20.763156 master-0 kubenswrapper[28149]: E0313 13:12:20.757535 28149 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 716c7266d1a65297f2c11d428adef1d649e2162559ae5398d63184935ba678fb is running failed: container process not found" containerID="716c7266d1a65297f2c11d428adef1d649e2162559ae5398d63184935ba678fb" cmd=["/bin/true"] Mar 13 13:12:20.763156 master-0 kubenswrapper[28149]: E0313 13:12:20.757623 28149 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 716c7266d1a65297f2c11d428adef1d649e2162559ae5398d63184935ba678fb is running failed: container process not found" probeType="Readiness" pod="openstack/ironic-neutron-agent-8454cbf95d-4wvx9" podUID="52f4f9dd-4956-4c8b-9a8d-c832a8049c3a" containerName="ironic-neutron-agent" Mar 13 13:12:20.763156 master-0 kubenswrapper[28149]: E0313 13:12:20.757688 28149 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 716c7266d1a65297f2c11d428adef1d649e2162559ae5398d63184935ba678fb is running failed: container process not found" containerID="716c7266d1a65297f2c11d428adef1d649e2162559ae5398d63184935ba678fb" cmd=["/bin/true"] Mar 13 13:12:20.767205 master-0 kubenswrapper[28149]: E0313 13:12:20.765906 28149 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 716c7266d1a65297f2c11d428adef1d649e2162559ae5398d63184935ba678fb is running failed: container process not found" 
containerID="716c7266d1a65297f2c11d428adef1d649e2162559ae5398d63184935ba678fb" cmd=["/bin/true"] Mar 13 13:12:20.767205 master-0 kubenswrapper[28149]: E0313 13:12:20.765995 28149 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 716c7266d1a65297f2c11d428adef1d649e2162559ae5398d63184935ba678fb is running failed: container process not found" probeType="Liveness" pod="openstack/ironic-neutron-agent-8454cbf95d-4wvx9" podUID="52f4f9dd-4956-4c8b-9a8d-c832a8049c3a" containerName="ironic-neutron-agent" Mar 13 13:12:20.819189 master-0 kubenswrapper[28149]: I0313 13:12:20.817763 28149 reconciler_common.go:293] "Volume detached for volume \"etc-podinfo\" (UniqueName: \"kubernetes.io/downward-api/d6997886-21ba-4767-a3f9-82bb99c7c39a-etc-podinfo\") on node \"master-0\" DevicePath \"\"" Mar 13 13:12:20.819189 master-0 kubenswrapper[28149]: I0313 13:12:20.817811 28149 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d6997886-21ba-4767-a3f9-82bb99c7c39a-config-data-custom\") on node \"master-0\" DevicePath \"\"" Mar 13 13:12:20.819189 master-0 kubenswrapper[28149]: I0313 13:12:20.817823 28149 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kpbw6\" (UniqueName: \"kubernetes.io/projected/d6997886-21ba-4767-a3f9-82bb99c7c39a-kube-api-access-kpbw6\") on node \"master-0\" DevicePath \"\"" Mar 13 13:12:20.819189 master-0 kubenswrapper[28149]: I0313 13:12:20.817836 28149 reconciler_common.go:293] "Volume detached for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/d6997886-21ba-4767-a3f9-82bb99c7c39a-config-data-merged\") on node \"master-0\" DevicePath \"\"" Mar 13 13:12:20.819189 master-0 kubenswrapper[28149]: I0313 13:12:20.817844 28149 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d6997886-21ba-4767-a3f9-82bb99c7c39a-scripts\") on node 
\"master-0\" DevicePath \"\"" Mar 13 13:12:20.872475 master-0 kubenswrapper[28149]: I0313 13:12:20.872398 28149 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d6997886-21ba-4767-a3f9-82bb99c7c39a-config-data" (OuterVolumeSpecName: "config-data") pod "d6997886-21ba-4767-a3f9-82bb99c7c39a" (UID: "d6997886-21ba-4767-a3f9-82bb99c7c39a"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 13:12:20.891501 master-0 kubenswrapper[28149]: I0313 13:12:20.891436 28149 generic.go:334] "Generic (PLEG): container finished" podID="d6997886-21ba-4767-a3f9-82bb99c7c39a" containerID="da4edd3d9461af138752ec2a89ea72ea8d44f9669fe84abb321900c8f9ecf741" exitCode=143 Mar 13 13:12:20.891949 master-0 kubenswrapper[28149]: I0313 13:12:20.891584 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-6cf5cb77b5-nrxbr" event={"ID":"d6997886-21ba-4767-a3f9-82bb99c7c39a","Type":"ContainerDied","Data":"da4edd3d9461af138752ec2a89ea72ea8d44f9669fe84abb321900c8f9ecf741"} Mar 13 13:12:20.891949 master-0 kubenswrapper[28149]: I0313 13:12:20.891648 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-6cf5cb77b5-nrxbr" event={"ID":"d6997886-21ba-4767-a3f9-82bb99c7c39a","Type":"ContainerDied","Data":"b5a57f19d5db8dcaa5cc2f8fcced92aa060fec9baa20abcc56e29135193d0088"} Mar 13 13:12:20.891949 master-0 kubenswrapper[28149]: I0313 13:12:20.891743 28149 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ironic-6cf5cb77b5-nrxbr" Mar 13 13:12:20.891949 master-0 kubenswrapper[28149]: I0313 13:12:20.891881 28149 scope.go:117] "RemoveContainer" containerID="9da47da0c1d68defafda498c9391044c1b2f679960a3bb4597aeef1033c4cce3" Mar 13 13:12:20.896642 master-0 kubenswrapper[28149]: I0313 13:12:20.896554 28149 generic.go:334] "Generic (PLEG): container finished" podID="52f4f9dd-4956-4c8b-9a8d-c832a8049c3a" containerID="716c7266d1a65297f2c11d428adef1d649e2162559ae5398d63184935ba678fb" exitCode=1 Mar 13 13:12:20.896642 master-0 kubenswrapper[28149]: I0313 13:12:20.896631 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-neutron-agent-8454cbf95d-4wvx9" event={"ID":"52f4f9dd-4956-4c8b-9a8d-c832a8049c3a","Type":"ContainerDied","Data":"716c7266d1a65297f2c11d428adef1d649e2162559ae5398d63184935ba678fb"} Mar 13 13:12:20.898943 master-0 kubenswrapper[28149]: I0313 13:12:20.898425 28149 scope.go:117] "RemoveContainer" containerID="716c7266d1a65297f2c11d428adef1d649e2162559ae5398d63184935ba678fb" Mar 13 13:12:20.907793 master-0 kubenswrapper[28149]: I0313 13:12:20.907737 28149 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d6997886-21ba-4767-a3f9-82bb99c7c39a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d6997886-21ba-4767-a3f9-82bb99c7c39a" (UID: "d6997886-21ba-4767-a3f9-82bb99c7c39a"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 13:12:20.947412 master-0 kubenswrapper[28149]: I0313 13:12:20.947355 28149 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d6997886-21ba-4767-a3f9-82bb99c7c39a-config-data\") on node \"master-0\" DevicePath \"\"" Mar 13 13:12:20.947412 master-0 kubenswrapper[28149]: I0313 13:12:20.947403 28149 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d6997886-21ba-4767-a3f9-82bb99c7c39a-combined-ca-bundle\") on node \"master-0\" DevicePath \"\"" Mar 13 13:12:21.055459 master-0 kubenswrapper[28149]: I0313 13:12:21.053738 28149 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-ee0a2-volume-lvm-iscsi-0" Mar 13 13:12:21.355730 master-0 kubenswrapper[28149]: I0313 13:12:21.354828 28149 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ironic-6cf5cb77b5-nrxbr"] Mar 13 13:12:21.376846 master-0 kubenswrapper[28149]: I0313 13:12:21.376724 28149 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ironic-6cf5cb77b5-nrxbr"] Mar 13 13:12:22.733306 master-0 kubenswrapper[28149]: I0313 13:12:22.733244 28149 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d6997886-21ba-4767-a3f9-82bb99c7c39a" path="/var/lib/kubelet/pods/d6997886-21ba-4767-a3f9-82bb99c7c39a/volumes" Mar 13 13:12:24.772675 master-0 kubenswrapper[28149]: I0313 13:12:24.772622 28149 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-5b4d4977b6-gw8dl" Mar 13 13:12:24.779119 master-0 kubenswrapper[28149]: I0313 13:12:24.779065 28149 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-5b4d4977b6-gw8dl" Mar 13 13:12:25.740214 master-0 kubenswrapper[28149]: I0313 13:12:25.740118 28149 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack/ironic-neutron-agent-8454cbf95d-4wvx9" Mar 13 13:12:25.741789 master-0 kubenswrapper[28149]: I0313 13:12:25.741714 28149 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/ironic-neutron-agent-8454cbf95d-4wvx9" Mar 13 13:12:28.488981 master-0 kubenswrapper[28149]: I0313 13:12:28.488820 28149 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-db-create-6vxdg"] Mar 13 13:12:28.513267 master-0 kubenswrapper[28149]: E0313 13:12:28.508666 28149 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d6997886-21ba-4767-a3f9-82bb99c7c39a" containerName="ironic-api-log" Mar 13 13:12:28.513267 master-0 kubenswrapper[28149]: I0313 13:12:28.508729 28149 state_mem.go:107] "Deleted CPUSet assignment" podUID="d6997886-21ba-4767-a3f9-82bb99c7c39a" containerName="ironic-api-log" Mar 13 13:12:28.513267 master-0 kubenswrapper[28149]: E0313 13:12:28.508750 28149 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d6997886-21ba-4767-a3f9-82bb99c7c39a" containerName="init" Mar 13 13:12:28.513267 master-0 kubenswrapper[28149]: I0313 13:12:28.508758 28149 state_mem.go:107] "Deleted CPUSet assignment" podUID="d6997886-21ba-4767-a3f9-82bb99c7c39a" containerName="init" Mar 13 13:12:28.513267 master-0 kubenswrapper[28149]: E0313 13:12:28.508778 28149 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d6997886-21ba-4767-a3f9-82bb99c7c39a" containerName="ironic-api" Mar 13 13:12:28.513267 master-0 kubenswrapper[28149]: I0313 13:12:28.508784 28149 state_mem.go:107] "Deleted CPUSet assignment" podUID="d6997886-21ba-4767-a3f9-82bb99c7c39a" containerName="ironic-api" Mar 13 13:12:28.513267 master-0 kubenswrapper[28149]: E0313 13:12:28.508843 28149 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d6997886-21ba-4767-a3f9-82bb99c7c39a" containerName="ironic-api" Mar 13 13:12:28.513267 master-0 kubenswrapper[28149]: I0313 13:12:28.508850 28149 state_mem.go:107] "Deleted 
CPUSet assignment" podUID="d6997886-21ba-4767-a3f9-82bb99c7c39a" containerName="ironic-api" Mar 13 13:12:28.513267 master-0 kubenswrapper[28149]: I0313 13:12:28.509242 28149 memory_manager.go:354] "RemoveStaleState removing state" podUID="d6997886-21ba-4767-a3f9-82bb99c7c39a" containerName="ironic-api" Mar 13 13:12:28.513267 master-0 kubenswrapper[28149]: I0313 13:12:28.509298 28149 memory_manager.go:354] "RemoveStaleState removing state" podUID="d6997886-21ba-4767-a3f9-82bb99c7c39a" containerName="ironic-api" Mar 13 13:12:28.513267 master-0 kubenswrapper[28149]: I0313 13:12:28.509336 28149 memory_manager.go:354] "RemoveStaleState removing state" podUID="d6997886-21ba-4767-a3f9-82bb99c7c39a" containerName="ironic-api-log" Mar 13 13:12:28.513267 master-0 kubenswrapper[28149]: I0313 13:12:28.510581 28149 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-6vxdg" Mar 13 13:12:28.654173 master-0 kubenswrapper[28149]: I0313 13:12:28.648187 28149 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-6vxdg"] Mar 13 13:12:28.960940 master-0 kubenswrapper[28149]: I0313 13:12:28.712499 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zlkr9\" (UniqueName: \"kubernetes.io/projected/d44088e4-d747-49f2-a892-7b20ce2adac5-kube-api-access-zlkr9\") pod \"nova-api-db-create-6vxdg\" (UID: \"d44088e4-d747-49f2-a892-7b20ce2adac5\") " pod="openstack/nova-api-db-create-6vxdg" Mar 13 13:12:28.960940 master-0 kubenswrapper[28149]: I0313 13:12:28.958788 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d44088e4-d747-49f2-a892-7b20ce2adac5-operator-scripts\") pod \"nova-api-db-create-6vxdg\" (UID: \"d44088e4-d747-49f2-a892-7b20ce2adac5\") " pod="openstack/nova-api-db-create-6vxdg" Mar 13 13:12:28.993378 master-0 kubenswrapper[28149]: 
I0313 13:12:28.993202 28149 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-db-create-rvfxr"] Mar 13 13:12:28.997895 master-0 kubenswrapper[28149]: I0313 13:12:28.997826 28149 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-rvfxr" Mar 13 13:12:29.038181 master-0 kubenswrapper[28149]: I0313 13:12:28.997474 28149 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-rvfxr"] Mar 13 13:12:29.061822 master-0 kubenswrapper[28149]: I0313 13:12:29.061393 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zlkr9\" (UniqueName: \"kubernetes.io/projected/d44088e4-d747-49f2-a892-7b20ce2adac5-kube-api-access-zlkr9\") pod \"nova-api-db-create-6vxdg\" (UID: \"d44088e4-d747-49f2-a892-7b20ce2adac5\") " pod="openstack/nova-api-db-create-6vxdg" Mar 13 13:12:29.061822 master-0 kubenswrapper[28149]: I0313 13:12:29.061797 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d44088e4-d747-49f2-a892-7b20ce2adac5-operator-scripts\") pod \"nova-api-db-create-6vxdg\" (UID: \"d44088e4-d747-49f2-a892-7b20ce2adac5\") " pod="openstack/nova-api-db-create-6vxdg" Mar 13 13:12:29.063642 master-0 kubenswrapper[28149]: I0313 13:12:29.062951 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d44088e4-d747-49f2-a892-7b20ce2adac5-operator-scripts\") pod \"nova-api-db-create-6vxdg\" (UID: \"d44088e4-d747-49f2-a892-7b20ce2adac5\") " pod="openstack/nova-api-db-create-6vxdg" Mar 13 13:12:29.156401 master-0 kubenswrapper[28149]: I0313 13:12:29.156306 28149 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-924b-account-create-update-nsjdj"] Mar 13 13:12:29.203191 master-0 kubenswrapper[28149]: I0313 13:12:29.165763 28149 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/42738462-ac51-4b8d-a688-c67389ad111b-operator-scripts\") pod \"nova-cell0-db-create-rvfxr\" (UID: \"42738462-ac51-4b8d-a688-c67389ad111b\") " pod="openstack/nova-cell0-db-create-rvfxr" Mar 13 13:12:29.203191 master-0 kubenswrapper[28149]: I0313 13:12:29.201887 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bgstb\" (UniqueName: \"kubernetes.io/projected/42738462-ac51-4b8d-a688-c67389ad111b-kube-api-access-bgstb\") pod \"nova-cell0-db-create-rvfxr\" (UID: \"42738462-ac51-4b8d-a688-c67389ad111b\") " pod="openstack/nova-cell0-db-create-rvfxr" Mar 13 13:12:29.203191 master-0 kubenswrapper[28149]: I0313 13:12:29.188901 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zlkr9\" (UniqueName: \"kubernetes.io/projected/d44088e4-d747-49f2-a892-7b20ce2adac5-kube-api-access-zlkr9\") pod \"nova-api-db-create-6vxdg\" (UID: \"d44088e4-d747-49f2-a892-7b20ce2adac5\") " pod="openstack/nova-api-db-create-6vxdg" Mar 13 13:12:29.203191 master-0 kubenswrapper[28149]: I0313 13:12:29.172482 28149 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-924b-account-create-update-nsjdj" Mar 13 13:12:29.236381 master-0 kubenswrapper[28149]: I0313 13:12:29.231004 28149 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-db-secret" Mar 13 13:12:29.297498 master-0 kubenswrapper[28149]: I0313 13:12:29.290285 28149 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-924b-account-create-update-nsjdj"] Mar 13 13:12:29.314585 master-0 kubenswrapper[28149]: I0313 13:12:29.314522 28149 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-db-create-6g5cr"] Mar 13 13:12:29.317085 master-0 kubenswrapper[28149]: I0313 13:12:29.317028 28149 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-6g5cr" Mar 13 13:12:29.318873 master-0 kubenswrapper[28149]: I0313 13:12:29.318741 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2h9vb\" (UniqueName: \"kubernetes.io/projected/6b553d13-8815-4ada-a5a6-014839b8be7a-kube-api-access-2h9vb\") pod \"nova-api-924b-account-create-update-nsjdj\" (UID: \"6b553d13-8815-4ada-a5a6-014839b8be7a\") " pod="openstack/nova-api-924b-account-create-update-nsjdj" Mar 13 13:12:29.321163 master-0 kubenswrapper[28149]: I0313 13:12:29.318873 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/42738462-ac51-4b8d-a688-c67389ad111b-operator-scripts\") pod \"nova-cell0-db-create-rvfxr\" (UID: \"42738462-ac51-4b8d-a688-c67389ad111b\") " pod="openstack/nova-cell0-db-create-rvfxr" Mar 13 13:12:29.321163 master-0 kubenswrapper[28149]: I0313 13:12:29.320343 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6b553d13-8815-4ada-a5a6-014839b8be7a-operator-scripts\") pod 
\"nova-api-924b-account-create-update-nsjdj\" (UID: \"6b553d13-8815-4ada-a5a6-014839b8be7a\") " pod="openstack/nova-api-924b-account-create-update-nsjdj" Mar 13 13:12:29.321163 master-0 kubenswrapper[28149]: I0313 13:12:29.320899 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bgstb\" (UniqueName: \"kubernetes.io/projected/42738462-ac51-4b8d-a688-c67389ad111b-kube-api-access-bgstb\") pod \"nova-cell0-db-create-rvfxr\" (UID: \"42738462-ac51-4b8d-a688-c67389ad111b\") " pod="openstack/nova-cell0-db-create-rvfxr" Mar 13 13:12:29.321510 master-0 kubenswrapper[28149]: I0313 13:12:29.321452 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/42738462-ac51-4b8d-a688-c67389ad111b-operator-scripts\") pod \"nova-cell0-db-create-rvfxr\" (UID: \"42738462-ac51-4b8d-a688-c67389ad111b\") " pod="openstack/nova-cell0-db-create-rvfxr" Mar 13 13:12:29.350170 master-0 kubenswrapper[28149]: I0313 13:12:29.350098 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bgstb\" (UniqueName: \"kubernetes.io/projected/42738462-ac51-4b8d-a688-c67389ad111b-kube-api-access-bgstb\") pod \"nova-cell0-db-create-rvfxr\" (UID: \"42738462-ac51-4b8d-a688-c67389ad111b\") " pod="openstack/nova-cell0-db-create-rvfxr" Mar 13 13:12:29.414685 master-0 kubenswrapper[28149]: I0313 13:12:29.414595 28149 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-6g5cr"] Mar 13 13:12:29.416981 master-0 kubenswrapper[28149]: I0313 13:12:29.416924 28149 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-db-create-rvfxr"
Mar 13 13:12:29.455824 master-0 kubenswrapper[28149]: I0313 13:12:29.455721 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4nwwr\" (UniqueName: \"kubernetes.io/projected/1f6be9e1-66cf-49d1-8c97-37f09d594892-kube-api-access-4nwwr\") pod \"nova-cell1-db-create-6g5cr\" (UID: \"1f6be9e1-66cf-49d1-8c97-37f09d594892\") " pod="openstack/nova-cell1-db-create-6g5cr"
Mar 13 13:12:29.456246 master-0 kubenswrapper[28149]: I0313 13:12:29.456194 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1f6be9e1-66cf-49d1-8c97-37f09d594892-operator-scripts\") pod \"nova-cell1-db-create-6g5cr\" (UID: \"1f6be9e1-66cf-49d1-8c97-37f09d594892\") " pod="openstack/nova-cell1-db-create-6g5cr"
Mar 13 13:12:29.456453 master-0 kubenswrapper[28149]: I0313 13:12:29.456375 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2h9vb\" (UniqueName: \"kubernetes.io/projected/6b553d13-8815-4ada-a5a6-014839b8be7a-kube-api-access-2h9vb\") pod \"nova-api-924b-account-create-update-nsjdj\" (UID: \"6b553d13-8815-4ada-a5a6-014839b8be7a\") " pod="openstack/nova-api-924b-account-create-update-nsjdj"
Mar 13 13:12:29.456795 master-0 kubenswrapper[28149]: I0313 13:12:29.456641 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6b553d13-8815-4ada-a5a6-014839b8be7a-operator-scripts\") pod \"nova-api-924b-account-create-update-nsjdj\" (UID: \"6b553d13-8815-4ada-a5a6-014839b8be7a\") " pod="openstack/nova-api-924b-account-create-update-nsjdj"
Mar 13 13:12:29.459347 master-0 kubenswrapper[28149]: I0313 13:12:29.459303 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6b553d13-8815-4ada-a5a6-014839b8be7a-operator-scripts\") pod \"nova-api-924b-account-create-update-nsjdj\" (UID: \"6b553d13-8815-4ada-a5a6-014839b8be7a\") " pod="openstack/nova-api-924b-account-create-update-nsjdj"
Mar 13 13:12:29.493537 master-0 kubenswrapper[28149]: I0313 13:12:29.493359 28149 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-6vxdg"
Mar 13 13:12:29.508199 master-0 kubenswrapper[28149]: I0313 13:12:29.501443 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2h9vb\" (UniqueName: \"kubernetes.io/projected/6b553d13-8815-4ada-a5a6-014839b8be7a-kube-api-access-2h9vb\") pod \"nova-api-924b-account-create-update-nsjdj\" (UID: \"6b553d13-8815-4ada-a5a6-014839b8be7a\") " pod="openstack/nova-api-924b-account-create-update-nsjdj"
Mar 13 13:12:29.558183 master-0 kubenswrapper[28149]: I0313 13:12:29.558108 28149 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-9f84-account-create-update-p9z6g"]
Mar 13 13:12:29.558970 master-0 kubenswrapper[28149]: I0313 13:12:29.558902 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4nwwr\" (UniqueName: \"kubernetes.io/projected/1f6be9e1-66cf-49d1-8c97-37f09d594892-kube-api-access-4nwwr\") pod \"nova-cell1-db-create-6g5cr\" (UID: \"1f6be9e1-66cf-49d1-8c97-37f09d594892\") " pod="openstack/nova-cell1-db-create-6g5cr"
Mar 13 13:12:29.559252 master-0 kubenswrapper[28149]: I0313 13:12:29.559042 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1f6be9e1-66cf-49d1-8c97-37f09d594892-operator-scripts\") pod \"nova-cell1-db-create-6g5cr\" (UID: \"1f6be9e1-66cf-49d1-8c97-37f09d594892\") " pod="openstack/nova-cell1-db-create-6g5cr"
Mar 13 13:12:29.560024 master-0 kubenswrapper[28149]: I0313 13:12:29.559801 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1f6be9e1-66cf-49d1-8c97-37f09d594892-operator-scripts\") pod \"nova-cell1-db-create-6g5cr\" (UID: \"1f6be9e1-66cf-49d1-8c97-37f09d594892\") " pod="openstack/nova-cell1-db-create-6g5cr"
Mar 13 13:12:29.563910 master-0 kubenswrapper[28149]: I0313 13:12:29.563875 28149 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-9f84-account-create-update-p9z6g"
Mar 13 13:12:29.567974 master-0 kubenswrapper[28149]: I0313 13:12:29.567578 28149 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-db-secret"
Mar 13 13:12:29.584089 master-0 kubenswrapper[28149]: I0313 13:12:29.584030 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4nwwr\" (UniqueName: \"kubernetes.io/projected/1f6be9e1-66cf-49d1-8c97-37f09d594892-kube-api-access-4nwwr\") pod \"nova-cell1-db-create-6g5cr\" (UID: \"1f6be9e1-66cf-49d1-8c97-37f09d594892\") " pod="openstack/nova-cell1-db-create-6g5cr"
Mar 13 13:12:29.603686 master-0 kubenswrapper[28149]: I0313 13:12:29.602936 28149 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-9f84-account-create-update-p9z6g"]
Mar 13 13:12:29.625327 master-0 kubenswrapper[28149]: I0313 13:12:29.622853 28149 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-68ee-account-create-update-57sr5"]
Mar 13 13:12:29.631174 master-0 kubenswrapper[28149]: I0313 13:12:29.630688 28149 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-68ee-account-create-update-57sr5"
Mar 13 13:12:29.635003 master-0 kubenswrapper[28149]: I0313 13:12:29.634338 28149 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-924b-account-create-update-nsjdj"
Mar 13 13:12:29.637472 master-0 kubenswrapper[28149]: I0313 13:12:29.636752 28149 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-db-secret"
Mar 13 13:12:29.640940 master-0 kubenswrapper[28149]: I0313 13:12:29.640717 28149 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-68ee-account-create-update-57sr5"]
Mar 13 13:12:29.661906 master-0 kubenswrapper[28149]: I0313 13:12:29.661815 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7bae1fb0-1850-4689-b3ec-018c4a08917f-operator-scripts\") pod \"nova-cell0-68ee-account-create-update-57sr5\" (UID: \"7bae1fb0-1850-4689-b3ec-018c4a08917f\") " pod="openstack/nova-cell0-68ee-account-create-update-57sr5"
Mar 13 13:12:29.661906 master-0 kubenswrapper[28149]: I0313 13:12:29.661896 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hbs6l\" (UniqueName: \"kubernetes.io/projected/84d745fd-adcc-4790-b117-fca53bf18ac1-kube-api-access-hbs6l\") pod \"nova-cell1-9f84-account-create-update-p9z6g\" (UID: \"84d745fd-adcc-4790-b117-fca53bf18ac1\") " pod="openstack/nova-cell1-9f84-account-create-update-p9z6g"
Mar 13 13:12:29.662258 master-0 kubenswrapper[28149]: I0313 13:12:29.661999 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/84d745fd-adcc-4790-b117-fca53bf18ac1-operator-scripts\") pod \"nova-cell1-9f84-account-create-update-p9z6g\" (UID: \"84d745fd-adcc-4790-b117-fca53bf18ac1\") " pod="openstack/nova-cell1-9f84-account-create-update-p9z6g"
Mar 13 13:12:29.662258 master-0 kubenswrapper[28149]: I0313 13:12:29.662044 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lk7vn\" (UniqueName: \"kubernetes.io/projected/7bae1fb0-1850-4689-b3ec-018c4a08917f-kube-api-access-lk7vn\") pod \"nova-cell0-68ee-account-create-update-57sr5\" (UID: \"7bae1fb0-1850-4689-b3ec-018c4a08917f\") " pod="openstack/nova-cell0-68ee-account-create-update-57sr5"
Mar 13 13:12:29.740559 master-0 kubenswrapper[28149]: I0313 13:12:29.740505 28149 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-6g5cr"
Mar 13 13:12:29.767120 master-0 kubenswrapper[28149]: I0313 13:12:29.765981 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/84d745fd-adcc-4790-b117-fca53bf18ac1-operator-scripts\") pod \"nova-cell1-9f84-account-create-update-p9z6g\" (UID: \"84d745fd-adcc-4790-b117-fca53bf18ac1\") " pod="openstack/nova-cell1-9f84-account-create-update-p9z6g"
Mar 13 13:12:29.767392 master-0 kubenswrapper[28149]: I0313 13:12:29.767207 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lk7vn\" (UniqueName: \"kubernetes.io/projected/7bae1fb0-1850-4689-b3ec-018c4a08917f-kube-api-access-lk7vn\") pod \"nova-cell0-68ee-account-create-update-57sr5\" (UID: \"7bae1fb0-1850-4689-b3ec-018c4a08917f\") " pod="openstack/nova-cell0-68ee-account-create-update-57sr5"
Mar 13 13:12:29.767528 master-0 kubenswrapper[28149]: I0313 13:12:29.767500 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7bae1fb0-1850-4689-b3ec-018c4a08917f-operator-scripts\") pod \"nova-cell0-68ee-account-create-update-57sr5\" (UID: \"7bae1fb0-1850-4689-b3ec-018c4a08917f\") " pod="openstack/nova-cell0-68ee-account-create-update-57sr5"
Mar 13 13:12:29.767581 master-0 kubenswrapper[28149]: I0313 13:12:29.767534 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hbs6l\" (UniqueName: \"kubernetes.io/projected/84d745fd-adcc-4790-b117-fca53bf18ac1-kube-api-access-hbs6l\") pod \"nova-cell1-9f84-account-create-update-p9z6g\" (UID: \"84d745fd-adcc-4790-b117-fca53bf18ac1\") " pod="openstack/nova-cell1-9f84-account-create-update-p9z6g"
Mar 13 13:12:29.774176 master-0 kubenswrapper[28149]: I0313 13:12:29.773722 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/84d745fd-adcc-4790-b117-fca53bf18ac1-operator-scripts\") pod \"nova-cell1-9f84-account-create-update-p9z6g\" (UID: \"84d745fd-adcc-4790-b117-fca53bf18ac1\") " pod="openstack/nova-cell1-9f84-account-create-update-p9z6g"
Mar 13 13:12:29.776684 master-0 kubenswrapper[28149]: I0313 13:12:29.775508 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7bae1fb0-1850-4689-b3ec-018c4a08917f-operator-scripts\") pod \"nova-cell0-68ee-account-create-update-57sr5\" (UID: \"7bae1fb0-1850-4689-b3ec-018c4a08917f\") " pod="openstack/nova-cell0-68ee-account-create-update-57sr5"
Mar 13 13:12:29.797173 master-0 kubenswrapper[28149]: I0313 13:12:29.790650 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hbs6l\" (UniqueName: \"kubernetes.io/projected/84d745fd-adcc-4790-b117-fca53bf18ac1-kube-api-access-hbs6l\") pod \"nova-cell1-9f84-account-create-update-p9z6g\" (UID: \"84d745fd-adcc-4790-b117-fca53bf18ac1\") " pod="openstack/nova-cell1-9f84-account-create-update-p9z6g"
Mar 13 13:12:29.799232 master-0 kubenswrapper[28149]: I0313 13:12:29.799153 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lk7vn\" (UniqueName: \"kubernetes.io/projected/7bae1fb0-1850-4689-b3ec-018c4a08917f-kube-api-access-lk7vn\") pod \"nova-cell0-68ee-account-create-update-57sr5\" (UID: \"7bae1fb0-1850-4689-b3ec-018c4a08917f\") " pod="openstack/nova-cell0-68ee-account-create-update-57sr5"
Mar 13 13:12:30.143568 master-0 kubenswrapper[28149]: I0313 13:12:30.142668 28149 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-68ee-account-create-update-57sr5"
Mar 13 13:12:30.143568 master-0 kubenswrapper[28149]: I0313 13:12:30.142923 28149 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-9f84-account-create-update-p9z6g"
Mar 13 13:12:31.318939 master-0 kubenswrapper[28149]: I0313 13:12:31.316634 28149 scope.go:117] "RemoveContainer" containerID="da4edd3d9461af138752ec2a89ea72ea8d44f9669fe84abb321900c8f9ecf741"
Mar 13 13:12:40.839845 master-0 kubenswrapper[28149]: I0313 13:12:40.839741 28149 pod_container_manager_linux.go:210] "Failed to delete cgroup paths" cgroupName=["kubepods","besteffort","podea4701c8-792f-4a27-948e-cc2d36ad5739"] err="unable to destroy cgroup paths for cgroup [kubepods besteffort podea4701c8-792f-4a27-948e-cc2d36ad5739] : Timed out while waiting for systemd to remove kubepods-besteffort-podea4701c8_792f_4a27_948e_cc2d36ad5739.slice"
Mar 13 13:12:49.705775 master-0 kubenswrapper[28149]: I0313 13:12:49.705479 28149 scope.go:117] "RemoveContainer" containerID="cb2318dc776758162d16efe58599c50c0569785482ddae99d3e78b7fa7cc0b56"
Mar 13 13:12:49.876024 master-0 kubenswrapper[28149]: I0313 13:12:49.875987 28149 scope.go:117] "RemoveContainer" containerID="9da47da0c1d68defafda498c9391044c1b2f679960a3bb4597aeef1033c4cce3"
Mar 13 13:12:49.876384 master-0 kubenswrapper[28149]: E0313 13:12:49.876352 28149 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9da47da0c1d68defafda498c9391044c1b2f679960a3bb4597aeef1033c4cce3\": container with ID starting with 9da47da0c1d68defafda498c9391044c1b2f679960a3bb4597aeef1033c4cce3 not found: ID does not exist" containerID="9da47da0c1d68defafda498c9391044c1b2f679960a3bb4597aeef1033c4cce3"
Mar 13 13:12:49.876446 master-0 kubenswrapper[28149]: I0313 13:12:49.876397 28149 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9da47da0c1d68defafda498c9391044c1b2f679960a3bb4597aeef1033c4cce3"} err="failed to get container status \"9da47da0c1d68defafda498c9391044c1b2f679960a3bb4597aeef1033c4cce3\": rpc error: code = NotFound desc = could not find container \"9da47da0c1d68defafda498c9391044c1b2f679960a3bb4597aeef1033c4cce3\": container with ID starting with 9da47da0c1d68defafda498c9391044c1b2f679960a3bb4597aeef1033c4cce3 not found: ID does not exist"
Mar 13 13:12:49.876446 master-0 kubenswrapper[28149]: I0313 13:12:49.876420 28149 scope.go:117] "RemoveContainer" containerID="da4edd3d9461af138752ec2a89ea72ea8d44f9669fe84abb321900c8f9ecf741"
Mar 13 13:12:49.877190 master-0 kubenswrapper[28149]: E0313 13:12:49.877155 28149 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"da4edd3d9461af138752ec2a89ea72ea8d44f9669fe84abb321900c8f9ecf741\": container with ID starting with da4edd3d9461af138752ec2a89ea72ea8d44f9669fe84abb321900c8f9ecf741 not found: ID does not exist" containerID="da4edd3d9461af138752ec2a89ea72ea8d44f9669fe84abb321900c8f9ecf741"
Mar 13 13:12:49.877263 master-0 kubenswrapper[28149]: I0313 13:12:49.877212 28149 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"da4edd3d9461af138752ec2a89ea72ea8d44f9669fe84abb321900c8f9ecf741"} err="failed to get container status \"da4edd3d9461af138752ec2a89ea72ea8d44f9669fe84abb321900c8f9ecf741\": rpc error: code = NotFound desc = could not find container \"da4edd3d9461af138752ec2a89ea72ea8d44f9669fe84abb321900c8f9ecf741\": container with ID starting with da4edd3d9461af138752ec2a89ea72ea8d44f9669fe84abb321900c8f9ecf741 not found: ID does not exist"
Mar 13 13:12:49.877263 master-0 kubenswrapper[28149]: I0313 13:12:49.877243 28149 scope.go:117] "RemoveContainer" containerID="cb2318dc776758162d16efe58599c50c0569785482ddae99d3e78b7fa7cc0b56"
Mar 13 13:12:49.878199 master-0 kubenswrapper[28149]: E0313 13:12:49.877892 28149 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cb2318dc776758162d16efe58599c50c0569785482ddae99d3e78b7fa7cc0b56\": container with ID starting with cb2318dc776758162d16efe58599c50c0569785482ddae99d3e78b7fa7cc0b56 not found: ID does not exist" containerID="cb2318dc776758162d16efe58599c50c0569785482ddae99d3e78b7fa7cc0b56"
Mar 13 13:12:49.878199 master-0 kubenswrapper[28149]: I0313 13:12:49.877928 28149 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cb2318dc776758162d16efe58599c50c0569785482ddae99d3e78b7fa7cc0b56"} err="failed to get container status \"cb2318dc776758162d16efe58599c50c0569785482ddae99d3e78b7fa7cc0b56\": rpc error: code = NotFound desc = could not find container \"cb2318dc776758162d16efe58599c50c0569785482ddae99d3e78b7fa7cc0b56\": container with ID starting with cb2318dc776758162d16efe58599c50c0569785482ddae99d3e78b7fa7cc0b56 not found: ID does not exist"
Mar 13 13:12:50.657238 master-0 kubenswrapper[28149]: I0313 13:12:50.656401 28149 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-6vxdg"]
Mar 13 13:12:51.202166 master-0 kubenswrapper[28149]: I0313 13:12:51.199357 28149 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-db-secret"
Mar 13 13:12:51.202166 master-0 kubenswrapper[28149]: I0313 13:12:51.199363 28149 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-68ee-account-create-update-57sr5"]
Mar 13 13:12:51.221726 master-0 kubenswrapper[28149]: I0313 13:12:51.221611 28149 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-9f84-account-create-update-p9z6g"]
Mar 13 13:12:51.229170 master-0 kubenswrapper[28149]: I0313 13:12:51.225923 28149 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-db-secret"
Mar 13 13:12:51.229170 master-0 kubenswrapper[28149]: I0313 13:12:51.227468 28149 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-db-secret"
Mar 13 13:12:51.259443 master-0 kubenswrapper[28149]: I0313 13:12:51.257414 28149 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-924b-account-create-update-nsjdj"]
Mar 13 13:12:51.387079 master-0 kubenswrapper[28149]: I0313 13:12:51.386951 28149 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-6g5cr"]
Mar 13 13:12:51.512462 master-0 kubenswrapper[28149]: W0313 13:12:51.512418 28149 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod42738462_ac51_4b8d_a688_c67389ad111b.slice/crio-49a319c9ac0543d0041817617c3bd7f9526168ab7377a7deb6c55e5e8b60d409 WatchSource:0}: Error finding container 49a319c9ac0543d0041817617c3bd7f9526168ab7377a7deb6c55e5e8b60d409: Status 404 returned error can't find the container with id 49a319c9ac0543d0041817617c3bd7f9526168ab7377a7deb6c55e5e8b60d409
Mar 13 13:12:51.517485 master-0 kubenswrapper[28149]: I0313 13:12:51.516727 28149 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-rvfxr"]
Mar 13 13:12:51.677581 master-0 kubenswrapper[28149]: I0313 13:12:51.675526 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-inspector-db-sync-4wfp7" event={"ID":"3181ad18-51bf-4620-b629-5e1a05bab0e0","Type":"ContainerStarted","Data":"0a4a493d6ec6276529bb3825f247a2893901dcf39a3d88cbfca8045cd5b54c37"}
Mar 13 13:12:51.688603 master-0 kubenswrapper[28149]: I0313 13:12:51.688557 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-6vxdg" event={"ID":"d44088e4-d747-49f2-a892-7b20ce2adac5","Type":"ContainerStarted","Data":"2bd35dc7b889356fc01e19d31ef94fa15402110e07b8ffac1e9707b348c5f85f"}
Mar 13 13:12:51.688762 master-0 kubenswrapper[28149]: I0313 13:12:51.688612 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-6vxdg" event={"ID":"d44088e4-d747-49f2-a892-7b20ce2adac5","Type":"ContainerStarted","Data":"ae1db571d1fa2be6ce819a04eb3f50a5ff26ff8b7fc5ae146c5f13ad4bce0fba"}
Mar 13 13:12:51.713641 master-0 kubenswrapper[28149]: I0313 13:12:51.712735 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-conductor-0" event={"ID":"8fdaa161-cf3d-465a-8e70-c2af73f96711","Type":"ContainerStarted","Data":"57af53b41f5bbaa53160b365a458c9b1dda59bdba11fa23d234cc3fdff261db2"}
Mar 13 13:12:51.716019 master-0 kubenswrapper[28149]: I0313 13:12:51.715829 28149 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ironic-inspector-db-sync-4wfp7" podStartSLOduration=3.593103839 podStartE2EDuration="33.715787325s" podCreationTimestamp="2026-03-13 13:12:18 +0000 UTC" firstStartedPulling="2026-03-13 13:12:19.716397348 +0000 UTC m=+1113.369862507" lastFinishedPulling="2026-03-13 13:12:49.839080834 +0000 UTC m=+1143.492545993" observedRunningTime="2026-03-13 13:12:51.704210773 +0000 UTC m=+1145.357675932" watchObservedRunningTime="2026-03-13 13:12:51.715787325 +0000 UTC m=+1145.369252484"
Mar 13 13:12:51.725165 master-0 kubenswrapper[28149]: I0313 13:12:51.725052 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-9f84-account-create-update-p9z6g" event={"ID":"84d745fd-adcc-4790-b117-fca53bf18ac1","Type":"ContainerStarted","Data":"4550dc72fb75a10d4e5b42a9c188e2632becad983fb357d8decfef903e1e28d3"}
Mar 13 13:12:51.725460 master-0 kubenswrapper[28149]: I0313 13:12:51.725438 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-9f84-account-create-update-p9z6g" event={"ID":"84d745fd-adcc-4790-b117-fca53bf18ac1","Type":"ContainerStarted","Data":"41135fb172404e42743d926d8d3376812d2ee394bee7966ca5dc9c2003ab52f3"}
Mar 13 13:12:51.733055 master-0 kubenswrapper[28149]: I0313 13:12:51.732973 28149 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-db-create-6vxdg" podStartSLOduration=23.732949157 podStartE2EDuration="23.732949157s" podCreationTimestamp="2026-03-13 13:12:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 13:12:51.729770632 +0000 UTC m=+1145.383235801" watchObservedRunningTime="2026-03-13 13:12:51.732949157 +0000 UTC m=+1145.386414316"
Mar 13 13:12:51.743289 master-0 kubenswrapper[28149]: I0313 13:12:51.743020 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-neutron-agent-8454cbf95d-4wvx9" event={"ID":"52f4f9dd-4956-4c8b-9a8d-c832a8049c3a","Type":"ContainerStarted","Data":"04eed29e085ea6602997698fbff3a7c4348b4a117cf749a23b0f26c2d6fcf385"}
Mar 13 13:12:51.743289 master-0 kubenswrapper[28149]: I0313 13:12:51.743072 28149 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ironic-neutron-agent-8454cbf95d-4wvx9"
Mar 13 13:12:51.753577 master-0 kubenswrapper[28149]: I0313 13:12:51.752589 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-68ee-account-create-update-57sr5" event={"ID":"7bae1fb0-1850-4689-b3ec-018c4a08917f","Type":"ContainerStarted","Data":"d3b26a37bc0ac0ce0b1d474c8eacebc656fa8cfcc3f7d3934abf0844749ed873"}
Mar 13 13:12:51.753577 master-0 kubenswrapper[28149]: I0313 13:12:51.752700 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-68ee-account-create-update-57sr5" event={"ID":"7bae1fb0-1850-4689-b3ec-018c4a08917f","Type":"ContainerStarted","Data":"b976cd5d6d5f9f6001bc30751b2d07e763bb12c67f75ba0cd912dda59b7e20b6"}
Mar 13 13:12:51.767955 master-0 kubenswrapper[28149]: I0313 13:12:51.766705 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-6g5cr" event={"ID":"1f6be9e1-66cf-49d1-8c97-37f09d594892","Type":"ContainerStarted","Data":"b7ca6cb41d4f6671d01b97866a62a7b029c573704c107291d436e2f749dabb5a"}
Mar 13 13:12:51.776564 master-0 kubenswrapper[28149]: I0313 13:12:51.776482 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-rvfxr" event={"ID":"42738462-ac51-4b8d-a688-c67389ad111b","Type":"ContainerStarted","Data":"49a319c9ac0543d0041817617c3bd7f9526168ab7377a7deb6c55e5e8b60d409"}
Mar 13 13:12:51.802999 master-0 kubenswrapper[28149]: I0313 13:12:51.799825 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"648b68a5-c28c-4322-893b-c1ac80172c6f","Type":"ContainerStarted","Data":"c620d3e891a24ca89e6241153d01f093a1971bb4cbfdc49b75559c00e882ecd0"}
Mar 13 13:12:51.810298 master-0 kubenswrapper[28149]: I0313 13:12:51.808355 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-924b-account-create-update-nsjdj" event={"ID":"6b553d13-8815-4ada-a5a6-014839b8be7a","Type":"ContainerStarted","Data":"cd02b335ceca5e70fa3c4e1907c28ae86505fb7d3ad310d0f7ee9750ff82d873"}
Mar 13 13:12:51.810298 master-0 kubenswrapper[28149]: I0313 13:12:51.808406 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-924b-account-create-update-nsjdj" event={"ID":"6b553d13-8815-4ada-a5a6-014839b8be7a","Type":"ContainerStarted","Data":"171b60c79dde4622f0f4ca011ba724afd00a08fffc82a6cf27d3b38372f771d4"}
Mar 13 13:12:51.828158 master-0 kubenswrapper[28149]: I0313 13:12:51.823094 28149 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-9f84-account-create-update-p9z6g" podStartSLOduration=22.823072234 podStartE2EDuration="22.823072234s" podCreationTimestamp="2026-03-13 13:12:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 13:12:51.800325212 +0000 UTC m=+1145.453790391" watchObservedRunningTime="2026-03-13 13:12:51.823072234 +0000 UTC m=+1145.476537393"
Mar 13 13:12:51.960583 master-0 kubenswrapper[28149]: I0313 13:12:51.960379 28149 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-68ee-account-create-update-57sr5" podStartSLOduration=22.960353582 podStartE2EDuration="22.960353582s" podCreationTimestamp="2026-03-13 13:12:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 13:12:51.875166038 +0000 UTC m=+1145.528631207" watchObservedRunningTime="2026-03-13 13:12:51.960353582 +0000 UTC m=+1145.613818731"
Mar 13 13:12:53.016182 master-0 kubenswrapper[28149]: I0313 13:12:53.010048 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-6g5cr" event={"ID":"1f6be9e1-66cf-49d1-8c97-37f09d594892","Type":"ContainerStarted","Data":"3ea31af70a7a7636a452d336f2c785177975d4f394c5456aee35f8f3dea45408"}
Mar 13 13:12:53.051404 master-0 kubenswrapper[28149]: I0313 13:12:53.051245 28149 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-db-create-6g5cr" podStartSLOduration=24.051220027 podStartE2EDuration="24.051220027s" podCreationTimestamp="2026-03-13 13:12:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 13:12:51.903761798 +0000 UTC m=+1145.557226957" watchObservedRunningTime="2026-03-13 13:12:53.051220027 +0000 UTC m=+1146.704685186"
Mar 13 13:12:53.062170 master-0 kubenswrapper[28149]: I0313 13:12:53.059764 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-rvfxr" event={"ID":"42738462-ac51-4b8d-a688-c67389ad111b","Type":"ContainerStarted","Data":"0c81cf7be233fd51f225ebedc05a993ab3860963209fc9fb6033b60cf3794928"}
Mar 13 13:12:53.168419 master-0 kubenswrapper[28149]: I0313 13:12:53.164570 28149 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstackclient" podStartSLOduration=6.505877576 podStartE2EDuration="45.164547778s" podCreationTimestamp="2026-03-13 13:12:08 +0000 UTC" firstStartedPulling="2026-03-13 13:12:11.250896141 +0000 UTC m=+1104.904361300" lastFinishedPulling="2026-03-13 13:12:49.909566343 +0000 UTC m=+1143.563031502" observedRunningTime="2026-03-13 13:12:51.939374837 +0000 UTC m=+1145.592840006" watchObservedRunningTime="2026-03-13 13:12:53.164547778 +0000 UTC m=+1146.818012947"
Mar 13 13:12:53.171233 master-0 kubenswrapper[28149]: I0313 13:12:53.171099 28149 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-924b-account-create-update-nsjdj" podStartSLOduration=25.171076255 podStartE2EDuration="25.171076255s" podCreationTimestamp="2026-03-13 13:12:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 13:12:51.96954006 +0000 UTC m=+1145.623005219" watchObservedRunningTime="2026-03-13 13:12:53.171076255 +0000 UTC m=+1146.824541404"
Mar 13 13:12:53.200189 master-0 kubenswrapper[28149]: I0313 13:12:53.191819 28149 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ironic-neutron-agent-8454cbf95d-4wvx9"
Mar 13 13:12:53.231162 master-0 kubenswrapper[28149]: I0313 13:12:53.230565 28149 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-db-create-rvfxr" podStartSLOduration=25.230537477 podStartE2EDuration="25.230537477s" podCreationTimestamp="2026-03-13 13:12:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 13:12:53.12187393 +0000 UTC m=+1146.775339109" watchObservedRunningTime="2026-03-13 13:12:53.230537477 +0000 UTC m=+1146.884002636"
Mar 13 13:12:55.089810 master-0 kubenswrapper[28149]: I0313 13:12:55.089759 28149 generic.go:334] "Generic (PLEG): container finished" podID="d44088e4-d747-49f2-a892-7b20ce2adac5" containerID="2bd35dc7b889356fc01e19d31ef94fa15402110e07b8ffac1e9707b348c5f85f" exitCode=0
Mar 13 13:12:55.090447 master-0 kubenswrapper[28149]: I0313 13:12:55.089820 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-6vxdg" event={"ID":"d44088e4-d747-49f2-a892-7b20ce2adac5","Type":"ContainerDied","Data":"2bd35dc7b889356fc01e19d31ef94fa15402110e07b8ffac1e9707b348c5f85f"}
Mar 13 13:12:56.106028 master-0 kubenswrapper[28149]: I0313 13:12:56.105903 28149 generic.go:334] "Generic (PLEG): container finished" podID="42738462-ac51-4b8d-a688-c67389ad111b" containerID="0c81cf7be233fd51f225ebedc05a993ab3860963209fc9fb6033b60cf3794928" exitCode=0
Mar 13 13:12:56.107567 master-0 kubenswrapper[28149]: I0313 13:12:56.105971 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-rvfxr" event={"ID":"42738462-ac51-4b8d-a688-c67389ad111b","Type":"ContainerDied","Data":"0c81cf7be233fd51f225ebedc05a993ab3860963209fc9fb6033b60cf3794928"}
Mar 13 13:12:56.110103 master-0 kubenswrapper[28149]: I0313 13:12:56.110046 28149 generic.go:334] "Generic (PLEG): container finished" podID="84d745fd-adcc-4790-b117-fca53bf18ac1" containerID="4550dc72fb75a10d4e5b42a9c188e2632becad983fb357d8decfef903e1e28d3" exitCode=0
Mar 13 13:12:56.110274 master-0 kubenswrapper[28149]: I0313 13:12:56.110153 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-9f84-account-create-update-p9z6g" event={"ID":"84d745fd-adcc-4790-b117-fca53bf18ac1","Type":"ContainerDied","Data":"4550dc72fb75a10d4e5b42a9c188e2632becad983fb357d8decfef903e1e28d3"}
Mar 13 13:12:56.112804 master-0 kubenswrapper[28149]: I0313 13:12:56.112772 28149 generic.go:334] "Generic (PLEG): container finished" podID="6b553d13-8815-4ada-a5a6-014839b8be7a" containerID="cd02b335ceca5e70fa3c4e1907c28ae86505fb7d3ad310d0f7ee9750ff82d873" exitCode=0
Mar 13 13:12:56.112957 master-0 kubenswrapper[28149]: I0313 13:12:56.112846 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-924b-account-create-update-nsjdj" event={"ID":"6b553d13-8815-4ada-a5a6-014839b8be7a","Type":"ContainerDied","Data":"cd02b335ceca5e70fa3c4e1907c28ae86505fb7d3ad310d0f7ee9750ff82d873"}
Mar 13 13:12:56.144167 master-0 kubenswrapper[28149]: I0313 13:12:56.144059 28149 generic.go:334] "Generic (PLEG): container finished" podID="7bae1fb0-1850-4689-b3ec-018c4a08917f" containerID="d3b26a37bc0ac0ce0b1d474c8eacebc656fa8cfcc3f7d3934abf0844749ed873" exitCode=0
Mar 13 13:12:56.144434 master-0 kubenswrapper[28149]: I0313 13:12:56.144207 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-68ee-account-create-update-57sr5" event={"ID":"7bae1fb0-1850-4689-b3ec-018c4a08917f","Type":"ContainerDied","Data":"d3b26a37bc0ac0ce0b1d474c8eacebc656fa8cfcc3f7d3934abf0844749ed873"}
Mar 13 13:12:56.147000 master-0 kubenswrapper[28149]: I0313 13:12:56.146952 28149 generic.go:334] "Generic (PLEG): container finished" podID="1f6be9e1-66cf-49d1-8c97-37f09d594892" containerID="3ea31af70a7a7636a452d336f2c785177975d4f394c5456aee35f8f3dea45408" exitCode=0
Mar 13 13:12:56.147233 master-0 kubenswrapper[28149]: I0313 13:12:56.147202 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-6g5cr" event={"ID":"1f6be9e1-66cf-49d1-8c97-37f09d594892","Type":"ContainerDied","Data":"3ea31af70a7a7636a452d336f2c785177975d4f394c5456aee35f8f3dea45408"}
Mar 13 13:12:57.170074 master-0 kubenswrapper[28149]: I0313 13:12:57.169994 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-6vxdg" event={"ID":"d44088e4-d747-49f2-a892-7b20ce2adac5","Type":"ContainerDied","Data":"ae1db571d1fa2be6ce819a04eb3f50a5ff26ff8b7fc5ae146c5f13ad4bce0fba"}
Mar 13 13:12:57.170689 master-0 kubenswrapper[28149]: I0313 13:12:57.170090 28149 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ae1db571d1fa2be6ce819a04eb3f50a5ff26ff8b7fc5ae146c5f13ad4bce0fba"
Mar 13 13:12:57.254977 master-0 kubenswrapper[28149]: I0313 13:12:57.254905 28149 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-6vxdg"
Mar 13 13:12:57.408900 master-0 kubenswrapper[28149]: I0313 13:12:57.407172 28149 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d44088e4-d747-49f2-a892-7b20ce2adac5-operator-scripts\") pod \"d44088e4-d747-49f2-a892-7b20ce2adac5\" (UID: \"d44088e4-d747-49f2-a892-7b20ce2adac5\") "
Mar 13 13:12:57.408900 master-0 kubenswrapper[28149]: I0313 13:12:57.407327 28149 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zlkr9\" (UniqueName: \"kubernetes.io/projected/d44088e4-d747-49f2-a892-7b20ce2adac5-kube-api-access-zlkr9\") pod \"d44088e4-d747-49f2-a892-7b20ce2adac5\" (UID: \"d44088e4-d747-49f2-a892-7b20ce2adac5\") "
Mar 13 13:12:57.409634 master-0 kubenswrapper[28149]: I0313 13:12:57.409606 28149 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d44088e4-d747-49f2-a892-7b20ce2adac5-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "d44088e4-d747-49f2-a892-7b20ce2adac5" (UID: "d44088e4-d747-49f2-a892-7b20ce2adac5"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 13 13:12:57.412960 master-0 kubenswrapper[28149]: I0313 13:12:57.412922 28149 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d44088e4-d747-49f2-a892-7b20ce2adac5-kube-api-access-zlkr9" (OuterVolumeSpecName: "kube-api-access-zlkr9") pod "d44088e4-d747-49f2-a892-7b20ce2adac5" (UID: "d44088e4-d747-49f2-a892-7b20ce2adac5"). InnerVolumeSpecName "kube-api-access-zlkr9". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 13 13:12:57.515039 master-0 kubenswrapper[28149]: I0313 13:12:57.513099 28149 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d44088e4-d747-49f2-a892-7b20ce2adac5-operator-scripts\") on node \"master-0\" DevicePath \"\""
Mar 13 13:12:57.515039 master-0 kubenswrapper[28149]: I0313 13:12:57.513149 28149 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zlkr9\" (UniqueName: \"kubernetes.io/projected/d44088e4-d747-49f2-a892-7b20ce2adac5-kube-api-access-zlkr9\") on node \"master-0\" DevicePath \"\""
Mar 13 13:12:57.764585 master-0 kubenswrapper[28149]: I0313 13:12:57.763999 28149 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-rvfxr"
Mar 13 13:12:57.939610 master-0 kubenswrapper[28149]: I0313 13:12:57.939478 28149 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bgstb\" (UniqueName: \"kubernetes.io/projected/42738462-ac51-4b8d-a688-c67389ad111b-kube-api-access-bgstb\") pod \"42738462-ac51-4b8d-a688-c67389ad111b\" (UID: \"42738462-ac51-4b8d-a688-c67389ad111b\") "
Mar 13 13:12:57.939866 master-0 kubenswrapper[28149]: I0313 13:12:57.939705 28149 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/42738462-ac51-4b8d-a688-c67389ad111b-operator-scripts\") pod \"42738462-ac51-4b8d-a688-c67389ad111b\" (UID: \"42738462-ac51-4b8d-a688-c67389ad111b\") "
Mar 13 13:12:57.944333 master-0 kubenswrapper[28149]: I0313 13:12:57.941033 28149 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/42738462-ac51-4b8d-a688-c67389ad111b-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "42738462-ac51-4b8d-a688-c67389ad111b" (UID: "42738462-ac51-4b8d-a688-c67389ad111b"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 13 13:12:57.947476 master-0 kubenswrapper[28149]: I0313 13:12:57.947404 28149 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/42738462-ac51-4b8d-a688-c67389ad111b-kube-api-access-bgstb" (OuterVolumeSpecName: "kube-api-access-bgstb") pod "42738462-ac51-4b8d-a688-c67389ad111b" (UID: "42738462-ac51-4b8d-a688-c67389ad111b"). InnerVolumeSpecName "kube-api-access-bgstb". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 13 13:12:58.037579 master-0 kubenswrapper[28149]: I0313 13:12:58.037370 28149 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-9f84-account-create-update-p9z6g"
Mar 13 13:12:58.042750 master-0 kubenswrapper[28149]: I0313 13:12:58.042698 28149 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bgstb\" (UniqueName: \"kubernetes.io/projected/42738462-ac51-4b8d-a688-c67389ad111b-kube-api-access-bgstb\") on node \"master-0\" DevicePath \"\""
Mar 13 13:12:58.043018 master-0 kubenswrapper[28149]: I0313 13:12:58.043007 28149 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/42738462-ac51-4b8d-a688-c67389ad111b-operator-scripts\") on node \"master-0\" DevicePath \"\""
Mar 13 13:12:58.048899 master-0 kubenswrapper[28149]: I0313 13:12:58.048855 28149 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-68ee-account-create-update-57sr5"
Mar 13 13:12:58.075331 master-0 kubenswrapper[28149]: I0313 13:12:58.069777 28149 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-924b-account-create-update-nsjdj" Mar 13 13:12:58.175181 master-0 kubenswrapper[28149]: I0313 13:12:58.154748 28149 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hbs6l\" (UniqueName: \"kubernetes.io/projected/84d745fd-adcc-4790-b117-fca53bf18ac1-kube-api-access-hbs6l\") pod \"84d745fd-adcc-4790-b117-fca53bf18ac1\" (UID: \"84d745fd-adcc-4790-b117-fca53bf18ac1\") " Mar 13 13:12:58.175181 master-0 kubenswrapper[28149]: I0313 13:12:58.154941 28149 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lk7vn\" (UniqueName: \"kubernetes.io/projected/7bae1fb0-1850-4689-b3ec-018c4a08917f-kube-api-access-lk7vn\") pod \"7bae1fb0-1850-4689-b3ec-018c4a08917f\" (UID: \"7bae1fb0-1850-4689-b3ec-018c4a08917f\") " Mar 13 13:12:58.175181 master-0 kubenswrapper[28149]: I0313 13:12:58.155049 28149 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/84d745fd-adcc-4790-b117-fca53bf18ac1-operator-scripts\") pod \"84d745fd-adcc-4790-b117-fca53bf18ac1\" (UID: \"84d745fd-adcc-4790-b117-fca53bf18ac1\") " Mar 13 13:12:58.175181 master-0 kubenswrapper[28149]: I0313 13:12:58.155108 28149 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7bae1fb0-1850-4689-b3ec-018c4a08917f-operator-scripts\") pod \"7bae1fb0-1850-4689-b3ec-018c4a08917f\" (UID: \"7bae1fb0-1850-4689-b3ec-018c4a08917f\") " Mar 13 13:12:58.175181 master-0 kubenswrapper[28149]: I0313 13:12:58.170308 28149 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bae1fb0-1850-4689-b3ec-018c4a08917f-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "7bae1fb0-1850-4689-b3ec-018c4a08917f" (UID: "7bae1fb0-1850-4689-b3ec-018c4a08917f"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 13 13:12:58.175181 master-0 kubenswrapper[28149]: I0313 13:12:58.170899 28149 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/84d745fd-adcc-4790-b117-fca53bf18ac1-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "84d745fd-adcc-4790-b117-fca53bf18ac1" (UID: "84d745fd-adcc-4790-b117-fca53bf18ac1"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 13 13:12:58.202281 master-0 kubenswrapper[28149]: I0313 13:12:58.201530 28149 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/84d745fd-adcc-4790-b117-fca53bf18ac1-kube-api-access-hbs6l" (OuterVolumeSpecName: "kube-api-access-hbs6l") pod "84d745fd-adcc-4790-b117-fca53bf18ac1" (UID: "84d745fd-adcc-4790-b117-fca53bf18ac1"). InnerVolumeSpecName "kube-api-access-hbs6l". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 13:12:58.202281 master-0 kubenswrapper[28149]: I0313 13:12:58.201619 28149 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7bae1fb0-1850-4689-b3ec-018c4a08917f-kube-api-access-lk7vn" (OuterVolumeSpecName: "kube-api-access-lk7vn") pod "7bae1fb0-1850-4689-b3ec-018c4a08917f" (UID: "7bae1fb0-1850-4689-b3ec-018c4a08917f"). InnerVolumeSpecName "kube-api-access-lk7vn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 13:12:58.288193 master-0 kubenswrapper[28149]: I0313 13:12:58.287268 28149 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6b553d13-8815-4ada-a5a6-014839b8be7a-operator-scripts\") pod \"6b553d13-8815-4ada-a5a6-014839b8be7a\" (UID: \"6b553d13-8815-4ada-a5a6-014839b8be7a\") " Mar 13 13:12:58.288193 master-0 kubenswrapper[28149]: I0313 13:12:58.287375 28149 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2h9vb\" (UniqueName: \"kubernetes.io/projected/6b553d13-8815-4ada-a5a6-014839b8be7a-kube-api-access-2h9vb\") pod \"6b553d13-8815-4ada-a5a6-014839b8be7a\" (UID: \"6b553d13-8815-4ada-a5a6-014839b8be7a\") " Mar 13 13:12:58.288542 master-0 kubenswrapper[28149]: I0313 13:12:58.288235 28149 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/84d745fd-adcc-4790-b117-fca53bf18ac1-operator-scripts\") on node \"master-0\" DevicePath \"\"" Mar 13 13:12:58.288542 master-0 kubenswrapper[28149]: I0313 13:12:58.288258 28149 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7bae1fb0-1850-4689-b3ec-018c4a08917f-operator-scripts\") on node \"master-0\" DevicePath \"\"" Mar 13 13:12:58.288542 master-0 kubenswrapper[28149]: I0313 13:12:58.288273 28149 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hbs6l\" (UniqueName: \"kubernetes.io/projected/84d745fd-adcc-4790-b117-fca53bf18ac1-kube-api-access-hbs6l\") on node \"master-0\" DevicePath \"\"" Mar 13 13:12:58.288542 master-0 kubenswrapper[28149]: I0313 13:12:58.288288 28149 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lk7vn\" (UniqueName: \"kubernetes.io/projected/7bae1fb0-1850-4689-b3ec-018c4a08917f-kube-api-access-lk7vn\") on node \"master-0\" DevicePath \"\"" Mar 
13 13:12:58.321199 master-0 kubenswrapper[28149]: I0313 13:12:58.298234 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-9f84-account-create-update-p9z6g" event={"ID":"84d745fd-adcc-4790-b117-fca53bf18ac1","Type":"ContainerDied","Data":"41135fb172404e42743d926d8d3376812d2ee394bee7966ca5dc9c2003ab52f3"} Mar 13 13:12:58.321199 master-0 kubenswrapper[28149]: I0313 13:12:58.298282 28149 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="41135fb172404e42743d926d8d3376812d2ee394bee7966ca5dc9c2003ab52f3" Mar 13 13:12:58.321199 master-0 kubenswrapper[28149]: I0313 13:12:58.298355 28149 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-9f84-account-create-update-p9z6g" Mar 13 13:12:58.321199 master-0 kubenswrapper[28149]: I0313 13:12:58.304509 28149 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6b553d13-8815-4ada-a5a6-014839b8be7a-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "6b553d13-8815-4ada-a5a6-014839b8be7a" (UID: "6b553d13-8815-4ada-a5a6-014839b8be7a"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 13 13:12:58.371184 master-0 kubenswrapper[28149]: I0313 13:12:58.354541 28149 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6b553d13-8815-4ada-a5a6-014839b8be7a-kube-api-access-2h9vb" (OuterVolumeSpecName: "kube-api-access-2h9vb") pod "6b553d13-8815-4ada-a5a6-014839b8be7a" (UID: "6b553d13-8815-4ada-a5a6-014839b8be7a"). InnerVolumeSpecName "kube-api-access-2h9vb". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 13:12:58.391283 master-0 kubenswrapper[28149]: I0313 13:12:58.375890 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-924b-account-create-update-nsjdj" event={"ID":"6b553d13-8815-4ada-a5a6-014839b8be7a","Type":"ContainerDied","Data":"171b60c79dde4622f0f4ca011ba724afd00a08fffc82a6cf27d3b38372f771d4"} Mar 13 13:12:58.391283 master-0 kubenswrapper[28149]: I0313 13:12:58.375940 28149 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="171b60c79dde4622f0f4ca011ba724afd00a08fffc82a6cf27d3b38372f771d4" Mar 13 13:12:58.391283 master-0 kubenswrapper[28149]: I0313 13:12:58.376038 28149 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-924b-account-create-update-nsjdj" Mar 13 13:12:58.391957 master-0 kubenswrapper[28149]: I0313 13:12:58.391675 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-68ee-account-create-update-57sr5" event={"ID":"7bae1fb0-1850-4689-b3ec-018c4a08917f","Type":"ContainerDied","Data":"b976cd5d6d5f9f6001bc30751b2d07e763bb12c67f75ba0cd912dda59b7e20b6"} Mar 13 13:12:58.391957 master-0 kubenswrapper[28149]: I0313 13:12:58.391732 28149 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b976cd5d6d5f9f6001bc30751b2d07e763bb12c67f75ba0cd912dda59b7e20b6" Mar 13 13:12:58.393727 master-0 kubenswrapper[28149]: I0313 13:12:58.393621 28149 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6b553d13-8815-4ada-a5a6-014839b8be7a-operator-scripts\") on node \"master-0\" DevicePath \"\"" Mar 13 13:12:58.393727 master-0 kubenswrapper[28149]: I0313 13:12:58.393664 28149 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2h9vb\" (UniqueName: \"kubernetes.io/projected/6b553d13-8815-4ada-a5a6-014839b8be7a-kube-api-access-2h9vb\") on node \"master-0\" 
DevicePath \"\"" Mar 13 13:12:58.394885 master-0 kubenswrapper[28149]: I0313 13:12:58.394736 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-6g5cr" event={"ID":"1f6be9e1-66cf-49d1-8c97-37f09d594892","Type":"ContainerDied","Data":"b7ca6cb41d4f6671d01b97866a62a7b029c573704c107291d436e2f749dabb5a"} Mar 13 13:12:58.394885 master-0 kubenswrapper[28149]: I0313 13:12:58.394784 28149 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b7ca6cb41d4f6671d01b97866a62a7b029c573704c107291d436e2f749dabb5a" Mar 13 13:12:58.413196 master-0 kubenswrapper[28149]: I0313 13:12:58.396691 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-rvfxr" event={"ID":"42738462-ac51-4b8d-a688-c67389ad111b","Type":"ContainerDied","Data":"49a319c9ac0543d0041817617c3bd7f9526168ab7377a7deb6c55e5e8b60d409"} Mar 13 13:12:58.413196 master-0 kubenswrapper[28149]: I0313 13:12:58.396744 28149 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="49a319c9ac0543d0041817617c3bd7f9526168ab7377a7deb6c55e5e8b60d409" Mar 13 13:12:58.413196 master-0 kubenswrapper[28149]: I0313 13:12:58.408164 28149 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-6vxdg" Mar 13 13:12:58.413196 master-0 kubenswrapper[28149]: I0313 13:12:58.408254 28149 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-68ee-account-create-update-57sr5" Mar 13 13:12:58.413196 master-0 kubenswrapper[28149]: I0313 13:12:58.408316 28149 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-rvfxr" Mar 13 13:12:58.455373 master-0 kubenswrapper[28149]: I0313 13:12:58.440539 28149 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-db-create-6g5cr" Mar 13 13:12:58.625282 master-0 kubenswrapper[28149]: I0313 13:12:58.616826 28149 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1f6be9e1-66cf-49d1-8c97-37f09d594892-operator-scripts\") pod \"1f6be9e1-66cf-49d1-8c97-37f09d594892\" (UID: \"1f6be9e1-66cf-49d1-8c97-37f09d594892\") " Mar 13 13:12:58.625282 master-0 kubenswrapper[28149]: I0313 13:12:58.617298 28149 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4nwwr\" (UniqueName: \"kubernetes.io/projected/1f6be9e1-66cf-49d1-8c97-37f09d594892-kube-api-access-4nwwr\") pod \"1f6be9e1-66cf-49d1-8c97-37f09d594892\" (UID: \"1f6be9e1-66cf-49d1-8c97-37f09d594892\") " Mar 13 13:12:58.645166 master-0 kubenswrapper[28149]: I0313 13:12:58.641196 28149 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1f6be9e1-66cf-49d1-8c97-37f09d594892-kube-api-access-4nwwr" (OuterVolumeSpecName: "kube-api-access-4nwwr") pod "1f6be9e1-66cf-49d1-8c97-37f09d594892" (UID: "1f6be9e1-66cf-49d1-8c97-37f09d594892"). InnerVolumeSpecName "kube-api-access-4nwwr". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 13:12:58.645770 master-0 kubenswrapper[28149]: I0313 13:12:58.645698 28149 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1f6be9e1-66cf-49d1-8c97-37f09d594892-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "1f6be9e1-66cf-49d1-8c97-37f09d594892" (UID: "1f6be9e1-66cf-49d1-8c97-37f09d594892"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 13 13:12:58.721015 master-0 kubenswrapper[28149]: I0313 13:12:58.720826 28149 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4nwwr\" (UniqueName: \"kubernetes.io/projected/1f6be9e1-66cf-49d1-8c97-37f09d594892-kube-api-access-4nwwr\") on node \"master-0\" DevicePath \"\"" Mar 13 13:12:58.721015 master-0 kubenswrapper[28149]: I0313 13:12:58.720923 28149 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1f6be9e1-66cf-49d1-8c97-37f09d594892-operator-scripts\") on node \"master-0\" DevicePath \"\"" Mar 13 13:12:59.408897 master-0 kubenswrapper[28149]: I0313 13:12:59.408826 28149 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-6g5cr" Mar 13 13:12:59.442815 master-0 kubenswrapper[28149]: I0313 13:12:59.442754 28149 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-e6fbd-default-external-api-0"] Mar 13 13:12:59.443212 master-0 kubenswrapper[28149]: I0313 13:12:59.443088 28149 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-e6fbd-default-external-api-0" podUID="f9e03bf1-b908-4148-8838-f54eaa369e6a" containerName="glance-log" containerID="cri-o://b735b158ae0f1f81c167b2e3ec4bb07208ae9e3e1a523919c59da19d0ac89b38" gracePeriod=30 Mar 13 13:12:59.443793 master-0 kubenswrapper[28149]: I0313 13:12:59.443175 28149 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-e6fbd-default-external-api-0" podUID="f9e03bf1-b908-4148-8838-f54eaa369e6a" containerName="glance-httpd" containerID="cri-o://fa6e60ae3af7814543f74294facd270e11b151ddfa03c5dc99c77d7ed6414b4a" gracePeriod=30 Mar 13 13:13:00.273487 master-0 kubenswrapper[28149]: I0313 13:13:00.273398 28149 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-e6fbd-default-internal-api-0"] Mar 13 
13:13:00.273811 master-0 kubenswrapper[28149]: I0313 13:13:00.273684 28149 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-e6fbd-default-internal-api-0" podUID="611eba2b-39d1-43b8-bdce-7b7c5436180c" containerName="glance-log" containerID="cri-o://3605fb008c93b616a59c49c14ce99dd33736be58cdf499b88eb71ef7ba777d9a" gracePeriod=30 Mar 13 13:13:00.274194 master-0 kubenswrapper[28149]: I0313 13:13:00.274069 28149 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-e6fbd-default-internal-api-0" podUID="611eba2b-39d1-43b8-bdce-7b7c5436180c" containerName="glance-httpd" containerID="cri-o://7705e241a1083d4cd9858d6d7c541bec846e153fa8108212316ee24486559c75" gracePeriod=30 Mar 13 13:13:00.424115 master-0 kubenswrapper[28149]: I0313 13:13:00.424061 28149 generic.go:334] "Generic (PLEG): container finished" podID="3181ad18-51bf-4620-b629-5e1a05bab0e0" containerID="0a4a493d6ec6276529bb3825f247a2893901dcf39a3d88cbfca8045cd5b54c37" exitCode=0 Mar 13 13:13:00.424948 master-0 kubenswrapper[28149]: I0313 13:13:00.424206 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-inspector-db-sync-4wfp7" event={"ID":"3181ad18-51bf-4620-b629-5e1a05bab0e0","Type":"ContainerDied","Data":"0a4a493d6ec6276529bb3825f247a2893901dcf39a3d88cbfca8045cd5b54c37"} Mar 13 13:13:00.427640 master-0 kubenswrapper[28149]: I0313 13:13:00.427593 28149 generic.go:334] "Generic (PLEG): container finished" podID="f9e03bf1-b908-4148-8838-f54eaa369e6a" containerID="b735b158ae0f1f81c167b2e3ec4bb07208ae9e3e1a523919c59da19d0ac89b38" exitCode=143 Mar 13 13:13:00.427773 master-0 kubenswrapper[28149]: I0313 13:13:00.427675 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-e6fbd-default-external-api-0" event={"ID":"f9e03bf1-b908-4148-8838-f54eaa369e6a","Type":"ContainerDied","Data":"b735b158ae0f1f81c167b2e3ec4bb07208ae9e3e1a523919c59da19d0ac89b38"} Mar 13 13:13:00.430739 master-0 
kubenswrapper[28149]: I0313 13:13:00.430691 28149 generic.go:334] "Generic (PLEG): container finished" podID="8fdaa161-cf3d-465a-8e70-c2af73f96711" containerID="57af53b41f5bbaa53160b365a458c9b1dda59bdba11fa23d234cc3fdff261db2" exitCode=0 Mar 13 13:13:00.430823 master-0 kubenswrapper[28149]: I0313 13:13:00.430745 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-conductor-0" event={"ID":"8fdaa161-cf3d-465a-8e70-c2af73f96711","Type":"ContainerDied","Data":"57af53b41f5bbaa53160b365a458c9b1dda59bdba11fa23d234cc3fdff261db2"} Mar 13 13:13:00.434372 master-0 kubenswrapper[28149]: I0313 13:13:00.434332 28149 generic.go:334] "Generic (PLEG): container finished" podID="611eba2b-39d1-43b8-bdce-7b7c5436180c" containerID="3605fb008c93b616a59c49c14ce99dd33736be58cdf499b88eb71ef7ba777d9a" exitCode=143 Mar 13 13:13:00.434460 master-0 kubenswrapper[28149]: I0313 13:13:00.434377 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-e6fbd-default-internal-api-0" event={"ID":"611eba2b-39d1-43b8-bdce-7b7c5436180c","Type":"ContainerDied","Data":"3605fb008c93b616a59c49c14ce99dd33736be58cdf499b88eb71ef7ba777d9a"} Mar 13 13:13:01.305271 master-0 kubenswrapper[28149]: I0313 13:13:01.305206 28149 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-db-sync-xbkk4"] Mar 13 13:13:01.307989 master-0 kubenswrapper[28149]: E0313 13:13:01.305823 28149 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d44088e4-d747-49f2-a892-7b20ce2adac5" containerName="mariadb-database-create" Mar 13 13:13:01.307989 master-0 kubenswrapper[28149]: I0313 13:13:01.305869 28149 state_mem.go:107] "Deleted CPUSet assignment" podUID="d44088e4-d747-49f2-a892-7b20ce2adac5" containerName="mariadb-database-create" Mar 13 13:13:01.307989 master-0 kubenswrapper[28149]: E0313 13:13:01.305914 28149 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="42738462-ac51-4b8d-a688-c67389ad111b" 
containerName="mariadb-database-create" Mar 13 13:13:01.307989 master-0 kubenswrapper[28149]: I0313 13:13:01.305921 28149 state_mem.go:107] "Deleted CPUSet assignment" podUID="42738462-ac51-4b8d-a688-c67389ad111b" containerName="mariadb-database-create" Mar 13 13:13:01.307989 master-0 kubenswrapper[28149]: E0313 13:13:01.305942 28149 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7bae1fb0-1850-4689-b3ec-018c4a08917f" containerName="mariadb-account-create-update" Mar 13 13:13:01.307989 master-0 kubenswrapper[28149]: I0313 13:13:01.305948 28149 state_mem.go:107] "Deleted CPUSet assignment" podUID="7bae1fb0-1850-4689-b3ec-018c4a08917f" containerName="mariadb-account-create-update" Mar 13 13:13:01.307989 master-0 kubenswrapper[28149]: E0313 13:13:01.305968 28149 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1f6be9e1-66cf-49d1-8c97-37f09d594892" containerName="mariadb-database-create" Mar 13 13:13:01.307989 master-0 kubenswrapper[28149]: I0313 13:13:01.305975 28149 state_mem.go:107] "Deleted CPUSet assignment" podUID="1f6be9e1-66cf-49d1-8c97-37f09d594892" containerName="mariadb-database-create" Mar 13 13:13:01.307989 master-0 kubenswrapper[28149]: E0313 13:13:01.305982 28149 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6b553d13-8815-4ada-a5a6-014839b8be7a" containerName="mariadb-account-create-update" Mar 13 13:13:01.307989 master-0 kubenswrapper[28149]: I0313 13:13:01.305987 28149 state_mem.go:107] "Deleted CPUSet assignment" podUID="6b553d13-8815-4ada-a5a6-014839b8be7a" containerName="mariadb-account-create-update" Mar 13 13:13:01.307989 master-0 kubenswrapper[28149]: E0313 13:13:01.306012 28149 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="84d745fd-adcc-4790-b117-fca53bf18ac1" containerName="mariadb-account-create-update" Mar 13 13:13:01.307989 master-0 kubenswrapper[28149]: I0313 13:13:01.306019 28149 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="84d745fd-adcc-4790-b117-fca53bf18ac1" containerName="mariadb-account-create-update" Mar 13 13:13:01.307989 master-0 kubenswrapper[28149]: I0313 13:13:01.306321 28149 memory_manager.go:354] "RemoveStaleState removing state" podUID="d44088e4-d747-49f2-a892-7b20ce2adac5" containerName="mariadb-database-create" Mar 13 13:13:01.307989 master-0 kubenswrapper[28149]: I0313 13:13:01.306338 28149 memory_manager.go:354] "RemoveStaleState removing state" podUID="7bae1fb0-1850-4689-b3ec-018c4a08917f" containerName="mariadb-account-create-update" Mar 13 13:13:01.307989 master-0 kubenswrapper[28149]: I0313 13:13:01.306351 28149 memory_manager.go:354] "RemoveStaleState removing state" podUID="84d745fd-adcc-4790-b117-fca53bf18ac1" containerName="mariadb-account-create-update" Mar 13 13:13:01.307989 master-0 kubenswrapper[28149]: I0313 13:13:01.306365 28149 memory_manager.go:354] "RemoveStaleState removing state" podUID="42738462-ac51-4b8d-a688-c67389ad111b" containerName="mariadb-database-create" Mar 13 13:13:01.307989 master-0 kubenswrapper[28149]: I0313 13:13:01.306374 28149 memory_manager.go:354] "RemoveStaleState removing state" podUID="1f6be9e1-66cf-49d1-8c97-37f09d594892" containerName="mariadb-database-create" Mar 13 13:13:01.307989 master-0 kubenswrapper[28149]: I0313 13:13:01.306409 28149 memory_manager.go:354] "RemoveStaleState removing state" podUID="6b553d13-8815-4ada-a5a6-014839b8be7a" containerName="mariadb-account-create-update" Mar 13 13:13:01.307989 master-0 kubenswrapper[28149]: I0313 13:13:01.307188 28149 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-xbkk4" Mar 13 13:13:01.310541 master-0 kubenswrapper[28149]: I0313 13:13:01.310493 28149 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Mar 13 13:13:01.310848 master-0 kubenswrapper[28149]: I0313 13:13:01.310826 28149 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-scripts" Mar 13 13:13:01.341164 master-0 kubenswrapper[28149]: I0313 13:13:01.341092 28149 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-xbkk4"] Mar 13 13:13:01.390235 master-0 kubenswrapper[28149]: I0313 13:13:01.389947 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/05ef3745-1126-43ed-bc8b-f7be6477ff30-scripts\") pod \"nova-cell0-conductor-db-sync-xbkk4\" (UID: \"05ef3745-1126-43ed-bc8b-f7be6477ff30\") " pod="openstack/nova-cell0-conductor-db-sync-xbkk4" Mar 13 13:13:01.390595 master-0 kubenswrapper[28149]: I0313 13:13:01.390444 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/05ef3745-1126-43ed-bc8b-f7be6477ff30-config-data\") pod \"nova-cell0-conductor-db-sync-xbkk4\" (UID: \"05ef3745-1126-43ed-bc8b-f7be6477ff30\") " pod="openstack/nova-cell0-conductor-db-sync-xbkk4" Mar 13 13:13:01.390908 master-0 kubenswrapper[28149]: I0313 13:13:01.390854 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b5b7z\" (UniqueName: \"kubernetes.io/projected/05ef3745-1126-43ed-bc8b-f7be6477ff30-kube-api-access-b5b7z\") pod \"nova-cell0-conductor-db-sync-xbkk4\" (UID: \"05ef3745-1126-43ed-bc8b-f7be6477ff30\") " pod="openstack/nova-cell0-conductor-db-sync-xbkk4" Mar 13 13:13:01.391420 master-0 kubenswrapper[28149]: I0313 
13:13:01.391254 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/05ef3745-1126-43ed-bc8b-f7be6477ff30-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-xbkk4\" (UID: \"05ef3745-1126-43ed-bc8b-f7be6477ff30\") " pod="openstack/nova-cell0-conductor-db-sync-xbkk4" Mar 13 13:13:01.493864 master-0 kubenswrapper[28149]: I0313 13:13:01.493802 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/05ef3745-1126-43ed-bc8b-f7be6477ff30-scripts\") pod \"nova-cell0-conductor-db-sync-xbkk4\" (UID: \"05ef3745-1126-43ed-bc8b-f7be6477ff30\") " pod="openstack/nova-cell0-conductor-db-sync-xbkk4" Mar 13 13:13:01.494478 master-0 kubenswrapper[28149]: I0313 13:13:01.493913 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/05ef3745-1126-43ed-bc8b-f7be6477ff30-config-data\") pod \"nova-cell0-conductor-db-sync-xbkk4\" (UID: \"05ef3745-1126-43ed-bc8b-f7be6477ff30\") " pod="openstack/nova-cell0-conductor-db-sync-xbkk4" Mar 13 13:13:01.494478 master-0 kubenswrapper[28149]: I0313 13:13:01.493946 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b5b7z\" (UniqueName: \"kubernetes.io/projected/05ef3745-1126-43ed-bc8b-f7be6477ff30-kube-api-access-b5b7z\") pod \"nova-cell0-conductor-db-sync-xbkk4\" (UID: \"05ef3745-1126-43ed-bc8b-f7be6477ff30\") " pod="openstack/nova-cell0-conductor-db-sync-xbkk4" Mar 13 13:13:01.494478 master-0 kubenswrapper[28149]: I0313 13:13:01.494051 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/05ef3745-1126-43ed-bc8b-f7be6477ff30-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-xbkk4\" (UID: \"05ef3745-1126-43ed-bc8b-f7be6477ff30\") " 
pod="openstack/nova-cell0-conductor-db-sync-xbkk4"
Mar 13 13:13:01.498434 master-0 kubenswrapper[28149]: I0313 13:13:01.498385 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/05ef3745-1126-43ed-bc8b-f7be6477ff30-scripts\") pod \"nova-cell0-conductor-db-sync-xbkk4\" (UID: \"05ef3745-1126-43ed-bc8b-f7be6477ff30\") " pod="openstack/nova-cell0-conductor-db-sync-xbkk4"
Mar 13 13:13:01.498925 master-0 kubenswrapper[28149]: I0313 13:13:01.498888 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/05ef3745-1126-43ed-bc8b-f7be6477ff30-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-xbkk4\" (UID: \"05ef3745-1126-43ed-bc8b-f7be6477ff30\") " pod="openstack/nova-cell0-conductor-db-sync-xbkk4"
Mar 13 13:13:01.500764 master-0 kubenswrapper[28149]: I0313 13:13:01.500728 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/05ef3745-1126-43ed-bc8b-f7be6477ff30-config-data\") pod \"nova-cell0-conductor-db-sync-xbkk4\" (UID: \"05ef3745-1126-43ed-bc8b-f7be6477ff30\") " pod="openstack/nova-cell0-conductor-db-sync-xbkk4"
Mar 13 13:13:01.535675 master-0 kubenswrapper[28149]: I0313 13:13:01.535232 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b5b7z\" (UniqueName: \"kubernetes.io/projected/05ef3745-1126-43ed-bc8b-f7be6477ff30-kube-api-access-b5b7z\") pod \"nova-cell0-conductor-db-sync-xbkk4\" (UID: \"05ef3745-1126-43ed-bc8b-f7be6477ff30\") " pod="openstack/nova-cell0-conductor-db-sync-xbkk4"
Mar 13 13:13:01.627048 master-0 kubenswrapper[28149]: I0313 13:13:01.626678 28149 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-xbkk4"
Mar 13 13:13:01.885444 master-0 kubenswrapper[28149]: I0313 13:13:01.885158 28149 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ironic-inspector-db-sync-4wfp7"
Mar 13 13:13:02.037728 master-0 kubenswrapper[28149]: I0313 13:13:02.037659 28149 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/3181ad18-51bf-4620-b629-5e1a05bab0e0-config\") pod \"3181ad18-51bf-4620-b629-5e1a05bab0e0\" (UID: \"3181ad18-51bf-4620-b629-5e1a05bab0e0\") "
Mar 13 13:13:02.037985 master-0 kubenswrapper[28149]: I0313 13:13:02.037807 28149 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3181ad18-51bf-4620-b629-5e1a05bab0e0-scripts\") pod \"3181ad18-51bf-4620-b629-5e1a05bab0e0\" (UID: \"3181ad18-51bf-4620-b629-5e1a05bab0e0\") "
Mar 13 13:13:02.037985 master-0 kubenswrapper[28149]: I0313 13:13:02.037843 28149 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3181ad18-51bf-4620-b629-5e1a05bab0e0-combined-ca-bundle\") pod \"3181ad18-51bf-4620-b629-5e1a05bab0e0\" (UID: \"3181ad18-51bf-4620-b629-5e1a05bab0e0\") "
Mar 13 13:13:02.038095 master-0 kubenswrapper[28149]: I0313 13:13:02.038065 28149 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nrq9m\" (UniqueName: \"kubernetes.io/projected/3181ad18-51bf-4620-b629-5e1a05bab0e0-kube-api-access-nrq9m\") pod \"3181ad18-51bf-4620-b629-5e1a05bab0e0\" (UID: \"3181ad18-51bf-4620-b629-5e1a05bab0e0\") "
Mar 13 13:13:02.038189 master-0 kubenswrapper[28149]: I0313 13:13:02.038156 28149 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-podinfo\" (UniqueName: \"kubernetes.io/downward-api/3181ad18-51bf-4620-b629-5e1a05bab0e0-etc-podinfo\") pod \"3181ad18-51bf-4620-b629-5e1a05bab0e0\" (UID: \"3181ad18-51bf-4620-b629-5e1a05bab0e0\") "
Mar 13 13:13:02.038257 master-0 kubenswrapper[28149]: I0313 13:13:02.038191 28149 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-ironic-inspector-dhcp-hostsdir\" (UniqueName: \"kubernetes.io/empty-dir/3181ad18-51bf-4620-b629-5e1a05bab0e0-var-lib-ironic-inspector-dhcp-hostsdir\") pod \"3181ad18-51bf-4620-b629-5e1a05bab0e0\" (UID: \"3181ad18-51bf-4620-b629-5e1a05bab0e0\") "
Mar 13 13:13:02.038309 master-0 kubenswrapper[28149]: I0313 13:13:02.038259 28149 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-ironic\" (UniqueName: \"kubernetes.io/empty-dir/3181ad18-51bf-4620-b629-5e1a05bab0e0-var-lib-ironic\") pod \"3181ad18-51bf-4620-b629-5e1a05bab0e0\" (UID: \"3181ad18-51bf-4620-b629-5e1a05bab0e0\") "
Mar 13 13:13:02.039278 master-0 kubenswrapper[28149]: I0313 13:13:02.039246 28149 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3181ad18-51bf-4620-b629-5e1a05bab0e0-var-lib-ironic" (OuterVolumeSpecName: "var-lib-ironic") pod "3181ad18-51bf-4620-b629-5e1a05bab0e0" (UID: "3181ad18-51bf-4620-b629-5e1a05bab0e0"). InnerVolumeSpecName "var-lib-ironic". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 13 13:13:02.039619 master-0 kubenswrapper[28149]: I0313 13:13:02.039542 28149 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3181ad18-51bf-4620-b629-5e1a05bab0e0-var-lib-ironic-inspector-dhcp-hostsdir" (OuterVolumeSpecName: "var-lib-ironic-inspector-dhcp-hostsdir") pod "3181ad18-51bf-4620-b629-5e1a05bab0e0" (UID: "3181ad18-51bf-4620-b629-5e1a05bab0e0"). InnerVolumeSpecName "var-lib-ironic-inspector-dhcp-hostsdir". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 13 13:13:02.045968 master-0 kubenswrapper[28149]: I0313 13:13:02.045465 28149 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/3181ad18-51bf-4620-b629-5e1a05bab0e0-etc-podinfo" (OuterVolumeSpecName: "etc-podinfo") pod "3181ad18-51bf-4620-b629-5e1a05bab0e0" (UID: "3181ad18-51bf-4620-b629-5e1a05bab0e0"). InnerVolumeSpecName "etc-podinfo". PluginName "kubernetes.io/downward-api", VolumeGidValue ""
Mar 13 13:13:02.046193 master-0 kubenswrapper[28149]: I0313 13:13:02.046043 28149 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3181ad18-51bf-4620-b629-5e1a05bab0e0-kube-api-access-nrq9m" (OuterVolumeSpecName: "kube-api-access-nrq9m") pod "3181ad18-51bf-4620-b629-5e1a05bab0e0" (UID: "3181ad18-51bf-4620-b629-5e1a05bab0e0"). InnerVolumeSpecName "kube-api-access-nrq9m". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 13 13:13:02.046193 master-0 kubenswrapper[28149]: I0313 13:13:02.046185 28149 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3181ad18-51bf-4620-b629-5e1a05bab0e0-scripts" (OuterVolumeSpecName: "scripts") pod "3181ad18-51bf-4620-b629-5e1a05bab0e0" (UID: "3181ad18-51bf-4620-b629-5e1a05bab0e0"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 13 13:13:02.077638 master-0 kubenswrapper[28149]: I0313 13:13:02.077556 28149 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3181ad18-51bf-4620-b629-5e1a05bab0e0-config" (OuterVolumeSpecName: "config") pod "3181ad18-51bf-4620-b629-5e1a05bab0e0" (UID: "3181ad18-51bf-4620-b629-5e1a05bab0e0"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 13 13:13:02.091083 master-0 kubenswrapper[28149]: I0313 13:13:02.090950 28149 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3181ad18-51bf-4620-b629-5e1a05bab0e0-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3181ad18-51bf-4620-b629-5e1a05bab0e0" (UID: "3181ad18-51bf-4620-b629-5e1a05bab0e0"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 13 13:13:02.141857 master-0 kubenswrapper[28149]: I0313 13:13:02.141726 28149 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nrq9m\" (UniqueName: \"kubernetes.io/projected/3181ad18-51bf-4620-b629-5e1a05bab0e0-kube-api-access-nrq9m\") on node \"master-0\" DevicePath \"\""
Mar 13 13:13:02.141857 master-0 kubenswrapper[28149]: I0313 13:13:02.141782 28149 reconciler_common.go:293] "Volume detached for volume \"etc-podinfo\" (UniqueName: \"kubernetes.io/downward-api/3181ad18-51bf-4620-b629-5e1a05bab0e0-etc-podinfo\") on node \"master-0\" DevicePath \"\""
Mar 13 13:13:02.141857 master-0 kubenswrapper[28149]: I0313 13:13:02.141801 28149 reconciler_common.go:293] "Volume detached for volume \"var-lib-ironic-inspector-dhcp-hostsdir\" (UniqueName: \"kubernetes.io/empty-dir/3181ad18-51bf-4620-b629-5e1a05bab0e0-var-lib-ironic-inspector-dhcp-hostsdir\") on node \"master-0\" DevicePath \"\""
Mar 13 13:13:02.141857 master-0 kubenswrapper[28149]: I0313 13:13:02.141820 28149 reconciler_common.go:293] "Volume detached for volume \"var-lib-ironic\" (UniqueName: \"kubernetes.io/empty-dir/3181ad18-51bf-4620-b629-5e1a05bab0e0-var-lib-ironic\") on node \"master-0\" DevicePath \"\""
Mar 13 13:13:02.141857 master-0 kubenswrapper[28149]: I0313 13:13:02.141834 28149 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/3181ad18-51bf-4620-b629-5e1a05bab0e0-config\") on node \"master-0\" DevicePath \"\""
Mar 13 13:13:02.141857 master-0 kubenswrapper[28149]: I0313 13:13:02.141845 28149 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3181ad18-51bf-4620-b629-5e1a05bab0e0-scripts\") on node \"master-0\" DevicePath \"\""
Mar 13 13:13:02.141857 master-0 kubenswrapper[28149]: I0313 13:13:02.141857 28149 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3181ad18-51bf-4620-b629-5e1a05bab0e0-combined-ca-bundle\") on node \"master-0\" DevicePath \"\""
Mar 13 13:13:02.249369 master-0 kubenswrapper[28149]: I0313 13:13:02.249299 28149 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-xbkk4"]
Mar 13 13:13:02.253159 master-0 kubenswrapper[28149]: W0313 13:13:02.249765 28149 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod05ef3745_1126_43ed_bc8b_f7be6477ff30.slice/crio-3dbab30fa7253803bf3dff1411b0a21215f18ab494c947b858e33021243846cf WatchSource:0}: Error finding container 3dbab30fa7253803bf3dff1411b0a21215f18ab494c947b858e33021243846cf: Status 404 returned error can't find the container with id 3dbab30fa7253803bf3dff1411b0a21215f18ab494c947b858e33021243846cf
Mar 13 13:13:02.465416 master-0 kubenswrapper[28149]: I0313 13:13:02.465262 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-xbkk4" event={"ID":"05ef3745-1126-43ed-bc8b-f7be6477ff30","Type":"ContainerStarted","Data":"3dbab30fa7253803bf3dff1411b0a21215f18ab494c947b858e33021243846cf"}
Mar 13 13:13:02.468620 master-0 kubenswrapper[28149]: I0313 13:13:02.468573 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-inspector-db-sync-4wfp7" event={"ID":"3181ad18-51bf-4620-b629-5e1a05bab0e0","Type":"ContainerDied","Data":"4c960a46d2ead0ef3d2b85e7f7558f0f84357c6aa4fa6cef75391e4c8ca247e1"}
Mar 13 13:13:02.468721 master-0 kubenswrapper[28149]: I0313 13:13:02.468630 28149 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4c960a46d2ead0ef3d2b85e7f7558f0f84357c6aa4fa6cef75391e4c8ca247e1"
Mar 13 13:13:02.468721 master-0 kubenswrapper[28149]: I0313 13:13:02.468640 28149 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ironic-inspector-db-sync-4wfp7"
Mar 13 13:13:02.862103 master-0 kubenswrapper[28149]: I0313 13:13:02.862034 28149 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/glance-e6fbd-default-external-api-0" podUID="f9e03bf1-b908-4148-8838-f54eaa369e6a" containerName="glance-httpd" probeResult="failure" output="Get \"https://10.128.0.218:9292/healthcheck\": read tcp 10.128.0.2:57604->10.128.0.218:9292: read: connection reset by peer"
Mar 13 13:13:02.862750 master-0 kubenswrapper[28149]: I0313 13:13:02.862290 28149 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/glance-e6fbd-default-external-api-0" podUID="f9e03bf1-b908-4148-8838-f54eaa369e6a" containerName="glance-log" probeResult="failure" output="Get \"https://10.128.0.218:9292/healthcheck\": read tcp 10.128.0.2:57590->10.128.0.218:9292: read: connection reset by peer"
Mar 13 13:13:03.455881 master-0 kubenswrapper[28149]: I0313 13:13:03.455681 28149 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/glance-e6fbd-default-internal-api-0" podUID="611eba2b-39d1-43b8-bdce-7b7c5436180c" containerName="glance-log" probeResult="failure" output="Get \"https://10.128.0.217:9292/healthcheck\": read tcp 10.128.0.2:51100->10.128.0.217:9292: read: connection reset by peer"
Mar 13 13:13:03.456393 master-0 kubenswrapper[28149]: I0313 13:13:03.456333 28149 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/glance-e6fbd-default-internal-api-0" podUID="611eba2b-39d1-43b8-bdce-7b7c5436180c" containerName="glance-httpd" probeResult="failure" output="Get \"https://10.128.0.217:9292/healthcheck\": read tcp 10.128.0.2:51084->10.128.0.217:9292: read: connection reset by peer"
Mar 13 13:13:03.945581 master-0 kubenswrapper[28149]: I0313 13:13:03.945516 28149 generic.go:334] "Generic (PLEG): container finished" podID="f9e03bf1-b908-4148-8838-f54eaa369e6a" containerID="fa6e60ae3af7814543f74294facd270e11b151ddfa03c5dc99c77d7ed6414b4a" exitCode=0
Mar 13 13:13:03.945581 master-0 kubenswrapper[28149]: I0313 13:13:03.945581 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-e6fbd-default-external-api-0" event={"ID":"f9e03bf1-b908-4148-8838-f54eaa369e6a","Type":"ContainerDied","Data":"fa6e60ae3af7814543f74294facd270e11b151ddfa03c5dc99c77d7ed6414b4a"}
Mar 13 13:13:04.104162 master-0 kubenswrapper[28149]: I0313 13:13:04.101509 28149 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-bfb994cb5-frl54"]
Mar 13 13:13:04.104162 master-0 kubenswrapper[28149]: E0313 13:13:04.102125 28149 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3181ad18-51bf-4620-b629-5e1a05bab0e0" containerName="ironic-inspector-db-sync"
Mar 13 13:13:04.104162 master-0 kubenswrapper[28149]: I0313 13:13:04.102153 28149 state_mem.go:107] "Deleted CPUSet assignment" podUID="3181ad18-51bf-4620-b629-5e1a05bab0e0" containerName="ironic-inspector-db-sync"
Mar 13 13:13:04.107854 master-0 kubenswrapper[28149]: I0313 13:13:04.107801 28149 memory_manager.go:354] "RemoveStaleState removing state" podUID="3181ad18-51bf-4620-b629-5e1a05bab0e0" containerName="ironic-inspector-db-sync"
Mar 13 13:13:04.113174 master-0 kubenswrapper[28149]: I0313 13:13:04.111151 28149 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-bfb994cb5-frl54"
Mar 13 13:13:04.147550 master-0 kubenswrapper[28149]: I0313 13:13:04.147481 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vxb6r\" (UniqueName: \"kubernetes.io/projected/a41644d0-10a5-4e06-8da5-15690e85b5a3-kube-api-access-vxb6r\") pod \"dnsmasq-dns-bfb994cb5-frl54\" (UID: \"a41644d0-10a5-4e06-8da5-15690e85b5a3\") " pod="openstack/dnsmasq-dns-bfb994cb5-frl54"
Mar 13 13:13:04.147808 master-0 kubenswrapper[28149]: I0313 13:13:04.147594 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a41644d0-10a5-4e06-8da5-15690e85b5a3-dns-svc\") pod \"dnsmasq-dns-bfb994cb5-frl54\" (UID: \"a41644d0-10a5-4e06-8da5-15690e85b5a3\") " pod="openstack/dnsmasq-dns-bfb994cb5-frl54"
Mar 13 13:13:04.147808 master-0 kubenswrapper[28149]: I0313 13:13:04.147679 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a41644d0-10a5-4e06-8da5-15690e85b5a3-ovsdbserver-sb\") pod \"dnsmasq-dns-bfb994cb5-frl54\" (UID: \"a41644d0-10a5-4e06-8da5-15690e85b5a3\") " pod="openstack/dnsmasq-dns-bfb994cb5-frl54"
Mar 13 13:13:04.147897 master-0 kubenswrapper[28149]: I0313 13:13:04.147821 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a41644d0-10a5-4e06-8da5-15690e85b5a3-dns-swift-storage-0\") pod \"dnsmasq-dns-bfb994cb5-frl54\" (UID: \"a41644d0-10a5-4e06-8da5-15690e85b5a3\") " pod="openstack/dnsmasq-dns-bfb994cb5-frl54"
Mar 13 13:13:04.148167 master-0 kubenswrapper[28149]: I0313 13:13:04.148087 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a41644d0-10a5-4e06-8da5-15690e85b5a3-ovsdbserver-nb\") pod \"dnsmasq-dns-bfb994cb5-frl54\" (UID: \"a41644d0-10a5-4e06-8da5-15690e85b5a3\") " pod="openstack/dnsmasq-dns-bfb994cb5-frl54"
Mar 13 13:13:04.148220 master-0 kubenswrapper[28149]: I0313 13:13:04.148200 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a41644d0-10a5-4e06-8da5-15690e85b5a3-config\") pod \"dnsmasq-dns-bfb994cb5-frl54\" (UID: \"a41644d0-10a5-4e06-8da5-15690e85b5a3\") " pod="openstack/dnsmasq-dns-bfb994cb5-frl54"
Mar 13 13:13:04.149993 master-0 kubenswrapper[28149]: I0313 13:13:04.149951 28149 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-bfb994cb5-frl54"]
Mar 13 13:13:04.275768 master-0 kubenswrapper[28149]: I0313 13:13:04.275710 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a41644d0-10a5-4e06-8da5-15690e85b5a3-dns-svc\") pod \"dnsmasq-dns-bfb994cb5-frl54\" (UID: \"a41644d0-10a5-4e06-8da5-15690e85b5a3\") " pod="openstack/dnsmasq-dns-bfb994cb5-frl54"
Mar 13 13:13:04.276022 master-0 kubenswrapper[28149]: I0313 13:13:04.275857 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a41644d0-10a5-4e06-8da5-15690e85b5a3-ovsdbserver-sb\") pod \"dnsmasq-dns-bfb994cb5-frl54\" (UID: \"a41644d0-10a5-4e06-8da5-15690e85b5a3\") " pod="openstack/dnsmasq-dns-bfb994cb5-frl54"
Mar 13 13:13:04.276063 master-0 kubenswrapper[28149]: I0313 13:13:04.276044 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a41644d0-10a5-4e06-8da5-15690e85b5a3-dns-swift-storage-0\") pod \"dnsmasq-dns-bfb994cb5-frl54\" (UID: \"a41644d0-10a5-4e06-8da5-15690e85b5a3\") " pod="openstack/dnsmasq-dns-bfb994cb5-frl54"
Mar 13 13:13:04.276150 master-0 kubenswrapper[28149]: I0313 13:13:04.276109 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a41644d0-10a5-4e06-8da5-15690e85b5a3-ovsdbserver-nb\") pod \"dnsmasq-dns-bfb994cb5-frl54\" (UID: \"a41644d0-10a5-4e06-8da5-15690e85b5a3\") " pod="openstack/dnsmasq-dns-bfb994cb5-frl54"
Mar 13 13:13:04.276229 master-0 kubenswrapper[28149]: I0313 13:13:04.276174 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a41644d0-10a5-4e06-8da5-15690e85b5a3-config\") pod \"dnsmasq-dns-bfb994cb5-frl54\" (UID: \"a41644d0-10a5-4e06-8da5-15690e85b5a3\") " pod="openstack/dnsmasq-dns-bfb994cb5-frl54"
Mar 13 13:13:04.281149 master-0 kubenswrapper[28149]: I0313 13:13:04.276277 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vxb6r\" (UniqueName: \"kubernetes.io/projected/a41644d0-10a5-4e06-8da5-15690e85b5a3-kube-api-access-vxb6r\") pod \"dnsmasq-dns-bfb994cb5-frl54\" (UID: \"a41644d0-10a5-4e06-8da5-15690e85b5a3\") " pod="openstack/dnsmasq-dns-bfb994cb5-frl54"
Mar 13 13:13:04.281149 master-0 kubenswrapper[28149]: I0313 13:13:04.278990 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a41644d0-10a5-4e06-8da5-15690e85b5a3-ovsdbserver-sb\") pod \"dnsmasq-dns-bfb994cb5-frl54\" (UID: \"a41644d0-10a5-4e06-8da5-15690e85b5a3\") " pod="openstack/dnsmasq-dns-bfb994cb5-frl54"
Mar 13 13:13:04.281149 master-0 kubenswrapper[28149]: I0313 13:13:04.279027 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a41644d0-10a5-4e06-8da5-15690e85b5a3-ovsdbserver-nb\") pod \"dnsmasq-dns-bfb994cb5-frl54\" (UID: \"a41644d0-10a5-4e06-8da5-15690e85b5a3\") " pod="openstack/dnsmasq-dns-bfb994cb5-frl54"
Mar 13 13:13:04.285338 master-0 kubenswrapper[28149]: I0313 13:13:04.285298 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a41644d0-10a5-4e06-8da5-15690e85b5a3-dns-svc\") pod \"dnsmasq-dns-bfb994cb5-frl54\" (UID: \"a41644d0-10a5-4e06-8da5-15690e85b5a3\") " pod="openstack/dnsmasq-dns-bfb994cb5-frl54"
Mar 13 13:13:04.286952 master-0 kubenswrapper[28149]: I0313 13:13:04.286905 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a41644d0-10a5-4e06-8da5-15690e85b5a3-dns-swift-storage-0\") pod \"dnsmasq-dns-bfb994cb5-frl54\" (UID: \"a41644d0-10a5-4e06-8da5-15690e85b5a3\") " pod="openstack/dnsmasq-dns-bfb994cb5-frl54"
Mar 13 13:13:04.294304 master-0 kubenswrapper[28149]: I0313 13:13:04.288746 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a41644d0-10a5-4e06-8da5-15690e85b5a3-config\") pod \"dnsmasq-dns-bfb994cb5-frl54\" (UID: \"a41644d0-10a5-4e06-8da5-15690e85b5a3\") " pod="openstack/dnsmasq-dns-bfb994cb5-frl54"
Mar 13 13:13:04.331804 master-0 kubenswrapper[28149]: I0313 13:13:04.328120 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vxb6r\" (UniqueName: \"kubernetes.io/projected/a41644d0-10a5-4e06-8da5-15690e85b5a3-kube-api-access-vxb6r\") pod \"dnsmasq-dns-bfb994cb5-frl54\" (UID: \"a41644d0-10a5-4e06-8da5-15690e85b5a3\") " pod="openstack/dnsmasq-dns-bfb994cb5-frl54"
Mar 13 13:13:04.385718 master-0 kubenswrapper[28149]: I0313 13:13:04.385624 28149 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ironic-inspector-0"]
Mar 13 13:13:04.455077 master-0 kubenswrapper[28149]: I0313 13:13:04.455021 28149 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ironic-inspector-0"
Mar 13 13:13:04.459959 master-0 kubenswrapper[28149]: I0313 13:13:04.459922 28149 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ironic-inspector-scripts"
Mar 13 13:13:04.460291 master-0 kubenswrapper[28149]: I0313 13:13:04.460274 28149 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ironic-inspector-config-data"
Mar 13 13:13:04.460412 master-0 kubenswrapper[28149]: I0313 13:13:04.460395 28149 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-transport-url-ironic-inspector-transport"
Mar 13 13:13:04.476038 master-0 kubenswrapper[28149]: I0313 13:13:04.475909 28149 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ironic-inspector-0"]
Mar 13 13:13:04.540678 master-0 kubenswrapper[28149]: I0313 13:13:04.540617 28149 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-bfb994cb5-frl54"
Mar 13 13:13:04.593468 master-0 kubenswrapper[28149]: I0313 13:13:04.593402 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/27f4e8a7-5136-4fbe-a689-4f2071d2480d-combined-ca-bundle\") pod \"ironic-inspector-0\" (UID: \"27f4e8a7-5136-4fbe-a689-4f2071d2480d\") " pod="openstack/ironic-inspector-0"
Mar 13 13:13:04.593468 master-0 kubenswrapper[28149]: I0313 13:13:04.593461 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/27f4e8a7-5136-4fbe-a689-4f2071d2480d-config\") pod \"ironic-inspector-0\" (UID: \"27f4e8a7-5136-4fbe-a689-4f2071d2480d\") " pod="openstack/ironic-inspector-0"
Mar 13 13:13:04.593779 master-0 kubenswrapper[28149]: I0313 13:13:04.593547 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-podinfo\" (UniqueName: \"kubernetes.io/downward-api/27f4e8a7-5136-4fbe-a689-4f2071d2480d-etc-podinfo\") pod \"ironic-inspector-0\" (UID: \"27f4e8a7-5136-4fbe-a689-4f2071d2480d\") " pod="openstack/ironic-inspector-0"
Mar 13 13:13:04.593779 master-0 kubenswrapper[28149]: I0313 13:13:04.593573 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-ironic\" (UniqueName: \"kubernetes.io/empty-dir/27f4e8a7-5136-4fbe-a689-4f2071d2480d-var-lib-ironic\") pod \"ironic-inspector-0\" (UID: \"27f4e8a7-5136-4fbe-a689-4f2071d2480d\") " pod="openstack/ironic-inspector-0"
Mar 13 13:13:04.593779 master-0 kubenswrapper[28149]: I0313 13:13:04.593601 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-65n62\" (UniqueName: \"kubernetes.io/projected/27f4e8a7-5136-4fbe-a689-4f2071d2480d-kube-api-access-65n62\") pod \"ironic-inspector-0\" (UID: \"27f4e8a7-5136-4fbe-a689-4f2071d2480d\") " pod="openstack/ironic-inspector-0"
Mar 13 13:13:04.593779 master-0 kubenswrapper[28149]: I0313 13:13:04.593711 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-ironic-inspector-dhcp-hostsdir\" (UniqueName: \"kubernetes.io/empty-dir/27f4e8a7-5136-4fbe-a689-4f2071d2480d-var-lib-ironic-inspector-dhcp-hostsdir\") pod \"ironic-inspector-0\" (UID: \"27f4e8a7-5136-4fbe-a689-4f2071d2480d\") " pod="openstack/ironic-inspector-0"
Mar 13 13:13:04.594527 master-0 kubenswrapper[28149]: I0313 13:13:04.594419 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/27f4e8a7-5136-4fbe-a689-4f2071d2480d-scripts\") pod \"ironic-inspector-0\" (UID: \"27f4e8a7-5136-4fbe-a689-4f2071d2480d\") " pod="openstack/ironic-inspector-0"
Mar 13 13:13:04.700713 master-0 kubenswrapper[28149]: I0313 13:13:04.698584 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/27f4e8a7-5136-4fbe-a689-4f2071d2480d-combined-ca-bundle\") pod \"ironic-inspector-0\" (UID: \"27f4e8a7-5136-4fbe-a689-4f2071d2480d\") " pod="openstack/ironic-inspector-0"
Mar 13 13:13:04.700713 master-0 kubenswrapper[28149]: I0313 13:13:04.698646 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/27f4e8a7-5136-4fbe-a689-4f2071d2480d-config\") pod \"ironic-inspector-0\" (UID: \"27f4e8a7-5136-4fbe-a689-4f2071d2480d\") " pod="openstack/ironic-inspector-0"
Mar 13 13:13:04.700713 master-0 kubenswrapper[28149]: I0313 13:13:04.698726 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-podinfo\" (UniqueName: \"kubernetes.io/downward-api/27f4e8a7-5136-4fbe-a689-4f2071d2480d-etc-podinfo\") pod \"ironic-inspector-0\" (UID: \"27f4e8a7-5136-4fbe-a689-4f2071d2480d\") " pod="openstack/ironic-inspector-0"
Mar 13 13:13:04.700713 master-0 kubenswrapper[28149]: I0313 13:13:04.698805 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-ironic\" (UniqueName: \"kubernetes.io/empty-dir/27f4e8a7-5136-4fbe-a689-4f2071d2480d-var-lib-ironic\") pod \"ironic-inspector-0\" (UID: \"27f4e8a7-5136-4fbe-a689-4f2071d2480d\") " pod="openstack/ironic-inspector-0"
Mar 13 13:13:04.700713 master-0 kubenswrapper[28149]: I0313 13:13:04.698829 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-65n62\" (UniqueName: \"kubernetes.io/projected/27f4e8a7-5136-4fbe-a689-4f2071d2480d-kube-api-access-65n62\") pod \"ironic-inspector-0\" (UID: \"27f4e8a7-5136-4fbe-a689-4f2071d2480d\") " pod="openstack/ironic-inspector-0"
Mar 13 13:13:04.700713 master-0 kubenswrapper[28149]: I0313 13:13:04.699266 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-ironic-inspector-dhcp-hostsdir\" (UniqueName: \"kubernetes.io/empty-dir/27f4e8a7-5136-4fbe-a689-4f2071d2480d-var-lib-ironic-inspector-dhcp-hostsdir\") pod \"ironic-inspector-0\" (UID: \"27f4e8a7-5136-4fbe-a689-4f2071d2480d\") " pod="openstack/ironic-inspector-0"
Mar 13 13:13:04.700713 master-0 kubenswrapper[28149]: I0313 13:13:04.699639 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/27f4e8a7-5136-4fbe-a689-4f2071d2480d-scripts\") pod \"ironic-inspector-0\" (UID: \"27f4e8a7-5136-4fbe-a689-4f2071d2480d\") " pod="openstack/ironic-inspector-0"
Mar 13 13:13:04.702073 master-0 kubenswrapper[28149]: I0313 13:13:04.702042 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-ironic-inspector-dhcp-hostsdir\" (UniqueName: \"kubernetes.io/empty-dir/27f4e8a7-5136-4fbe-a689-4f2071d2480d-var-lib-ironic-inspector-dhcp-hostsdir\") pod \"ironic-inspector-0\" (UID: \"27f4e8a7-5136-4fbe-a689-4f2071d2480d\") " pod="openstack/ironic-inspector-0"
Mar 13 13:13:04.708042 master-0 kubenswrapper[28149]: I0313 13:13:04.704862 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/27f4e8a7-5136-4fbe-a689-4f2071d2480d-scripts\") pod \"ironic-inspector-0\" (UID: \"27f4e8a7-5136-4fbe-a689-4f2071d2480d\") " pod="openstack/ironic-inspector-0"
Mar 13 13:13:04.708042 master-0 kubenswrapper[28149]: I0313 13:13:04.706950 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-ironic\" (UniqueName: \"kubernetes.io/empty-dir/27f4e8a7-5136-4fbe-a689-4f2071d2480d-var-lib-ironic\") pod \"ironic-inspector-0\" (UID: \"27f4e8a7-5136-4fbe-a689-4f2071d2480d\") " pod="openstack/ironic-inspector-0"
Mar 13 13:13:04.709131 master-0 kubenswrapper[28149]: I0313 13:13:04.708680 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/27f4e8a7-5136-4fbe-a689-4f2071d2480d-combined-ca-bundle\") pod \"ironic-inspector-0\" (UID: \"27f4e8a7-5136-4fbe-a689-4f2071d2480d\") " pod="openstack/ironic-inspector-0"
Mar 13 13:13:04.713374 master-0 kubenswrapper[28149]: I0313 13:13:04.713343 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/27f4e8a7-5136-4fbe-a689-4f2071d2480d-config\") pod \"ironic-inspector-0\" (UID: \"27f4e8a7-5136-4fbe-a689-4f2071d2480d\") " pod="openstack/ironic-inspector-0"
Mar 13 13:13:04.713852 master-0 kubenswrapper[28149]: I0313 13:13:04.713833 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-podinfo\" (UniqueName: \"kubernetes.io/downward-api/27f4e8a7-5136-4fbe-a689-4f2071d2480d-etc-podinfo\") pod \"ironic-inspector-0\" (UID: \"27f4e8a7-5136-4fbe-a689-4f2071d2480d\") " pod="openstack/ironic-inspector-0"
Mar 13 13:13:04.741436 master-0 kubenswrapper[28149]: I0313 13:13:04.736558 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-65n62\" (UniqueName: \"kubernetes.io/projected/27f4e8a7-5136-4fbe-a689-4f2071d2480d-kube-api-access-65n62\") pod \"ironic-inspector-0\" (UID: \"27f4e8a7-5136-4fbe-a689-4f2071d2480d\") " pod="openstack/ironic-inspector-0"
Mar 13 13:13:04.835174 master-0 kubenswrapper[28149]: I0313 13:13:04.834468 28149 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ironic-inspector-0"
Mar 13 13:13:09.526164 master-0 kubenswrapper[28149]: I0313 13:13:09.525325 28149 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ironic-inspector-0"]
Mar 13 13:13:10.604505 master-0 kubenswrapper[28149]: I0313 13:13:10.604422 28149 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-e6fbd-default-external-api-0"
Mar 13 13:13:10.976587 master-0 kubenswrapper[28149]: I0313 13:13:10.974083 28149 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/f9e03bf1-b908-4148-8838-f54eaa369e6a-httpd-run\") pod \"f9e03bf1-b908-4148-8838-f54eaa369e6a\" (UID: \"f9e03bf1-b908-4148-8838-f54eaa369e6a\") "
Mar 13 13:13:10.976587 master-0 kubenswrapper[28149]: I0313 13:13:10.974306 28149 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f9e03bf1-b908-4148-8838-f54eaa369e6a-config-data\") pod \"f9e03bf1-b908-4148-8838-f54eaa369e6a\" (UID: \"f9e03bf1-b908-4148-8838-f54eaa369e6a\") "
Mar 13 13:13:10.976587 master-0 kubenswrapper[28149]: I0313 13:13:10.974381 28149 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f9e03bf1-b908-4148-8838-f54eaa369e6a-combined-ca-bundle\") pod \"f9e03bf1-b908-4148-8838-f54eaa369e6a\" (UID: \"f9e03bf1-b908-4148-8838-f54eaa369e6a\") "
Mar 13 13:13:10.976587 master-0 kubenswrapper[28149]: I0313 13:13:10.974476 28149 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tb9ds\" (UniqueName: \"kubernetes.io/projected/f9e03bf1-b908-4148-8838-f54eaa369e6a-kube-api-access-tb9ds\") pod \"f9e03bf1-b908-4148-8838-f54eaa369e6a\" (UID: \"f9e03bf1-b908-4148-8838-f54eaa369e6a\") "
Mar 13 13:13:10.976587 master-0 kubenswrapper[28149]: I0313 13:13:10.974540 28149 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f9e03bf1-b908-4148-8838-f54eaa369e6a-public-tls-certs\") pod \"f9e03bf1-b908-4148-8838-f54eaa369e6a\" (UID: \"f9e03bf1-b908-4148-8838-f54eaa369e6a\") "
Mar 13 13:13:10.976587 master-0 kubenswrapper[28149]: I0313 13:13:10.974582 28149 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f9e03bf1-b908-4148-8838-f54eaa369e6a-scripts\") pod \"f9e03bf1-b908-4148-8838-f54eaa369e6a\" (UID: \"f9e03bf1-b908-4148-8838-f54eaa369e6a\") "
Mar 13 13:13:10.976587 master-0 kubenswrapper[28149]: I0313 13:13:10.975113 28149 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/csi/topolvm.io^46c3102e-7a0b-4e07-9a24-444142905798\") pod \"f9e03bf1-b908-4148-8838-f54eaa369e6a\" (UID: \"f9e03bf1-b908-4148-8838-f54eaa369e6a\") "
Mar 13 13:13:10.976587 master-0 kubenswrapper[28149]: I0313 13:13:10.975208 28149 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f9e03bf1-b908-4148-8838-f54eaa369e6a-logs\") pod \"f9e03bf1-b908-4148-8838-f54eaa369e6a\" (UID: \"f9e03bf1-b908-4148-8838-f54eaa369e6a\") "
Mar 13 13:13:10.977382 master-0 kubenswrapper[28149]: I0313 13:13:10.977030 28149 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f9e03bf1-b908-4148-8838-f54eaa369e6a-logs" (OuterVolumeSpecName: "logs") pod "f9e03bf1-b908-4148-8838-f54eaa369e6a" (UID: "f9e03bf1-b908-4148-8838-f54eaa369e6a"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 13 13:13:10.977423 master-0 kubenswrapper[28149]: I0313 13:13:10.977403 28149 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f9e03bf1-b908-4148-8838-f54eaa369e6a-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "f9e03bf1-b908-4148-8838-f54eaa369e6a" (UID: "f9e03bf1-b908-4148-8838-f54eaa369e6a"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 13 13:13:10.992725 master-0 kubenswrapper[28149]: I0313 13:13:10.992096 28149 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f9e03bf1-b908-4148-8838-f54eaa369e6a-kube-api-access-tb9ds" (OuterVolumeSpecName: "kube-api-access-tb9ds") pod "f9e03bf1-b908-4148-8838-f54eaa369e6a" (UID: "f9e03bf1-b908-4148-8838-f54eaa369e6a"). InnerVolumeSpecName "kube-api-access-tb9ds". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 13 13:13:11.014406 master-0 kubenswrapper[28149]: I0313 13:13:11.012282 28149 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f9e03bf1-b908-4148-8838-f54eaa369e6a-scripts" (OuterVolumeSpecName: "scripts") pod "f9e03bf1-b908-4148-8838-f54eaa369e6a" (UID: "f9e03bf1-b908-4148-8838-f54eaa369e6a"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 13 13:13:11.032809 master-0 kubenswrapper[28149]: I0313 13:13:11.032752 28149 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f9e03bf1-b908-4148-8838-f54eaa369e6a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f9e03bf1-b908-4148-8838-f54eaa369e6a" (UID: "f9e03bf1-b908-4148-8838-f54eaa369e6a"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 13 13:13:11.061239 master-0 kubenswrapper[28149]: I0313 13:13:11.061172 28149 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f9e03bf1-b908-4148-8838-f54eaa369e6a-config-data" (OuterVolumeSpecName: "config-data") pod "f9e03bf1-b908-4148-8838-f54eaa369e6a" (UID: "f9e03bf1-b908-4148-8838-f54eaa369e6a"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 13 13:13:11.066715 master-0 kubenswrapper[28149]: I0313 13:13:11.066666 28149 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f9e03bf1-b908-4148-8838-f54eaa369e6a-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "f9e03bf1-b908-4148-8838-f54eaa369e6a" (UID: "f9e03bf1-b908-4148-8838-f54eaa369e6a"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 13 13:13:11.096324 master-0 kubenswrapper[28149]: I0313 13:13:11.096260 28149 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/topolvm.io^46c3102e-7a0b-4e07-9a24-444142905798" (OuterVolumeSpecName: "glance") pod "f9e03bf1-b908-4148-8838-f54eaa369e6a" (UID: "f9e03bf1-b908-4148-8838-f54eaa369e6a"). InnerVolumeSpecName "pvc-12182b6b-d6bb-4e5f-ac3a-df190dba3645". PluginName "kubernetes.io/csi", VolumeGidValue ""
Mar 13 13:13:11.097000 master-0 kubenswrapper[28149]: I0313 13:13:11.096963 28149 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f9e03bf1-b908-4148-8838-f54eaa369e6a-config-data\") on node \"master-0\" DevicePath \"\""
Mar 13 13:13:11.097000 master-0 kubenswrapper[28149]: I0313 13:13:11.096991 28149 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f9e03bf1-b908-4148-8838-f54eaa369e6a-combined-ca-bundle\") on node \"master-0\" DevicePath \"\""
Mar 13 13:13:11.097167 master-0 kubenswrapper[28149]: I0313 13:13:11.097059 28149 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tb9ds\" (UniqueName: \"kubernetes.io/projected/f9e03bf1-b908-4148-8838-f54eaa369e6a-kube-api-access-tb9ds\") on node \"master-0\" DevicePath \"\""
Mar 13 13:13:11.097167 master-0 kubenswrapper[28149]: I0313 13:13:11.097077 28149 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\"
(UniqueName: \"kubernetes.io/secret/f9e03bf1-b908-4148-8838-f54eaa369e6a-public-tls-certs\") on node \"master-0\" DevicePath \"\"" Mar 13 13:13:11.097167 master-0 kubenswrapper[28149]: I0313 13:13:11.097089 28149 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f9e03bf1-b908-4148-8838-f54eaa369e6a-scripts\") on node \"master-0\" DevicePath \"\"" Mar 13 13:13:11.097167 master-0 kubenswrapper[28149]: I0313 13:13:11.097126 28149 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-12182b6b-d6bb-4e5f-ac3a-df190dba3645\" (UniqueName: \"kubernetes.io/csi/topolvm.io^46c3102e-7a0b-4e07-9a24-444142905798\") on node \"master-0\" " Mar 13 13:13:11.097167 master-0 kubenswrapper[28149]: I0313 13:13:11.097157 28149 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f9e03bf1-b908-4148-8838-f54eaa369e6a-logs\") on node \"master-0\" DevicePath \"\"" Mar 13 13:13:11.097167 master-0 kubenswrapper[28149]: I0313 13:13:11.097173 28149 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/f9e03bf1-b908-4148-8838-f54eaa369e6a-httpd-run\") on node \"master-0\" DevicePath \"\"" Mar 13 13:13:11.367741 master-0 kubenswrapper[28149]: I0313 13:13:11.362123 28149 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... 
Mar 13 13:13:11.367741 master-0 kubenswrapper[28149]: I0313 13:13:11.362510 28149 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-12182b6b-d6bb-4e5f-ac3a-df190dba3645" (UniqueName: "kubernetes.io/csi/topolvm.io^46c3102e-7a0b-4e07-9a24-444142905798") on node "master-0" Mar 13 13:13:11.387415 master-0 kubenswrapper[28149]: I0313 13:13:11.387051 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-e6fbd-default-external-api-0" event={"ID":"f9e03bf1-b908-4148-8838-f54eaa369e6a","Type":"ContainerDied","Data":"5c459f7a4847b0570e04c3439fbf5e0a9b56ca2eb0a245e11864fd5eb5f3425b"} Mar 13 13:13:11.387415 master-0 kubenswrapper[28149]: I0313 13:13:11.387116 28149 scope.go:117] "RemoveContainer" containerID="fa6e60ae3af7814543f74294facd270e11b151ddfa03c5dc99c77d7ed6414b4a" Mar 13 13:13:11.387415 master-0 kubenswrapper[28149]: I0313 13:13:11.387293 28149 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-e6fbd-default-external-api-0" Mar 13 13:13:11.403579 master-0 kubenswrapper[28149]: I0313 13:13:11.403511 28149 generic.go:334] "Generic (PLEG): container finished" podID="611eba2b-39d1-43b8-bdce-7b7c5436180c" containerID="7705e241a1083d4cd9858d6d7c541bec846e153fa8108212316ee24486559c75" exitCode=0 Mar 13 13:13:11.403579 master-0 kubenswrapper[28149]: I0313 13:13:11.403569 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-e6fbd-default-internal-api-0" event={"ID":"611eba2b-39d1-43b8-bdce-7b7c5436180c","Type":"ContainerDied","Data":"7705e241a1083d4cd9858d6d7c541bec846e153fa8108212316ee24486559c75"} Mar 13 13:13:11.454434 master-0 kubenswrapper[28149]: I0313 13:13:11.453343 28149 reconciler_common.go:293] "Volume detached for volume \"pvc-12182b6b-d6bb-4e5f-ac3a-df190dba3645\" (UniqueName: \"kubernetes.io/csi/topolvm.io^46c3102e-7a0b-4e07-9a24-444142905798\") on node \"master-0\" DevicePath \"\"" Mar 13 13:13:11.487768 master-0 kubenswrapper[28149]: I0313 
13:13:11.484053 28149 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-e6fbd-default-external-api-0"] Mar 13 13:13:11.515608 master-0 kubenswrapper[28149]: I0313 13:13:11.515523 28149 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-e6fbd-default-external-api-0"] Mar 13 13:13:11.616034 master-0 kubenswrapper[28149]: I0313 13:13:11.613687 28149 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-e6fbd-default-external-api-0"] Mar 13 13:13:11.620917 master-0 kubenswrapper[28149]: E0313 13:13:11.620861 28149 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f9e03bf1-b908-4148-8838-f54eaa369e6a" containerName="glance-log" Mar 13 13:13:11.620917 master-0 kubenswrapper[28149]: I0313 13:13:11.620902 28149 state_mem.go:107] "Deleted CPUSet assignment" podUID="f9e03bf1-b908-4148-8838-f54eaa369e6a" containerName="glance-log" Mar 13 13:13:11.620917 master-0 kubenswrapper[28149]: E0313 13:13:11.620930 28149 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f9e03bf1-b908-4148-8838-f54eaa369e6a" containerName="glance-httpd" Mar 13 13:13:11.621294 master-0 kubenswrapper[28149]: I0313 13:13:11.620936 28149 state_mem.go:107] "Deleted CPUSet assignment" podUID="f9e03bf1-b908-4148-8838-f54eaa369e6a" containerName="glance-httpd" Mar 13 13:13:11.621619 master-0 kubenswrapper[28149]: I0313 13:13:11.621560 28149 memory_manager.go:354] "RemoveStaleState removing state" podUID="f9e03bf1-b908-4148-8838-f54eaa369e6a" containerName="glance-log" Mar 13 13:13:11.621708 master-0 kubenswrapper[28149]: I0313 13:13:11.621643 28149 memory_manager.go:354] "RemoveStaleState removing state" podUID="f9e03bf1-b908-4148-8838-f54eaa369e6a" containerName="glance-httpd" Mar 13 13:13:11.624508 master-0 kubenswrapper[28149]: I0313 13:13:11.624047 28149 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-e6fbd-default-external-api-0" Mar 13 13:13:11.638162 master-0 kubenswrapper[28149]: I0313 13:13:11.627579 28149 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Mar 13 13:13:11.638162 master-0 kubenswrapper[28149]: I0313 13:13:11.628708 28149 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-e6fbd-default-external-api-0"] Mar 13 13:13:11.638162 master-0 kubenswrapper[28149]: I0313 13:13:11.630885 28149 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-e6fbd-default-external-config-data" Mar 13 13:13:11.667876 master-0 kubenswrapper[28149]: I0313 13:13:11.666496 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/100ea86f-4ced-4514-b314-8462958adf98-logs\") pod \"glance-e6fbd-default-external-api-0\" (UID: \"100ea86f-4ced-4514-b314-8462958adf98\") " pod="openstack/glance-e6fbd-default-external-api-0" Mar 13 13:13:11.667876 master-0 kubenswrapper[28149]: I0313 13:13:11.666594 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/100ea86f-4ced-4514-b314-8462958adf98-httpd-run\") pod \"glance-e6fbd-default-external-api-0\" (UID: \"100ea86f-4ced-4514-b314-8462958adf98\") " pod="openstack/glance-e6fbd-default-external-api-0" Mar 13 13:13:11.667876 master-0 kubenswrapper[28149]: I0313 13:13:11.666622 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/100ea86f-4ced-4514-b314-8462958adf98-public-tls-certs\") pod \"glance-e6fbd-default-external-api-0\" (UID: \"100ea86f-4ced-4514-b314-8462958adf98\") " pod="openstack/glance-e6fbd-default-external-api-0" Mar 13 13:13:11.667876 master-0 kubenswrapper[28149]: I0313 
13:13:11.666676 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/100ea86f-4ced-4514-b314-8462958adf98-config-data\") pod \"glance-e6fbd-default-external-api-0\" (UID: \"100ea86f-4ced-4514-b314-8462958adf98\") " pod="openstack/glance-e6fbd-default-external-api-0" Mar 13 13:13:11.667876 master-0 kubenswrapper[28149]: I0313 13:13:11.666926 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-12182b6b-d6bb-4e5f-ac3a-df190dba3645\" (UniqueName: \"kubernetes.io/csi/topolvm.io^46c3102e-7a0b-4e07-9a24-444142905798\") pod \"glance-e6fbd-default-external-api-0\" (UID: \"100ea86f-4ced-4514-b314-8462958adf98\") " pod="openstack/glance-e6fbd-default-external-api-0" Mar 13 13:13:11.667876 master-0 kubenswrapper[28149]: I0313 13:13:11.667030 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t6mch\" (UniqueName: \"kubernetes.io/projected/100ea86f-4ced-4514-b314-8462958adf98-kube-api-access-t6mch\") pod \"glance-e6fbd-default-external-api-0\" (UID: \"100ea86f-4ced-4514-b314-8462958adf98\") " pod="openstack/glance-e6fbd-default-external-api-0" Mar 13 13:13:11.667876 master-0 kubenswrapper[28149]: I0313 13:13:11.667069 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/100ea86f-4ced-4514-b314-8462958adf98-scripts\") pod \"glance-e6fbd-default-external-api-0\" (UID: \"100ea86f-4ced-4514-b314-8462958adf98\") " pod="openstack/glance-e6fbd-default-external-api-0" Mar 13 13:13:11.667876 master-0 kubenswrapper[28149]: I0313 13:13:11.667155 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/100ea86f-4ced-4514-b314-8462958adf98-combined-ca-bundle\") pod 
\"glance-e6fbd-default-external-api-0\" (UID: \"100ea86f-4ced-4514-b314-8462958adf98\") " pod="openstack/glance-e6fbd-default-external-api-0" Mar 13 13:13:11.770817 master-0 kubenswrapper[28149]: I0313 13:13:11.770742 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t6mch\" (UniqueName: \"kubernetes.io/projected/100ea86f-4ced-4514-b314-8462958adf98-kube-api-access-t6mch\") pod \"glance-e6fbd-default-external-api-0\" (UID: \"100ea86f-4ced-4514-b314-8462958adf98\") " pod="openstack/glance-e6fbd-default-external-api-0" Mar 13 13:13:11.771047 master-0 kubenswrapper[28149]: I0313 13:13:11.770850 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/100ea86f-4ced-4514-b314-8462958adf98-scripts\") pod \"glance-e6fbd-default-external-api-0\" (UID: \"100ea86f-4ced-4514-b314-8462958adf98\") " pod="openstack/glance-e6fbd-default-external-api-0" Mar 13 13:13:11.772367 master-0 kubenswrapper[28149]: I0313 13:13:11.771368 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/100ea86f-4ced-4514-b314-8462958adf98-combined-ca-bundle\") pod \"glance-e6fbd-default-external-api-0\" (UID: \"100ea86f-4ced-4514-b314-8462958adf98\") " pod="openstack/glance-e6fbd-default-external-api-0" Mar 13 13:13:11.772367 master-0 kubenswrapper[28149]: I0313 13:13:11.771750 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/100ea86f-4ced-4514-b314-8462958adf98-logs\") pod \"glance-e6fbd-default-external-api-0\" (UID: \"100ea86f-4ced-4514-b314-8462958adf98\") " pod="openstack/glance-e6fbd-default-external-api-0" Mar 13 13:13:11.772367 master-0 kubenswrapper[28149]: I0313 13:13:11.771882 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: 
\"kubernetes.io/empty-dir/100ea86f-4ced-4514-b314-8462958adf98-httpd-run\") pod \"glance-e6fbd-default-external-api-0\" (UID: \"100ea86f-4ced-4514-b314-8462958adf98\") " pod="openstack/glance-e6fbd-default-external-api-0" Mar 13 13:13:11.772367 master-0 kubenswrapper[28149]: I0313 13:13:11.771901 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/100ea86f-4ced-4514-b314-8462958adf98-public-tls-certs\") pod \"glance-e6fbd-default-external-api-0\" (UID: \"100ea86f-4ced-4514-b314-8462958adf98\") " pod="openstack/glance-e6fbd-default-external-api-0" Mar 13 13:13:11.772367 master-0 kubenswrapper[28149]: I0313 13:13:11.771973 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/100ea86f-4ced-4514-b314-8462958adf98-config-data\") pod \"glance-e6fbd-default-external-api-0\" (UID: \"100ea86f-4ced-4514-b314-8462958adf98\") " pod="openstack/glance-e6fbd-default-external-api-0" Mar 13 13:13:11.772579 master-0 kubenswrapper[28149]: I0313 13:13:11.772398 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/100ea86f-4ced-4514-b314-8462958adf98-logs\") pod \"glance-e6fbd-default-external-api-0\" (UID: \"100ea86f-4ced-4514-b314-8462958adf98\") " pod="openstack/glance-e6fbd-default-external-api-0" Mar 13 13:13:11.774226 master-0 kubenswrapper[28149]: I0313 13:13:11.772948 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/100ea86f-4ced-4514-b314-8462958adf98-httpd-run\") pod \"glance-e6fbd-default-external-api-0\" (UID: \"100ea86f-4ced-4514-b314-8462958adf98\") " pod="openstack/glance-e6fbd-default-external-api-0" Mar 13 13:13:11.774226 master-0 kubenswrapper[28149]: I0313 13:13:11.773391 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"pvc-12182b6b-d6bb-4e5f-ac3a-df190dba3645\" (UniqueName: \"kubernetes.io/csi/topolvm.io^46c3102e-7a0b-4e07-9a24-444142905798\") pod \"glance-e6fbd-default-external-api-0\" (UID: \"100ea86f-4ced-4514-b314-8462958adf98\") " pod="openstack/glance-e6fbd-default-external-api-0" Mar 13 13:13:11.776303 master-0 kubenswrapper[28149]: I0313 13:13:11.775084 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/100ea86f-4ced-4514-b314-8462958adf98-combined-ca-bundle\") pod \"glance-e6fbd-default-external-api-0\" (UID: \"100ea86f-4ced-4514-b314-8462958adf98\") " pod="openstack/glance-e6fbd-default-external-api-0" Mar 13 13:13:11.778310 master-0 kubenswrapper[28149]: I0313 13:13:11.778272 28149 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Mar 13 13:13:11.778466 master-0 kubenswrapper[28149]: I0313 13:13:11.778325 28149 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-12182b6b-d6bb-4e5f-ac3a-df190dba3645\" (UniqueName: \"kubernetes.io/csi/topolvm.io^46c3102e-7a0b-4e07-9a24-444142905798\") pod \"glance-e6fbd-default-external-api-0\" (UID: \"100ea86f-4ced-4514-b314-8462958adf98\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/topolvm.io/dd1664ebbf7aebe13570b4d7d33b7a2c8fb2cd6894f8d3c518cd1e549d5c6ec6/globalmount\"" pod="openstack/glance-e6fbd-default-external-api-0" Mar 13 13:13:11.778466 master-0 kubenswrapper[28149]: I0313 13:13:11.778404 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/100ea86f-4ced-4514-b314-8462958adf98-public-tls-certs\") pod \"glance-e6fbd-default-external-api-0\" (UID: \"100ea86f-4ced-4514-b314-8462958adf98\") " pod="openstack/glance-e6fbd-default-external-api-0" Mar 13 13:13:11.783107 master-0 kubenswrapper[28149]: I0313 13:13:11.782538 28149 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/100ea86f-4ced-4514-b314-8462958adf98-scripts\") pod \"glance-e6fbd-default-external-api-0\" (UID: \"100ea86f-4ced-4514-b314-8462958adf98\") " pod="openstack/glance-e6fbd-default-external-api-0" Mar 13 13:13:11.794265 master-0 kubenswrapper[28149]: I0313 13:13:11.794222 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t6mch\" (UniqueName: \"kubernetes.io/projected/100ea86f-4ced-4514-b314-8462958adf98-kube-api-access-t6mch\") pod \"glance-e6fbd-default-external-api-0\" (UID: \"100ea86f-4ced-4514-b314-8462958adf98\") " pod="openstack/glance-e6fbd-default-external-api-0" Mar 13 13:13:11.795853 master-0 kubenswrapper[28149]: I0313 13:13:11.795824 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/100ea86f-4ced-4514-b314-8462958adf98-config-data\") pod \"glance-e6fbd-default-external-api-0\" (UID: \"100ea86f-4ced-4514-b314-8462958adf98\") " pod="openstack/glance-e6fbd-default-external-api-0" Mar 13 13:13:12.642518 master-0 kubenswrapper[28149]: I0313 13:13:12.642448 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-12182b6b-d6bb-4e5f-ac3a-df190dba3645\" (UniqueName: \"kubernetes.io/csi/topolvm.io^46c3102e-7a0b-4e07-9a24-444142905798\") pod \"glance-e6fbd-default-external-api-0\" (UID: \"100ea86f-4ced-4514-b314-8462958adf98\") " pod="openstack/glance-e6fbd-default-external-api-0" Mar 13 13:13:12.709922 master-0 kubenswrapper[28149]: I0313 13:13:12.709460 28149 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f9e03bf1-b908-4148-8838-f54eaa369e6a" path="/var/lib/kubelet/pods/f9e03bf1-b908-4148-8838-f54eaa369e6a/volumes" Mar 13 13:13:12.883924 master-0 kubenswrapper[28149]: I0313 13:13:12.883846 28149 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-e6fbd-default-external-api-0" Mar 13 13:13:16.048076 master-0 kubenswrapper[28149]: I0313 13:13:16.048028 28149 scope.go:117] "RemoveContainer" containerID="b735b158ae0f1f81c167b2e3ec4bb07208ae9e3e1a523919c59da19d0ac89b38" Mar 13 13:13:16.109592 master-0 kubenswrapper[28149]: I0313 13:13:16.109532 28149 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-e6fbd-default-internal-api-0" Mar 13 13:13:16.223581 master-0 kubenswrapper[28149]: I0313 13:13:16.223509 28149 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/611eba2b-39d1-43b8-bdce-7b7c5436180c-logs\") pod \"611eba2b-39d1-43b8-bdce-7b7c5436180c\" (UID: \"611eba2b-39d1-43b8-bdce-7b7c5436180c\") " Mar 13 13:13:16.223927 master-0 kubenswrapper[28149]: I0313 13:13:16.223724 28149 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/611eba2b-39d1-43b8-bdce-7b7c5436180c-config-data\") pod \"611eba2b-39d1-43b8-bdce-7b7c5436180c\" (UID: \"611eba2b-39d1-43b8-bdce-7b7c5436180c\") " Mar 13 13:13:16.223927 master-0 kubenswrapper[28149]: I0313 13:13:16.223784 28149 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/611eba2b-39d1-43b8-bdce-7b7c5436180c-httpd-run\") pod \"611eba2b-39d1-43b8-bdce-7b7c5436180c\" (UID: \"611eba2b-39d1-43b8-bdce-7b7c5436180c\") " Mar 13 13:13:16.224023 master-0 kubenswrapper[28149]: I0313 13:13:16.223941 28149 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/csi/topolvm.io^eb822992-ec5e-49c3-b53d-b596568ce401\") pod \"611eba2b-39d1-43b8-bdce-7b7c5436180c\" (UID: \"611eba2b-39d1-43b8-bdce-7b7c5436180c\") " Mar 13 13:13:16.224058 master-0 kubenswrapper[28149]: I0313 13:13:16.224042 28149 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4dgb6\" (UniqueName: \"kubernetes.io/projected/611eba2b-39d1-43b8-bdce-7b7c5436180c-kube-api-access-4dgb6\") pod \"611eba2b-39d1-43b8-bdce-7b7c5436180c\" (UID: \"611eba2b-39d1-43b8-bdce-7b7c5436180c\") " Mar 13 13:13:16.224240 master-0 kubenswrapper[28149]: I0313 13:13:16.224116 28149 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/611eba2b-39d1-43b8-bdce-7b7c5436180c-combined-ca-bundle\") pod \"611eba2b-39d1-43b8-bdce-7b7c5436180c\" (UID: \"611eba2b-39d1-43b8-bdce-7b7c5436180c\") " Mar 13 13:13:16.224240 master-0 kubenswrapper[28149]: I0313 13:13:16.224163 28149 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/611eba2b-39d1-43b8-bdce-7b7c5436180c-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "611eba2b-39d1-43b8-bdce-7b7c5436180c" (UID: "611eba2b-39d1-43b8-bdce-7b7c5436180c"). InnerVolumeSpecName "httpd-run". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 13 13:13:16.224240 master-0 kubenswrapper[28149]: I0313 13:13:16.224235 28149 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/611eba2b-39d1-43b8-bdce-7b7c5436180c-internal-tls-certs\") pod \"611eba2b-39d1-43b8-bdce-7b7c5436180c\" (UID: \"611eba2b-39d1-43b8-bdce-7b7c5436180c\") " Mar 13 13:13:16.224687 master-0 kubenswrapper[28149]: I0313 13:13:16.224636 28149 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/611eba2b-39d1-43b8-bdce-7b7c5436180c-scripts\") pod \"611eba2b-39d1-43b8-bdce-7b7c5436180c\" (UID: \"611eba2b-39d1-43b8-bdce-7b7c5436180c\") " Mar 13 13:13:16.225468 master-0 kubenswrapper[28149]: I0313 13:13:16.225425 28149 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/611eba2b-39d1-43b8-bdce-7b7c5436180c-logs" (OuterVolumeSpecName: "logs") pod "611eba2b-39d1-43b8-bdce-7b7c5436180c" (UID: "611eba2b-39d1-43b8-bdce-7b7c5436180c"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 13 13:13:16.226259 master-0 kubenswrapper[28149]: I0313 13:13:16.226225 28149 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/611eba2b-39d1-43b8-bdce-7b7c5436180c-httpd-run\") on node \"master-0\" DevicePath \"\"" Mar 13 13:13:16.226337 master-0 kubenswrapper[28149]: I0313 13:13:16.226261 28149 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/611eba2b-39d1-43b8-bdce-7b7c5436180c-logs\") on node \"master-0\" DevicePath \"\"" Mar 13 13:13:16.227960 master-0 kubenswrapper[28149]: I0313 13:13:16.227910 28149 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/611eba2b-39d1-43b8-bdce-7b7c5436180c-kube-api-access-4dgb6" (OuterVolumeSpecName: "kube-api-access-4dgb6") pod "611eba2b-39d1-43b8-bdce-7b7c5436180c" (UID: "611eba2b-39d1-43b8-bdce-7b7c5436180c"). InnerVolumeSpecName "kube-api-access-4dgb6". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 13:13:16.231239 master-0 kubenswrapper[28149]: I0313 13:13:16.231020 28149 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/611eba2b-39d1-43b8-bdce-7b7c5436180c-scripts" (OuterVolumeSpecName: "scripts") pod "611eba2b-39d1-43b8-bdce-7b7c5436180c" (UID: "611eba2b-39d1-43b8-bdce-7b7c5436180c"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 13:13:16.252256 master-0 kubenswrapper[28149]: I0313 13:13:16.250187 28149 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/topolvm.io^eb822992-ec5e-49c3-b53d-b596568ce401" (OuterVolumeSpecName: "glance") pod "611eba2b-39d1-43b8-bdce-7b7c5436180c" (UID: "611eba2b-39d1-43b8-bdce-7b7c5436180c"). InnerVolumeSpecName "pvc-fbfb8b97-dcb8-43d8-a7ca-10f6eee24ec3". 
PluginName "kubernetes.io/csi", VolumeGidValue "" Mar 13 13:13:16.279056 master-0 kubenswrapper[28149]: I0313 13:13:16.278970 28149 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/611eba2b-39d1-43b8-bdce-7b7c5436180c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "611eba2b-39d1-43b8-bdce-7b7c5436180c" (UID: "611eba2b-39d1-43b8-bdce-7b7c5436180c"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 13:13:16.288879 master-0 kubenswrapper[28149]: I0313 13:13:16.288822 28149 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/611eba2b-39d1-43b8-bdce-7b7c5436180c-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "611eba2b-39d1-43b8-bdce-7b7c5436180c" (UID: "611eba2b-39d1-43b8-bdce-7b7c5436180c"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 13:13:16.310420 master-0 kubenswrapper[28149]: I0313 13:13:16.310304 28149 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/611eba2b-39d1-43b8-bdce-7b7c5436180c-config-data" (OuterVolumeSpecName: "config-data") pod "611eba2b-39d1-43b8-bdce-7b7c5436180c" (UID: "611eba2b-39d1-43b8-bdce-7b7c5436180c"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 13 13:13:16.331251 master-0 kubenswrapper[28149]: I0313 13:13:16.328893 28149 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/611eba2b-39d1-43b8-bdce-7b7c5436180c-config-data\") on node \"master-0\" DevicePath \"\""
Mar 13 13:13:16.331251 master-0 kubenswrapper[28149]: I0313 13:13:16.328956 28149 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-fbfb8b97-dcb8-43d8-a7ca-10f6eee24ec3\" (UniqueName: \"kubernetes.io/csi/topolvm.io^eb822992-ec5e-49c3-b53d-b596568ce401\") on node \"master-0\" "
Mar 13 13:13:16.331251 master-0 kubenswrapper[28149]: I0313 13:13:16.328972 28149 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4dgb6\" (UniqueName: \"kubernetes.io/projected/611eba2b-39d1-43b8-bdce-7b7c5436180c-kube-api-access-4dgb6\") on node \"master-0\" DevicePath \"\""
Mar 13 13:13:16.331251 master-0 kubenswrapper[28149]: I0313 13:13:16.328984 28149 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/611eba2b-39d1-43b8-bdce-7b7c5436180c-combined-ca-bundle\") on node \"master-0\" DevicePath \"\""
Mar 13 13:13:16.331251 master-0 kubenswrapper[28149]: I0313 13:13:16.328994 28149 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/611eba2b-39d1-43b8-bdce-7b7c5436180c-internal-tls-certs\") on node \"master-0\" DevicePath \"\""
Mar 13 13:13:16.331251 master-0 kubenswrapper[28149]: I0313 13:13:16.329001 28149 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/611eba2b-39d1-43b8-bdce-7b7c5436180c-scripts\") on node \"master-0\" DevicePath \"\""
Mar 13 13:13:16.355895 master-0 kubenswrapper[28149]: I0313 13:13:16.355818 28149 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice...
Mar 13 13:13:16.356194 master-0 kubenswrapper[28149]: I0313 13:13:16.356166 28149 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-fbfb8b97-dcb8-43d8-a7ca-10f6eee24ec3" (UniqueName: "kubernetes.io/csi/topolvm.io^eb822992-ec5e-49c3-b53d-b596568ce401") on node "master-0"
Mar 13 13:13:16.431635 master-0 kubenswrapper[28149]: I0313 13:13:16.431417 28149 reconciler_common.go:293] "Volume detached for volume \"pvc-fbfb8b97-dcb8-43d8-a7ca-10f6eee24ec3\" (UniqueName: \"kubernetes.io/csi/topolvm.io^eb822992-ec5e-49c3-b53d-b596568ce401\") on node \"master-0\" DevicePath \"\""
Mar 13 13:13:16.511706 master-0 kubenswrapper[28149]: I0313 13:13:16.511496 28149 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-e6fbd-default-internal-api-0"
Mar 13 13:13:16.511706 master-0 kubenswrapper[28149]: I0313 13:13:16.511475 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-e6fbd-default-internal-api-0" event={"ID":"611eba2b-39d1-43b8-bdce-7b7c5436180c","Type":"ContainerDied","Data":"c2864960ae9389fb085c5d1a6210d7d996524f4139d5a6368982290afff235ab"}
Mar 13 13:13:16.584606 master-0 kubenswrapper[28149]: I0313 13:13:16.584552 28149 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-e6fbd-default-internal-api-0"]
Mar 13 13:13:16.596062 master-0 kubenswrapper[28149]: I0313 13:13:16.596017 28149 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-e6fbd-default-internal-api-0"]
Mar 13 13:13:16.606362 master-0 kubenswrapper[28149]: I0313 13:13:16.606293 28149 scope.go:117] "RemoveContainer" containerID="7705e241a1083d4cd9858d6d7c541bec846e153fa8108212316ee24486559c75"
Mar 13 13:13:16.625326 master-0 kubenswrapper[28149]: I0313 13:13:16.623814 28149 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-e6fbd-default-internal-api-0"]
Mar 13 13:13:16.625326 master-0 kubenswrapper[28149]: E0313 13:13:16.625269 28149 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="611eba2b-39d1-43b8-bdce-7b7c5436180c" containerName="glance-log"
Mar 13 13:13:16.625326 master-0 kubenswrapper[28149]: I0313 13:13:16.625302 28149 state_mem.go:107] "Deleted CPUSet assignment" podUID="611eba2b-39d1-43b8-bdce-7b7c5436180c" containerName="glance-log"
Mar 13 13:13:16.626439 master-0 kubenswrapper[28149]: E0313 13:13:16.625340 28149 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="611eba2b-39d1-43b8-bdce-7b7c5436180c" containerName="glance-httpd"
Mar 13 13:13:16.626439 master-0 kubenswrapper[28149]: I0313 13:13:16.625349 28149 state_mem.go:107] "Deleted CPUSet assignment" podUID="611eba2b-39d1-43b8-bdce-7b7c5436180c" containerName="glance-httpd"
Mar 13 13:13:16.626439 master-0 kubenswrapper[28149]: I0313 13:13:16.625728 28149 memory_manager.go:354] "RemoveStaleState removing state" podUID="611eba2b-39d1-43b8-bdce-7b7c5436180c" containerName="glance-log"
Mar 13 13:13:16.626439 master-0 kubenswrapper[28149]: I0313 13:13:16.625764 28149 memory_manager.go:354] "RemoveStaleState removing state" podUID="611eba2b-39d1-43b8-bdce-7b7c5436180c" containerName="glance-httpd"
Mar 13 13:13:16.627481 master-0 kubenswrapper[28149]: I0313 13:13:16.627444 28149 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-e6fbd-default-internal-api-0"
Mar 13 13:13:16.632969 master-0 kubenswrapper[28149]: I0313 13:13:16.632797 28149 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-e6fbd-default-internal-config-data"
Mar 13 13:13:16.633242 master-0 kubenswrapper[28149]: I0313 13:13:16.633209 28149 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc"
Mar 13 13:13:16.643543 master-0 kubenswrapper[28149]: I0313 13:13:16.639534 28149 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-e6fbd-default-internal-api-0"]
Mar 13 13:13:16.680188 master-0 kubenswrapper[28149]: I0313 13:13:16.678763 28149 scope.go:117] "RemoveContainer" containerID="3605fb008c93b616a59c49c14ce99dd33736be58cdf499b88eb71ef7ba777d9a"
Mar 13 13:13:16.742556 master-0 kubenswrapper[28149]: I0313 13:13:16.742499 28149 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="611eba2b-39d1-43b8-bdce-7b7c5436180c" path="/var/lib/kubelet/pods/611eba2b-39d1-43b8-bdce-7b7c5436180c/volumes"
Mar 13 13:13:16.744784 master-0 kubenswrapper[28149]: I0313 13:13:16.744730 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/5e769dff-9929-4c69-9dbd-2dad8c16f675-internal-tls-certs\") pod \"glance-e6fbd-default-internal-api-0\" (UID: \"5e769dff-9929-4c69-9dbd-2dad8c16f675\") " pod="openstack/glance-e6fbd-default-internal-api-0"
Mar 13 13:13:16.744966 master-0 kubenswrapper[28149]: I0313 13:13:16.744925 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5e769dff-9929-4c69-9dbd-2dad8c16f675-logs\") pod \"glance-e6fbd-default-internal-api-0\" (UID: \"5e769dff-9929-4c69-9dbd-2dad8c16f675\") " pod="openstack/glance-e6fbd-default-internal-api-0"
Mar 13 13:13:16.745564 master-0 kubenswrapper[28149]: I0313 13:13:16.745496 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w8m7x\" (UniqueName: \"kubernetes.io/projected/5e769dff-9929-4c69-9dbd-2dad8c16f675-kube-api-access-w8m7x\") pod \"glance-e6fbd-default-internal-api-0\" (UID: \"5e769dff-9929-4c69-9dbd-2dad8c16f675\") " pod="openstack/glance-e6fbd-default-internal-api-0"
Mar 13 13:13:16.745718 master-0 kubenswrapper[28149]: I0313 13:13:16.745630 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5e769dff-9929-4c69-9dbd-2dad8c16f675-scripts\") pod \"glance-e6fbd-default-internal-api-0\" (UID: \"5e769dff-9929-4c69-9dbd-2dad8c16f675\") " pod="openstack/glance-e6fbd-default-internal-api-0"
Mar 13 13:13:16.752929 master-0 kubenswrapper[28149]: I0313 13:13:16.752850 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5e769dff-9929-4c69-9dbd-2dad8c16f675-combined-ca-bundle\") pod \"glance-e6fbd-default-internal-api-0\" (UID: \"5e769dff-9929-4c69-9dbd-2dad8c16f675\") " pod="openstack/glance-e6fbd-default-internal-api-0"
Mar 13 13:13:16.753355 master-0 kubenswrapper[28149]: I0313 13:13:16.752963 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/5e769dff-9929-4c69-9dbd-2dad8c16f675-httpd-run\") pod \"glance-e6fbd-default-internal-api-0\" (UID: \"5e769dff-9929-4c69-9dbd-2dad8c16f675\") " pod="openstack/glance-e6fbd-default-internal-api-0"
Mar 13 13:13:16.753355 master-0 kubenswrapper[28149]: I0313 13:13:16.753126 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5e769dff-9929-4c69-9dbd-2dad8c16f675-config-data\") pod \"glance-e6fbd-default-internal-api-0\" (UID: \"5e769dff-9929-4c69-9dbd-2dad8c16f675\") " pod="openstack/glance-e6fbd-default-internal-api-0"
Mar 13 13:13:16.753355 master-0 kubenswrapper[28149]: I0313 13:13:16.753223 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-fbfb8b97-dcb8-43d8-a7ca-10f6eee24ec3\" (UniqueName: \"kubernetes.io/csi/topolvm.io^eb822992-ec5e-49c3-b53d-b596568ce401\") pod \"glance-e6fbd-default-internal-api-0\" (UID: \"5e769dff-9929-4c69-9dbd-2dad8c16f675\") " pod="openstack/glance-e6fbd-default-internal-api-0"
Mar 13 13:13:16.856361 master-0 kubenswrapper[28149]: I0313 13:13:16.856295 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/5e769dff-9929-4c69-9dbd-2dad8c16f675-internal-tls-certs\") pod \"glance-e6fbd-default-internal-api-0\" (UID: \"5e769dff-9929-4c69-9dbd-2dad8c16f675\") " pod="openstack/glance-e6fbd-default-internal-api-0"
Mar 13 13:13:16.856641 master-0 kubenswrapper[28149]: I0313 13:13:16.856500 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5e769dff-9929-4c69-9dbd-2dad8c16f675-logs\") pod \"glance-e6fbd-default-internal-api-0\" (UID: \"5e769dff-9929-4c69-9dbd-2dad8c16f675\") " pod="openstack/glance-e6fbd-default-internal-api-0"
Mar 13 13:13:16.856722 master-0 kubenswrapper[28149]: I0313 13:13:16.856674 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w8m7x\" (UniqueName: \"kubernetes.io/projected/5e769dff-9929-4c69-9dbd-2dad8c16f675-kube-api-access-w8m7x\") pod \"glance-e6fbd-default-internal-api-0\" (UID: \"5e769dff-9929-4c69-9dbd-2dad8c16f675\") " pod="openstack/glance-e6fbd-default-internal-api-0"
Mar 13 13:13:16.856827 master-0 kubenswrapper[28149]: I0313 13:13:16.856759 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5e769dff-9929-4c69-9dbd-2dad8c16f675-scripts\") pod \"glance-e6fbd-default-internal-api-0\" (UID: \"5e769dff-9929-4c69-9dbd-2dad8c16f675\") " pod="openstack/glance-e6fbd-default-internal-api-0"
Mar 13 13:13:16.856949 master-0 kubenswrapper[28149]: I0313 13:13:16.856917 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5e769dff-9929-4c69-9dbd-2dad8c16f675-combined-ca-bundle\") pod \"glance-e6fbd-default-internal-api-0\" (UID: \"5e769dff-9929-4c69-9dbd-2dad8c16f675\") " pod="openstack/glance-e6fbd-default-internal-api-0"
Mar 13 13:13:16.857015 master-0 kubenswrapper[28149]: I0313 13:13:16.856995 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/5e769dff-9929-4c69-9dbd-2dad8c16f675-httpd-run\") pod \"glance-e6fbd-default-internal-api-0\" (UID: \"5e769dff-9929-4c69-9dbd-2dad8c16f675\") " pod="openstack/glance-e6fbd-default-internal-api-0"
Mar 13 13:13:16.857127 master-0 kubenswrapper[28149]: I0313 13:13:16.857068 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5e769dff-9929-4c69-9dbd-2dad8c16f675-config-data\") pod \"glance-e6fbd-default-internal-api-0\" (UID: \"5e769dff-9929-4c69-9dbd-2dad8c16f675\") " pod="openstack/glance-e6fbd-default-internal-api-0"
Mar 13 13:13:16.857600 master-0 kubenswrapper[28149]: I0313 13:13:16.857126 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-fbfb8b97-dcb8-43d8-a7ca-10f6eee24ec3\" (UniqueName: \"kubernetes.io/csi/topolvm.io^eb822992-ec5e-49c3-b53d-b596568ce401\") pod \"glance-e6fbd-default-internal-api-0\" (UID: \"5e769dff-9929-4c69-9dbd-2dad8c16f675\") " pod="openstack/glance-e6fbd-default-internal-api-0"
Mar 13 13:13:16.859590 master-0 kubenswrapper[28149]: I0313 13:13:16.859390 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5e769dff-9929-4c69-9dbd-2dad8c16f675-logs\") pod \"glance-e6fbd-default-internal-api-0\" (UID: \"5e769dff-9929-4c69-9dbd-2dad8c16f675\") " pod="openstack/glance-e6fbd-default-internal-api-0"
Mar 13 13:13:16.860350 master-0 kubenswrapper[28149]: I0313 13:13:16.860251 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/5e769dff-9929-4c69-9dbd-2dad8c16f675-httpd-run\") pod \"glance-e6fbd-default-internal-api-0\" (UID: \"5e769dff-9929-4c69-9dbd-2dad8c16f675\") " pod="openstack/glance-e6fbd-default-internal-api-0"
Mar 13 13:13:16.869385 master-0 kubenswrapper[28149]: I0313 13:13:16.863544 28149 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice...
Mar 13 13:13:16.869385 master-0 kubenswrapper[28149]: I0313 13:13:16.863593 28149 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-fbfb8b97-dcb8-43d8-a7ca-10f6eee24ec3\" (UniqueName: \"kubernetes.io/csi/topolvm.io^eb822992-ec5e-49c3-b53d-b596568ce401\") pod \"glance-e6fbd-default-internal-api-0\" (UID: \"5e769dff-9929-4c69-9dbd-2dad8c16f675\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/topolvm.io/820c05b4f9c429c0b1c354ead4f7cbf32abe82e5746431ea131598f6d233206f/globalmount\"" pod="openstack/glance-e6fbd-default-internal-api-0"
Mar 13 13:13:16.869385 master-0 kubenswrapper[28149]: I0313 13:13:16.865586 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5e769dff-9929-4c69-9dbd-2dad8c16f675-combined-ca-bundle\") pod \"glance-e6fbd-default-internal-api-0\" (UID: \"5e769dff-9929-4c69-9dbd-2dad8c16f675\") " pod="openstack/glance-e6fbd-default-internal-api-0"
Mar 13 13:13:16.869385 master-0 kubenswrapper[28149]: I0313 13:13:16.867907 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/5e769dff-9929-4c69-9dbd-2dad8c16f675-internal-tls-certs\") pod \"glance-e6fbd-default-internal-api-0\" (UID: \"5e769dff-9929-4c69-9dbd-2dad8c16f675\") " pod="openstack/glance-e6fbd-default-internal-api-0"
Mar 13 13:13:16.869385 master-0 kubenswrapper[28149]: I0313 13:13:16.868403 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5e769dff-9929-4c69-9dbd-2dad8c16f675-scripts\") pod \"glance-e6fbd-default-internal-api-0\" (UID: \"5e769dff-9929-4c69-9dbd-2dad8c16f675\") " pod="openstack/glance-e6fbd-default-internal-api-0"
Mar 13 13:13:16.869385 master-0 kubenswrapper[28149]: I0313 13:13:16.868724 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5e769dff-9929-4c69-9dbd-2dad8c16f675-config-data\") pod \"glance-e6fbd-default-internal-api-0\" (UID: \"5e769dff-9929-4c69-9dbd-2dad8c16f675\") " pod="openstack/glance-e6fbd-default-internal-api-0"
Mar 13 13:13:16.880946 master-0 kubenswrapper[28149]: I0313 13:13:16.880902 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w8m7x\" (UniqueName: \"kubernetes.io/projected/5e769dff-9929-4c69-9dbd-2dad8c16f675-kube-api-access-w8m7x\") pod \"glance-e6fbd-default-internal-api-0\" (UID: \"5e769dff-9929-4c69-9dbd-2dad8c16f675\") " pod="openstack/glance-e6fbd-default-internal-api-0"
Mar 13 13:13:17.188636 master-0 kubenswrapper[28149]: I0313 13:13:17.167497 28149 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ironic-inspector-0"]
Mar 13 13:13:17.363491 master-0 kubenswrapper[28149]: I0313 13:13:17.363415 28149 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-bfb994cb5-frl54"]
Mar 13 13:13:17.437959 master-0 kubenswrapper[28149]: I0313 13:13:17.437797 28149 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-e6fbd-default-external-api-0"]
Mar 13 13:13:17.440972 master-0 kubenswrapper[28149]: W0313 13:13:17.440852 28149 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod100ea86f_4ced_4514_b314_8462958adf98.slice/crio-ad01682164d94d71b3216ed472bf453db7b6bb95d5ec01709dcf1dc252dab528 WatchSource:0}: Error finding container ad01682164d94d71b3216ed472bf453db7b6bb95d5ec01709dcf1dc252dab528: Status 404 returned error can't find the container with id ad01682164d94d71b3216ed472bf453db7b6bb95d5ec01709dcf1dc252dab528
Mar 13 13:13:17.535419 master-0 kubenswrapper[28149]: I0313 13:13:17.534627 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-e6fbd-default-external-api-0" event={"ID":"100ea86f-4ced-4514-b314-8462958adf98","Type":"ContainerStarted","Data":"ad01682164d94d71b3216ed472bf453db7b6bb95d5ec01709dcf1dc252dab528"}
Mar 13 13:13:17.549358 master-0 kubenswrapper[28149]: I0313 13:13:17.549305 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-bfb994cb5-frl54" event={"ID":"a41644d0-10a5-4e06-8da5-15690e85b5a3","Type":"ContainerStarted","Data":"3028ae800f33dbca5618a2a92ffa52cb9ac6043d15feb37e8fe345bd8ddc3808"}
Mar 13 13:13:17.553947 master-0 kubenswrapper[28149]: I0313 13:13:17.553895 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-xbkk4" event={"ID":"05ef3745-1126-43ed-bc8b-f7be6477ff30","Type":"ContainerStarted","Data":"16f6f19db88f52b5f12bc43163f6123f34a41042b8b8ed2a748db89eb6839aee"}
Mar 13 13:13:17.556837 master-0 kubenswrapper[28149]: I0313 13:13:17.556765 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-inspector-0" event={"ID":"27f4e8a7-5136-4fbe-a689-4f2071d2480d","Type":"ContainerStarted","Data":"50fb821800b842f8a8d38d9ca5ccaedb17631d6115b22be4e3f580be713b1eb3"}
Mar 13 13:13:17.759167 master-0 kubenswrapper[28149]: I0313 13:13:17.759092 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-fbfb8b97-dcb8-43d8-a7ca-10f6eee24ec3\" (UniqueName: \"kubernetes.io/csi/topolvm.io^eb822992-ec5e-49c3-b53d-b596568ce401\") pod \"glance-e6fbd-default-internal-api-0\" (UID: \"5e769dff-9929-4c69-9dbd-2dad8c16f675\") " pod="openstack/glance-e6fbd-default-internal-api-0"
Mar 13 13:13:17.880098 master-0 kubenswrapper[28149]: I0313 13:13:17.880040 28149 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-e6fbd-default-internal-api-0"
Mar 13 13:13:18.577876 master-0 kubenswrapper[28149]: I0313 13:13:18.577759 28149 generic.go:334] "Generic (PLEG): container finished" podID="a41644d0-10a5-4e06-8da5-15690e85b5a3" containerID="dc8b4a5faa7f01895e44db8a6a56e24d21b2f0cd254b74ad984495d063ee75b0" exitCode=0
Mar 13 13:13:18.577876 master-0 kubenswrapper[28149]: I0313 13:13:18.577833 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-bfb994cb5-frl54" event={"ID":"a41644d0-10a5-4e06-8da5-15690e85b5a3","Type":"ContainerDied","Data":"dc8b4a5faa7f01895e44db8a6a56e24d21b2f0cd254b74ad984495d063ee75b0"}
Mar 13 13:13:18.584281 master-0 kubenswrapper[28149]: I0313 13:13:18.584203 28149 generic.go:334] "Generic (PLEG): container finished" podID="27f4e8a7-5136-4fbe-a689-4f2071d2480d" containerID="b3650536c94cf2caf7e21179cb4c07c19f4e1c42faea03907ec74e96bd81bfc5" exitCode=0
Mar 13 13:13:18.584417 master-0 kubenswrapper[28149]: I0313 13:13:18.584316 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-inspector-0" event={"ID":"27f4e8a7-5136-4fbe-a689-4f2071d2480d","Type":"ContainerDied","Data":"b3650536c94cf2caf7e21179cb4c07c19f4e1c42faea03907ec74e96bd81bfc5"}
Mar 13 13:13:18.590484 master-0 kubenswrapper[28149]: I0313 13:13:18.589863 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-e6fbd-default-external-api-0" event={"ID":"100ea86f-4ced-4514-b314-8462958adf98","Type":"ContainerStarted","Data":"8614a1a3ef21a827154651105d89f733711f127a208a03c76cf14ad0ede0c77f"}
Mar 13 13:13:18.593704 master-0 kubenswrapper[28149]: I0313 13:13:18.593640 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-conductor-0" event={"ID":"8fdaa161-cf3d-465a-8e70-c2af73f96711","Type":"ContainerStarted","Data":"76d8cdffdd21a95fb826b5a60cfc58f6f03d06eb91012ff4400ed7101ec56c05"}
Mar 13 13:13:18.678773 master-0 kubenswrapper[28149]: I0313 13:13:18.671421 28149 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-db-sync-xbkk4" podStartSLOduration=3.145903194 podStartE2EDuration="17.671395393s" podCreationTimestamp="2026-03-13 13:13:01 +0000 UTC" firstStartedPulling="2026-03-13 13:13:02.252282507 +0000 UTC m=+1155.905747666" lastFinishedPulling="2026-03-13 13:13:16.777774706 +0000 UTC m=+1170.431239865" observedRunningTime="2026-03-13 13:13:17.586240403 +0000 UTC m=+1171.239705682" watchObservedRunningTime="2026-03-13 13:13:18.671395393 +0000 UTC m=+1172.324860552"
Mar 13 13:13:18.759376 master-0 kubenswrapper[28149]: W0313 13:13:18.758438 28149 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5e769dff_9929_4c69_9dbd_2dad8c16f675.slice/crio-eb79c2cb72278b2a1e2916bbfeef5f9813ff5f7325b4503f559fb0c2be6843d3 WatchSource:0}: Error finding container eb79c2cb72278b2a1e2916bbfeef5f9813ff5f7325b4503f559fb0c2be6843d3: Status 404 returned error can't find the container with id eb79c2cb72278b2a1e2916bbfeef5f9813ff5f7325b4503f559fb0c2be6843d3
Mar 13 13:13:18.824171 master-0 kubenswrapper[28149]: I0313 13:13:18.817013 28149 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-e6fbd-default-internal-api-0"]
Mar 13 13:13:19.263550 master-0 kubenswrapper[28149]: I0313 13:13:19.263506 28149 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ironic-inspector-0"
Mar 13 13:13:19.427385 master-0 kubenswrapper[28149]: I0313 13:13:19.427331 28149 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-ironic-inspector-dhcp-hostsdir\" (UniqueName: \"kubernetes.io/empty-dir/27f4e8a7-5136-4fbe-a689-4f2071d2480d-var-lib-ironic-inspector-dhcp-hostsdir\") pod \"27f4e8a7-5136-4fbe-a689-4f2071d2480d\" (UID: \"27f4e8a7-5136-4fbe-a689-4f2071d2480d\") "
Mar 13 13:13:19.427584 master-0 kubenswrapper[28149]: I0313 13:13:19.427487 28149 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/27f4e8a7-5136-4fbe-a689-4f2071d2480d-scripts\") pod \"27f4e8a7-5136-4fbe-a689-4f2071d2480d\" (UID: \"27f4e8a7-5136-4fbe-a689-4f2071d2480d\") "
Mar 13 13:13:19.427655 master-0 kubenswrapper[28149]: I0313 13:13:19.427637 28149 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-ironic\" (UniqueName: \"kubernetes.io/empty-dir/27f4e8a7-5136-4fbe-a689-4f2071d2480d-var-lib-ironic\") pod \"27f4e8a7-5136-4fbe-a689-4f2071d2480d\" (UID: \"27f4e8a7-5136-4fbe-a689-4f2071d2480d\") "
Mar 13 13:13:19.428296 master-0 kubenswrapper[28149]: I0313 13:13:19.427708 28149 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/27f4e8a7-5136-4fbe-a689-4f2071d2480d-combined-ca-bundle\") pod \"27f4e8a7-5136-4fbe-a689-4f2071d2480d\" (UID: \"27f4e8a7-5136-4fbe-a689-4f2071d2480d\") "
Mar 13 13:13:19.428296 master-0 kubenswrapper[28149]: I0313 13:13:19.428235 28149 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/27f4e8a7-5136-4fbe-a689-4f2071d2480d-var-lib-ironic" (OuterVolumeSpecName: "var-lib-ironic") pod "27f4e8a7-5136-4fbe-a689-4f2071d2480d" (UID: "27f4e8a7-5136-4fbe-a689-4f2071d2480d"). InnerVolumeSpecName "var-lib-ironic". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 13 13:13:19.428406 master-0 kubenswrapper[28149]: I0313 13:13:19.428299 28149 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/27f4e8a7-5136-4fbe-a689-4f2071d2480d-config\") pod \"27f4e8a7-5136-4fbe-a689-4f2071d2480d\" (UID: \"27f4e8a7-5136-4fbe-a689-4f2071d2480d\") "
Mar 13 13:13:19.428440 master-0 kubenswrapper[28149]: I0313 13:13:19.428406 28149 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-65n62\" (UniqueName: \"kubernetes.io/projected/27f4e8a7-5136-4fbe-a689-4f2071d2480d-kube-api-access-65n62\") pod \"27f4e8a7-5136-4fbe-a689-4f2071d2480d\" (UID: \"27f4e8a7-5136-4fbe-a689-4f2071d2480d\") "
Mar 13 13:13:19.428888 master-0 kubenswrapper[28149]: I0313 13:13:19.428490 28149 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-podinfo\" (UniqueName: \"kubernetes.io/downward-api/27f4e8a7-5136-4fbe-a689-4f2071d2480d-etc-podinfo\") pod \"27f4e8a7-5136-4fbe-a689-4f2071d2480d\" (UID: \"27f4e8a7-5136-4fbe-a689-4f2071d2480d\") "
Mar 13 13:13:19.430064 master-0 kubenswrapper[28149]: I0313 13:13:19.428490 28149 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/27f4e8a7-5136-4fbe-a689-4f2071d2480d-var-lib-ironic-inspector-dhcp-hostsdir" (OuterVolumeSpecName: "var-lib-ironic-inspector-dhcp-hostsdir") pod "27f4e8a7-5136-4fbe-a689-4f2071d2480d" (UID: "27f4e8a7-5136-4fbe-a689-4f2071d2480d"). InnerVolumeSpecName "var-lib-ironic-inspector-dhcp-hostsdir". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 13 13:13:19.430064 master-0 kubenswrapper[28149]: I0313 13:13:19.430007 28149 reconciler_common.go:293] "Volume detached for volume \"var-lib-ironic-inspector-dhcp-hostsdir\" (UniqueName: \"kubernetes.io/empty-dir/27f4e8a7-5136-4fbe-a689-4f2071d2480d-var-lib-ironic-inspector-dhcp-hostsdir\") on node \"master-0\" DevicePath \"\""
Mar 13 13:13:19.430064 master-0 kubenswrapper[28149]: I0313 13:13:19.430033 28149 reconciler_common.go:293] "Volume detached for volume \"var-lib-ironic\" (UniqueName: \"kubernetes.io/empty-dir/27f4e8a7-5136-4fbe-a689-4f2071d2480d-var-lib-ironic\") on node \"master-0\" DevicePath \"\""
Mar 13 13:13:19.432192 master-0 kubenswrapper[28149]: I0313 13:13:19.431586 28149 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/27f4e8a7-5136-4fbe-a689-4f2071d2480d-config" (OuterVolumeSpecName: "config") pod "27f4e8a7-5136-4fbe-a689-4f2071d2480d" (UID: "27f4e8a7-5136-4fbe-a689-4f2071d2480d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 13 13:13:19.447387 master-0 kubenswrapper[28149]: I0313 13:13:19.432707 28149 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/27f4e8a7-5136-4fbe-a689-4f2071d2480d-scripts" (OuterVolumeSpecName: "scripts") pod "27f4e8a7-5136-4fbe-a689-4f2071d2480d" (UID: "27f4e8a7-5136-4fbe-a689-4f2071d2480d"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 13 13:13:19.447387 master-0 kubenswrapper[28149]: I0313 13:13:19.435426 28149 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/27f4e8a7-5136-4fbe-a689-4f2071d2480d-etc-podinfo" (OuterVolumeSpecName: "etc-podinfo") pod "27f4e8a7-5136-4fbe-a689-4f2071d2480d" (UID: "27f4e8a7-5136-4fbe-a689-4f2071d2480d"). InnerVolumeSpecName "etc-podinfo". PluginName "kubernetes.io/downward-api", VolumeGidValue ""
Mar 13 13:13:19.447387 master-0 kubenswrapper[28149]: I0313 13:13:19.435717 28149 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/27f4e8a7-5136-4fbe-a689-4f2071d2480d-kube-api-access-65n62" (OuterVolumeSpecName: "kube-api-access-65n62") pod "27f4e8a7-5136-4fbe-a689-4f2071d2480d" (UID: "27f4e8a7-5136-4fbe-a689-4f2071d2480d"). InnerVolumeSpecName "kube-api-access-65n62". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 13 13:13:19.460215 master-0 kubenswrapper[28149]: I0313 13:13:19.455549 28149 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/27f4e8a7-5136-4fbe-a689-4f2071d2480d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "27f4e8a7-5136-4fbe-a689-4f2071d2480d" (UID: "27f4e8a7-5136-4fbe-a689-4f2071d2480d"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 13 13:13:19.533394 master-0 kubenswrapper[28149]: I0313 13:13:19.533341 28149 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/27f4e8a7-5136-4fbe-a689-4f2071d2480d-scripts\") on node \"master-0\" DevicePath \"\""
Mar 13 13:13:19.533394 master-0 kubenswrapper[28149]: I0313 13:13:19.533387 28149 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/27f4e8a7-5136-4fbe-a689-4f2071d2480d-combined-ca-bundle\") on node \"master-0\" DevicePath \"\""
Mar 13 13:13:19.533394 master-0 kubenswrapper[28149]: I0313 13:13:19.533398 28149 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/27f4e8a7-5136-4fbe-a689-4f2071d2480d-config\") on node \"master-0\" DevicePath \"\""
Mar 13 13:13:19.533573 master-0 kubenswrapper[28149]: I0313 13:13:19.533410 28149 reconciler_common.go:293] "Volume detached for volume \"etc-podinfo\" (UniqueName: \"kubernetes.io/downward-api/27f4e8a7-5136-4fbe-a689-4f2071d2480d-etc-podinfo\") on node \"master-0\" DevicePath \"\""
Mar 13 13:13:19.533573 master-0 kubenswrapper[28149]: I0313 13:13:19.533421 28149 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-65n62\" (UniqueName: \"kubernetes.io/projected/27f4e8a7-5136-4fbe-a689-4f2071d2480d-kube-api-access-65n62\") on node \"master-0\" DevicePath \"\""
Mar 13 13:13:19.647502 master-0 kubenswrapper[28149]: I0313 13:13:19.646229 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-bfb994cb5-frl54" event={"ID":"a41644d0-10a5-4e06-8da5-15690e85b5a3","Type":"ContainerStarted","Data":"c5167e2ff12777fcacfa1b487a1191c2c2d458d61b55c98d4799c4ab3ac01275"}
Mar 13 13:13:19.647502 master-0 kubenswrapper[28149]: I0313 13:13:19.646562 28149 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-bfb994cb5-frl54"
Mar 13 13:13:19.650200 master-0 kubenswrapper[28149]: I0313 13:13:19.650160 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-inspector-0" event={"ID":"27f4e8a7-5136-4fbe-a689-4f2071d2480d","Type":"ContainerDied","Data":"50fb821800b842f8a8d38d9ca5ccaedb17631d6115b22be4e3f580be713b1eb3"}
Mar 13 13:13:19.650295 master-0 kubenswrapper[28149]: I0313 13:13:19.650214 28149 scope.go:117] "RemoveContainer" containerID="b3650536c94cf2caf7e21179cb4c07c19f4e1c42faea03907ec74e96bd81bfc5"
Mar 13 13:13:19.651688 master-0 kubenswrapper[28149]: I0313 13:13:19.650381 28149 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ironic-inspector-0"
Mar 13 13:13:19.656803 master-0 kubenswrapper[28149]: I0313 13:13:19.655390 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-e6fbd-default-internal-api-0" event={"ID":"5e769dff-9929-4c69-9dbd-2dad8c16f675","Type":"ContainerStarted","Data":"e7f5a277c63d4c6e014656f80a0f36b53b48e68cf07f0ec866fb004180b005ce"}
Mar 13 13:13:19.656803 master-0 kubenswrapper[28149]: I0313 13:13:19.655438 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-e6fbd-default-internal-api-0" event={"ID":"5e769dff-9929-4c69-9dbd-2dad8c16f675","Type":"ContainerStarted","Data":"eb79c2cb72278b2a1e2916bbfeef5f9813ff5f7325b4503f559fb0c2be6843d3"}
Mar 13 13:13:19.665087 master-0 kubenswrapper[28149]: I0313 13:13:19.664118 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-e6fbd-default-external-api-0" event={"ID":"100ea86f-4ced-4514-b314-8462958adf98","Type":"ContainerStarted","Data":"d33a7df8d2e3f964e451b1ebdd9bd0af8fed3cab7c2d8dbdd441d194c8313b6d"}
Mar 13 13:13:19.683594 master-0 kubenswrapper[28149]: I0313 13:13:19.683480 28149 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-bfb994cb5-frl54" podStartSLOduration=15.683457044 podStartE2EDuration="15.683457044s" podCreationTimestamp="2026-03-13 13:13:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 13:13:19.674454801 +0000 UTC m=+1173.327919970" watchObservedRunningTime="2026-03-13 13:13:19.683457044 +0000 UTC m=+1173.336922203"
Mar 13 13:13:19.860377 master-0 kubenswrapper[28149]: I0313 13:13:19.860324 28149 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ironic-inspector-0"]
Mar 13 13:13:19.949686 master-0 kubenswrapper[28149]: I0313 13:13:19.947713 28149 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ironic-inspector-0"]
Mar 13 13:13:19.968913 master-0 kubenswrapper[28149]: I0313 13:13:19.968822 28149 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-e6fbd-default-external-api-0" podStartSLOduration=8.968795769 podStartE2EDuration="8.968795769s" podCreationTimestamp="2026-03-13 13:13:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 13:13:19.878880367 +0000 UTC m=+1173.532345536" watchObservedRunningTime="2026-03-13 13:13:19.968795769 +0000 UTC m=+1173.622260938"
Mar 13 13:13:19.996378 master-0 kubenswrapper[28149]: I0313 13:13:19.996298 28149 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ironic-inspector-0"]
Mar 13 13:13:19.997017 master-0 kubenswrapper[28149]: E0313 13:13:19.996983 28149 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="27f4e8a7-5136-4fbe-a689-4f2071d2480d" containerName="ironic-python-agent-init"
Mar 13 13:13:19.997017 master-0 kubenswrapper[28149]: I0313 13:13:19.997011 28149 state_mem.go:107] "Deleted CPUSet assignment" podUID="27f4e8a7-5136-4fbe-a689-4f2071d2480d" containerName="ironic-python-agent-init"
Mar 13 13:13:19.997456 master-0 kubenswrapper[28149]: I0313 13:13:19.997384 28149 memory_manager.go:354] "RemoveStaleState removing state" podUID="27f4e8a7-5136-4fbe-a689-4f2071d2480d" containerName="ironic-python-agent-init"
Mar 13 13:13:20.003247 master-0 kubenswrapper[28149]: I0313 13:13:20.001467 28149 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ironic-inspector-0"
Mar 13 13:13:20.006050 master-0 kubenswrapper[28149]: I0313 13:13:20.005744 28149 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-transport-url-ironic-inspector-transport"
Mar 13 13:13:20.006050 master-0 kubenswrapper[28149]: I0313 13:13:20.005933 28149 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ironic-inspector-public-svc"
Mar 13 13:13:20.006239 master-0 kubenswrapper[28149]: I0313 13:13:20.006087 28149 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ironic-inspector-scripts"
Mar 13 13:13:20.006239 master-0 kubenswrapper[28149]: I0313 13:13:20.006233 28149 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ironic-inspector-config-data"
Mar 13 13:13:20.007745 master-0 kubenswrapper[28149]: I0313 13:13:20.006370 28149 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ironic-inspector-internal-svc"
Mar 13 13:13:20.012245 master-0 kubenswrapper[28149]: I0313 13:13:20.011938 28149 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ironic-inspector-0"]
Mar 13 13:13:20.147866 master-0 kubenswrapper[28149]: I0313 13:13:20.147792 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/5e5e1768-d77e-460a-996b-965dbb4e8920-internal-tls-certs\") pod \"ironic-inspector-0\" (UID: \"5e5e1768-d77e-460a-996b-965dbb4e8920\") " pod="openstack/ironic-inspector-0"
Mar 13 13:13:20.147866 master-0 kubenswrapper[28149]: I0313 13:13:20.147867 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/5e5e1768-d77e-460a-996b-965dbb4e8920-public-tls-certs\") pod \"ironic-inspector-0\" (UID: \"5e5e1768-d77e-460a-996b-965dbb4e8920\") " pod="openstack/ironic-inspector-0"
Mar 13 13:13:20.148256 master-0 kubenswrapper[28149]: I0313 13:13:20.147964 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ngg5k\" (UniqueName: \"kubernetes.io/projected/5e5e1768-d77e-460a-996b-965dbb4e8920-kube-api-access-ngg5k\") pod \"ironic-inspector-0\" (UID: \"5e5e1768-d77e-460a-996b-965dbb4e8920\") " pod="openstack/ironic-inspector-0"
Mar 13 13:13:20.148256 master-0 kubenswrapper[28149]: I0313 13:13:20.148035 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5e5e1768-d77e-460a-996b-965dbb4e8920-combined-ca-bundle\") pod \"ironic-inspector-0\" (UID: \"5e5e1768-d77e-460a-996b-965dbb4e8920\") " pod="openstack/ironic-inspector-0"
Mar 13 13:13:20.148360 master-0 kubenswrapper[28149]: I0313 13:13:20.148253 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-ironic\" (UniqueName: \"kubernetes.io/empty-dir/5e5e1768-d77e-460a-996b-965dbb4e8920-var-lib-ironic\") pod \"ironic-inspector-0\" (UID: \"5e5e1768-d77e-460a-996b-965dbb4e8920\") " pod="openstack/ironic-inspector-0"
Mar 13 13:13:20.148360 master-0 kubenswrapper[28149]: I0313 13:13:20.148314 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-ironic-inspector-dhcp-hostsdir\" (UniqueName: \"kubernetes.io/empty-dir/5e5e1768-d77e-460a-996b-965dbb4e8920-var-lib-ironic-inspector-dhcp-hostsdir\") pod \"ironic-inspector-0\" (UID: \"5e5e1768-d77e-460a-996b-965dbb4e8920\") " pod="openstack/ironic-inspector-0"
Mar 13 13:13:20.148497 master-0 kubenswrapper[28149]: I0313 13:13:20.148438 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-podinfo\" (UniqueName: \"kubernetes.io/downward-api/5e5e1768-d77e-460a-996b-965dbb4e8920-etc-podinfo\") pod \"ironic-inspector-0\" (UID: \"5e5e1768-d77e-460a-996b-965dbb4e8920\") " pod="openstack/ironic-inspector-0"
Mar 13 13:13:20.148497 master-0 kubenswrapper[28149]: I0313 13:13:20.148470 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5e5e1768-d77e-460a-996b-965dbb4e8920-scripts\") pod \"ironic-inspector-0\" (UID: \"5e5e1768-d77e-460a-996b-965dbb4e8920\") " pod="openstack/ironic-inspector-0"
Mar 13 13:13:20.148605 master-0 kubenswrapper[28149]: I0313 13:13:20.148549 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/5e5e1768-d77e-460a-996b-965dbb4e8920-config\") pod \"ironic-inspector-0\" (UID: \"5e5e1768-d77e-460a-996b-965dbb4e8920\") " pod="openstack/ironic-inspector-0"
Mar 13 13:13:20.250357 master-0 kubenswrapper[28149]: I0313 13:13:20.250285 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5e5e1768-d77e-460a-996b-965dbb4e8920-combined-ca-bundle\") pod \"ironic-inspector-0\" (UID: \"5e5e1768-d77e-460a-996b-965dbb4e8920\") " pod="openstack/ironic-inspector-0"
Mar 13 13:13:20.250603 master-0 kubenswrapper[28149]: I0313 13:13:20.250454 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-ironic\" (UniqueName: \"kubernetes.io/empty-dir/5e5e1768-d77e-460a-996b-965dbb4e8920-var-lib-ironic\") pod \"ironic-inspector-0\" (UID: \"5e5e1768-d77e-460a-996b-965dbb4e8920\") " pod="openstack/ironic-inspector-0"
Mar 13 13:13:20.250603 master-0 kubenswrapper[28149]: I0313 13:13:20.250497 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-ironic-inspector-dhcp-hostsdir\" (UniqueName: \"kubernetes.io/empty-dir/5e5e1768-d77e-460a-996b-965dbb4e8920-var-lib-ironic-inspector-dhcp-hostsdir\") pod \"ironic-inspector-0\" (UID: 
\"5e5e1768-d77e-460a-996b-965dbb4e8920\") " pod="openstack/ironic-inspector-0" Mar 13 13:13:20.250603 master-0 kubenswrapper[28149]: I0313 13:13:20.250567 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-podinfo\" (UniqueName: \"kubernetes.io/downward-api/5e5e1768-d77e-460a-996b-965dbb4e8920-etc-podinfo\") pod \"ironic-inspector-0\" (UID: \"5e5e1768-d77e-460a-996b-965dbb4e8920\") " pod="openstack/ironic-inspector-0" Mar 13 13:13:20.250603 master-0 kubenswrapper[28149]: I0313 13:13:20.250586 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5e5e1768-d77e-460a-996b-965dbb4e8920-scripts\") pod \"ironic-inspector-0\" (UID: \"5e5e1768-d77e-460a-996b-965dbb4e8920\") " pod="openstack/ironic-inspector-0" Mar 13 13:13:20.250949 master-0 kubenswrapper[28149]: I0313 13:13:20.250633 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/5e5e1768-d77e-460a-996b-965dbb4e8920-config\") pod \"ironic-inspector-0\" (UID: \"5e5e1768-d77e-460a-996b-965dbb4e8920\") " pod="openstack/ironic-inspector-0" Mar 13 13:13:20.250949 master-0 kubenswrapper[28149]: I0313 13:13:20.250663 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/5e5e1768-d77e-460a-996b-965dbb4e8920-internal-tls-certs\") pod \"ironic-inspector-0\" (UID: \"5e5e1768-d77e-460a-996b-965dbb4e8920\") " pod="openstack/ironic-inspector-0" Mar 13 13:13:20.250949 master-0 kubenswrapper[28149]: I0313 13:13:20.250693 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/5e5e1768-d77e-460a-996b-965dbb4e8920-public-tls-certs\") pod \"ironic-inspector-0\" (UID: \"5e5e1768-d77e-460a-996b-965dbb4e8920\") " pod="openstack/ironic-inspector-0" Mar 13 13:13:20.250949 master-0 
kubenswrapper[28149]: I0313 13:13:20.250750 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ngg5k\" (UniqueName: \"kubernetes.io/projected/5e5e1768-d77e-460a-996b-965dbb4e8920-kube-api-access-ngg5k\") pod \"ironic-inspector-0\" (UID: \"5e5e1768-d77e-460a-996b-965dbb4e8920\") " pod="openstack/ironic-inspector-0" Mar 13 13:13:20.252482 master-0 kubenswrapper[28149]: I0313 13:13:20.252453 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-ironic-inspector-dhcp-hostsdir\" (UniqueName: \"kubernetes.io/empty-dir/5e5e1768-d77e-460a-996b-965dbb4e8920-var-lib-ironic-inspector-dhcp-hostsdir\") pod \"ironic-inspector-0\" (UID: \"5e5e1768-d77e-460a-996b-965dbb4e8920\") " pod="openstack/ironic-inspector-0" Mar 13 13:13:20.252683 master-0 kubenswrapper[28149]: I0313 13:13:20.252661 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-ironic\" (UniqueName: \"kubernetes.io/empty-dir/5e5e1768-d77e-460a-996b-965dbb4e8920-var-lib-ironic\") pod \"ironic-inspector-0\" (UID: \"5e5e1768-d77e-460a-996b-965dbb4e8920\") " pod="openstack/ironic-inspector-0" Mar 13 13:13:20.255706 master-0 kubenswrapper[28149]: I0313 13:13:20.255676 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-podinfo\" (UniqueName: \"kubernetes.io/downward-api/5e5e1768-d77e-460a-996b-965dbb4e8920-etc-podinfo\") pod \"ironic-inspector-0\" (UID: \"5e5e1768-d77e-460a-996b-965dbb4e8920\") " pod="openstack/ironic-inspector-0" Mar 13 13:13:20.259280 master-0 kubenswrapper[28149]: I0313 13:13:20.259233 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/5e5e1768-d77e-460a-996b-965dbb4e8920-public-tls-certs\") pod \"ironic-inspector-0\" (UID: \"5e5e1768-d77e-460a-996b-965dbb4e8920\") " pod="openstack/ironic-inspector-0" Mar 13 13:13:20.263233 master-0 kubenswrapper[28149]: I0313 13:13:20.263198 28149 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5e5e1768-d77e-460a-996b-965dbb4e8920-scripts\") pod \"ironic-inspector-0\" (UID: \"5e5e1768-d77e-460a-996b-965dbb4e8920\") " pod="openstack/ironic-inspector-0" Mar 13 13:13:20.263694 master-0 kubenswrapper[28149]: I0313 13:13:20.263652 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5e5e1768-d77e-460a-996b-965dbb4e8920-combined-ca-bundle\") pod \"ironic-inspector-0\" (UID: \"5e5e1768-d77e-460a-996b-965dbb4e8920\") " pod="openstack/ironic-inspector-0" Mar 13 13:13:20.264885 master-0 kubenswrapper[28149]: I0313 13:13:20.264842 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/5e5e1768-d77e-460a-996b-965dbb4e8920-internal-tls-certs\") pod \"ironic-inspector-0\" (UID: \"5e5e1768-d77e-460a-996b-965dbb4e8920\") " pod="openstack/ironic-inspector-0" Mar 13 13:13:20.266761 master-0 kubenswrapper[28149]: I0313 13:13:20.266684 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/5e5e1768-d77e-460a-996b-965dbb4e8920-config\") pod \"ironic-inspector-0\" (UID: \"5e5e1768-d77e-460a-996b-965dbb4e8920\") " pod="openstack/ironic-inspector-0" Mar 13 13:13:20.277096 master-0 kubenswrapper[28149]: I0313 13:13:20.277043 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ngg5k\" (UniqueName: \"kubernetes.io/projected/5e5e1768-d77e-460a-996b-965dbb4e8920-kube-api-access-ngg5k\") pod \"ironic-inspector-0\" (UID: \"5e5e1768-d77e-460a-996b-965dbb4e8920\") " pod="openstack/ironic-inspector-0" Mar 13 13:13:20.374522 master-0 kubenswrapper[28149]: I0313 13:13:20.374455 28149 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ironic-inspector-0" Mar 13 13:13:20.775774 master-0 kubenswrapper[28149]: I0313 13:13:20.763888 28149 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="27f4e8a7-5136-4fbe-a689-4f2071d2480d" path="/var/lib/kubelet/pods/27f4e8a7-5136-4fbe-a689-4f2071d2480d/volumes" Mar 13 13:13:20.775774 master-0 kubenswrapper[28149]: I0313 13:13:20.764692 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-e6fbd-default-internal-api-0" event={"ID":"5e769dff-9929-4c69-9dbd-2dad8c16f675","Type":"ContainerStarted","Data":"30dcf52aa360331f70fbe468f3e5ea149f1bcefc5e972cdb974ee17268e4a9e3"} Mar 13 13:13:20.801501 master-0 kubenswrapper[28149]: I0313 13:13:20.801313 28149 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-e6fbd-default-internal-api-0" podStartSLOduration=4.801289954 podStartE2EDuration="4.801289954s" podCreationTimestamp="2026-03-13 13:13:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 13:13:20.789585289 +0000 UTC m=+1174.443050448" watchObservedRunningTime="2026-03-13 13:13:20.801289954 +0000 UTC m=+1174.454755113" Mar 13 13:13:21.339107 master-0 kubenswrapper[28149]: I0313 13:13:21.339054 28149 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ironic-inspector-0"] Mar 13 13:13:21.784857 master-0 kubenswrapper[28149]: I0313 13:13:21.784804 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-inspector-0" event={"ID":"5e5e1768-d77e-460a-996b-965dbb4e8920","Type":"ContainerStarted","Data":"2c9482cf5a12f6a23cb08a40367c8e6ca01a37c6c3c6cdee06eeeee271d7416a"} Mar 13 13:13:22.868171 master-0 kubenswrapper[28149]: I0313 13:13:22.861937 28149 generic.go:334] "Generic (PLEG): container finished" podID="5e5e1768-d77e-460a-996b-965dbb4e8920" containerID="6e91a192c44a3e3ebce13361e81f84805595ce511ce02d97e25f7fbe8bfa150e" 
exitCode=0 Mar 13 13:13:22.868171 master-0 kubenswrapper[28149]: I0313 13:13:22.862016 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-inspector-0" event={"ID":"5e5e1768-d77e-460a-996b-965dbb4e8920","Type":"ContainerDied","Data":"6e91a192c44a3e3ebce13361e81f84805595ce511ce02d97e25f7fbe8bfa150e"} Mar 13 13:13:22.896565 master-0 kubenswrapper[28149]: I0313 13:13:22.894195 28149 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-e6fbd-default-external-api-0" Mar 13 13:13:22.903286 master-0 kubenswrapper[28149]: I0313 13:13:22.897878 28149 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-e6fbd-default-external-api-0" Mar 13 13:13:22.977469 master-0 kubenswrapper[28149]: I0313 13:13:22.975056 28149 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-e6fbd-default-external-api-0" Mar 13 13:13:23.010165 master-0 kubenswrapper[28149]: I0313 13:13:23.007621 28149 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-e6fbd-default-external-api-0" Mar 13 13:13:24.006106 master-0 kubenswrapper[28149]: I0313 13:13:24.006055 28149 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-e6fbd-default-external-api-0" Mar 13 13:13:24.006716 master-0 kubenswrapper[28149]: I0313 13:13:24.006701 28149 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-e6fbd-default-external-api-0" Mar 13 13:13:24.554247 master-0 kubenswrapper[28149]: I0313 13:13:24.553633 28149 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-bfb994cb5-frl54" Mar 13 13:13:25.033024 master-0 kubenswrapper[28149]: I0313 13:13:25.031153 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-inspector-0" 
event={"ID":"5e5e1768-d77e-460a-996b-965dbb4e8920","Type":"ContainerStarted","Data":"183e25903db41c8a25644e610c7020fa51ce9a76493fd535fb38b2f3dedc87f6"} Mar 13 13:13:26.048212 master-0 kubenswrapper[28149]: I0313 13:13:26.044706 28149 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 13 13:13:26.048212 master-0 kubenswrapper[28149]: I0313 13:13:26.044738 28149 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 13 13:13:26.813375 master-0 kubenswrapper[28149]: I0313 13:13:26.813312 28149 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-67b494447c-js6kq"] Mar 13 13:13:26.814108 master-0 kubenswrapper[28149]: I0313 13:13:26.814039 28149 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-67b494447c-js6kq" podUID="e3d4527d-7eb8-488f-bcc7-8c6bd2be3351" containerName="dnsmasq-dns" containerID="cri-o://41e944b5a3a665882314566d7961c0aceb5f3ff1a1b5eccece67f417667863d7" gracePeriod=10 Mar 13 13:13:27.136381 master-0 kubenswrapper[28149]: I0313 13:13:27.136069 28149 generic.go:334] "Generic (PLEG): container finished" podID="5e5e1768-d77e-460a-996b-965dbb4e8920" containerID="183e25903db41c8a25644e610c7020fa51ce9a76493fd535fb38b2f3dedc87f6" exitCode=0 Mar 13 13:13:27.136381 master-0 kubenswrapper[28149]: I0313 13:13:27.136174 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-inspector-0" event={"ID":"5e5e1768-d77e-460a-996b-965dbb4e8920","Type":"ContainerDied","Data":"183e25903db41c8a25644e610c7020fa51ce9a76493fd535fb38b2f3dedc87f6"} Mar 13 13:13:27.882240 master-0 kubenswrapper[28149]: I0313 13:13:27.881596 28149 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-e6fbd-default-internal-api-0" Mar 13 13:13:27.882240 master-0 kubenswrapper[28149]: I0313 13:13:27.881677 28149 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" 
pod="openstack/glance-e6fbd-default-internal-api-0" Mar 13 13:13:28.213263 master-0 kubenswrapper[28149]: I0313 13:13:28.212422 28149 generic.go:334] "Generic (PLEG): container finished" podID="e3d4527d-7eb8-488f-bcc7-8c6bd2be3351" containerID="41e944b5a3a665882314566d7961c0aceb5f3ff1a1b5eccece67f417667863d7" exitCode=0 Mar 13 13:13:28.213263 master-0 kubenswrapper[28149]: I0313 13:13:28.212472 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-67b494447c-js6kq" event={"ID":"e3d4527d-7eb8-488f-bcc7-8c6bd2be3351","Type":"ContainerDied","Data":"41e944b5a3a665882314566d7961c0aceb5f3ff1a1b5eccece67f417667863d7"} Mar 13 13:13:28.280387 master-0 kubenswrapper[28149]: I0313 13:13:28.273459 28149 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-e6fbd-default-internal-api-0" Mar 13 13:13:28.280387 master-0 kubenswrapper[28149]: I0313 13:13:28.274734 28149 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-e6fbd-default-internal-api-0" Mar 13 13:13:28.311977 master-0 kubenswrapper[28149]: I0313 13:13:28.300456 28149 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-e6fbd-default-internal-api-0" Mar 13 13:13:28.445246 master-0 kubenswrapper[28149]: I0313 13:13:28.445037 28149 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-67b494447c-js6kq" Mar 13 13:13:28.799647 master-0 kubenswrapper[28149]: I0313 13:13:28.798351 28149 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vtcqf\" (UniqueName: \"kubernetes.io/projected/e3d4527d-7eb8-488f-bcc7-8c6bd2be3351-kube-api-access-vtcqf\") pod \"e3d4527d-7eb8-488f-bcc7-8c6bd2be3351\" (UID: \"e3d4527d-7eb8-488f-bcc7-8c6bd2be3351\") " Mar 13 13:13:28.799647 master-0 kubenswrapper[28149]: I0313 13:13:28.798477 28149 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/e3d4527d-7eb8-488f-bcc7-8c6bd2be3351-dns-swift-storage-0\") pod \"e3d4527d-7eb8-488f-bcc7-8c6bd2be3351\" (UID: \"e3d4527d-7eb8-488f-bcc7-8c6bd2be3351\") " Mar 13 13:13:28.799647 master-0 kubenswrapper[28149]: I0313 13:13:28.798637 28149 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e3d4527d-7eb8-488f-bcc7-8c6bd2be3351-ovsdbserver-sb\") pod \"e3d4527d-7eb8-488f-bcc7-8c6bd2be3351\" (UID: \"e3d4527d-7eb8-488f-bcc7-8c6bd2be3351\") " Mar 13 13:13:28.812433 master-0 kubenswrapper[28149]: I0313 13:13:28.811719 28149 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e3d4527d-7eb8-488f-bcc7-8c6bd2be3351-ovsdbserver-nb\") pod \"e3d4527d-7eb8-488f-bcc7-8c6bd2be3351\" (UID: \"e3d4527d-7eb8-488f-bcc7-8c6bd2be3351\") " Mar 13 13:13:28.812433 master-0 kubenswrapper[28149]: I0313 13:13:28.811880 28149 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e3d4527d-7eb8-488f-bcc7-8c6bd2be3351-config\") pod \"e3d4527d-7eb8-488f-bcc7-8c6bd2be3351\" (UID: \"e3d4527d-7eb8-488f-bcc7-8c6bd2be3351\") " Mar 13 13:13:28.812785 master-0 kubenswrapper[28149]: I0313 13:13:28.812629 28149 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e3d4527d-7eb8-488f-bcc7-8c6bd2be3351-dns-svc\") pod \"e3d4527d-7eb8-488f-bcc7-8c6bd2be3351\" (UID: \"e3d4527d-7eb8-488f-bcc7-8c6bd2be3351\") " Mar 13 13:13:28.819979 master-0 kubenswrapper[28149]: I0313 13:13:28.819921 28149 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e3d4527d-7eb8-488f-bcc7-8c6bd2be3351-kube-api-access-vtcqf" (OuterVolumeSpecName: "kube-api-access-vtcqf") pod "e3d4527d-7eb8-488f-bcc7-8c6bd2be3351" (UID: "e3d4527d-7eb8-488f-bcc7-8c6bd2be3351"). InnerVolumeSpecName "kube-api-access-vtcqf". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 13:13:28.890539 master-0 kubenswrapper[28149]: I0313 13:13:28.886463 28149 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e3d4527d-7eb8-488f-bcc7-8c6bd2be3351-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "e3d4527d-7eb8-488f-bcc7-8c6bd2be3351" (UID: "e3d4527d-7eb8-488f-bcc7-8c6bd2be3351"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 13 13:13:28.917442 master-0 kubenswrapper[28149]: I0313 13:13:28.917379 28149 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e3d4527d-7eb8-488f-bcc7-8c6bd2be3351-dns-svc\") on node \"master-0\" DevicePath \"\"" Mar 13 13:13:28.917811 master-0 kubenswrapper[28149]: I0313 13:13:28.917777 28149 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vtcqf\" (UniqueName: \"kubernetes.io/projected/e3d4527d-7eb8-488f-bcc7-8c6bd2be3351-kube-api-access-vtcqf\") on node \"master-0\" DevicePath \"\"" Mar 13 13:13:28.926303 master-0 kubenswrapper[28149]: I0313 13:13:28.925455 28149 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e3d4527d-7eb8-488f-bcc7-8c6bd2be3351-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "e3d4527d-7eb8-488f-bcc7-8c6bd2be3351" (UID: "e3d4527d-7eb8-488f-bcc7-8c6bd2be3351"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 13 13:13:28.931439 master-0 kubenswrapper[28149]: I0313 13:13:28.931352 28149 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e3d4527d-7eb8-488f-bcc7-8c6bd2be3351-config" (OuterVolumeSpecName: "config") pod "e3d4527d-7eb8-488f-bcc7-8c6bd2be3351" (UID: "e3d4527d-7eb8-488f-bcc7-8c6bd2be3351"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 13 13:13:28.940066 master-0 kubenswrapper[28149]: I0313 13:13:28.939989 28149 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e3d4527d-7eb8-488f-bcc7-8c6bd2be3351-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "e3d4527d-7eb8-488f-bcc7-8c6bd2be3351" (UID: "e3d4527d-7eb8-488f-bcc7-8c6bd2be3351"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 13 13:13:28.948663 master-0 kubenswrapper[28149]: I0313 13:13:28.948589 28149 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e3d4527d-7eb8-488f-bcc7-8c6bd2be3351-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "e3d4527d-7eb8-488f-bcc7-8c6bd2be3351" (UID: "e3d4527d-7eb8-488f-bcc7-8c6bd2be3351"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 13 13:13:29.020155 master-0 kubenswrapper[28149]: I0313 13:13:29.020055 28149 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/e3d4527d-7eb8-488f-bcc7-8c6bd2be3351-dns-swift-storage-0\") on node \"master-0\" DevicePath \"\"" Mar 13 13:13:29.020155 master-0 kubenswrapper[28149]: I0313 13:13:29.020118 28149 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e3d4527d-7eb8-488f-bcc7-8c6bd2be3351-ovsdbserver-sb\") on node \"master-0\" DevicePath \"\"" Mar 13 13:13:29.020155 master-0 kubenswrapper[28149]: I0313 13:13:29.020134 28149 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e3d4527d-7eb8-488f-bcc7-8c6bd2be3351-ovsdbserver-nb\") on node \"master-0\" DevicePath \"\"" Mar 13 13:13:29.020155 master-0 kubenswrapper[28149]: I0313 13:13:29.020170 28149 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e3d4527d-7eb8-488f-bcc7-8c6bd2be3351-config\") on node \"master-0\" DevicePath \"\"" Mar 13 13:13:29.232967 master-0 kubenswrapper[28149]: I0313 13:13:29.232901 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-67b494447c-js6kq" event={"ID":"e3d4527d-7eb8-488f-bcc7-8c6bd2be3351","Type":"ContainerDied","Data":"349a6d0dcd3e766f2c0ab69da701d19a276ae70393994f71f6734dc0179d3dec"} Mar 13 
13:13:29.232967 master-0 kubenswrapper[28149]: I0313 13:13:29.232944 28149 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-67b494447c-js6kq" Mar 13 13:13:29.233589 master-0 kubenswrapper[28149]: I0313 13:13:29.233017 28149 scope.go:117] "RemoveContainer" containerID="41e944b5a3a665882314566d7961c0aceb5f3ff1a1b5eccece67f417667863d7" Mar 13 13:13:29.241943 master-0 kubenswrapper[28149]: I0313 13:13:29.241888 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-inspector-0" event={"ID":"5e5e1768-d77e-460a-996b-965dbb4e8920","Type":"ContainerStarted","Data":"18295bbb37ae234400f4a50fe583bbbba4efd16d2ff3bcd434be5326847f5b34"} Mar 13 13:13:29.242131 master-0 kubenswrapper[28149]: I0313 13:13:29.241950 28149 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-e6fbd-default-internal-api-0" Mar 13 13:13:29.277329 master-0 kubenswrapper[28149]: I0313 13:13:29.277046 28149 scope.go:117] "RemoveContainer" containerID="c0c546ec0079f497909d97704c6b31fac8273a6fc1c7a904d8fa831d5f497489" Mar 13 13:13:30.119635 master-0 kubenswrapper[28149]: I0313 13:13:30.119458 28149 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-67b494447c-js6kq"] Mar 13 13:13:30.209162 master-0 kubenswrapper[28149]: I0313 13:13:30.208507 28149 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-67b494447c-js6kq"] Mar 13 13:13:30.254579 master-0 kubenswrapper[28149]: I0313 13:13:30.254436 28149 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 13 13:13:30.864631 master-0 kubenswrapper[28149]: I0313 13:13:30.864575 28149 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e3d4527d-7eb8-488f-bcc7-8c6bd2be3351" path="/var/lib/kubelet/pods/e3d4527d-7eb8-488f-bcc7-8c6bd2be3351/volumes" Mar 13 13:13:31.797378 master-0 kubenswrapper[28149]: I0313 13:13:31.797045 28149 prober_manager.go:312] "Failed to trigger 
a manual run" probe="Readiness" Mar 13 13:13:31.797378 master-0 kubenswrapper[28149]: I0313 13:13:31.797075 28149 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 13 13:13:31.798063 master-0 kubenswrapper[28149]: I0313 13:13:31.797890 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-inspector-0" event={"ID":"5e5e1768-d77e-460a-996b-965dbb4e8920","Type":"ContainerStarted","Data":"2ea0ecdc0e73b26cc9ff92978ea4b49162dce0f02e3facc85690fb3507cf7423"} Mar 13 13:13:32.600357 master-0 kubenswrapper[28149]: I0313 13:13:32.599626 28149 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-e6fbd-default-internal-api-0" Mar 13 13:13:32.707625 master-0 kubenswrapper[28149]: I0313 13:13:32.707428 28149 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-e6fbd-default-internal-api-0" Mar 13 13:13:32.886393 master-0 kubenswrapper[28149]: I0313 13:13:32.885821 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-inspector-0" event={"ID":"5e5e1768-d77e-460a-996b-965dbb4e8920","Type":"ContainerStarted","Data":"9dda44122e959be75654557c4ffeda8b7104cedaab02fa6b34554376a3501cc4"} Mar 13 13:13:32.886393 master-0 kubenswrapper[28149]: I0313 13:13:32.886297 28149 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-e6fbd-default-external-api-0" Mar 13 13:13:32.886393 master-0 kubenswrapper[28149]: I0313 13:13:32.886324 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-inspector-0" event={"ID":"5e5e1768-d77e-460a-996b-965dbb4e8920","Type":"ContainerStarted","Data":"b9174de50f718aeba2a20f855f7744cfb77d5d05ac1af1d3f3da57c514766e7c"} Mar 13 13:13:32.886393 master-0 kubenswrapper[28149]: I0313 13:13:32.886428 28149 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 13 13:13:33.491335 master-0 kubenswrapper[28149]: I0313 13:13:33.486185 28149 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-e6fbd-default-external-api-0"
Mar 13 13:13:34.925459 master-0 kubenswrapper[28149]: I0313 13:13:34.925351 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-inspector-0" event={"ID":"5e5e1768-d77e-460a-996b-965dbb4e8920","Type":"ContainerStarted","Data":"9ec5b01657ef3ea9307ed5b427f3cbb42b3136dc317e9795a3a0a0035be45180"}
Mar 13 13:13:34.926189 master-0 kubenswrapper[28149]: I0313 13:13:34.925654 28149 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ironic-inspector-0"
Mar 13 13:13:34.926189 master-0 kubenswrapper[28149]: I0313 13:13:34.925709 28149 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ironic-inspector-0"
Mar 13 13:13:34.984798 master-0 kubenswrapper[28149]: I0313 13:13:34.983847 28149 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ironic-inspector-0" podStartSLOduration=15.983770774 podStartE2EDuration="15.983770774s" podCreationTimestamp="2026-03-13 13:13:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 13:13:34.968313077 +0000 UTC m=+1188.621778256" watchObservedRunningTime="2026-03-13 13:13:34.983770774 +0000 UTC m=+1188.637235933"
Mar 13 13:13:35.383173 master-0 kubenswrapper[28149]: I0313 13:13:35.380783 28149 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ironic-inspector-0"
Mar 13 13:13:35.383173 master-0 kubenswrapper[28149]: I0313 13:13:35.380856 28149 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ironic-inspector-0"
Mar 13 13:13:36.965877 master-0 kubenswrapper[28149]: I0313 13:13:36.965819 28149 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ironic-inspector-0"
Mar 13 13:13:38.021048 master-0 kubenswrapper[28149]: I0313 13:13:38.020987 28149 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ironic-inspector-0"
Mar 13 13:13:40.380582 master-0 kubenswrapper[28149]: I0313 13:13:40.380509 28149 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ironic-inspector-0"
Mar 13 13:13:40.380582 master-0 kubenswrapper[28149]: I0313 13:13:40.380584 28149 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ironic-inspector-0"
Mar 13 13:13:40.425164 master-0 kubenswrapper[28149]: I0313 13:13:40.425084 28149 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ironic-inspector-0"
Mar 13 13:13:40.428241 master-0 kubenswrapper[28149]: I0313 13:13:40.427583 28149 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ironic-inspector-0"
Mar 13 13:13:41.019079 master-0 kubenswrapper[28149]: I0313 13:13:41.019013 28149 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ironic-inspector-0"
Mar 13 13:13:41.021624 master-0 kubenswrapper[28149]: I0313 13:13:41.021556 28149 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ironic-inspector-0"
Mar 13 13:13:48.145731 master-0 kubenswrapper[28149]: I0313 13:13:48.145640 28149 generic.go:334] "Generic (PLEG): container finished" podID="05ef3745-1126-43ed-bc8b-f7be6477ff30" containerID="16f6f19db88f52b5f12bc43163f6123f34a41042b8b8ed2a748db89eb6839aee" exitCode=0
Mar 13 13:13:48.145731 master-0 kubenswrapper[28149]: I0313 13:13:48.145723 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-xbkk4" event={"ID":"05ef3745-1126-43ed-bc8b-f7be6477ff30","Type":"ContainerDied","Data":"16f6f19db88f52b5f12bc43163f6123f34a41042b8b8ed2a748db89eb6839aee"}
Mar 13 13:13:49.644921 master-0 kubenswrapper[28149]: I0313 13:13:49.644871 28149 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-xbkk4"
Mar 13 13:13:49.828953 master-0 kubenswrapper[28149]: I0313 13:13:49.828833 28149 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/05ef3745-1126-43ed-bc8b-f7be6477ff30-config-data\") pod \"05ef3745-1126-43ed-bc8b-f7be6477ff30\" (UID: \"05ef3745-1126-43ed-bc8b-f7be6477ff30\") "
Mar 13 13:13:49.828953 master-0 kubenswrapper[28149]: I0313 13:13:49.828926 28149 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/05ef3745-1126-43ed-bc8b-f7be6477ff30-combined-ca-bundle\") pod \"05ef3745-1126-43ed-bc8b-f7be6477ff30\" (UID: \"05ef3745-1126-43ed-bc8b-f7be6477ff30\") "
Mar 13 13:13:49.829256 master-0 kubenswrapper[28149]: I0313 13:13:49.829085 28149 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b5b7z\" (UniqueName: \"kubernetes.io/projected/05ef3745-1126-43ed-bc8b-f7be6477ff30-kube-api-access-b5b7z\") pod \"05ef3745-1126-43ed-bc8b-f7be6477ff30\" (UID: \"05ef3745-1126-43ed-bc8b-f7be6477ff30\") "
Mar 13 13:13:49.829256 master-0 kubenswrapper[28149]: I0313 13:13:49.829244 28149 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/05ef3745-1126-43ed-bc8b-f7be6477ff30-scripts\") pod \"05ef3745-1126-43ed-bc8b-f7be6477ff30\" (UID: \"05ef3745-1126-43ed-bc8b-f7be6477ff30\") "
Mar 13 13:13:49.833384 master-0 kubenswrapper[28149]: I0313 13:13:49.833331 28149 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/05ef3745-1126-43ed-bc8b-f7be6477ff30-kube-api-access-b5b7z" (OuterVolumeSpecName: "kube-api-access-b5b7z") pod "05ef3745-1126-43ed-bc8b-f7be6477ff30" (UID: "05ef3745-1126-43ed-bc8b-f7be6477ff30"). InnerVolumeSpecName "kube-api-access-b5b7z". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 13 13:13:49.853988 master-0 kubenswrapper[28149]: I0313 13:13:49.853915 28149 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/05ef3745-1126-43ed-bc8b-f7be6477ff30-scripts" (OuterVolumeSpecName: "scripts") pod "05ef3745-1126-43ed-bc8b-f7be6477ff30" (UID: "05ef3745-1126-43ed-bc8b-f7be6477ff30"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 13 13:13:49.858514 master-0 kubenswrapper[28149]: I0313 13:13:49.858443 28149 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/05ef3745-1126-43ed-bc8b-f7be6477ff30-config-data" (OuterVolumeSpecName: "config-data") pod "05ef3745-1126-43ed-bc8b-f7be6477ff30" (UID: "05ef3745-1126-43ed-bc8b-f7be6477ff30"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 13 13:13:49.865904 master-0 kubenswrapper[28149]: I0313 13:13:49.865854 28149 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/05ef3745-1126-43ed-bc8b-f7be6477ff30-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "05ef3745-1126-43ed-bc8b-f7be6477ff30" (UID: "05ef3745-1126-43ed-bc8b-f7be6477ff30"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 13 13:13:49.980571 master-0 kubenswrapper[28149]: I0313 13:13:49.980506 28149 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/05ef3745-1126-43ed-bc8b-f7be6477ff30-scripts\") on node \"master-0\" DevicePath \"\""
Mar 13 13:13:49.980571 master-0 kubenswrapper[28149]: I0313 13:13:49.980560 28149 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/05ef3745-1126-43ed-bc8b-f7be6477ff30-config-data\") on node \"master-0\" DevicePath \"\""
Mar 13 13:13:49.980571 master-0 kubenswrapper[28149]: I0313 13:13:49.980577 28149 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/05ef3745-1126-43ed-bc8b-f7be6477ff30-combined-ca-bundle\") on node \"master-0\" DevicePath \"\""
Mar 13 13:13:49.980905 master-0 kubenswrapper[28149]: I0313 13:13:49.980591 28149 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b5b7z\" (UniqueName: \"kubernetes.io/projected/05ef3745-1126-43ed-bc8b-f7be6477ff30-kube-api-access-b5b7z\") on node \"master-0\" DevicePath \"\""
Mar 13 13:13:50.170810 master-0 kubenswrapper[28149]: I0313 13:13:50.170666 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-xbkk4" event={"ID":"05ef3745-1126-43ed-bc8b-f7be6477ff30","Type":"ContainerDied","Data":"3dbab30fa7253803bf3dff1411b0a21215f18ab494c947b858e33021243846cf"}
Mar 13 13:13:50.170810 master-0 kubenswrapper[28149]: I0313 13:13:50.170719 28149 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3dbab30fa7253803bf3dff1411b0a21215f18ab494c947b858e33021243846cf"
Mar 13 13:13:50.170810 master-0 kubenswrapper[28149]: I0313 13:13:50.170747 28149 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-xbkk4"
Mar 13 13:13:50.351652 master-0 kubenswrapper[28149]: I0313 13:13:50.350938 28149 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-0"]
Mar 13 13:13:50.351652 master-0 kubenswrapper[28149]: E0313 13:13:50.351659 28149 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e3d4527d-7eb8-488f-bcc7-8c6bd2be3351" containerName="init"
Mar 13 13:13:50.351956 master-0 kubenswrapper[28149]: I0313 13:13:50.351677 28149 state_mem.go:107] "Deleted CPUSet assignment" podUID="e3d4527d-7eb8-488f-bcc7-8c6bd2be3351" containerName="init"
Mar 13 13:13:50.351956 master-0 kubenswrapper[28149]: E0313 13:13:50.351750 28149 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="05ef3745-1126-43ed-bc8b-f7be6477ff30" containerName="nova-cell0-conductor-db-sync"
Mar 13 13:13:50.351956 master-0 kubenswrapper[28149]: I0313 13:13:50.351756 28149 state_mem.go:107] "Deleted CPUSet assignment" podUID="05ef3745-1126-43ed-bc8b-f7be6477ff30" containerName="nova-cell0-conductor-db-sync"
Mar 13 13:13:50.351956 master-0 kubenswrapper[28149]: E0313 13:13:50.351774 28149 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e3d4527d-7eb8-488f-bcc7-8c6bd2be3351" containerName="dnsmasq-dns"
Mar 13 13:13:50.351956 master-0 kubenswrapper[28149]: I0313 13:13:50.351780 28149 state_mem.go:107] "Deleted CPUSet assignment" podUID="e3d4527d-7eb8-488f-bcc7-8c6bd2be3351" containerName="dnsmasq-dns"
Mar 13 13:13:50.352157 master-0 kubenswrapper[28149]: I0313 13:13:50.352027 28149 memory_manager.go:354] "RemoveStaleState removing state" podUID="05ef3745-1126-43ed-bc8b-f7be6477ff30" containerName="nova-cell0-conductor-db-sync"
Mar 13 13:13:50.352157 master-0 kubenswrapper[28149]: I0313 13:13:50.352098 28149 memory_manager.go:354] "RemoveStaleState removing state" podUID="e3d4527d-7eb8-488f-bcc7-8c6bd2be3351" containerName="dnsmasq-dns"
Mar 13 13:13:50.353587 master-0 kubenswrapper[28149]: I0313 13:13:50.353001 28149 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0"
Mar 13 13:13:50.393149 master-0 kubenswrapper[28149]: I0313 13:13:50.380123 28149 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"]
Mar 13 13:13:50.393149 master-0 kubenswrapper[28149]: I0313 13:13:50.382418 28149 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data"
Mar 13 13:13:50.489967 master-0 kubenswrapper[28149]: I0313 13:13:50.489776 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7b54946b-2a71-4836-9cd3-962a2afb2746-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"7b54946b-2a71-4836-9cd3-962a2afb2746\") " pod="openstack/nova-cell0-conductor-0"
Mar 13 13:13:50.489967 master-0 kubenswrapper[28149]: I0313 13:13:50.489865 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7b54946b-2a71-4836-9cd3-962a2afb2746-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"7b54946b-2a71-4836-9cd3-962a2afb2746\") " pod="openstack/nova-cell0-conductor-0"
Mar 13 13:13:50.490266 master-0 kubenswrapper[28149]: I0313 13:13:50.490109 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ss78l\" (UniqueName: \"kubernetes.io/projected/7b54946b-2a71-4836-9cd3-962a2afb2746-kube-api-access-ss78l\") pod \"nova-cell0-conductor-0\" (UID: \"7b54946b-2a71-4836-9cd3-962a2afb2746\") " pod="openstack/nova-cell0-conductor-0"
Mar 13 13:13:50.593463 master-0 kubenswrapper[28149]: I0313 13:13:50.593321 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ss78l\" (UniqueName: \"kubernetes.io/projected/7b54946b-2a71-4836-9cd3-962a2afb2746-kube-api-access-ss78l\") pod \"nova-cell0-conductor-0\" (UID: \"7b54946b-2a71-4836-9cd3-962a2afb2746\") " pod="openstack/nova-cell0-conductor-0"
Mar 13 13:13:50.593763 master-0 kubenswrapper[28149]: I0313 13:13:50.593511 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7b54946b-2a71-4836-9cd3-962a2afb2746-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"7b54946b-2a71-4836-9cd3-962a2afb2746\") " pod="openstack/nova-cell0-conductor-0"
Mar 13 13:13:50.593763 master-0 kubenswrapper[28149]: I0313 13:13:50.593556 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7b54946b-2a71-4836-9cd3-962a2afb2746-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"7b54946b-2a71-4836-9cd3-962a2afb2746\") " pod="openstack/nova-cell0-conductor-0"
Mar 13 13:13:50.602953 master-0 kubenswrapper[28149]: I0313 13:13:50.602872 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7b54946b-2a71-4836-9cd3-962a2afb2746-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"7b54946b-2a71-4836-9cd3-962a2afb2746\") " pod="openstack/nova-cell0-conductor-0"
Mar 13 13:13:50.603293 master-0 kubenswrapper[28149]: I0313 13:13:50.603015 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7b54946b-2a71-4836-9cd3-962a2afb2746-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"7b54946b-2a71-4836-9cd3-962a2afb2746\") " pod="openstack/nova-cell0-conductor-0"
Mar 13 13:13:50.613588 master-0 kubenswrapper[28149]: I0313 13:13:50.613548 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ss78l\" (UniqueName: \"kubernetes.io/projected/7b54946b-2a71-4836-9cd3-962a2afb2746-kube-api-access-ss78l\") pod \"nova-cell0-conductor-0\" (UID: \"7b54946b-2a71-4836-9cd3-962a2afb2746\") " pod="openstack/nova-cell0-conductor-0"
Mar 13 13:13:50.685334 master-0 kubenswrapper[28149]: I0313 13:13:50.685271 28149 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0"
Mar 13 13:13:51.290437 master-0 kubenswrapper[28149]: W0313 13:13:51.290396 28149 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7b54946b_2a71_4836_9cd3_962a2afb2746.slice/crio-ea80c60c69922acef97885315cd1f4b45ec1c8b8d67ed919ea2c7876c08a151a WatchSource:0}: Error finding container ea80c60c69922acef97885315cd1f4b45ec1c8b8d67ed919ea2c7876c08a151a: Status 404 returned error can't find the container with id ea80c60c69922acef97885315cd1f4b45ec1c8b8d67ed919ea2c7876c08a151a
Mar 13 13:13:51.305268 master-0 kubenswrapper[28149]: I0313 13:13:51.305206 28149 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"]
Mar 13 13:13:52.242306 master-0 kubenswrapper[28149]: I0313 13:13:52.242253 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"7b54946b-2a71-4836-9cd3-962a2afb2746","Type":"ContainerStarted","Data":"16db6dda8c94fbba06d83291ccdbde0c3c13028797be2e62a03da22cbdd379c8"}
Mar 13 13:13:52.242968 master-0 kubenswrapper[28149]: I0313 13:13:52.242949 28149 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell0-conductor-0"
Mar 13 13:13:52.243073 master-0 kubenswrapper[28149]: I0313 13:13:52.243057 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"7b54946b-2a71-4836-9cd3-962a2afb2746","Type":"ContainerStarted","Data":"ea80c60c69922acef97885315cd1f4b45ec1c8b8d67ed919ea2c7876c08a151a"}
Mar 13 13:13:52.273922 master-0 kubenswrapper[28149]: I0313 13:13:52.273800 28149 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-0" podStartSLOduration=2.273776618 podStartE2EDuration="2.273776618s" podCreationTimestamp="2026-03-13 13:13:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 13:13:52.260568523 +0000 UTC m=+1205.914033682" watchObservedRunningTime="2026-03-13 13:13:52.273776618 +0000 UTC m=+1205.927241777"
Mar 13 13:14:00.736468 master-0 kubenswrapper[28149]: I0313 13:14:00.736262 28149 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell0-conductor-0"
Mar 13 13:14:01.411168 master-0 kubenswrapper[28149]: I0313 13:14:01.408910 28149 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-cell-mapping-vm54v"]
Mar 13 13:14:01.411168 master-0 kubenswrapper[28149]: I0313 13:14:01.410847 28149 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-vm54v"
Mar 13 13:14:01.416684 master-0 kubenswrapper[28149]: I0313 13:14:01.413812 28149 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-config-data"
Mar 13 13:14:01.416684 master-0 kubenswrapper[28149]: I0313 13:14:01.414230 28149 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-scripts"
Mar 13 13:14:01.446893 master-0 kubenswrapper[28149]: I0313 13:14:01.446411 28149 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-vm54v"]
Mar 13 13:14:01.777651 master-0 kubenswrapper[28149]: I0313 13:14:01.777209 28149 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-compute-ironic-compute-0"]
Mar 13 13:14:01.782200 master-0 kubenswrapper[28149]: I0313 13:14:01.779378 28149 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-compute-ironic-compute-0"
Mar 13 13:14:01.787599 master-0 kubenswrapper[28149]: I0313 13:14:01.787554 28149 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-compute-ironic-compute-config-data"
Mar 13 13:14:01.793030 master-0 kubenswrapper[28149]: I0313 13:14:01.792393 28149 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-compute-ironic-compute-0"]
Mar 13 13:14:01.846172 master-0 kubenswrapper[28149]: I0313 13:14:01.832720 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f6c1cc46-fe3f-4495-be1d-5324d25d39ae-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-vm54v\" (UID: \"f6c1cc46-fe3f-4495-be1d-5324d25d39ae\") " pod="openstack/nova-cell0-cell-mapping-vm54v"
Mar 13 13:14:01.846172 master-0 kubenswrapper[28149]: I0313 13:14:01.832839 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ghwwx\" (UniqueName: \"kubernetes.io/projected/f6c1cc46-fe3f-4495-be1d-5324d25d39ae-kube-api-access-ghwwx\") pod \"nova-cell0-cell-mapping-vm54v\" (UID: \"f6c1cc46-fe3f-4495-be1d-5324d25d39ae\") " pod="openstack/nova-cell0-cell-mapping-vm54v"
Mar 13 13:14:01.846172 master-0 kubenswrapper[28149]: I0313 13:14:01.832873 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f6c1cc46-fe3f-4495-be1d-5324d25d39ae-scripts\") pod \"nova-cell0-cell-mapping-vm54v\" (UID: \"f6c1cc46-fe3f-4495-be1d-5324d25d39ae\") " pod="openstack/nova-cell0-cell-mapping-vm54v"
Mar 13 13:14:01.846172 master-0 kubenswrapper[28149]: I0313 13:14:01.832917 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f6c1cc46-fe3f-4495-be1d-5324d25d39ae-config-data\") pod \"nova-cell0-cell-mapping-vm54v\" (UID: \"f6c1cc46-fe3f-4495-be1d-5324d25d39ae\") " pod="openstack/nova-cell0-cell-mapping-vm54v"
Mar 13 13:14:01.934545 master-0 kubenswrapper[28149]: I0313 13:14:01.934492 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f6c1cc46-fe3f-4495-be1d-5324d25d39ae-config-data\") pod \"nova-cell0-cell-mapping-vm54v\" (UID: \"f6c1cc46-fe3f-4495-be1d-5324d25d39ae\") " pod="openstack/nova-cell0-cell-mapping-vm54v"
Mar 13 13:14:01.934545 master-0 kubenswrapper[28149]: I0313 13:14:01.934551 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/66e0edeb-8711-49d6-9096-2b3f01751b4b-config-data\") pod \"nova-cell1-compute-ironic-compute-0\" (UID: \"66e0edeb-8711-49d6-9096-2b3f01751b4b\") " pod="openstack/nova-cell1-compute-ironic-compute-0"
Mar 13 13:14:01.934824 master-0 kubenswrapper[28149]: I0313 13:14:01.934661 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/66e0edeb-8711-49d6-9096-2b3f01751b4b-combined-ca-bundle\") pod \"nova-cell1-compute-ironic-compute-0\" (UID: \"66e0edeb-8711-49d6-9096-2b3f01751b4b\") " pod="openstack/nova-cell1-compute-ironic-compute-0"
Mar 13 13:14:01.934824 master-0 kubenswrapper[28149]: I0313 13:14:01.934689 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4f9jj\" (UniqueName: \"kubernetes.io/projected/66e0edeb-8711-49d6-9096-2b3f01751b4b-kube-api-access-4f9jj\") pod \"nova-cell1-compute-ironic-compute-0\" (UID: \"66e0edeb-8711-49d6-9096-2b3f01751b4b\") " pod="openstack/nova-cell1-compute-ironic-compute-0"
Mar 13 13:14:01.934824 master-0 kubenswrapper[28149]: I0313 13:14:01.934776 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f6c1cc46-fe3f-4495-be1d-5324d25d39ae-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-vm54v\" (UID: \"f6c1cc46-fe3f-4495-be1d-5324d25d39ae\") " pod="openstack/nova-cell0-cell-mapping-vm54v"
Mar 13 13:14:01.935051 master-0 kubenswrapper[28149]: I0313 13:14:01.934946 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ghwwx\" (UniqueName: \"kubernetes.io/projected/f6c1cc46-fe3f-4495-be1d-5324d25d39ae-kube-api-access-ghwwx\") pod \"nova-cell0-cell-mapping-vm54v\" (UID: \"f6c1cc46-fe3f-4495-be1d-5324d25d39ae\") " pod="openstack/nova-cell0-cell-mapping-vm54v"
Mar 13 13:14:01.935303 master-0 kubenswrapper[28149]: I0313 13:14:01.935268 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f6c1cc46-fe3f-4495-be1d-5324d25d39ae-scripts\") pod \"nova-cell0-cell-mapping-vm54v\" (UID: \"f6c1cc46-fe3f-4495-be1d-5324d25d39ae\") " pod="openstack/nova-cell0-cell-mapping-vm54v"
Mar 13 13:14:01.943257 master-0 kubenswrapper[28149]: I0313 13:14:01.942953 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f6c1cc46-fe3f-4495-be1d-5324d25d39ae-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-vm54v\" (UID: \"f6c1cc46-fe3f-4495-be1d-5324d25d39ae\") " pod="openstack/nova-cell0-cell-mapping-vm54v"
Mar 13 13:14:01.943257 master-0 kubenswrapper[28149]: I0313 13:14:01.943184 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f6c1cc46-fe3f-4495-be1d-5324d25d39ae-config-data\") pod \"nova-cell0-cell-mapping-vm54v\" (UID: \"f6c1cc46-fe3f-4495-be1d-5324d25d39ae\") " pod="openstack/nova-cell0-cell-mapping-vm54v"
Mar 13 13:14:01.943534 master-0 kubenswrapper[28149]: I0313 13:14:01.943381 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f6c1cc46-fe3f-4495-be1d-5324d25d39ae-scripts\") pod \"nova-cell0-cell-mapping-vm54v\" (UID: \"f6c1cc46-fe3f-4495-be1d-5324d25d39ae\") " pod="openstack/nova-cell0-cell-mapping-vm54v"
Mar 13 13:14:01.959131 master-0 kubenswrapper[28149]: I0313 13:14:01.958923 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ghwwx\" (UniqueName: \"kubernetes.io/projected/f6c1cc46-fe3f-4495-be1d-5324d25d39ae-kube-api-access-ghwwx\") pod \"nova-cell0-cell-mapping-vm54v\" (UID: \"f6c1cc46-fe3f-4495-be1d-5324d25d39ae\") " pod="openstack/nova-cell0-cell-mapping-vm54v"
Mar 13 13:14:02.039648 master-0 kubenswrapper[28149]: I0313 13:14:02.038180 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/66e0edeb-8711-49d6-9096-2b3f01751b4b-config-data\") pod \"nova-cell1-compute-ironic-compute-0\" (UID: \"66e0edeb-8711-49d6-9096-2b3f01751b4b\") " pod="openstack/nova-cell1-compute-ironic-compute-0"
Mar 13 13:14:02.039648 master-0 kubenswrapper[28149]: I0313 13:14:02.038346 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/66e0edeb-8711-49d6-9096-2b3f01751b4b-combined-ca-bundle\") pod \"nova-cell1-compute-ironic-compute-0\" (UID: \"66e0edeb-8711-49d6-9096-2b3f01751b4b\") " pod="openstack/nova-cell1-compute-ironic-compute-0"
Mar 13 13:14:02.039648 master-0 kubenswrapper[28149]: I0313 13:14:02.038392 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4f9jj\" (UniqueName: \"kubernetes.io/projected/66e0edeb-8711-49d6-9096-2b3f01751b4b-kube-api-access-4f9jj\") pod \"nova-cell1-compute-ironic-compute-0\" (UID: \"66e0edeb-8711-49d6-9096-2b3f01751b4b\") " pod="openstack/nova-cell1-compute-ironic-compute-0"
Mar 13 13:14:02.039648 master-0 kubenswrapper[28149]: I0313 13:14:02.039281 28149 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-vm54v"
Mar 13 13:14:02.053166 master-0 kubenswrapper[28149]: I0313 13:14:02.049798 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/66e0edeb-8711-49d6-9096-2b3f01751b4b-combined-ca-bundle\") pod \"nova-cell1-compute-ironic-compute-0\" (UID: \"66e0edeb-8711-49d6-9096-2b3f01751b4b\") " pod="openstack/nova-cell1-compute-ironic-compute-0"
Mar 13 13:14:02.069357 master-0 kubenswrapper[28149]: I0313 13:14:02.069194 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/66e0edeb-8711-49d6-9096-2b3f01751b4b-config-data\") pod \"nova-cell1-compute-ironic-compute-0\" (UID: \"66e0edeb-8711-49d6-9096-2b3f01751b4b\") " pod="openstack/nova-cell1-compute-ironic-compute-0"
Mar 13 13:14:02.070742 master-0 kubenswrapper[28149]: I0313 13:14:02.070698 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4f9jj\" (UniqueName: \"kubernetes.io/projected/66e0edeb-8711-49d6-9096-2b3f01751b4b-kube-api-access-4f9jj\") pod \"nova-cell1-compute-ironic-compute-0\" (UID: \"66e0edeb-8711-49d6-9096-2b3f01751b4b\") " pod="openstack/nova-cell1-compute-ironic-compute-0"
Mar 13 13:14:02.309826 master-0 kubenswrapper[28149]: I0313 13:14:02.309688 28149 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-compute-ironic-compute-0"
Mar 13 13:14:02.314683 master-0 kubenswrapper[28149]: I0313 13:14:02.314650 28149 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"]
Mar 13 13:14:02.319912 master-0 kubenswrapper[28149]: I0313 13:14:02.319880 28149 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Mar 13 13:14:02.332056 master-0 kubenswrapper[28149]: I0313 13:14:02.332002 28149 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data"
Mar 13 13:14:02.453431 master-0 kubenswrapper[28149]: I0313 13:14:02.453255 28149 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"]
Mar 13 13:14:02.455704 master-0 kubenswrapper[28149]: I0313 13:14:02.455662 28149 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0"
Mar 13 13:14:02.475877 master-0 kubenswrapper[28149]: I0313 13:14:02.471958 28149 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data"
Mar 13 13:14:02.491749 master-0 kubenswrapper[28149]: I0313 13:14:02.491525 28149 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"]
Mar 13 13:14:02.530233 master-0 kubenswrapper[28149]: I0313 13:14:02.515399 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rkfbz\" (UniqueName: \"kubernetes.io/projected/ec7cd004-beaf-4191-bb66-90f3adf2c8b5-kube-api-access-rkfbz\") pod \"nova-api-0\" (UID: \"ec7cd004-beaf-4191-bb66-90f3adf2c8b5\") " pod="openstack/nova-api-0"
Mar 13 13:14:02.530233 master-0 kubenswrapper[28149]: I0313 13:14:02.515485 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ec7cd004-beaf-4191-bb66-90f3adf2c8b5-logs\") pod \"nova-api-0\" (UID: \"ec7cd004-beaf-4191-bb66-90f3adf2c8b5\") " pod="openstack/nova-api-0"
Mar 13 13:14:02.530233 master-0 kubenswrapper[28149]: I0313 13:14:02.515554 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ec7cd004-beaf-4191-bb66-90f3adf2c8b5-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"ec7cd004-beaf-4191-bb66-90f3adf2c8b5\") " pod="openstack/nova-api-0"
Mar 13 13:14:02.530233 master-0 kubenswrapper[28149]: I0313 13:14:02.515585 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ec7cd004-beaf-4191-bb66-90f3adf2c8b5-config-data\") pod \"nova-api-0\" (UID: \"ec7cd004-beaf-4191-bb66-90f3adf2c8b5\") " pod="openstack/nova-api-0"
Mar 13 13:14:02.563368 master-0 kubenswrapper[28149]: I0313 13:14:02.563206 28149 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"]
Mar 13 13:14:02.623167 master-0 kubenswrapper[28149]: I0313 13:14:02.617696 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ec7cd004-beaf-4191-bb66-90f3adf2c8b5-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"ec7cd004-beaf-4191-bb66-90f3adf2c8b5\") " pod="openstack/nova-api-0"
Mar 13 13:14:02.623167 master-0 kubenswrapper[28149]: I0313 13:14:02.617756 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ec7cd004-beaf-4191-bb66-90f3adf2c8b5-config-data\") pod \"nova-api-0\" (UID: \"ec7cd004-beaf-4191-bb66-90f3adf2c8b5\") " pod="openstack/nova-api-0"
Mar 13 13:14:02.623167 master-0 kubenswrapper[28149]: I0313 13:14:02.617793 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e422fd27-d0eb-49f9-a05d-7fd39eb9fada-config-data\") pod \"nova-scheduler-0\" (UID: \"e422fd27-d0eb-49f9-a05d-7fd39eb9fada\") " pod="openstack/nova-scheduler-0"
Mar 13 13:14:02.623167 master-0 kubenswrapper[28149]: I0313 13:14:02.619059 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dbhw6\" (UniqueName: \"kubernetes.io/projected/e422fd27-d0eb-49f9-a05d-7fd39eb9fada-kube-api-access-dbhw6\") pod \"nova-scheduler-0\" (UID: \"e422fd27-d0eb-49f9-a05d-7fd39eb9fada\") " pod="openstack/nova-scheduler-0"
Mar 13 13:14:02.623167 master-0 kubenswrapper[28149]: I0313 13:14:02.619396 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e422fd27-d0eb-49f9-a05d-7fd39eb9fada-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"e422fd27-d0eb-49f9-a05d-7fd39eb9fada\") " pod="openstack/nova-scheduler-0"
Mar 13 13:14:02.623167 master-0 kubenswrapper[28149]: I0313 13:14:02.619645 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rkfbz\" (UniqueName: \"kubernetes.io/projected/ec7cd004-beaf-4191-bb66-90f3adf2c8b5-kube-api-access-rkfbz\") pod \"nova-api-0\" (UID: \"ec7cd004-beaf-4191-bb66-90f3adf2c8b5\") " pod="openstack/nova-api-0"
Mar 13 13:14:02.623167 master-0 kubenswrapper[28149]: I0313 13:14:02.619803 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ec7cd004-beaf-4191-bb66-90f3adf2c8b5-logs\") pod \"nova-api-0\" (UID: \"ec7cd004-beaf-4191-bb66-90f3adf2c8b5\") " pod="openstack/nova-api-0"
Mar 13 13:14:02.623167 master-0 kubenswrapper[28149]: I0313 13:14:02.621027 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ec7cd004-beaf-4191-bb66-90f3adf2c8b5-logs\") pod \"nova-api-0\" (UID: \"ec7cd004-beaf-4191-bb66-90f3adf2c8b5\") " pod="openstack/nova-api-0"
Mar 13 13:14:02.634784 master-0 kubenswrapper[28149]: I0313 13:14:02.634704 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ec7cd004-beaf-4191-bb66-90f3adf2c8b5-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"ec7cd004-beaf-4191-bb66-90f3adf2c8b5\") " pod="openstack/nova-api-0"
Mar 13 13:14:02.635295 master-0 kubenswrapper[28149]: I0313 13:14:02.635258 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ec7cd004-beaf-4191-bb66-90f3adf2c8b5-config-data\") pod \"nova-api-0\" (UID: \"ec7cd004-beaf-4191-bb66-90f3adf2c8b5\") " pod="openstack/nova-api-0"
Mar 13 13:14:02.650176 master-0 kubenswrapper[28149]: I0313 13:14:02.647480 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rkfbz\" (UniqueName: \"kubernetes.io/projected/ec7cd004-beaf-4191-bb66-90f3adf2c8b5-kube-api-access-rkfbz\") pod \"nova-api-0\" (UID: \"ec7cd004-beaf-4191-bb66-90f3adf2c8b5\") " pod="openstack/nova-api-0"
Mar 13 13:14:02.740438 master-0 kubenswrapper[28149]: I0313 13:14:02.740346 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e422fd27-d0eb-49f9-a05d-7fd39eb9fada-config-data\") pod \"nova-scheduler-0\" (UID: \"e422fd27-d0eb-49f9-a05d-7fd39eb9fada\") " pod="openstack/nova-scheduler-0"
Mar 13 13:14:02.750290 master-0 kubenswrapper[28149]: I0313 13:14:02.750240 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dbhw6\" (UniqueName: \"kubernetes.io/projected/e422fd27-d0eb-49f9-a05d-7fd39eb9fada-kube-api-access-dbhw6\") pod \"nova-scheduler-0\" (UID: \"e422fd27-d0eb-49f9-a05d-7fd39eb9fada\") " pod="openstack/nova-scheduler-0"
Mar 13 13:14:02.750855 master-0 kubenswrapper[28149]: I0313 13:14:02.750726 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e422fd27-d0eb-49f9-a05d-7fd39eb9fada-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"e422fd27-d0eb-49f9-a05d-7fd39eb9fada\") " pod="openstack/nova-scheduler-0"
Mar 13 13:14:02.766796 master-0 kubenswrapper[28149]: I0313 13:14:02.759794 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e422fd27-d0eb-49f9-a05d-7fd39eb9fada-config-data\") pod \"nova-scheduler-0\" (UID: \"e422fd27-d0eb-49f9-a05d-7fd39eb9fada\") " pod="openstack/nova-scheduler-0"
Mar 13 13:14:02.766796 master-0 kubenswrapper[28149]: I0313 13:14:02.762059 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e422fd27-d0eb-49f9-a05d-7fd39eb9fada-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"e422fd27-d0eb-49f9-a05d-7fd39eb9fada\") " pod="openstack/nova-scheduler-0"
Mar 13 13:14:02.986774 master-0 kubenswrapper[28149]: I0313 13:14:02.976818 28149 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Mar 13 13:14:03.066610 master-0 kubenswrapper[28149]: I0313 13:14:03.066565 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dbhw6\" (UniqueName: \"kubernetes.io/projected/e422fd27-d0eb-49f9-a05d-7fd39eb9fada-kube-api-access-dbhw6\") pod \"nova-scheduler-0\" (UID: \"e422fd27-d0eb-49f9-a05d-7fd39eb9fada\") " pod="openstack/nova-scheduler-0"
Mar 13 13:14:03.114168 master-0 kubenswrapper[28149]: I0313 13:14:03.114108 28149 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0"
Mar 13 13:14:03.206653 master-0 kubenswrapper[28149]: I0313 13:14:03.197931 28149 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"]
Mar 13 13:14:03.592643 master-0 kubenswrapper[28149]: I0313 13:14:03.592579 28149 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Mar 13 13:14:03.611573 master-0 kubenswrapper[28149]: I0313 13:14:03.602896 28149 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data"
Mar 13 13:14:03.671552 master-0 kubenswrapper[28149]: I0313 13:14:03.671345 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-vm54v" event={"ID":"f6c1cc46-fe3f-4495-be1d-5324d25d39ae","Type":"ContainerStarted","Data":"bd7eb3988fa708cc747dcd913262473768da85c0dc17da03a3a13188c599a0a0"}
Mar 13 13:14:03.675699 master-0 kubenswrapper[28149]: I0313 13:14:03.675671 28149 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"]
Mar 13 13:14:03.690969 master-0 kubenswrapper[28149]: I0313 13:14:03.690889 28149 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"]
Mar 13 13:14:03.694301 master-0 kubenswrapper[28149]: I0313 13:14:03.694263 28149 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Mar 13 13:14:03.698312 master-0 kubenswrapper[28149]: I0313 13:14:03.698271 28149 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Mar 13 13:14:03.703986 master-0 kubenswrapper[28149]: I0313 13:14:03.703946 28149 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Mar 13 13:14:03.719808 master-0 kubenswrapper[28149]: I0313 13:14:03.719526 28149 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-vm54v"] Mar 13 13:14:03.750111 master-0 kubenswrapper[28149]: I0313 13:14:03.747862 28149 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5c9c9ccb7c-grqhr"] Mar 13 13:14:03.766395 master-0 kubenswrapper[28149]: I0313 13:14:03.753846 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9dfb097f-c7c9-4933-875e-ff168351b070-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"9dfb097f-c7c9-4933-875e-ff168351b070\") " pod="openstack/nova-cell1-novncproxy-0" Mar 13 13:14:03.766395 master-0 kubenswrapper[28149]: I0313 13:14:03.753937 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/04c50291-fa01-44ba-8316-2cff471a4af4-config-data\") pod \"nova-metadata-0\" (UID: \"04c50291-fa01-44ba-8316-2cff471a4af4\") " pod="openstack/nova-metadata-0" Mar 13 13:14:03.766395 master-0 kubenswrapper[28149]: I0313 13:14:03.754011 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/04c50291-fa01-44ba-8316-2cff471a4af4-logs\") pod \"nova-metadata-0\" (UID: \"04c50291-fa01-44ba-8316-2cff471a4af4\") " pod="openstack/nova-metadata-0" Mar 13 13:14:03.766395 master-0 
kubenswrapper[28149]: I0313 13:14:03.754032 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-25sbs\" (UniqueName: \"kubernetes.io/projected/04c50291-fa01-44ba-8316-2cff471a4af4-kube-api-access-25sbs\") pod \"nova-metadata-0\" (UID: \"04c50291-fa01-44ba-8316-2cff471a4af4\") " pod="openstack/nova-metadata-0" Mar 13 13:14:03.766395 master-0 kubenswrapper[28149]: I0313 13:14:03.754209 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9dfb097f-c7c9-4933-875e-ff168351b070-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"9dfb097f-c7c9-4933-875e-ff168351b070\") " pod="openstack/nova-cell1-novncproxy-0" Mar 13 13:14:03.766395 master-0 kubenswrapper[28149]: I0313 13:14:03.754430 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/04c50291-fa01-44ba-8316-2cff471a4af4-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"04c50291-fa01-44ba-8316-2cff471a4af4\") " pod="openstack/nova-metadata-0" Mar 13 13:14:03.766395 master-0 kubenswrapper[28149]: I0313 13:14:03.754489 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ddsx8\" (UniqueName: \"kubernetes.io/projected/9dfb097f-c7c9-4933-875e-ff168351b070-kube-api-access-ddsx8\") pod \"nova-cell1-novncproxy-0\" (UID: \"9dfb097f-c7c9-4933-875e-ff168351b070\") " pod="openstack/nova-cell1-novncproxy-0" Mar 13 13:14:03.766395 master-0 kubenswrapper[28149]: I0313 13:14:03.765331 28149 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5c9c9ccb7c-grqhr" Mar 13 13:14:03.806766 master-0 kubenswrapper[28149]: I0313 13:14:03.806731 28149 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5c9c9ccb7c-grqhr"] Mar 13 13:14:03.856658 master-0 kubenswrapper[28149]: I0313 13:14:03.856574 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/98ef9c97-a395-412b-b84e-4bdfc2be1e17-dns-swift-storage-0\") pod \"dnsmasq-dns-5c9c9ccb7c-grqhr\" (UID: \"98ef9c97-a395-412b-b84e-4bdfc2be1e17\") " pod="openstack/dnsmasq-dns-5c9c9ccb7c-grqhr" Mar 13 13:14:03.856658 master-0 kubenswrapper[28149]: I0313 13:14:03.856641 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/98ef9c97-a395-412b-b84e-4bdfc2be1e17-config\") pod \"dnsmasq-dns-5c9c9ccb7c-grqhr\" (UID: \"98ef9c97-a395-412b-b84e-4bdfc2be1e17\") " pod="openstack/dnsmasq-dns-5c9c9ccb7c-grqhr" Mar 13 13:14:03.856945 master-0 kubenswrapper[28149]: I0313 13:14:03.856704 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/04c50291-fa01-44ba-8316-2cff471a4af4-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"04c50291-fa01-44ba-8316-2cff471a4af4\") " pod="openstack/nova-metadata-0" Mar 13 13:14:03.856945 master-0 kubenswrapper[28149]: I0313 13:14:03.856738 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ddsx8\" (UniqueName: \"kubernetes.io/projected/9dfb097f-c7c9-4933-875e-ff168351b070-kube-api-access-ddsx8\") pod \"nova-cell1-novncproxy-0\" (UID: \"9dfb097f-c7c9-4933-875e-ff168351b070\") " pod="openstack/nova-cell1-novncproxy-0" Mar 13 13:14:03.856945 master-0 kubenswrapper[28149]: I0313 13:14:03.856807 28149 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/98ef9c97-a395-412b-b84e-4bdfc2be1e17-dns-svc\") pod \"dnsmasq-dns-5c9c9ccb7c-grqhr\" (UID: \"98ef9c97-a395-412b-b84e-4bdfc2be1e17\") " pod="openstack/dnsmasq-dns-5c9c9ccb7c-grqhr" Mar 13 13:14:03.856945 master-0 kubenswrapper[28149]: I0313 13:14:03.856828 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/98ef9c97-a395-412b-b84e-4bdfc2be1e17-ovsdbserver-nb\") pod \"dnsmasq-dns-5c9c9ccb7c-grqhr\" (UID: \"98ef9c97-a395-412b-b84e-4bdfc2be1e17\") " pod="openstack/dnsmasq-dns-5c9c9ccb7c-grqhr" Mar 13 13:14:03.856945 master-0 kubenswrapper[28149]: I0313 13:14:03.856874 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/98ef9c97-a395-412b-b84e-4bdfc2be1e17-ovsdbserver-sb\") pod \"dnsmasq-dns-5c9c9ccb7c-grqhr\" (UID: \"98ef9c97-a395-412b-b84e-4bdfc2be1e17\") " pod="openstack/dnsmasq-dns-5c9c9ccb7c-grqhr" Mar 13 13:14:03.856945 master-0 kubenswrapper[28149]: I0313 13:14:03.856900 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9dfb097f-c7c9-4933-875e-ff168351b070-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"9dfb097f-c7c9-4933-875e-ff168351b070\") " pod="openstack/nova-cell1-novncproxy-0" Mar 13 13:14:03.856945 master-0 kubenswrapper[28149]: I0313 13:14:03.856934 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/04c50291-fa01-44ba-8316-2cff471a4af4-config-data\") pod \"nova-metadata-0\" (UID: \"04c50291-fa01-44ba-8316-2cff471a4af4\") " pod="openstack/nova-metadata-0" Mar 13 13:14:03.857434 master-0 kubenswrapper[28149]: I0313 13:14:03.856985 28149 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jwlmg\" (UniqueName: \"kubernetes.io/projected/98ef9c97-a395-412b-b84e-4bdfc2be1e17-kube-api-access-jwlmg\") pod \"dnsmasq-dns-5c9c9ccb7c-grqhr\" (UID: \"98ef9c97-a395-412b-b84e-4bdfc2be1e17\") " pod="openstack/dnsmasq-dns-5c9c9ccb7c-grqhr" Mar 13 13:14:03.857434 master-0 kubenswrapper[28149]: I0313 13:14:03.857221 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/04c50291-fa01-44ba-8316-2cff471a4af4-logs\") pod \"nova-metadata-0\" (UID: \"04c50291-fa01-44ba-8316-2cff471a4af4\") " pod="openstack/nova-metadata-0" Mar 13 13:14:03.857434 master-0 kubenswrapper[28149]: I0313 13:14:03.857252 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-25sbs\" (UniqueName: \"kubernetes.io/projected/04c50291-fa01-44ba-8316-2cff471a4af4-kube-api-access-25sbs\") pod \"nova-metadata-0\" (UID: \"04c50291-fa01-44ba-8316-2cff471a4af4\") " pod="openstack/nova-metadata-0" Mar 13 13:14:03.857434 master-0 kubenswrapper[28149]: I0313 13:14:03.857282 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9dfb097f-c7c9-4933-875e-ff168351b070-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"9dfb097f-c7c9-4933-875e-ff168351b070\") " pod="openstack/nova-cell1-novncproxy-0" Mar 13 13:14:03.859590 master-0 kubenswrapper[28149]: I0313 13:14:03.859513 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/04c50291-fa01-44ba-8316-2cff471a4af4-logs\") pod \"nova-metadata-0\" (UID: \"04c50291-fa01-44ba-8316-2cff471a4af4\") " pod="openstack/nova-metadata-0" Mar 13 13:14:03.863983 master-0 kubenswrapper[28149]: I0313 13:14:03.863850 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/04c50291-fa01-44ba-8316-2cff471a4af4-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"04c50291-fa01-44ba-8316-2cff471a4af4\") " pod="openstack/nova-metadata-0" Mar 13 13:14:03.877124 master-0 kubenswrapper[28149]: I0313 13:14:03.877024 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9dfb097f-c7c9-4933-875e-ff168351b070-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"9dfb097f-c7c9-4933-875e-ff168351b070\") " pod="openstack/nova-cell1-novncproxy-0" Mar 13 13:14:03.896081 master-0 kubenswrapper[28149]: I0313 13:14:03.894709 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-25sbs\" (UniqueName: \"kubernetes.io/projected/04c50291-fa01-44ba-8316-2cff471a4af4-kube-api-access-25sbs\") pod \"nova-metadata-0\" (UID: \"04c50291-fa01-44ba-8316-2cff471a4af4\") " pod="openstack/nova-metadata-0" Mar 13 13:14:03.902920 master-0 kubenswrapper[28149]: I0313 13:14:03.902856 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9dfb097f-c7c9-4933-875e-ff168351b070-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"9dfb097f-c7c9-4933-875e-ff168351b070\") " pod="openstack/nova-cell1-novncproxy-0" Mar 13 13:14:03.902920 master-0 kubenswrapper[28149]: I0313 13:14:03.902885 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ddsx8\" (UniqueName: \"kubernetes.io/projected/9dfb097f-c7c9-4933-875e-ff168351b070-kube-api-access-ddsx8\") pod \"nova-cell1-novncproxy-0\" (UID: \"9dfb097f-c7c9-4933-875e-ff168351b070\") " pod="openstack/nova-cell1-novncproxy-0" Mar 13 13:14:03.903332 master-0 kubenswrapper[28149]: I0313 13:14:03.903076 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/04c50291-fa01-44ba-8316-2cff471a4af4-config-data\") pod \"nova-metadata-0\" (UID: \"04c50291-fa01-44ba-8316-2cff471a4af4\") " pod="openstack/nova-metadata-0" Mar 13 13:14:03.968184 master-0 kubenswrapper[28149]: I0313 13:14:03.967771 28149 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Mar 13 13:14:03.974510 master-0 kubenswrapper[28149]: I0313 13:14:03.971363 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jwlmg\" (UniqueName: \"kubernetes.io/projected/98ef9c97-a395-412b-b84e-4bdfc2be1e17-kube-api-access-jwlmg\") pod \"dnsmasq-dns-5c9c9ccb7c-grqhr\" (UID: \"98ef9c97-a395-412b-b84e-4bdfc2be1e17\") " pod="openstack/dnsmasq-dns-5c9c9ccb7c-grqhr" Mar 13 13:14:03.974510 master-0 kubenswrapper[28149]: I0313 13:14:03.971605 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/98ef9c97-a395-412b-b84e-4bdfc2be1e17-dns-swift-storage-0\") pod \"dnsmasq-dns-5c9c9ccb7c-grqhr\" (UID: \"98ef9c97-a395-412b-b84e-4bdfc2be1e17\") " pod="openstack/dnsmasq-dns-5c9c9ccb7c-grqhr" Mar 13 13:14:03.974510 master-0 kubenswrapper[28149]: I0313 13:14:03.971635 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/98ef9c97-a395-412b-b84e-4bdfc2be1e17-config\") pod \"dnsmasq-dns-5c9c9ccb7c-grqhr\" (UID: \"98ef9c97-a395-412b-b84e-4bdfc2be1e17\") " pod="openstack/dnsmasq-dns-5c9c9ccb7c-grqhr" Mar 13 13:14:03.974510 master-0 kubenswrapper[28149]: I0313 13:14:03.971873 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/98ef9c97-a395-412b-b84e-4bdfc2be1e17-dns-svc\") pod \"dnsmasq-dns-5c9c9ccb7c-grqhr\" (UID: \"98ef9c97-a395-412b-b84e-4bdfc2be1e17\") " pod="openstack/dnsmasq-dns-5c9c9ccb7c-grqhr" Mar 13 13:14:03.974510 
master-0 kubenswrapper[28149]: I0313 13:14:03.971913 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/98ef9c97-a395-412b-b84e-4bdfc2be1e17-ovsdbserver-nb\") pod \"dnsmasq-dns-5c9c9ccb7c-grqhr\" (UID: \"98ef9c97-a395-412b-b84e-4bdfc2be1e17\") " pod="openstack/dnsmasq-dns-5c9c9ccb7c-grqhr" Mar 13 13:14:03.974510 master-0 kubenswrapper[28149]: I0313 13:14:03.971995 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/98ef9c97-a395-412b-b84e-4bdfc2be1e17-ovsdbserver-sb\") pod \"dnsmasq-dns-5c9c9ccb7c-grqhr\" (UID: \"98ef9c97-a395-412b-b84e-4bdfc2be1e17\") " pod="openstack/dnsmasq-dns-5c9c9ccb7c-grqhr" Mar 13 13:14:03.974510 master-0 kubenswrapper[28149]: I0313 13:14:03.973081 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/98ef9c97-a395-412b-b84e-4bdfc2be1e17-config\") pod \"dnsmasq-dns-5c9c9ccb7c-grqhr\" (UID: \"98ef9c97-a395-412b-b84e-4bdfc2be1e17\") " pod="openstack/dnsmasq-dns-5c9c9ccb7c-grqhr" Mar 13 13:14:03.974510 master-0 kubenswrapper[28149]: I0313 13:14:03.973387 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/98ef9c97-a395-412b-b84e-4bdfc2be1e17-ovsdbserver-sb\") pod \"dnsmasq-dns-5c9c9ccb7c-grqhr\" (UID: \"98ef9c97-a395-412b-b84e-4bdfc2be1e17\") " pod="openstack/dnsmasq-dns-5c9c9ccb7c-grqhr" Mar 13 13:14:03.974510 master-0 kubenswrapper[28149]: I0313 13:14:03.974034 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/98ef9c97-a395-412b-b84e-4bdfc2be1e17-dns-svc\") pod \"dnsmasq-dns-5c9c9ccb7c-grqhr\" (UID: \"98ef9c97-a395-412b-b84e-4bdfc2be1e17\") " pod="openstack/dnsmasq-dns-5c9c9ccb7c-grqhr" Mar 13 13:14:03.974510 master-0 kubenswrapper[28149]: I0313 
13:14:03.974261 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/98ef9c97-a395-412b-b84e-4bdfc2be1e17-dns-swift-storage-0\") pod \"dnsmasq-dns-5c9c9ccb7c-grqhr\" (UID: \"98ef9c97-a395-412b-b84e-4bdfc2be1e17\") " pod="openstack/dnsmasq-dns-5c9c9ccb7c-grqhr" Mar 13 13:14:03.975103 master-0 kubenswrapper[28149]: I0313 13:14:03.974656 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/98ef9c97-a395-412b-b84e-4bdfc2be1e17-ovsdbserver-nb\") pod \"dnsmasq-dns-5c9c9ccb7c-grqhr\" (UID: \"98ef9c97-a395-412b-b84e-4bdfc2be1e17\") " pod="openstack/dnsmasq-dns-5c9c9ccb7c-grqhr" Mar 13 13:14:03.992165 master-0 kubenswrapper[28149]: I0313 13:14:03.987058 28149 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-compute-ironic-compute-0"] Mar 13 13:14:03.997177 master-0 kubenswrapper[28149]: I0313 13:14:03.996954 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jwlmg\" (UniqueName: \"kubernetes.io/projected/98ef9c97-a395-412b-b84e-4bdfc2be1e17-kube-api-access-jwlmg\") pod \"dnsmasq-dns-5c9c9ccb7c-grqhr\" (UID: \"98ef9c97-a395-412b-b84e-4bdfc2be1e17\") " pod="openstack/dnsmasq-dns-5c9c9ccb7c-grqhr" Mar 13 13:14:04.349512 master-0 kubenswrapper[28149]: I0313 13:14:04.039973 28149 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Mar 13 13:14:04.349512 master-0 kubenswrapper[28149]: I0313 13:14:04.334621 28149 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5c9c9ccb7c-grqhr" Mar 13 13:14:04.468093 master-0 kubenswrapper[28149]: I0313 13:14:04.464897 28149 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Mar 13 13:14:04.487108 master-0 kubenswrapper[28149]: I0313 13:14:04.484035 28149 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Mar 13 13:14:04.757094 master-0 kubenswrapper[28149]: I0313 13:14:04.757042 28149 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-db-sync-99x6g"] Mar 13 13:14:04.760450 master-0 kubenswrapper[28149]: I0313 13:14:04.760386 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"e422fd27-d0eb-49f9-a05d-7fd39eb9fada","Type":"ContainerStarted","Data":"0e75dcc278761ae18aa52969fcfb6f68f0e04e9d456099c5888207a6f6e15da8"} Mar 13 13:14:04.762453 master-0 kubenswrapper[28149]: I0313 13:14:04.762420 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-vm54v" event={"ID":"f6c1cc46-fe3f-4495-be1d-5324d25d39ae","Type":"ContainerStarted","Data":"4dcaefba9d65d8d6e5fddbdd85aa196e1f96c691a3f119b0147e33418636f60b"} Mar 13 13:14:04.762614 master-0 kubenswrapper[28149]: I0313 13:14:04.761948 28149 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-99x6g" Mar 13 13:14:04.765989 master-0 kubenswrapper[28149]: I0313 13:14:04.765953 28149 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Mar 13 13:14:04.766176 master-0 kubenswrapper[28149]: I0313 13:14:04.766158 28149 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-scripts" Mar 13 13:14:04.766917 master-0 kubenswrapper[28149]: W0313 13:14:04.766880 28149 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podec7cd004_beaf_4191_bb66_90f3adf2c8b5.slice/crio-eed6e14967aa4c0d4bd79fabdb5bddec045b4a9ac619435ba55c33399e59ac2f WatchSource:0}: Error finding container eed6e14967aa4c0d4bd79fabdb5bddec045b4a9ac619435ba55c33399e59ac2f: Status 404 returned error can't find the container with id eed6e14967aa4c0d4bd79fabdb5bddec045b4a9ac619435ba55c33399e59ac2f Mar 13 13:14:04.767820 master-0 kubenswrapper[28149]: I0313 13:14:04.767779 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-compute-ironic-compute-0" event={"ID":"66e0edeb-8711-49d6-9096-2b3f01751b4b","Type":"ContainerStarted","Data":"1cc47cf6ab0bb7f147c6e6565b6c14808d69543161b9c317f6029863594434e3"} Mar 13 13:14:04.779634 master-0 kubenswrapper[28149]: I0313 13:14:04.772320 28149 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-99x6g"] Mar 13 13:14:04.791494 master-0 kubenswrapper[28149]: I0313 13:14:04.787124 28149 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-cell-mapping-vm54v" podStartSLOduration=3.787091937 podStartE2EDuration="3.787091937s" podCreationTimestamp="2026-03-13 13:14:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 13:14:04.782308639 
+0000 UTC m=+1218.435773798" watchObservedRunningTime="2026-03-13 13:14:04.787091937 +0000 UTC m=+1218.440557096" Mar 13 13:14:04.882389 master-0 kubenswrapper[28149]: I0313 13:14:04.881436 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/60fb7814-d2a1-47d5-9d6c-559bc67a2442-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-99x6g\" (UID: \"60fb7814-d2a1-47d5-9d6c-559bc67a2442\") " pod="openstack/nova-cell1-conductor-db-sync-99x6g" Mar 13 13:14:04.882389 master-0 kubenswrapper[28149]: I0313 13:14:04.881565 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g7qs4\" (UniqueName: \"kubernetes.io/projected/60fb7814-d2a1-47d5-9d6c-559bc67a2442-kube-api-access-g7qs4\") pod \"nova-cell1-conductor-db-sync-99x6g\" (UID: \"60fb7814-d2a1-47d5-9d6c-559bc67a2442\") " pod="openstack/nova-cell1-conductor-db-sync-99x6g" Mar 13 13:14:04.882389 master-0 kubenswrapper[28149]: I0313 13:14:04.881642 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/60fb7814-d2a1-47d5-9d6c-559bc67a2442-scripts\") pod \"nova-cell1-conductor-db-sync-99x6g\" (UID: \"60fb7814-d2a1-47d5-9d6c-559bc67a2442\") " pod="openstack/nova-cell1-conductor-db-sync-99x6g" Mar 13 13:14:04.886231 master-0 kubenswrapper[28149]: I0313 13:14:04.884389 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/60fb7814-d2a1-47d5-9d6c-559bc67a2442-config-data\") pod \"nova-cell1-conductor-db-sync-99x6g\" (UID: \"60fb7814-d2a1-47d5-9d6c-559bc67a2442\") " pod="openstack/nova-cell1-conductor-db-sync-99x6g" Mar 13 13:14:04.989853 master-0 kubenswrapper[28149]: I0313 13:14:04.987495 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/60fb7814-d2a1-47d5-9d6c-559bc67a2442-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-99x6g\" (UID: \"60fb7814-d2a1-47d5-9d6c-559bc67a2442\") " pod="openstack/nova-cell1-conductor-db-sync-99x6g" Mar 13 13:14:04.989853 master-0 kubenswrapper[28149]: I0313 13:14:04.987626 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g7qs4\" (UniqueName: \"kubernetes.io/projected/60fb7814-d2a1-47d5-9d6c-559bc67a2442-kube-api-access-g7qs4\") pod \"nova-cell1-conductor-db-sync-99x6g\" (UID: \"60fb7814-d2a1-47d5-9d6c-559bc67a2442\") " pod="openstack/nova-cell1-conductor-db-sync-99x6g" Mar 13 13:14:04.989853 master-0 kubenswrapper[28149]: I0313 13:14:04.987719 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/60fb7814-d2a1-47d5-9d6c-559bc67a2442-scripts\") pod \"nova-cell1-conductor-db-sync-99x6g\" (UID: \"60fb7814-d2a1-47d5-9d6c-559bc67a2442\") " pod="openstack/nova-cell1-conductor-db-sync-99x6g" Mar 13 13:14:04.989853 master-0 kubenswrapper[28149]: I0313 13:14:04.987788 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/60fb7814-d2a1-47d5-9d6c-559bc67a2442-config-data\") pod \"nova-cell1-conductor-db-sync-99x6g\" (UID: \"60fb7814-d2a1-47d5-9d6c-559bc67a2442\") " pod="openstack/nova-cell1-conductor-db-sync-99x6g" Mar 13 13:14:04.997267 master-0 kubenswrapper[28149]: I0313 13:14:04.997168 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/60fb7814-d2a1-47d5-9d6c-559bc67a2442-config-data\") pod \"nova-cell1-conductor-db-sync-99x6g\" (UID: \"60fb7814-d2a1-47d5-9d6c-559bc67a2442\") " pod="openstack/nova-cell1-conductor-db-sync-99x6g" Mar 13 13:14:05.005978 master-0 kubenswrapper[28149]: I0313 13:14:04.998740 28149 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/60fb7814-d2a1-47d5-9d6c-559bc67a2442-scripts\") pod \"nova-cell1-conductor-db-sync-99x6g\" (UID: \"60fb7814-d2a1-47d5-9d6c-559bc67a2442\") " pod="openstack/nova-cell1-conductor-db-sync-99x6g" Mar 13 13:14:05.019994 master-0 kubenswrapper[28149]: I0313 13:14:05.018544 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/60fb7814-d2a1-47d5-9d6c-559bc67a2442-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-99x6g\" (UID: \"60fb7814-d2a1-47d5-9d6c-559bc67a2442\") " pod="openstack/nova-cell1-conductor-db-sync-99x6g" Mar 13 13:14:05.244895 master-0 kubenswrapper[28149]: I0313 13:14:05.243734 28149 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Mar 13 13:14:05.261285 master-0 kubenswrapper[28149]: I0313 13:14:05.260366 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g7qs4\" (UniqueName: \"kubernetes.io/projected/60fb7814-d2a1-47d5-9d6c-559bc67a2442-kube-api-access-g7qs4\") pod \"nova-cell1-conductor-db-sync-99x6g\" (UID: \"60fb7814-d2a1-47d5-9d6c-559bc67a2442\") " pod="openstack/nova-cell1-conductor-db-sync-99x6g" Mar 13 13:14:05.405964 master-0 kubenswrapper[28149]: I0313 13:14:05.405833 28149 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-99x6g" Mar 13 13:14:05.676790 master-0 kubenswrapper[28149]: I0313 13:14:05.676748 28149 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5c9c9ccb7c-grqhr"] Mar 13 13:14:06.037244 master-0 kubenswrapper[28149]: I0313 13:14:06.035358 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"04c50291-fa01-44ba-8316-2cff471a4af4","Type":"ContainerStarted","Data":"3927baa110bb4a7343a1581680760c3fa9bdd2b369b7448e5e8b3be52bc829fc"} Mar 13 13:14:06.041521 master-0 kubenswrapper[28149]: I0313 13:14:06.041458 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c9c9ccb7c-grqhr" event={"ID":"98ef9c97-a395-412b-b84e-4bdfc2be1e17","Type":"ContainerStarted","Data":"a6466c506ec304bbb680cab72507a2687c23c638cc257da49e87f963648fe84d"} Mar 13 13:14:06.064573 master-0 kubenswrapper[28149]: I0313 13:14:06.064507 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"ec7cd004-beaf-4191-bb66-90f3adf2c8b5","Type":"ContainerStarted","Data":"eed6e14967aa4c0d4bd79fabdb5bddec045b4a9ac619435ba55c33399e59ac2f"} Mar 13 13:14:06.081460 master-0 kubenswrapper[28149]: I0313 13:14:06.074931 28149 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Mar 13 13:14:06.324047 master-0 kubenswrapper[28149]: I0313 13:14:06.321465 28149 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-99x6g"] Mar 13 13:14:07.072949 master-0 kubenswrapper[28149]: E0313 13:14:07.072889 28149 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod98ef9c97_a395_412b_b84e_4bdfc2be1e17.slice/crio-3d146a842891051e150bef491ce987f74e4186fb94ca79c8e874679d4a0eac0a.scope\": RecentStats: unable to find data in memory cache]" Mar 
13 13:14:07.085377 master-0 kubenswrapper[28149]: I0313 13:14:07.085325 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"9dfb097f-c7c9-4933-875e-ff168351b070","Type":"ContainerStarted","Data":"489190caa09d4e2be93b8a6fb64992761cfc56570ab6a2b805b7f3f4e90e0545"}
Mar 13 13:14:07.097107 master-0 kubenswrapper[28149]: I0313 13:14:07.096751 28149 generic.go:334] "Generic (PLEG): container finished" podID="98ef9c97-a395-412b-b84e-4bdfc2be1e17" containerID="3d146a842891051e150bef491ce987f74e4186fb94ca79c8e874679d4a0eac0a" exitCode=0
Mar 13 13:14:07.097107 master-0 kubenswrapper[28149]: I0313 13:14:07.096864 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c9c9ccb7c-grqhr" event={"ID":"98ef9c97-a395-412b-b84e-4bdfc2be1e17","Type":"ContainerDied","Data":"3d146a842891051e150bef491ce987f74e4186fb94ca79c8e874679d4a0eac0a"}
Mar 13 13:14:07.108097 master-0 kubenswrapper[28149]: I0313 13:14:07.108033 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-99x6g" event={"ID":"60fb7814-d2a1-47d5-9d6c-559bc67a2442","Type":"ContainerStarted","Data":"cf419a9daadaf0931eeed932fe0df14934a60ba615c52db31ef3c43500c57a5a"}
Mar 13 13:14:07.108097 master-0 kubenswrapper[28149]: I0313 13:14:07.108088 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-99x6g" event={"ID":"60fb7814-d2a1-47d5-9d6c-559bc67a2442","Type":"ContainerStarted","Data":"1fd85d8213855d54af876fbdeb293514d9005ae5c709b8dd6582d3a15db96fe9"}
Mar 13 13:14:07.502235 master-0 kubenswrapper[28149]: I0313 13:14:07.499777 28149 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-db-sync-99x6g" podStartSLOduration=3.499754756 podStartE2EDuration="3.499754756s" podCreationTimestamp="2026-03-13 13:14:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 13:14:07.492473409 +0000 UTC m=+1221.145938578" watchObservedRunningTime="2026-03-13 13:14:07.499754756 +0000 UTC m=+1221.153219915"
Mar 13 13:14:08.794696 master-0 kubenswrapper[28149]: I0313 13:14:08.791546 28149 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"]
Mar 13 13:14:08.812190 master-0 kubenswrapper[28149]: I0313 13:14:08.811819 28149 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"]
Mar 13 13:14:13.576669 master-0 kubenswrapper[28149]: I0313 13:14:13.576609 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c9c9ccb7c-grqhr" event={"ID":"98ef9c97-a395-412b-b84e-4bdfc2be1e17","Type":"ContainerStarted","Data":"9a061918dc7dfbdb83b8b4351db3d92bb87549bb4234ae9d6238db27016b09e5"}
Mar 13 13:14:13.578767 master-0 kubenswrapper[28149]: I0313 13:14:13.578709 28149 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5c9c9ccb7c-grqhr"
Mar 13 13:14:13.584026 master-0 kubenswrapper[28149]: I0313 13:14:13.583979 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"ec7cd004-beaf-4191-bb66-90f3adf2c8b5","Type":"ContainerStarted","Data":"a45b7dffc687821ce6c7d6ded423c455d269ce9e4d2cc2704c2eeb371169df28"}
Mar 13 13:14:13.584026 master-0 kubenswrapper[28149]: I0313 13:14:13.584031 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"ec7cd004-beaf-4191-bb66-90f3adf2c8b5","Type":"ContainerStarted","Data":"85e519ba126df675a0a525ca5713d2c9a49f536033b4155ff638ce97ea0e4e2d"}
Mar 13 13:14:13.589070 master-0 kubenswrapper[28149]: I0313 13:14:13.588922 28149 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="04c50291-fa01-44ba-8316-2cff471a4af4" containerName="nova-metadata-log" containerID="cri-o://addeb280f71ed2a311858abaf3c9bd138eb75417378585b9369064f1d9ecbeed" gracePeriod=30
Mar 13 13:14:13.589070 master-0 kubenswrapper[28149]: I0313 13:14:13.588992 28149 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="04c50291-fa01-44ba-8316-2cff471a4af4" containerName="nova-metadata-metadata" containerID="cri-o://ed1c12f08e73abe9a91a51f4a62cbe363977c4f51bae3cddc069f965ea1d9e8d" gracePeriod=30
Mar 13 13:14:13.589801 master-0 kubenswrapper[28149]: I0313 13:14:13.588853 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"04c50291-fa01-44ba-8316-2cff471a4af4","Type":"ContainerStarted","Data":"ed1c12f08e73abe9a91a51f4a62cbe363977c4f51bae3cddc069f965ea1d9e8d"}
Mar 13 13:14:13.590015 master-0 kubenswrapper[28149]: I0313 13:14:13.589971 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"04c50291-fa01-44ba-8316-2cff471a4af4","Type":"ContainerStarted","Data":"addeb280f71ed2a311858abaf3c9bd138eb75417378585b9369064f1d9ecbeed"}
Mar 13 13:14:13.593772 master-0 kubenswrapper[28149]: I0313 13:14:13.593740 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"e422fd27-d0eb-49f9-a05d-7fd39eb9fada","Type":"ContainerStarted","Data":"0b158de554ea1b22d2a6593f4ed06ad111e825d9b3b670395ed04447bafa5c3d"}
Mar 13 13:14:13.631408 master-0 kubenswrapper[28149]: I0313 13:14:13.631348 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"9dfb097f-c7c9-4933-875e-ff168351b070","Type":"ContainerStarted","Data":"88ce2db7eb86df908f059314962bd0fca85fb2a4f9e3ac09caff5d81aafafac5"}
Mar 13 13:14:13.632240 master-0 kubenswrapper[28149]: I0313 13:14:13.632201 28149 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell1-novncproxy-0" podUID="9dfb097f-c7c9-4933-875e-ff168351b070" containerName="nova-cell1-novncproxy-novncproxy" containerID="cri-o://88ce2db7eb86df908f059314962bd0fca85fb2a4f9e3ac09caff5d81aafafac5" gracePeriod=30
Mar 13 13:14:13.941861 master-0 kubenswrapper[28149]: I0313 13:14:13.941672 28149 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5c9c9ccb7c-grqhr" podStartSLOduration=10.941648194 podStartE2EDuration="10.941648194s" podCreationTimestamp="2026-03-13 13:14:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 13:14:13.921650736 +0000 UTC m=+1227.575115905" watchObservedRunningTime="2026-03-13 13:14:13.941648194 +0000 UTC m=+1227.595113353"
Mar 13 13:14:13.969294 master-0 kubenswrapper[28149]: I0313 13:14:13.969091 28149 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0"
Mar 13 13:14:13.969294 master-0 kubenswrapper[28149]: I0313 13:14:13.969225 28149 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0"
Mar 13 13:14:14.041744 master-0 kubenswrapper[28149]: I0313 13:14:14.041647 28149 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0"
Mar 13 13:14:14.580692 master-0 kubenswrapper[28149]: I0313 13:14:14.580606 28149 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=5.3046136090000005 podStartE2EDuration="12.580578365s" podCreationTimestamp="2026-03-13 13:14:02 +0000 UTC" firstStartedPulling="2026-03-13 13:14:04.499920412 +0000 UTC m=+1218.153385571" lastFinishedPulling="2026-03-13 13:14:11.775885168 +0000 UTC m=+1225.429350327" observedRunningTime="2026-03-13 13:14:14.574424159 +0000 UTC m=+1228.227889328" watchObservedRunningTime="2026-03-13 13:14:14.580578365 +0000 UTC m=+1228.234043534"
Mar 13 13:14:14.656657 master-0 kubenswrapper[28149]: I0313 13:14:14.656586 28149 generic.go:334] "Generic (PLEG): container finished" podID="04c50291-fa01-44ba-8316-2cff471a4af4" containerID="addeb280f71ed2a311858abaf3c9bd138eb75417378585b9369064f1d9ecbeed" exitCode=143
Mar 13 13:14:14.656657 master-0 kubenswrapper[28149]: I0313 13:14:14.656725 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"04c50291-fa01-44ba-8316-2cff471a4af4","Type":"ContainerDied","Data":"addeb280f71ed2a311858abaf3c9bd138eb75417378585b9369064f1d9ecbeed"}
Mar 13 13:14:14.731296 master-0 kubenswrapper[28149]: I0313 13:14:14.728223 28149 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=6.215836164 podStartE2EDuration="12.728199891s" podCreationTimestamp="2026-03-13 13:14:02 +0000 UTC" firstStartedPulling="2026-03-13 13:14:05.265115654 +0000 UTC m=+1218.918580813" lastFinishedPulling="2026-03-13 13:14:11.777479391 +0000 UTC m=+1225.430944540" observedRunningTime="2026-03-13 13:14:14.724870222 +0000 UTC m=+1228.378335401" watchObservedRunningTime="2026-03-13 13:14:14.728199891 +0000 UTC m=+1228.381665060"
Mar 13 13:14:14.794833 master-0 kubenswrapper[28149]: I0313 13:14:14.794167 28149 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=6.032860921 podStartE2EDuration="11.794084766s" podCreationTimestamp="2026-03-13 13:14:03 +0000 UTC" firstStartedPulling="2026-03-13 13:14:06.017836148 +0000 UTC m=+1219.671301307" lastFinishedPulling="2026-03-13 13:14:11.779059993 +0000 UTC m=+1225.432525152" observedRunningTime="2026-03-13 13:14:14.786211344 +0000 UTC m=+1228.439676543" watchObservedRunningTime="2026-03-13 13:14:14.794084766 +0000 UTC m=+1228.447549925"
Mar 13 13:14:14.804063 master-0 kubenswrapper[28149]: I0313 13:14:14.803833 28149 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=5.8092957819999995 podStartE2EDuration="12.803788537s" podCreationTimestamp="2026-03-13 13:14:02 +0000 UTC" firstStartedPulling="2026-03-13 13:14:04.781728302 +0000 UTC m=+1218.435193461" lastFinishedPulling="2026-03-13 13:14:11.776221057 +0000 UTC m=+1225.429686216" observedRunningTime="2026-03-13 13:14:14.758915608 +0000 UTC m=+1228.412380767" watchObservedRunningTime="2026-03-13 13:14:14.803788537 +0000 UTC m=+1228.457253696"
Mar 13 13:14:18.119178 master-0 kubenswrapper[28149]: I0313 13:14:18.118523 28149 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0"
Mar 13 13:14:19.336391 master-0 kubenswrapper[28149]: I0313 13:14:19.336321 28149 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-5c9c9ccb7c-grqhr"
Mar 13 13:14:19.450711 master-0 kubenswrapper[28149]: I0313 13:14:19.450607 28149 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-bfb994cb5-frl54"]
Mar 13 13:14:19.451047 master-0 kubenswrapper[28149]: I0313 13:14:19.450972 28149 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-bfb994cb5-frl54" podUID="a41644d0-10a5-4e06-8da5-15690e85b5a3" containerName="dnsmasq-dns" containerID="cri-o://c5167e2ff12777fcacfa1b487a1191c2c2d458d61b55c98d4799c4ab3ac01275" gracePeriod=10
Mar 13 13:14:19.585283 master-0 kubenswrapper[28149]: I0313 13:14:19.584789 28149 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-bfb994cb5-frl54" podUID="a41644d0-10a5-4e06-8da5-15690e85b5a3" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.128.0.251:5353: connect: connection refused"
Mar 13 13:14:23.171706 master-0 kubenswrapper[28149]: I0313 13:14:23.171652 28149 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0"
Mar 13 13:14:23.173977 master-0 kubenswrapper[28149]: I0313 13:14:23.173736 28149 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0"
Mar 13 13:14:23.173977 master-0 kubenswrapper[28149]: I0313 13:14:23.173786 28149 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0"
Mar 13 13:14:23.209284 master-0 kubenswrapper[28149]: I0313 13:14:23.206450 28149 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0"
Mar 13 13:14:24.217645 master-0 kubenswrapper[28149]: I0313 13:14:24.217486 28149 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="ec7cd004-beaf-4191-bb66-90f3adf2c8b5" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.128.1.3:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Mar 13 13:14:24.268506 master-0 kubenswrapper[28149]: I0313 13:14:24.268395 28149 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="ec7cd004-beaf-4191-bb66-90f3adf2c8b5" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.128.1.3:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Mar 13 13:14:24.270765 master-0 kubenswrapper[28149]: I0313 13:14:24.270717 28149 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0"
Mar 13 13:14:24.713277 master-0 kubenswrapper[28149]: I0313 13:14:24.711092 28149 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-bfb994cb5-frl54" podUID="a41644d0-10a5-4e06-8da5-15690e85b5a3" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.128.0.251:5353: connect: connection refused"
Mar 13 13:14:27.424309 master-0 kubenswrapper[28149]: I0313 13:14:27.424191 28149 generic.go:334] "Generic (PLEG): container finished" podID="f6c1cc46-fe3f-4495-be1d-5324d25d39ae" containerID="4dcaefba9d65d8d6e5fddbdd85aa196e1f96c691a3f119b0147e33418636f60b" exitCode=0
Mar 13 13:14:27.424309 master-0 kubenswrapper[28149]: I0313 13:14:27.424341 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-vm54v" event={"ID":"f6c1cc46-fe3f-4495-be1d-5324d25d39ae","Type":"ContainerDied","Data":"4dcaefba9d65d8d6e5fddbdd85aa196e1f96c691a3f119b0147e33418636f60b"}
Mar 13 13:14:27.435323 master-0 kubenswrapper[28149]: I0313 13:14:27.430917 28149 generic.go:334] "Generic (PLEG): container finished" podID="60fb7814-d2a1-47d5-9d6c-559bc67a2442" containerID="cf419a9daadaf0931eeed932fe0df14934a60ba615c52db31ef3c43500c57a5a" exitCode=0
Mar 13 13:14:27.435323 master-0 kubenswrapper[28149]: I0313 13:14:27.431042 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-99x6g" event={"ID":"60fb7814-d2a1-47d5-9d6c-559bc67a2442","Type":"ContainerDied","Data":"cf419a9daadaf0931eeed932fe0df14934a60ba615c52db31ef3c43500c57a5a"}
Mar 13 13:14:27.439599 master-0 kubenswrapper[28149]: I0313 13:14:27.438475 28149 generic.go:334] "Generic (PLEG): container finished" podID="a41644d0-10a5-4e06-8da5-15690e85b5a3" containerID="c5167e2ff12777fcacfa1b487a1191c2c2d458d61b55c98d4799c4ab3ac01275" exitCode=0
Mar 13 13:14:27.439599 master-0 kubenswrapper[28149]: I0313 13:14:27.438848 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-bfb994cb5-frl54" event={"ID":"a41644d0-10a5-4e06-8da5-15690e85b5a3","Type":"ContainerDied","Data":"c5167e2ff12777fcacfa1b487a1191c2c2d458d61b55c98d4799c4ab3ac01275"}
Mar 13 13:14:27.672172 master-0 kubenswrapper[28149]: I0313 13:14:27.672097 28149 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-bfb994cb5-frl54"
Mar 13 13:14:27.846608 master-0 kubenswrapper[28149]: I0313 13:14:27.846474 28149 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a41644d0-10a5-4e06-8da5-15690e85b5a3-dns-svc\") pod \"a41644d0-10a5-4e06-8da5-15690e85b5a3\" (UID: \"a41644d0-10a5-4e06-8da5-15690e85b5a3\") "
Mar 13 13:14:27.846608 master-0 kubenswrapper[28149]: I0313 13:14:27.846573 28149 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vxb6r\" (UniqueName: \"kubernetes.io/projected/a41644d0-10a5-4e06-8da5-15690e85b5a3-kube-api-access-vxb6r\") pod \"a41644d0-10a5-4e06-8da5-15690e85b5a3\" (UID: \"a41644d0-10a5-4e06-8da5-15690e85b5a3\") "
Mar 13 13:14:27.846877 master-0 kubenswrapper[28149]: I0313 13:14:27.846621 28149 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a41644d0-10a5-4e06-8da5-15690e85b5a3-dns-swift-storage-0\") pod \"a41644d0-10a5-4e06-8da5-15690e85b5a3\" (UID: \"a41644d0-10a5-4e06-8da5-15690e85b5a3\") "
Mar 13 13:14:27.846877 master-0 kubenswrapper[28149]: I0313 13:14:27.846686 28149 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a41644d0-10a5-4e06-8da5-15690e85b5a3-ovsdbserver-sb\") pod \"a41644d0-10a5-4e06-8da5-15690e85b5a3\" (UID: \"a41644d0-10a5-4e06-8da5-15690e85b5a3\") "
Mar 13 13:14:27.846877 master-0 kubenswrapper[28149]: I0313 13:14:27.846740 28149 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a41644d0-10a5-4e06-8da5-15690e85b5a3-ovsdbserver-nb\") pod \"a41644d0-10a5-4e06-8da5-15690e85b5a3\" (UID: \"a41644d0-10a5-4e06-8da5-15690e85b5a3\") "
Mar 13 13:14:27.847002 master-0 kubenswrapper[28149]: I0313 13:14:27.846887 28149 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a41644d0-10a5-4e06-8da5-15690e85b5a3-config\") pod \"a41644d0-10a5-4e06-8da5-15690e85b5a3\" (UID: \"a41644d0-10a5-4e06-8da5-15690e85b5a3\") "
Mar 13 13:14:27.853398 master-0 kubenswrapper[28149]: I0313 13:14:27.853340 28149 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a41644d0-10a5-4e06-8da5-15690e85b5a3-kube-api-access-vxb6r" (OuterVolumeSpecName: "kube-api-access-vxb6r") pod "a41644d0-10a5-4e06-8da5-15690e85b5a3" (UID: "a41644d0-10a5-4e06-8da5-15690e85b5a3"). InnerVolumeSpecName "kube-api-access-vxb6r". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 13 13:14:27.910932 master-0 kubenswrapper[28149]: I0313 13:14:27.910860 28149 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a41644d0-10a5-4e06-8da5-15690e85b5a3-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "a41644d0-10a5-4e06-8da5-15690e85b5a3" (UID: "a41644d0-10a5-4e06-8da5-15690e85b5a3"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 13 13:14:27.913103 master-0 kubenswrapper[28149]: I0313 13:14:27.913045 28149 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a41644d0-10a5-4e06-8da5-15690e85b5a3-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "a41644d0-10a5-4e06-8da5-15690e85b5a3" (UID: "a41644d0-10a5-4e06-8da5-15690e85b5a3"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 13 13:14:27.921467 master-0 kubenswrapper[28149]: I0313 13:14:27.921399 28149 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a41644d0-10a5-4e06-8da5-15690e85b5a3-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "a41644d0-10a5-4e06-8da5-15690e85b5a3" (UID: "a41644d0-10a5-4e06-8da5-15690e85b5a3"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 13 13:14:27.921958 master-0 kubenswrapper[28149]: I0313 13:14:27.921910 28149 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a41644d0-10a5-4e06-8da5-15690e85b5a3-config" (OuterVolumeSpecName: "config") pod "a41644d0-10a5-4e06-8da5-15690e85b5a3" (UID: "a41644d0-10a5-4e06-8da5-15690e85b5a3"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 13 13:14:27.933666 master-0 kubenswrapper[28149]: I0313 13:14:27.933616 28149 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a41644d0-10a5-4e06-8da5-15690e85b5a3-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "a41644d0-10a5-4e06-8da5-15690e85b5a3" (UID: "a41644d0-10a5-4e06-8da5-15690e85b5a3"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 13 13:14:27.950050 master-0 kubenswrapper[28149]: I0313 13:14:27.950000 28149 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a41644d0-10a5-4e06-8da5-15690e85b5a3-config\") on node \"master-0\" DevicePath \"\""
Mar 13 13:14:27.950050 master-0 kubenswrapper[28149]: I0313 13:14:27.950041 28149 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a41644d0-10a5-4e06-8da5-15690e85b5a3-dns-svc\") on node \"master-0\" DevicePath \"\""
Mar 13 13:14:27.950050 master-0 kubenswrapper[28149]: I0313 13:14:27.950056 28149 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vxb6r\" (UniqueName: \"kubernetes.io/projected/a41644d0-10a5-4e06-8da5-15690e85b5a3-kube-api-access-vxb6r\") on node \"master-0\" DevicePath \"\""
Mar 13 13:14:27.950352 master-0 kubenswrapper[28149]: I0313 13:14:27.950069 28149 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a41644d0-10a5-4e06-8da5-15690e85b5a3-dns-swift-storage-0\") on node \"master-0\" DevicePath \"\""
Mar 13 13:14:27.950352 master-0 kubenswrapper[28149]: I0313 13:14:27.950080 28149 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a41644d0-10a5-4e06-8da5-15690e85b5a3-ovsdbserver-sb\") on node \"master-0\" DevicePath \"\""
Mar 13 13:14:27.950352 master-0 kubenswrapper[28149]: I0313 13:14:27.950094 28149 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a41644d0-10a5-4e06-8da5-15690e85b5a3-ovsdbserver-nb\") on node \"master-0\" DevicePath \"\""
Mar 13 13:14:28.452555 master-0 kubenswrapper[28149]: I0313 13:14:28.452415 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-bfb994cb5-frl54" event={"ID":"a41644d0-10a5-4e06-8da5-15690e85b5a3","Type":"ContainerDied","Data":"3028ae800f33dbca5618a2a92ffa52cb9ac6043d15feb37e8fe345bd8ddc3808"}
Mar 13 13:14:28.452555 master-0 kubenswrapper[28149]: I0313 13:14:28.452497 28149 scope.go:117] "RemoveContainer" containerID="c5167e2ff12777fcacfa1b487a1191c2c2d458d61b55c98d4799c4ab3ac01275"
Mar 13 13:14:28.453457 master-0 kubenswrapper[28149]: I0313 13:14:28.452668 28149 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-bfb994cb5-frl54"
Mar 13 13:14:28.470152 master-0 kubenswrapper[28149]: I0313 13:14:28.470079 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-compute-ironic-compute-0" event={"ID":"66e0edeb-8711-49d6-9096-2b3f01751b4b","Type":"ContainerStarted","Data":"ce78c3c63040e6b0f3626adcd71d5eced9062d350abe76d542872b335a2c7cb5"}
Mar 13 13:14:28.547575 master-0 kubenswrapper[28149]: I0313 13:14:28.533692 28149 scope.go:117] "RemoveContainer" containerID="dc8b4a5faa7f01895e44db8a6a56e24d21b2f0cd254b74ad984495d063ee75b0"
Mar 13 13:14:28.555509 master-0 kubenswrapper[28149]: I0313 13:14:28.555466 28149 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-bfb994cb5-frl54"]
Mar 13 13:14:28.586238 master-0 kubenswrapper[28149]: I0313 13:14:28.586123 28149 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-bfb994cb5-frl54"]
Mar 13 13:14:28.601657 master-0 kubenswrapper[28149]: I0313 13:14:28.601564 28149 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-compute-ironic-compute-0" podStartSLOduration=4.685937269 podStartE2EDuration="27.601542465s" podCreationTimestamp="2026-03-13 13:14:01 +0000 UTC" firstStartedPulling="2026-03-13 13:14:04.476084 +0000 UTC m=+1218.129549159" lastFinishedPulling="2026-03-13 13:14:27.391689206 +0000 UTC m=+1241.045154355" observedRunningTime="2026-03-13 13:14:28.535989269 +0000 UTC m=+1242.189454428" watchObservedRunningTime="2026-03-13 13:14:28.601542465 +0000 UTC m=+1242.255007624"
Mar 13 13:14:28.708602 master-0 kubenswrapper[28149]: I0313 13:14:28.708477 28149 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a41644d0-10a5-4e06-8da5-15690e85b5a3" path="/var/lib/kubelet/pods/a41644d0-10a5-4e06-8da5-15690e85b5a3/volumes"
Mar 13 13:14:29.001927 master-0 kubenswrapper[28149]: I0313 13:14:29.001224 28149 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-vm54v"
Mar 13 13:14:29.027162 master-0 kubenswrapper[28149]: I0313 13:14:29.026236 28149 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f6c1cc46-fe3f-4495-be1d-5324d25d39ae-scripts\") pod \"f6c1cc46-fe3f-4495-be1d-5324d25d39ae\" (UID: \"f6c1cc46-fe3f-4495-be1d-5324d25d39ae\") "
Mar 13 13:14:29.027162 master-0 kubenswrapper[28149]: I0313 13:14:29.026546 28149 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f6c1cc46-fe3f-4495-be1d-5324d25d39ae-combined-ca-bundle\") pod \"f6c1cc46-fe3f-4495-be1d-5324d25d39ae\" (UID: \"f6c1cc46-fe3f-4495-be1d-5324d25d39ae\") "
Mar 13 13:14:29.027162 master-0 kubenswrapper[28149]: I0313 13:14:29.026729 28149 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ghwwx\" (UniqueName: \"kubernetes.io/projected/f6c1cc46-fe3f-4495-be1d-5324d25d39ae-kube-api-access-ghwwx\") pod \"f6c1cc46-fe3f-4495-be1d-5324d25d39ae\" (UID: \"f6c1cc46-fe3f-4495-be1d-5324d25d39ae\") "
Mar 13 13:14:29.027162 master-0 kubenswrapper[28149]: I0313 13:14:29.026789 28149 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f6c1cc46-fe3f-4495-be1d-5324d25d39ae-config-data\") pod \"f6c1cc46-fe3f-4495-be1d-5324d25d39ae\" (UID: \"f6c1cc46-fe3f-4495-be1d-5324d25d39ae\") "
Mar 13 13:14:29.034000 master-0 kubenswrapper[28149]: I0313 13:14:29.031025 28149 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f6c1cc46-fe3f-4495-be1d-5324d25d39ae-kube-api-access-ghwwx" (OuterVolumeSpecName: "kube-api-access-ghwwx") pod "f6c1cc46-fe3f-4495-be1d-5324d25d39ae" (UID: "f6c1cc46-fe3f-4495-be1d-5324d25d39ae"). InnerVolumeSpecName "kube-api-access-ghwwx". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 13 13:14:29.034000 master-0 kubenswrapper[28149]: I0313 13:14:29.032131 28149 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f6c1cc46-fe3f-4495-be1d-5324d25d39ae-scripts" (OuterVolumeSpecName: "scripts") pod "f6c1cc46-fe3f-4495-be1d-5324d25d39ae" (UID: "f6c1cc46-fe3f-4495-be1d-5324d25d39ae"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 13 13:14:29.076065 master-0 kubenswrapper[28149]: I0313 13:14:29.073637 28149 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f6c1cc46-fe3f-4495-be1d-5324d25d39ae-config-data" (OuterVolumeSpecName: "config-data") pod "f6c1cc46-fe3f-4495-be1d-5324d25d39ae" (UID: "f6c1cc46-fe3f-4495-be1d-5324d25d39ae"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 13 13:14:29.090877 master-0 kubenswrapper[28149]: I0313 13:14:29.090819 28149 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f6c1cc46-fe3f-4495-be1d-5324d25d39ae-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f6c1cc46-fe3f-4495-be1d-5324d25d39ae" (UID: "f6c1cc46-fe3f-4495-be1d-5324d25d39ae"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 13 13:14:29.121677 master-0 kubenswrapper[28149]: I0313 13:14:29.121633 28149 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-99x6g"
Mar 13 13:14:29.130778 master-0 kubenswrapper[28149]: I0313 13:14:29.130739 28149 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f6c1cc46-fe3f-4495-be1d-5324d25d39ae-config-data\") on node \"master-0\" DevicePath \"\""
Mar 13 13:14:29.130778 master-0 kubenswrapper[28149]: I0313 13:14:29.130773 28149 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f6c1cc46-fe3f-4495-be1d-5324d25d39ae-scripts\") on node \"master-0\" DevicePath \"\""
Mar 13 13:14:29.131023 master-0 kubenswrapper[28149]: I0313 13:14:29.130874 28149 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f6c1cc46-fe3f-4495-be1d-5324d25d39ae-combined-ca-bundle\") on node \"master-0\" DevicePath \"\""
Mar 13 13:14:29.131023 master-0 kubenswrapper[28149]: I0313 13:14:29.130964 28149 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ghwwx\" (UniqueName: \"kubernetes.io/projected/f6c1cc46-fe3f-4495-be1d-5324d25d39ae-kube-api-access-ghwwx\") on node \"master-0\" DevicePath \"\""
Mar 13 13:14:29.232786 master-0 kubenswrapper[28149]: I0313 13:14:29.232737 28149 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g7qs4\" (UniqueName: \"kubernetes.io/projected/60fb7814-d2a1-47d5-9d6c-559bc67a2442-kube-api-access-g7qs4\") pod \"60fb7814-d2a1-47d5-9d6c-559bc67a2442\" (UID: \"60fb7814-d2a1-47d5-9d6c-559bc67a2442\") "
Mar 13 13:14:29.233010 master-0 kubenswrapper[28149]: I0313 13:14:29.232816 28149 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/60fb7814-d2a1-47d5-9d6c-559bc67a2442-combined-ca-bundle\") pod \"60fb7814-d2a1-47d5-9d6c-559bc67a2442\" (UID: \"60fb7814-d2a1-47d5-9d6c-559bc67a2442\") "
Mar 13 13:14:29.233091 master-0 kubenswrapper[28149]: I0313 13:14:29.233032 28149 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/60fb7814-d2a1-47d5-9d6c-559bc67a2442-scripts\") pod \"60fb7814-d2a1-47d5-9d6c-559bc67a2442\" (UID: \"60fb7814-d2a1-47d5-9d6c-559bc67a2442\") "
Mar 13 13:14:29.233206 master-0 kubenswrapper[28149]: I0313 13:14:29.233119 28149 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/60fb7814-d2a1-47d5-9d6c-559bc67a2442-config-data\") pod \"60fb7814-d2a1-47d5-9d6c-559bc67a2442\" (UID: \"60fb7814-d2a1-47d5-9d6c-559bc67a2442\") "
Mar 13 13:14:29.235953 master-0 kubenswrapper[28149]: I0313 13:14:29.235892 28149 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/60fb7814-d2a1-47d5-9d6c-559bc67a2442-scripts" (OuterVolumeSpecName: "scripts") pod "60fb7814-d2a1-47d5-9d6c-559bc67a2442" (UID: "60fb7814-d2a1-47d5-9d6c-559bc67a2442"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 13 13:14:29.236958 master-0 kubenswrapper[28149]: I0313 13:14:29.236898 28149 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/60fb7814-d2a1-47d5-9d6c-559bc67a2442-kube-api-access-g7qs4" (OuterVolumeSpecName: "kube-api-access-g7qs4") pod "60fb7814-d2a1-47d5-9d6c-559bc67a2442" (UID: "60fb7814-d2a1-47d5-9d6c-559bc67a2442"). InnerVolumeSpecName "kube-api-access-g7qs4". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 13 13:14:29.260205 master-0 kubenswrapper[28149]: I0313 13:14:29.260065 28149 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/60fb7814-d2a1-47d5-9d6c-559bc67a2442-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "60fb7814-d2a1-47d5-9d6c-559bc67a2442" (UID: "60fb7814-d2a1-47d5-9d6c-559bc67a2442"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 13 13:14:29.266831 master-0 kubenswrapper[28149]: I0313 13:14:29.266778 28149 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/60fb7814-d2a1-47d5-9d6c-559bc67a2442-config-data" (OuterVolumeSpecName: "config-data") pod "60fb7814-d2a1-47d5-9d6c-559bc67a2442" (UID: "60fb7814-d2a1-47d5-9d6c-559bc67a2442"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 13 13:14:29.336041 master-0 kubenswrapper[28149]: I0313 13:14:29.336009 28149 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/60fb7814-d2a1-47d5-9d6c-559bc67a2442-scripts\") on node \"master-0\" DevicePath \"\""
Mar 13 13:14:29.336315 master-0 kubenswrapper[28149]: I0313 13:14:29.336298 28149 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/60fb7814-d2a1-47d5-9d6c-559bc67a2442-config-data\") on node \"master-0\" DevicePath \"\""
Mar 13 13:14:29.336420 master-0 kubenswrapper[28149]: I0313 13:14:29.336407 28149 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g7qs4\" (UniqueName: \"kubernetes.io/projected/60fb7814-d2a1-47d5-9d6c-559bc67a2442-kube-api-access-g7qs4\") on node \"master-0\" DevicePath \"\""
Mar 13 13:14:29.336487 master-0 kubenswrapper[28149]: I0313 13:14:29.336477 28149 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/60fb7814-d2a1-47d5-9d6c-559bc67a2442-combined-ca-bundle\") on node \"master-0\" DevicePath \"\""
Mar 13 13:14:29.508493 master-0 kubenswrapper[28149]: I0313 13:14:29.508437 28149 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-vm54v"
Mar 13 13:14:29.514324 master-0 kubenswrapper[28149]: I0313 13:14:29.513676 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-vm54v" event={"ID":"f6c1cc46-fe3f-4495-be1d-5324d25d39ae","Type":"ContainerDied","Data":"bd7eb3988fa708cc747dcd913262473768da85c0dc17da03a3a13188c599a0a0"}
Mar 13 13:14:29.514324 master-0 kubenswrapper[28149]: I0313 13:14:29.513750 28149 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bd7eb3988fa708cc747dcd913262473768da85c0dc17da03a3a13188c599a0a0"
Mar 13 13:14:29.518052 master-0 kubenswrapper[28149]: I0313 13:14:29.518002 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-99x6g" event={"ID":"60fb7814-d2a1-47d5-9d6c-559bc67a2442","Type":"ContainerDied","Data":"1fd85d8213855d54af876fbdeb293514d9005ae5c709b8dd6582d3a15db96fe9"}
Mar 13 13:14:29.518203 master-0 kubenswrapper[28149]: I0313 13:14:29.518058 28149 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1fd85d8213855d54af876fbdeb293514d9005ae5c709b8dd6582d3a15db96fe9"
Mar 13 13:14:29.518203 master-0 kubenswrapper[28149]: I0313 13:14:29.518056 28149 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-99x6g"
Mar 13 13:14:29.521523 master-0 kubenswrapper[28149]: I0313 13:14:29.521485 28149 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-compute-ironic-compute-0"
Mar 13 13:14:29.588168 master-0 kubenswrapper[28149]: I0313 13:14:29.585626 28149 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-compute-ironic-compute-0"
Mar 13 13:14:29.630329 master-0 kubenswrapper[28149]: I0313 13:14:29.630113 28149 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-0"]
Mar 13 13:14:29.630877 master-0 kubenswrapper[28149]: E0313 13:14:29.630756 28149 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f6c1cc46-fe3f-4495-be1d-5324d25d39ae" containerName="nova-manage"
Mar 13 13:14:29.630877 master-0 kubenswrapper[28149]: I0313 13:14:29.630786 28149 state_mem.go:107] "Deleted CPUSet assignment" podUID="f6c1cc46-fe3f-4495-be1d-5324d25d39ae" containerName="nova-manage"
Mar 13 13:14:29.630877 master-0 kubenswrapper[28149]: E0313 13:14:29.630808 28149 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="60fb7814-d2a1-47d5-9d6c-559bc67a2442" containerName="nova-cell1-conductor-db-sync"
Mar 13 13:14:29.630877 master-0 kubenswrapper[28149]: I0313 13:14:29.630816 28149 state_mem.go:107] "Deleted CPUSet assignment" podUID="60fb7814-d2a1-47d5-9d6c-559bc67a2442" containerName="nova-cell1-conductor-db-sync"
Mar 13 13:14:29.630877 master-0 kubenswrapper[28149]: E0313 13:14:29.630853 28149 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a41644d0-10a5-4e06-8da5-15690e85b5a3" containerName="init"
Mar 13 13:14:29.630877 master-0 kubenswrapper[28149]: I0313 13:14:29.630859 28149 state_mem.go:107] "Deleted CPUSet assignment" podUID="a41644d0-10a5-4e06-8da5-15690e85b5a3" containerName="init"
Mar 13 13:14:29.631324 master-0 kubenswrapper[28149]: E0313 13:14:29.630896 28149
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a41644d0-10a5-4e06-8da5-15690e85b5a3" containerName="dnsmasq-dns" Mar 13 13:14:29.631324 master-0 kubenswrapper[28149]: I0313 13:14:29.630903 28149 state_mem.go:107] "Deleted CPUSet assignment" podUID="a41644d0-10a5-4e06-8da5-15690e85b5a3" containerName="dnsmasq-dns" Mar 13 13:14:29.631324 master-0 kubenswrapper[28149]: I0313 13:14:29.631186 28149 memory_manager.go:354] "RemoveStaleState removing state" podUID="f6c1cc46-fe3f-4495-be1d-5324d25d39ae" containerName="nova-manage" Mar 13 13:14:29.631324 master-0 kubenswrapper[28149]: I0313 13:14:29.631207 28149 memory_manager.go:354] "RemoveStaleState removing state" podUID="a41644d0-10a5-4e06-8da5-15690e85b5a3" containerName="dnsmasq-dns" Mar 13 13:14:29.631324 master-0 kubenswrapper[28149]: I0313 13:14:29.631223 28149 memory_manager.go:354] "RemoveStaleState removing state" podUID="60fb7814-d2a1-47d5-9d6c-559bc67a2442" containerName="nova-cell1-conductor-db-sync" Mar 13 13:14:29.632151 master-0 kubenswrapper[28149]: I0313 13:14:29.632033 28149 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-0" Mar 13 13:14:29.649955 master-0 kubenswrapper[28149]: I0313 13:14:29.649901 28149 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Mar 13 13:14:29.660724 master-0 kubenswrapper[28149]: I0313 13:14:29.657108 28149 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Mar 13 13:14:29.741266 master-0 kubenswrapper[28149]: I0313 13:14:29.741205 28149 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Mar 13 13:14:29.742638 master-0 kubenswrapper[28149]: I0313 13:14:29.741608 28149 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="ec7cd004-beaf-4191-bb66-90f3adf2c8b5" containerName="nova-api-log" containerID="cri-o://85e519ba126df675a0a525ca5713d2c9a49f536033b4155ff638ce97ea0e4e2d" gracePeriod=30 Mar 13 13:14:29.742638 master-0 kubenswrapper[28149]: I0313 13:14:29.741801 28149 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="ec7cd004-beaf-4191-bb66-90f3adf2c8b5" containerName="nova-api-api" containerID="cri-o://a45b7dffc687821ce6c7d6ded423c455d269ce9e4d2cc2704c2eeb371169df28" gracePeriod=30 Mar 13 13:14:29.756895 master-0 kubenswrapper[28149]: I0313 13:14:29.756805 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9d7d2847-108c-40e0-9195-2cbf3178858e-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"9d7d2847-108c-40e0-9195-2cbf3178858e\") " pod="openstack/nova-cell1-conductor-0" Mar 13 13:14:29.757217 master-0 kubenswrapper[28149]: I0313 13:14:29.757114 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zlz26\" (UniqueName: 
\"kubernetes.io/projected/9d7d2847-108c-40e0-9195-2cbf3178858e-kube-api-access-zlz26\") pod \"nova-cell1-conductor-0\" (UID: \"9d7d2847-108c-40e0-9195-2cbf3178858e\") " pod="openstack/nova-cell1-conductor-0" Mar 13 13:14:29.757217 master-0 kubenswrapper[28149]: I0313 13:14:29.757162 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9d7d2847-108c-40e0-9195-2cbf3178858e-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"9d7d2847-108c-40e0-9195-2cbf3178858e\") " pod="openstack/nova-cell1-conductor-0" Mar 13 13:14:29.778861 master-0 kubenswrapper[28149]: I0313 13:14:29.778261 28149 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Mar 13 13:14:29.778861 master-0 kubenswrapper[28149]: I0313 13:14:29.778539 28149 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="e422fd27-d0eb-49f9-a05d-7fd39eb9fada" containerName="nova-scheduler-scheduler" containerID="cri-o://0b158de554ea1b22d2a6593f4ed06ad111e825d9b3b670395ed04447bafa5c3d" gracePeriod=30 Mar 13 13:14:29.859782 master-0 kubenswrapper[28149]: I0313 13:14:29.859735 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9d7d2847-108c-40e0-9195-2cbf3178858e-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"9d7d2847-108c-40e0-9195-2cbf3178858e\") " pod="openstack/nova-cell1-conductor-0" Mar 13 13:14:29.859957 master-0 kubenswrapper[28149]: I0313 13:14:29.859932 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zlz26\" (UniqueName: \"kubernetes.io/projected/9d7d2847-108c-40e0-9195-2cbf3178858e-kube-api-access-zlz26\") pod \"nova-cell1-conductor-0\" (UID: \"9d7d2847-108c-40e0-9195-2cbf3178858e\") " pod="openstack/nova-cell1-conductor-0" Mar 13 13:14:29.860030 master-0 
kubenswrapper[28149]: I0313 13:14:29.859963 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9d7d2847-108c-40e0-9195-2cbf3178858e-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"9d7d2847-108c-40e0-9195-2cbf3178858e\") " pod="openstack/nova-cell1-conductor-0" Mar 13 13:14:29.866870 master-0 kubenswrapper[28149]: I0313 13:14:29.863694 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9d7d2847-108c-40e0-9195-2cbf3178858e-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"9d7d2847-108c-40e0-9195-2cbf3178858e\") " pod="openstack/nova-cell1-conductor-0" Mar 13 13:14:29.867915 master-0 kubenswrapper[28149]: I0313 13:14:29.867877 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9d7d2847-108c-40e0-9195-2cbf3178858e-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"9d7d2847-108c-40e0-9195-2cbf3178858e\") " pod="openstack/nova-cell1-conductor-0" Mar 13 13:14:29.890161 master-0 kubenswrapper[28149]: I0313 13:14:29.884706 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zlz26\" (UniqueName: \"kubernetes.io/projected/9d7d2847-108c-40e0-9195-2cbf3178858e-kube-api-access-zlz26\") pod \"nova-cell1-conductor-0\" (UID: \"9d7d2847-108c-40e0-9195-2cbf3178858e\") " pod="openstack/nova-cell1-conductor-0" Mar 13 13:14:29.984693 master-0 kubenswrapper[28149]: I0313 13:14:29.984572 28149 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-0" Mar 13 13:14:30.546859 master-0 kubenswrapper[28149]: I0313 13:14:30.546802 28149 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Mar 13 13:14:30.550208 master-0 kubenswrapper[28149]: I0313 13:14:30.550107 28149 generic.go:334] "Generic (PLEG): container finished" podID="ec7cd004-beaf-4191-bb66-90f3adf2c8b5" containerID="85e519ba126df675a0a525ca5713d2c9a49f536033b4155ff638ce97ea0e4e2d" exitCode=143 Mar 13 13:14:30.551597 master-0 kubenswrapper[28149]: I0313 13:14:30.551558 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"ec7cd004-beaf-4191-bb66-90f3adf2c8b5","Type":"ContainerDied","Data":"85e519ba126df675a0a525ca5713d2c9a49f536033b4155ff638ce97ea0e4e2d"} Mar 13 13:14:31.580571 master-0 kubenswrapper[28149]: I0313 13:14:31.580505 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"9d7d2847-108c-40e0-9195-2cbf3178858e","Type":"ContainerStarted","Data":"24f586404bd99fdd555633a0868e0554d1ff8c9635dfdc4bfa54a836e4cb3322"} Mar 13 13:14:31.580571 master-0 kubenswrapper[28149]: I0313 13:14:31.580570 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"9d7d2847-108c-40e0-9195-2cbf3178858e","Type":"ContainerStarted","Data":"e8354ae2d87608a63862905cbb8a90f2ab2d6aa677b118a479033ff55c3d5688"} Mar 13 13:14:31.581488 master-0 kubenswrapper[28149]: I0313 13:14:31.581458 28149 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-conductor-0" Mar 13 13:14:31.583051 master-0 kubenswrapper[28149]: I0313 13:14:31.582997 28149 generic.go:334] "Generic (PLEG): container finished" podID="e422fd27-d0eb-49f9-a05d-7fd39eb9fada" containerID="0b158de554ea1b22d2a6593f4ed06ad111e825d9b3b670395ed04447bafa5c3d" exitCode=0 Mar 13 13:14:31.583154 master-0 kubenswrapper[28149]: I0313 13:14:31.583082 28149 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"e422fd27-d0eb-49f9-a05d-7fd39eb9fada","Type":"ContainerDied","Data":"0b158de554ea1b22d2a6593f4ed06ad111e825d9b3b670395ed04447bafa5c3d"} Mar 13 13:14:31.624482 master-0 kubenswrapper[28149]: I0313 13:14:31.611473 28149 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-0" podStartSLOduration=2.611452339 podStartE2EDuration="2.611452339s" podCreationTimestamp="2026-03-13 13:14:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 13:14:31.609271711 +0000 UTC m=+1245.262736870" watchObservedRunningTime="2026-03-13 13:14:31.611452339 +0000 UTC m=+1245.264917498" Mar 13 13:14:32.367343 master-0 kubenswrapper[28149]: I0313 13:14:32.367292 28149 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Mar 13 13:14:32.411111 master-0 kubenswrapper[28149]: I0313 13:14:32.411007 28149 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dbhw6\" (UniqueName: \"kubernetes.io/projected/e422fd27-d0eb-49f9-a05d-7fd39eb9fada-kube-api-access-dbhw6\") pod \"e422fd27-d0eb-49f9-a05d-7fd39eb9fada\" (UID: \"e422fd27-d0eb-49f9-a05d-7fd39eb9fada\") " Mar 13 13:14:32.411878 master-0 kubenswrapper[28149]: I0313 13:14:32.411123 28149 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e422fd27-d0eb-49f9-a05d-7fd39eb9fada-combined-ca-bundle\") pod \"e422fd27-d0eb-49f9-a05d-7fd39eb9fada\" (UID: \"e422fd27-d0eb-49f9-a05d-7fd39eb9fada\") " Mar 13 13:14:32.411878 master-0 kubenswrapper[28149]: I0313 13:14:32.411164 28149 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/e422fd27-d0eb-49f9-a05d-7fd39eb9fada-config-data\") pod \"e422fd27-d0eb-49f9-a05d-7fd39eb9fada\" (UID: \"e422fd27-d0eb-49f9-a05d-7fd39eb9fada\") " Mar 13 13:14:32.421016 master-0 kubenswrapper[28149]: I0313 13:14:32.420964 28149 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e422fd27-d0eb-49f9-a05d-7fd39eb9fada-kube-api-access-dbhw6" (OuterVolumeSpecName: "kube-api-access-dbhw6") pod "e422fd27-d0eb-49f9-a05d-7fd39eb9fada" (UID: "e422fd27-d0eb-49f9-a05d-7fd39eb9fada"). InnerVolumeSpecName "kube-api-access-dbhw6". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 13:14:32.454787 master-0 kubenswrapper[28149]: I0313 13:14:32.454651 28149 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e422fd27-d0eb-49f9-a05d-7fd39eb9fada-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e422fd27-d0eb-49f9-a05d-7fd39eb9fada" (UID: "e422fd27-d0eb-49f9-a05d-7fd39eb9fada"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 13:14:32.455080 master-0 kubenswrapper[28149]: I0313 13:14:32.455028 28149 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e422fd27-d0eb-49f9-a05d-7fd39eb9fada-config-data" (OuterVolumeSpecName: "config-data") pod "e422fd27-d0eb-49f9-a05d-7fd39eb9fada" (UID: "e422fd27-d0eb-49f9-a05d-7fd39eb9fada"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 13:14:32.516461 master-0 kubenswrapper[28149]: I0313 13:14:32.514875 28149 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dbhw6\" (UniqueName: \"kubernetes.io/projected/e422fd27-d0eb-49f9-a05d-7fd39eb9fada-kube-api-access-dbhw6\") on node \"master-0\" DevicePath \"\"" Mar 13 13:14:32.516461 master-0 kubenswrapper[28149]: I0313 13:14:32.514919 28149 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e422fd27-d0eb-49f9-a05d-7fd39eb9fada-combined-ca-bundle\") on node \"master-0\" DevicePath \"\"" Mar 13 13:14:32.516461 master-0 kubenswrapper[28149]: I0313 13:14:32.514929 28149 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e422fd27-d0eb-49f9-a05d-7fd39eb9fada-config-data\") on node \"master-0\" DevicePath \"\"" Mar 13 13:14:32.604428 master-0 kubenswrapper[28149]: I0313 13:14:32.604323 28149 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Mar 13 13:14:32.605300 master-0 kubenswrapper[28149]: I0313 13:14:32.604333 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"e422fd27-d0eb-49f9-a05d-7fd39eb9fada","Type":"ContainerDied","Data":"0e75dcc278761ae18aa52969fcfb6f68f0e04e9d456099c5888207a6f6e15da8"} Mar 13 13:14:32.605419 master-0 kubenswrapper[28149]: I0313 13:14:32.605309 28149 scope.go:117] "RemoveContainer" containerID="0b158de554ea1b22d2a6593f4ed06ad111e825d9b3b670395ed04447bafa5c3d" Mar 13 13:14:32.656203 master-0 kubenswrapper[28149]: I0313 13:14:32.656122 28149 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Mar 13 13:14:32.680352 master-0 kubenswrapper[28149]: I0313 13:14:32.680276 28149 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Mar 13 13:14:32.723302 master-0 kubenswrapper[28149]: I0313 13:14:32.720961 28149 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e422fd27-d0eb-49f9-a05d-7fd39eb9fada" path="/var/lib/kubelet/pods/e422fd27-d0eb-49f9-a05d-7fd39eb9fada/volumes" Mar 13 13:14:32.729272 master-0 kubenswrapper[28149]: I0313 13:14:32.729220 28149 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Mar 13 13:14:32.731130 master-0 kubenswrapper[28149]: E0313 13:14:32.730063 28149 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e422fd27-d0eb-49f9-a05d-7fd39eb9fada" containerName="nova-scheduler-scheduler" Mar 13 13:14:32.731310 master-0 kubenswrapper[28149]: I0313 13:14:32.731150 28149 state_mem.go:107] "Deleted CPUSet assignment" podUID="e422fd27-d0eb-49f9-a05d-7fd39eb9fada" containerName="nova-scheduler-scheduler" Mar 13 13:14:32.731712 master-0 kubenswrapper[28149]: I0313 13:14:32.731667 28149 memory_manager.go:354] "RemoveStaleState removing state" podUID="e422fd27-d0eb-49f9-a05d-7fd39eb9fada" containerName="nova-scheduler-scheduler" Mar 13 
13:14:32.732792 master-0 kubenswrapper[28149]: I0313 13:14:32.732764 28149 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Mar 13 13:14:32.737968 master-0 kubenswrapper[28149]: I0313 13:14:32.737718 28149 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Mar 13 13:14:32.769245 master-0 kubenswrapper[28149]: I0313 13:14:32.761481 28149 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Mar 13 13:14:33.061495 master-0 kubenswrapper[28149]: I0313 13:14:33.060696 28149 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Mar 13 13:14:33.061495 master-0 kubenswrapper[28149]: I0313 13:14:33.060773 28149 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Mar 13 13:14:33.072467 master-0 kubenswrapper[28149]: I0313 13:14:33.069464 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8751eeff-49e7-416e-8f8e-037bc9e956e6-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"8751eeff-49e7-416e-8f8e-037bc9e956e6\") " pod="openstack/nova-scheduler-0" Mar 13 13:14:33.072467 master-0 kubenswrapper[28149]: I0313 13:14:33.069512 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4cbjw\" (UniqueName: \"kubernetes.io/projected/8751eeff-49e7-416e-8f8e-037bc9e956e6-kube-api-access-4cbjw\") pod \"nova-scheduler-0\" (UID: \"8751eeff-49e7-416e-8f8e-037bc9e956e6\") " pod="openstack/nova-scheduler-0" Mar 13 13:14:33.072467 master-0 kubenswrapper[28149]: I0313 13:14:33.069569 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8751eeff-49e7-416e-8f8e-037bc9e956e6-config-data\") pod \"nova-scheduler-0\" 
(UID: \"8751eeff-49e7-416e-8f8e-037bc9e956e6\") " pod="openstack/nova-scheduler-0" Mar 13 13:14:33.174291 master-0 kubenswrapper[28149]: I0313 13:14:33.171948 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8751eeff-49e7-416e-8f8e-037bc9e956e6-config-data\") pod \"nova-scheduler-0\" (UID: \"8751eeff-49e7-416e-8f8e-037bc9e956e6\") " pod="openstack/nova-scheduler-0" Mar 13 13:14:33.174291 master-0 kubenswrapper[28149]: I0313 13:14:33.172336 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4cbjw\" (UniqueName: \"kubernetes.io/projected/8751eeff-49e7-416e-8f8e-037bc9e956e6-kube-api-access-4cbjw\") pod \"nova-scheduler-0\" (UID: \"8751eeff-49e7-416e-8f8e-037bc9e956e6\") " pod="openstack/nova-scheduler-0" Mar 13 13:14:33.174291 master-0 kubenswrapper[28149]: I0313 13:14:33.172369 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8751eeff-49e7-416e-8f8e-037bc9e956e6-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"8751eeff-49e7-416e-8f8e-037bc9e956e6\") " pod="openstack/nova-scheduler-0" Mar 13 13:14:33.175566 master-0 kubenswrapper[28149]: I0313 13:14:33.175340 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8751eeff-49e7-416e-8f8e-037bc9e956e6-config-data\") pod \"nova-scheduler-0\" (UID: \"8751eeff-49e7-416e-8f8e-037bc9e956e6\") " pod="openstack/nova-scheduler-0" Mar 13 13:14:33.178105 master-0 kubenswrapper[28149]: I0313 13:14:33.177934 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8751eeff-49e7-416e-8f8e-037bc9e956e6-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"8751eeff-49e7-416e-8f8e-037bc9e956e6\") " pod="openstack/nova-scheduler-0" Mar 13 13:14:33.196784 master-0 
kubenswrapper[28149]: I0313 13:14:33.192826 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4cbjw\" (UniqueName: \"kubernetes.io/projected/8751eeff-49e7-416e-8f8e-037bc9e956e6-kube-api-access-4cbjw\") pod \"nova-scheduler-0\" (UID: \"8751eeff-49e7-416e-8f8e-037bc9e956e6\") " pod="openstack/nova-scheduler-0" Mar 13 13:14:33.402803 master-0 kubenswrapper[28149]: I0313 13:14:33.398452 28149 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Mar 13 13:14:33.645627 master-0 kubenswrapper[28149]: I0313 13:14:33.645239 28149 generic.go:334] "Generic (PLEG): container finished" podID="ec7cd004-beaf-4191-bb66-90f3adf2c8b5" containerID="a45b7dffc687821ce6c7d6ded423c455d269ce9e4d2cc2704c2eeb371169df28" exitCode=0 Mar 13 13:14:33.645627 master-0 kubenswrapper[28149]: I0313 13:14:33.645284 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"ec7cd004-beaf-4191-bb66-90f3adf2c8b5","Type":"ContainerDied","Data":"a45b7dffc687821ce6c7d6ded423c455d269ce9e4d2cc2704c2eeb371169df28"} Mar 13 13:14:33.645627 master-0 kubenswrapper[28149]: I0313 13:14:33.645362 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"ec7cd004-beaf-4191-bb66-90f3adf2c8b5","Type":"ContainerDied","Data":"eed6e14967aa4c0d4bd79fabdb5bddec045b4a9ac619435ba55c33399e59ac2f"} Mar 13 13:14:33.645627 master-0 kubenswrapper[28149]: I0313 13:14:33.645376 28149 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="eed6e14967aa4c0d4bd79fabdb5bddec045b4a9ac619435ba55c33399e59ac2f" Mar 13 13:14:33.647183 master-0 kubenswrapper[28149]: I0313 13:14:33.647130 28149 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Mar 13 13:14:33.803538 master-0 kubenswrapper[28149]: I0313 13:14:33.803480 28149 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ec7cd004-beaf-4191-bb66-90f3adf2c8b5-config-data\") pod \"ec7cd004-beaf-4191-bb66-90f3adf2c8b5\" (UID: \"ec7cd004-beaf-4191-bb66-90f3adf2c8b5\") " Mar 13 13:14:33.803775 master-0 kubenswrapper[28149]: I0313 13:14:33.803630 28149 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ec7cd004-beaf-4191-bb66-90f3adf2c8b5-logs\") pod \"ec7cd004-beaf-4191-bb66-90f3adf2c8b5\" (UID: \"ec7cd004-beaf-4191-bb66-90f3adf2c8b5\") " Mar 13 13:14:33.803894 master-0 kubenswrapper[28149]: I0313 13:14:33.803865 28149 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ec7cd004-beaf-4191-bb66-90f3adf2c8b5-combined-ca-bundle\") pod \"ec7cd004-beaf-4191-bb66-90f3adf2c8b5\" (UID: \"ec7cd004-beaf-4191-bb66-90f3adf2c8b5\") " Mar 13 13:14:33.804007 master-0 kubenswrapper[28149]: I0313 13:14:33.803981 28149 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rkfbz\" (UniqueName: \"kubernetes.io/projected/ec7cd004-beaf-4191-bb66-90f3adf2c8b5-kube-api-access-rkfbz\") pod \"ec7cd004-beaf-4191-bb66-90f3adf2c8b5\" (UID: \"ec7cd004-beaf-4191-bb66-90f3adf2c8b5\") " Mar 13 13:14:33.807124 master-0 kubenswrapper[28149]: I0313 13:14:33.807071 28149 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ec7cd004-beaf-4191-bb66-90f3adf2c8b5-logs" (OuterVolumeSpecName: "logs") pod "ec7cd004-beaf-4191-bb66-90f3adf2c8b5" (UID: "ec7cd004-beaf-4191-bb66-90f3adf2c8b5"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 13 13:14:33.810538 master-0 kubenswrapper[28149]: I0313 13:14:33.810483 28149 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ec7cd004-beaf-4191-bb66-90f3adf2c8b5-kube-api-access-rkfbz" (OuterVolumeSpecName: "kube-api-access-rkfbz") pod "ec7cd004-beaf-4191-bb66-90f3adf2c8b5" (UID: "ec7cd004-beaf-4191-bb66-90f3adf2c8b5"). InnerVolumeSpecName "kube-api-access-rkfbz". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 13:14:34.225043 master-0 kubenswrapper[28149]: I0313 13:14:34.223986 28149 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rkfbz\" (UniqueName: \"kubernetes.io/projected/ec7cd004-beaf-4191-bb66-90f3adf2c8b5-kube-api-access-rkfbz\") on node \"master-0\" DevicePath \"\"" Mar 13 13:14:34.225043 master-0 kubenswrapper[28149]: I0313 13:14:34.224038 28149 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ec7cd004-beaf-4191-bb66-90f3adf2c8b5-logs\") on node \"master-0\" DevicePath \"\"" Mar 13 13:14:34.230655 master-0 kubenswrapper[28149]: I0313 13:14:34.230189 28149 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ec7cd004-beaf-4191-bb66-90f3adf2c8b5-config-data" (OuterVolumeSpecName: "config-data") pod "ec7cd004-beaf-4191-bb66-90f3adf2c8b5" (UID: "ec7cd004-beaf-4191-bb66-90f3adf2c8b5"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 13:14:34.250109 master-0 kubenswrapper[28149]: I0313 13:14:34.248929 28149 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ec7cd004-beaf-4191-bb66-90f3adf2c8b5-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ec7cd004-beaf-4191-bb66-90f3adf2c8b5" (UID: "ec7cd004-beaf-4191-bb66-90f3adf2c8b5"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 13:14:34.315005 master-0 kubenswrapper[28149]: W0313 13:14:34.314936 28149 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8751eeff_49e7_416e_8f8e_037bc9e956e6.slice/crio-0769b7dbd8234cf51235da9db6f777d650ff81e1ee934b61c686eb87e68df202 WatchSource:0}: Error finding container 0769b7dbd8234cf51235da9db6f777d650ff81e1ee934b61c686eb87e68df202: Status 404 returned error can't find the container with id 0769b7dbd8234cf51235da9db6f777d650ff81e1ee934b61c686eb87e68df202 Mar 13 13:14:34.322427 master-0 kubenswrapper[28149]: I0313 13:14:34.321862 28149 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Mar 13 13:14:34.341167 master-0 kubenswrapper[28149]: I0313 13:14:34.340022 28149 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ec7cd004-beaf-4191-bb66-90f3adf2c8b5-combined-ca-bundle\") on node \"master-0\" DevicePath \"\"" Mar 13 13:14:34.341167 master-0 kubenswrapper[28149]: I0313 13:14:34.340063 28149 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ec7cd004-beaf-4191-bb66-90f3adf2c8b5-config-data\") on node \"master-0\" DevicePath \"\"" Mar 13 13:14:34.685413 master-0 kubenswrapper[28149]: I0313 13:14:34.683825 28149 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Mar 13 13:14:34.686420 master-0 kubenswrapper[28149]: I0313 13:14:34.686353 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"8751eeff-49e7-416e-8f8e-037bc9e956e6","Type":"ContainerStarted","Data":"cc2bac34fe820d08bbd47cdca973999db27c61906cfb7d5b845736f73690704d"} Mar 13 13:14:34.686420 master-0 kubenswrapper[28149]: I0313 13:14:34.686405 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"8751eeff-49e7-416e-8f8e-037bc9e956e6","Type":"ContainerStarted","Data":"0769b7dbd8234cf51235da9db6f777d650ff81e1ee934b61c686eb87e68df202"} Mar 13 13:14:34.714500 master-0 kubenswrapper[28149]: I0313 13:14:34.713919 28149 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.7138951369999997 podStartE2EDuration="2.713895137s" podCreationTimestamp="2026-03-13 13:14:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 13:14:34.711098551 +0000 UTC m=+1248.364563700" watchObservedRunningTime="2026-03-13 13:14:34.713895137 +0000 UTC m=+1248.367360296" Mar 13 13:14:34.756529 master-0 kubenswrapper[28149]: I0313 13:14:34.754573 28149 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Mar 13 13:14:34.772164 master-0 kubenswrapper[28149]: I0313 13:14:34.771594 28149 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Mar 13 13:14:34.787394 master-0 kubenswrapper[28149]: I0313 13:14:34.782921 28149 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Mar 13 13:14:34.787394 master-0 kubenswrapper[28149]: E0313 13:14:34.783521 28149 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ec7cd004-beaf-4191-bb66-90f3adf2c8b5" containerName="nova-api-api" Mar 13 13:14:34.787394 master-0 
kubenswrapper[28149]: I0313 13:14:34.783539 28149 state_mem.go:107] "Deleted CPUSet assignment" podUID="ec7cd004-beaf-4191-bb66-90f3adf2c8b5" containerName="nova-api-api" Mar 13 13:14:34.787394 master-0 kubenswrapper[28149]: E0313 13:14:34.783565 28149 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ec7cd004-beaf-4191-bb66-90f3adf2c8b5" containerName="nova-api-log" Mar 13 13:14:34.787394 master-0 kubenswrapper[28149]: I0313 13:14:34.783571 28149 state_mem.go:107] "Deleted CPUSet assignment" podUID="ec7cd004-beaf-4191-bb66-90f3adf2c8b5" containerName="nova-api-log" Mar 13 13:14:34.787394 master-0 kubenswrapper[28149]: I0313 13:14:34.783864 28149 memory_manager.go:354] "RemoveStaleState removing state" podUID="ec7cd004-beaf-4191-bb66-90f3adf2c8b5" containerName="nova-api-log" Mar 13 13:14:34.787394 master-0 kubenswrapper[28149]: I0313 13:14:34.783897 28149 memory_manager.go:354] "RemoveStaleState removing state" podUID="ec7cd004-beaf-4191-bb66-90f3adf2c8b5" containerName="nova-api-api" Mar 13 13:14:34.787394 master-0 kubenswrapper[28149]: I0313 13:14:34.785500 28149 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Mar 13 13:14:34.795154 master-0 kubenswrapper[28149]: I0313 13:14:34.792954 28149 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Mar 13 13:14:34.829765 master-0 kubenswrapper[28149]: I0313 13:14:34.829665 28149 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Mar 13 13:14:34.983429 master-0 kubenswrapper[28149]: I0313 13:14:34.983350 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d6xh2\" (UniqueName: \"kubernetes.io/projected/2558d1e1-d076-4b79-8591-d1e7ec5beaec-kube-api-access-d6xh2\") pod \"nova-api-0\" (UID: \"2558d1e1-d076-4b79-8591-d1e7ec5beaec\") " pod="openstack/nova-api-0" Mar 13 13:14:34.983672 master-0 kubenswrapper[28149]: I0313 13:14:34.983542 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2558d1e1-d076-4b79-8591-d1e7ec5beaec-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"2558d1e1-d076-4b79-8591-d1e7ec5beaec\") " pod="openstack/nova-api-0" Mar 13 13:14:34.983770 master-0 kubenswrapper[28149]: I0313 13:14:34.983740 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2558d1e1-d076-4b79-8591-d1e7ec5beaec-logs\") pod \"nova-api-0\" (UID: \"2558d1e1-d076-4b79-8591-d1e7ec5beaec\") " pod="openstack/nova-api-0" Mar 13 13:14:34.983834 master-0 kubenswrapper[28149]: I0313 13:14:34.983769 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2558d1e1-d076-4b79-8591-d1e7ec5beaec-config-data\") pod \"nova-api-0\" (UID: \"2558d1e1-d076-4b79-8591-d1e7ec5beaec\") " pod="openstack/nova-api-0" Mar 13 13:14:35.086264 master-0 kubenswrapper[28149]: I0313 
13:14:35.086106 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2558d1e1-d076-4b79-8591-d1e7ec5beaec-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"2558d1e1-d076-4b79-8591-d1e7ec5beaec\") " pod="openstack/nova-api-0" Mar 13 13:14:35.087115 master-0 kubenswrapper[28149]: I0313 13:14:35.087072 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2558d1e1-d076-4b79-8591-d1e7ec5beaec-logs\") pod \"nova-api-0\" (UID: \"2558d1e1-d076-4b79-8591-d1e7ec5beaec\") " pod="openstack/nova-api-0" Mar 13 13:14:35.087557 master-0 kubenswrapper[28149]: I0313 13:14:35.087523 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2558d1e1-d076-4b79-8591-d1e7ec5beaec-logs\") pod \"nova-api-0\" (UID: \"2558d1e1-d076-4b79-8591-d1e7ec5beaec\") " pod="openstack/nova-api-0" Mar 13 13:14:35.089636 master-0 kubenswrapper[28149]: I0313 13:14:35.087108 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2558d1e1-d076-4b79-8591-d1e7ec5beaec-config-data\") pod \"nova-api-0\" (UID: \"2558d1e1-d076-4b79-8591-d1e7ec5beaec\") " pod="openstack/nova-api-0" Mar 13 13:14:35.089861 master-0 kubenswrapper[28149]: I0313 13:14:35.089828 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d6xh2\" (UniqueName: \"kubernetes.io/projected/2558d1e1-d076-4b79-8591-d1e7ec5beaec-kube-api-access-d6xh2\") pod \"nova-api-0\" (UID: \"2558d1e1-d076-4b79-8591-d1e7ec5beaec\") " pod="openstack/nova-api-0" Mar 13 13:14:35.090585 master-0 kubenswrapper[28149]: I0313 13:14:35.090533 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2558d1e1-d076-4b79-8591-d1e7ec5beaec-config-data\") pod 
\"nova-api-0\" (UID: \"2558d1e1-d076-4b79-8591-d1e7ec5beaec\") " pod="openstack/nova-api-0" Mar 13 13:14:35.104695 master-0 kubenswrapper[28149]: I0313 13:14:35.104643 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2558d1e1-d076-4b79-8591-d1e7ec5beaec-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"2558d1e1-d076-4b79-8591-d1e7ec5beaec\") " pod="openstack/nova-api-0" Mar 13 13:14:35.107697 master-0 kubenswrapper[28149]: I0313 13:14:35.107654 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d6xh2\" (UniqueName: \"kubernetes.io/projected/2558d1e1-d076-4b79-8591-d1e7ec5beaec-kube-api-access-d6xh2\") pod \"nova-api-0\" (UID: \"2558d1e1-d076-4b79-8591-d1e7ec5beaec\") " pod="openstack/nova-api-0" Mar 13 13:14:35.147930 master-0 kubenswrapper[28149]: I0313 13:14:35.147859 28149 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Mar 13 13:14:35.665203 master-0 kubenswrapper[28149]: I0313 13:14:35.660281 28149 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Mar 13 13:14:35.741813 master-0 kubenswrapper[28149]: I0313 13:14:35.741650 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"2558d1e1-d076-4b79-8591-d1e7ec5beaec","Type":"ContainerStarted","Data":"2631daf48b0300f9351943060982ca4c28e91094ad606fdcc87e3207db9980d9"} Mar 13 13:14:36.944904 master-0 kubenswrapper[28149]: I0313 13:14:36.944668 28149 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ec7cd004-beaf-4191-bb66-90f3adf2c8b5" path="/var/lib/kubelet/pods/ec7cd004-beaf-4191-bb66-90f3adf2c8b5/volumes" Mar 13 13:14:36.949662 master-0 kubenswrapper[28149]: I0313 13:14:36.949603 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" 
event={"ID":"2558d1e1-d076-4b79-8591-d1e7ec5beaec","Type":"ContainerStarted","Data":"1f2c978d66921698e61f0efc804ddcefb2484929e4de37e25b47bfa97b66007a"} Mar 13 13:14:36.949835 master-0 kubenswrapper[28149]: I0313 13:14:36.949687 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"2558d1e1-d076-4b79-8591-d1e7ec5beaec","Type":"ContainerStarted","Data":"f7cec101b047b2f8a727c0417081b0d8950c8dd051d0751f87497369cea16285"} Mar 13 13:14:37.166468 master-0 kubenswrapper[28149]: I0313 13:14:37.166377 28149 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=3.166345886 podStartE2EDuration="3.166345886s" podCreationTimestamp="2026-03-13 13:14:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 13:14:37.158431653 +0000 UTC m=+1250.811896812" watchObservedRunningTime="2026-03-13 13:14:37.166345886 +0000 UTC m=+1250.819811045" Mar 13 13:14:38.399095 master-0 kubenswrapper[28149]: I0313 13:14:38.399007 28149 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Mar 13 13:14:40.142480 master-0 kubenswrapper[28149]: I0313 13:14:40.142425 28149 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-conductor-0" Mar 13 13:14:43.400097 master-0 kubenswrapper[28149]: I0313 13:14:43.400026 28149 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Mar 13 13:14:43.435763 master-0 kubenswrapper[28149]: I0313 13:14:43.435710 28149 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Mar 13 13:14:44.308688 master-0 kubenswrapper[28149]: I0313 13:14:44.308642 28149 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Mar 13 13:14:44.326507 master-0 kubenswrapper[28149]: I0313 13:14:44.326461 28149 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Mar 13 13:14:44.349098 master-0 kubenswrapper[28149]: I0313 13:14:44.348875 28149 generic.go:334] "Generic (PLEG): container finished" podID="04c50291-fa01-44ba-8316-2cff471a4af4" containerID="ed1c12f08e73abe9a91a51f4a62cbe363977c4f51bae3cddc069f965ea1d9e8d" exitCode=137 Mar 13 13:14:44.349345 master-0 kubenswrapper[28149]: I0313 13:14:44.348945 28149 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Mar 13 13:14:44.349345 master-0 kubenswrapper[28149]: I0313 13:14:44.348959 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"04c50291-fa01-44ba-8316-2cff471a4af4","Type":"ContainerDied","Data":"ed1c12f08e73abe9a91a51f4a62cbe363977c4f51bae3cddc069f965ea1d9e8d"} Mar 13 13:14:44.349489 master-0 kubenswrapper[28149]: I0313 13:14:44.349350 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"04c50291-fa01-44ba-8316-2cff471a4af4","Type":"ContainerDied","Data":"3927baa110bb4a7343a1581680760c3fa9bdd2b369b7448e5e8b3be52bc829fc"} Mar 13 13:14:44.349489 master-0 kubenswrapper[28149]: I0313 13:14:44.349378 28149 scope.go:117] "RemoveContainer" containerID="ed1c12f08e73abe9a91a51f4a62cbe363977c4f51bae3cddc069f965ea1d9e8d" Mar 13 13:14:44.351487 master-0 kubenswrapper[28149]: I0313 13:14:44.351456 28149 generic.go:334] "Generic (PLEG): container finished" podID="9dfb097f-c7c9-4933-875e-ff168351b070" containerID="88ce2db7eb86df908f059314962bd0fca85fb2a4f9e3ac09caff5d81aafafac5" exitCode=137 Mar 13 13:14:44.351625 master-0 kubenswrapper[28149]: I0313 13:14:44.351604 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" 
event={"ID":"9dfb097f-c7c9-4933-875e-ff168351b070","Type":"ContainerDied","Data":"88ce2db7eb86df908f059314962bd0fca85fb2a4f9e3ac09caff5d81aafafac5"} Mar 13 13:14:44.351732 master-0 kubenswrapper[28149]: I0313 13:14:44.351711 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"9dfb097f-c7c9-4933-875e-ff168351b070","Type":"ContainerDied","Data":"489190caa09d4e2be93b8a6fb64992761cfc56570ab6a2b805b7f3f4e90e0545"} Mar 13 13:14:44.351892 master-0 kubenswrapper[28149]: I0313 13:14:44.351799 28149 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Mar 13 13:14:44.374524 master-0 kubenswrapper[28149]: I0313 13:14:44.374483 28149 scope.go:117] "RemoveContainer" containerID="addeb280f71ed2a311858abaf3c9bd138eb75417378585b9369064f1d9ecbeed" Mar 13 13:14:44.398795 master-0 kubenswrapper[28149]: I0313 13:14:44.398694 28149 scope.go:117] "RemoveContainer" containerID="ed1c12f08e73abe9a91a51f4a62cbe363977c4f51bae3cddc069f965ea1d9e8d" Mar 13 13:14:44.399340 master-0 kubenswrapper[28149]: E0313 13:14:44.399293 28149 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ed1c12f08e73abe9a91a51f4a62cbe363977c4f51bae3cddc069f965ea1d9e8d\": container with ID starting with ed1c12f08e73abe9a91a51f4a62cbe363977c4f51bae3cddc069f965ea1d9e8d not found: ID does not exist" containerID="ed1c12f08e73abe9a91a51f4a62cbe363977c4f51bae3cddc069f965ea1d9e8d" Mar 13 13:14:44.399416 master-0 kubenswrapper[28149]: I0313 13:14:44.399333 28149 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ed1c12f08e73abe9a91a51f4a62cbe363977c4f51bae3cddc069f965ea1d9e8d"} err="failed to get container status \"ed1c12f08e73abe9a91a51f4a62cbe363977c4f51bae3cddc069f965ea1d9e8d\": rpc error: code = NotFound desc = could not find container 
\"ed1c12f08e73abe9a91a51f4a62cbe363977c4f51bae3cddc069f965ea1d9e8d\": container with ID starting with ed1c12f08e73abe9a91a51f4a62cbe363977c4f51bae3cddc069f965ea1d9e8d not found: ID does not exist" Mar 13 13:14:44.399416 master-0 kubenswrapper[28149]: I0313 13:14:44.399387 28149 scope.go:117] "RemoveContainer" containerID="addeb280f71ed2a311858abaf3c9bd138eb75417378585b9369064f1d9ecbeed" Mar 13 13:14:44.399752 master-0 kubenswrapper[28149]: E0313 13:14:44.399720 28149 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"addeb280f71ed2a311858abaf3c9bd138eb75417378585b9369064f1d9ecbeed\": container with ID starting with addeb280f71ed2a311858abaf3c9bd138eb75417378585b9369064f1d9ecbeed not found: ID does not exist" containerID="addeb280f71ed2a311858abaf3c9bd138eb75417378585b9369064f1d9ecbeed" Mar 13 13:14:44.399799 master-0 kubenswrapper[28149]: I0313 13:14:44.399746 28149 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"addeb280f71ed2a311858abaf3c9bd138eb75417378585b9369064f1d9ecbeed"} err="failed to get container status \"addeb280f71ed2a311858abaf3c9bd138eb75417378585b9369064f1d9ecbeed\": rpc error: code = NotFound desc = could not find container \"addeb280f71ed2a311858abaf3c9bd138eb75417378585b9369064f1d9ecbeed\": container with ID starting with addeb280f71ed2a311858abaf3c9bd138eb75417378585b9369064f1d9ecbeed not found: ID does not exist" Mar 13 13:14:44.399799 master-0 kubenswrapper[28149]: I0313 13:14:44.399761 28149 scope.go:117] "RemoveContainer" containerID="88ce2db7eb86df908f059314962bd0fca85fb2a4f9e3ac09caff5d81aafafac5" Mar 13 13:14:44.424957 master-0 kubenswrapper[28149]: I0313 13:14:44.424911 28149 scope.go:117] "RemoveContainer" containerID="88ce2db7eb86df908f059314962bd0fca85fb2a4f9e3ac09caff5d81aafafac5" Mar 13 13:14:44.425434 master-0 kubenswrapper[28149]: E0313 13:14:44.425396 28149 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"88ce2db7eb86df908f059314962bd0fca85fb2a4f9e3ac09caff5d81aafafac5\": container with ID starting with 88ce2db7eb86df908f059314962bd0fca85fb2a4f9e3ac09caff5d81aafafac5 not found: ID does not exist" containerID="88ce2db7eb86df908f059314962bd0fca85fb2a4f9e3ac09caff5d81aafafac5" Mar 13 13:14:44.425482 master-0 kubenswrapper[28149]: I0313 13:14:44.425435 28149 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"88ce2db7eb86df908f059314962bd0fca85fb2a4f9e3ac09caff5d81aafafac5"} err="failed to get container status \"88ce2db7eb86df908f059314962bd0fca85fb2a4f9e3ac09caff5d81aafafac5\": rpc error: code = NotFound desc = could not find container \"88ce2db7eb86df908f059314962bd0fca85fb2a4f9e3ac09caff5d81aafafac5\": container with ID starting with 88ce2db7eb86df908f059314962bd0fca85fb2a4f9e3ac09caff5d81aafafac5 not found: ID does not exist" Mar 13 13:14:44.611240 master-0 kubenswrapper[28149]: I0313 13:14:44.611028 28149 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Mar 13 13:14:44.702521 master-0 kubenswrapper[28149]: I0313 13:14:44.702461 28149 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-25sbs\" (UniqueName: \"kubernetes.io/projected/04c50291-fa01-44ba-8316-2cff471a4af4-kube-api-access-25sbs\") pod \"04c50291-fa01-44ba-8316-2cff471a4af4\" (UID: \"04c50291-fa01-44ba-8316-2cff471a4af4\") " Mar 13 13:14:44.702772 master-0 kubenswrapper[28149]: I0313 13:14:44.702541 28149 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9dfb097f-c7c9-4933-875e-ff168351b070-combined-ca-bundle\") pod \"9dfb097f-c7c9-4933-875e-ff168351b070\" (UID: \"9dfb097f-c7c9-4933-875e-ff168351b070\") " Mar 13 13:14:44.702772 master-0 kubenswrapper[28149]: I0313 13:14:44.702589 28149 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/04c50291-fa01-44ba-8316-2cff471a4af4-config-data\") pod \"04c50291-fa01-44ba-8316-2cff471a4af4\" (UID: \"04c50291-fa01-44ba-8316-2cff471a4af4\") " Mar 13 13:14:44.703274 master-0 kubenswrapper[28149]: I0313 13:14:44.703230 28149 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/04c50291-fa01-44ba-8316-2cff471a4af4-logs\") pod \"04c50291-fa01-44ba-8316-2cff471a4af4\" (UID: \"04c50291-fa01-44ba-8316-2cff471a4af4\") " Mar 13 13:14:44.703405 master-0 kubenswrapper[28149]: I0313 13:14:44.703375 28149 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/04c50291-fa01-44ba-8316-2cff471a4af4-combined-ca-bundle\") pod \"04c50291-fa01-44ba-8316-2cff471a4af4\" (UID: \"04c50291-fa01-44ba-8316-2cff471a4af4\") " Mar 13 13:14:44.703468 master-0 kubenswrapper[28149]: I0313 13:14:44.703453 28149 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9dfb097f-c7c9-4933-875e-ff168351b070-config-data\") pod \"9dfb097f-c7c9-4933-875e-ff168351b070\" (UID: \"9dfb097f-c7c9-4933-875e-ff168351b070\") " Mar 13 13:14:44.703513 master-0 kubenswrapper[28149]: I0313 13:14:44.703482 28149 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ddsx8\" (UniqueName: \"kubernetes.io/projected/9dfb097f-c7c9-4933-875e-ff168351b070-kube-api-access-ddsx8\") pod \"9dfb097f-c7c9-4933-875e-ff168351b070\" (UID: \"9dfb097f-c7c9-4933-875e-ff168351b070\") " Mar 13 13:14:44.703593 master-0 kubenswrapper[28149]: I0313 13:14:44.703563 28149 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/04c50291-fa01-44ba-8316-2cff471a4af4-logs" (OuterVolumeSpecName: "logs") pod 
"04c50291-fa01-44ba-8316-2cff471a4af4" (UID: "04c50291-fa01-44ba-8316-2cff471a4af4"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 13 13:14:44.704096 master-0 kubenswrapper[28149]: I0313 13:14:44.704073 28149 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/04c50291-fa01-44ba-8316-2cff471a4af4-logs\") on node \"master-0\" DevicePath \"\"" Mar 13 13:14:44.797250 master-0 kubenswrapper[28149]: I0313 13:14:44.797167 28149 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/04c50291-fa01-44ba-8316-2cff471a4af4-kube-api-access-25sbs" (OuterVolumeSpecName: "kube-api-access-25sbs") pod "04c50291-fa01-44ba-8316-2cff471a4af4" (UID: "04c50291-fa01-44ba-8316-2cff471a4af4"). InnerVolumeSpecName "kube-api-access-25sbs". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 13:14:44.797537 master-0 kubenswrapper[28149]: I0313 13:14:44.797316 28149 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9dfb097f-c7c9-4933-875e-ff168351b070-kube-api-access-ddsx8" (OuterVolumeSpecName: "kube-api-access-ddsx8") pod "9dfb097f-c7c9-4933-875e-ff168351b070" (UID: "9dfb097f-c7c9-4933-875e-ff168351b070"). InnerVolumeSpecName "kube-api-access-ddsx8". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 13:14:44.804342 master-0 kubenswrapper[28149]: I0313 13:14:44.804196 28149 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9dfb097f-c7c9-4933-875e-ff168351b070-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "9dfb097f-c7c9-4933-875e-ff168351b070" (UID: "9dfb097f-c7c9-4933-875e-ff168351b070"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 13:14:44.807444 master-0 kubenswrapper[28149]: I0313 13:14:44.806834 28149 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ddsx8\" (UniqueName: \"kubernetes.io/projected/9dfb097f-c7c9-4933-875e-ff168351b070-kube-api-access-ddsx8\") on node \"master-0\" DevicePath \"\"" Mar 13 13:14:44.807444 master-0 kubenswrapper[28149]: I0313 13:14:44.806861 28149 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-25sbs\" (UniqueName: \"kubernetes.io/projected/04c50291-fa01-44ba-8316-2cff471a4af4-kube-api-access-25sbs\") on node \"master-0\" DevicePath \"\"" Mar 13 13:14:44.807444 master-0 kubenswrapper[28149]: I0313 13:14:44.806871 28149 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9dfb097f-c7c9-4933-875e-ff168351b070-combined-ca-bundle\") on node \"master-0\" DevicePath \"\"" Mar 13 13:14:44.807849 master-0 kubenswrapper[28149]: I0313 13:14:44.807825 28149 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9dfb097f-c7c9-4933-875e-ff168351b070-config-data" (OuterVolumeSpecName: "config-data") pod "9dfb097f-c7c9-4933-875e-ff168351b070" (UID: "9dfb097f-c7c9-4933-875e-ff168351b070"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 13:14:44.808399 master-0 kubenswrapper[28149]: I0313 13:14:44.808342 28149 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/04c50291-fa01-44ba-8316-2cff471a4af4-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "04c50291-fa01-44ba-8316-2cff471a4af4" (UID: "04c50291-fa01-44ba-8316-2cff471a4af4"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 13:14:44.810923 master-0 kubenswrapper[28149]: I0313 13:14:44.810867 28149 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/04c50291-fa01-44ba-8316-2cff471a4af4-config-data" (OuterVolumeSpecName: "config-data") pod "04c50291-fa01-44ba-8316-2cff471a4af4" (UID: "04c50291-fa01-44ba-8316-2cff471a4af4"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 13:14:44.909185 master-0 kubenswrapper[28149]: I0313 13:14:44.909005 28149 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/04c50291-fa01-44ba-8316-2cff471a4af4-config-data\") on node \"master-0\" DevicePath \"\"" Mar 13 13:14:44.909185 master-0 kubenswrapper[28149]: I0313 13:14:44.909051 28149 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/04c50291-fa01-44ba-8316-2cff471a4af4-combined-ca-bundle\") on node \"master-0\" DevicePath \"\"" Mar 13 13:14:44.909185 master-0 kubenswrapper[28149]: I0313 13:14:44.909063 28149 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9dfb097f-c7c9-4933-875e-ff168351b070-config-data\") on node \"master-0\" DevicePath \"\"" Mar 13 13:14:45.148976 master-0 kubenswrapper[28149]: I0313 13:14:45.148914 28149 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Mar 13 13:14:45.149254 master-0 kubenswrapper[28149]: I0313 13:14:45.148991 28149 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Mar 13 13:14:46.632970 master-0 kubenswrapper[28149]: I0313 13:14:46.631486 28149 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="2558d1e1-d076-4b79-8591-d1e7ec5beaec" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.128.1.11:8774/\": 
context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 13 13:14:46.674356 master-0 kubenswrapper[28149]: I0313 13:14:46.673260 28149 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="2558d1e1-d076-4b79-8591-d1e7ec5beaec" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.128.1.11:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 13 13:14:46.770358 master-0 kubenswrapper[28149]: I0313 13:14:46.770303 28149 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Mar 13 13:14:46.770358 master-0 kubenswrapper[28149]: I0313 13:14:46.770358 28149 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Mar 13 13:14:46.770891 master-0 kubenswrapper[28149]: I0313 13:14:46.770377 28149 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Mar 13 13:14:46.787711 master-0 kubenswrapper[28149]: I0313 13:14:46.784873 28149 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Mar 13 13:14:46.787711 master-0 kubenswrapper[28149]: E0313 13:14:46.785447 28149 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="04c50291-fa01-44ba-8316-2cff471a4af4" containerName="nova-metadata-log" Mar 13 13:14:46.787711 master-0 kubenswrapper[28149]: I0313 13:14:46.785461 28149 state_mem.go:107] "Deleted CPUSet assignment" podUID="04c50291-fa01-44ba-8316-2cff471a4af4" containerName="nova-metadata-log" Mar 13 13:14:46.787711 master-0 kubenswrapper[28149]: E0313 13:14:46.785510 28149 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="04c50291-fa01-44ba-8316-2cff471a4af4" containerName="nova-metadata-metadata" Mar 13 13:14:46.787711 master-0 kubenswrapper[28149]: I0313 13:14:46.785516 28149 state_mem.go:107] "Deleted CPUSet assignment" podUID="04c50291-fa01-44ba-8316-2cff471a4af4" containerName="nova-metadata-metadata" Mar 13 
13:14:46.787711 master-0 kubenswrapper[28149]: E0313 13:14:46.785536 28149 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9dfb097f-c7c9-4933-875e-ff168351b070" containerName="nova-cell1-novncproxy-novncproxy" Mar 13 13:14:46.787711 master-0 kubenswrapper[28149]: I0313 13:14:46.785542 28149 state_mem.go:107] "Deleted CPUSet assignment" podUID="9dfb097f-c7c9-4933-875e-ff168351b070" containerName="nova-cell1-novncproxy-novncproxy" Mar 13 13:14:46.787711 master-0 kubenswrapper[28149]: I0313 13:14:46.785829 28149 memory_manager.go:354] "RemoveStaleState removing state" podUID="04c50291-fa01-44ba-8316-2cff471a4af4" containerName="nova-metadata-metadata" Mar 13 13:14:46.787711 master-0 kubenswrapper[28149]: I0313 13:14:46.785866 28149 memory_manager.go:354] "RemoveStaleState removing state" podUID="9dfb097f-c7c9-4933-875e-ff168351b070" containerName="nova-cell1-novncproxy-novncproxy" Mar 13 13:14:46.787711 master-0 kubenswrapper[28149]: I0313 13:14:46.785879 28149 memory_manager.go:354] "RemoveStaleState removing state" podUID="04c50291-fa01-44ba-8316-2cff471a4af4" containerName="nova-metadata-log" Mar 13 13:14:46.787711 master-0 kubenswrapper[28149]: I0313 13:14:46.787266 28149 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Mar 13 13:14:46.797060 master-0 kubenswrapper[28149]: I0313 13:14:46.797000 28149 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Mar 13 13:14:46.797060 master-0 kubenswrapper[28149]: I0313 13:14:46.797049 28149 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Mar 13 13:14:46.826652 master-0 kubenswrapper[28149]: I0313 13:14:46.826593 28149 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Mar 13 13:14:46.842424 master-0 kubenswrapper[28149]: I0313 13:14:46.842363 28149 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Mar 13 13:14:46.856945 master-0 kubenswrapper[28149]: I0313 13:14:46.856892 28149 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Mar 13 13:14:46.862520 master-0 kubenswrapper[28149]: I0313 13:14:46.862058 28149 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Mar 13 13:14:46.865175 master-0 kubenswrapper[28149]: I0313 13:14:46.864759 28149 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Mar 13 13:14:46.865175 master-0 kubenswrapper[28149]: I0313 13:14:46.864820 28149 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-public-svc" Mar 13 13:14:46.865175 master-0 kubenswrapper[28149]: I0313 13:14:46.865000 28149 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-vencrypt" Mar 13 13:14:46.883213 master-0 kubenswrapper[28149]: I0313 13:14:46.883035 28149 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Mar 13 13:14:46.907645 master-0 kubenswrapper[28149]: I0313 13:14:46.907578 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mxhh4\" (UniqueName: \"kubernetes.io/projected/e5d9b3aa-60cb-497f-89df-a56cdae0f455-kube-api-access-mxhh4\") pod \"nova-cell1-novncproxy-0\" (UID: \"e5d9b3aa-60cb-497f-89df-a56cdae0f455\") " pod="openstack/nova-cell1-novncproxy-0" Mar 13 13:14:46.907962 master-0 kubenswrapper[28149]: I0313 13:14:46.907669 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e579030e-e1cd-4dee-8a65-0e7a9b636974-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"e579030e-e1cd-4dee-8a65-0e7a9b636974\") " pod="openstack/nova-metadata-0" Mar 13 13:14:46.907962 master-0 kubenswrapper[28149]: I0313 13:14:46.907703 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/e579030e-e1cd-4dee-8a65-0e7a9b636974-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: 
\"e579030e-e1cd-4dee-8a65-0e7a9b636974\") " pod="openstack/nova-metadata-0"
Mar 13 13:14:46.907962 master-0 kubenswrapper[28149]: I0313 13:14:46.907796 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6fvgl\" (UniqueName: \"kubernetes.io/projected/e579030e-e1cd-4dee-8a65-0e7a9b636974-kube-api-access-6fvgl\") pod \"nova-metadata-0\" (UID: \"e579030e-e1cd-4dee-8a65-0e7a9b636974\") " pod="openstack/nova-metadata-0"
Mar 13 13:14:46.907962 master-0 kubenswrapper[28149]: I0313 13:14:46.907840 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e5d9b3aa-60cb-497f-89df-a56cdae0f455-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"e5d9b3aa-60cb-497f-89df-a56cdae0f455\") " pod="openstack/nova-cell1-novncproxy-0"
Mar 13 13:14:46.907962 master-0 kubenswrapper[28149]: I0313 13:14:46.907889 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e5d9b3aa-60cb-497f-89df-a56cdae0f455-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"e5d9b3aa-60cb-497f-89df-a56cdae0f455\") " pod="openstack/nova-cell1-novncproxy-0"
Mar 13 13:14:46.907962 master-0 kubenswrapper[28149]: I0313 13:14:46.907932 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e579030e-e1cd-4dee-8a65-0e7a9b636974-config-data\") pod \"nova-metadata-0\" (UID: \"e579030e-e1cd-4dee-8a65-0e7a9b636974\") " pod="openstack/nova-metadata-0"
Mar 13 13:14:46.908289 master-0 kubenswrapper[28149]: I0313 13:14:46.908032 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e579030e-e1cd-4dee-8a65-0e7a9b636974-logs\") pod \"nova-metadata-0\" (UID: \"e579030e-e1cd-4dee-8a65-0e7a9b636974\") " pod="openstack/nova-metadata-0"
Mar 13 13:14:46.908289 master-0 kubenswrapper[28149]: I0313 13:14:46.908067 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/e5d9b3aa-60cb-497f-89df-a56cdae0f455-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"e5d9b3aa-60cb-497f-89df-a56cdae0f455\") " pod="openstack/nova-cell1-novncproxy-0"
Mar 13 13:14:46.908289 master-0 kubenswrapper[28149]: I0313 13:14:46.908154 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/e5d9b3aa-60cb-497f-89df-a56cdae0f455-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"e5d9b3aa-60cb-497f-89df-a56cdae0f455\") " pod="openstack/nova-cell1-novncproxy-0"
Mar 13 13:14:47.258810 master-0 kubenswrapper[28149]: I0313 13:14:47.258375 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6fvgl\" (UniqueName: \"kubernetes.io/projected/e579030e-e1cd-4dee-8a65-0e7a9b636974-kube-api-access-6fvgl\") pod \"nova-metadata-0\" (UID: \"e579030e-e1cd-4dee-8a65-0e7a9b636974\") " pod="openstack/nova-metadata-0"
Mar 13 13:14:47.258810 master-0 kubenswrapper[28149]: I0313 13:14:47.258444 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e5d9b3aa-60cb-497f-89df-a56cdae0f455-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"e5d9b3aa-60cb-497f-89df-a56cdae0f455\") " pod="openstack/nova-cell1-novncproxy-0"
Mar 13 13:14:47.258810 master-0 kubenswrapper[28149]: I0313 13:14:47.258493 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e5d9b3aa-60cb-497f-89df-a56cdae0f455-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"e5d9b3aa-60cb-497f-89df-a56cdae0f455\") " pod="openstack/nova-cell1-novncproxy-0"
Mar 13 13:14:47.258810 master-0 kubenswrapper[28149]: I0313 13:14:47.258525 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e579030e-e1cd-4dee-8a65-0e7a9b636974-config-data\") pod \"nova-metadata-0\" (UID: \"e579030e-e1cd-4dee-8a65-0e7a9b636974\") " pod="openstack/nova-metadata-0"
Mar 13 13:14:47.258810 master-0 kubenswrapper[28149]: I0313 13:14:47.258664 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e579030e-e1cd-4dee-8a65-0e7a9b636974-logs\") pod \"nova-metadata-0\" (UID: \"e579030e-e1cd-4dee-8a65-0e7a9b636974\") " pod="openstack/nova-metadata-0"
Mar 13 13:14:47.258810 master-0 kubenswrapper[28149]: I0313 13:14:47.258697 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/e5d9b3aa-60cb-497f-89df-a56cdae0f455-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"e5d9b3aa-60cb-497f-89df-a56cdae0f455\") " pod="openstack/nova-cell1-novncproxy-0"
Mar 13 13:14:47.258810 master-0 kubenswrapper[28149]: I0313 13:14:47.258784 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/e5d9b3aa-60cb-497f-89df-a56cdae0f455-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"e5d9b3aa-60cb-497f-89df-a56cdae0f455\") " pod="openstack/nova-cell1-novncproxy-0"
Mar 13 13:14:47.259350 master-0 kubenswrapper[28149]: I0313 13:14:47.258859 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mxhh4\" (UniqueName: \"kubernetes.io/projected/e5d9b3aa-60cb-497f-89df-a56cdae0f455-kube-api-access-mxhh4\") pod \"nova-cell1-novncproxy-0\" (UID: \"e5d9b3aa-60cb-497f-89df-a56cdae0f455\") " pod="openstack/nova-cell1-novncproxy-0"
Mar 13 13:14:47.259350 master-0 kubenswrapper[28149]: I0313 13:14:47.258908 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e579030e-e1cd-4dee-8a65-0e7a9b636974-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"e579030e-e1cd-4dee-8a65-0e7a9b636974\") " pod="openstack/nova-metadata-0"
Mar 13 13:14:47.259350 master-0 kubenswrapper[28149]: I0313 13:14:47.258938 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/e579030e-e1cd-4dee-8a65-0e7a9b636974-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"e579030e-e1cd-4dee-8a65-0e7a9b636974\") " pod="openstack/nova-metadata-0"
Mar 13 13:14:47.264243 master-0 kubenswrapper[28149]: I0313 13:14:47.264117 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/e579030e-e1cd-4dee-8a65-0e7a9b636974-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"e579030e-e1cd-4dee-8a65-0e7a9b636974\") " pod="openstack/nova-metadata-0"
Mar 13 13:14:47.282366 master-0 kubenswrapper[28149]: I0313 13:14:47.282309 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/e5d9b3aa-60cb-497f-89df-a56cdae0f455-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"e5d9b3aa-60cb-497f-89df-a56cdae0f455\") " pod="openstack/nova-cell1-novncproxy-0"
Mar 13 13:14:47.283351 master-0 kubenswrapper[28149]: I0313 13:14:47.283311 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e5d9b3aa-60cb-497f-89df-a56cdae0f455-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"e5d9b3aa-60cb-497f-89df-a56cdae0f455\") " pod="openstack/nova-cell1-novncproxy-0"
Mar 13 13:14:47.285371 master-0 kubenswrapper[28149]: I0313 13:14:47.285315 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e5d9b3aa-60cb-497f-89df-a56cdae0f455-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"e5d9b3aa-60cb-497f-89df-a56cdae0f455\") " pod="openstack/nova-cell1-novncproxy-0"
Mar 13 13:14:47.287230 master-0 kubenswrapper[28149]: I0313 13:14:47.287190 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/e5d9b3aa-60cb-497f-89df-a56cdae0f455-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"e5d9b3aa-60cb-497f-89df-a56cdae0f455\") " pod="openstack/nova-cell1-novncproxy-0"
Mar 13 13:14:47.287577 master-0 kubenswrapper[28149]: I0313 13:14:47.287535 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e579030e-e1cd-4dee-8a65-0e7a9b636974-logs\") pod \"nova-metadata-0\" (UID: \"e579030e-e1cd-4dee-8a65-0e7a9b636974\") " pod="openstack/nova-metadata-0"
Mar 13 13:14:47.288504 master-0 kubenswrapper[28149]: I0313 13:14:47.288447 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e579030e-e1cd-4dee-8a65-0e7a9b636974-config-data\") pod \"nova-metadata-0\" (UID: \"e579030e-e1cd-4dee-8a65-0e7a9b636974\") " pod="openstack/nova-metadata-0"
Mar 13 13:14:47.304636 master-0 kubenswrapper[28149]: I0313 13:14:47.299596 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6fvgl\" (UniqueName: \"kubernetes.io/projected/e579030e-e1cd-4dee-8a65-0e7a9b636974-kube-api-access-6fvgl\") pod \"nova-metadata-0\" (UID: \"e579030e-e1cd-4dee-8a65-0e7a9b636974\") " pod="openstack/nova-metadata-0"
Mar 13 13:14:47.304636 master-0 kubenswrapper[28149]: I0313 13:14:47.299757 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e579030e-e1cd-4dee-8a65-0e7a9b636974-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"e579030e-e1cd-4dee-8a65-0e7a9b636974\") " pod="openstack/nova-metadata-0"
Mar 13 13:14:47.324162 master-0 kubenswrapper[28149]: I0313 13:14:47.321935 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mxhh4\" (UniqueName: \"kubernetes.io/projected/e5d9b3aa-60cb-497f-89df-a56cdae0f455-kube-api-access-mxhh4\") pod \"nova-cell1-novncproxy-0\" (UID: \"e5d9b3aa-60cb-497f-89df-a56cdae0f455\") " pod="openstack/nova-cell1-novncproxy-0"
Mar 13 13:14:47.426162 master-0 kubenswrapper[28149]: I0313 13:14:47.425681 28149 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Mar 13 13:14:47.484167 master-0 kubenswrapper[28149]: I0313 13:14:47.482289 28149 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0"
Mar 13 13:14:48.021922 master-0 kubenswrapper[28149]: I0313 13:14:48.021866 28149 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"]
Mar 13 13:14:48.138473 master-0 kubenswrapper[28149]: I0313 13:14:48.138277 28149 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"]
Mar 13 13:14:48.795510 master-0 kubenswrapper[28149]: I0313 13:14:48.795422 28149 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="04c50291-fa01-44ba-8316-2cff471a4af4" path="/var/lib/kubelet/pods/04c50291-fa01-44ba-8316-2cff471a4af4/volumes"
Mar 13 13:14:48.796998 master-0 kubenswrapper[28149]: I0313 13:14:48.796955 28149 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9dfb097f-c7c9-4933-875e-ff168351b070" path="/var/lib/kubelet/pods/9dfb097f-c7c9-4933-875e-ff168351b070/volumes"
Mar 13 13:14:48.877874 master-0 kubenswrapper[28149]: I0313 13:14:48.877740 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"e579030e-e1cd-4dee-8a65-0e7a9b636974","Type":"ContainerStarted","Data":"e9c29f4ebf8c74e4fc60a77a1033351cbf9270544bef85bc6feb7f5be1c245b6"}
Mar 13 13:14:48.878234 master-0 kubenswrapper[28149]: I0313 13:14:48.878211 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"e579030e-e1cd-4dee-8a65-0e7a9b636974","Type":"ContainerStarted","Data":"c20ffe658c4584dea980e7b403e0db881628da95d1a6b6134f2638269d1ff467"}
Mar 13 13:14:48.881014 master-0 kubenswrapper[28149]: I0313 13:14:48.880931 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"e5d9b3aa-60cb-497f-89df-a56cdae0f455","Type":"ContainerStarted","Data":"1aa7090b1eb88cfc11f8a4d3bc5c031b6881ceb5693a2790ef56496f3a7d03a3"}
Mar 13 13:14:49.903356 master-0 kubenswrapper[28149]: I0313 13:14:49.903273 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"e5d9b3aa-60cb-497f-89df-a56cdae0f455","Type":"ContainerStarted","Data":"5ac91467b8c6298066b9002232c7978c292abc509ab6582262891bfa093de07d"}
Mar 13 13:14:49.908467 master-0 kubenswrapper[28149]: I0313 13:14:49.906886 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"e579030e-e1cd-4dee-8a65-0e7a9b636974","Type":"ContainerStarted","Data":"568dd59e04dd1335822f79eed749066323b9ec02f7b4c6056b3dbd19d0faddd8"}
Mar 13 13:14:50.027157 master-0 kubenswrapper[28149]: I0313 13:14:50.027045 28149 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=4.026999343 podStartE2EDuration="4.026999343s" podCreationTimestamp="2026-03-13 13:14:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 13:14:50.010237861 +0000 UTC m=+1263.663703040" watchObservedRunningTime="2026-03-13 13:14:50.026999343 +0000 UTC m=+1263.680464502"
Mar 13 13:14:50.100432 master-0 kubenswrapper[28149]: I0313 13:14:50.099910 28149 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=4.099883226 podStartE2EDuration="4.099883226s" podCreationTimestamp="2026-03-13 13:14:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 13:14:50.097887063 +0000 UTC m=+1263.751352222" watchObservedRunningTime="2026-03-13 13:14:50.099883226 +0000 UTC m=+1263.753348385"
Mar 13 13:14:51.939947 master-0 kubenswrapper[28149]: I0313 13:14:51.939866 28149 generic.go:334] "Generic (PLEG): container finished" podID="8fdaa161-cf3d-465a-8e70-c2af73f96711" containerID="76d8cdffdd21a95fb826b5a60cfc58f6f03d06eb91012ff4400ed7101ec56c05" exitCode=0
Mar 13 13:14:51.939947 master-0 kubenswrapper[28149]: I0313 13:14:51.939930 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-conductor-0" event={"ID":"8fdaa161-cf3d-465a-8e70-c2af73f96711","Type":"ContainerDied","Data":"76d8cdffdd21a95fb826b5a60cfc58f6f03d06eb91012ff4400ed7101ec56c05"}
Mar 13 13:14:52.491799 master-0 kubenswrapper[28149]: I0313 13:14:52.488291 28149 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0"
Mar 13 13:14:52.491799 master-0 kubenswrapper[28149]: I0313 13:14:52.489213 28149 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0"
Mar 13 13:14:52.491799 master-0 kubenswrapper[28149]: I0313 13:14:52.490177 28149 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0"
Mar 13 13:14:53.115431 master-0 kubenswrapper[28149]: I0313 13:14:53.114271 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-conductor-0" event={"ID":"8fdaa161-cf3d-465a-8e70-c2af73f96711","Type":"ContainerStarted","Data":"e94189750c1d1bc1e705d3b69c4e857d46a98f4c4f9ab639a0f21f0a6fd80b31"}
Mar 13 13:14:54.307384 master-0 kubenswrapper[28149]: I0313 13:14:54.307326 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-conductor-0" event={"ID":"8fdaa161-cf3d-465a-8e70-c2af73f96711","Type":"ContainerStarted","Data":"ce790c617caf16fbf4d1ec2f71b07d53cacfa874c86b4694b3bd904d99f7649f"}
Mar 13 13:14:54.307384 master-0 kubenswrapper[28149]: I0313 13:14:54.307379 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-conductor-0" event={"ID":"8fdaa161-cf3d-465a-8e70-c2af73f96711","Type":"ContainerStarted","Data":"705c850330e3153f80d8c0fd3f3ad45d3f4636fdd21c067002e3600bf3ef1467"}
Mar 13 13:14:54.309015 master-0 kubenswrapper[28149]: I0313 13:14:54.308956 28149 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ironic-conductor-0"
Mar 13 13:14:54.309015 master-0 kubenswrapper[28149]: I0313 13:14:54.309011 28149 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ironic-conductor-0"
Mar 13 13:14:54.623728 master-0 kubenswrapper[28149]: I0313 13:14:54.623565 28149 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ironic-conductor-0"
Mar 13 13:14:55.152768 master-0 kubenswrapper[28149]: I0313 13:14:55.152718 28149 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0"
Mar 13 13:14:55.154076 master-0 kubenswrapper[28149]: I0313 13:14:55.154052 28149 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0"
Mar 13 13:14:55.154430 master-0 kubenswrapper[28149]: I0313 13:14:55.154412 28149 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0"
Mar 13 13:14:55.158264 master-0 kubenswrapper[28149]: I0313 13:14:55.158229 28149 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0"
Mar 13 13:14:55.180703 master-0 kubenswrapper[28149]: I0313 13:14:55.180610 28149 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ironic-conductor-0" podStartSLOduration=122.352930758 podStartE2EDuration="3m2.180571098s" podCreationTimestamp="2026-03-13 13:11:53 +0000 UTC" firstStartedPulling="2026-03-13 13:12:16.766698875 +0000 UTC m=+1110.420164024" lastFinishedPulling="2026-03-13 13:13:16.594339205 +0000 UTC m=+1170.247804364" observedRunningTime="2026-03-13 13:14:54.362796141 +0000 UTC m=+1268.016261310" watchObservedRunningTime="2026-03-13 13:14:55.180571098 +0000 UTC m=+1268.834036267"
Mar 13 13:14:55.322812 master-0 kubenswrapper[28149]: I0313 13:14:55.322745 28149 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0"
Mar 13 13:14:55.327546 master-0 kubenswrapper[28149]: I0313 13:14:55.327109 28149 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0"
Mar 13 13:14:55.673165 master-0 kubenswrapper[28149]: I0313 13:14:55.667920 28149 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-58fdc6f86c-dr4ls"]
Mar 13 13:14:55.675976 master-0 kubenswrapper[28149]: I0313 13:14:55.675929 28149 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-58fdc6f86c-dr4ls"
Mar 13 13:14:55.687292 master-0 kubenswrapper[28149]: I0313 13:14:55.683336 28149 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-58fdc6f86c-dr4ls"]
Mar 13 13:14:55.731912 master-0 kubenswrapper[28149]: I0313 13:14:55.731836 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/bda8430e-9baa-40e1-9fac-1c7ccb4767e1-dns-svc\") pod \"dnsmasq-dns-58fdc6f86c-dr4ls\" (UID: \"bda8430e-9baa-40e1-9fac-1c7ccb4767e1\") " pod="openstack/dnsmasq-dns-58fdc6f86c-dr4ls"
Mar 13 13:14:55.732644 master-0 kubenswrapper[28149]: I0313 13:14:55.732616 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bda8430e-9baa-40e1-9fac-1c7ccb4767e1-config\") pod \"dnsmasq-dns-58fdc6f86c-dr4ls\" (UID: \"bda8430e-9baa-40e1-9fac-1c7ccb4767e1\") " pod="openstack/dnsmasq-dns-58fdc6f86c-dr4ls"
Mar 13 13:14:55.732948 master-0 kubenswrapper[28149]: I0313 13:14:55.732924 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/bda8430e-9baa-40e1-9fac-1c7ccb4767e1-ovsdbserver-sb\") pod \"dnsmasq-dns-58fdc6f86c-dr4ls\" (UID: \"bda8430e-9baa-40e1-9fac-1c7ccb4767e1\") " pod="openstack/dnsmasq-dns-58fdc6f86c-dr4ls"
Mar 13 13:14:55.733081 master-0 kubenswrapper[28149]: I0313 13:14:55.733061 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/bda8430e-9baa-40e1-9fac-1c7ccb4767e1-ovsdbserver-nb\") pod \"dnsmasq-dns-58fdc6f86c-dr4ls\" (UID: \"bda8430e-9baa-40e1-9fac-1c7ccb4767e1\") " pod="openstack/dnsmasq-dns-58fdc6f86c-dr4ls"
Mar 13 13:14:55.733263 master-0 kubenswrapper[28149]: I0313 13:14:55.733240 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nh48g\" (UniqueName: \"kubernetes.io/projected/bda8430e-9baa-40e1-9fac-1c7ccb4767e1-kube-api-access-nh48g\") pod \"dnsmasq-dns-58fdc6f86c-dr4ls\" (UID: \"bda8430e-9baa-40e1-9fac-1c7ccb4767e1\") " pod="openstack/dnsmasq-dns-58fdc6f86c-dr4ls"
Mar 13 13:14:55.733525 master-0 kubenswrapper[28149]: I0313 13:14:55.733503 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/bda8430e-9baa-40e1-9fac-1c7ccb4767e1-dns-swift-storage-0\") pod \"dnsmasq-dns-58fdc6f86c-dr4ls\" (UID: \"bda8430e-9baa-40e1-9fac-1c7ccb4767e1\") " pod="openstack/dnsmasq-dns-58fdc6f86c-dr4ls"
Mar 13 13:14:55.883265 master-0 kubenswrapper[28149]: I0313 13:14:55.883218 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/bda8430e-9baa-40e1-9fac-1c7ccb4767e1-dns-svc\") pod \"dnsmasq-dns-58fdc6f86c-dr4ls\" (UID: \"bda8430e-9baa-40e1-9fac-1c7ccb4767e1\") " pod="openstack/dnsmasq-dns-58fdc6f86c-dr4ls"
Mar 13 13:14:55.883793 master-0 kubenswrapper[28149]: I0313 13:14:55.883773 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bda8430e-9baa-40e1-9fac-1c7ccb4767e1-config\") pod \"dnsmasq-dns-58fdc6f86c-dr4ls\" (UID: \"bda8430e-9baa-40e1-9fac-1c7ccb4767e1\") " pod="openstack/dnsmasq-dns-58fdc6f86c-dr4ls"
Mar 13 13:14:55.883915 master-0 kubenswrapper[28149]: I0313 13:14:55.883897 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/bda8430e-9baa-40e1-9fac-1c7ccb4767e1-ovsdbserver-sb\") pod \"dnsmasq-dns-58fdc6f86c-dr4ls\" (UID: \"bda8430e-9baa-40e1-9fac-1c7ccb4767e1\") " pod="openstack/dnsmasq-dns-58fdc6f86c-dr4ls"
Mar 13 13:14:55.884232 master-0 kubenswrapper[28149]: I0313 13:14:55.884217 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/bda8430e-9baa-40e1-9fac-1c7ccb4767e1-ovsdbserver-nb\") pod \"dnsmasq-dns-58fdc6f86c-dr4ls\" (UID: \"bda8430e-9baa-40e1-9fac-1c7ccb4767e1\") " pod="openstack/dnsmasq-dns-58fdc6f86c-dr4ls"
Mar 13 13:14:55.884337 master-0 kubenswrapper[28149]: I0313 13:14:55.884321 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nh48g\" (UniqueName: \"kubernetes.io/projected/bda8430e-9baa-40e1-9fac-1c7ccb4767e1-kube-api-access-nh48g\") pod \"dnsmasq-dns-58fdc6f86c-dr4ls\" (UID: \"bda8430e-9baa-40e1-9fac-1c7ccb4767e1\") " pod="openstack/dnsmasq-dns-58fdc6f86c-dr4ls"
Mar 13 13:14:55.884498 master-0 kubenswrapper[28149]: I0313 13:14:55.884484 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/bda8430e-9baa-40e1-9fac-1c7ccb4767e1-dns-swift-storage-0\") pod \"dnsmasq-dns-58fdc6f86c-dr4ls\" (UID: \"bda8430e-9baa-40e1-9fac-1c7ccb4767e1\") " pod="openstack/dnsmasq-dns-58fdc6f86c-dr4ls"
Mar 13 13:14:55.885032 master-0 kubenswrapper[28149]: I0313 13:14:55.884992 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/bda8430e-9baa-40e1-9fac-1c7ccb4767e1-dns-svc\") pod \"dnsmasq-dns-58fdc6f86c-dr4ls\" (UID: \"bda8430e-9baa-40e1-9fac-1c7ccb4767e1\") " pod="openstack/dnsmasq-dns-58fdc6f86c-dr4ls"
Mar 13 13:14:55.886000 master-0 kubenswrapper[28149]: I0313 13:14:55.885980 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/bda8430e-9baa-40e1-9fac-1c7ccb4767e1-dns-swift-storage-0\") pod \"dnsmasq-dns-58fdc6f86c-dr4ls\" (UID: \"bda8430e-9baa-40e1-9fac-1c7ccb4767e1\") " pod="openstack/dnsmasq-dns-58fdc6f86c-dr4ls"
Mar 13 13:14:55.886105 master-0 kubenswrapper[28149]: I0313 13:14:55.885982 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bda8430e-9baa-40e1-9fac-1c7ccb4767e1-config\") pod \"dnsmasq-dns-58fdc6f86c-dr4ls\" (UID: \"bda8430e-9baa-40e1-9fac-1c7ccb4767e1\") " pod="openstack/dnsmasq-dns-58fdc6f86c-dr4ls"
Mar 13 13:14:55.897098 master-0 kubenswrapper[28149]: I0313 13:14:55.897055 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/bda8430e-9baa-40e1-9fac-1c7ccb4767e1-ovsdbserver-nb\") pod \"dnsmasq-dns-58fdc6f86c-dr4ls\" (UID: \"bda8430e-9baa-40e1-9fac-1c7ccb4767e1\") " pod="openstack/dnsmasq-dns-58fdc6f86c-dr4ls"
Mar 13 13:14:55.897320 master-0 kubenswrapper[28149]: I0313 13:14:55.897252 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/bda8430e-9baa-40e1-9fac-1c7ccb4767e1-ovsdbserver-sb\") pod \"dnsmasq-dns-58fdc6f86c-dr4ls\" (UID: \"bda8430e-9baa-40e1-9fac-1c7ccb4767e1\") " pod="openstack/dnsmasq-dns-58fdc6f86c-dr4ls"
Mar 13 13:14:55.916213 master-0 kubenswrapper[28149]: I0313 13:14:55.916110 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nh48g\" (UniqueName: \"kubernetes.io/projected/bda8430e-9baa-40e1-9fac-1c7ccb4767e1-kube-api-access-nh48g\") pod \"dnsmasq-dns-58fdc6f86c-dr4ls\" (UID: \"bda8430e-9baa-40e1-9fac-1c7ccb4767e1\") " pod="openstack/dnsmasq-dns-58fdc6f86c-dr4ls"
Mar 13 13:14:56.020107 master-0 kubenswrapper[28149]: I0313 13:14:56.020046 28149 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-58fdc6f86c-dr4ls"
Mar 13 13:14:56.419550 master-0 kubenswrapper[28149]: I0313 13:14:56.419485 28149 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ironic-conductor-0"
Mar 13 13:14:56.654168 master-0 kubenswrapper[28149]: I0313 13:14:56.650681 28149 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-58fdc6f86c-dr4ls"]
Mar 13 13:14:56.893474 master-0 kubenswrapper[28149]: I0313 13:14:56.892386 28149 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/ironic-conductor-0" podUID="8fdaa161-cf3d-465a-8e70-c2af73f96711" containerName="ironic-conductor" probeResult="failure" output=<
Mar 13 13:14:56.893474 master-0 kubenswrapper[28149]: ironic-conductor-0 is offline
Mar 13 13:14:56.893474 master-0 kubenswrapper[28149]: >
Mar 13 13:14:57.430913 master-0 kubenswrapper[28149]: I0313 13:14:57.427302 28149 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0"
Mar 13 13:14:57.430913 master-0 kubenswrapper[28149]: I0313 13:14:57.427349 28149 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0"
Mar 13 13:14:57.437939 master-0 kubenswrapper[28149]: I0313 13:14:57.437808 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-58fdc6f86c-dr4ls" event={"ID":"bda8430e-9baa-40e1-9fac-1c7ccb4767e1","Type":"ContainerStarted","Data":"6808f5e15c967a26075140e83106da0553f641804abb0bb9c7660df506c81ee6"}
Mar 13 13:14:57.487166 master-0 kubenswrapper[28149]: I0313 13:14:57.485513 28149 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-cell1-novncproxy-0"
Mar 13 13:14:57.554809 master-0 kubenswrapper[28149]: I0313 13:14:57.551441 28149 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-cell1-novncproxy-0"
Mar 13 13:14:58.615574 master-0 kubenswrapper[28149]: I0313 13:14:58.605534 28149 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="e579030e-e1cd-4dee-8a65-0e7a9b636974" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.128.1.12:8775/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Mar 13 13:14:58.615574 master-0 kubenswrapper[28149]: I0313 13:14:58.614345 28149 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="e579030e-e1cd-4dee-8a65-0e7a9b636974" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.128.1.12:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Mar 13 13:14:58.663380 master-0 kubenswrapper[28149]: I0313 13:14:58.655435 28149 generic.go:334] "Generic (PLEG): container finished" podID="bda8430e-9baa-40e1-9fac-1c7ccb4767e1" containerID="0daa33b4555f730f86f340de528dc03739db13ab93bc9ec761753d10746e41a2" exitCode=0
Mar 13 13:14:58.663380 master-0 kubenswrapper[28149]: I0313 13:14:58.657597 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-58fdc6f86c-dr4ls" event={"ID":"bda8430e-9baa-40e1-9fac-1c7ccb4767e1","Type":"ContainerDied","Data":"0daa33b4555f730f86f340de528dc03739db13ab93bc9ec761753d10746e41a2"}
Mar 13 13:14:58.762339 master-0 kubenswrapper[28149]: I0313 13:14:58.760046 28149 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-novncproxy-0"
Mar 13 13:14:59.797437 master-0 kubenswrapper[28149]: I0313 13:14:59.797155 28149 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-cell-mapping-k24lw"]
Mar 13 13:14:59.806673 master-0 kubenswrapper[28149]: I0313 13:14:59.800350 28149 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-k24lw"
Mar 13 13:14:59.822463 master-0 kubenswrapper[28149]: I0313 13:14:59.822041 28149 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-config-data"
Mar 13 13:14:59.822738 master-0 kubenswrapper[28149]: I0313 13:14:59.822507 28149 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-scripts"
Mar 13 13:14:59.881183 master-0 kubenswrapper[28149]: I0313 13:14:59.878542 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-58fdc6f86c-dr4ls" event={"ID":"bda8430e-9baa-40e1-9fac-1c7ccb4767e1","Type":"ContainerStarted","Data":"5d5a80b2e9c01ad87ecfffa73f0a752e1630ceb09c03416479d6752447cf5d72"}
Mar 13 13:14:59.881183 master-0 kubenswrapper[28149]: I0313 13:14:59.878733 28149 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-58fdc6f86c-dr4ls"
Mar 13 13:14:59.902193 master-0 kubenswrapper[28149]: I0313 13:14:59.899828 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dlhhh\" (UniqueName: \"kubernetes.io/projected/7226c8ae-d652-4f12-a915-9f15c50d5631-kube-api-access-dlhhh\") pod \"nova-cell1-cell-mapping-k24lw\" (UID: \"7226c8ae-d652-4f12-a915-9f15c50d5631\") " pod="openstack/nova-cell1-cell-mapping-k24lw"
Mar 13 13:14:59.902193 master-0 kubenswrapper[28149]: I0313 13:14:59.899924 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7226c8ae-d652-4f12-a915-9f15c50d5631-scripts\") pod \"nova-cell1-cell-mapping-k24lw\" (UID: \"7226c8ae-d652-4f12-a915-9f15c50d5631\") " pod="openstack/nova-cell1-cell-mapping-k24lw"
Mar 13 13:14:59.902193 master-0 kubenswrapper[28149]: I0313 13:14:59.900195 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7226c8ae-d652-4f12-a915-9f15c50d5631-config-data\") pod \"nova-cell1-cell-mapping-k24lw\" (UID: \"7226c8ae-d652-4f12-a915-9f15c50d5631\") " pod="openstack/nova-cell1-cell-mapping-k24lw"
Mar 13 13:14:59.902193 master-0 kubenswrapper[28149]: I0313 13:14:59.900315 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7226c8ae-d652-4f12-a915-9f15c50d5631-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-k24lw\" (UID: \"7226c8ae-d652-4f12-a915-9f15c50d5631\") " pod="openstack/nova-cell1-cell-mapping-k24lw"
Mar 13 13:14:59.953182 master-0 kubenswrapper[28149]: I0313 13:14:59.951232 28149 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-k24lw"]
Mar 13 13:15:00.014184 master-0 kubenswrapper[28149]: I0313 13:15:00.008739 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dlhhh\" (UniqueName: \"kubernetes.io/projected/7226c8ae-d652-4f12-a915-9f15c50d5631-kube-api-access-dlhhh\") pod \"nova-cell1-cell-mapping-k24lw\" (UID: \"7226c8ae-d652-4f12-a915-9f15c50d5631\") " pod="openstack/nova-cell1-cell-mapping-k24lw"
Mar 13 13:15:00.014184 master-0 kubenswrapper[28149]: I0313 13:15:00.008818 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7226c8ae-d652-4f12-a915-9f15c50d5631-scripts\") pod \"nova-cell1-cell-mapping-k24lw\" (UID: \"7226c8ae-d652-4f12-a915-9f15c50d5631\") " pod="openstack/nova-cell1-cell-mapping-k24lw"
Mar 13 13:15:00.014184 master-0 kubenswrapper[28149]: I0313 13:15:00.008938 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7226c8ae-d652-4f12-a915-9f15c50d5631-config-data\") pod \"nova-cell1-cell-mapping-k24lw\" (UID: \"7226c8ae-d652-4f12-a915-9f15c50d5631\") " pod="openstack/nova-cell1-cell-mapping-k24lw"
Mar 13 13:15:00.014184 master-0 kubenswrapper[28149]: I0313 13:15:00.009021 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7226c8ae-d652-4f12-a915-9f15c50d5631-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-k24lw\" (UID: \"7226c8ae-d652-4f12-a915-9f15c50d5631\") " pod="openstack/nova-cell1-cell-mapping-k24lw"
Mar 13 13:15:00.057270 master-0 kubenswrapper[28149]: I0313 13:15:00.042678 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7226c8ae-d652-4f12-a915-9f15c50d5631-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-k24lw\" (UID: \"7226c8ae-d652-4f12-a915-9f15c50d5631\") " pod="openstack/nova-cell1-cell-mapping-k24lw"
Mar 13 13:15:00.085166 master-0 kubenswrapper[28149]: I0313 13:15:00.067365 28149 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-host-discover-x9gz7"]
Mar 13 13:15:00.085166 master-0 kubenswrapper[28149]: I0313 13:15:00.072935 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7226c8ae-d652-4f12-a915-9f15c50d5631-scripts\") pod \"nova-cell1-cell-mapping-k24lw\" (UID: \"7226c8ae-d652-4f12-a915-9f15c50d5631\") " pod="openstack/nova-cell1-cell-mapping-k24lw"
Mar 13 13:15:00.482956 master-0 kubenswrapper[28149]: I0313 13:15:00.482746 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dlhhh\" (UniqueName: \"kubernetes.io/projected/7226c8ae-d652-4f12-a915-9f15c50d5631-kube-api-access-dlhhh\") pod \"nova-cell1-cell-mapping-k24lw\" (UID: \"7226c8ae-d652-4f12-a915-9f15c50d5631\") " pod="openstack/nova-cell1-cell-mapping-k24lw"
Mar 13 13:15:00.496788 master-0 kubenswrapper[28149]: I0313 13:15:00.494996 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7226c8ae-d652-4f12-a915-9f15c50d5631-config-data\") pod \"nova-cell1-cell-mapping-k24lw\" (UID: \"7226c8ae-d652-4f12-a915-9f15c50d5631\") " pod="openstack/nova-cell1-cell-mapping-k24lw"
Mar 13 13:15:00.498534 master-0 kubenswrapper[28149]: I0313 13:15:00.497993 28149 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-k24lw"
Mar 13 13:15:00.506184 master-0 kubenswrapper[28149]: I0313 13:15:00.501378 28149 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-host-discover-x9gz7"
Mar 13 13:15:00.573181 master-0 kubenswrapper[28149]: I0313 13:15:00.569297 28149 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-host-discover-x9gz7"]
Mar 13 13:15:00.600173 master-0 kubenswrapper[28149]: I0313 13:15:00.598785 28149 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-58fdc6f86c-dr4ls" podStartSLOduration=5.598740943 podStartE2EDuration="5.598740943s" podCreationTimestamp="2026-03-13 13:14:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 13:14:59.99317912 +0000 UTC m=+1273.646644299" watchObservedRunningTime="2026-03-13 13:15:00.598740943 +0000 UTC m=+1274.252206102"
Mar 13 13:15:00.632188 master-0 kubenswrapper[28149]: I0313 13:15:00.631921 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/305404ca-1b24-429d-853c-ec1a49b101f0-scripts\") pod \"nova-cell1-host-discover-x9gz7\" (UID: \"305404ca-1b24-429d-853c-ec1a49b101f0\") " pod="openstack/nova-cell1-host-discover-x9gz7"
Mar 13 13:15:00.632490 master-0 kubenswrapper[28149]: I0313 13:15:00.632184 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/305404ca-1b24-429d-853c-ec1a49b101f0-combined-ca-bundle\") pod \"nova-cell1-host-discover-x9gz7\" (UID: \"305404ca-1b24-429d-853c-ec1a49b101f0\") " pod="openstack/nova-cell1-host-discover-x9gz7" Mar 13 13:15:00.632490 master-0 kubenswrapper[28149]: I0313 13:15:00.632405 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w2c29\" (UniqueName: \"kubernetes.io/projected/305404ca-1b24-429d-853c-ec1a49b101f0-kube-api-access-w2c29\") pod \"nova-cell1-host-discover-x9gz7\" (UID: \"305404ca-1b24-429d-853c-ec1a49b101f0\") " pod="openstack/nova-cell1-host-discover-x9gz7" Mar 13 13:15:00.632490 master-0 kubenswrapper[28149]: I0313 13:15:00.632439 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/305404ca-1b24-429d-853c-ec1a49b101f0-config-data\") pod \"nova-cell1-host-discover-x9gz7\" (UID: \"305404ca-1b24-429d-853c-ec1a49b101f0\") " pod="openstack/nova-cell1-host-discover-x9gz7" Mar 13 13:15:00.915177 master-0 kubenswrapper[28149]: I0313 13:15:00.913883 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/305404ca-1b24-429d-853c-ec1a49b101f0-combined-ca-bundle\") pod \"nova-cell1-host-discover-x9gz7\" (UID: \"305404ca-1b24-429d-853c-ec1a49b101f0\") " pod="openstack/nova-cell1-host-discover-x9gz7" Mar 13 13:15:00.915177 master-0 kubenswrapper[28149]: I0313 13:15:00.914019 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w2c29\" (UniqueName: \"kubernetes.io/projected/305404ca-1b24-429d-853c-ec1a49b101f0-kube-api-access-w2c29\") pod \"nova-cell1-host-discover-x9gz7\" (UID: \"305404ca-1b24-429d-853c-ec1a49b101f0\") " pod="openstack/nova-cell1-host-discover-x9gz7" Mar 13 13:15:00.915177 master-0 
kubenswrapper[28149]: I0313 13:15:00.914042 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/305404ca-1b24-429d-853c-ec1a49b101f0-config-data\") pod \"nova-cell1-host-discover-x9gz7\" (UID: \"305404ca-1b24-429d-853c-ec1a49b101f0\") " pod="openstack/nova-cell1-host-discover-x9gz7" Mar 13 13:15:00.915177 master-0 kubenswrapper[28149]: I0313 13:15:00.914163 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/305404ca-1b24-429d-853c-ec1a49b101f0-scripts\") pod \"nova-cell1-host-discover-x9gz7\" (UID: \"305404ca-1b24-429d-853c-ec1a49b101f0\") " pod="openstack/nova-cell1-host-discover-x9gz7" Mar 13 13:15:01.861933 master-0 kubenswrapper[28149]: I0313 13:15:00.928550 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/305404ca-1b24-429d-853c-ec1a49b101f0-scripts\") pod \"nova-cell1-host-discover-x9gz7\" (UID: \"305404ca-1b24-429d-853c-ec1a49b101f0\") " pod="openstack/nova-cell1-host-discover-x9gz7" Mar 13 13:15:01.861933 master-0 kubenswrapper[28149]: I0313 13:15:00.940982 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/305404ca-1b24-429d-853c-ec1a49b101f0-combined-ca-bundle\") pod \"nova-cell1-host-discover-x9gz7\" (UID: \"305404ca-1b24-429d-853c-ec1a49b101f0\") " pod="openstack/nova-cell1-host-discover-x9gz7" Mar 13 13:15:01.861933 master-0 kubenswrapper[28149]: I0313 13:15:00.941836 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/305404ca-1b24-429d-853c-ec1a49b101f0-config-data\") pod \"nova-cell1-host-discover-x9gz7\" (UID: \"305404ca-1b24-429d-853c-ec1a49b101f0\") " pod="openstack/nova-cell1-host-discover-x9gz7" Mar 13 13:15:01.861933 master-0 kubenswrapper[28149]: I0313 
13:15:00.968556 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w2c29\" (UniqueName: \"kubernetes.io/projected/305404ca-1b24-429d-853c-ec1a49b101f0-kube-api-access-w2c29\") pod \"nova-cell1-host-discover-x9gz7\" (UID: \"305404ca-1b24-429d-853c-ec1a49b101f0\") " pod="openstack/nova-cell1-host-discover-x9gz7" Mar 13 13:15:02.171471 master-0 kubenswrapper[28149]: E0313 13:15:02.170168 28149 kubelet.go:2526] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.483s" Mar 13 13:15:02.183626 master-0 kubenswrapper[28149]: I0313 13:15:02.183569 28149 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-k24lw"] Mar 13 13:15:02.283165 master-0 kubenswrapper[28149]: I0313 13:15:02.280652 28149 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-host-discover-x9gz7" Mar 13 13:15:02.382770 master-0 kubenswrapper[28149]: I0313 13:15:02.382735 28149 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ironic-conductor-0" Mar 13 13:15:02.388345 master-0 kubenswrapper[28149]: I0313 13:15:02.388313 28149 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ironic-conductor-0" Mar 13 13:15:06.218065 master-0 kubenswrapper[28149]: I0313 13:15:03.872320 28149 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/ironic-operator-controller-manager-6bbb499bbc-4qdjn" podUID="5a9ed8da-031e-4009-b7aa-c1dd970911c6" containerName="manager" probeResult="failure" output="Get \"http://10.128.0.149:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 13 13:15:06.218065 master-0 kubenswrapper[28149]: I0313 13:15:03.872790 28149 patch_prober.go:28] interesting pod/metrics-server-84b66c585b-f7g5r container/metrics-server namespace/openshift-monitoring: Liveness probe status=failure output="Get 
\"https://10.128.0.89:10250/livez\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 13 13:15:06.218065 master-0 kubenswrapper[28149]: I0313 13:15:03.872834 28149 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-monitoring/metrics-server-84b66c585b-f7g5r" podUID="a529d528-3bd9-4512-9ae8-8284329c9c4c" containerName="metrics-server" probeResult="failure" output="Get \"https://10.128.0.89:10250/livez\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 13 13:15:06.413345 master-0 kubenswrapper[28149]: I0313 13:15:06.413168 28149 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/neutron-operator-controller-manager-776c5696bf-5z8g2" podUID="b0caec54-e9db-4ace-8b0d-aebafbb6608b" containerName="manager" probeResult="failure" output="Get \"http://10.128.0.153:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 13 13:15:06.414178 master-0 kubenswrapper[28149]: I0313 13:15:06.414033 28149 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/swift-operator-controller-manager-677c674df7-zj78q" podUID="9a01f1d0-3f33-41a0-be76-39ce52e88fab" containerName="manager" probeResult="failure" output="Get \"http://10.128.0.159:8081/readyz\": dial tcp 10.128.0.159:8081: i/o timeout (Client.Timeout exceeded while awaiting headers)" Mar 13 13:15:06.414576 master-0 kubenswrapper[28149]: I0313 13:15:06.414497 28149 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/watcher-operator-controller-manager-6dd88c6f67-c55zw" podUID="844a5475-8fda-433c-b083-26608607b8bb" containerName="manager" probeResult="failure" output="Get \"http://10.128.0.162:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 13 13:15:06.414812 master-0 kubenswrapper[28149]: I0313 13:15:06.414725 28149 
prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/telemetry-operator-controller-manager-6cd66dbd4b-df58d" podUID="cc4c1517-f5c9-4e2e-9659-e1ad6ce7f4de" containerName="manager" probeResult="failure" output="Get \"http://10.128.0.160:8081/readyz\": dial tcp 10.128.0.160:8081: i/o timeout (Client.Timeout exceeded while awaiting headers)" Mar 13 13:15:06.414866 master-0 kubenswrapper[28149]: I0313 13:15:06.414807 28149 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/keystone-operator-controller-manager-684f77d66d-thm6q" podUID="bee4fa71-7893-41d2-8512-5d26c6da9913" containerName="manager" probeResult="failure" output="Get \"http://10.128.0.150:8081/readyz\": dial tcp 10.128.0.150:8081: i/o timeout (Client.Timeout exceeded while awaiting headers)" Mar 13 13:15:06.414938 master-0 kubenswrapper[28149]: I0313 13:15:06.414892 28149 patch_prober.go:28] interesting pod/authentication-operator-7c6989d6c4-tc4ht container/authentication-operator namespace/openshift-authentication-operator: Liveness probe status=failure output="Get \"https://10.128.0.22:8443/healthz\": dial tcp 10.128.0.22:8443: i/o timeout" start-of-body= Mar 13 13:15:06.415027 master-0 kubenswrapper[28149]: I0313 13:15:06.414933 28149 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-authentication-operator/authentication-operator-7c6989d6c4-tc4ht" podUID="d11f8baa-6e8e-4ac0-9b23-1c44efd0ab2a" containerName="authentication-operator" probeResult="failure" output="Get \"https://10.128.0.22:8443/healthz\": dial tcp 10.128.0.22:8443: i/o timeout" Mar 13 13:15:06.415154 master-0 kubenswrapper[28149]: I0313 13:15:06.415091 28149 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/placement-operator-controller-manager-574d45c66c-7nr7s" podUID="cb86bcb9-ed8a-4046-99ed-8c9963f4af4d" containerName="manager" probeResult="failure" output="Get \"http://10.128.0.158:8081/readyz\": context deadline exceeded (Client.Timeout 
exceeded while awaiting headers)" Mar 13 13:15:06.415252 master-0 kubenswrapper[28149]: I0313 13:15:06.415230 28149 patch_prober.go:28] interesting pod/openshift-config-operator-64488f9d78-t8fb4 container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.128.0.6:8443/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 13 13:15:06.415332 master-0 kubenswrapper[28149]: I0313 13:15:06.415257 28149 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-64488f9d78-t8fb4" podUID="f0803181-4e37-43fa-8ddc-9c76d3f61817" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.6:8443/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 13 13:15:06.423728 master-0 kubenswrapper[28149]: I0313 13:15:06.421336 28149 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/manila-operator-controller-manager-68f45f9d9f-cjfg7" podUID="3f835be3-b114-4593-af89-119b729df40a" containerName="manager" probeResult="failure" output="Get \"http://10.128.0.151:8081/readyz\": dial tcp 10.128.0.151:8081: i/o timeout (Client.Timeout exceeded while awaiting headers)" Mar 13 13:15:06.423728 master-0 kubenswrapper[28149]: I0313 13:15:06.421534 28149 patch_prober.go:28] interesting pod/openshift-kube-scheduler-master-0 container/kube-scheduler namespace/openshift-kube-scheduler: Liveness probe status=failure output="Get \"https://192.168.32.10:10259/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 13 13:15:06.423728 master-0 kubenswrapper[28149]: I0313 13:15:06.421615 28149 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" podUID="1453f6461bf5d599ad65a4656343ee91" 
containerName="kube-scheduler" probeResult="failure" output="Get \"https://192.168.32.10:10259/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 13 13:15:06.423728 master-0 kubenswrapper[28149]: I0313 13:15:06.421476 28149 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/mariadb-operator-controller-manager-658d4cdd5-g962k" podUID="033c7536-1e30-42bc-b7be-c5755276a8aa" containerName="manager" probeResult="failure" output="Get \"http://10.128.0.152:8081/readyz\": dial tcp 10.128.0.152:8081: i/o timeout (Client.Timeout exceeded while awaiting headers)" Mar 13 13:15:06.428065 master-0 kubenswrapper[28149]: I0313 13:15:06.424822 28149 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/nova-operator-controller-manager-569cc54c5-4ns7k" podUID="c0d9cf57-a057-4dd4-9d4c-d292fbcdc501" containerName="manager" probeResult="failure" output="Get \"http://10.128.0.154:8081/readyz\": dial tcp 10.128.0.154:8081: i/o timeout (Client.Timeout exceeded while awaiting headers)" Mar 13 13:15:06.428065 master-0 kubenswrapper[28149]: I0313 13:15:06.424985 28149 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/test-operator-controller-manager-5c5cb9c4d7-7djhg" podUID="cb979094-d28c-477a-a8c8-91d4b8eb946c" containerName="manager" probeResult="failure" output="Get \"http://10.128.0.161:8081/readyz\": dial tcp 10.128.0.161:8081: i/o timeout (Client.Timeout exceeded while awaiting headers)" Mar 13 13:15:06.428065 master-0 kubenswrapper[28149]: I0313 13:15:06.425059 28149 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/ovn-operator-controller-manager-bbc5b68f9-dnttw" podUID="abc2aa99-ac15-433b-b478-711da24b8dbf" containerName="manager" probeResult="failure" output="Get \"http://10.128.0.157:8081/readyz\": dial tcp 10.128.0.157:8081: i/o timeout (Client.Timeout exceeded while awaiting headers)" Mar 13 13:15:06.428065 
master-0 kubenswrapper[28149]: I0313 13:15:06.425265 28149 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/octavia-operator-controller-manager-5f4f55cb5c-qw5dr" podUID="8418da33-bbf7-4930-8e12-07bc1172da01" containerName="manager" probeResult="failure" output="Get \"http://10.128.0.155:8081/readyz\": dial tcp 10.128.0.155:8081: i/o timeout (Client.Timeout exceeded while awaiting headers)" Mar 13 13:15:06.428065 master-0 kubenswrapper[28149]: I0313 13:15:06.425383 28149 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-wtf6j container/router namespace/openshift-ingress: Readiness probe status=failure output="Get \"http://localhost:1936/healthz/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 13 13:15:06.428065 master-0 kubenswrapper[28149]: I0313 13:15:06.425409 28149 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-ingress/router-default-79f8cd6fdd-wtf6j" podUID="45925a5e-41ae-4c19-b586-3151c7677612" containerName="router" probeResult="failure" output="Get \"http://localhost:1936/healthz/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 13 13:15:06.428065 master-0 kubenswrapper[28149]: I0313 13:15:06.425456 28149 patch_prober.go:28] interesting pod/openshift-config-operator-64488f9d78-t8fb4 container/openshift-config-operator namespace/openshift-config-operator: Liveness probe status=failure output="Get \"https://10.128.0.6:8443/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 13 13:15:06.428065 master-0 kubenswrapper[28149]: I0313 13:15:06.425475 28149 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-config-operator/openshift-config-operator-64488f9d78-t8fb4" podUID="f0803181-4e37-43fa-8ddc-9c76d3f61817" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.6:8443/healthz\": context deadline exceeded 
(Client.Timeout exceeded while awaiting headers)" Mar 13 13:15:06.428065 master-0 kubenswrapper[28149]: I0313 13:15:06.425693 28149 patch_prober.go:28] interesting pod/metrics-server-84b66c585b-f7g5r container/metrics-server namespace/openshift-monitoring: Readiness probe status=failure output="Get \"https://10.128.0.89:10250/livez\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 13 13:15:06.428065 master-0 kubenswrapper[28149]: I0313 13:15:06.425749 28149 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-monitoring/metrics-server-84b66c585b-f7g5r" podUID="a529d528-3bd9-4512-9ae8-8284329c9c4c" containerName="metrics-server" probeResult="failure" output="Get \"https://10.128.0.89:10250/livez\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 13 13:15:06.910161 master-0 kubenswrapper[28149]: I0313 13:15:06.907773 28149 patch_prober.go:28] interesting pod/dns-default-m7k6m container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.128.0.45:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 13 13:15:06.910161 master-0 kubenswrapper[28149]: I0313 13:15:06.907861 28149 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-m7k6m" podUID="ef42b65e-2d92-46ac-baaf-30e213787781" containerName="dns" probeResult="failure" output="Get \"http://10.128.0.45:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 13 13:15:06.938164 master-0 kubenswrapper[28149]: I0313 13:15:06.934329 28149 trace.go:236] Trace[152247984]: "Calculate volume metrics of mysql-db for pod openstack/openstack-cell1-galera-0" (13-Mar-2026 13:15:03.485) (total time: 3448ms): Mar 13 13:15:06.938164 master-0 kubenswrapper[28149]: Trace[152247984]: [3.448466589s] [3.448466589s] END Mar 13 
13:15:07.306247 master-0 kubenswrapper[28149]: I0313 13:15:07.295663 28149 trace.go:236] Trace[612146604]: "Calculate volume metrics of glance for pod openstack/glance-e6fbd-default-internal-api-0" (13-Mar-2026 13:15:03.912) (total time: 3383ms): Mar 13 13:15:07.306247 master-0 kubenswrapper[28149]: Trace[612146604]: [3.383513949s] [3.383513949s] END Mar 13 13:15:07.348226 master-0 kubenswrapper[28149]: E0313 13:15:07.346813 28149 kubelet.go:2526] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="4.013s" Mar 13 13:15:07.348226 master-0 kubenswrapper[28149]: I0313 13:15:07.347215 28149 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-58fdc6f86c-dr4ls" Mar 13 13:15:07.414810 master-0 kubenswrapper[28149]: I0313 13:15:07.414724 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-k24lw" event={"ID":"7226c8ae-d652-4f12-a915-9f15c50d5631","Type":"ContainerStarted","Data":"a63508984eca401169937526e3c646ea6e7580af5fbdfb4d9d76757ebd810dd0"} Mar 13 13:15:07.442184 master-0 kubenswrapper[28149]: I0313 13:15:07.441351 28149 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-host-discover-x9gz7"] Mar 13 13:15:07.480164 master-0 kubenswrapper[28149]: I0313 13:15:07.476715 28149 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-cell-mapping-k24lw" podStartSLOduration=8.476689257 podStartE2EDuration="8.476689257s" podCreationTimestamp="2026-03-13 13:14:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 13:15:07.460174342 +0000 UTC m=+1281.113639521" watchObservedRunningTime="2026-03-13 13:15:07.476689257 +0000 UTC m=+1281.130154416" Mar 13 13:15:07.536106 master-0 kubenswrapper[28149]: I0313 13:15:07.536035 28149 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openstack/dnsmasq-dns-5c9c9ccb7c-grqhr"] Mar 13 13:15:07.536392 master-0 kubenswrapper[28149]: I0313 13:15:07.536339 28149 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5c9c9ccb7c-grqhr" podUID="98ef9c97-a395-412b-b84e-4bdfc2be1e17" containerName="dnsmasq-dns" containerID="cri-o://9a061918dc7dfbdb83b8b4351db3d92bb87549bb4234ae9d6238db27016b09e5" gracePeriod=10 Mar 13 13:15:07.549325 master-0 kubenswrapper[28149]: I0313 13:15:07.548402 28149 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/ironic-conductor-0" podUID="8fdaa161-cf3d-465a-8e70-c2af73f96711" containerName="ironic-conductor" probeResult="failure" output="command timed out" Mar 13 13:15:07.976786 master-0 kubenswrapper[28149]: I0313 13:15:07.973124 28149 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Mar 13 13:15:07.976786 master-0 kubenswrapper[28149]: I0313 13:15:07.973414 28149 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="2558d1e1-d076-4b79-8591-d1e7ec5beaec" containerName="nova-api-log" containerID="cri-o://f7cec101b047b2f8a727c0417081b0d8950c8dd051d0751f87497369cea16285" gracePeriod=30 Mar 13 13:15:07.976786 master-0 kubenswrapper[28149]: I0313 13:15:07.974076 28149 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="2558d1e1-d076-4b79-8591-d1e7ec5beaec" containerName="nova-api-api" containerID="cri-o://1f2c978d66921698e61f0efc804ddcefb2484929e4de37e25b47bfa97b66007a" gracePeriod=30 Mar 13 13:15:08.445674 master-0 kubenswrapper[28149]: I0313 13:15:08.439283 28149 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="e579030e-e1cd-4dee-8a65-0e7a9b636974" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.128.1.12:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 13 13:15:08.445674 
master-0 kubenswrapper[28149]: I0313 13:15:08.439675 28149 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="e579030e-e1cd-4dee-8a65-0e7a9b636974" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.128.1.12:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 13 13:15:08.446537 master-0 kubenswrapper[28149]: I0313 13:15:08.445957 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-k24lw" event={"ID":"7226c8ae-d652-4f12-a915-9f15c50d5631","Type":"ContainerStarted","Data":"dc085ec39467ea5c61eb8978b66f220552e3206d8879580bc67558a7fcac4072"} Mar 13 13:15:08.466907 master-0 kubenswrapper[28149]: I0313 13:15:08.450388 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-host-discover-x9gz7" event={"ID":"305404ca-1b24-429d-853c-ec1a49b101f0","Type":"ContainerStarted","Data":"9fb90c9f33c891e658af57f936c3a93210988f1e89fe1e54a46c9d8f2826f2c2"} Mar 13 13:15:08.549566 master-0 kubenswrapper[28149]: I0313 13:15:08.549498 28149 generic.go:334] "Generic (PLEG): container finished" podID="98ef9c97-a395-412b-b84e-4bdfc2be1e17" containerID="9a061918dc7dfbdb83b8b4351db3d92bb87549bb4234ae9d6238db27016b09e5" exitCode=0 Mar 13 13:15:08.549805 master-0 kubenswrapper[28149]: I0313 13:15:08.549569 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c9c9ccb7c-grqhr" event={"ID":"98ef9c97-a395-412b-b84e-4bdfc2be1e17","Type":"ContainerDied","Data":"9a061918dc7dfbdb83b8b4351db3d92bb87549bb4234ae9d6238db27016b09e5"} Mar 13 13:15:08.896449 master-0 kubenswrapper[28149]: I0313 13:15:08.896397 28149 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5c9c9ccb7c-grqhr" Mar 13 13:15:08.932385 master-0 kubenswrapper[28149]: I0313 13:15:08.932298 28149 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/98ef9c97-a395-412b-b84e-4bdfc2be1e17-ovsdbserver-nb\") pod \"98ef9c97-a395-412b-b84e-4bdfc2be1e17\" (UID: \"98ef9c97-a395-412b-b84e-4bdfc2be1e17\") " Mar 13 13:15:08.932385 master-0 kubenswrapper[28149]: I0313 13:15:08.932371 28149 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/98ef9c97-a395-412b-b84e-4bdfc2be1e17-dns-swift-storage-0\") pod \"98ef9c97-a395-412b-b84e-4bdfc2be1e17\" (UID: \"98ef9c97-a395-412b-b84e-4bdfc2be1e17\") " Mar 13 13:15:08.932708 master-0 kubenswrapper[28149]: I0313 13:15:08.932492 28149 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/98ef9c97-a395-412b-b84e-4bdfc2be1e17-ovsdbserver-sb\") pod \"98ef9c97-a395-412b-b84e-4bdfc2be1e17\" (UID: \"98ef9c97-a395-412b-b84e-4bdfc2be1e17\") " Mar 13 13:15:08.932708 master-0 kubenswrapper[28149]: I0313 13:15:08.932539 28149 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jwlmg\" (UniqueName: \"kubernetes.io/projected/98ef9c97-a395-412b-b84e-4bdfc2be1e17-kube-api-access-jwlmg\") pod \"98ef9c97-a395-412b-b84e-4bdfc2be1e17\" (UID: \"98ef9c97-a395-412b-b84e-4bdfc2be1e17\") " Mar 13 13:15:08.934253 master-0 kubenswrapper[28149]: I0313 13:15:08.932840 28149 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/98ef9c97-a395-412b-b84e-4bdfc2be1e17-config\") pod \"98ef9c97-a395-412b-b84e-4bdfc2be1e17\" (UID: \"98ef9c97-a395-412b-b84e-4bdfc2be1e17\") " Mar 13 13:15:08.934253 master-0 kubenswrapper[28149]: I0313 13:15:08.932878 28149 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/98ef9c97-a395-412b-b84e-4bdfc2be1e17-dns-svc\") pod \"98ef9c97-a395-412b-b84e-4bdfc2be1e17\" (UID: \"98ef9c97-a395-412b-b84e-4bdfc2be1e17\") " Mar 13 13:15:08.940675 master-0 kubenswrapper[28149]: I0313 13:15:08.940575 28149 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/98ef9c97-a395-412b-b84e-4bdfc2be1e17-kube-api-access-jwlmg" (OuterVolumeSpecName: "kube-api-access-jwlmg") pod "98ef9c97-a395-412b-b84e-4bdfc2be1e17" (UID: "98ef9c97-a395-412b-b84e-4bdfc2be1e17"). InnerVolumeSpecName "kube-api-access-jwlmg". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 13:15:09.024162 master-0 kubenswrapper[28149]: I0313 13:15:09.024104 28149 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/98ef9c97-a395-412b-b84e-4bdfc2be1e17-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "98ef9c97-a395-412b-b84e-4bdfc2be1e17" (UID: "98ef9c97-a395-412b-b84e-4bdfc2be1e17"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 13 13:15:09.036036 master-0 kubenswrapper[28149]: I0313 13:15:09.035982 28149 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/98ef9c97-a395-412b-b84e-4bdfc2be1e17-dns-swift-storage-0\") on node \"master-0\" DevicePath \"\""
Mar 13 13:15:09.036036 master-0 kubenswrapper[28149]: I0313 13:15:09.036020 28149 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jwlmg\" (UniqueName: \"kubernetes.io/projected/98ef9c97-a395-412b-b84e-4bdfc2be1e17-kube-api-access-jwlmg\") on node \"master-0\" DevicePath \"\""
Mar 13 13:15:09.064645 master-0 kubenswrapper[28149]: I0313 13:15:09.046702 28149 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/98ef9c97-a395-412b-b84e-4bdfc2be1e17-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "98ef9c97-a395-412b-b84e-4bdfc2be1e17" (UID: "98ef9c97-a395-412b-b84e-4bdfc2be1e17"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 13 13:15:09.070260 master-0 kubenswrapper[28149]: I0313 13:15:09.066813 28149 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/98ef9c97-a395-412b-b84e-4bdfc2be1e17-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "98ef9c97-a395-412b-b84e-4bdfc2be1e17" (UID: "98ef9c97-a395-412b-b84e-4bdfc2be1e17"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 13 13:15:09.086426 master-0 kubenswrapper[28149]: I0313 13:15:09.086331 28149 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/98ef9c97-a395-412b-b84e-4bdfc2be1e17-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "98ef9c97-a395-412b-b84e-4bdfc2be1e17" (UID: "98ef9c97-a395-412b-b84e-4bdfc2be1e17"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 13 13:15:09.103825 master-0 kubenswrapper[28149]: I0313 13:15:09.103654 28149 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/98ef9c97-a395-412b-b84e-4bdfc2be1e17-config" (OuterVolumeSpecName: "config") pod "98ef9c97-a395-412b-b84e-4bdfc2be1e17" (UID: "98ef9c97-a395-412b-b84e-4bdfc2be1e17"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 13 13:15:09.138941 master-0 kubenswrapper[28149]: I0313 13:15:09.138875 28149 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/98ef9c97-a395-412b-b84e-4bdfc2be1e17-config\") on node \"master-0\" DevicePath \"\""
Mar 13 13:15:09.138941 master-0 kubenswrapper[28149]: I0313 13:15:09.138923 28149 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/98ef9c97-a395-412b-b84e-4bdfc2be1e17-dns-svc\") on node \"master-0\" DevicePath \"\""
Mar 13 13:15:09.138941 master-0 kubenswrapper[28149]: I0313 13:15:09.138936 28149 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/98ef9c97-a395-412b-b84e-4bdfc2be1e17-ovsdbserver-nb\") on node \"master-0\" DevicePath \"\""
Mar 13 13:15:09.138941 master-0 kubenswrapper[28149]: I0313 13:15:09.138955 28149 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/98ef9c97-a395-412b-b84e-4bdfc2be1e17-ovsdbserver-sb\") on node \"master-0\" DevicePath \"\""
Mar 13 13:15:09.564724 master-0 kubenswrapper[28149]: I0313 13:15:09.564663 28149 generic.go:334] "Generic (PLEG): container finished" podID="2558d1e1-d076-4b79-8591-d1e7ec5beaec" containerID="f7cec101b047b2f8a727c0417081b0d8950c8dd051d0751f87497369cea16285" exitCode=143
Mar 13 13:15:09.565303 master-0 kubenswrapper[28149]: I0313 13:15:09.564748 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"2558d1e1-d076-4b79-8591-d1e7ec5beaec","Type":"ContainerDied","Data":"f7cec101b047b2f8a727c0417081b0d8950c8dd051d0751f87497369cea16285"}
Mar 13 13:15:09.566788 master-0 kubenswrapper[28149]: I0313 13:15:09.566713 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-host-discover-x9gz7" event={"ID":"305404ca-1b24-429d-853c-ec1a49b101f0","Type":"ContainerStarted","Data":"0c2dfa30bdd965c95bda1eb4a92294f4be01e56d9d544492193a48b4fcb00920"}
Mar 13 13:15:09.571261 master-0 kubenswrapper[28149]: I0313 13:15:09.569938 28149 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5c9c9ccb7c-grqhr"
Mar 13 13:15:09.572290 master-0 kubenswrapper[28149]: I0313 13:15:09.572234 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c9c9ccb7c-grqhr" event={"ID":"98ef9c97-a395-412b-b84e-4bdfc2be1e17","Type":"ContainerDied","Data":"a6466c506ec304bbb680cab72507a2687c23c638cc257da49e87f963648fe84d"}
Mar 13 13:15:09.572379 master-0 kubenswrapper[28149]: I0313 13:15:09.572348 28149 scope.go:117] "RemoveContainer" containerID="9a061918dc7dfbdb83b8b4351db3d92bb87549bb4234ae9d6238db27016b09e5"
Mar 13 13:15:09.603076 master-0 kubenswrapper[28149]: I0313 13:15:09.602719 28149 scope.go:117] "RemoveContainer" containerID="3d146a842891051e150bef491ce987f74e4186fb94ca79c8e874679d4a0eac0a"
Mar 13 13:15:09.605880 master-0 kubenswrapper[28149]: I0313 13:15:09.605585 28149 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-host-discover-x9gz7" podStartSLOduration=10.60556135 podStartE2EDuration="10.60556135s" podCreationTimestamp="2026-03-13 13:14:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 13:15:09.595177191 +0000 UTC m=+1283.248642360" watchObservedRunningTime="2026-03-13 13:15:09.60556135 +0000 UTC m=+1283.259026509"
Mar 13 13:15:09.644757 master-0 kubenswrapper[28149]: I0313 13:15:09.644691 28149 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5c9c9ccb7c-grqhr"]
Mar 13 13:15:09.661325 master-0 kubenswrapper[28149]: I0313 13:15:09.661264 28149 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5c9c9ccb7c-grqhr"]
Mar 13 13:15:10.715209 master-0 kubenswrapper[28149]: I0313 13:15:10.707421 28149 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="98ef9c97-a395-412b-b84e-4bdfc2be1e17" path="/var/lib/kubelet/pods/98ef9c97-a395-412b-b84e-4bdfc2be1e17/volumes"
Mar 13 13:15:12.616516 master-0 kubenswrapper[28149]: I0313 13:15:12.616439 28149 generic.go:334] "Generic (PLEG): container finished" podID="305404ca-1b24-429d-853c-ec1a49b101f0" containerID="0c2dfa30bdd965c95bda1eb4a92294f4be01e56d9d544492193a48b4fcb00920" exitCode=0
Mar 13 13:15:12.617521 master-0 kubenswrapper[28149]: I0313 13:15:12.616532 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-host-discover-x9gz7" event={"ID":"305404ca-1b24-429d-853c-ec1a49b101f0","Type":"ContainerDied","Data":"0c2dfa30bdd965c95bda1eb4a92294f4be01e56d9d544492193a48b4fcb00920"}
Mar 13 13:15:13.346340 master-0 kubenswrapper[28149]: I0313 13:15:13.346274 28149 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Mar 13 13:15:13.459221 master-0 kubenswrapper[28149]: I0313 13:15:13.458712 28149 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2558d1e1-d076-4b79-8591-d1e7ec5beaec-config-data\") pod \"2558d1e1-d076-4b79-8591-d1e7ec5beaec\" (UID: \"2558d1e1-d076-4b79-8591-d1e7ec5beaec\") "
Mar 13 13:15:13.459221 master-0 kubenswrapper[28149]: I0313 13:15:13.458902 28149 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d6xh2\" (UniqueName: \"kubernetes.io/projected/2558d1e1-d076-4b79-8591-d1e7ec5beaec-kube-api-access-d6xh2\") pod \"2558d1e1-d076-4b79-8591-d1e7ec5beaec\" (UID: \"2558d1e1-d076-4b79-8591-d1e7ec5beaec\") "
Mar 13 13:15:13.459221 master-0 kubenswrapper[28149]: I0313 13:15:13.459049 28149 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2558d1e1-d076-4b79-8591-d1e7ec5beaec-logs\") pod \"2558d1e1-d076-4b79-8591-d1e7ec5beaec\" (UID: \"2558d1e1-d076-4b79-8591-d1e7ec5beaec\") "
Mar 13 13:15:13.459221 master-0 kubenswrapper[28149]: I0313 13:15:13.459121 28149 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2558d1e1-d076-4b79-8591-d1e7ec5beaec-combined-ca-bundle\") pod \"2558d1e1-d076-4b79-8591-d1e7ec5beaec\" (UID: \"2558d1e1-d076-4b79-8591-d1e7ec5beaec\") "
Mar 13 13:15:13.460687 master-0 kubenswrapper[28149]: I0313 13:15:13.460624 28149 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2558d1e1-d076-4b79-8591-d1e7ec5beaec-logs" (OuterVolumeSpecName: "logs") pod "2558d1e1-d076-4b79-8591-d1e7ec5beaec" (UID: "2558d1e1-d076-4b79-8591-d1e7ec5beaec"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 13 13:15:13.472851 master-0 kubenswrapper[28149]: I0313 13:15:13.470414 28149 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2558d1e1-d076-4b79-8591-d1e7ec5beaec-kube-api-access-d6xh2" (OuterVolumeSpecName: "kube-api-access-d6xh2") pod "2558d1e1-d076-4b79-8591-d1e7ec5beaec" (UID: "2558d1e1-d076-4b79-8591-d1e7ec5beaec"). InnerVolumeSpecName "kube-api-access-d6xh2". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 13 13:15:13.502296 master-0 kubenswrapper[28149]: I0313 13:15:13.500673 28149 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2558d1e1-d076-4b79-8591-d1e7ec5beaec-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2558d1e1-d076-4b79-8591-d1e7ec5beaec" (UID: "2558d1e1-d076-4b79-8591-d1e7ec5beaec"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 13 13:15:13.507602 master-0 kubenswrapper[28149]: I0313 13:15:13.507507 28149 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2558d1e1-d076-4b79-8591-d1e7ec5beaec-config-data" (OuterVolumeSpecName: "config-data") pod "2558d1e1-d076-4b79-8591-d1e7ec5beaec" (UID: "2558d1e1-d076-4b79-8591-d1e7ec5beaec"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 13 13:15:13.562193 master-0 kubenswrapper[28149]: I0313 13:15:13.562126 28149 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2558d1e1-d076-4b79-8591-d1e7ec5beaec-config-data\") on node \"master-0\" DevicePath \"\""
Mar 13 13:15:13.562193 master-0 kubenswrapper[28149]: I0313 13:15:13.562194 28149 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d6xh2\" (UniqueName: \"kubernetes.io/projected/2558d1e1-d076-4b79-8591-d1e7ec5beaec-kube-api-access-d6xh2\") on node \"master-0\" DevicePath \"\""
Mar 13 13:15:13.562430 master-0 kubenswrapper[28149]: I0313 13:15:13.562210 28149 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2558d1e1-d076-4b79-8591-d1e7ec5beaec-logs\") on node \"master-0\" DevicePath \"\""
Mar 13 13:15:13.562430 master-0 kubenswrapper[28149]: I0313 13:15:13.562220 28149 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2558d1e1-d076-4b79-8591-d1e7ec5beaec-combined-ca-bundle\") on node \"master-0\" DevicePath \"\""
Mar 13 13:15:13.630959 master-0 kubenswrapper[28149]: I0313 13:15:13.630898 28149 generic.go:334] "Generic (PLEG): container finished" podID="2558d1e1-d076-4b79-8591-d1e7ec5beaec" containerID="1f2c978d66921698e61f0efc804ddcefb2484929e4de37e25b47bfa97b66007a" exitCode=0
Mar 13 13:15:13.631493 master-0 kubenswrapper[28149]: I0313 13:15:13.631230 28149 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Mar 13 13:15:13.640864 master-0 kubenswrapper[28149]: I0313 13:15:13.635790 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"2558d1e1-d076-4b79-8591-d1e7ec5beaec","Type":"ContainerDied","Data":"1f2c978d66921698e61f0efc804ddcefb2484929e4de37e25b47bfa97b66007a"}
Mar 13 13:15:13.640864 master-0 kubenswrapper[28149]: I0313 13:15:13.636028 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"2558d1e1-d076-4b79-8591-d1e7ec5beaec","Type":"ContainerDied","Data":"2631daf48b0300f9351943060982ca4c28e91094ad606fdcc87e3207db9980d9"}
Mar 13 13:15:13.640864 master-0 kubenswrapper[28149]: I0313 13:15:13.636056 28149 scope.go:117] "RemoveContainer" containerID="1f2c978d66921698e61f0efc804ddcefb2484929e4de37e25b47bfa97b66007a"
Mar 13 13:15:13.670626 master-0 kubenswrapper[28149]: I0313 13:15:13.670585 28149 scope.go:117] "RemoveContainer" containerID="f7cec101b047b2f8a727c0417081b0d8950c8dd051d0751f87497369cea16285"
Mar 13 13:15:13.685850 master-0 kubenswrapper[28149]: I0313 13:15:13.685268 28149 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"]
Mar 13 13:15:13.696586 master-0 kubenswrapper[28149]: I0313 13:15:13.696518 28149 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"]
Mar 13 13:15:13.709543 master-0 kubenswrapper[28149]: I0313 13:15:13.708515 28149 scope.go:117] "RemoveContainer" containerID="1f2c978d66921698e61f0efc804ddcefb2484929e4de37e25b47bfa97b66007a"
Mar 13 13:15:13.709543 master-0 kubenswrapper[28149]: E0313 13:15:13.709063 28149 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1f2c978d66921698e61f0efc804ddcefb2484929e4de37e25b47bfa97b66007a\": container with ID starting with 1f2c978d66921698e61f0efc804ddcefb2484929e4de37e25b47bfa97b66007a not found: ID does not exist" containerID="1f2c978d66921698e61f0efc804ddcefb2484929e4de37e25b47bfa97b66007a"
Mar 13 13:15:13.709543 master-0 kubenswrapper[28149]: I0313 13:15:13.709103 28149 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1f2c978d66921698e61f0efc804ddcefb2484929e4de37e25b47bfa97b66007a"} err="failed to get container status \"1f2c978d66921698e61f0efc804ddcefb2484929e4de37e25b47bfa97b66007a\": rpc error: code = NotFound desc = could not find container \"1f2c978d66921698e61f0efc804ddcefb2484929e4de37e25b47bfa97b66007a\": container with ID starting with 1f2c978d66921698e61f0efc804ddcefb2484929e4de37e25b47bfa97b66007a not found: ID does not exist"
Mar 13 13:15:13.709543 master-0 kubenswrapper[28149]: I0313 13:15:13.709124 28149 scope.go:117] "RemoveContainer" containerID="f7cec101b047b2f8a727c0417081b0d8950c8dd051d0751f87497369cea16285"
Mar 13 13:15:13.712183 master-0 kubenswrapper[28149]: E0313 13:15:13.712129 28149 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f7cec101b047b2f8a727c0417081b0d8950c8dd051d0751f87497369cea16285\": container with ID starting with f7cec101b047b2f8a727c0417081b0d8950c8dd051d0751f87497369cea16285 not found: ID does not exist" containerID="f7cec101b047b2f8a727c0417081b0d8950c8dd051d0751f87497369cea16285"
Mar 13 13:15:13.712338 master-0 kubenswrapper[28149]: I0313 13:15:13.712188 28149 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f7cec101b047b2f8a727c0417081b0d8950c8dd051d0751f87497369cea16285"} err="failed to get container status \"f7cec101b047b2f8a727c0417081b0d8950c8dd051d0751f87497369cea16285\": rpc error: code = NotFound desc = could not find container \"f7cec101b047b2f8a727c0417081b0d8950c8dd051d0751f87497369cea16285\": container with ID starting with f7cec101b047b2f8a727c0417081b0d8950c8dd051d0751f87497369cea16285 not found: ID does not exist"
Mar 13 13:15:13.734622 master-0 kubenswrapper[28149]: I0313 13:15:13.731016 28149 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"]
Mar 13 13:15:13.734622 master-0 kubenswrapper[28149]: E0313 13:15:13.731608 28149 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="98ef9c97-a395-412b-b84e-4bdfc2be1e17" containerName="init"
Mar 13 13:15:13.734622 master-0 kubenswrapper[28149]: I0313 13:15:13.731624 28149 state_mem.go:107] "Deleted CPUSet assignment" podUID="98ef9c97-a395-412b-b84e-4bdfc2be1e17" containerName="init"
Mar 13 13:15:13.734622 master-0 kubenswrapper[28149]: E0313 13:15:13.731654 28149 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2558d1e1-d076-4b79-8591-d1e7ec5beaec" containerName="nova-api-log"
Mar 13 13:15:13.734622 master-0 kubenswrapper[28149]: I0313 13:15:13.731660 28149 state_mem.go:107] "Deleted CPUSet assignment" podUID="2558d1e1-d076-4b79-8591-d1e7ec5beaec" containerName="nova-api-log"
Mar 13 13:15:13.734622 master-0 kubenswrapper[28149]: E0313 13:15:13.731683 28149 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="98ef9c97-a395-412b-b84e-4bdfc2be1e17" containerName="dnsmasq-dns"
Mar 13 13:15:13.734622 master-0 kubenswrapper[28149]: I0313 13:15:13.731729 28149 state_mem.go:107] "Deleted CPUSet assignment" podUID="98ef9c97-a395-412b-b84e-4bdfc2be1e17" containerName="dnsmasq-dns"
Mar 13 13:15:13.734622 master-0 kubenswrapper[28149]: E0313 13:15:13.731745 28149 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2558d1e1-d076-4b79-8591-d1e7ec5beaec" containerName="nova-api-api"
Mar 13 13:15:13.734622 master-0 kubenswrapper[28149]: I0313 13:15:13.731751 28149 state_mem.go:107] "Deleted CPUSet assignment" podUID="2558d1e1-d076-4b79-8591-d1e7ec5beaec" containerName="nova-api-api"
Mar 13 13:15:13.734622 master-0 kubenswrapper[28149]: I0313 13:15:13.732047 28149 memory_manager.go:354] "RemoveStaleState removing state" podUID="2558d1e1-d076-4b79-8591-d1e7ec5beaec" containerName="nova-api-api"
Mar 13 13:15:13.734622 master-0 kubenswrapper[28149]: I0313 13:15:13.732069 28149 memory_manager.go:354] "RemoveStaleState removing state" podUID="98ef9c97-a395-412b-b84e-4bdfc2be1e17" containerName="dnsmasq-dns"
Mar 13 13:15:13.734622 master-0 kubenswrapper[28149]: I0313 13:15:13.732094 28149 memory_manager.go:354] "RemoveStaleState removing state" podUID="2558d1e1-d076-4b79-8591-d1e7ec5beaec" containerName="nova-api-log"
Mar 13 13:15:13.735520 master-0 kubenswrapper[28149]: I0313 13:15:13.734741 28149 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Mar 13 13:15:13.739134 master-0 kubenswrapper[28149]: I0313 13:15:13.738851 28149 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data"
Mar 13 13:15:13.740258 master-0 kubenswrapper[28149]: I0313 13:15:13.740219 28149 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc"
Mar 13 13:15:13.740483 master-0 kubenswrapper[28149]: I0313 13:15:13.740435 28149 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc"
Mar 13 13:15:13.748324 master-0 kubenswrapper[28149]: I0313 13:15:13.746875 28149 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"]
Mar 13 13:15:13.781862 master-0 kubenswrapper[28149]: I0313 13:15:13.781803 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/7e9230e3-1138-4bd9-9054-8ed3044f1f6c-internal-tls-certs\") pod \"nova-api-0\" (UID: \"7e9230e3-1138-4bd9-9054-8ed3044f1f6c\") " pod="openstack/nova-api-0"
Mar 13 13:15:13.782084 master-0 kubenswrapper[28149]: I0313 13:15:13.781970 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/7e9230e3-1138-4bd9-9054-8ed3044f1f6c-public-tls-certs\") pod \"nova-api-0\" (UID: \"7e9230e3-1138-4bd9-9054-8ed3044f1f6c\") " pod="openstack/nova-api-0"
Mar 13 13:15:13.782084 master-0 kubenswrapper[28149]: I0313 13:15:13.782003 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7e9230e3-1138-4bd9-9054-8ed3044f1f6c-logs\") pod \"nova-api-0\" (UID: \"7e9230e3-1138-4bd9-9054-8ed3044f1f6c\") " pod="openstack/nova-api-0"
Mar 13 13:15:13.782232 master-0 kubenswrapper[28149]: I0313 13:15:13.782126 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7e9230e3-1138-4bd9-9054-8ed3044f1f6c-config-data\") pod \"nova-api-0\" (UID: \"7e9230e3-1138-4bd9-9054-8ed3044f1f6c\") " pod="openstack/nova-api-0"
Mar 13 13:15:13.782232 master-0 kubenswrapper[28149]: I0313 13:15:13.782185 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7e9230e3-1138-4bd9-9054-8ed3044f1f6c-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"7e9230e3-1138-4bd9-9054-8ed3044f1f6c\") " pod="openstack/nova-api-0"
Mar 13 13:15:13.782319 master-0 kubenswrapper[28149]: I0313 13:15:13.782248 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jhw8r\" (UniqueName: \"kubernetes.io/projected/7e9230e3-1138-4bd9-9054-8ed3044f1f6c-kube-api-access-jhw8r\") pod \"nova-api-0\" (UID: \"7e9230e3-1138-4bd9-9054-8ed3044f1f6c\") " pod="openstack/nova-api-0"
Mar 13 13:15:13.884739 master-0 kubenswrapper[28149]: I0313 13:15:13.884683 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/7e9230e3-1138-4bd9-9054-8ed3044f1f6c-internal-tls-certs\") pod \"nova-api-0\" (UID: \"7e9230e3-1138-4bd9-9054-8ed3044f1f6c\") " pod="openstack/nova-api-0"
Mar 13 13:15:13.884739 master-0 kubenswrapper[28149]: I0313 13:15:13.884754 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/7e9230e3-1138-4bd9-9054-8ed3044f1f6c-public-tls-certs\") pod \"nova-api-0\" (UID: \"7e9230e3-1138-4bd9-9054-8ed3044f1f6c\") " pod="openstack/nova-api-0"
Mar 13 13:15:13.886369 master-0 kubenswrapper[28149]: I0313 13:15:13.884777 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7e9230e3-1138-4bd9-9054-8ed3044f1f6c-logs\") pod \"nova-api-0\" (UID: \"7e9230e3-1138-4bd9-9054-8ed3044f1f6c\") " pod="openstack/nova-api-0"
Mar 13 13:15:13.886369 master-0 kubenswrapper[28149]: I0313 13:15:13.884831 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7e9230e3-1138-4bd9-9054-8ed3044f1f6c-config-data\") pod \"nova-api-0\" (UID: \"7e9230e3-1138-4bd9-9054-8ed3044f1f6c\") " pod="openstack/nova-api-0"
Mar 13 13:15:13.886369 master-0 kubenswrapper[28149]: I0313 13:15:13.884853 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7e9230e3-1138-4bd9-9054-8ed3044f1f6c-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"7e9230e3-1138-4bd9-9054-8ed3044f1f6c\") " pod="openstack/nova-api-0"
Mar 13 13:15:13.886369 master-0 kubenswrapper[28149]: I0313 13:15:13.884883 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jhw8r\" (UniqueName: \"kubernetes.io/projected/7e9230e3-1138-4bd9-9054-8ed3044f1f6c-kube-api-access-jhw8r\") pod \"nova-api-0\" (UID: \"7e9230e3-1138-4bd9-9054-8ed3044f1f6c\") " pod="openstack/nova-api-0"
Mar 13 13:15:13.887772 master-0 kubenswrapper[28149]: I0313 13:15:13.887727 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7e9230e3-1138-4bd9-9054-8ed3044f1f6c-logs\") pod \"nova-api-0\" (UID: \"7e9230e3-1138-4bd9-9054-8ed3044f1f6c\") " pod="openstack/nova-api-0"
Mar 13 13:15:13.890224 master-0 kubenswrapper[28149]: I0313 13:15:13.890118 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/7e9230e3-1138-4bd9-9054-8ed3044f1f6c-internal-tls-certs\") pod \"nova-api-0\" (UID: \"7e9230e3-1138-4bd9-9054-8ed3044f1f6c\") " pod="openstack/nova-api-0"
Mar 13 13:15:13.892102 master-0 kubenswrapper[28149]: I0313 13:15:13.892075 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7e9230e3-1138-4bd9-9054-8ed3044f1f6c-config-data\") pod \"nova-api-0\" (UID: \"7e9230e3-1138-4bd9-9054-8ed3044f1f6c\") " pod="openstack/nova-api-0"
Mar 13 13:15:13.902967 master-0 kubenswrapper[28149]: I0313 13:15:13.902884 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/7e9230e3-1138-4bd9-9054-8ed3044f1f6c-public-tls-certs\") pod \"nova-api-0\" (UID: \"7e9230e3-1138-4bd9-9054-8ed3044f1f6c\") " pod="openstack/nova-api-0"
Mar 13 13:15:13.903241 master-0 kubenswrapper[28149]: I0313 13:15:13.903015 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7e9230e3-1138-4bd9-9054-8ed3044f1f6c-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"7e9230e3-1138-4bd9-9054-8ed3044f1f6c\") " pod="openstack/nova-api-0"
Mar 13 13:15:13.904853 master-0 kubenswrapper[28149]: I0313 13:15:13.904814 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jhw8r\" (UniqueName: \"kubernetes.io/projected/7e9230e3-1138-4bd9-9054-8ed3044f1f6c-kube-api-access-jhw8r\") pod \"nova-api-0\" (UID: \"7e9230e3-1138-4bd9-9054-8ed3044f1f6c\") " pod="openstack/nova-api-0"
Mar 13 13:15:14.099237 master-0 kubenswrapper[28149]: I0313 13:15:14.099180 28149 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Mar 13 13:15:14.109423 master-0 kubenswrapper[28149]: I0313 13:15:14.109377 28149 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-host-discover-x9gz7"
Mar 13 13:15:14.192264 master-0 kubenswrapper[28149]: I0313 13:15:14.192197 28149 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/305404ca-1b24-429d-853c-ec1a49b101f0-config-data\") pod \"305404ca-1b24-429d-853c-ec1a49b101f0\" (UID: \"305404ca-1b24-429d-853c-ec1a49b101f0\") "
Mar 13 13:15:14.192553 master-0 kubenswrapper[28149]: I0313 13:15:14.192427 28149 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/305404ca-1b24-429d-853c-ec1a49b101f0-combined-ca-bundle\") pod \"305404ca-1b24-429d-853c-ec1a49b101f0\" (UID: \"305404ca-1b24-429d-853c-ec1a49b101f0\") "
Mar 13 13:15:14.192911 master-0 kubenswrapper[28149]: I0313 13:15:14.192635 28149 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w2c29\" (UniqueName: \"kubernetes.io/projected/305404ca-1b24-429d-853c-ec1a49b101f0-kube-api-access-w2c29\") pod \"305404ca-1b24-429d-853c-ec1a49b101f0\" (UID: \"305404ca-1b24-429d-853c-ec1a49b101f0\") "
Mar 13 13:15:14.192911 master-0 kubenswrapper[28149]: I0313 13:15:14.192695 28149 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/305404ca-1b24-429d-853c-ec1a49b101f0-scripts\") pod \"305404ca-1b24-429d-853c-ec1a49b101f0\" (UID: \"305404ca-1b24-429d-853c-ec1a49b101f0\") "
Mar 13 13:15:14.197452 master-0 kubenswrapper[28149]: I0313 13:15:14.197391 28149 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/305404ca-1b24-429d-853c-ec1a49b101f0-scripts" (OuterVolumeSpecName: "scripts") pod "305404ca-1b24-429d-853c-ec1a49b101f0" (UID: "305404ca-1b24-429d-853c-ec1a49b101f0"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 13 13:15:14.201464 master-0 kubenswrapper[28149]: I0313 13:15:14.201411 28149 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/305404ca-1b24-429d-853c-ec1a49b101f0-kube-api-access-w2c29" (OuterVolumeSpecName: "kube-api-access-w2c29") pod "305404ca-1b24-429d-853c-ec1a49b101f0" (UID: "305404ca-1b24-429d-853c-ec1a49b101f0"). InnerVolumeSpecName "kube-api-access-w2c29". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 13 13:15:14.222683 master-0 kubenswrapper[28149]: I0313 13:15:14.222633 28149 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/305404ca-1b24-429d-853c-ec1a49b101f0-config-data" (OuterVolumeSpecName: "config-data") pod "305404ca-1b24-429d-853c-ec1a49b101f0" (UID: "305404ca-1b24-429d-853c-ec1a49b101f0"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 13 13:15:14.230467 master-0 kubenswrapper[28149]: I0313 13:15:14.230401 28149 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/305404ca-1b24-429d-853c-ec1a49b101f0-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "305404ca-1b24-429d-853c-ec1a49b101f0" (UID: "305404ca-1b24-429d-853c-ec1a49b101f0"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 13 13:15:14.299313 master-0 kubenswrapper[28149]: I0313 13:15:14.299245 28149 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/305404ca-1b24-429d-853c-ec1a49b101f0-combined-ca-bundle\") on node \"master-0\" DevicePath \"\""
Mar 13 13:15:14.299533 master-0 kubenswrapper[28149]: I0313 13:15:14.299313 28149 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w2c29\" (UniqueName: \"kubernetes.io/projected/305404ca-1b24-429d-853c-ec1a49b101f0-kube-api-access-w2c29\") on node \"master-0\" DevicePath \"\""
Mar 13 13:15:14.299533 master-0 kubenswrapper[28149]: I0313 13:15:14.299366 28149 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/305404ca-1b24-429d-853c-ec1a49b101f0-scripts\") on node \"master-0\" DevicePath \"\""
Mar 13 13:15:14.299533 master-0 kubenswrapper[28149]: I0313 13:15:14.299379 28149 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/305404ca-1b24-429d-853c-ec1a49b101f0-config-data\") on node \"master-0\" DevicePath \"\""
Mar 13 13:15:14.595068 master-0 kubenswrapper[28149]: W0313 13:15:14.594972 28149 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7e9230e3_1138_4bd9_9054_8ed3044f1f6c.slice/crio-b1b7ffc850e9b90c8f3a32a3460057ada1f8ccbd1ffa841f8fe8ffec8239d73c WatchSource:0}: Error finding container b1b7ffc850e9b90c8f3a32a3460057ada1f8ccbd1ffa841f8fe8ffec8239d73c: Status 404 returned error can't find the container with id b1b7ffc850e9b90c8f3a32a3460057ada1f8ccbd1ffa841f8fe8ffec8239d73c
Mar 13 13:15:14.601720 master-0 kubenswrapper[28149]: I0313 13:15:14.601663 28149 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"]
Mar 13 13:15:14.679251 master-0 kubenswrapper[28149]: I0313 13:15:14.679021 28149 generic.go:334] "Generic (PLEG): container finished" podID="7226c8ae-d652-4f12-a915-9f15c50d5631" containerID="dc085ec39467ea5c61eb8978b66f220552e3206d8879580bc67558a7fcac4072" exitCode=0
Mar 13 13:15:14.679251 master-0 kubenswrapper[28149]: I0313 13:15:14.679108 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-k24lw" event={"ID":"7226c8ae-d652-4f12-a915-9f15c50d5631","Type":"ContainerDied","Data":"dc085ec39467ea5c61eb8978b66f220552e3206d8879580bc67558a7fcac4072"}
Mar 13 13:15:14.680629 master-0 kubenswrapper[28149]: I0313 13:15:14.680514 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"7e9230e3-1138-4bd9-9054-8ed3044f1f6c","Type":"ContainerStarted","Data":"b1b7ffc850e9b90c8f3a32a3460057ada1f8ccbd1ffa841f8fe8ffec8239d73c"}
Mar 13 13:15:14.682193 master-0 kubenswrapper[28149]: I0313 13:15:14.682065 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-host-discover-x9gz7" event={"ID":"305404ca-1b24-429d-853c-ec1a49b101f0","Type":"ContainerDied","Data":"9fb90c9f33c891e658af57f936c3a93210988f1e89fe1e54a46c9d8f2826f2c2"}
Mar 13 13:15:14.682193 master-0 kubenswrapper[28149]: I0313 13:15:14.682118 28149 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9fb90c9f33c891e658af57f936c3a93210988f1e89fe1e54a46c9d8f2826f2c2"
Mar 13 13:15:14.682307 master-0 kubenswrapper[28149]: I0313 13:15:14.682203 28149 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-host-discover-x9gz7"
Mar 13 13:15:14.719840 master-0 kubenswrapper[28149]: I0313 13:15:14.719523 28149 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2558d1e1-d076-4b79-8591-d1e7ec5beaec" path="/var/lib/kubelet/pods/2558d1e1-d076-4b79-8591-d1e7ec5beaec/volumes"
Mar 13 13:15:16.072105 master-0 kubenswrapper[28149]: I0313 13:15:15.704038 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"7e9230e3-1138-4bd9-9054-8ed3044f1f6c","Type":"ContainerStarted","Data":"5c9e96888aff3232f4f87329dd45850fca3c0a7b2734caf97d15a12d276383bc"}
Mar 13 13:15:16.072105 master-0 kubenswrapper[28149]: I0313 13:15:15.704085 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"7e9230e3-1138-4bd9-9054-8ed3044f1f6c","Type":"ContainerStarted","Data":"04f2570e3f9f2acd9e59a9753cfca60a227a9cc64ab943faa82b1cf744f459cb"}
Mar 13 13:15:16.351124 master-0 kubenswrapper[28149]: I0313 13:15:16.350890 28149 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=3.350860622 podStartE2EDuration="3.350860622s" podCreationTimestamp="2026-03-13 13:15:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 13:15:16.340946585 +0000 UTC m=+1289.994411744" watchObservedRunningTime="2026-03-13 13:15:16.350860622 +0000 UTC m=+1290.004325791"
Mar 13 13:15:16.393999 master-0 kubenswrapper[28149]: I0313 13:15:16.393949 28149 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-k24lw"
Mar 13 13:15:16.488752 master-0 kubenswrapper[28149]: I0313 13:15:16.488690 28149 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7226c8ae-d652-4f12-a915-9f15c50d5631-config-data\") pod \"7226c8ae-d652-4f12-a915-9f15c50d5631\" (UID: \"7226c8ae-d652-4f12-a915-9f15c50d5631\") "
Mar 13 13:15:16.488990 master-0 kubenswrapper[28149]: I0313 13:15:16.488869 28149 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7226c8ae-d652-4f12-a915-9f15c50d5631-scripts\") pod \"7226c8ae-d652-4f12-a915-9f15c50d5631\" (UID: \"7226c8ae-d652-4f12-a915-9f15c50d5631\") "
Mar 13 13:15:16.488990 master-0 kubenswrapper[28149]: I0313 13:15:16.488908 28149 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7226c8ae-d652-4f12-a915-9f15c50d5631-combined-ca-bundle\") pod \"7226c8ae-d652-4f12-a915-9f15c50d5631\" (UID: \"7226c8ae-d652-4f12-a915-9f15c50d5631\") "
Mar 13 13:15:16.525262 master-0 kubenswrapper[28149]: I0313 13:15:16.524942 28149 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7226c8ae-d652-4f12-a915-9f15c50d5631-scripts" (OuterVolumeSpecName: "scripts") pod "7226c8ae-d652-4f12-a915-9f15c50d5631" (UID: "7226c8ae-d652-4f12-a915-9f15c50d5631"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 13 13:15:16.527610 master-0 kubenswrapper[28149]: I0313 13:15:16.527567 28149 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7226c8ae-d652-4f12-a915-9f15c50d5631-config-data" (OuterVolumeSpecName: "config-data") pod "7226c8ae-d652-4f12-a915-9f15c50d5631" (UID: "7226c8ae-d652-4f12-a915-9f15c50d5631"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 13 13:15:16.529218 master-0 kubenswrapper[28149]: I0313 13:15:16.529177 28149 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7226c8ae-d652-4f12-a915-9f15c50d5631-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "7226c8ae-d652-4f12-a915-9f15c50d5631" (UID: "7226c8ae-d652-4f12-a915-9f15c50d5631"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 13 13:15:16.668382 master-0 kubenswrapper[28149]: I0313 13:15:16.668069 28149 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dlhhh\" (UniqueName: \"kubernetes.io/projected/7226c8ae-d652-4f12-a915-9f15c50d5631-kube-api-access-dlhhh\") pod \"7226c8ae-d652-4f12-a915-9f15c50d5631\" (UID: \"7226c8ae-d652-4f12-a915-9f15c50d5631\") "
Mar 13 13:15:16.670159 master-0 kubenswrapper[28149]: I0313 13:15:16.670074 28149 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7226c8ae-d652-4f12-a915-9f15c50d5631-config-data\") on node \"master-0\" DevicePath \"\""
Mar 13 13:15:16.670159 master-0 kubenswrapper[28149]: I0313 13:15:16.670114 28149 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7226c8ae-d652-4f12-a915-9f15c50d5631-scripts\") on node \"master-0\" DevicePath \"\""
Mar 13 13:15:16.670159 master-0 kubenswrapper[28149]: I0313 13:15:16.670125 28149 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7226c8ae-d652-4f12-a915-9f15c50d5631-combined-ca-bundle\") on node \"master-0\" DevicePath \"\""
Mar 13 13:15:16.672653 master-0 kubenswrapper[28149]: I0313 13:15:16.672608 28149 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7226c8ae-d652-4f12-a915-9f15c50d5631-kube-api-access-dlhhh" (OuterVolumeSpecName: "kube-api-access-dlhhh") pod "7226c8ae-d652-4f12-a915-9f15c50d5631" (UID: "7226c8ae-d652-4f12-a915-9f15c50d5631"). InnerVolumeSpecName "kube-api-access-dlhhh". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 13 13:15:16.722285 master-0 kubenswrapper[28149]: I0313 13:15:16.722226 28149 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-k24lw"
Mar 13 13:15:16.723564 master-0 kubenswrapper[28149]: I0313 13:15:16.723499 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-k24lw" event={"ID":"7226c8ae-d652-4f12-a915-9f15c50d5631","Type":"ContainerDied","Data":"a63508984eca401169937526e3c646ea6e7580af5fbdfb4d9d76757ebd810dd0"}
Mar 13 13:15:16.723635 master-0 kubenswrapper[28149]: I0313 13:15:16.723570 28149 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a63508984eca401169937526e3c646ea6e7580af5fbdfb4d9d76757ebd810dd0"
Mar 13 13:15:16.773167 master-0 kubenswrapper[28149]: I0313 13:15:16.773046 28149 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dlhhh\" (UniqueName: \"kubernetes.io/projected/7226c8ae-d652-4f12-a915-9f15c50d5631-kube-api-access-dlhhh\") on node \"master-0\" DevicePath \"\""
Mar 13 13:15:16.958008 master-0 kubenswrapper[28149]: E0313 13:15:16.957850 28149 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7226c8ae_d652_4f12_a915_9f15c50d5631.slice/crio-a63508984eca401169937526e3c646ea6e7580af5fbdfb4d9d76757ebd810dd0\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7226c8ae_d652_4f12_a915_9f15c50d5631.slice\": RecentStats: unable to find data in memory cache]"
Mar 13 13:15:17.451174 master-0 kubenswrapper[28149]: I0313 13:15:17.449688 28149 kubelet.go:2542] "SyncLoop (probe)"
probe="startup" status="started" pod="openstack/nova-metadata-0" Mar 13 13:15:17.454515 master-0 kubenswrapper[28149]: I0313 13:15:17.454456 28149 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Mar 13 13:15:17.454726 master-0 kubenswrapper[28149]: I0313 13:15:17.454684 28149 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="8751eeff-49e7-416e-8f8e-037bc9e956e6" containerName="nova-scheduler-scheduler" containerID="cri-o://cc2bac34fe820d08bbd47cdca973999db27c61906cfb7d5b845736f73690704d" gracePeriod=30 Mar 13 13:15:17.459167 master-0 kubenswrapper[28149]: I0313 13:15:17.458838 28149 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Mar 13 13:15:17.464372 master-0 kubenswrapper[28149]: I0313 13:15:17.463966 28149 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Mar 13 13:15:17.485641 master-0 kubenswrapper[28149]: I0313 13:15:17.485536 28149 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Mar 13 13:15:17.514829 master-0 kubenswrapper[28149]: I0313 13:15:17.514757 28149 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Mar 13 13:15:17.736597 master-0 kubenswrapper[28149]: I0313 13:15:17.736452 28149 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="7e9230e3-1138-4bd9-9054-8ed3044f1f6c" containerName="nova-api-log" containerID="cri-o://04f2570e3f9f2acd9e59a9753cfca60a227a9cc64ab943faa82b1cf744f459cb" gracePeriod=30 Mar 13 13:15:17.736805 master-0 kubenswrapper[28149]: I0313 13:15:17.736663 28149 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="7e9230e3-1138-4bd9-9054-8ed3044f1f6c" containerName="nova-api-api" containerID="cri-o://5c9e96888aff3232f4f87329dd45850fca3c0a7b2734caf97d15a12d276383bc" 
gracePeriod=30 Mar 13 13:15:17.890068 master-0 kubenswrapper[28149]: I0313 13:15:17.890020 28149 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Mar 13 13:15:18.409972 master-0 kubenswrapper[28149]: E0313 13:15:18.409859 28149 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="cc2bac34fe820d08bbd47cdca973999db27c61906cfb7d5b845736f73690704d" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Mar 13 13:15:18.425189 master-0 kubenswrapper[28149]: E0313 13:15:18.423886 28149 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="cc2bac34fe820d08bbd47cdca973999db27c61906cfb7d5b845736f73690704d" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Mar 13 13:15:18.426537 master-0 kubenswrapper[28149]: E0313 13:15:18.426387 28149 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="cc2bac34fe820d08bbd47cdca973999db27c61906cfb7d5b845736f73690704d" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Mar 13 13:15:18.426632 master-0 kubenswrapper[28149]: E0313 13:15:18.426545 28149 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="8751eeff-49e7-416e-8f8e-037bc9e956e6" containerName="nova-scheduler-scheduler" Mar 13 13:15:18.626893 master-0 kubenswrapper[28149]: I0313 13:15:18.626853 28149 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Mar 13 13:15:18.752461 master-0 kubenswrapper[28149]: I0313 13:15:18.752394 28149 generic.go:334] "Generic (PLEG): container finished" podID="7e9230e3-1138-4bd9-9054-8ed3044f1f6c" containerID="5c9e96888aff3232f4f87329dd45850fca3c0a7b2734caf97d15a12d276383bc" exitCode=0 Mar 13 13:15:18.752461 master-0 kubenswrapper[28149]: I0313 13:15:18.752445 28149 generic.go:334] "Generic (PLEG): container finished" podID="7e9230e3-1138-4bd9-9054-8ed3044f1f6c" containerID="04f2570e3f9f2acd9e59a9753cfca60a227a9cc64ab943faa82b1cf744f459cb" exitCode=143 Mar 13 13:15:18.752745 master-0 kubenswrapper[28149]: I0313 13:15:18.752552 28149 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Mar 13 13:15:18.760134 master-0 kubenswrapper[28149]: I0313 13:15:18.760051 28149 generic.go:334] "Generic (PLEG): container finished" podID="8751eeff-49e7-416e-8f8e-037bc9e956e6" containerID="cc2bac34fe820d08bbd47cdca973999db27c61906cfb7d5b845736f73690704d" exitCode=0 Mar 13 13:15:18.761003 master-0 kubenswrapper[28149]: I0313 13:15:18.760422 28149 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="e579030e-e1cd-4dee-8a65-0e7a9b636974" containerName="nova-metadata-log" containerID="cri-o://e9c29f4ebf8c74e4fc60a77a1033351cbf9270544bef85bc6feb7f5be1c245b6" gracePeriod=30 Mar 13 13:15:18.761003 master-0 kubenswrapper[28149]: I0313 13:15:18.760551 28149 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="e579030e-e1cd-4dee-8a65-0e7a9b636974" containerName="nova-metadata-metadata" containerID="cri-o://568dd59e04dd1335822f79eed749066323b9ec02f7b4c6056b3dbd19d0faddd8" gracePeriod=30 Mar 13 13:15:18.768429 master-0 kubenswrapper[28149]: I0313 13:15:18.768373 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" 
event={"ID":"7e9230e3-1138-4bd9-9054-8ed3044f1f6c","Type":"ContainerDied","Data":"5c9e96888aff3232f4f87329dd45850fca3c0a7b2734caf97d15a12d276383bc"} Mar 13 13:15:18.768649 master-0 kubenswrapper[28149]: I0313 13:15:18.768439 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"7e9230e3-1138-4bd9-9054-8ed3044f1f6c","Type":"ContainerDied","Data":"04f2570e3f9f2acd9e59a9753cfca60a227a9cc64ab943faa82b1cf744f459cb"} Mar 13 13:15:18.768649 master-0 kubenswrapper[28149]: I0313 13:15:18.768469 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"7e9230e3-1138-4bd9-9054-8ed3044f1f6c","Type":"ContainerDied","Data":"b1b7ffc850e9b90c8f3a32a3460057ada1f8ccbd1ffa841f8fe8ffec8239d73c"} Mar 13 13:15:18.768649 master-0 kubenswrapper[28149]: I0313 13:15:18.768485 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"8751eeff-49e7-416e-8f8e-037bc9e956e6","Type":"ContainerDied","Data":"cc2bac34fe820d08bbd47cdca973999db27c61906cfb7d5b845736f73690704d"} Mar 13 13:15:18.768649 master-0 kubenswrapper[28149]: I0313 13:15:18.768490 28149 scope.go:117] "RemoveContainer" containerID="5c9e96888aff3232f4f87329dd45850fca3c0a7b2734caf97d15a12d276383bc" Mar 13 13:15:18.777337 master-0 kubenswrapper[28149]: I0313 13:15:18.776832 28149 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jhw8r\" (UniqueName: \"kubernetes.io/projected/7e9230e3-1138-4bd9-9054-8ed3044f1f6c-kube-api-access-jhw8r\") pod \"7e9230e3-1138-4bd9-9054-8ed3044f1f6c\" (UID: \"7e9230e3-1138-4bd9-9054-8ed3044f1f6c\") " Mar 13 13:15:18.777337 master-0 kubenswrapper[28149]: I0313 13:15:18.776900 28149 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7e9230e3-1138-4bd9-9054-8ed3044f1f6c-config-data\") pod \"7e9230e3-1138-4bd9-9054-8ed3044f1f6c\" (UID: 
\"7e9230e3-1138-4bd9-9054-8ed3044f1f6c\") " Mar 13 13:15:18.777337 master-0 kubenswrapper[28149]: I0313 13:15:18.777062 28149 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7e9230e3-1138-4bd9-9054-8ed3044f1f6c-combined-ca-bundle\") pod \"7e9230e3-1138-4bd9-9054-8ed3044f1f6c\" (UID: \"7e9230e3-1138-4bd9-9054-8ed3044f1f6c\") " Mar 13 13:15:18.777337 master-0 kubenswrapper[28149]: I0313 13:15:18.777299 28149 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/7e9230e3-1138-4bd9-9054-8ed3044f1f6c-internal-tls-certs\") pod \"7e9230e3-1138-4bd9-9054-8ed3044f1f6c\" (UID: \"7e9230e3-1138-4bd9-9054-8ed3044f1f6c\") " Mar 13 13:15:18.777738 master-0 kubenswrapper[28149]: I0313 13:15:18.777351 28149 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7e9230e3-1138-4bd9-9054-8ed3044f1f6c-logs\") pod \"7e9230e3-1138-4bd9-9054-8ed3044f1f6c\" (UID: \"7e9230e3-1138-4bd9-9054-8ed3044f1f6c\") " Mar 13 13:15:18.777738 master-0 kubenswrapper[28149]: I0313 13:15:18.777510 28149 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/7e9230e3-1138-4bd9-9054-8ed3044f1f6c-public-tls-certs\") pod \"7e9230e3-1138-4bd9-9054-8ed3044f1f6c\" (UID: \"7e9230e3-1138-4bd9-9054-8ed3044f1f6c\") " Mar 13 13:15:18.779664 master-0 kubenswrapper[28149]: I0313 13:15:18.779615 28149 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7e9230e3-1138-4bd9-9054-8ed3044f1f6c-logs" (OuterVolumeSpecName: "logs") pod "7e9230e3-1138-4bd9-9054-8ed3044f1f6c" (UID: "7e9230e3-1138-4bd9-9054-8ed3044f1f6c"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 13 13:15:18.793134 master-0 kubenswrapper[28149]: I0313 13:15:18.790570 28149 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7e9230e3-1138-4bd9-9054-8ed3044f1f6c-kube-api-access-jhw8r" (OuterVolumeSpecName: "kube-api-access-jhw8r") pod "7e9230e3-1138-4bd9-9054-8ed3044f1f6c" (UID: "7e9230e3-1138-4bd9-9054-8ed3044f1f6c"). InnerVolumeSpecName "kube-api-access-jhw8r". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 13:15:18.810694 master-0 kubenswrapper[28149]: I0313 13:15:18.810643 28149 scope.go:117] "RemoveContainer" containerID="04f2570e3f9f2acd9e59a9753cfca60a227a9cc64ab943faa82b1cf744f459cb" Mar 13 13:15:18.814032 master-0 kubenswrapper[28149]: I0313 13:15:18.813919 28149 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7e9230e3-1138-4bd9-9054-8ed3044f1f6c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "7e9230e3-1138-4bd9-9054-8ed3044f1f6c" (UID: "7e9230e3-1138-4bd9-9054-8ed3044f1f6c"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 13:15:18.825308 master-0 kubenswrapper[28149]: I0313 13:15:18.825245 28149 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7e9230e3-1138-4bd9-9054-8ed3044f1f6c-config-data" (OuterVolumeSpecName: "config-data") pod "7e9230e3-1138-4bd9-9054-8ed3044f1f6c" (UID: "7e9230e3-1138-4bd9-9054-8ed3044f1f6c"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 13:15:18.838707 master-0 kubenswrapper[28149]: I0313 13:15:18.838635 28149 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7e9230e3-1138-4bd9-9054-8ed3044f1f6c-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "7e9230e3-1138-4bd9-9054-8ed3044f1f6c" (UID: "7e9230e3-1138-4bd9-9054-8ed3044f1f6c"). 
InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 13:15:18.848945 master-0 kubenswrapper[28149]: I0313 13:15:18.848898 28149 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7e9230e3-1138-4bd9-9054-8ed3044f1f6c-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "7e9230e3-1138-4bd9-9054-8ed3044f1f6c" (UID: "7e9230e3-1138-4bd9-9054-8ed3044f1f6c"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 13:15:18.881581 master-0 kubenswrapper[28149]: I0313 13:15:18.881530 28149 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/7e9230e3-1138-4bd9-9054-8ed3044f1f6c-internal-tls-certs\") on node \"master-0\" DevicePath \"\"" Mar 13 13:15:18.881581 master-0 kubenswrapper[28149]: I0313 13:15:18.881575 28149 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7e9230e3-1138-4bd9-9054-8ed3044f1f6c-logs\") on node \"master-0\" DevicePath \"\"" Mar 13 13:15:18.881581 master-0 kubenswrapper[28149]: I0313 13:15:18.881587 28149 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/7e9230e3-1138-4bd9-9054-8ed3044f1f6c-public-tls-certs\") on node \"master-0\" DevicePath \"\"" Mar 13 13:15:18.881867 master-0 kubenswrapper[28149]: I0313 13:15:18.881602 28149 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jhw8r\" (UniqueName: \"kubernetes.io/projected/7e9230e3-1138-4bd9-9054-8ed3044f1f6c-kube-api-access-jhw8r\") on node \"master-0\" DevicePath \"\"" Mar 13 13:15:18.881867 master-0 kubenswrapper[28149]: I0313 13:15:18.881618 28149 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7e9230e3-1138-4bd9-9054-8ed3044f1f6c-config-data\") on node \"master-0\" DevicePath \"\"" Mar 13 
13:15:18.881867 master-0 kubenswrapper[28149]: I0313 13:15:18.881629 28149 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7e9230e3-1138-4bd9-9054-8ed3044f1f6c-combined-ca-bundle\") on node \"master-0\" DevicePath \"\"" Mar 13 13:15:18.903763 master-0 kubenswrapper[28149]: I0313 13:15:18.903720 28149 scope.go:117] "RemoveContainer" containerID="5c9e96888aff3232f4f87329dd45850fca3c0a7b2734caf97d15a12d276383bc" Mar 13 13:15:18.904970 master-0 kubenswrapper[28149]: E0313 13:15:18.904915 28149 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5c9e96888aff3232f4f87329dd45850fca3c0a7b2734caf97d15a12d276383bc\": container with ID starting with 5c9e96888aff3232f4f87329dd45850fca3c0a7b2734caf97d15a12d276383bc not found: ID does not exist" containerID="5c9e96888aff3232f4f87329dd45850fca3c0a7b2734caf97d15a12d276383bc" Mar 13 13:15:18.905051 master-0 kubenswrapper[28149]: I0313 13:15:18.904986 28149 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5c9e96888aff3232f4f87329dd45850fca3c0a7b2734caf97d15a12d276383bc"} err="failed to get container status \"5c9e96888aff3232f4f87329dd45850fca3c0a7b2734caf97d15a12d276383bc\": rpc error: code = NotFound desc = could not find container \"5c9e96888aff3232f4f87329dd45850fca3c0a7b2734caf97d15a12d276383bc\": container with ID starting with 5c9e96888aff3232f4f87329dd45850fca3c0a7b2734caf97d15a12d276383bc not found: ID does not exist" Mar 13 13:15:18.905051 master-0 kubenswrapper[28149]: I0313 13:15:18.905029 28149 scope.go:117] "RemoveContainer" containerID="04f2570e3f9f2acd9e59a9753cfca60a227a9cc64ab943faa82b1cf744f459cb" Mar 13 13:15:18.905424 master-0 kubenswrapper[28149]: E0313 13:15:18.905367 28149 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"04f2570e3f9f2acd9e59a9753cfca60a227a9cc64ab943faa82b1cf744f459cb\": container with ID starting with 04f2570e3f9f2acd9e59a9753cfca60a227a9cc64ab943faa82b1cf744f459cb not found: ID does not exist" containerID="04f2570e3f9f2acd9e59a9753cfca60a227a9cc64ab943faa82b1cf744f459cb" Mar 13 13:15:18.905486 master-0 kubenswrapper[28149]: I0313 13:15:18.905412 28149 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"04f2570e3f9f2acd9e59a9753cfca60a227a9cc64ab943faa82b1cf744f459cb"} err="failed to get container status \"04f2570e3f9f2acd9e59a9753cfca60a227a9cc64ab943faa82b1cf744f459cb\": rpc error: code = NotFound desc = could not find container \"04f2570e3f9f2acd9e59a9753cfca60a227a9cc64ab943faa82b1cf744f459cb\": container with ID starting with 04f2570e3f9f2acd9e59a9753cfca60a227a9cc64ab943faa82b1cf744f459cb not found: ID does not exist" Mar 13 13:15:18.905486 master-0 kubenswrapper[28149]: I0313 13:15:18.905436 28149 scope.go:117] "RemoveContainer" containerID="5c9e96888aff3232f4f87329dd45850fca3c0a7b2734caf97d15a12d276383bc" Mar 13 13:15:18.905682 master-0 kubenswrapper[28149]: I0313 13:15:18.905643 28149 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5c9e96888aff3232f4f87329dd45850fca3c0a7b2734caf97d15a12d276383bc"} err="failed to get container status \"5c9e96888aff3232f4f87329dd45850fca3c0a7b2734caf97d15a12d276383bc\": rpc error: code = NotFound desc = could not find container \"5c9e96888aff3232f4f87329dd45850fca3c0a7b2734caf97d15a12d276383bc\": container with ID starting with 5c9e96888aff3232f4f87329dd45850fca3c0a7b2734caf97d15a12d276383bc not found: ID does not exist" Mar 13 13:15:18.905682 master-0 kubenswrapper[28149]: I0313 13:15:18.905672 28149 scope.go:117] "RemoveContainer" containerID="04f2570e3f9f2acd9e59a9753cfca60a227a9cc64ab943faa82b1cf744f459cb" Mar 13 13:15:18.906052 master-0 kubenswrapper[28149]: I0313 13:15:18.906029 28149 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"04f2570e3f9f2acd9e59a9753cfca60a227a9cc64ab943faa82b1cf744f459cb"} err="failed to get container status \"04f2570e3f9f2acd9e59a9753cfca60a227a9cc64ab943faa82b1cf744f459cb\": rpc error: code = NotFound desc = could not find container \"04f2570e3f9f2acd9e59a9753cfca60a227a9cc64ab943faa82b1cf744f459cb\": container with ID starting with 04f2570e3f9f2acd9e59a9753cfca60a227a9cc64ab943faa82b1cf744f459cb not found: ID does not exist" Mar 13 13:15:19.191193 master-0 kubenswrapper[28149]: I0313 13:15:19.190215 28149 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Mar 13 13:15:19.195686 master-0 kubenswrapper[28149]: I0313 13:15:19.195626 28149 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Mar 13 13:15:19.231866 master-0 kubenswrapper[28149]: I0313 13:15:19.231796 28149 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Mar 13 13:15:19.279847 master-0 kubenswrapper[28149]: I0313 13:15:19.277130 28149 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Mar 13 13:15:19.279847 master-0 kubenswrapper[28149]: E0313 13:15:19.277831 28149 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7e9230e3-1138-4bd9-9054-8ed3044f1f6c" containerName="nova-api-api" Mar 13 13:15:19.279847 master-0 kubenswrapper[28149]: I0313 13:15:19.277852 28149 state_mem.go:107] "Deleted CPUSet assignment" podUID="7e9230e3-1138-4bd9-9054-8ed3044f1f6c" containerName="nova-api-api" Mar 13 13:15:19.279847 master-0 kubenswrapper[28149]: E0313 13:15:19.277877 28149 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="305404ca-1b24-429d-853c-ec1a49b101f0" containerName="nova-manage" Mar 13 13:15:19.279847 master-0 kubenswrapper[28149]: I0313 13:15:19.277883 28149 state_mem.go:107] "Deleted CPUSet assignment" podUID="305404ca-1b24-429d-853c-ec1a49b101f0" 
containerName="nova-manage" Mar 13 13:15:19.279847 master-0 kubenswrapper[28149]: E0313 13:15:19.277902 28149 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7e9230e3-1138-4bd9-9054-8ed3044f1f6c" containerName="nova-api-log" Mar 13 13:15:19.279847 master-0 kubenswrapper[28149]: I0313 13:15:19.277908 28149 state_mem.go:107] "Deleted CPUSet assignment" podUID="7e9230e3-1138-4bd9-9054-8ed3044f1f6c" containerName="nova-api-log" Mar 13 13:15:19.279847 master-0 kubenswrapper[28149]: E0313 13:15:19.277934 28149 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7226c8ae-d652-4f12-a915-9f15c50d5631" containerName="nova-manage" Mar 13 13:15:19.279847 master-0 kubenswrapper[28149]: I0313 13:15:19.277942 28149 state_mem.go:107] "Deleted CPUSet assignment" podUID="7226c8ae-d652-4f12-a915-9f15c50d5631" containerName="nova-manage" Mar 13 13:15:19.279847 master-0 kubenswrapper[28149]: E0313 13:15:19.277957 28149 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8751eeff-49e7-416e-8f8e-037bc9e956e6" containerName="nova-scheduler-scheduler" Mar 13 13:15:19.279847 master-0 kubenswrapper[28149]: I0313 13:15:19.277963 28149 state_mem.go:107] "Deleted CPUSet assignment" podUID="8751eeff-49e7-416e-8f8e-037bc9e956e6" containerName="nova-scheduler-scheduler" Mar 13 13:15:19.279847 master-0 kubenswrapper[28149]: I0313 13:15:19.278242 28149 memory_manager.go:354] "RemoveStaleState removing state" podUID="7e9230e3-1138-4bd9-9054-8ed3044f1f6c" containerName="nova-api-api" Mar 13 13:15:19.279847 master-0 kubenswrapper[28149]: I0313 13:15:19.278269 28149 memory_manager.go:354] "RemoveStaleState removing state" podUID="305404ca-1b24-429d-853c-ec1a49b101f0" containerName="nova-manage" Mar 13 13:15:19.279847 master-0 kubenswrapper[28149]: I0313 13:15:19.278281 28149 memory_manager.go:354] "RemoveStaleState removing state" podUID="7e9230e3-1138-4bd9-9054-8ed3044f1f6c" containerName="nova-api-log" Mar 13 13:15:19.279847 master-0 
kubenswrapper[28149]: I0313 13:15:19.278316 28149 memory_manager.go:354] "RemoveStaleState removing state" podUID="8751eeff-49e7-416e-8f8e-037bc9e956e6" containerName="nova-scheduler-scheduler" Mar 13 13:15:19.279847 master-0 kubenswrapper[28149]: I0313 13:15:19.278350 28149 memory_manager.go:354] "RemoveStaleState removing state" podUID="7226c8ae-d652-4f12-a915-9f15c50d5631" containerName="nova-manage" Mar 13 13:15:19.280600 master-0 kubenswrapper[28149]: I0313 13:15:19.279991 28149 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Mar 13 13:15:19.283078 master-0 kubenswrapper[28149]: I0313 13:15:19.282838 28149 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Mar 13 13:15:19.283722 master-0 kubenswrapper[28149]: I0313 13:15:19.283193 28149 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Mar 13 13:15:19.283722 master-0 kubenswrapper[28149]: I0313 13:15:19.283483 28149 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Mar 13 13:15:19.328388 master-0 kubenswrapper[28149]: I0313 13:15:19.304692 28149 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8751eeff-49e7-416e-8f8e-037bc9e956e6-config-data\") pod \"8751eeff-49e7-416e-8f8e-037bc9e956e6\" (UID: \"8751eeff-49e7-416e-8f8e-037bc9e956e6\") " Mar 13 13:15:19.328388 master-0 kubenswrapper[28149]: I0313 13:15:19.304935 28149 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8751eeff-49e7-416e-8f8e-037bc9e956e6-combined-ca-bundle\") pod \"8751eeff-49e7-416e-8f8e-037bc9e956e6\" (UID: \"8751eeff-49e7-416e-8f8e-037bc9e956e6\") " Mar 13 13:15:19.328814 master-0 kubenswrapper[28149]: I0313 13:15:19.328476 28149 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"kube-api-access-4cbjw\" (UniqueName: \"kubernetes.io/projected/8751eeff-49e7-416e-8f8e-037bc9e956e6-kube-api-access-4cbjw\") pod \"8751eeff-49e7-416e-8f8e-037bc9e956e6\" (UID: \"8751eeff-49e7-416e-8f8e-037bc9e956e6\") " Mar 13 13:15:19.328956 master-0 kubenswrapper[28149]: I0313 13:15:19.321971 28149 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Mar 13 13:15:19.329476 master-0 kubenswrapper[28149]: I0313 13:15:19.329414 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ea05ecb0-321c-40e5-bfe7-8bceb4cd103c-internal-tls-certs\") pod \"nova-api-0\" (UID: \"ea05ecb0-321c-40e5-bfe7-8bceb4cd103c\") " pod="openstack/nova-api-0" Mar 13 13:15:19.329679 master-0 kubenswrapper[28149]: I0313 13:15:19.329656 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ea05ecb0-321c-40e5-bfe7-8bceb4cd103c-public-tls-certs\") pod \"nova-api-0\" (UID: \"ea05ecb0-321c-40e5-bfe7-8bceb4cd103c\") " pod="openstack/nova-api-0" Mar 13 13:15:19.329855 master-0 kubenswrapper[28149]: I0313 13:15:19.329813 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ea05ecb0-321c-40e5-bfe7-8bceb4cd103c-logs\") pod \"nova-api-0\" (UID: \"ea05ecb0-321c-40e5-bfe7-8bceb4cd103c\") " pod="openstack/nova-api-0" Mar 13 13:15:19.329934 master-0 kubenswrapper[28149]: I0313 13:15:19.329907 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ea05ecb0-321c-40e5-bfe7-8bceb4cd103c-config-data\") pod \"nova-api-0\" (UID: \"ea05ecb0-321c-40e5-bfe7-8bceb4cd103c\") " pod="openstack/nova-api-0" Mar 13 13:15:19.330022 master-0 kubenswrapper[28149]: I0313 13:15:19.330001 
28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ea05ecb0-321c-40e5-bfe7-8bceb4cd103c-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"ea05ecb0-321c-40e5-bfe7-8bceb4cd103c\") " pod="openstack/nova-api-0"
Mar 13 13:15:19.330131 master-0 kubenswrapper[28149]: I0313 13:15:19.330111 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b5ftc\" (UniqueName: \"kubernetes.io/projected/ea05ecb0-321c-40e5-bfe7-8bceb4cd103c-kube-api-access-b5ftc\") pod \"nova-api-0\" (UID: \"ea05ecb0-321c-40e5-bfe7-8bceb4cd103c\") " pod="openstack/nova-api-0"
Mar 13 13:15:19.350644 master-0 kubenswrapper[28149]: I0313 13:15:19.346576 28149 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8751eeff-49e7-416e-8f8e-037bc9e956e6-kube-api-access-4cbjw" (OuterVolumeSpecName: "kube-api-access-4cbjw") pod "8751eeff-49e7-416e-8f8e-037bc9e956e6" (UID: "8751eeff-49e7-416e-8f8e-037bc9e956e6"). InnerVolumeSpecName "kube-api-access-4cbjw". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 13 13:15:19.354326 master-0 kubenswrapper[28149]: I0313 13:15:19.351137 28149 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8751eeff-49e7-416e-8f8e-037bc9e956e6-config-data" (OuterVolumeSpecName: "config-data") pod "8751eeff-49e7-416e-8f8e-037bc9e956e6" (UID: "8751eeff-49e7-416e-8f8e-037bc9e956e6"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 13 13:15:19.354643 master-0 kubenswrapper[28149]: I0313 13:15:19.351544 28149 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8751eeff-49e7-416e-8f8e-037bc9e956e6-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "8751eeff-49e7-416e-8f8e-037bc9e956e6" (UID: "8751eeff-49e7-416e-8f8e-037bc9e956e6"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 13 13:15:19.432544 master-0 kubenswrapper[28149]: I0313 13:15:19.432482 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ea05ecb0-321c-40e5-bfe7-8bceb4cd103c-internal-tls-certs\") pod \"nova-api-0\" (UID: \"ea05ecb0-321c-40e5-bfe7-8bceb4cd103c\") " pod="openstack/nova-api-0"
Mar 13 13:15:19.432773 master-0 kubenswrapper[28149]: I0313 13:15:19.432606 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ea05ecb0-321c-40e5-bfe7-8bceb4cd103c-public-tls-certs\") pod \"nova-api-0\" (UID: \"ea05ecb0-321c-40e5-bfe7-8bceb4cd103c\") " pod="openstack/nova-api-0"
Mar 13 13:15:19.432773 master-0 kubenswrapper[28149]: I0313 13:15:19.432691 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ea05ecb0-321c-40e5-bfe7-8bceb4cd103c-logs\") pod \"nova-api-0\" (UID: \"ea05ecb0-321c-40e5-bfe7-8bceb4cd103c\") " pod="openstack/nova-api-0"
Mar 13 13:15:19.432773 master-0 kubenswrapper[28149]: I0313 13:15:19.432734 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ea05ecb0-321c-40e5-bfe7-8bceb4cd103c-config-data\") pod \"nova-api-0\" (UID: \"ea05ecb0-321c-40e5-bfe7-8bceb4cd103c\") " pod="openstack/nova-api-0"
Mar 13 13:15:19.432890 master-0 kubenswrapper[28149]: I0313 13:15:19.432777 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ea05ecb0-321c-40e5-bfe7-8bceb4cd103c-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"ea05ecb0-321c-40e5-bfe7-8bceb4cd103c\") " pod="openstack/nova-api-0"
Mar 13 13:15:19.432890 master-0 kubenswrapper[28149]: I0313 13:15:19.432823 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b5ftc\" (UniqueName: \"kubernetes.io/projected/ea05ecb0-321c-40e5-bfe7-8bceb4cd103c-kube-api-access-b5ftc\") pod \"nova-api-0\" (UID: \"ea05ecb0-321c-40e5-bfe7-8bceb4cd103c\") " pod="openstack/nova-api-0"
Mar 13 13:15:19.435942 master-0 kubenswrapper[28149]: I0313 13:15:19.432887 28149 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8751eeff-49e7-416e-8f8e-037bc9e956e6-config-data\") on node \"master-0\" DevicePath \"\""
Mar 13 13:15:19.435942 master-0 kubenswrapper[28149]: I0313 13:15:19.433468 28149 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8751eeff-49e7-416e-8f8e-037bc9e956e6-combined-ca-bundle\") on node \"master-0\" DevicePath \"\""
Mar 13 13:15:19.435942 master-0 kubenswrapper[28149]: I0313 13:15:19.433482 28149 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4cbjw\" (UniqueName: \"kubernetes.io/projected/8751eeff-49e7-416e-8f8e-037bc9e956e6-kube-api-access-4cbjw\") on node \"master-0\" DevicePath \"\""
Mar 13 13:15:19.435942 master-0 kubenswrapper[28149]: I0313 13:15:19.434282 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ea05ecb0-321c-40e5-bfe7-8bceb4cd103c-logs\") pod \"nova-api-0\" (UID: \"ea05ecb0-321c-40e5-bfe7-8bceb4cd103c\") " pod="openstack/nova-api-0"
Mar 13 13:15:19.437351 master-0 kubenswrapper[28149]: I0313 13:15:19.437318 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ea05ecb0-321c-40e5-bfe7-8bceb4cd103c-internal-tls-certs\") pod \"nova-api-0\" (UID: \"ea05ecb0-321c-40e5-bfe7-8bceb4cd103c\") " pod="openstack/nova-api-0"
Mar 13 13:15:19.438050 master-0 kubenswrapper[28149]: I0313 13:15:19.437999 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ea05ecb0-321c-40e5-bfe7-8bceb4cd103c-config-data\") pod \"nova-api-0\" (UID: \"ea05ecb0-321c-40e5-bfe7-8bceb4cd103c\") " pod="openstack/nova-api-0"
Mar 13 13:15:19.438879 master-0 kubenswrapper[28149]: I0313 13:15:19.438751 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ea05ecb0-321c-40e5-bfe7-8bceb4cd103c-public-tls-certs\") pod \"nova-api-0\" (UID: \"ea05ecb0-321c-40e5-bfe7-8bceb4cd103c\") " pod="openstack/nova-api-0"
Mar 13 13:15:19.438879 master-0 kubenswrapper[28149]: I0313 13:15:19.438800 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ea05ecb0-321c-40e5-bfe7-8bceb4cd103c-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"ea05ecb0-321c-40e5-bfe7-8bceb4cd103c\") " pod="openstack/nova-api-0"
Mar 13 13:15:19.451059 master-0 kubenswrapper[28149]: I0313 13:15:19.451008 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b5ftc\" (UniqueName: \"kubernetes.io/projected/ea05ecb0-321c-40e5-bfe7-8bceb4cd103c-kube-api-access-b5ftc\") pod \"nova-api-0\" (UID: \"ea05ecb0-321c-40e5-bfe7-8bceb4cd103c\") " pod="openstack/nova-api-0"
Mar 13 13:15:19.621889 master-0 kubenswrapper[28149]: I0313 13:15:19.621703 28149 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Mar 13 13:15:19.804166 master-0 kubenswrapper[28149]: I0313 13:15:19.803896 28149 generic.go:334] "Generic (PLEG): container finished" podID="e579030e-e1cd-4dee-8a65-0e7a9b636974" containerID="e9c29f4ebf8c74e4fc60a77a1033351cbf9270544bef85bc6feb7f5be1c245b6" exitCode=143
Mar 13 13:15:19.804166 master-0 kubenswrapper[28149]: I0313 13:15:19.803992 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"e579030e-e1cd-4dee-8a65-0e7a9b636974","Type":"ContainerDied","Data":"e9c29f4ebf8c74e4fc60a77a1033351cbf9270544bef85bc6feb7f5be1c245b6"}
Mar 13 13:15:19.808575 master-0 kubenswrapper[28149]: I0313 13:15:19.808524 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"8751eeff-49e7-416e-8f8e-037bc9e956e6","Type":"ContainerDied","Data":"0769b7dbd8234cf51235da9db6f777d650ff81e1ee934b61c686eb87e68df202"}
Mar 13 13:15:19.808575 master-0 kubenswrapper[28149]: I0313 13:15:19.808569 28149 scope.go:117] "RemoveContainer" containerID="cc2bac34fe820d08bbd47cdca973999db27c61906cfb7d5b845736f73690704d"
Mar 13 13:15:19.808988 master-0 kubenswrapper[28149]: I0313 13:15:19.808951 28149 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0"
Mar 13 13:15:19.896554 master-0 kubenswrapper[28149]: I0313 13:15:19.896418 28149 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"]
Mar 13 13:15:19.941518 master-0 kubenswrapper[28149]: I0313 13:15:19.941466 28149 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"]
Mar 13 13:15:19.997386 master-0 kubenswrapper[28149]: I0313 13:15:19.997331 28149 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"]
Mar 13 13:15:19.999251 master-0 kubenswrapper[28149]: I0313 13:15:19.999226 28149 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0"
Mar 13 13:15:20.009278 master-0 kubenswrapper[28149]: I0313 13:15:20.001559 28149 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data"
Mar 13 13:15:20.051169 master-0 kubenswrapper[28149]: I0313 13:15:20.050784 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2b71b6af-46d8-43ec-8bd9-2898a1ee5f97-config-data\") pod \"nova-scheduler-0\" (UID: \"2b71b6af-46d8-43ec-8bd9-2898a1ee5f97\") " pod="openstack/nova-scheduler-0"
Mar 13 13:15:20.051169 master-0 kubenswrapper[28149]: I0313 13:15:20.051021 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-28wf8\" (UniqueName: \"kubernetes.io/projected/2b71b6af-46d8-43ec-8bd9-2898a1ee5f97-kube-api-access-28wf8\") pod \"nova-scheduler-0\" (UID: \"2b71b6af-46d8-43ec-8bd9-2898a1ee5f97\") " pod="openstack/nova-scheduler-0"
Mar 13 13:15:20.051169 master-0 kubenswrapper[28149]: I0313 13:15:20.051087 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2b71b6af-46d8-43ec-8bd9-2898a1ee5f97-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"2b71b6af-46d8-43ec-8bd9-2898a1ee5f97\") " pod="openstack/nova-scheduler-0"
Mar 13 13:15:20.051620 master-0 kubenswrapper[28149]: I0313 13:15:20.051567 28149 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"]
Mar 13 13:15:20.105833 master-0 kubenswrapper[28149]: I0313 13:15:20.105763 28149 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"]
Mar 13 13:15:20.155546 master-0 kubenswrapper[28149]: I0313 13:15:20.155395 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2b71b6af-46d8-43ec-8bd9-2898a1ee5f97-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"2b71b6af-46d8-43ec-8bd9-2898a1ee5f97\") " pod="openstack/nova-scheduler-0"
Mar 13 13:15:20.155868 master-0 kubenswrapper[28149]: I0313 13:15:20.155585 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2b71b6af-46d8-43ec-8bd9-2898a1ee5f97-config-data\") pod \"nova-scheduler-0\" (UID: \"2b71b6af-46d8-43ec-8bd9-2898a1ee5f97\") " pod="openstack/nova-scheduler-0"
Mar 13 13:15:20.155997 master-0 kubenswrapper[28149]: I0313 13:15:20.155965 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-28wf8\" (UniqueName: \"kubernetes.io/projected/2b71b6af-46d8-43ec-8bd9-2898a1ee5f97-kube-api-access-28wf8\") pod \"nova-scheduler-0\" (UID: \"2b71b6af-46d8-43ec-8bd9-2898a1ee5f97\") " pod="openstack/nova-scheduler-0"
Mar 13 13:15:20.157153 master-0 kubenswrapper[28149]: I0313 13:15:20.157081 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2b71b6af-46d8-43ec-8bd9-2898a1ee5f97-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"2b71b6af-46d8-43ec-8bd9-2898a1ee5f97\") " pod="openstack/nova-scheduler-0"
Mar 13 13:15:20.159600 master-0 kubenswrapper[28149]: I0313 13:15:20.158866 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2b71b6af-46d8-43ec-8bd9-2898a1ee5f97-config-data\") pod \"nova-scheduler-0\" (UID: \"2b71b6af-46d8-43ec-8bd9-2898a1ee5f97\") " pod="openstack/nova-scheduler-0"
Mar 13 13:15:20.175740 master-0 kubenswrapper[28149]: I0313 13:15:20.175678 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-28wf8\" (UniqueName: \"kubernetes.io/projected/2b71b6af-46d8-43ec-8bd9-2898a1ee5f97-kube-api-access-28wf8\") pod \"nova-scheduler-0\" (UID: \"2b71b6af-46d8-43ec-8bd9-2898a1ee5f97\") " pod="openstack/nova-scheduler-0"
Mar 13 13:15:20.321790 master-0 kubenswrapper[28149]: I0313 13:15:20.321725 28149 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0"
Mar 13 13:15:20.710078 master-0 kubenswrapper[28149]: I0313 13:15:20.709921 28149 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7e9230e3-1138-4bd9-9054-8ed3044f1f6c" path="/var/lib/kubelet/pods/7e9230e3-1138-4bd9-9054-8ed3044f1f6c/volumes"
Mar 13 13:15:20.711212 master-0 kubenswrapper[28149]: I0313 13:15:20.711170 28149 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8751eeff-49e7-416e-8f8e-037bc9e956e6" path="/var/lib/kubelet/pods/8751eeff-49e7-416e-8f8e-037bc9e956e6/volumes"
Mar 13 13:15:20.823198 master-0 kubenswrapper[28149]: I0313 13:15:20.823111 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"ea05ecb0-321c-40e5-bfe7-8bceb4cd103c","Type":"ContainerStarted","Data":"3efe4131c2aca14eefaf705e40f17698ea6754a628f1e8f2caa52adaf2367f14"}
Mar 13 13:15:20.823198 master-0 kubenswrapper[28149]: I0313 13:15:20.823193 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"ea05ecb0-321c-40e5-bfe7-8bceb4cd103c","Type":"ContainerStarted","Data":"ca6d319d69da23764ab987ea05d277e7682fea50af86ce272db471eeedd972f1"}
Mar 13 13:15:20.823198 master-0 kubenswrapper[28149]: I0313 13:15:20.823210 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"ea05ecb0-321c-40e5-bfe7-8bceb4cd103c","Type":"ContainerStarted","Data":"ad4de363cefe4b30834c906230071fe1129c842528ec97281aee621e72889899"}
Mar 13 13:15:20.929886 master-0 kubenswrapper[28149]: I0313 13:15:20.929802 28149 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=1.92978173 podStartE2EDuration="1.92978173s" podCreationTimestamp="2026-03-13 13:15:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 13:15:20.920041627 +0000 UTC m=+1294.573506806" watchObservedRunningTime="2026-03-13 13:15:20.92978173 +0000 UTC m=+1294.583246889"
Mar 13 13:15:20.996755 master-0 kubenswrapper[28149]: I0313 13:15:20.996700 28149 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"]
Mar 13 13:15:21.852319 master-0 kubenswrapper[28149]: I0313 13:15:21.852263 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"2b71b6af-46d8-43ec-8bd9-2898a1ee5f97","Type":"ContainerStarted","Data":"7573c9506022c566da8e0beec36c92ce2f0a3026085d9e917978641151689ed7"}
Mar 13 13:15:21.852822 master-0 kubenswrapper[28149]: I0313 13:15:21.852329 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"2b71b6af-46d8-43ec-8bd9-2898a1ee5f97","Type":"ContainerStarted","Data":"3595af18f675ffc9aa8f47fe4c1a174b7710dff0b18d5c12e48d15775e071857"}
Mar 13 13:15:21.921934 master-0 kubenswrapper[28149]: I0313 13:15:21.921847 28149 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.921818212 podStartE2EDuration="2.921818212s" podCreationTimestamp="2026-03-13 13:15:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 13:15:21.916132399 +0000 UTC m=+1295.569597558" watchObservedRunningTime="2026-03-13 13:15:21.921818212 +0000 UTC m=+1295.575283371"
Mar 13 13:15:22.426979 master-0 kubenswrapper[28149]: I0313 13:15:22.426802 28149 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="e579030e-e1cd-4dee-8a65-0e7a9b636974" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.128.1.12:8775/\": dial tcp 10.128.1.12:8775: connect: connection refused"
Mar 13 13:15:22.426979 master-0 kubenswrapper[28149]: I0313 13:15:22.426836 28149 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="e579030e-e1cd-4dee-8a65-0e7a9b636974" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.128.1.12:8775/\": dial tcp 10.128.1.12:8775: connect: connection refused"
Mar 13 13:15:22.883496 master-0 kubenswrapper[28149]: I0313 13:15:22.869840 28149 generic.go:334] "Generic (PLEG): container finished" podID="e579030e-e1cd-4dee-8a65-0e7a9b636974" containerID="568dd59e04dd1335822f79eed749066323b9ec02f7b4c6056b3dbd19d0faddd8" exitCode=0
Mar 13 13:15:22.883496 master-0 kubenswrapper[28149]: I0313 13:15:22.869981 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"e579030e-e1cd-4dee-8a65-0e7a9b636974","Type":"ContainerDied","Data":"568dd59e04dd1335822f79eed749066323b9ec02f7b4c6056b3dbd19d0faddd8"}
Mar 13 13:15:23.212330 master-0 kubenswrapper[28149]: I0313 13:15:23.211702 28149 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Mar 13 13:15:23.246090 master-0 kubenswrapper[28149]: I0313 13:15:23.246045 28149 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e579030e-e1cd-4dee-8a65-0e7a9b636974-config-data\") pod \"e579030e-e1cd-4dee-8a65-0e7a9b636974\" (UID: \"e579030e-e1cd-4dee-8a65-0e7a9b636974\") "
Mar 13 13:15:23.247339 master-0 kubenswrapper[28149]: I0313 13:15:23.247308 28149 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/e579030e-e1cd-4dee-8a65-0e7a9b636974-nova-metadata-tls-certs\") pod \"e579030e-e1cd-4dee-8a65-0e7a9b636974\" (UID: \"e579030e-e1cd-4dee-8a65-0e7a9b636974\") "
Mar 13 13:15:23.247870 master-0 kubenswrapper[28149]: I0313 13:15:23.247849 28149 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6fvgl\" (UniqueName: \"kubernetes.io/projected/e579030e-e1cd-4dee-8a65-0e7a9b636974-kube-api-access-6fvgl\") pod \"e579030e-e1cd-4dee-8a65-0e7a9b636974\" (UID: \"e579030e-e1cd-4dee-8a65-0e7a9b636974\") "
Mar 13 13:15:23.248116 master-0 kubenswrapper[28149]: I0313 13:15:23.248098 28149 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e579030e-e1cd-4dee-8a65-0e7a9b636974-logs\") pod \"e579030e-e1cd-4dee-8a65-0e7a9b636974\" (UID: \"e579030e-e1cd-4dee-8a65-0e7a9b636974\") "
Mar 13 13:15:23.248303 master-0 kubenswrapper[28149]: I0313 13:15:23.248284 28149 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e579030e-e1cd-4dee-8a65-0e7a9b636974-combined-ca-bundle\") pod \"e579030e-e1cd-4dee-8a65-0e7a9b636974\" (UID: \"e579030e-e1cd-4dee-8a65-0e7a9b636974\") "
Mar 13 13:15:23.248689 master-0 kubenswrapper[28149]: I0313 13:15:23.248634 28149 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e579030e-e1cd-4dee-8a65-0e7a9b636974-logs" (OuterVolumeSpecName: "logs") pod "e579030e-e1cd-4dee-8a65-0e7a9b636974" (UID: "e579030e-e1cd-4dee-8a65-0e7a9b636974"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 13 13:15:23.249602 master-0 kubenswrapper[28149]: I0313 13:15:23.249580 28149 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e579030e-e1cd-4dee-8a65-0e7a9b636974-logs\") on node \"master-0\" DevicePath \"\""
Mar 13 13:15:23.251863 master-0 kubenswrapper[28149]: I0313 13:15:23.251775 28149 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e579030e-e1cd-4dee-8a65-0e7a9b636974-kube-api-access-6fvgl" (OuterVolumeSpecName: "kube-api-access-6fvgl") pod "e579030e-e1cd-4dee-8a65-0e7a9b636974" (UID: "e579030e-e1cd-4dee-8a65-0e7a9b636974"). InnerVolumeSpecName "kube-api-access-6fvgl". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 13 13:15:23.278801 master-0 kubenswrapper[28149]: I0313 13:15:23.278735 28149 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e579030e-e1cd-4dee-8a65-0e7a9b636974-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e579030e-e1cd-4dee-8a65-0e7a9b636974" (UID: "e579030e-e1cd-4dee-8a65-0e7a9b636974"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 13 13:15:23.279940 master-0 kubenswrapper[28149]: I0313 13:15:23.279918 28149 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e579030e-e1cd-4dee-8a65-0e7a9b636974-config-data" (OuterVolumeSpecName: "config-data") pod "e579030e-e1cd-4dee-8a65-0e7a9b636974" (UID: "e579030e-e1cd-4dee-8a65-0e7a9b636974"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 13 13:15:23.304748 master-0 kubenswrapper[28149]: I0313 13:15:23.304700 28149 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e579030e-e1cd-4dee-8a65-0e7a9b636974-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "e579030e-e1cd-4dee-8a65-0e7a9b636974" (UID: "e579030e-e1cd-4dee-8a65-0e7a9b636974"). InnerVolumeSpecName "nova-metadata-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 13 13:15:23.352126 master-0 kubenswrapper[28149]: I0313 13:15:23.351988 28149 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e579030e-e1cd-4dee-8a65-0e7a9b636974-config-data\") on node \"master-0\" DevicePath \"\""
Mar 13 13:15:23.352861 master-0 kubenswrapper[28149]: I0313 13:15:23.352838 28149 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/e579030e-e1cd-4dee-8a65-0e7a9b636974-nova-metadata-tls-certs\") on node \"master-0\" DevicePath \"\""
Mar 13 13:15:23.352991 master-0 kubenswrapper[28149]: I0313 13:15:23.352976 28149 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6fvgl\" (UniqueName: \"kubernetes.io/projected/e579030e-e1cd-4dee-8a65-0e7a9b636974-kube-api-access-6fvgl\") on node \"master-0\" DevicePath \"\""
Mar 13 13:15:23.353090 master-0 kubenswrapper[28149]: I0313 13:15:23.353075 28149 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e579030e-e1cd-4dee-8a65-0e7a9b636974-combined-ca-bundle\") on node \"master-0\" DevicePath \"\""
Mar 13 13:15:23.886282 master-0 kubenswrapper[28149]: I0313 13:15:23.886200 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"e579030e-e1cd-4dee-8a65-0e7a9b636974","Type":"ContainerDied","Data":"c20ffe658c4584dea980e7b403e0db881628da95d1a6b6134f2638269d1ff467"}
Mar 13 13:15:23.886282 master-0 kubenswrapper[28149]: I0313 13:15:23.886272 28149 scope.go:117] "RemoveContainer" containerID="568dd59e04dd1335822f79eed749066323b9ec02f7b4c6056b3dbd19d0faddd8"
Mar 13 13:15:23.887002 master-0 kubenswrapper[28149]: I0313 13:15:23.886272 28149 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Mar 13 13:15:23.922704 master-0 kubenswrapper[28149]: I0313 13:15:23.922642 28149 scope.go:117] "RemoveContainer" containerID="e9c29f4ebf8c74e4fc60a77a1033351cbf9270544bef85bc6feb7f5be1c245b6"
Mar 13 13:15:23.961528 master-0 kubenswrapper[28149]: I0313 13:15:23.961469 28149 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"]
Mar 13 13:15:23.999473 master-0 kubenswrapper[28149]: I0313 13:15:23.999353 28149 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"]
Mar 13 13:15:24.053927 master-0 kubenswrapper[28149]: I0313 13:15:24.053834 28149 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"]
Mar 13 13:15:24.055032 master-0 kubenswrapper[28149]: E0313 13:15:24.055009 28149 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e579030e-e1cd-4dee-8a65-0e7a9b636974" containerName="nova-metadata-metadata"
Mar 13 13:15:24.055134 master-0 kubenswrapper[28149]: I0313 13:15:24.055035 28149 state_mem.go:107] "Deleted CPUSet assignment" podUID="e579030e-e1cd-4dee-8a65-0e7a9b636974" containerName="nova-metadata-metadata"
Mar 13 13:15:24.055134 master-0 kubenswrapper[28149]: E0313 13:15:24.055125 28149 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e579030e-e1cd-4dee-8a65-0e7a9b636974" containerName="nova-metadata-log"
Mar 13 13:15:24.055134 master-0 kubenswrapper[28149]: I0313 13:15:24.055154 28149 state_mem.go:107] "Deleted CPUSet assignment" podUID="e579030e-e1cd-4dee-8a65-0e7a9b636974" containerName="nova-metadata-log"
Mar 13 13:15:24.055586 master-0 kubenswrapper[28149]: I0313 13:15:24.055535 28149 memory_manager.go:354] "RemoveStaleState removing state" podUID="e579030e-e1cd-4dee-8a65-0e7a9b636974" containerName="nova-metadata-metadata"
Mar 13 13:15:24.055586 master-0 kubenswrapper[28149]: I0313 13:15:24.055583 28149 memory_manager.go:354] "RemoveStaleState removing state" podUID="e579030e-e1cd-4dee-8a65-0e7a9b636974" containerName="nova-metadata-log"
Mar 13 13:15:24.057358 master-0 kubenswrapper[28149]: I0313 13:15:24.057330 28149 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Mar 13 13:15:24.060154 master-0 kubenswrapper[28149]: I0313 13:15:24.060118 28149 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc"
Mar 13 13:15:24.060504 master-0 kubenswrapper[28149]: I0313 13:15:24.060452 28149 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data"
Mar 13 13:15:24.068304 master-0 kubenswrapper[28149]: I0313 13:15:24.068250 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/00baeeae-623a-4812-b4ef-69ef6bb38e46-logs\") pod \"nova-metadata-0\" (UID: \"00baeeae-623a-4812-b4ef-69ef6bb38e46\") " pod="openstack/nova-metadata-0"
Mar 13 13:15:24.068437 master-0 kubenswrapper[28149]: I0313 13:15:24.068408 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/00baeeae-623a-4812-b4ef-69ef6bb38e46-config-data\") pod \"nova-metadata-0\" (UID: \"00baeeae-623a-4812-b4ef-69ef6bb38e46\") " pod="openstack/nova-metadata-0"
Mar 13 13:15:24.068758 master-0 kubenswrapper[28149]: I0313 13:15:24.068699 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/00baeeae-623a-4812-b4ef-69ef6bb38e46-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"00baeeae-623a-4812-b4ef-69ef6bb38e46\") " pod="openstack/nova-metadata-0"
Mar 13 13:15:24.068966 master-0 kubenswrapper[28149]: I0313 13:15:24.068937 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/00baeeae-623a-4812-b4ef-69ef6bb38e46-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"00baeeae-623a-4812-b4ef-69ef6bb38e46\") " pod="openstack/nova-metadata-0"
Mar 13 13:15:24.069104 master-0 kubenswrapper[28149]: I0313 13:15:24.069078 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nhksh\" (UniqueName: \"kubernetes.io/projected/00baeeae-623a-4812-b4ef-69ef6bb38e46-kube-api-access-nhksh\") pod \"nova-metadata-0\" (UID: \"00baeeae-623a-4812-b4ef-69ef6bb38e46\") " pod="openstack/nova-metadata-0"
Mar 13 13:15:24.167941 master-0 kubenswrapper[28149]: I0313 13:15:24.167806 28149 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"]
Mar 13 13:15:24.170667 master-0 kubenswrapper[28149]: I0313 13:15:24.170622 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/00baeeae-623a-4812-b4ef-69ef6bb38e46-logs\") pod \"nova-metadata-0\" (UID: \"00baeeae-623a-4812-b4ef-69ef6bb38e46\") " pod="openstack/nova-metadata-0"
Mar 13 13:15:24.170789 master-0 kubenswrapper[28149]: I0313 13:15:24.170761 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/00baeeae-623a-4812-b4ef-69ef6bb38e46-config-data\") pod \"nova-metadata-0\" (UID: \"00baeeae-623a-4812-b4ef-69ef6bb38e46\") " pod="openstack/nova-metadata-0"
Mar 13 13:15:24.170883 master-0 kubenswrapper[28149]: I0313 13:15:24.170849 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/00baeeae-623a-4812-b4ef-69ef6bb38e46-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"00baeeae-623a-4812-b4ef-69ef6bb38e46\") " pod="openstack/nova-metadata-0"
Mar 13 13:15:24.170951 master-0 kubenswrapper[28149]: I0313 13:15:24.170913 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/00baeeae-623a-4812-b4ef-69ef6bb38e46-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"00baeeae-623a-4812-b4ef-69ef6bb38e46\") " pod="openstack/nova-metadata-0"
Mar 13 13:15:24.171007 master-0 kubenswrapper[28149]: I0313 13:15:24.170963 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nhksh\" (UniqueName: \"kubernetes.io/projected/00baeeae-623a-4812-b4ef-69ef6bb38e46-kube-api-access-nhksh\") pod \"nova-metadata-0\" (UID: \"00baeeae-623a-4812-b4ef-69ef6bb38e46\") " pod="openstack/nova-metadata-0"
Mar 13 13:15:24.172960 master-0 kubenswrapper[28149]: I0313 13:15:24.172914 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/00baeeae-623a-4812-b4ef-69ef6bb38e46-logs\") pod \"nova-metadata-0\" (UID: \"00baeeae-623a-4812-b4ef-69ef6bb38e46\") " pod="openstack/nova-metadata-0"
Mar 13 13:15:24.175257 master-0 kubenswrapper[28149]: I0313 13:15:24.175204 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/00baeeae-623a-4812-b4ef-69ef6bb38e46-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"00baeeae-623a-4812-b4ef-69ef6bb38e46\") " pod="openstack/nova-metadata-0"
Mar 13 13:15:24.176398 master-0 kubenswrapper[28149]: I0313 13:15:24.176365 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/00baeeae-623a-4812-b4ef-69ef6bb38e46-config-data\") pod \"nova-metadata-0\" (UID: \"00baeeae-623a-4812-b4ef-69ef6bb38e46\") " pod="openstack/nova-metadata-0"
Mar 13 13:15:24.177254 master-0 kubenswrapper[28149]: I0313 13:15:24.177222 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/00baeeae-623a-4812-b4ef-69ef6bb38e46-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"00baeeae-623a-4812-b4ef-69ef6bb38e46\") " pod="openstack/nova-metadata-0"
Mar 13 13:15:24.205329 master-0 kubenswrapper[28149]: I0313 13:15:24.205064 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nhksh\" (UniqueName: \"kubernetes.io/projected/00baeeae-623a-4812-b4ef-69ef6bb38e46-kube-api-access-nhksh\") pod \"nova-metadata-0\" (UID: \"00baeeae-623a-4812-b4ef-69ef6bb38e46\") " pod="openstack/nova-metadata-0"
Mar 13 13:15:24.395821 master-0 kubenswrapper[28149]: I0313 13:15:24.395659 28149 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Mar 13 13:15:24.714890 master-0 kubenswrapper[28149]: I0313 13:15:24.711862 28149 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e579030e-e1cd-4dee-8a65-0e7a9b636974" path="/var/lib/kubelet/pods/e579030e-e1cd-4dee-8a65-0e7a9b636974/volumes"
Mar 13 13:15:24.956398 master-0 kubenswrapper[28149]: W0313 13:15:24.956313 28149 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod00baeeae_623a_4812_b4ef_69ef6bb38e46.slice/crio-04aa94cf90f35b6e21a61d21164098723ae4abbb3cf1cbd12994f0edf8ded4f2 WatchSource:0}: Error finding container 04aa94cf90f35b6e21a61d21164098723ae4abbb3cf1cbd12994f0edf8ded4f2: Status 404 returned error can't find the container with id 04aa94cf90f35b6e21a61d21164098723ae4abbb3cf1cbd12994f0edf8ded4f2
Mar 13 13:15:24.967189 master-0 kubenswrapper[28149]: I0313 13:15:24.966899 28149 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"]
Mar 13 13:15:25.325175 master-0 kubenswrapper[28149]: I0313 13:15:25.323204 28149 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0"
Mar 13 13:15:25.921879 master-0 kubenswrapper[28149]: I0313 13:15:25.921819 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"00baeeae-623a-4812-b4ef-69ef6bb38e46","Type":"ContainerStarted","Data":"bcdc1ba93da05ed78ee8b800fc23b7a279b86f7aa24a13f42d83820d20363a52"}
Mar 13 13:15:25.922266 master-0 kubenswrapper[28149]: I0313 13:15:25.922246 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"00baeeae-623a-4812-b4ef-69ef6bb38e46","Type":"ContainerStarted","Data":"e131bf9088d3c9aaf0809fa80cc8ca213b79f05e82ede979ba1d6ee422d8a4d4"}
Mar 13 13:15:25.922347 master-0 kubenswrapper[28149]: I0313 13:15:25.922333 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"00baeeae-623a-4812-b4ef-69ef6bb38e46","Type":"ContainerStarted","Data":"04aa94cf90f35b6e21a61d21164098723ae4abbb3cf1cbd12994f0edf8ded4f2"}
Mar 13 13:15:25.963304 master-0 kubenswrapper[28149]: I0313 13:15:25.963160 28149 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.963112638 podStartE2EDuration="2.963112638s" podCreationTimestamp="2026-03-13 13:15:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 13:15:25.955534374 +0000 UTC m=+1299.608999543" watchObservedRunningTime="2026-03-13 13:15:25.963112638 +0000 UTC m=+1299.616577797"
Mar 13 13:15:29.396812 master-0 kubenswrapper[28149]: I0313 13:15:29.396694 28149 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0"
Mar 13 13:15:29.396812 master-0 kubenswrapper[28149]: I0313 13:15:29.396816 28149 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0"
Mar 13 13:15:29.623516 master-0 kubenswrapper[28149]: I0313 13:15:29.623450 28149 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0"
Mar 13 13:15:29.623516 master-0 kubenswrapper[28149]: I0313 13:15:29.623508 28149 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0"
Mar 13 13:15:30.323660 master-0 kubenswrapper[28149]: I0313 13:15:30.323571 28149 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0"
Mar 13 13:15:30.353372 master-0 kubenswrapper[28149]: I0313 13:15:30.353302 28149 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0"
Mar 13 13:15:30.644571 master-0 kubenswrapper[28149]: I0313 13:15:30.644423 28149 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="ea05ecb0-321c-40e5-bfe7-8bceb4cd103c" containerName="nova-api-log" probeResult="failure" output="Get \"https://10.128.1.18:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Mar 13 13:15:30.645322 master-0 kubenswrapper[28149]: I0313 13:15:30.644430 28149 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="ea05ecb0-321c-40e5-bfe7-8bceb4cd103c" containerName="nova-api-api" probeResult="failure" output="Get \"https://10.128.1.18:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Mar 13 13:15:31.021712 master-0 kubenswrapper[28149]: I0313 13:15:31.021645 28149 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0"
Mar 13 13:15:34.579703 master-0 kubenswrapper[28149]: I0313 13:15:34.573290 28149 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0"
Mar 13 13:15:34.579703 master-0 kubenswrapper[28149]: I0313 13:15:34.573367 28149 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0"
Mar 13 13:15:35.903520 master-0 kubenswrapper[28149]: I0313 13:15:35.903445 28149 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="00baeeae-623a-4812-b4ef-69ef6bb38e46" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.128.1.20:8775/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Mar 13 13:15:35.904542 master-0 kubenswrapper[28149]: I0313 13:15:35.904154 28149 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="00baeeae-623a-4812-b4ef-69ef6bb38e46" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.128.1.20:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Mar 13 13:15:39.632413 master-0 kubenswrapper[28149]: I0313 13:15:39.632346
28149 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Mar 13 13:15:39.633177 master-0 kubenswrapper[28149]: I0313 13:15:39.632873 28149 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Mar 13 13:15:39.663802 master-0 kubenswrapper[28149]: I0313 13:15:39.663641 28149 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Mar 13 13:15:39.666098 master-0 kubenswrapper[28149]: I0313 13:15:39.664909 28149 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Mar 13 13:15:40.542290 master-0 kubenswrapper[28149]: I0313 13:15:40.542228 28149 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Mar 13 13:15:40.549535 master-0 kubenswrapper[28149]: I0313 13:15:40.549495 28149 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Mar 13 13:15:44.402086 master-0 kubenswrapper[28149]: I0313 13:15:44.402028 28149 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Mar 13 13:15:44.406000 master-0 kubenswrapper[28149]: I0313 13:15:44.404117 28149 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Mar 13 13:15:44.408061 master-0 kubenswrapper[28149]: I0313 13:15:44.408014 28149 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Mar 13 13:15:44.621789 master-0 kubenswrapper[28149]: I0313 13:15:44.621673 28149 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Mar 13 13:15:58.077476 master-0 kubenswrapper[28149]: I0313 13:15:58.077420 28149 scope.go:117] "RemoveContainer" containerID="fd0f55704d8e529fe60229924700911c0c59e1ba7817dbab931e76e86defa07e" Mar 13 13:16:13.391409 master-0 kubenswrapper[28149]: I0313 13:16:13.391293 28149 
kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["sushy-emulator/sushy-emulator-59477995f9-8sk9j"] Mar 13 13:16:13.392853 master-0 kubenswrapper[28149]: I0313 13:16:13.391593 28149 kuberuntime_container.go:808] "Killing container with a grace period" pod="sushy-emulator/sushy-emulator-59477995f9-8sk9j" podUID="0728440d-287f-4cc8-bbc0-a00845e4ca8a" containerName="sushy-emulator" containerID="cri-o://4a07511656e57b28671840a7b61ceaf462ff0345356d1547bd1f4f899e61d31b" gracePeriod=30 Mar 13 13:16:14.869409 master-0 kubenswrapper[28149]: I0313 13:16:14.869331 28149 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="sushy-emulator/sushy-emulator-59477995f9-8sk9j" Mar 13 13:16:14.920450 master-0 kubenswrapper[28149]: I0313 13:16:14.919649 28149 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"os-client-config\" (UniqueName: \"kubernetes.io/secret/0728440d-287f-4cc8-bbc0-a00845e4ca8a-os-client-config\") pod \"0728440d-287f-4cc8-bbc0-a00845e4ca8a\" (UID: \"0728440d-287f-4cc8-bbc0-a00845e4ca8a\") " Mar 13 13:16:14.920450 master-0 kubenswrapper[28149]: I0313 13:16:14.919792 28149 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sushy-emulator-config\" (UniqueName: \"kubernetes.io/configmap/0728440d-287f-4cc8-bbc0-a00845e4ca8a-sushy-emulator-config\") pod \"0728440d-287f-4cc8-bbc0-a00845e4ca8a\" (UID: \"0728440d-287f-4cc8-bbc0-a00845e4ca8a\") " Mar 13 13:16:14.920450 master-0 kubenswrapper[28149]: I0313 13:16:14.919833 28149 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4wvsg\" (UniqueName: \"kubernetes.io/projected/0728440d-287f-4cc8-bbc0-a00845e4ca8a-kube-api-access-4wvsg\") pod \"0728440d-287f-4cc8-bbc0-a00845e4ca8a\" (UID: \"0728440d-287f-4cc8-bbc0-a00845e4ca8a\") " Mar 13 13:16:14.931177 master-0 kubenswrapper[28149]: I0313 13:16:14.928794 28149 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/0728440d-287f-4cc8-bbc0-a00845e4ca8a-kube-api-access-4wvsg" (OuterVolumeSpecName: "kube-api-access-4wvsg") pod "0728440d-287f-4cc8-bbc0-a00845e4ca8a" (UID: "0728440d-287f-4cc8-bbc0-a00845e4ca8a"). InnerVolumeSpecName "kube-api-access-4wvsg". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 13:16:14.936542 master-0 kubenswrapper[28149]: I0313 13:16:14.936059 28149 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4wvsg\" (UniqueName: \"kubernetes.io/projected/0728440d-287f-4cc8-bbc0-a00845e4ca8a-kube-api-access-4wvsg\") on node \"master-0\" DevicePath \"\"" Mar 13 13:16:14.936542 master-0 kubenswrapper[28149]: I0313 13:16:14.936059 28149 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0728440d-287f-4cc8-bbc0-a00845e4ca8a-os-client-config" (OuterVolumeSpecName: "os-client-config") pod "0728440d-287f-4cc8-bbc0-a00845e4ca8a" (UID: "0728440d-287f-4cc8-bbc0-a00845e4ca8a"). InnerVolumeSpecName "os-client-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 13:16:14.939631 master-0 kubenswrapper[28149]: I0313 13:16:14.937973 28149 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0728440d-287f-4cc8-bbc0-a00845e4ca8a-sushy-emulator-config" (OuterVolumeSpecName: "sushy-emulator-config") pod "0728440d-287f-4cc8-bbc0-a00845e4ca8a" (UID: "0728440d-287f-4cc8-bbc0-a00845e4ca8a"). InnerVolumeSpecName "sushy-emulator-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 13 13:16:15.040175 master-0 kubenswrapper[28149]: I0313 13:16:15.039539 28149 reconciler_common.go:293] "Volume detached for volume \"os-client-config\" (UniqueName: \"kubernetes.io/secret/0728440d-287f-4cc8-bbc0-a00845e4ca8a-os-client-config\") on node \"master-0\" DevicePath \"\"" Mar 13 13:16:15.040175 master-0 kubenswrapper[28149]: I0313 13:16:15.039579 28149 reconciler_common.go:293] "Volume detached for volume \"sushy-emulator-config\" (UniqueName: \"kubernetes.io/configmap/0728440d-287f-4cc8-bbc0-a00845e4ca8a-sushy-emulator-config\") on node \"master-0\" DevicePath \"\"" Mar 13 13:16:15.070176 master-0 kubenswrapper[28149]: I0313 13:16:15.066527 28149 generic.go:334] "Generic (PLEG): container finished" podID="0728440d-287f-4cc8-bbc0-a00845e4ca8a" containerID="4a07511656e57b28671840a7b61ceaf462ff0345356d1547bd1f4f899e61d31b" exitCode=0 Mar 13 13:16:15.070176 master-0 kubenswrapper[28149]: I0313 13:16:15.066597 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="sushy-emulator/sushy-emulator-59477995f9-8sk9j" event={"ID":"0728440d-287f-4cc8-bbc0-a00845e4ca8a","Type":"ContainerDied","Data":"4a07511656e57b28671840a7b61ceaf462ff0345356d1547bd1f4f899e61d31b"} Mar 13 13:16:15.070176 master-0 kubenswrapper[28149]: I0313 13:16:15.066645 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="sushy-emulator/sushy-emulator-59477995f9-8sk9j" event={"ID":"0728440d-287f-4cc8-bbc0-a00845e4ca8a","Type":"ContainerDied","Data":"152c076fbc0d0a50c127a74722344fca80a7bc0130661c241c88aff10a6b77ae"} Mar 13 13:16:15.070176 master-0 kubenswrapper[28149]: I0313 13:16:15.066669 28149 scope.go:117] "RemoveContainer" containerID="4a07511656e57b28671840a7b61ceaf462ff0345356d1547bd1f4f899e61d31b" Mar 13 13:16:15.070176 master-0 kubenswrapper[28149]: I0313 13:16:15.066899 28149 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="sushy-emulator/sushy-emulator-59477995f9-8sk9j" Mar 13 13:16:15.124164 master-0 kubenswrapper[28149]: I0313 13:16:15.119972 28149 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["sushy-emulator/sushy-emulator-54b65fbdd6-4bk7b"] Mar 13 13:16:15.124164 master-0 kubenswrapper[28149]: E0313 13:16:15.120650 28149 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0728440d-287f-4cc8-bbc0-a00845e4ca8a" containerName="sushy-emulator" Mar 13 13:16:15.124164 master-0 kubenswrapper[28149]: I0313 13:16:15.120683 28149 state_mem.go:107] "Deleted CPUSet assignment" podUID="0728440d-287f-4cc8-bbc0-a00845e4ca8a" containerName="sushy-emulator" Mar 13 13:16:15.124164 master-0 kubenswrapper[28149]: I0313 13:16:15.121000 28149 memory_manager.go:354] "RemoveStaleState removing state" podUID="0728440d-287f-4cc8-bbc0-a00845e4ca8a" containerName="sushy-emulator" Mar 13 13:16:15.124164 master-0 kubenswrapper[28149]: I0313 13:16:15.121981 28149 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="sushy-emulator/sushy-emulator-54b65fbdd6-4bk7b" Mar 13 13:16:15.132172 master-0 kubenswrapper[28149]: I0313 13:16:15.130746 28149 reflector.go:368] Caches populated for *v1.ConfigMap from object-"sushy-emulator"/"sushy-emulator-config" Mar 13 13:16:15.343192 master-0 kubenswrapper[28149]: I0313 13:16:15.343126 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sushy-emulator-config\" (UniqueName: \"kubernetes.io/configmap/990196df-fc8c-4a28-a71e-f4576372cf90-sushy-emulator-config\") pod \"sushy-emulator-54b65fbdd6-4bk7b\" (UID: \"990196df-fc8c-4a28-a71e-f4576372cf90\") " pod="sushy-emulator/sushy-emulator-54b65fbdd6-4bk7b" Mar 13 13:16:15.343620 master-0 kubenswrapper[28149]: I0313 13:16:15.343291 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wgwvp\" (UniqueName: \"kubernetes.io/projected/990196df-fc8c-4a28-a71e-f4576372cf90-kube-api-access-wgwvp\") pod \"sushy-emulator-54b65fbdd6-4bk7b\" (UID: \"990196df-fc8c-4a28-a71e-f4576372cf90\") " pod="sushy-emulator/sushy-emulator-54b65fbdd6-4bk7b" Mar 13 13:16:15.343620 master-0 kubenswrapper[28149]: I0313 13:16:15.343435 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-client-config\" (UniqueName: \"kubernetes.io/secret/990196df-fc8c-4a28-a71e-f4576372cf90-os-client-config\") pod \"sushy-emulator-54b65fbdd6-4bk7b\" (UID: \"990196df-fc8c-4a28-a71e-f4576372cf90\") " pod="sushy-emulator/sushy-emulator-54b65fbdd6-4bk7b" Mar 13 13:16:15.355248 master-0 kubenswrapper[28149]: I0313 13:16:15.354009 28149 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["sushy-emulator/sushy-emulator-54b65fbdd6-4bk7b"] Mar 13 13:16:15.452170 master-0 kubenswrapper[28149]: I0313 13:16:15.446298 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-client-config\" (UniqueName: 
\"kubernetes.io/secret/990196df-fc8c-4a28-a71e-f4576372cf90-os-client-config\") pod \"sushy-emulator-54b65fbdd6-4bk7b\" (UID: \"990196df-fc8c-4a28-a71e-f4576372cf90\") " pod="sushy-emulator/sushy-emulator-54b65fbdd6-4bk7b" Mar 13 13:16:15.452170 master-0 kubenswrapper[28149]: I0313 13:16:15.446461 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sushy-emulator-config\" (UniqueName: \"kubernetes.io/configmap/990196df-fc8c-4a28-a71e-f4576372cf90-sushy-emulator-config\") pod \"sushy-emulator-54b65fbdd6-4bk7b\" (UID: \"990196df-fc8c-4a28-a71e-f4576372cf90\") " pod="sushy-emulator/sushy-emulator-54b65fbdd6-4bk7b" Mar 13 13:16:15.452170 master-0 kubenswrapper[28149]: I0313 13:16:15.446563 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wgwvp\" (UniqueName: \"kubernetes.io/projected/990196df-fc8c-4a28-a71e-f4576372cf90-kube-api-access-wgwvp\") pod \"sushy-emulator-54b65fbdd6-4bk7b\" (UID: \"990196df-fc8c-4a28-a71e-f4576372cf90\") " pod="sushy-emulator/sushy-emulator-54b65fbdd6-4bk7b" Mar 13 13:16:15.452170 master-0 kubenswrapper[28149]: I0313 13:16:15.452101 28149 scope.go:117] "RemoveContainer" containerID="4a07511656e57b28671840a7b61ceaf462ff0345356d1547bd1f4f899e61d31b" Mar 13 13:16:15.456169 master-0 kubenswrapper[28149]: E0313 13:16:15.454831 28149 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4a07511656e57b28671840a7b61ceaf462ff0345356d1547bd1f4f899e61d31b\": container with ID starting with 4a07511656e57b28671840a7b61ceaf462ff0345356d1547bd1f4f899e61d31b not found: ID does not exist" containerID="4a07511656e57b28671840a7b61ceaf462ff0345356d1547bd1f4f899e61d31b" Mar 13 13:16:15.456169 master-0 kubenswrapper[28149]: I0313 13:16:15.454893 28149 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4a07511656e57b28671840a7b61ceaf462ff0345356d1547bd1f4f899e61d31b"} 
err="failed to get container status \"4a07511656e57b28671840a7b61ceaf462ff0345356d1547bd1f4f899e61d31b\": rpc error: code = NotFound desc = could not find container \"4a07511656e57b28671840a7b61ceaf462ff0345356d1547bd1f4f899e61d31b\": container with ID starting with 4a07511656e57b28671840a7b61ceaf462ff0345356d1547bd1f4f899e61d31b not found: ID does not exist" Mar 13 13:16:15.472165 master-0 kubenswrapper[28149]: I0313 13:16:15.470105 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sushy-emulator-config\" (UniqueName: \"kubernetes.io/configmap/990196df-fc8c-4a28-a71e-f4576372cf90-sushy-emulator-config\") pod \"sushy-emulator-54b65fbdd6-4bk7b\" (UID: \"990196df-fc8c-4a28-a71e-f4576372cf90\") " pod="sushy-emulator/sushy-emulator-54b65fbdd6-4bk7b" Mar 13 13:16:15.472165 master-0 kubenswrapper[28149]: I0313 13:16:15.471972 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-client-config\" (UniqueName: \"kubernetes.io/secret/990196df-fc8c-4a28-a71e-f4576372cf90-os-client-config\") pod \"sushy-emulator-54b65fbdd6-4bk7b\" (UID: \"990196df-fc8c-4a28-a71e-f4576372cf90\") " pod="sushy-emulator/sushy-emulator-54b65fbdd6-4bk7b" Mar 13 13:16:15.522171 master-0 kubenswrapper[28149]: I0313 13:16:15.519001 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wgwvp\" (UniqueName: \"kubernetes.io/projected/990196df-fc8c-4a28-a71e-f4576372cf90-kube-api-access-wgwvp\") pod \"sushy-emulator-54b65fbdd6-4bk7b\" (UID: \"990196df-fc8c-4a28-a71e-f4576372cf90\") " pod="sushy-emulator/sushy-emulator-54b65fbdd6-4bk7b" Mar 13 13:16:15.546166 master-0 kubenswrapper[28149]: I0313 13:16:15.544303 28149 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["sushy-emulator/sushy-emulator-59477995f9-8sk9j"] Mar 13 13:16:15.591177 master-0 kubenswrapper[28149]: I0313 13:16:15.589218 28149 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["sushy-emulator/sushy-emulator-59477995f9-8sk9j"] Mar 13 
13:16:15.892681 master-0 kubenswrapper[28149]: I0313 13:16:15.890968 28149 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="sushy-emulator/sushy-emulator-54b65fbdd6-4bk7b" Mar 13 13:16:16.578623 master-0 kubenswrapper[28149]: I0313 13:16:16.578568 28149 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["sushy-emulator/sushy-emulator-54b65fbdd6-4bk7b"] Mar 13 13:16:16.719255 master-0 kubenswrapper[28149]: I0313 13:16:16.717373 28149 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0728440d-287f-4cc8-bbc0-a00845e4ca8a" path="/var/lib/kubelet/pods/0728440d-287f-4cc8-bbc0-a00845e4ca8a/volumes" Mar 13 13:16:17.102493 master-0 kubenswrapper[28149]: I0313 13:16:17.102378 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="sushy-emulator/sushy-emulator-54b65fbdd6-4bk7b" event={"ID":"990196df-fc8c-4a28-a71e-f4576372cf90","Type":"ContainerStarted","Data":"eb0b22ff974a06e660fefc6c75e7fcb2033a449a6f66ba6181da057a6042b31e"} Mar 13 13:16:17.102493 master-0 kubenswrapper[28149]: I0313 13:16:17.102480 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="sushy-emulator/sushy-emulator-54b65fbdd6-4bk7b" event={"ID":"990196df-fc8c-4a28-a71e-f4576372cf90","Type":"ContainerStarted","Data":"bed9182b1838e0c9d8349735f50856840bb38467fddc1224c409f5eb63c59144"} Mar 13 13:16:17.131779 master-0 kubenswrapper[28149]: I0313 13:16:17.131663 28149 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="sushy-emulator/sushy-emulator-54b65fbdd6-4bk7b" podStartSLOduration=3.131631445 podStartE2EDuration="3.131631445s" podCreationTimestamp="2026-03-13 13:16:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 13:16:17.125910931 +0000 UTC m=+1350.779376110" watchObservedRunningTime="2026-03-13 13:16:17.131631445 +0000 UTC m=+1350.785096604" Mar 13 13:16:25.891747 master-0 kubenswrapper[28149]: I0313 13:16:25.891682 28149 
kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="sushy-emulator/sushy-emulator-54b65fbdd6-4bk7b" Mar 13 13:16:25.891747 master-0 kubenswrapper[28149]: I0313 13:16:25.891753 28149 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="sushy-emulator/sushy-emulator-54b65fbdd6-4bk7b" Mar 13 13:16:25.902229 master-0 kubenswrapper[28149]: I0313 13:16:25.902181 28149 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="sushy-emulator/sushy-emulator-54b65fbdd6-4bk7b" Mar 13 13:16:26.222657 master-0 kubenswrapper[28149]: I0313 13:16:26.222535 28149 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="sushy-emulator/sushy-emulator-54b65fbdd6-4bk7b" Mar 13 13:16:44.574447 master-0 kubenswrapper[28149]: I0313 13:16:44.573867 28149 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-marketplace/community-operators-9x9vk" podUID="4f9e6618-62b5-4181-b545-211461811140" containerName="registry-server" probeResult="failure" output=< Mar 13 13:16:44.574447 master-0 kubenswrapper[28149]: timeout: health rpc did not complete within 1s Mar 13 13:16:44.574447 master-0 kubenswrapper[28149]: > Mar 13 13:16:58.264214 master-0 kubenswrapper[28149]: I0313 13:16:58.263612 28149 scope.go:117] "RemoveContainer" containerID="4afcd4d7e11775e451eba815b9d0a93141819eb87818991b5f59410f42e00c3f" Mar 13 13:16:58.314495 master-0 kubenswrapper[28149]: I0313 13:16:58.311530 28149 scope.go:117] "RemoveContainer" containerID="ae23445f6ac8cb92903d34a401e1012fde32e867514ef39f42e7ddcc892a0a9f" Mar 13 13:16:58.343165 master-0 kubenswrapper[28149]: I0313 13:16:58.342444 28149 scope.go:117] "RemoveContainer" containerID="6549c3175765d42b7e813efabb0a1f0603ba4f1d4615804beb48b2ef2bc7accb" Mar 13 13:16:58.380730 master-0 kubenswrapper[28149]: I0313 13:16:58.380614 28149 scope.go:117] "RemoveContainer" containerID="9e9d3431ab722a51a8be3b8442bbcf269a950063e2d61434dfd3760b4c3ccf4f" Mar 13 13:16:58.412130 master-0 
kubenswrapper[28149]: I0313 13:16:58.412085 28149 scope.go:117] "RemoveContainer" containerID="2f0ab736eae25f82c43a5d9e5e54f1c5c9b7bb1b519213efb5be7a4963b6b941" Mar 13 13:17:58.578274 master-0 kubenswrapper[28149]: I0313 13:17:58.578128 28149 scope.go:117] "RemoveContainer" containerID="9526c59f3a2cba871431711a9cbbc5eb3ada1bdd4f300b13618a2aaf534ea349" Mar 13 13:17:58.629551 master-0 kubenswrapper[28149]: I0313 13:17:58.629499 28149 scope.go:117] "RemoveContainer" containerID="ef9db93a43a4bcf2a58d8eaf7c496a10a9961fb641f216e5b2f84182384e6287" Mar 13 13:17:58.663207 master-0 kubenswrapper[28149]: I0313 13:17:58.663165 28149 scope.go:117] "RemoveContainer" containerID="7bb569460a6f2eb1cef8e8cce8c284a41fd3b9d31bee3e4adf73f698d6c89770" Mar 13 13:17:58.905526 master-0 kubenswrapper[28149]: I0313 13:17:58.905479 28149 scope.go:117] "RemoveContainer" containerID="33c31329400496fed349596fa5f92f2238ea2da5908785e42511946da5e90bb2" Mar 13 13:19:02.482386 master-0 kubenswrapper[28149]: I0313 13:19:02.482284 28149 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/cinder-ee0a2-backup-0" podUID="aeef3e73-d29d-456c-a41a-8f478df6e975" containerName="cinder-backup" probeResult="failure" output="Get \"http://10.128.0.240:8080/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 13 13:19:03.494871 master-0 kubenswrapper[28149]: I0313 13:19:03.493447 28149 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/cinder-ee0a2-scheduler-0" podUID="db83bac9-e722-4e4f-aad6-eba4fdcbaedb" containerName="cinder-scheduler" probeResult="failure" output="Get \"http://10.128.0.236:8080/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 13 13:19:05.067207 master-0 kubenswrapper[28149]: I0313 13:19:05.065741 28149 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/cinder-ee0a2-volume-lvm-iscsi-0" podUID="b0b0e08b-0a29-40e5-9cd6-3609aa630650" containerName="cinder-volume" probeResult="failure" 
output="Get \"http://10.128.0.239:8080/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 13 13:19:06.688729 master-0 kubenswrapper[28149]: I0313 13:19:06.676302 28149 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/infra-operator-controller-manager-b8c8d7cc8-q7d6n" podUID="57e83807-c598-4f45-b92a-e017a07b6997" containerName="manager" probeResult="failure" output="Get \"http://10.128.0.148:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 13 13:19:06.688729 master-0 kubenswrapper[28149]: I0313 13:19:06.676788 28149 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/infra-operator-controller-manager-b8c8d7cc8-q7d6n" podUID="57e83807-c598-4f45-b92a-e017a07b6997" containerName="manager" probeResult="failure" output="Get \"http://10.128.0.148:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 13 13:19:07.917162 master-0 kubenswrapper[28149]: I0313 13:19:07.916745 28149 patch_prober.go:28] interesting pod/openshift-config-operator-64488f9d78-t8fb4 container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.128.0.6:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 13 13:19:08.166423 master-0 kubenswrapper[28149]: I0313 13:19:07.963495 28149 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-64488f9d78-t8fb4" podUID="f0803181-4e37-43fa-8ddc-9c76d3f61817" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.6:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 13 13:19:08.182836 master-0 kubenswrapper[28149]: I0313 13:19:07.963594 28149 patch_prober.go:28] interesting 
pod/perses-operator-5bf474d74f-49bct container/perses-operator namespace/openshift-operators: Liveness probe status=failure output="Get \"http://10.128.0.128:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 13 13:19:08.182836 master-0 kubenswrapper[28149]: I0313 13:19:08.182761 28149 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operators/perses-operator-5bf474d74f-49bct" podUID="46e88d6d-6585-43dd-8fc3-2165ad505385" containerName="perses-operator" probeResult="failure" output="Get \"http://10.128.0.128:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 13 13:19:08.239377 master-0 kubenswrapper[28149]: I0313 13:19:08.238629 28149 prober.go:107] "Probe failed" probeType="Readiness" pod="metallb-system/metallb-operator-webhook-server-b9b5ddc8d-wj5zb" podUID="eee25312-b8a7-43f4-9ec9-96c1fadd4960" containerName="webhook-server" probeResult="failure" output="Get \"http://10.128.0.121:7472/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 13 13:19:08.556192 master-0 kubenswrapper[28149]: I0313 13:19:08.253978 28149 patch_prober.go:28] interesting pod/perses-operator-5bf474d74f-49bct container/perses-operator namespace/openshift-operators: Readiness probe status=failure output="Get \"http://10.128.0.128:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 13 13:19:08.556192 master-0 kubenswrapper[28149]: I0313 13:19:08.254048 28149 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operators/perses-operator-5bf474d74f-49bct" podUID="46e88d6d-6585-43dd-8fc3-2165ad505385" containerName="perses-operator" probeResult="failure" output="Get \"http://10.128.0.128:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 13 13:19:08.556192 master-0 kubenswrapper[28149]: I0313 13:19:08.553523 28149 
trace.go:236] Trace[469877295]: "Calculate volume metrics of catalog-content for pod openshift-marketplace/community-operators-9x9vk" (13-Mar-2026 13:19:06.662) (total time: 1890ms): Mar 13 13:19:08.556192 master-0 kubenswrapper[28149]: Trace[469877295]: [1.890916444s] [1.890916444s] END Mar 13 13:19:08.562127 master-0 kubenswrapper[28149]: I0313 13:19:08.560798 28149 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/cinder-ee0a2-backup-0" podUID="aeef3e73-d29d-456c-a41a-8f478df6e975" containerName="cinder-backup" probeResult="failure" output="Get \"http://10.128.0.240:8080/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 13 13:19:08.607419 master-0 kubenswrapper[28149]: I0313 13:19:08.604693 28149 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/cinder-ee0a2-scheduler-0" podUID="db83bac9-e722-4e4f-aad6-eba4fdcbaedb" containerName="cinder-scheduler" probeResult="failure" output="Get \"http://10.128.0.236:8080/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 13 13:19:08.725119 master-0 kubenswrapper[28149]: E0313 13:19:08.724987 28149 kubelet.go:2526] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="3.903s" Mar 13 13:19:56.093800 master-0 kubenswrapper[28149]: I0313 13:19:56.093723 28149 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-create-k5hkp"] Mar 13 13:19:56.106701 master-0 kubenswrapper[28149]: I0313 13:19:56.106564 28149 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-7c6e-account-create-update-4tgk8"] Mar 13 13:19:56.122792 master-0 kubenswrapper[28149]: I0313 13:19:56.122727 28149 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-create-45kqr"] Mar 13 13:19:56.351599 master-0 kubenswrapper[28149]: I0313 13:19:56.351464 28149 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openstack/placement-f29e-account-create-update-z7cvh"]
Mar 13 13:19:56.367215 master-0 kubenswrapper[28149]: I0313 13:19:56.366070 28149 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-create-45kqr"]
Mar 13 13:19:56.377620 master-0 kubenswrapper[28149]: I0313 13:19:56.377556 28149 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-7c6e-account-create-update-4tgk8"]
Mar 13 13:19:56.388799 master-0 kubenswrapper[28149]: I0313 13:19:56.388743 28149 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-create-k5hkp"]
Mar 13 13:19:56.405350 master-0 kubenswrapper[28149]: I0313 13:19:56.399524 28149 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-f29e-account-create-update-z7cvh"]
Mar 13 13:19:56.705917 master-0 kubenswrapper[28149]: I0313 13:19:56.705629 28149 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="08c292f4-ce11-41f5-b9ff-7a40e65cf085" path="/var/lib/kubelet/pods/08c292f4-ce11-41f5-b9ff-7a40e65cf085/volumes"
Mar 13 13:19:56.707477 master-0 kubenswrapper[28149]: I0313 13:19:56.707433 28149 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="11ae55ca-4c2e-47f9-9c32-96a9ee19a4e4" path="/var/lib/kubelet/pods/11ae55ca-4c2e-47f9-9c32-96a9ee19a4e4/volumes"
Mar 13 13:19:56.709455 master-0 kubenswrapper[28149]: I0313 13:19:56.709411 28149 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="518ac108-6964-45cd-af8a-d2e8d98cdb39" path="/var/lib/kubelet/pods/518ac108-6964-45cd-af8a-d2e8d98cdb39/volumes"
Mar 13 13:19:56.710452 master-0 kubenswrapper[28149]: I0313 13:19:56.710377 28149 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="805cfa21-0ee8-4da5-9a9a-1cf852f868c7" path="/var/lib/kubelet/pods/805cfa21-0ee8-4da5-9a9a-1cf852f868c7/volumes"
Mar 13 13:19:57.190289 master-0 kubenswrapper[28149]: I0313 13:19:57.190225 28149 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-create-2vrz2"]
Mar 13 13:19:57.200891 master-0 kubenswrapper[28149]: I0313 13:19:57.200826 28149 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-create-2vrz2"]
Mar 13 13:19:58.050070 master-0 kubenswrapper[28149]: I0313 13:19:58.049978 28149 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-4f01-account-create-update-wcwmp"]
Mar 13 13:19:58.065960 master-0 kubenswrapper[28149]: I0313 13:19:58.065828 28149 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-4f01-account-create-update-wcwmp"]
Mar 13 13:19:58.708942 master-0 kubenswrapper[28149]: I0313 13:19:58.708875 28149 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5d3ef396-7b26-4828-98c2-3d3acd135ed6" path="/var/lib/kubelet/pods/5d3ef396-7b26-4828-98c2-3d3acd135ed6/volumes"
Mar 13 13:19:58.710043 master-0 kubenswrapper[28149]: I0313 13:19:58.709875 28149 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5e39d6bc-2e33-484e-ac03-f7b1bb0352c8" path="/var/lib/kubelet/pods/5e39d6bc-2e33-484e-ac03-f7b1bb0352c8/volumes"
Mar 13 13:19:59.007755 master-0 kubenswrapper[28149]: I0313 13:19:59.007633 28149 scope.go:117] "RemoveContainer" containerID="81cac9f39ac0d0fef197556c6635b0b5934ce49c2079ffaab3bcef42ff8b37df"
Mar 13 13:19:59.042991 master-0 kubenswrapper[28149]: I0313 13:19:59.042934 28149 scope.go:117] "RemoveContainer" containerID="3d0a4a0185e798f6fc54f3fbdcda4ec68ec99dfff280faac11b95eba9ef1cfed"
Mar 13 13:19:59.067267 master-0 kubenswrapper[28149]: I0313 13:19:59.067197 28149 scope.go:117] "RemoveContainer" containerID="407957e257e8da1a98a5c817d538a13906dc183b411240d2308cc40c356bf613"
Mar 13 13:19:59.097232 master-0 kubenswrapper[28149]: I0313 13:19:59.097128 28149 scope.go:117] "RemoveContainer" containerID="520d807536cb81674b261119b513a944bc9aecd64b99e8d30ab702a455e478c7"
Mar 13 13:19:59.123923 master-0 kubenswrapper[28149]: I0313 13:19:59.123876 28149 scope.go:117] "RemoveContainer" containerID="de05d0db1863604656929a4f8ed7334e5b3f3ade2693a775963b621247b0a13a"
Mar 13 13:19:59.153687 master-0 kubenswrapper[28149]: I0313 13:19:59.153569 28149 scope.go:117] "RemoveContainer" containerID="8cea29442a555e878214d7982407688fc382effeb13a0eec24c534dfebd82ca1"
Mar 13 13:20:03.045649 master-0 kubenswrapper[28149]: I0313 13:20:03.045573 28149 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-rkdmk"]
Mar 13 13:20:03.058344 master-0 kubenswrapper[28149]: I0313 13:20:03.058278 28149 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-rkdmk"]
Mar 13 13:20:04.703449 master-0 kubenswrapper[28149]: I0313 13:20:04.703360 28149 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="220bdc89-22fc-4966-847c-550dad12dd5a" path="/var/lib/kubelet/pods/220bdc89-22fc-4966-847c-550dad12dd5a/volumes"
Mar 13 13:20:33.066132 master-0 kubenswrapper[28149]: I0313 13:20:33.066070 28149 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-create-b4zxh"]
Mar 13 13:20:33.082691 master-0 kubenswrapper[28149]: I0313 13:20:33.082635 28149 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-1148-account-create-update-ggwcs"]
Mar 13 13:20:33.098663 master-0 kubenswrapper[28149]: I0313 13:20:33.098571 28149 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-create-h89wz"]
Mar 13 13:20:33.111362 master-0 kubenswrapper[28149]: I0313 13:20:33.111320 28149 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-85ac-account-create-update-x79xz"]
Mar 13 13:20:33.124480 master-0 kubenswrapper[28149]: I0313 13:20:33.124418 28149 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-create-b4zxh"]
Mar 13 13:20:33.136982 master-0 kubenswrapper[28149]: I0313 13:20:33.136892 28149 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-create-h89wz"]
Mar 13 13:20:33.151841 master-0 kubenswrapper[28149]: I0313 13:20:33.151799 28149 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-1148-account-create-update-ggwcs"]
Mar 13 13:20:33.165836 master-0 kubenswrapper[28149]: I0313 13:20:33.165777 28149 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-85ac-account-create-update-x79xz"]
Mar 13 13:20:35.222543 master-0 kubenswrapper[28149]: I0313 13:20:35.222474 28149 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="170af500-fab8-49d0-83fb-16fa86431761" path="/var/lib/kubelet/pods/170af500-fab8-49d0-83fb-16fa86431761/volumes"
Mar 13 13:20:35.227313 master-0 kubenswrapper[28149]: I0313 13:20:35.227271 28149 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="459c48e6-39bc-4241-9810-a203d2cde587" path="/var/lib/kubelet/pods/459c48e6-39bc-4241-9810-a203d2cde587/volumes"
Mar 13 13:20:35.229311 master-0 kubenswrapper[28149]: I0313 13:20:35.229161 28149 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bb821b74-acb1-49cc-8240-9eb3e2626153" path="/var/lib/kubelet/pods/bb821b74-acb1-49cc-8240-9eb3e2626153/volumes"
Mar 13 13:20:35.232935 master-0 kubenswrapper[28149]: I0313 13:20:35.232867 28149 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fac20de4-3e4b-4934-b153-aff181b435de" path="/var/lib/kubelet/pods/fac20de4-3e4b-4934-b153-aff181b435de/volumes"
Mar 13 13:20:35.809442 master-0 kubenswrapper[28149]: I0313 13:20:35.809384 28149 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/infra-operator-controller-manager-b8c8d7cc8-q7d6n" podUID="57e83807-c598-4f45-b92a-e017a07b6997" containerName="manager" probeResult="failure" output="Get \"http://10.128.0.148:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Mar 13 13:20:37.957681 master-0 kubenswrapper[28149]: I0313 13:20:37.957606 28149 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-jvzfh/must-gather-j4dc8"]
Mar 13 13:20:37.964123 master-0 kubenswrapper[28149]: I0313 13:20:37.964057 28149 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-jvzfh/must-gather-j4dc8"
Mar 13 13:20:37.968387 master-0 kubenswrapper[28149]: I0313 13:20:37.968336 28149 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-jvzfh"/"kube-root-ca.crt"
Mar 13 13:20:37.969041 master-0 kubenswrapper[28149]: I0313 13:20:37.968995 28149 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-jvzfh"/"openshift-service-ca.crt"
Mar 13 13:20:37.975407 master-0 kubenswrapper[28149]: I0313 13:20:37.975338 28149 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-jvzfh/must-gather-5vt75"]
Mar 13 13:20:37.978589 master-0 kubenswrapper[28149]: I0313 13:20:37.978544 28149 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-jvzfh/must-gather-5vt75"
Mar 13 13:20:37.990889 master-0 kubenswrapper[28149]: I0313 13:20:37.990839 28149 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-jvzfh/must-gather-j4dc8"]
Mar 13 13:20:38.005979 master-0 kubenswrapper[28149]: I0313 13:20:38.005895 28149 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-jvzfh/must-gather-5vt75"]
Mar 13 13:20:38.042558 master-0 kubenswrapper[28149]: I0313 13:20:38.042440 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/bf007490-2bad-4b56-bbfb-cccb3812f269-must-gather-output\") pod \"must-gather-j4dc8\" (UID: \"bf007490-2bad-4b56-bbfb-cccb3812f269\") " pod="openshift-must-gather-jvzfh/must-gather-j4dc8"
Mar 13 13:20:38.042808 master-0 kubenswrapper[28149]: I0313 13:20:38.042653 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6gklk\" (UniqueName: \"kubernetes.io/projected/57b46849-c95b-434d-9956-3f6620e63ff1-kube-api-access-6gklk\") pod \"must-gather-5vt75\" (UID: \"57b46849-c95b-434d-9956-3f6620e63ff1\") " pod="openshift-must-gather-jvzfh/must-gather-5vt75"
Mar 13 13:20:38.042808 master-0 kubenswrapper[28149]: I0313 13:20:38.042746 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/57b46849-c95b-434d-9956-3f6620e63ff1-must-gather-output\") pod \"must-gather-5vt75\" (UID: \"57b46849-c95b-434d-9956-3f6620e63ff1\") " pod="openshift-must-gather-jvzfh/must-gather-5vt75"
Mar 13 13:20:38.042808 master-0 kubenswrapper[28149]: I0313 13:20:38.042781 28149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5rsvb\" (UniqueName: \"kubernetes.io/projected/bf007490-2bad-4b56-bbfb-cccb3812f269-kube-api-access-5rsvb\") pod \"must-gather-j4dc8\" (UID: \"bf007490-2bad-4b56-bbfb-cccb3812f269\") " pod="openshift-must-gather-jvzfh/must-gather-j4dc8"
Mar 13 13:20:38.145948 master-0 kubenswrapper[28149]: I0313 13:20:38.145362 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/57b46849-c95b-434d-9956-3f6620e63ff1-must-gather-output\") pod \"must-gather-5vt75\" (UID: \"57b46849-c95b-434d-9956-3f6620e63ff1\") " pod="openshift-must-gather-jvzfh/must-gather-5vt75"
Mar 13 13:20:38.145948 master-0 kubenswrapper[28149]: I0313 13:20:38.145476 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5rsvb\" (UniqueName: \"kubernetes.io/projected/bf007490-2bad-4b56-bbfb-cccb3812f269-kube-api-access-5rsvb\") pod \"must-gather-j4dc8\" (UID: \"bf007490-2bad-4b56-bbfb-cccb3812f269\") " pod="openshift-must-gather-jvzfh/must-gather-j4dc8"
Mar 13 13:20:38.145948 master-0 kubenswrapper[28149]: I0313 13:20:38.145595 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/bf007490-2bad-4b56-bbfb-cccb3812f269-must-gather-output\") pod \"must-gather-j4dc8\" (UID: \"bf007490-2bad-4b56-bbfb-cccb3812f269\") " pod="openshift-must-gather-jvzfh/must-gather-j4dc8"
Mar 13 13:20:38.145948 master-0 kubenswrapper[28149]: I0313 13:20:38.145757 28149 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6gklk\" (UniqueName: \"kubernetes.io/projected/57b46849-c95b-434d-9956-3f6620e63ff1-kube-api-access-6gklk\") pod \"must-gather-5vt75\" (UID: \"57b46849-c95b-434d-9956-3f6620e63ff1\") " pod="openshift-must-gather-jvzfh/must-gather-5vt75"
Mar 13 13:20:38.149511 master-0 kubenswrapper[28149]: I0313 13:20:38.149424 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/bf007490-2bad-4b56-bbfb-cccb3812f269-must-gather-output\") pod \"must-gather-j4dc8\" (UID: \"bf007490-2bad-4b56-bbfb-cccb3812f269\") " pod="openshift-must-gather-jvzfh/must-gather-j4dc8"
Mar 13 13:20:38.152564 master-0 kubenswrapper[28149]: I0313 13:20:38.150595 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/57b46849-c95b-434d-9956-3f6620e63ff1-must-gather-output\") pod \"must-gather-5vt75\" (UID: \"57b46849-c95b-434d-9956-3f6620e63ff1\") " pod="openshift-must-gather-jvzfh/must-gather-5vt75"
Mar 13 13:20:38.184167 master-0 kubenswrapper[28149]: I0313 13:20:38.183889 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6gklk\" (UniqueName: \"kubernetes.io/projected/57b46849-c95b-434d-9956-3f6620e63ff1-kube-api-access-6gklk\") pod \"must-gather-5vt75\" (UID: \"57b46849-c95b-434d-9956-3f6620e63ff1\") " pod="openshift-must-gather-jvzfh/must-gather-5vt75"
Mar 13 13:20:38.190158 master-0 kubenswrapper[28149]: I0313 13:20:38.186725 28149 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5rsvb\" (UniqueName: \"kubernetes.io/projected/bf007490-2bad-4b56-bbfb-cccb3812f269-kube-api-access-5rsvb\") pod \"must-gather-j4dc8\" (UID: \"bf007490-2bad-4b56-bbfb-cccb3812f269\") " pod="openshift-must-gather-jvzfh/must-gather-j4dc8"
Mar 13 13:20:38.412169 master-0 kubenswrapper[28149]: I0313 13:20:38.411636 28149 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-jvzfh/must-gather-j4dc8"
Mar 13 13:20:38.412169 master-0 kubenswrapper[28149]: I0313 13:20:38.412044 28149 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-jvzfh/must-gather-5vt75"
Mar 13 13:20:39.458636 master-0 kubenswrapper[28149]: I0313 13:20:39.458568 28149 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-jvzfh/must-gather-5vt75"]
Mar 13 13:20:39.489488 master-0 kubenswrapper[28149]: I0313 13:20:39.488370 28149 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Mar 13 13:20:39.667217 master-0 kubenswrapper[28149]: I0313 13:20:39.667128 28149 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-jvzfh/must-gather-j4dc8"]
Mar 13 13:20:40.482203 master-0 kubenswrapper[28149]: I0313 13:20:40.482099 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-jvzfh/must-gather-j4dc8" event={"ID":"bf007490-2bad-4b56-bbfb-cccb3812f269","Type":"ContainerStarted","Data":"d7fc4b9bb982d389b67f38c390bef26906e87c0f043e4d3ab9dbb024b28b1121"}
Mar 13 13:20:40.484389 master-0 kubenswrapper[28149]: I0313 13:20:40.484337 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-jvzfh/must-gather-5vt75" event={"ID":"57b46849-c95b-434d-9956-3f6620e63ff1","Type":"ContainerStarted","Data":"8d831b017d0806804b2c557afdad906c841a9e28cefa5cb8c4e633b520c37def"}
Mar 13 13:20:41.172830 master-0 kubenswrapper[28149]: I0313 13:20:41.172750 28149 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-sync-nxvs5"]
Mar 13 13:20:41.204672 master-0 kubenswrapper[28149]: I0313 13:20:41.204611 28149 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-sync-nxvs5"]
Mar 13 13:20:42.531119 master-0 kubenswrapper[28149]: I0313 13:20:42.530467 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-jvzfh/must-gather-j4dc8" event={"ID":"bf007490-2bad-4b56-bbfb-cccb3812f269","Type":"ContainerStarted","Data":"2a3e3880bb03772a930916eaa81f38f041e27d84f34481dd30fc2311e87ea954"}
Mar 13 13:20:42.717267 master-0 kubenswrapper[28149]: I0313 13:20:42.715401 28149 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ab18fda5-1cb5-4875-9daf-045d6e20138e" path="/var/lib/kubelet/pods/ab18fda5-1cb5-4875-9daf-045d6e20138e/volumes"
Mar 13 13:20:43.548213 master-0 kubenswrapper[28149]: I0313 13:20:43.548113 28149 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-jvzfh/must-gather-j4dc8" event={"ID":"bf007490-2bad-4b56-bbfb-cccb3812f269","Type":"ContainerStarted","Data":"558fd20a2a3a4f65b1a7789f30f930de6e1bb6384b4a4e5aef8700da72f0feca"}
Mar 13 13:20:43.938571 master-0 kubenswrapper[28149]: I0313 13:20:43.938403 28149 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-jvzfh/must-gather-j4dc8" podStartSLOduration=5.144530876 podStartE2EDuration="6.938375146s" podCreationTimestamp="2026-03-13 13:20:37 +0000 UTC" firstStartedPulling="2026-03-13 13:20:39.685875075 +0000 UTC m=+1613.339340234" lastFinishedPulling="2026-03-13 13:20:41.479719345 +0000 UTC m=+1615.133184504" observedRunningTime="2026-03-13 13:20:43.932803976 +0000 UTC m=+1617.586269135" watchObservedRunningTime="2026-03-13 13:20:43.938375146 +0000 UTC m=+1617.591840305"
Mar 13 13:20:44.111246 master-0 kubenswrapper[28149]: I0313 13:20:44.111184 28149 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-sync-tfhs6"]
Mar 13 13:20:44.132411 master-0 kubenswrapper[28149]: I0313 13:20:44.131981 28149 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-sync-tfhs6"]
Mar 13 13:20:44.717588 master-0 kubenswrapper[28149]: I0313 13:20:44.717516 28149 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e2b3d9c8-1d1e-425b-9780-d9ad7b26318a" path="/var/lib/kubelet/pods/e2b3d9c8-1d1e-425b-9780-d9ad7b26318a/volumes"
Mar 13 13:20:45.796797 master-0 kubenswrapper[28149]: I0313 13:20:45.796732 28149 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-version_cluster-version-operator-8c9c967c7-98tv2_676b054a-e76f-425d-a6ff-3f1bea8b523e/cluster-version-operator/1.log"
Mar 13 13:20:46.281177 master-0 kubenswrapper[28149]: I0313 13:20:46.281111 28149 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-version_cluster-version-operator-8c9c967c7-98tv2_676b054a-e76f-425d-a6ff-3f1bea8b523e/cluster-version-operator/0.log"